Psychology Topic ONE Notes

This document summarizes key topics in social psychology including interactions between individuals, the effect of being in groups, the effect of social situations, social roles, and theories of obedience. It discusses Milgram's study of obedience which found that participants obeyed destructive orders. The agency theory of obedience is described, suggesting that people take on an agentic state where they see themselves as agents carrying out orders rather than being responsible for their own actions. The social power theory is also mentioned as an alternative explanation for obedience.


Psychology

Topic one: Social and Cognitive Psychology

Social Psychology
Topic overview: Students must show an understanding that social psychology is about aspects of

human behaviour that involve the individual’s relationship to other persons, groups and society, including

cultural influences on behaviour.

Interactions between individuals

● Individuals interact with others and affect one another’s behaviour.


● Agency theory suggests that people are agents for society and therefore they will do things
to fit in.
● People send signals to other people by the way they look and behave, and they obey certain
people and not others.

The effect of being in groups within a society

● People live within a culture and society and their behaviour is affected by their experiences
within society, where they are members of certain groups.
● Social identity theory suggests that identifying oneself as a member of a group shapes
behaviour, and that a person can be prejudiced against simply for being part of that group.
● Prejudice, peer pressure and crowd behaviour are situated within the social approach.

The effect of the social situation

● Social situations can affect behaviour.


● Social constructionism is a theory of knowledge in sociology and communication theory that
examines how jointly constructed understandings of the world develop and form the
basis for shared assumptions about reality.
○ Social constructs can be different based on the society and the events
surrounding the time period in which they exist. An example of a social
construct is money or concept of currency as people in society have agreed
to give it importance/value.

Social roles

● People have social roles and those roles have expectations attached to them.
● People act in accordance with their social roles.

Milgram’s study- found that ordinary people were strikingly willing to obey authority figures.

Obedience

➢ Obedience: form of social influence that involves performing an action under the orders of
an authority figure.

➢ Conformity: A change in a person’s behaviour or opinions as a result of real or imagined
pressure from a person or group of people. It is when individuals alter their actions,
behaviours or beliefs to gain the acceptance of a group. (Unlike obedience, which is doing
something against one’s own inclinations under direct orders, conformity is done with the
intention of matching the behaviour of the majority.)

➢ Compliance: refers to any situation in which individuals change their behaviour because
they are requested to do so. This is a unique social situation in that it is partially
voluntary. It is a superficial and temporary type of conformity where we outwardly go along
with the majority view but privately disagree with it; the change in our behaviour only lasts
as long as the group is monitoring us. (Going along with what someone says, while not
necessarily agreeing with it.)

➢ Internalisation: A deep type of conformity where we take on the majority view because we
accept it as correct. It leads to a far-reaching and permanent change in behaviour, even
when the group is absent. (obeying with agreement)

Theories into obedience:


There are two theories of obedience: agency theory and social power theory.

Agency Theory
➢ Agency theory: suggests that humans have two mental states, the autonomous state and
the agentic state.

It suggests that the participants obeyed because they were agents of authority. The theory
says people act as agents of others in society because that is how society works, although
there is little direct evidence for this beyond the fact that it is a claim that makes sense.

In Milgram’s study of obedience, participants who stayed to the end tended to say that they were
just doing what they were told and would not have done it otherwise. They knew what they were
doing was wrong.

The participants showed moral strain, in that they knew that obeying the order was wrong, but
they felt unable to disobey.

Moral strain: experiencing anxiety, usually because you are asked to do something that goes
against your judgement. Moral strain occurs in between the two states and is relieved when shifted

into the agentic state. This is because the individual places the responsibility onto the authority
figure.

In Milgram’s study:

- The participants heard the cries of the learner (victim)


- They might have feared retaliation from the victim
- They had to go against their own moral values
- There was a conflict between the needs of the victim and the needs of the authority figure.
- The participants would not want to harm someone because this would go against their
opinions of themselves.

This is the idea that our social system leads to obedience. If people see themselves as individuals,
they will respond as individuals and will be autonomous in that situation.

For example, in a threatening situation, many people avoid aggression and turn away to avoid
getting hurt and aid survival.

➢ Evolution theory: the idea of natural selection, whereby any tendency that aids survival
leads to the gene or gene combination for that tendency being passed on.

Early humans had a better chance of survival if they lived in social groups, with leaders and
followers. A tendency to have leaders and followers may also have been passed down genetically. A
hierarchical social system, such as the one that Milgram’s participants were used to, requires a
system in which some people act as agents for those ‘above’ them.

According to the agency theory, the agentic state involves a shift in responsibility from the person
carrying out an order to the person in authority giving the order- the responsibility is ‘given’ to the
one doing the ordering.

Milgram proposed that we exist in two states:

➢ Autonomy: acting on one’s own free will. They feel responsible for their own actions.

In an autonomous state, individuals see themselves as having power and they see their
actions as being voluntary.

➢ Agency: when one acts as an agent for another. They see themselves as an agent of an
authority figure. They give up their free will in order to follow instructions and displace
responsibility for their actions onto the authority.

In an agentic state, individuals act as agents for others and their own consciences are not in
control. They do not feel responsible for their actions. They feel as if they have no power, so
they might well act against their moral code.

People are in an agentic state when they see the person giving an order as having legitimate
authority and when they see that person as taking responsibility for them following the
order.

● Milgram observed that society was hierarchical by nature and this has evolved for a survival
function. He thought that its function was to create social order and harmony- obedience is
needed to maintain this
● We are prepared to be obedient as we are exposed to authority figures through socialisation
○ Socialisation is the process by which we learn the rules and norms of society
(parents, teachers). Milgram believed that this prepares us to be obedient.

Evaluation of the agency theory of obedience:

Strengths:

➢ It explains the different levels of obedience found in the variations to the basic study. In the
basic study, the participants did not take responsibility and said that they were just doing
what they were told. However, as they were made to take more responsibility (for example,
holding the victim’s hand down), the obedience level decreased.

Agency theory has supporting evidence from Milgram’s original and variation studies of
obedience. In the original study, the participants were in the agentic state and didn’t take
responsibility for their own actions; that’s why 65% obeyed to 450V

➢ The theory has real-life applications to explaining obedience, including verbal reports by
Adolf Eichmann after WW2 as to why he obeyed destructive orders to kill innocent people
without question. Perpetrators like him saw themselves as agents of the person giving the
orders and displaced responsibility for their actions onto him; Eichmann said that he was
just obeying orders. Agency theory, which is rooted in the theory of natural selection, helps
explain otherwise inexplicable actions like the Holocaust and the My Lai massacre (Vietnam).

Weaknesses:

● Doesn’t explain individual differences

● Hard to define and measure agency and autonomy

● It is more of a description of how society works than an explanation; it doesn’t explain the
motivational issues behind obedience

● It is an oversimplified explanation of obedience as it is reductionist. There could be other
reasons why a person obeys or disobeys an authority figure, such as their genes and
personality

● The theory could have negative contributions to society. This is because it could be used by
individuals as an excuse for bad behavior where hateful acts are allowed to take place
without judgement.

Social Power Theory: (French & Raven 1959)

Social power is the ability of a person to create conformity, even when the people being influenced may attempt to resist.

● Legitimate power- Power vested in those who are appointed or elected to positions of
authority such as teachers, politicians and their power is successful because members of
the group accept it as appropriate; Milgram would have had legitimate power.

● Reward power- The ability to use resources to reward others; Milgram may have held
reward power because he paid the participants.

● Coercive power- The ability to threaten someone using punishment- held by those who
can punish another; Milgram gave the participants a small shock so they may have felt
that he could punish them.

● Expert power- The ability to be seen as having special skills or more knowledge;
participants would see Milgram as an expert.
- Expert power represents a type of informational influence based on the
fundamental desire to obtain valid and accurate information, and where the
outcome is likely to be private acceptance.
- Conformity to the beliefs or instructions of doctors, teachers, lawyers and
computer experts is an example of expert influence; we assume that these
individuals have valid information about their areas of expertise, and we accept
their opinions based on this perceived expertise.

● Referent power- People with referent power have an ability to influence others because
they can lead those others to identify with them; the participants would probably not have
seen Milgram in this light.
- This usually occurs when the influencer belongs to a group of high status.
In this case, the person who provides the influence is

a) A member of an important reference group- someone we personally admire and
attempt to emulate
b) A charismatic, dynamic, and persuasive leader
c) A person who is particularly attractive or famous.

● Social power can be defined as the ability of a person to create conformity, even when the
people being influenced may attempt to resist those changes.

● Social power is the ability to achieve goals even if other people oppose those goals. All
societies are built on some form of power, and this power typically resides within the
government; however, some governments in the world exercise their power through force,
which is not legitimate.

● Milgram’s studies on obedience demonstrated the remarkable extent to which the social
situation and people with authority have the power to create obedience.

● One of the most influential theories of power was developed by French and Raven, who
identified five different types of power—reward power, coercive power, legitimate power,
referent power, and expert power. The types vary in terms of whether their use is more
likely to create public conformity or private acceptance.

● Although power can be abused by those who have it, having power also creates some
positive outcomes for individuals.

● Leadership is determined by person variables, situational variables, and by the person-
situation interaction. The contingency model of leadership effectiveness is an example of the
latter.

Latane and Darley (1968) carried out a famous experiment into this. Participants sat in booths
discussing health issues over an intercom. One of the speakers was a confederate who would
pretend to have a heart attack. If there was only one other participant, they went for help 85%
of the time; this dropped to 62% if there were two other participants and 31% if there were 4+.

No one was giving orders in this study, but the rule “go and get help when someone collapses”
is a sort of order that is present all the time in society. Following these sorts of social rules is
called prosocial behaviour and breaking the rules is antisocial behaviour. Social Impact Theory
explains prosocial behaviour as well as obedience.
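The pattern in these figures can be sketched as a minimal tabulation (the rates are the ones reported above; the variable names are just for illustration):

```python
# Helping rates from Latane and Darley (1968), keyed by the number of
# OTHER participants the subject believed were on the intercom
# (the key 4 stands in for the "4+" condition reported in the notes).
help_rate = {1: 0.85, 2: 0.62, 4: 0.31}

# Diffusion of responsibility: the helping rate falls monotonically as
# the perceived number of bystanders grows.
rates = [help_rate[n] for n in sorted(help_rate)]
assert all(earlier > later for earlier, later in zip(rates, rates[1:]))
print(rates)  # [0.85, 0.62, 0.31]
```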

Milgram’s (1963) research into obedience

Aims, procedures and results:

Aims To establish how obedient participants would be when ordered by an authority
figure to administer increasingly intense electric shocks to another person,
believing they were hurting that person.

Sample 40 participants, all men aged 20-50. They were recruited through volunteer
sampling.

Procedure
● After a fake coin toss, the confederate became the ‘learner’ and the
participant became the ‘teacher’.
● Participants were assured that, although the shocks were painful, they
would ‘not cause lasting damage’.
● The shock generator had switches from 15V to 450V and labels like ‘Slight
shock’ or ‘Danger’.
● The teacher tested the learner on a list of word-pairs; if the answer given
was wrong, the experimenter ordered the teacher to press the switch
delivering a shock, starting at 15V.
● The shock went up by 15V with each wrong answer.
● The Learner’s answers were pre-set and his cries of pain tape-recorded.
● The experimenter had a set of pre-scripted prods that were to be used if
the teacher questioned any of the orders. If the teacher still refused after all
the prods had been used, the observation stopped. It also stopped if the
participant got up and left, or reached 450V.

Results
● He found that 65% of participants went to the full 450V with a silent and
unresponsive learner, and all participants went to at least 300V
● Many people showed signs of distress. Participants were seen trembling,
some begged to stop and one had a seizure.
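Since the sample size of 40 is given above, the reported percentages can be converted back into participant counts; a quick sketch:

```python
# Convert the reported obedience percentages into participant counts
# (n = 40 men, as stated in the sample description).
n = 40
went_to_450v = round(0.65 * n)   # 65% delivered the full 450V
went_to_300v = n                 # all participants reached at least 300V
print(went_to_450v, went_to_300v)  # 26 40
```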

Evaluation:

➢ Strengths

○ It was conducted in a laboratory setting, which allowed the experimenter a
high level of control. This is useful as it makes the results more reliable: the
effects of the experimenter’s commands on the participants could be
observed clearly.

○ Milgram’s study is also replicable as it was done in a laboratory setting with a


high level of control. This is useful because then another experimenter could
replicate the experiment and check the results were not due to chance.

○ A further strength of using the experimental laboratory method is that
consent is easier to obtain than in a field setting.

○ The study shows high experimental realism and the tension shown by
participants throughout the experiment shows this.

○ Participants were debriefed on the real aims of the study, in an attempt to
deal with the ethical breaches of deception and the lack of opportunity to
give informed consent.

○ Real life applications- explains obedience to destructive authority figures


(Hitler and how the Nazis obeyed his orders and killed innocent Jews).

○ High internal validity- 70% of participants believed the shocks were real. This
suggests the findings were likely to be accurate.

➢ Weaknesses

There are also several weaknesses to Milgram’s study; most concern the ethical guidelines
that Milgram broke, the first being deception.

○ Participants of Milgram’s study were deceived: they were told the
experiment was about ‘the effects of punishment on learning’ and were
made to believe that they were giving real electric shocks to the learner.
Milgram thought this was necessary because if the participants knew the
true aim of the study, demand characteristics would be introduced, and the
findings would be useless. Since there was deception, informed consent
could not be obtained.

○ There is also some question over the participants’ right to withdraw from
the study at any time, for two key reasons. Firstly, the participants were
being paid to take part, which may have made them feel obliged to continue.
Secondly, the prods from the authority figure, such as “please go on”, may
have made them feel as if they had no choice but to continue.

○ The last ethical issue that Milgram breached, and perhaps the most important, is
protection from harm. The majority of participants reported high levels of
stress after the experiment and showed clear signs of distress during it; one
even had a seizure. Post-study interviews suggested that the experiment
caused no long-term negative effects, but can Milgram really be excused for
breaking the ethical guideline in the first place?

○ It also raises a socially sensitive issue: Milgram’s findings suggest that those
who are responsible for killing innocent people can be excused, because it
was not their personality that made them do it but rather the situation they
were in.

○ Lack of internal validity- the experiment may have been about trust rather
than obedience because the experiment was held at Yale University.
Therefore, participants may have trusted that nothing serious would happen
to the confederate, especially considering the immense prestige of the
location.

○ There is a lack of ecological validity in the study. As the study was carried out
in a laboratory, the participants knew they were being observed and studied,
and the way people behave in a laboratory setting is vastly different to the
way people behave in “real life”. Participants may have felt protected by the
experimenter and trusted that what happened at Yale would be acceptable.

○ Sample was biased- Milgram only used males in his study, which means we
cannot generalise the results to females (androcentric). It was also
ethnocentric to the US- he used US participants when trying to find out why
the Nazis killed Jews.

Conclusion:

● Milgram concludes you don’t have to be a psychopath to obey immoral orders: ordinary
people will do it in the right situation
● Milgram’s original study revealed how far ordinary people were willing to go to obey an
authority figure even when it meant seriously harming another person.
● The study stimulated a great deal of research into obedience across the world. Milgram
developed the agency theory from his findings.
● The level of obedience was highest in the basic experiment; all the variations led to lower
levels.
● The setting had the least effect, the orders of the experimenter had the most effect. This
suggests that the conclusion that obedience results from orders given by an authority figure
is correct.

Milgram’s variation studies:

Experiment 7: telephonic instructions

Aim: to see if altering the situation of the basic study, by moving the authority figure to another
room and giving orders over the phone, would make it easier to dissent and resist orders.

Initial instructions face to face but then experimenter left the room and gave instructions over
telephone

● Creates a buffer (reduces immediacy)
● When the experimenter returned to the lab, disobedience turned back into obedience

Reduced proximity encourages dissent- obedience rate fell from 65% to 22.5%.

➔ One reason why participants continued despite the distress caused was that they believed
they were helping a scientific study.
➔ One set of variations varied the distance between the participants and the experimenter to
see the effect on obedience- many participants lied to the experimenter

➢ Strengths

○ Milgram’s variation study has high reliability as he replicated his original


study by following the same standardised procedures- volunteer sample,
rigged draw. By changing just one element it allowed him to make
comparisons to the original study.

○ The study has positive applications to society for people such as police
officers and teachers: if they need to maintain high levels of obedience on the
street or in the classroom, they should be present in close proximity to their
target.

➢ Weaknesses

○ Some participants in Variation 7 pressed a lower-voltage switch than
instructed, yet the recorded cries of pain still intensified. They may therefore
have realised the shocks were not real, which would reduce the validity of
the experiment.

Conclusion:

The study helped to reveal that the authority figure’s role was very influential in determining levels of
obedience.

➢ When the experimenter is not face to face with the participant, it is easier to not obey
➢ A few participants gave lower shocks than they should have done, thinking they were not
being observed
➢ Over the phone, they said they were raising the shock levels but they were not.
➢ Physical presence is a force when it comes to obedience.

Experiment 10: Run-down Office Block

● Aim: To find out whether obedience would drop if the setting were not so prestigious, in this
case a rundown office building in Bridgeport

● Was previously held at Yale University. Now moved to Bridgeport, an industrial city near
Yale. All links to the university were removed. Milgram says that people regard Yale
university with ‘awe’ and ‘respect’

● Lowers status (linked to hierarchy)

● Location / status affects obedience

● In the post-experiment interviews, people commented on whether the study was being
conducted with integrity and competence, because of the setting.

➢ Participants questioned the legitimacy/ credibility of the study.

➢ The same procedure took place, including asking for volunteers and paying $4.50 for
attending. The same personnel were used and the same occupation and age.

➢ The study was conducted by the Research Associates of Bridgeport. Participants were told
that the study was being conducted by a private research firm. The building was rundown
and sparsely furnished

Results:

● Dropped from 65% to 47.5%


● This was a lower level, but Milgram didn’t think it was a significant difference.
● The idea of legitimate settings doesn’t seem to be backed by evidence. Milgram thought it
was the category of place that led to obedience.

➢ Strengths
○ Has high reliability as he replicated his original study by following the same
standardised procedures eg volunteer sample, rigged draw. By changing one
element, the study being moved from Yale to a rundown office block in
Bridgeport, allowed him to make comparisons to the original.
○ This variation study was higher in ecological validity than the original study in
that it took place in a real world setting, an office block rather than a lab. Two
participants were quoted as questioning the legitimacy of the study and
competence of the researchers which didn’t happen in the lab at Yale. This
means that findings can be generalised to real life settings.

➢ Weaknesses
○ Obedience did not fall by much, so the validity might still be questioned. The
study was still run like a laboratory experiment (even in an office), so it still
appeared scientific and required cooperation from the participants, which
might lack validity.
○ Lacks ecological validity as although the participants are in the real world, it
doesn’t measure real obedience. The controls in the study and the task
involved, such as the generator and verbal prods are all likely to show
participants the task is far from real.
○ 19 participants obeyed in the office setting and 26 obeyed in the Yale setting,
which Milgram claimed was not different. However, there is still less
obedience, and 47.5% compared with 65% is worth pursuing. Some people
might think that using Yale meant that the findings lacked validity.
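Milgram's claim that 19/40 versus 26/40 is "not different" can be checked with a rough two-proportion z-test (an assumption on my part: the notes do not say which test, if any, Milgram applied):

```python
import math

# Obedient participants: 26 of 40 at Yale, 19 of 40 in the Bridgeport office.
obeyed_yale, obeyed_office, n = 26, 19, 40
p1, p2 = obeyed_yale / n, obeyed_office / n        # 0.65 vs 0.475
p_pool = (obeyed_yale + obeyed_office) / (2 * n)   # pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))    # pooled standard error
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))         # two-sided p-value

print(round(z, 2), round(p_value, 2))  # roughly z = 1.58, p = 0.11
```

With p above 0.05, the drop from 65% to 47.5% is not statistically significant at the conventional level, which is consistent with Milgram's judgement, though as the notes say the difference may still be worth pursuing.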

Conclusion:

● It helped to reveal that the prestige of the setting was not as influential as the role of the
experimenter in influencing obedience.

Experiment 13: Ordinary Man

● Aim: to test whether perceived authority/ status is needed for obedience

● The experimenter gives instructions up to the point of explaining about administering shocks
but, before he can explain further, is ‘called away’ and leaves the room.

● There is an accomplice in the room who was initially given the task. When the experimenter
leaves, the accomplice suggests a new way of doing the study, going up the shock one at a
time in response to the victim making mistakes.

● The participant sees this as a suggestion from an ‘ordinary man’

● Milgram noticed that the experimenter’s leaving created an awkward atmosphere and
tended to undermine the credibility of this variation

Results:

20% of participants reached the maximum shock level (4 out of 20)

Strengths:

● Reliability- standardised procedures

● Participants saw the accomplice draw lots- believed it was chance. Helped reduce authority
in that situation

Weaknesses:

● The setting retained a lot of authority- the study was still at Yale, so just having an ordinary
man give the orders might not be enough to remove power.

● Lacks ecological validity- it was conducted in a lab, a controlled environment with no
external variables.

Conclusion:

● There was no perceived authority
● Perceived authority/ status is needed for obedience
● The experiment was described as strained because the accomplice had to go to great
lengths to persuade the teacher to continue
● Obedience dropped from 65% to 20%
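The variation results covered so far can be collected for side-by-side comparison (a simple summary of the rates reported in these notes; the condition labels are mine):

```python
# Percentage of participants delivering the full 450V in each condition,
# as reported in these notes.
obedience = {
    "baseline (Yale lab)": 65.0,
    "experiment 7: telephonic instructions": 22.5,
    "experiment 10: run-down office block": 47.5,
    "experiment 13: ordinary man": 20.0,
}

baseline = obedience["baseline (Yale lab)"]
for condition, rate in obedience.items():
    print(f"{condition}: {rate}% (drop of {baseline - rate:.1f} points)")
```

The ordinary-man variation shows the largest drop, matching the conclusion that perceived authority, more than the setting, drives obedience.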

Factors affecting obedience and dissent/ resistance to obedience

Situational factors:

The series of Milgram’s experiments demonstrated that an ordinary person could be capable of
harm at the instructions of an authority figure, which dispelled the myth that German soldiers had
character traits making them unusually compliant and obedient. The baseline obedience was
65%, and the series of experiments also demonstrated various situational factors that
increased or decreased levels of obedience and dissent.

● Momentum of compliance: starting with small and trivial requests, the teacher has
committed themselves to the experiment. As the request increased, the participants felt
duty bound to continue. This is also true of the shock generator as the initial shocks were
small but increased slowly in 15-volt increments. The situation created a binding relationship
that escalated steadily.

● Proximity: the closer the authority figure, the higher the level of obedience. Distance seemed
to act as a buffer to obedience, as found in the telephonic instruction condition. Proximity of
the victim also acted as a buffer to obedience.

- When the learner was in the same room or the teacher had to physically place the
hand of the learner onto a shock plate, obedience dropped. This is in contrast to the
pilot study where the learner was in a different room and could not be seen or heard
throughout, resulting in 100 per cent obedience. Milgram also referred to the shock

generator as a physical buffer between the participant and the victim – in the same
way that a soldier would be more inclined to drop a bomb than stab an enemy, the
generator buffered the distance between them

● Status of the authority: Milgram stated that obedience could only be established when the
authority figure was perceived to be legitimate. This was found to be the case when the
experiments were conducted at Yale University, and obedience fell when the experiment
was moved to Bridgeport or conducted by an ordinary man.

● Personal responsibility: Milgram believed that participants would be more obedient in a
situation where personal responsibility is removed and placed onto the shoulders of an
authority figure.

Individual differences in obedience and dissent

Personality:

Milgram conducted a series of follow-up investigations on participants who were involved in the
experiments to uncover whether certain individuals would be more likely to obey or dissent.

In one study, 118 participants from experiments 1–4 who were both obedient and disobedient were
asked to judge the relative responsibility for giving the shocks out of the experimenter, the teacher
and the learner. They indicated who was responsible for the person being given the shocks by
moving three hands on a round disc to show proportionate responsibility. He found that dissenting
participants gave proportionately more blame to themselves (48 per cent) and then the
experimenter (39 per cent), whereas obedient participants were more likely to blame the learner (25
percent), more so than the dissenters (12 percent blamed the learner).

Locus of control:

It seems that dissenting individuals take more of the blame, whereas obedient people are more
likely to displace blame. This can be explained by Rotter’s (1966) locus of control personality theory.
This theory outlines two different personality types: those with an internal and those with an
external locus of control. People with an internal locus of control tend to believe that they are
responsible for their own actions and are less influenced by others. People with an external locus of
control believe that their behaviour is largely beyond their control but due to external factors such
as fate. These people are more influenced by others around them.

This seems consistent with Milgram’s findings that obedient people have an external locus of
control; not only are they more likely to be influenced by an authority figure, but they also believe
that they are not responsible for their actions. Dissenters, on the other hand, are more resistant to
authority and more likely to take personal responsibility for their actions. The link between
personality and obedience seems a plausible one that can account for individual differences in
obedience. However, research in this area is mixed, providing only tentative evidence that

individuals with an internal locus of control resist and those with an external locus of control are
obedient.

Authoritarian personality:

An authoritarian personality is typically submissive to authority but harsh to those seen as
subordinate to themselves.

Michael Dambrun and Elise Vatiné (2010) conducted a simulation of Milgram’s experiment using a
virtual environment/computer simulation and found that authoritarianism was linked to obedience.
Those with high authoritarian scores were less likely to withdraw from the study, perhaps because
they were submissive to the authority of the experimenter, or showed an inclination to punish the
failing learner.

Empathy

It is believed that people who have high levels of empathy would be less likely to harm another
person at the instructions of an authority figure. In a recent replication of Milgram’s experiment,
Jerry Burger (2009) found that although people who score high on empathy were more likely to
protest against giving electric shocks, this did not translate into lower levels of obedience.

Gender

Milgram used predominantly male participants in his experiments, although he did conduct one
experiment (Experiment 8) which involved 40 female teachers.

Previous research had indicated that females were more compliant than males, yet traditionally we
think of women as less aggressive. This contradiction would be played out in an experiment that
commanded both compliance and aggression.

Milgram found that females were virtually identical to males in their level of obedience (65 per cent),
27.5 per cent breaking off at the 300-volt level. Yet their rated level of anxiety was much higher than
males for those who were obedient. This was also found in Burger’s (2009) replication of the
experiment. Sheridan and King (1972) adapted Milgram’s experiment to involve a live puppy as a
victim that received genuine shocks from college student participants. They found that all 13 female
participants were fully compliant and delivered the maximum level of shock to the puppy, a higher
rate of obedience than that shown by the male participants.

Culture

Many behaviours vary across cultures. Culture can be divided broadly into two types: individualistic
and collectivistic cultures. Individualistic cultures, such as America and Britain, tend to behave more
independently and resist conformity or compliance. Collectivistic cultures, such as China or Japan,
tend to behave as a collective group based on interdependence, meaning that cooperation and
compliance are important for the stability of the group (Smith and Bond, 1998). We could assume
from this that collectivistic cultures are more likely to be obedient.

Although some might argue that obedience levels are not universal, on closer inspection of the
methodologies of the research studies, it seems that the variation in percentage of participants who
gave the full shock is more a product of the procedure employed than cultural variation.

Conformity
Conformity is a type of social influence where a person changes their attitude or behaviour in
response to group pressure.

It is yielding to group pressures. It is also defined as ‘a change in a person’s behaviour or opinion as
a result of a real or imagined pressure from a person or group of people’, where an imagined
pressure involves no actual consequences for failing to conform, and a real pressure involves actual
consequences for failing to conform.

Types of conformity:

Compliance: This type of conformity involves simply ‘going along with others’ in public, but privately
not changing personal opinions and/or behaviour. Compliance results in only a superficial change. It
also means that a particular behaviour or opinion stops as soon as group pressure stops.

Internalisation: Internalisation occurs when a person genuinely accepts the group norms. This
results in a private as well as a public change.

Identification is the middle level of conformity. Here a person changes their public behaviour and
their private beliefs, but only while they are in the presence of the group. This is usually a short-term
change and normally the result of normative social influence.

Explanations of conformity:

Informational social influence (ISI)
When someone conforms because they want to be right: when a person is uncertain or unsure, they
look to others for information, assuming that the majority knows the correct answer. It usually leads
to internalisation and occurs in situations where we do not have the knowledge or expertise to make
our own decisions, e.g. a person following the direction of the crowd in an emergency, even though
they don’t actually know where they are going, as they assume that everyone else is going to the
right place.

Normative Social influence (NSI)


When someone conforms because they want to be liked and be part of a group; when a person’s
needs to be accepted or have approval from a group drives compliance. It often occurs when a
person wants to avoid the embarrassing situation of disagreeing with the majority/ rejection.

Asch’s study:

Participants 123 male American undergraduates in groups of 6, consisting of 1 true
participant and 5 confederates

Aim To investigate conformity and majority influence

Procedure ● Participants and confederates were presented with 4 lines: 3
comparison lines and 1 standard line
● They were asked to state which of the three lines was the same length
as a stimulus line.
● The real participant always answered last or second to last
● Confederates would give the same incorrect answer for 12 out of 18
trials
● Asch observed how often the participant would give the same incorrect
answer as the confederates versus the correct answer.

Findings 36.8% conformed (the mean conformity rate across the critical trials)
25% never conformed
75% conformed at least once
In a control trial, only 1% of responses given by participants were incorrect
(which eliminates eyesight/perception as an extraneous variable, thus
increasing the validity of the conclusions drawn).
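The headline figures from Asch-style studies can be computed from per-participant trial records. The following is a minimal illustrative sketch; the participant data below is invented for demonstration and is not Asch's actual dataset:

```python
# Illustrative sketch: computing Asch-style conformity statistics.
# Each record is the number of critical trials (out of `critical_trials`)
# on which that participant gave the majority's incorrect answer.
# NOTE: the sample data is hypothetical, not Asch's records.

def conformity_stats(records, critical_trials=12):
    n = len(records)
    mean_rate = sum(records) / (n * critical_trials)   # mean conformity across all critical trials
    never = sum(1 for r in records if r == 0) / n      # proportion who never conformed
    at_least_once = 1 - never                          # proportion who conformed at least once
    return mean_rate, never, at_least_once

# Four hypothetical participants, 12 critical trials each
sample = [0, 3, 6, 12]
mean_rate, never, once = conformity_stats(sample)
print(f"mean conformity: {mean_rate:.1%}")        # 43.8%
print(f"never conformed: {never:.0%}")            # 25%
print(f"conformed at least once: {once:.0%}")     # 75%
```

This distinction between the mean rate (36.8% in Asch's data) and the proportion conforming at least once (75%) is worth keeping clear, as the two figures are often confused.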

Factors affecting level of conformity

Asch’s variations

Asch was further interested in the conditions that might lead to an increase or a decrease in
conformity. He investigated these by carrying out some variations of his original procedure.

1. Group size/ Size of majority

➢ Asch also wanted to know whether the size of the group would be more important
than the agreement of the group. An individual is more likely to conform when in a
larger group, but only up to a point: conformity was low with fewer than 3
confederates, rose to around 30% with a majority of 3, and barely increased beyond that.

This is because a person is more likely to conform if all members of the group are in
agreement and give the same answer, because it will increase their confidence in
correctness of the group, and decrease their confidence in their own answer.

➢ This shows that the majority must be at least 3 to exert an influence, but an
overwhelming majority is not needed in all instances to bring about conformity.

2. Unanimity

➢ An individual is more likely to conform when the group is unanimous i.e. all give the
same answer, as opposed to them all giving different answers. When joined by
another participant or disaffected confederate who gave the correct answer,
conformity fell from 32% to 5.5%. If different answers are given, it falls from 32% to
9%.

This is because the more unanimous a group is, the more confidence the participant
will have that the group is correct, and the less confidence they will have in their own
differing answer.

➢ Unanimity is vital in establishing a consistent majority view, which is particularly
important in providing normative social influence by preventing any conflicting
views from arising.

3. Task difficulty

➢ An individual is more likely to conform when the task is difficult.

➢ Asch made the line-judging task more difficult by making the stimulus line and the
comparison lines more similar in length. He found that conformity increased under
these conditions.

➢ This suggests that informational social influence plays a greater role when the task
becomes harder. This is because the situation is more ambiguous, so we are more
likely to look to other people for guidance and to assume that they are right and we
are wrong.

Evaluation:

Strengths:

➢ High internal validity - There was strict control over extraneous variables, such as timing of
assessment and the type of task used. The participants did the experiment before without
confederates to see if they actually knew the correct answer, thus removing the confounding
variable of a lack of knowledge. This suggests that valid and reliable ‘cause and effect’
relationships can be established, as well as valid conclusions.

➢ Lab experiment - Extraneous and confounding variables are strictly controlled, meaning that
replication of the experiment is easy. Successful replication increases the reliability of the
findings because it reduces the likelihood that the observed findings were a ‘one-off’.

➢ Ethical issues were addressed - The researchers breached the BPS ethical guideline of
deception and, consequently, the ability to give informed consent. However, the
participants were debriefed. Ethical issues do not threaten the validity or reliability of
findings, but rather suggest that a cost-benefit analysis is required.

➢ Supports normative social influence - participants reported that they conformed to fit in with
the group, so it supports the idea of normative influence, which states that people conform
to fit in when privately disagreeing with the majority.

Weaknesses:

➢ Lacks ecological validity - it was based on people’s perception of lines, so the findings
cannot be generalised to real life as the task does not reflect the complexity of real-life
conformity, where there are many other confounding variables and majorities exert
influence irrespective of their size.

➢ Lacks population validity due to sampling issues - For example, the participants were only
American male undergraduates, and so the study was subject to gender bias, where it is
assumed that findings from male participants can be generalised to females (i.e. beta bias).

➢ Ethical issues- there was deception as participants were tricked into thinking the study was
about perception rather than compliance so they could not give informed consent. However,
it is worth bearing in mind that this ethical cost should be weighed up against the benefits
gained from the study

There could have been psychological harm as the participants could have been embarrassed
after realising the true aims of the study- Such issues simply mean that a cost-benefit
analysis is required to evaluate whether the ethical costs are smaller than the benefits of
increased knowledge of the field. They do not affect the validity or reliability of findings!

➢ Lacked validity - The social context of the 1950s may have affected results. For example,
Perrin and Spencer criticised the study by stating that the period that the experiment was
conducted in influenced the results because it was an anti-Communist period in America
when people were more scared to be different i.e. McCarthyism. Thus, the study can be said
to lack temporal validity because the findings cannot be generalised across all time periods.

➢ Artificial situation and task- Participants know they were in a research study and may simply
have gone along with the demands of the situation (demand characteristics). The task of
identifying lines was relatively trivial and therefore there was really no reason not to
conform. Also, although the naive participants were members of a ‘group’, it didn’t really
resemble groups that we are part of in everyday life. According to Fiske (2014), ‘Asch’s
groups were not very groupy’.

This is a limitation because it means that the findings do not generalise to everyday
situations. This is especially true in situations where the consequences of conforming
are more important, and where we interact with other people in groups in a much more
direct way.

➢ Limited applications of findings- Only men were tested by Asch. Other research suggests
that women might be more conformist, possibly because they are more concerned about
social relationships (and being accepted) than men are (Neto 1995). The men in Asch’s study
were from the US, an individualist culture, i.e. where people are more concerned about
themselves rather than their social group. Similar conformity studies conducted in
collectivist cultures (such as China where the social group is more important than the
individual) have found that conformity rates are higher. This makes sense because such
cultures are more oriented to group needs (Bond and Smith 1996).
This shows that conformity levels are sometimes even higher than Asch found. Asch’s
findings may only apply to American men because he didn’t take gender and cultural
differences into account.

➢ Findings only apply to certain situations- The fact that participants had to answer out loud
and were with a group of strangers who they wanted to impress might mean that conformity
was higher than usual. On the other hand, Williams and Sogon (1984) found conformity was
actually higher when the majority of the group were friends than when they were strangers.
Zimbardo’s study:

Participants 24 American male undergraduate students

Aim To investigate how readily people would conform to the social roles in a
simulated environment, and specifically, to investigate why ‘good people
do bad things’.

Procedure ● The basement of the Stanford University psychology building was
converted into a simulated prison.

● American student volunteers were paid to take part in the study.

● They were randomly issued one of two roles: guard or prisoner.
Both prisoners and guards had to wear uniforms.

● Prisoners were only referred to by their assigned number

● Guards were given props like handcuffs and sunglasses (to make
eye contact with prisoners impossible and to reinforce the
boundaries between the two social roles within the established
social hierarchy).

● No one was allowed to leave the simulated prison.

● Guards worked eight-hour shifts, while the other guards remained on
call. Prisoners were only allowed in the hallway, which acted as
their yard, emphasising the guards’ complete power over the
prisoners.

● No physical violence was permitted, in line with ethical guidelines
and to prevent the guards from completely overruling the prisoners.

● The behaviour of the participants was observed.

Findings ➢ Identification occurred very fast, as both the prisoners and guards
adopted their new roles and played their part in a short amount of
time, despite the apparent disparity between the two social roles.

➢ Guards began to harass and torment prisoners in harsh and
aggressive ways; they later reported having enjoyed doing so and
relished their new-found power and control. Their behaviour
became a threat to the prisoners’ psychological and physical
health, and the study was stopped after six days instead of the
intended 14.

➢ Prisoners would only talk about prison issues (forgetting about
their previous life), and would snitch on other prisoners to the guards to
please them. This is significant evidence to suggest that the
prisoners believed the prison was real, and were not acting simply
due to demand characteristics.

➢ 90% of their conversations were about prison life.

➢ They would even defend the guards when other prisoners broke
the rules, reinforcing their social roles as prisoner and guard,
despite it not being real.

➢ Within two days, the prisoners rebelled against the harsh
treatment by the guards. They ripped their uniforms, and shouted
and swore at the guards, who retaliated with fire extinguishers.
The guards employed ‘divide-and-rule’ tactics by playing the
prisoners off against each other. They harassed the prisoners
constantly, to remind them they were being monitored all the time.
For example, they conducted frequent headcounts sometimes in
the middle of the night. The guards highlighted the differences in
social roles by creating plenty of opportunities to enforce the rules
and punish even the smallest misdemeanour.

➢ After the rebellion was put down, the prisoners became subdued,
depressed and anxious. One prisoner was released on the first day
because he showed symptoms of psychological disturbance. Two
more were released on the fourth day. One prisoner went on a
hunger strike. The guards attempted to force-feed him and then
punished him by putting him in ‘the hole’, a tiny, dark closet.
Instead of being considered a hero, he was shunned by other
prisoners. The guards identified more and more closely with their
role. Their behaviour became more brutal and aggressive, with
some of them appearing to enjoy the power they had over the
prisoners.

The guards became more demanding of obedience and
assertiveness towards the prisoners, while the prisoners became
more submissive. This suggests that the respective social roles
became increasingly internalised.

Conclusions

The simulation revealed the power of the situation to influence people’s behaviour.

Guards, prisoners and researchers all conformed to their roles within the prison. These roles were
very easily taken on by the participants- even volunteers who came in to perform certain functions
(such as the prison chaplain) found themselves behaving as if they were in prison rather than in a
psychological study.

Evaluation:

Strengths:

➢ Real life applications – This research changed the way US prisons are run e.g. young
prisoners are no longer kept with adult prisoners to prevent the bad behaviour
perpetuating. Beehive-style prisons, where all cells are under constant surveillance from a
central monitoring unit, are also not used in modern times, due to such setups increasing
the effects of institutionalisation and over exaggerating the differences in social roles
between prisoners and guards.

➢ Debriefing – participants were fully and completely debriefed about the aims and results of
the study. This is particularly important when considering that the BPS ethical guidelines of
deception and informed consent had been breached. Dealing with ethical issues in this way
simply makes the study more ethically acceptable, but does not change the quality (in terms
of validity and reliability) of the findings.

➢ The number of ethical issues with the study led to the formal recognition of ethical
guidelines, so that future studies were safer and less harmful to participants due to
formally binding rules. This demonstrates the practical application of an increased
understanding of the mechanisms of conformity and the variables which affect this.

➢ Control over variables - they selected participants who were emotionally stable and tried to
rule out personality differences. Having such control over variables is a strength because it
increases the internal validity of the study, allowing greater confidence in drawing
conclusions about the influence of social roles on behaviour.

Weaknesses:

➢ Lacks ecological validity - The study suffered from demand characteristics. For example, the
participants knew that they were participating in a study and therefore may have changed
their behaviour, either to please the experimenter (a type of demand characteristic) or in
response to being observed (participant reactivity, which acts as a confounding variable).

➢ Lack of realism - the participants also knew that the study was not real, so they claimed that
they simply acted according to the expectations associated with their role rather than
genuinely adopting it. This was seen particularly in qualitative data gathered from an
interview with one guard, who said that he based his performance on the stereotypical
guard role portrayed in the film Cool Hand Luke, thus further reducing the validity of the
findings.

➢ Lacks population validity – The sample only consisted of American male students and so the
findings cannot be generalised to other genders and cultures. For example, collectivist
cultures, such as China or Japan, may be more conformist to their prescribed social roles
because such cultures value the needs of the group over the needs of the individual. This
suggests that such findings may be culture-bound.

➢ Ethical issues: Lack of fully informed consent due to the deception required to (theoretically)
avoid demand characteristics and participant reactivity. However Zimbardo himself did not
know what was going to happen, so could not inform the participants, meaning that there is
possible justification for a breach of ethical guidelines.
Psychological harm – Participants were not protected from stress, anxiety, emotional
distress and embarrassment e.g. one prisoner had to be released due to excess distress and
uncontrollable screaming and crying. One prisoner was released on the first day due to
showing signs of psychological disturbance, with a further two being released on the next
day. This study would be deemed unacceptable according to modern ethical standards.

Role of dispositional influences

Fromm (1973) accused Zimbardo of exaggerating the power of the situation to influence behaviour
and minimising the role of personality factors (dispositional influences). For example, only a minority
of the guards (about a third) behaved in a brutal manner. Another third were keen on applying the
rules fairly. The rest actively tried to help and support the prisoners, sympathising with them,
offering them cigarettes and reinstating privileges (Zimbardo 2007).

This suggests that Zimbardo’s conclusion - that participants were conforming to social roles - may be
over-stated. The differences in the guards’ behaviour indicate that dispositional factors also shaped
how each guard responded to the same situation.

Minority influence (Moscovici 1976)

Participants Randomly selected participants and confederates: liberal arts, law and social
science students. 172 female participants in total, tested in groups of six:
two confederates (the minority) and four real participants (the majority)

Aim To observe how minorities can influence a majority

Procedure ● It was a lab experiment


● Before the experiment they did the Polock eye test to ensure that
they didn’t have any vision abnormalities.
● Everyone was shown 36 blue slides, each with a different shade of
blue
● They were each asked to say whether the slide was blue or green
● Confederates deliberately said they were green on two-thirds of the
trials, thus producing a consistent minority view.
● The number of times that the real participants reported that the
slide was green was observed
● A control group was also used consisting of participants only- no
confederates.

Findings When the confederates were consistent in their answers, about 8% of
participants said the slides were green. However, when the confederates
answered inconsistently, about 1% of participants said the slides were green.
This shows that consistency is crucial for a minority to exert maximum
influence on a majority.

Consistency: Moscovici’s study clearly demonstrates the role of consistency in minority influence.
The majority is more likely to be influenced by the minority when the minority is consistent in their
views.

This is because it shows the opposition that the views of the minority are real and serious enough to
pay attention to (i.e. the augmentation principle), if the minority are so determined to stay consistent. A
consistent minority disrupts established norms and creates uncertainty, doubt and conflict. This can
lead to the majority taking the minority view seriously. The majority will therefore be more likely to
question their own views.

If all members share the same views (synchronic consistency), this can convince the majority that there is
something worth agreeing with. Remaining consistent over time (diachronic consistency) forces the
opposition to rethink their own views repeatedly and generates more doubt due to the conflicting views,
which allows more opportunity for influence. When confronted with consistent opposition,
members of the majority will sit up, take notice and rethink their position.

There are two types of consistency:

● Diachronic consistency is when the group remains consistent over time – they do not change
their views over time.

● Synchronic consistency is when the group is consistent between all the members of the
group – everyone in the group has the same views, and therefore agree with and support
each other.

Commitment
The majority is more likely to be influenced by the minority when the minority is committed,
because when the minority have so much passion and confidence in their point of view, it suggests
to the majority that their view must somehow be valid, and it encourages them to explore why;
offering more opportunity to be influenced (majority may assume that they have a point).

Flexibility
The majority is more likely to be influenced by the minority when the minority is flexible. Being too
consistent can suggest that the minority is inflexible, uncompromising and irrational, making their
argument less appealing to the majority. However, if they appear flexible, compromising and
rational, they are less likely to be seen as extremists and attention seekers. They are more likely to
be seen as reasonable, considerate and cooperative.

+ The emphasis on consistency, commitment and flexibility has a real-life application because it
can inform minority groups about the best way to behave in order to exert a maximum amount of
influence. However, it is worth considering that the majority is not only larger than the minority, but
often has greater connections and more power. Therefore, the three techniques described above
are not always enough to change the opinion of an audience.

Evaluation:

Strengths:

➢ It was a lab experiment- replicable

Weaknesses:

➢ It lacks ecological validity as it was conducted in a lab

➢ Used female students -i.e: unrepresentative sample which means it would be wrong to
generalise. Women are said to be more conformist than men

➢ Lacks realism- tasks were not as complex as real issues would be

➢ Ethical issues- he deceived his participants- they were told that they were taking part in a
colour perception test- did not gain fully informed consent.

Factors affecting conformity and minority influence, including individual differences


(personality), situation and culture.

Group size: the bigger the majority group, the more people conformed.

- Asch found that group size influenced whether subjects conformed. The bigger the majority
group (number of confederates), the more people conformed, but only up to a certain point.
- With one other person (i.e. confederate) in the group, conformity was 3%; with two others it
increased to 13%; and with three or more it was 32% (or about ⅓).
- Optimum conformity effects (32%) were found with a majority of 3. Increasing the size of
the majority beyond three did not increase the levels of conformity found.
- Eg: at a restaurant, don’t know what fork to use. Look around and see what others use.

Lack of group unanimity/ Presence of an ally

- More likely to conform when all members of the group agree and give the same answer.
- Asch found that the presence of one confederate who goes against the majority can
reduce conformity by as much as 80%.
- For example, in the original experiment, 32% of participants conformed on the critical trials,
whereas when one confederate gave the correct answer on all the critical trials, conformity
dropped to 5%.

Difficulty of task

- When the comparison lines (e.g. A, B, C) were made more similar in length, it was harder to
judge the correct answer and conformity increased.
- When we are uncertain, it seems we look to others for confirmation. The more difficult the
task, the greater the conformity.

Answer in private

- When participants were allowed to answer in private (so the rest of the group does not know
their response), conformity decreased.
- This is because there are fewer group pressures and normative influence is not as powerful,
as there is no fear of rejection from the group.

1.2 Methods: Self-reporting data

Designing and conducting questionnaires and interviews, considering researcher effects

A questionnaire is a research instrument consisting of a series of questions for the purpose of
gathering information from respondents. Questionnaires can be thought of as a kind of written
interview. They can be carried out face to face, by telephone, computer or post.

However, a problem with questionnaires is that respondents may lie due to social desirability. Most
people want to present a positive image of themselves and so may lie or bend the truth to look
good, e.g., pupils would exaggerate revision duration.

Questionnaires can be an effective means of measuring the behaviour, attitudes, preferences,
opinions and intentions of relatively large numbers of subjects more cheaply and quickly than other
methods.

Often a questionnaire uses both open and closed questions to collect data. This is beneficial as it
means both quantitative and qualitative data can be obtained.

Primary data
It is data that is collected by a researcher from first-hand sources, using methods like surveys,
interviews, or experiments

Advantage

Some common advantages of primary data are its authenticity, specific nature, and up to date
information while secondary data is very cheap and not time-consuming.

Primary data is very reliable because it is usually objective and collected directly from the original
source. It also gives up to date information about a research topic compared to secondary data.

Secondary data, on the other hand, is not expensive, making it easy for people to conduct secondary
research. It doesn't take so much time and most of the secondary data sources can be accessed for
free.

Disadvantage

The disadvantage of primary data is the cost and time spent on data collection while secondary data
may be outdated or irrelevant. Primary data incur so much cost and takes time because of the
processes involved in carrying out primary research.

For example, when physically interviewing research subjects, one may need one or more
professionals, including the interviewees, videographers who will make a record of the interview in
some cases and the people involved in preparing for the interview. Apart from the time required,
the cost of doing this may be relatively high.

Secondary data may be outdated and irrelevant. In fact, researchers have to surf through irrelevant
data before finally having access to the data relevant to the research purpose.

Secondary data
It refers to data that is collected by someone other than the user. Common sources of secondary
data for social science include censuses, information collected by government departments,
organizational records and data that was originally collected for other research purposes.

Advantages of Secondary Data


Ease of Access: Most of the sources of secondary data are easily accessible to researchers. Most of
these sources can be accessed online through a mobile device. People who do not have access to
the internet can also access them through print. They are usually available in libraries, book stores,
and can even be borrowed from other people.

Inexpensive: Secondary data mostly require little to no cost for people to acquire them. Many books,
journals, and magazines can be downloaded for free online. Books can also be borrowed for free
from public libraries by people who do not have access to the internet. Researchers do not have to
spend money on investigations, and very little is spent on acquiring books, if any.

Time-Saving: The time spent on collecting secondary data is usually very little compared to that of
primary data. The only investigation necessary for secondary data collection is the process of
sourcing the necessary data, cutting out the time that would normally be spent on a full
investigation. This saves a significant amount of time for the researcher.

Longitudinal and Comparative Studies: Secondary data makes it easy to carry out longitudinal
studies without having to wait for a couple of years to draw conclusions. For example, you may want
to compare the country's population according to census 5 years ago, and now. Rather than waiting
for 5 years, the comparison can easily be made by collecting the census 5 years ago and now.

Generating new insights: When re-evaluating data, especially through another person's lens or point
of view, new things are uncovered. There might be something that wasn't discovered by the
primary data collector that secondary data analysis may reveal. For example, when customers
complain to the customer service team about difficulty using an app, the team may decide to create a
user guide teaching customers how to use it. However, when a product developer has access to this
data, it may be uncovered that the issue came from the UI/UX design, which needs to be worked on.

Disadvantages of Secondary Data

Data Quality: The data collected through secondary sources may not be as authentic as when
collected directly from the source. This is a very common disadvantage with online sources due to a
lack of regulatory bodies to monitor the kind of content that is being shared. Therefore, working
with this kind of data may have negative effects on the research being carried out.

Irrelevant Data: Researchers may spend a great deal of time sifting through a pool of irrelevant
data before finally finding what they need. This is because the data were not collected primarily
for the researcher. In some cases, a researcher may not even find the exact data he or she needs,
and has to settle for the next best alternative.

Outdated Information: Some data sources are outdated, with no newer data available to replace
them. For example, the national census is not usually updated yearly, so the country's population
will have changed since the last census. Nevertheless, someone working with the country's
population has to settle for the previously recorded figure even though it is outdated.

Unstructured, semi-structured and structured interviews; open and closed (including ranked scale) questions

● Structured: Questionnaire that contains only closed-ended questions


● Semi-structured: Contains both open-ended and closed ended questions
● Unstructured: Contains exclusively, or mostly, open-ended questions

The scale/rank question type can be used to ask respondents whether they agree or disagree with a
number of statements, to rate items on a scale, or to rank items in order of importance or
preference.

For example:

+ Ranking scales give you an insight into what matters to your respondents. Each response to
an item has an individual value, giving results that you can easily average and rank
numerically.

- Ranking scales cannot tell you why something is important or unimportant to respondents.
They address items in relation to each other rather than individually, and they may not give
fully accurate results.
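As a sketch of the first point, a set of ratings can be averaged and ranked in a few lines. The items and 1-5 scores below are invented for illustration:

```python
# Hypothetical responses: five participants rate three items on a 1-5 scale
ratings = {
    "price":   [5, 4, 5, 3, 4],
    "quality": [3, 3, 4, 2, 3],
    "service": [4, 5, 5, 5, 4],
}

# Average each item's ratings, then rank items from most to least preferred
averages = {item: sum(r) / len(r) for item, r in ratings.items()}
ranked = sorted(averages, key=averages.get, reverse=True)
print(ranked)  # ['service', 'price', 'quality']
```

Note that the ranking tells us the order of preference but, as the second point says, nothing about why respondents prefer one item to another.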

Alternative hypothesis

The alternative hypothesis states that there is a relationship between the two variables being
studied (one variable has an effect on the other).

It states that the results are not due to chance and that they are significant in terms of supporting
the theory being investigated.

Sample selection and sampling techniques: Random, stratified, volunteer and opportunity sampling
Random sampling:
It is a type of probability sampling where everyone in the entire target population has an equal
chance of being selected.

Random samples require a way of naming or numbering the target population and then using some
type of raffle method to choose those to make up the sample. Random samples are the best
method of selecting your sample from the population of interest.

● The advantages are that your sample should represent the target population and eliminate
sampling bias.
● The disadvantage is that it is very difficult to achieve (i.e. time, effort and money).

Stratified sampling:

The researcher identifies the different types of people that make up the target population and works
out the proportions needed for the sample to be representative.

● The advantage is that the sample should be highly representative of the target population
and therefore we can generalize from the results obtained.
● The disadvantage of stratified sampling is that gathering such a sample would be extremely
time consuming and difficult to do. This method is rarely used in psychology.

Opportunity sampling:

Uses people from the target population available at the time and willing to take part. It is based on
convenience.

An opportunity sample is obtained by asking members of the population of interest if they would
take part in your research. An example would be selecting a sample of students from those coming
out of the library.

● This is a quick and easy way of choosing participants


● It may not provide a representative sample, and could be biased

Systematic sampling:

Chooses subjects in a systematic (i.e. orderly / logical) way from the target population, like every nth
participant on a list of names.

To take a systematic sample, you list all the members of the population and decide upon a sample
size. Dividing the number of people in the population by the number of people you want in your
sample gives a number we will call n; you then select every nth person from the list.

● The advantage of this method is that it should provide a representative sample.


● The disadvantage is that it is very difficult to achieve (i.e. time, effort and money).
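These sampling techniques can be sketched in code. The following is only an illustration: the population, the sample size of 10 and the 40/60 strata split are all invented.

```python
import random

population = [f"P{i}" for i in range(1, 101)]   # hypothetical target population of 100

# Random sampling: every member has an equal chance of being chosen
random_sample = random.sample(population, 10)

# Systematic sampling: n = population size / sample size, then take every nth member
n = len(population) // 10                        # n = 10
start = random.randrange(n)                      # random starting point in the first interval
systematic_sample = population[start::n]

# Stratified sampling: sample from each subgroup in proportion to its size
strata = {"group_a": population[:40], "group_b": population[40:]}   # hypothetical 40/60 split
stratified_sample = []
for members in strata.values():
    share = round(len(members) / len(population) * 10)   # proportional share of 10
    stratified_sample.extend(random.sample(members, share))

print(len(random_sample), len(systematic_sample), len(stratified_sample))  # 10 10 10
```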

Quantitative data

1.2.6 Analysis of quantitative data:

Mean: the mean is the average of the numbers. It is easy to calculate: add up all the numbers, then
divide by how many numbers there are.

Median:

1. Arrange your numbers in numerical order.


2. Count how many numbers you have.
3. If you have an odd number, divide by 2 and round up to get the position of the median
number.
4. If you have an even number, divide by 2. Go to the number in that position and average it
with the number in the next higher position to get the median.

Mode:

To find the mode, or modal value, it is best to put the numbers in order. Then count how many of
each number. A number that appears most often is the mode.
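The three measures can be checked with Python's statistics module, which follows the steps described above (the scores are hypothetical):

```python
from statistics import mean, median, mode

scores = [4, 8, 6, 5, 3, 8, 7]   # hypothetical raw scores

print(mean(scores))     # add all seven numbers (41) and divide by 7
print(median(scores))   # sorted: 3 4 5 6 7 8 8 -> the middle value is 6
print(mode(scores))     # 8 appears most often
```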

Data tables:

Frequency table: It is a table that lists items and shows the number of times the items occur. It is a
method of organizing raw data in a compact form by displaying a series of scores in ascending or
descending order, together with their frequencies.
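A frequency table like this can be produced with collections.Counter (the raw data here are invented):

```python
from collections import Counter

scores = [3, 5, 5, 2, 5, 3, 4, 2, 5]   # hypothetical raw data

# List each score with the number of times it occurs, in ascending order
freq_table = sorted(Counter(scores).items())
for score, freq in freq_table:
    print(score, freq)
# 2 2
# 3 2
# 4 1
# 5 4
```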

Summary table:

The summary table is a visualization that summarizes statistical information about data in table
form.

Summary statistics summarize and provide information about your sample data; they tell you
something about the values in your data set.

Graphical presentation (bar chart, histogram)

To organize and summarize their data, researchers need numbers to describe what happened.
These numbers are called Descriptive Statistics. Researchers may use Histograms or Bar Graphs to
show the way data are distributed. Presenting data this way makes it easy to compare results, see
trends in data, and evaluate results quickly.

To get a better sense of what a set of data means, the researcher can plot it on a histogram or bar
graph.

A histogram represents the frequency distribution of continuous variables. Conversely, a bar graph
is a diagrammatic comparison of discrete variables. Histograms present numerical data whereas bar
graphs show categorical data.

Measures of dispersion

The range is the difference between the lowest and highest values.

In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set
of values. It is a statistical measure of how far the data is spread.

Standard deviation:

(formula will be given)
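As a sketch, using an invented data set, the range and the population form of the standard deviation can be computed as follows:

```python
from math import sqrt

values = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical data set

# Range: the difference between the highest and lowest values
data_range = max(values) - min(values)   # 9 - 2 = 7

# Standard deviation (population form): the square root of the average
# squared deviation of each value from the mean
m = sum(values) / len(values)            # mean = 5.0
sd = sqrt(sum((x - m) ** 2 for x in values) / len(values))
print(data_range, sd)  # 7 2.0
```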

Normal and skewed distribution

In a normal distribution, the mean and the median are the same number while the mean and
median in a skewed distribution become different numbers: A left-skewed, negative distribution
will have the mean to the left of the median. A right-skewed distribution will have the mean to the
right of the median.
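A quick numerical illustration with made-up data shows this relationship between mean and median:

```python
from statistics import mean, median

# Right-skewed (positively skewed) data: one large value pulls the mean above the median
right_skewed = [1, 2, 2, 3, 3, 4, 20]
print(mean(right_skewed), median(right_skewed))   # mean 5.0 is to the right of median 3

# Roughly symmetric data: mean and median coincide
symmetric = [1, 2, 3, 4, 5]
print(mean(symmetric), median(symmetric))         # both are 3
```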

Analysis of qualitative data using thematic analysis

Thematic analysis is a method of analyzing qualitative data. It is usually applied to a set of texts, such
as interview transcripts. The researcher closely examines the data to identify common themes –
topics, ideas and patterns of meaning that come up repeatedly.

There are various approaches to conducting thematic analysis, but the most common form follows a
six-step process:

1. Familiarization: knowing the data (transcribing audio, reading through the text, taking
notes)
2. Coding: highlighting sections of the text (phrases, sentences)
3. Generating themes: identifying patterns among them
4. Reviewing themes: make sure the themes are useful
5. Defining and naming themes: defining themes involves formulating exactly what we mean
by each theme and figuring out how it helps us understand the data.
6. Writing up: writing up a thematic analysis requires an introduction to establish our research
question, aims and approach.
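The coding and theme-generation steps are usually done by hand, but simple tallies can support them. The sketch below uses invented codes and transcript sections to show how counting code frequencies can surface candidate themes:

```python
from collections import Counter

# Hypothetical coded transcript sections: each section has been tagged
# with one or more codes during the coding step
coded_sections = [
    ["workload", "stress"],
    ["stress", "support"],
    ["workload", "deadlines"],
    ["support", "colleagues"],
    ["stress", "deadlines", "workload"],
]

# Counting how often each code appears highlights patterns worth reviewing as themes
code_counts = Counter(code for section in coded_sections for code in section)
print(code_counts.most_common(2))   # [('workload', 3), ('stress', 3)]
```

Frequency alone does not make a theme; the reviewing step still requires the researcher to judge whether frequent codes form a meaningful pattern.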

This process was originally developed for psychology research by Virginia Braun and Victoria Clarke.
However, thematic analysis is a flexible method that can be adapted to many different kinds of
research.

Thematic analysis is a good approach to research where you’re trying to find out something about
people’s views, opinions, knowledge, experiences or values from a set of qualitative data – for
example, interview transcripts, social media profiles, or survey responses.

Ethical guidelines

British Psychological Society (BPS) code of ethics and conduct (2009), including risk
management when carrying out research in psychology

This Code provides the parameters within which professional judgements should be made.
However, it cannot, and does not aim to, provide the answer to every ethical dilemma a psychologist
may face. It is important to remember to reflect and apply a process to resolve ethical dilemmas as
set out in this code.

The BPS Code of Ethics is a set of guidelines which have been outlined by the British Psychological
Society for anyone carrying out psychological research in the UK. Many countries have guidelines
that are similar (for example the USA).

There are four ethical principles which are the main domains of responsibility for consideration by
researchers within the code: respect, competence, responsibility and integrity.

The following list is a summary of the ethical considerations set up by the BPS in 2009. The list is
only a summary and only covers the main considerations.

Researchers in psychological research should consider the following when they plan and run
research:

Consent; have participants given informed consent? If the participant is under 16 years old, has
informed consent been given by their parents or carers?

Deception: have the participants been deceived in any way? If so, could this have been avoided?

Debriefing: have the participants been debriefed? Have they been given the opportunity to ask
questions?

Withdrawal from the investigation: have the participants been informed of their right to withdraw
from the research at any point, including the right to withdraw their data at a later date?

Anonymity and Confidentiality: participants have a right to remain anonymous in publication of the
research and confidentiality should be maintained except in exceptional circumstances where harm
may arise to the participant or someone associated with the research or participant.

Protection of participants: researcher must protect participants from both physical and
psychological harm

Contemporary study: Burger (2009) Replicating Milgram: Would people still obey
today?

This study is significant for students in other ways:

● It shows how scientific research proceeds, because Burger is replicating parts of Milgram’s
study to see if the conclusions still hold true today (if not, they are “time locked”).
● It illustrates features of the Social Approach, since it explores how situations dictate people’s
behaviour – but it also looks at individual differences, because it investigates personality too
● Studies the effect of culture on obedience- considered how some cultures are collectivist
(like Japan and China) while others are individualistic (like the US)
● Effect on gender- he experimented on both men and women and found no difference

Participants 76 participants- six dropped out of the study, one did not return for the second
session and five expressed awareness of Milgram’s obedience research.
Final sample was 29 men and 41 women. Participants’ ages ranged from 20 to 81
years, and the mean was 42.9 years.

Aim Attempt to replicate Milgram’s study in an ethically acceptable way and to see if
similar results to Milgram’s original study could still be found today
Does Milgram’s study have historical validity?

Method Most of Milgram’s procedure was followed but important changes were made as
follows:
● No one with knowledge of Milgram’s study was used
● Max. apparent shock was 150 V- the level at which the learner first cries
out in pain- to protect participants from stress
● Two step screening process helped remove participants likely to be
distressed- no one with a history of mental illness or stress was accepted.
● Participants were told three times that they could withdraw at any point in
the experiment.
● Participants were promised $50 for two 45 minute sessions.
● Experimenter was not a confederate but a clinical psychologist who could
stop the experiment at any sign of excessive stress.

● 70 male and female participants were used.

Findings Obedience rate of 70%, with no difference between male and female rates.
63.3% of participants in the modelled refusal condition continued past the 150 volt
point which was not significantly different from the base condition.

Conclusions Possible to replicate Milgram’s study in a non-harmful way.


Findings indicate that the situational factors affecting obedience in Milgram’s
participants still operate today.
Obedience rates have not changed dramatically in the 50 years since Milgram’s
study

Strengths:

➢ Historically valid after 50 years- similar results received


➢ Reliable- same procedure used
➢ Conducted in a lab so no external variables (controlled environment)
➢ Can be replicated as it was conducted in a lab
➢ Participants were paid in full in advance, so it can be concluded that social pressure made
them shock the learner.
➢ Ethical: two-step screening process and stopped the experiment if participants showed signs
of suffering
➢ Right to withdraw- advised them three times that they could withdraw at any time and keep
the money

Weaknesses:

➢ It lacks ecological validity as it was conducted in a lab- electric shocks are artificial and don't
happen in real life.
➢ Burger stopped the study at 150 V and assumed that anyone who was prepared to go on
would have gone to 450 V. This might not be a correct assumption, as participants may have
had second thoughts and backed out later.

Yi Huang et al. (2014) Conformity to the opinions of other people lasts for no more than 3 days

Study 1:

Participants Sample: 17 Chinese students were recruited from South China Normal University
(5 men, 12 women) with a mean age of 22 years. All participants were right-
handed, had normal or corrected-to-normal vision, and reported no neurological
or psychiatric disorders.

Aim ● To investigate whether social conformity reflects private acceptance or


public compliance by examining the stability of behavioural changes in
judgments. (whether the conformity reflects internalisation)

● To determine whether long-lasting judgment changes are likely to reflect


a change in private opinion, whereas transient judgment changes
suggest that public compliance is involved.

● To ascertain whether social conformity could persist in the short-term.

Procedure ● The study was approved by the Ethics Committee of the School of
Psychology at South China Normal University. All participants gave
written informed consent and were informed of their right to discontinue
participation at any time.

● Participants received a payment of 30 yuan (about $5 U.S.). Participants


were informed that they were taking part in research about human
perception of facial attractiveness.

● 280 photographs of faces of young adult Chinese women with neutral
expressions were downloaded from free internet sources or were
university students (taken with consent). All photographs were colour
and of similar quality and general appearance.

● Photographs were presented on a computer monitor for 2 s. An 8-point


Likert scale (1 = very unattractive, 8 = very attractive) was then added to
the display, and participants rated the face. Their initial rating was shown
on screen for 0.5 seconds. Then, for 2 seconds, another box indicated an
alleged average rating given by 200 other students of the same gender as
the participant.

FIndings In 25% of trials, the group rating agreed with the participant rating (peers agree
condition). In 75% of trials, the group rating was equally likely to be above or
below the participant rating (peers-higher and peers-lower conditions).
The group norm seemed to sway participants' own judgements when they re-rated the photos 1
and 3 days after the session
No evidence that it’s internalisation- no evidence for a social-conformity effect
when the intervening period was longer (either 7 days or 3 months after the first
session)

After 3 months, participants were called back and asked to complete a second test, which they had
not been told about previously. They rated the same faces again. Faces were presented in random
order; participants were not reminded of peer-group ratings.

Study 2:

Participants Three different groups of students were recruited from South China Normal
University: 18 students in the 1-day group (mean age 20.72 years), 16 in the
3-day group and 17 in the 7-day group.

Aim To ascertain whether the conformity effect would persist over a one-, three-
or seven-day interval between the two rating sessions.

Procedure Participants performed the same face-rating task as in Study 1 and re-rated
the faces 1, 3, or 7 days later.

Results The rating change persisted at 3 days but not at 7 days, suggesting that
conformity lasts no more than 3 days.

Results

➢ Study 1: Rating scores showed no lasting change: when participants re-rated the faces,
their ratings no longer aligned with the peer-group ratings given 3 months
before.

➢ Study 2: For the 1-day group, the rating change between the peers-lower and peers-higher
conditions was significant; participants rated faces in the peers-higher condition as more
attractive than faces in the peers-lower condition.

The rating change was also significant for the 3-day group, but not for the 7-day group.

Conclusions

➢ Study 1 There was no evidence for long-term influence of social conformity on participants’
attractiveness ratings.

➢ Study 2 Overall, social conformity in facial attractiveness judgments persists for up to 3 days,
but not for longer than 7 days. The social-conformity effect observed reflected a change in
privately held views. However, the short duration may have been the result of participants’
daily exposure to large numbers of faces.

It is probable that opinions were quickly revised because of subsequent experience, so that
judgments of facial attractiveness were reset back to the original norm. A resetting of individual
judgment norms could have occurred more quickly than it would for classes of objects viewed
infrequently.

Evaluation

Strengths:

➢ Standardised procedure- instructions were the same which means the experiment can be
repeated
➢ Laboratory setting- controlled environment
➢ Convincing and plausible explanation of the experiment means demand characteristics are
unlikely
➢ Sample was screened and participants who had a neurological or psychiatric disorder were
excluded
➢ Able to control for methodological issues that often arise in studies that use a test-retest
format, such as the natural human tendencies to regress to the mean and to behave
consistently over time

Weaknesses:

➢ Laboratory setting means it was an artificial task and lacks mundane realism

➢ Small, localised sample wouldn’t account for individual differences
➢ Similar background, ethnicity and level of education means there is a lack of diversity
➢ Still don’t understand why the effect lasts for 3 days
➢ Cross-cultural validity- differences in conformity among different cultures
➢ Ethical issue of deception

Cognitive Psychology
Topic overview: Students must show an understanding that cognitive psychology is about the role of
cognition/ cognitive processes in human behaviour. Processes include perception, memory, selective
attention, language and problem solving. The cognitive topic area draws on how information is processed
in the brain.

Models of memory

The multi-store model of memory (Atkinson and Shiffrin, 1968), including information
processing, encoding, storage, retrieval, capacity and duration.

Atkinson and Shiffrin (1968) developed the Multi-Store Model of memory (MSM), which describes
the flow of information between three storage systems of memory: the sensory register (SR),
short-term memory (STM) and long-term memory (LTM). The model suggests that memory is made
up of three stores linked by processing.

Information passes from store to store in a linear way; this is described as information processing.

Three areas are studied in the multi-store model of memory. Researchers investigate:

- Capacity: the size of the store


- Duration: how long information remains in the store
- Mode of representation (mode of storage): the form in which information is encoded or
stored.

Researchers investigate:

- Encoding: how memories are encoded, which means how they are registered as memories,
such as by sound or smell
- Storage- how memories are stored
- Retrieval- how we retrieve memories when the output is needed, which means finding and
accessing stored memories.

            SR              STM                    LTM

Coding      Iconic/echoic   Acoustic               Semantic

Capacity    Unlimited       5-9 items (7 +/- 2)    Unlimited

Duration    0.5 seconds     15 to 30 seconds       Unlimited

Sensory register:

● A stimulus from the environment, for example the sound of someone’s name, will pass into
the sensory registers along with lots of other sights, sounds, smells and so on. The register is
modality specific: each sense has a store of its own.

● So this part of memory is not one store but several, in fact one for each of our five senses.
The two main stores are:

1. Iconic- information is coded visually

2. Echoic- information is coded acoustically

The other senses have sensory stores of their own.

Since it receives information from our senses, it has a huge capacity and a duration of less than half
a second. Information will only pass from the sensory register to the short-term store if we pay
attention to it.

Short Term Memory

STM is the conscious part of the mind. It has a limited capacity store, because it can only contain a
certain number of ‘things’ before forgetting takes place.

● STM is described as being acoustically encoded and lasts about 30 seconds unless it is
rehearsed.

● Maintenance rehearsal occurs when we repeat the new information to ourselves, allowing
the information to be kept in the STM. Prolonged maintenance rehearsal allows the
information to be passed into the LTM, whilst a lack of such rehearsal causes forgetting.

Long Term Memory

● This is the potentially permanent memory store for information that has been rehearsed for
a prolonged time. Psychologists believe that its capacity is unlimited and can last very many
years.

● In order to remember information, ‘retrieval’ must occur, which is when information is
transferred back into the STM. According to the MSM, this is true of all memories- none of
them are recalled directly from LTM.

Encoding
- Definition: registering information as a memory.
- Mode of representation: can be in different forms/modes- visual, acoustic (sound), tactile
(touch), semantic (meaning).

Storage
- Definition: keeping memories after encoding.
- Mode of representation: can be in sensory memory, short-term memory or long-term memory.
Links to neuroscience- there is actual storage in the brain.

Retrieval
- Definition: accessing memories from storage.
- Mode of representation: can be recognition or recall. Can be reconstructive (not an exact match
with what was encoded and stored). Lack of retrieval = forgetting.

Types of long Term Memory

➢ Episodic- part of the explicit long-term memory responsible for storing information about
events (i.e. episodes) that we have experienced in our lives. It involves conscious thought
and is declarative- eg: first day of school. It describes memories which have some kind of
personal meaning to us.

➢ Procedural- is a part of the implicit long-term memory responsible for knowing how to do
things. Eg: riding a bike

➢ Semantic- it is part of the explicit long-term memory responsible for storing information
about the world. This includes knowledge about the meaning of words, as well as general
knowledge. It involves conscious thought and is declarative (memory focuses on knowing
that) Eg: London is the capital of England.

(Implicit- unconscious)
(Explicit/ declarative- conscious)

The prefrontal cortex is seen to relate to short-term memory while the hippocampus is associated
with long term memory, supporting the model’s idea of different memory stores.

Evaluation of the multi-store memory model:

Strengths:

➢ There is a large base of research that supports the idea of distinct STM and LTM systems
(e.g: brain-damaged case study patient KF’s STM was impaired following a motorcycle
accident, but his LTM remained intact)

➢ It makes sense that memories in the LTM are encoded semantically- ie: you might recall the
general message put across in a political speech, rather than all of the words as they were
heard

➢ The MSM was a pioneering model of memory that inspired further research and
consequently other influential models, such as the Working Memory Model.

➢ The experiments that provide support for the model are reliable because they have been
repeated often and, being well controlled, are replicable. The experimental research method
is scientific and so a sound body of knowledge can be built up. For example, Glanzer and
Cunitz (1966) carried out a study using word lists. They found that the first words in a list
were recalled well, as were the last words, but that the middle words were not remembered
well. They claimed that the primacy effect was because those words had been rehearsed
and so were in long-term memory and accessible. The recency effect was because those
words were still in consciousness in short-term memory, so were recalled easily. The middle
words were neither well-rehearsed in long term memory nor in consciousness in short term
memory. Therefore, these words were the most easily forgotten.

➢ There is evidence from case studies that give physiological support. For example, the case
study of Clive Wearing (Blakemore, 1988) showed that there is an area of the brain (the
hippocampus) which, if damaged, prevents new memories from being laid down. It appears
that the hippocampus is involved in laying down long-term memories, because a person
with a damaged hippocampus can no longer build them.
If damage to this particular area of the brain means no new long-term memories can be
formed, this suggests that, when undamaged, this area of the brain fulfils that purpose.

Weaknesses:

➢ Some research into STM duration has low ecological validity, as the stimuli participants were
asked to remember bear little resemblance to items learned in real life, e.g: Peterson and
Peterson (1959) used nonsense trigrams such as ‘XQF’ to investigate STM duration

➢ The model is arguably over-simplified, as evidence suggests that there are multiple short
and long-term memory stores, e.g: ‘LTM’ can be split into Episodic, Procedural and Semantic
memory.

➢ It does not make much sense to think of procedural memory (a type of LTM) as being
encoded semantically, i.e. knowing how to ride a bike through its meaning.

➢ It is only assumed that LTM has an unlimited capacity, as research has been unable to
measure this accurately.

➢ Although case studies like Clive Wearing have suggested an area of the brain for short-term
memory, another case study (Shallice and Warrington, 1970) showed that a victim of a
motorbike accident was able to add long-term memories even though his short-term
memory was damaged. This goes against the multi-store model.

Another study (Schmolck et al., 2002, one of the contemporary studies for this topic area)
gives more detailed findings when looking at how brain damage affects memory. The study
of Henry Molaison (HM) and others suggests that how memory works is actually very
complex when it comes to showing where STM and LTM might be in the brain and how they
might work. This complexity goes against the simplicity of the multi-store model.

➢ It is hard to say what capacity means. Craik and Lockhart (1972) ask whether it is limited
processing capacity or limited storage capacity. Short-term memory research tends to take
limited capacity as limited storage, with a capacity of five to nine items. However, if words
rather than letters are used in a span test, around 20 items can be recalled. This shows that
capacity needs to be defined more rigorously

➢ Experiments used to test the multi-store model tend to employ artificial tasks- for example,
testing short-term memory using letters or digits. The findings may not be valid because, in
real life, processing is rarely as isolated as these tasks suggest. Steyvers and Hemmer (2012),
which is detailed later as one of the contemporary studies, focuses on how memory
experiments lack ecological validity and finds useful evidence to show that experiments do
not yield valid data.

Clive Wearing was a musicologist who suffers from chronic anterograde and retrograde amnesia.
Anterograde amnesia: cannot create new memories
Retrograde amnesia (lost many of his memories).
His procedural memory was not damaged and he can still play the piano.

Henry Molaison had a bilateral medial temporal lobectomy where his hippocampi were removed
in an attempt to cure his epilepsy. He had heavy anterograde amnesia and was unable to form
new memories.

Working memory (Baddeley and Hitch, 1974)

This modern model of memory is probably the most dominant model today. Referring back to the
multi-store model will help you to understand the working memory model. The multi-store model
suggests that there is a short-term store and a long-term store. The working memory model focuses
on the short term store and on providing more information about short-term remembering.

General ideas about the working memory and the model

The idea of working memory is that there is a system in the short term that is there to maintain and
store information and this system underlies all thinking, not just focusing on memory. The idea is a
system that has limited capacity to bridge between perception, long-term memory and action.
Perception is getting the information and ‘action’ is the output. Working memory is seen as a system
between perception and long-term memory.

Such a system also needs something as a control to sort information into storage areas, so that it
can get into long-term memory. This means taking short-term memory and splitting it into different
areas. Working memory is concerned with reasoning, understanding and learning- with thinking.

In a 2003 review of working memory, Baddeley discussed the starting point of ideas about
short-term and long-term memory. He suggested that short-term memory is based on some sort of
electrical activity that is short-lived, whereas long-term memory is based on neuronal growth.

More research was done into STM as a separate system from LTM and what developed was a model
of memory that has temporary sensory registers flowing into a limited capacity short-term store.
You can recognise the multi-store model here, though there are additional registers. This is the basis
of the working memory model, which looks at the temporary registers. Baddeley and Hitch
proposed that short term memory has three components and is not just one system as suggested
by the multi-store model.

The trace decay theory of forgetting, linking to the biology of the brain

➢ The trace decay theory of forgetting is best understood by recalling the multi-store model of
memory in which there is a short-term store and a long-term store. Trace decay is a theory
of forgetting that applies to both the short-term store and the long-term store.

➢ The main point is that memories have a physical trace. Over time, this trace deteriorates
until finally it is lost. It is thought that memories are stored in the brain, which means a
structural change must occur. This is called an engram. Engrams are thought to be subject to
neurological decay. As an engram decays, the memory disappears and forgetting occurs.

➢ One way of renewing the trace is to repeat and rehearse information, which reinstates the
engram. The working memory model includes rehearsal in its explanation. It is thought that
when something is first learned the trace is fragile, but after further learning the engram
becomes more solid and is less likely to be destroyed. The change from a cognitive process
to an engram is a neurochemical one.
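The decay-and-rehearsal idea above can be sketched as a toy simulation. This is purely illustrative (the decay rate, threshold and function names are made-up assumptions, not part of any published model): a trace’s strength decays exponentially over time, and rehearsal resets it to full strength.

```python
import math

# Illustrative parameters only - not empirically derived values.
DECAY_RATE = 0.5        # assumed decay constant, per second
RECALL_THRESHOLD = 0.2  # trace counts as "forgotten" below this strength

def trace_strength(initial, seconds):
    """Exponential decay of an engram's strength over time."""
    return initial * math.exp(-DECAY_RATE * seconds)

def recall_after(seconds, rehearsed_at=None):
    """True if the trace is still above threshold at recall time.
    Rehearsal resets the trace to full strength at that moment."""
    if rehearsed_at is not None and rehearsed_at < seconds:
        strength = trace_strength(1.0, seconds - rehearsed_at)
    else:
        strength = trace_strength(1.0, seconds)
    return strength >= RECALL_THRESHOLD

print(recall_after(15))                   # False: 15 s with no rehearsal
print(recall_after(15, rehearsed_at=14))  # True: rehearsed 1 s before recall
```

The point of the sketch is the shape of the theory, not the numbers: without renewal the trace falls below a usable level, while rehearsal reinstates the engram, as described above.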

Evidence for the trace theory of forgetting

Reitman (1974) and McKenna and Glendon (1985) carried out studies into trace decay. In one study,
focusing on the short-term store, male students were shown a list of five words for two seconds and
then had to listen for a faint tone over headphones for 15 seconds before recall. Recall after this
15-second gap was compared with recall when there was no gap.

The passage of time led to forgetting. This suggested that forgetting came about because of the
decay of the trace. As there was no rehearsal of the material to renew the trace and no
displacement by new material, so the trace in the short-term store must have decayed.

Skills also need renewing to be remembered. The theory holds that, because memory worsens over
time, it is the passage of time that causes the trace to decay.

Evaluation of the trace-decay theory

Strengths:

➢ Physiological evidence supports the idea that there is a physical trace in the brain. This does
not, however, prove that such a trace will decay. Hebb put forward the idea of an engram,
and Penfield provided evidence when he probed the brains of epileptic patients who were
awake and found areas of the brain that held certain memories.

➢ The theory focuses on the physical aspects of memory. People with Alzheimer’s disease seem to
lose memories, rather than being unable to retrieve them. Therefore, the theory helps
explain forgetting in real-life situations, which suggests that it may be valid.

Weaknesses:

➢ In studies of memory loss in the short-term store, it is difficult to know whether new
information has been attended to. Therefore, it is difficult to test only the trace-decay
theory, without any suggestion that displacement could have caused the forgetting. So, in the
short-term store, it is difficult to tell whether the trace has decayed or whether the memory
cannot be retrieved for some other reason.

➢ Although there is evidence that memories in the long-term store become inaccessible, there
are some memories that are resistant to being forgotten and can be remembered clearly
(flashbulb memories). Therefore, some memories can retain their trace.

Baddeley and Hitch’s (1974) Working Memory Model

Baddeley and Hitch’s (1974) original model of memory, the three-component model of working
memory, has the following components:

● A Central Executive- this supervises the system and controls the flow of information
● A Phonological loop (inner ear)- which holds sound information
● A Visuospatial sketchpad (inner eye)- which deals with visual and spatial information

The original model has separate phonological and visuospatial systems. According to Baddeley, if
participants in an experiment are asked to do two tasks at the same time that use the same system,
performance suffers; however, they can carry out a visual task and a task involving sound
simultaneously. Therefore, the model was developed to say that the two systems are separate.

The central executive is necessary to explain how tasks are allocated and how the systems are
controlled. A test that Baddeley uses is the dual-task paradigm.

(A dual task paradigm is where an experiment is done using two different tasks. The two tasks might
be similar, such as having letters that sound alike, to find out whether that similarity affects recall. )

The dual-task paradigm holds that different parts of the cognitive system are involved if a task
seems to interfere with one type of processing (such as processing using sound) but not with
another type of processing (such as using vision).
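The dual-task logic can be reduced to a simple rule, sketched here as toy code (the task names and subsystem tags are invented for illustration, not from any real experiment): two simultaneous tasks interfere only if they load the same subsystem.

```python
# Toy model of the dual-task paradigm (illustrative task names only).
TASK_SUBSYSTEM = {
    "repeat_digits_aloud": "phonological_loop",
    "learn_word_list": "phonological_loop",
    "track_moving_dot": "visuospatial_sketchpad",
}

def tasks_interfere(task_a, task_b):
    """Performance suffers only when both tasks use the same subsystem."""
    return TASK_SUBSYSTEM[task_a] == TASK_SUBSYSTEM[task_b]

print(tasks_interfere("repeat_digits_aloud", "learn_word_list"))  # True: both verbal
print(tasks_interfere("learn_word_list", "track_moving_dot"))     # False: different systems
```

This is the inference pattern behind the paradigm: if two tasks clash, they are assumed to share a component; if they can be done together, separate components are assumed.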

The phonological loop is split into two parts, the articulatory loop and the primary acoustic store, to
cater for the idea that there is voicing of information when rehearsing in short-term memory
(articulatory loop) and also acoustic information (primary acoustic store), and the two are different.

- The primary acoustic store deals with the perception of sounds and speech
- The articulatory loop is a verbal rehearsal system that allows the rehearsal of verbal
material

They proposed a basic version of Working Memory, which can be defined as: “A temporary storage
system under attentional control that underpins our capacity for complex thought”

These two systems are managed by the Central Executive. It doesn't handle memories itself but
allocates information to the other systems. It is not modality-specific - it can process sight, sound or
input from any of the five senses.

The phonological loop

The phonological loop processes auditory information and allows for maintenance rehearsal. It is
made up of the phonological store (which holds the words you hear) and the articulatory process
(which rehearses them).

It deals with auditory information and is split into two parts:

● The STM phonological store (or primary acoustic store - inner ear) holds auditory memory
traces (these decay rapidly after a few seconds). Information in the phonological loop lasts
for about 2 seconds before it decays.

● The articulatory loop (inner voice) revives memory traces by rehearsing them. This is like
talking without vocalising (subvocal speech).

Sound information goes directly into the primary acoustic store. This has been called the ‘inner ear’
and remembers sounds in their order. The articulatory system has been called the ‘inner voice’
because information is repeated to maintain the trace. Information in the phonological loop is
assumed to last for about two seconds before it decays. Word lists and other items are stored as
sound. The articulatory system is such that rehearsal refreshes memories but as there are more
items, it gets to the point where the memory of the first item in the phonological store fades before it
can be rehearsed.
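This trade-off between rehearsal speed and the roughly two-second trace can be put as a back-of-envelope calculation (the spoken durations below are made-up figures, used only to illustrate the idea): a list survives only if one full rehearsal cycle finishes before the first item’s trace fades.

```python
LOOP_DURATION = 2.0  # approximate seconds a trace survives without refreshing

def list_maintainable(spoken_durations):
    """A list can be held in the loop only if re-articulating the whole
    list takes no longer than the ~2 s life of the first item's trace."""
    return sum(spoken_durations) <= LOOP_DURATION

short_words = [0.4] * 5  # five short words, assumed ~0.4 s each to say
long_words = [0.9] * 5   # five long words, assumed ~0.9 s each

print(list_maintainable(short_words))  # True: 2.0 s rehearsal cycle just fits
print(list_maintainable(long_words))   # False: 4.5 s cycle, first trace fades
```

This is one common way of explaining why memory span is larger for short words than for long ones (the word-length effect).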

Comparison to the multi-store model (MSM): in the MSM, information held in the STM is mainly
auditory (held as sound) and does not last long. The working memory model is an expansion of the
multi-store model, and the two support each other.

Evidence for the phonological loop comes from Baddeley’s work (2003): letters that sounded alike
(e.g. V, B, P) were not recalled as well as letters that did not sound alike (e.g. W, X, R, K).

Function of the phonological loop: It is about learning language. Support for this claim comes from
evidence that people with a phonological loop deficit cannot learn new vocabulary even though their
LTM is “normal” verbally.

The visuospatial sketchpad

It combines the visual and spatial information processed by other stores, giving us a ‘complete
picture’ e.g: when recalling the architecture of a famous landmark.

The VSS is divided into the visual cache and the inner scribe. The capacity of the VSS is around 4-5
chunks of information (Baddeley).

● Holds information we see
● Used to manipulate spatial information, such as shapes, colors and the position of objects
● It is limited in capacity to around 3 - 4 objects

Anything to do with spatial awareness, such as finding your way around the school, involves using
your visuospatial sketchpad. It is divided into visual, spatial and kinaesthetic (movement) parts.

Function of the visuospatial sketchpad: It seems to link to non-verbal intelligence. The sketchpad
might have a role in finding out how objects appear and adding that sort of meaning to objects. It
might also help in understanding objects. It is divided into parts:

● The visual cache stores information about form and color


● The ‘inner scribe’ deals with retrieval and rehearsal

There is biological evidence for this split from patients with brain damage: patients with
right-hemisphere damage were found to have visuospatial difficulties.

The Central Executive

➢ It has been described as an ‘attentional process’ with a very limited processing capacity, and
whose role is to allocate tasks to the 3 slave systems.

➢ Drives the whole system (e.g the boss of working memory) and allocates data to the
subsystems: the phonological loop and the visuospatial sketchpad. It also deals with
cognitive tasks such as mental arithmetic and problem-solving.

➢ The central executive is the most important component of the model. It is responsible for
monitoring and coordinating the operation of the slave systems (the sketchpad and
phonological loop) and relates them to LTM.

➢ It decides which information is attended to and which parts of the working memory to send
that information to be dealt with. For example, two activities sometimes come into conflict,
such as driving a car and talking.

➢ It is the most versatile and important component of the working memory system. However,
there is still more research needed to be done as we know less about this component than
the two other subsystems. Baddeley suggests that the CE acts more like a system which
controls attentional processes rather than as a memory store. This is unlike the phonological
loop and the visuospatial sketchpad, which are specialized storage systems. The CE enables
the working memory system to selectively attend to some stimuli and ignore others.

The episodic buffer

There are issues with the central executive being all about attention. For example, thinking about
memory span in STM, it was found that using chunks helped memory span; but chunks use meaning,
and there must be some facility to access information from the LTM about such meaning. Also, there
had to be a way for the phonological loop and the visuospatial sketchpad to interact. These issues led
Baddeley to introduce, in 2000, a fourth aspect of the working memory model: the episodic buffer.

The buffer provides time sequencing for visual, spatial and verbal information- for example, the
chronological order of words or the sequence of pictures in a film. The episodic buffer might also
bring in information from the long-term store. The episodic buffer has limited capacity as do all
aspects of the working memory model. The central executive controls the attention of the episodic
buffer and the episodic buffer is perhaps the storage space for the central executive.

Working memory located in the brain

As seen previously, biological evidence has been used to support the model. Baddeley used such
evidence to suggest where the working memory stores (and central executive) might be in the brain.
Evidence comes from patients with lesions (damage to the brain) and also scanning of patients
without damage.

The phonological loop does appear to involve the left temporoparietal area of the brain. Broca’s
area is suggested as being involved in the rehearsal side of the phonological loop. Broca’s area was
perceived as the area of speech. If this area is damaged, someone might know the words they want
to say but they can’t say them, which supports the idea that this area is involved in the working
memory.

Visuospatial memory seems to be in the right hemisphere - more than one study comes to that
conclusion, so the conclusion seems firm. Verbal working memory seems to be in the left
hemisphere, linking to the left inferior parietal cortex, among other areas. The central executive
seems to link to the frontal lobes, again taking evidence from neuroimaging (scanning) and lesion
studies (brain damage).

Difference in approach regarding the working memory model:

- The difference between this model and others was that it was formed from biological ideas
about how the brain functions, and it continued to evolve as the brain was understood
better.
- Other models, such as the multi-store model, were first put forward and then tested to see if
they worked.
- It could be argued that it isn't just ‘a model’ because it uses neurophysiological evidence
and builds on that evidence. It is a model that is always developing; many questions
are still being asked and answered about this explanation of STM.

Evidence for the working memory model:

Phonological loop:
- If participants were asked to learn a list of words and say something aloud at the same time,
they would have difficulty learning the list. This is because the articulatory loop is already
being used to repeat other words aloud, making it unavailable to repeat the words on the
list.
- Lists with words that sound similar are more difficult to remember than lists with words that
sound different.

Visuospatial sketchpad:
- Scans show that tasks involving visual objects would activate the left hemisphere while tasks
involving spatial information would activate the right hemisphere.
- Carrying out 2 visual(real object) tasks was more difficult than carrying out 1 visual and 1
spatial task.

Episodic buffer:
- People with amnesia who could not form new long-term memories could still recall
stories in the short term containing a lot of information. The information was more than
could be retained in the phonological loop.

Evaluation of the working memory model:

Strengths:
➢ The model expands on the multi-store model, giving more information and refining it. Many
studies showed that some dual tasks were more difficult than others, but this required an
explanation. This introduced the ideas of an “inner ear”, “inner voice” and an “inner eye”.

➢ The amount of research it has generated, and is still generating, is a strength. Studies have
led to refinements of the model. The working memory model is supported by many
experiments by many well-known psychologists, including Baddeley and Hitch themselves.

➢ The WMM can be applied to real life. Examples of when the Working Memory Model is
present are

1) Reading. This uses the Phonological Loop

2) Navigating oneself around a place. This uses the Visuospatial Sketchpad

3) Solving Problems. This uses the Central Executive, along with visual and spatial processing.

- The working memory model has attracted lots of research and this can be considered a
strength as the more research is put into a theory, the more questions and problems it
answers.

Weaknesses

- The model only covers the STM. This isn't a very comprehensive representation and gives
limited insight, because it does not include LTM or the sensory register.

- It does not explain the changes in processing ability that take place with time and practice.

- Components have been added to the model as newer findings emerged, which suggests that
the original model lacked information and was not a complete explanation of memory.

- There is very little evidence as to how the Central Executive works, or even what its purpose
is. Its capacity has also never been measured.

- Many of the experiments were artificial, meaning they took place in a controlled or
man-made environment, and participants were given artificial tasks, such as learning lists
and remembering stories. This could call the validity into question because, in reality, tasks
normally involve many of the senses.

Displacement as a theory of Forgetting

The theory that displacement causes forgetting can be understood by reference to the multi-store
model of memory. As we have seen, the idea is that there is a short-term store where information is
held for a short time (up to 30 seconds). It is either rehearsed, and goes into the long-term store, or
is lost.

The theory of displacement as a reason for forgetting is that the rehearsal loop in the short-term
store has a limited capacity- perhaps nine items or fewer.

Evidence for displacement as a theory of forgetting


The idea of primacy and recency effects comes from the multi-store model of memory:

- The primacy effect is that information learned first is well remembered, probably because it
has been rehearsed into the long-term store and so is available for recall.

- The recency effect is that information that is learned last is well remembered, probably
because it is still in the rehearsal loop and so available for immediate recall.

Information from the middle is not well recalled. This is probably because it did not go from the
rehearsal loop into the long-term store, but was displaced by new material in the loop and was
lost, i.e. forgotten. This is evidence for the idea of displacement in the short-term store.
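The primacy/recency pattern can be sketched as a toy simulation (the loop capacity, rehearsal threshold and word list are all made-up illustrative values, not from any study): the rehearsal loop is a bounded queue, each presentation step shares a fixed amount of rehearsal among whatever is in the loop, and an item reaches the long-term store once it has accumulated enough rehearsal.

```python
from collections import deque

def serial_position(items, loop_capacity=4, ltm_threshold=1.5):
    """Toy model of the serial-position effect (illustrative parameters).
    Each presentation step shares one unit of rehearsal equally among the
    items currently in the bounded rehearsal loop; an item reaches the
    long-term store once its accumulated rehearsal passes the threshold."""
    loop = deque(maxlen=loop_capacity)
    rehearsal = {item: 0.0 for item in items}
    for item in items:
        loop.append(item)  # oldest item is displaced when the loop is full
        for held in loop:
            rehearsal[held] += 1.0 / len(loop)
    ltm = {item for item, r in rehearsal.items() if r >= ltm_threshold}
    recalled = ltm | set(loop)  # long-term store plus what is still in the loop
    return [item for item in items if item in recalled]

words = ["cat", "pen", "sun", "map", "dog", "cup", "hat", "box"]
print(serial_position(words))
```

With these illustrative settings, the first word gets enough undivided rehearsal to reach the long-term store (primacy), the last four words are still in the loop (recency), and the middle words are displaced before transfer, i.e. forgotten.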

➢ Waugh and Norman (1965) tested this idea. They read a list of letters to participants. After
hearing the list, the participants were given one of the letters and had to try to remember
the letter that came after it in the list. This explanation fits in with the multi-store model of
memory, where a rehearsal loop is used to explain forgetting in STM. The loop can only
contain a limited amount of information, so it makes sense that information is displaced
from it. It also fits in with working memory- supporting two models of memory.

➢ They found that displacement did seem to occur. Although Glanzer et al. (1967) thought that
displacement was a factor in forgetting, they also thought that decay caused forgetting. This
is because there is a time delay in experiments- the longer the time before recall, the greater
the forgetting. This forgetting could not be due to displacement alone, because displacement
would cause the same degree of forgetting whatever the time delay between learning and
recall. Therefore, displacement alone does not explain forgetting.

Evaluation of the displacement theory of forgetting

Strengths:

➢ The theory fits with the multi-store model of memory and the working memory model. Both
these models suggest a loop where information is rehearsed before going to the long-term
store. If there is a loop with limited capacity, it makes sense to say that new material
displaces material already in the loop. This theory of forgetting supports two models of
memory that are themselves supported by a great deal of evidence. This, in turn, is support
for this theory of forgetting

➢ Tested by experiments that are well controlled and therefore yield information about the
cause and effect.

➢ The experiments are replicable and can be tested for reliability. Therefore, displacement is
tested scientifically.

Weaknesses:

➢ The theory is difficult to operationalise. What is taken to be displacement could be
interference. The information in the rehearsal loop could be written over, which is
displacement. However, it could be that new information interferes with the information
being rehearsed. This would be interference, rather than displacement.

➢ It is tested using artificial tasks, such as lists of letters. This means that what is being tested
may not be valid because it is not a real-life task.

Interference theory of forgetting

The theory that interference causes forgetting differs from displacement theory in that it says one
item gets in the way of another item, rather than displacing it.

There are two types of interference:

● Proactive interference is when something learned earlier interferes with current learning.
It occurs when you cannot learn a new task because of an old task that has been learnt.

● Retroactive interference is when something learned later gets in the way of something
learned previously. It occurs when you forget a previously learnt task because of the learning
of a new task.

P- proactive interference
O- old
R- retroactive interference
N- new

Evidence for the interference theory of forgetting

When testing interference, participants are given one set of pairs to learn, followed by a second set.
The first word of each pair in each set is the same. For example, one set could include table- chair
and the other set could include table-stool. Participants become confused between the two lists.
(this causes interference).

Table- chair……………..Table-stool
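The paired-associate confusion can be sketched as a toy model (the word pairs are the example above plus one made-up pair): each cue word has a single memory slot, so learning a second response to the same cue overwrites the first - a crude picture of retroactive interference.

```python
# Toy paired-associate memory: one slot per cue word (illustrative only).
memory = {}

def learn(pairs):
    """Study cue-response pairs; a later response to the same cue
    overwrites the earlier one (retroactive interference)."""
    for cue, response in pairs:
        memory[cue] = response

list_a = [("table", "chair"), ("bread", "butter")]
list_b = [("table", "stool"), ("bread", "knife")]

learn(list_a)
learn(list_b)  # learning list B disrupts recall of list A

print(memory["table"])  # "stool" - the list-B response, not "chair"
```

Real interference is competition rather than literal overwriting (given cues, the old response can often still be retrieved, as noted in the weaknesses of the theory), but the sketch captures why the same cue now tends to retrieve the wrong response.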

Jenkins and Dallenbach (1924) carried out an experiment to test the idea that interference causes
forgetting. They thought that what is learned later will interfere with what people have already
learned. Participants were given 10 nonsense syllables to learn (eg: BOH or INJ).

Some participants slept after the learning while others carried on with their everyday routines.
Those who stayed awake did not remember as much as those who slept- there was more forgetting.

The researchers claimed that this was because sleeping had not caused interference, whereas the
day’s activities had and that interference had caused forgetting.

Evaluation of the interference theory of forgetting:

Strengths:

➢ There is much evidence that supports the theory. Different lists of words are used with
participants and what they learn first does interfere with what they learn second. Jenkins
and Dallenbach (1924) give evidence for this idea

➢ The evidence comes from experiments, which are controlled and so yield cause-and-effect
conclusions. The scientific approach to study is rated highly because firm conclusions can be
drawn. It also means that studies are replicable and can be tested for reliability.

Weaknesses:

➢ The theory describes a feature of forgetting in memory experiments, where similar tasks
make remembering difficult, and it is thought that this is because of interference. However,
it does not explain how this happens. The problem is separating interference from
displacement: it is hard to show that interference from a new set of information (rather
than displacement by that new information) caused the loss of recall from the short-term
store, and not that the memory trace had simply decayed.

➢ The studies tend to use word lists and artificial tasks. In real life, it is not usual to do only one
thing at once and many tasks are carried out quickly. It is not likely, therefore, that
interference accounts for all forgetting, so the conclusions may not be valid. Solso (1995)
says that the tasks carried out to test interference theory (eg: learning nonsense syllables)
would not occur in real life. One of the contemporary studies that you could choose to study
in cognitive psychology is Steyvers and Hemmer (2012), which gives a good account of how
studies need to be ‘naturalistic’ for findings to be useful.

➢ The effect of interference disappears when participants are given cues. Therefore, it seems
that the memory trace was present but could not be retrieved. This goes against the idea of
interference as an explanation for forgetting.

Some ways to improve your memory:

• Knowledge of results: feedback allowing you to check your progress.

• Recitation: summarising aloud while you are learning.

• Rehearsal: reviewing information mentally.

• Selection: selecting most important concepts to memorise.

• Organisation: organising difficult items into chunks; a type of reordering.

Reconstructive memory model

Reconstructive memory is the theory that memories are not exact copies of what is encoded and
stored, but they’re affected by prior experience and prior knowledge in the form of schemas.

Schemas are cognitive plans/ scripts that are built up using experiences about everyday life and that
affect the processing of information.

Bransford and Johnson (1972) used the following passage to demonstrate how we use schema and
scripts when processing information.
The procedure is actually quite simple. First you arrange things into different groups… Of course,
one pile may be sufficient depending on how much there is to do. If you have to go somewhere else
due to lack of facilities that is the next step, otherwise you are pretty well set. It is important not to
overdo any particular endeavour. That is, it is better to do too few things at once than too many. In
the short run this may not seem important, but complications from doing too many can easily arise.
A mistake can be expensive as well… At first the whole procedure will seem complicated. Soon,
however, it will become just another facet of life. It is difficult to foresee any end to the necessity for
this task in the immediate future, but then one never can tell. After the procedure is completed one
arranges the materials into different groups again. Then they can be put into their appropriate
places. Eventually they will be used once more and the whole cycle will have to be repeated.
However, that is part of life.
(Bransford and Johnson, 1972)

Bartlett’s idea that memory is reconstructive

● Individuals: Elizabeth Loftus is a well-known researcher in the field of memory and uses
ideas such as schemas to explain why eyewitness memory might be unreliable.
● Bartlett (1932) maintained that memory is not like a tape recorder. This point has been
supported by other researchers such as Steyvers and Hemmer (2012) and Loftus and
Mackworth (1978), who suggest that having prior knowledge and schemas about scenes can
free up cognitive processing capacity, which can then be allocated to what is inconsistent.
● The idea is that a memory is not perfectly formed, perfectly encoded and then perfectly
retrieved. The multi-store model suggests that memories are retrieved after more than 30
seconds only if they are in the long-term store, and that only happens if the material is
attended to and rehearsed. Tulving shows how episodic memories are autobiographical and
are stories about ourselves, and it is unlikely that such memories will be perfectly coded,
stored and retrieved.

Explanation:

Schemata are ideas and scripts about the world - for example, you might have an “attending a
lesson” script or a “going to the cinema” script. These scripts or schemata give you expectations and
rules about what to do.

Schemata is the plural of schema.

Study:

Baddeley (1966) showed that recall is affected differently when material is acoustically similar
(similar in sound) or semantically similar (similar in meaning). From this you can see that it is
unlikely that a memory that is retrieved is exactly what was originally perceived. Bartlett’s view
starts from this idea.

Bartlett thought that the past and current experiences of individuals would affect their memory for
events. There would be input, which would be the perception of an event, and then processing,
which would include both the perception and its interpretation. Interpretation involves previous
experiences and schemas.

Memory of an event involves information from specific traces encoded at the time of the event and
ideas that a person has from knowledge, expectations, beliefs and attitudes. Remembering involves
retrieving knowledge that has been altered to fit with knowledge that the person already has.

Evidence for Memory Being Reconstructive

Bartlett ● Bartlett illustrated the idea that memory is reconstructive through the game
of Chinese whispers.

● The story changes along the way, often in a way that makes sense to the
person telling it.

● The final story can end up being completely different from the original.

● In his book ‘Remembering’ (1932) he used the concept of Chinese whispers,
based on the ‘War of the Ghosts’ study, and described what he called the
‘method of repeated reproduction’.

Aim: To look at what sorts of changes people make to material as they recall it after time
has passed

Procedure ● The Native American folk tale called The War Of The Ghosts was given to
participants

● This story was completely new to the participants so it didn't fit in with their
current schemata.

● All of the participants read through the story twice and were then asked to
recall the story after 15 minutes.

● Then the recall intervals increased and varied: after the first recall at 15
minutes, the participant was asked to do a recall whenever there was a
chance, some time later, and after a year.

● The results showed changes that began in the first recollection and
became more pronounced over time; the story became more concise and
coherent.

● Bartlett found that there was also elaboration on the story.

● One participant was asked to recall the story two and a half years later, and
the story had become much shorter than the original, with only a ‘bare
outline’ remaining.

● The story kept becoming shorter and shorter after about six recall sessions
and was reduced from 330 words to 180.

● Bartlett found out that the participants would rationalize the parts that
made absolutely no sense to them, such as saying ‘there was something
about a canoe but I cannot fit it in’.

● There was also conformity with ideas from the participants’ own society,
such as ‘people die at sunset’, when in the story the person died when ‘the
sun rose’.

● This means that they reconstructed their memories of their story.

Loftus and Palmer (1974) - showed clips of two cars crashing.

● 2 Questions - “How fast do you think the cars were going when they
smashed into each other?” and “How fast do you think the cars were going
when they hit each other?”

● “Smashed” made participants predict a much higher speed than “hit”

● A week later - “was there any broken glass?” - no actual glass broke

● “Smashed” - 32%, “hit” - 14%

Features of Reconstructive Memory


● From the errors that came about in his War of the Ghosts study, Bartlett used the
successive recalls to identify the features of reconstructive memory.

● Some parts were changed, some were completely missed out and other things were just
added into the story.

● RATIONALIZATION : When participants changed things to make sense to them and fit their
own schemas; this may involve purposefully missing things out of the story.

● CONFABULATION : This was when things were added into the story that made sense to the
person. This draws on previous experiences and episodes in an individual’s life.

Summary of Findings from Bartlett's War of the Ghosts Study

In summary, the main findings of Bartlett's method are as follows :

● Reproductions of the story were inaccurate.

● There is persistence in reproductions for each individual, after the first version.

● The style and rhythm of the story were rarely reproduced.

● When the reproduction was more frequent and recalled often, the details became
stereotypes and soon after there was little to no change.

● When the reproduction was less frequent, details were missed out and events were
simplified and some things were changed to something more familiar.

● When they were asked to recall over some period of time (long term memory) there was an
elaboration in detail. Things were invented and ideas were imported into the story.

● This shows that memory is constructed and interference is present.

Wynn and Logie’s evaluation of Bartlett's claims on reconstructive memory

Wynn and Logie (1998) discuss Bartlett's War of the Ghosts study. This shows how psychology builds
over time- a study in 1998 was carried out to test a 1932 study. Wynn and Logie point out that the
story was reduced in the recalling but there was still a lot of detail given. There was rationalising as
well as importation and invention.

The recall was not an accurate reproduction of the story. Bartlett concluded that memory is not
‘photographic’ but rather ‘reconstructed’ from past experiences. Wynn and Logie
point out that the recalls for different participants were not at regular intervals or even at the same
interval for each participant- they were done on an opportunity basis. Therefore, the study lacked
controls. Wynn and Logie summarise Bartlett’s findings as showing:

- Omission of detail (particularly when it was detail the participant did not fully
understand)
- Rationalisations (to make the story more logical to the participant)
- Transformations of order (putting parts of the story in a different order)

Aim Wynn and Logie (1998) wanted to test Bartlett’s findings because they thought that
having a story that had little meaning for the participants might have affected the
findings.
Therefore, their aim was to revisit Bartlett’s findings and his method of repeated
recall, but this time using a real-life situation. Their study also looked at trace decay
and interference as explanations of errors in recall, but here their study is
considered in order to evaluate Bartlett’s reconstructive memory model.

The aim was to look at how the delays in recall/ description affected what was
described.

Procedure The real life event they chose was first-year psychology students recalling places
and events they came across in their first week at university. The participants did
not know they would be asked about what happened in their first week, which is
more like what would happen in real-life recall. The first descriptions of events and
places in the students’ first week were taken after two weeks, and then at two
months, four months and six months, with the intervals being fixed. Wynn and
Logie felt that their study had the control and structure of an experiment but
without the criticisms about validity of the task.

Participants were given sheets with instructions and response sheets, at both the
first description and the subsequent descriptions. Two-hundred participants
received the sheets for the first recall and 128 were returned. The 63 participants
who went on with the study (40 female and 23 male, average age 18 years) had
given a suitable description at the first recall. For each recall, participants were
asked, using a notice board, to attend that recall- they did not know they would be
asked back each time. Some participants recalled just once, in May. Some recalled
twice, in March and May. Others recalled three times in January, March and May.
All had the initial recall in November (the description two weeks after they started).
There was, therefore, a four-recall condition (19 participants returned the sheet), a
three-recall condition (16 participants returned the sheet) and a two-recall condition
(20 participants returned the sheet).

Results The three recall groups and the four recall groups showed a significant difference
in words recalled over time. There were some differences in recall over time with
regard to the number of words used in descriptions and some difference
depending on the number of repetitions.

There did not seem to be differences in the types of words that were used by
different recalls. There seemed to be a decrease in the proportions of objects
recalled at the six-month recall, but up to this time there did not seem much
difference in the type of detail recalled. The proportion of adjectives used also
changed at the six month interval. Wynn and Logie concluded that over a period of
six months there is little reduction in the amount of information (accurate or
inaccurate) that is there to be recalled.

One issue with Wynn and Logie’s study is that they could not easily measure
omission of detail as they did not start with the same detail. This made comparison
with Bartlett’s findings about omission of detail difficult.

Conclusion Wynn and Logie (1998) concluded that people’s memories for ‘distinctive’ events
are resistant to change over time, no matter how many times the memory is
recalled.

Evaluation of the Reconstructive Theory of Memory

Strengths:

● Supporting evidence comes from the party game ‘Chinese Whispers’, which many people
have experienced- players unintentionally change aspects of a story to make more
sense of it.

● Reliability: The theory can be tested because the variables can be measured. A
story has features that can be counted each time it is recalled, and the changes
can be recorded. So, up to a point, the theory can be tested scientifically.

● The theory can be usefully applied to key questions in society. For example, studies
showing that memories are reconstructed, and perhaps not the real version of events,
suggest that the reliance on eyewitness memory in prosecutions needs questioning.

Weaknesses:

● The “War of the Ghosts” was a story that was unfamiliar to participants. It could be argued that
they altered it simply because they didn’t understand it.

● It doesn’t explain how memory is reconstructive, but rather describes what happens.

● The evidence for Bartlett’s and Loftus’s work comes from experiments and the tasks are
artificial, as is the situation where the tasks are carried out. Therefore the results may not be
applicable to real life.

Experiments and experimental design

Designing and conducting experiments including field and laboratory experiments

Lab Experiment

A laboratory experiment is an experiment conducted under highly controlled conditions (not
necessarily in a laboratory), where accurate measurements are possible.

The researcher decides where the experiment will take place, at what time, with which participants,
in what circumstances and using a standardized procedure.

Participants are randomly allocated to each independent variable group. Examples include
Milgram’s obedience experiment and Loftus and Palmer's car crash study.

● Strength: It is easier to replicate (i.e. copy) a laboratory experiment. This is because a
standardized procedure is used.
● Strength: They allow for precise control of extraneous and independent variables. This
allows a cause and effect relationship to be established.
● Limitation: The artificiality of the setting may produce unnatural behavior that does not
reflect real life, i.e. low ecological validity. This means it would not be possible to generalize
the findings to a real life setting.
● Limitation: Demand characteristics or experimenter effects may bias the results and
become confounding variables.

Field Experiment

Field experiments are done in the everyday (i.e. real life) environment of the participants. The
experimenter still manipulates the independent variable, but in a real-life setting (so cannot really
control extraneous variables).

An example is Hofling’s hospital study on obedience.

● Strength: behavior in a field experiment is more likely to reflect real life because of its
natural setting, i.e. higher ecological validity than a lab experiment.
● Strength: There is less likelihood of demand characteristics affecting the results, as
participants may not know they are being studied. This occurs when the study is covert.
● Limitation: There is less control over extraneous variables that might bias the results. This
makes it difficult for another researcher to replicate the study in exactly the same way.

Independent and dependent variables

The independent variable is the variable the experimenter changes or controls and is assumed to
have a direct effect on the dependent variable. Two examples of common independent variables are
gender and educational level.

The dependent variable is the variable being tested and measured in an experiment, and is
'dependent' on the independent variable. An example of a dependent variable is depression
symptoms, which depends on the independent variable (type of therapy).

Experimental and null hypotheses

A hypothesis (plural hypotheses) is a precise, testable statement of what the researcher(s) predict
will be the outcome of the study.

● In research, there is a convention that the hypothesis is written in two forms, the null
hypothesis, and the alternative hypothesis (called the experimental hypothesis when the
method of investigation is an experiment).

The alternative hypothesis states that there is a relationship between the two variables being
studied (one variable has an effect on the other).
It states that the results are not due to chance and that they are significant in terms of supporting
the theory being investigated.

The null hypothesis states that there is no relationship between the two variables being studied
(one variable does not affect the other).
It states that the results are due to chance and are not significant in terms of supporting the idea
being investigated. By contrast, the experimental hypothesis is the statement that there will be a
statistically significant difference between the experimental group and the control group, and that
this difference will have been caused by the independent variable under investigation.

Directional (one-tailed) and non-directional (two-tailed) tests and hypotheses

● A one-tailed directional hypothesis predicts the nature of the effect of the independent
variable on the dependent variable.
- E.g., adults will correctly recall more words than children.

● A two-tailed non-directional hypothesis predicts that the independent variable will have an
effect on the dependent variable, but the direction of the effect is not specified.
- E.g., there will be a difference in how many numbers are correctly recalled by
children and adults.

Experimental and research designs: repeated measures, independent groups and matched
pairs, the issues with each and possible controls.

Experimental design refers to how participants are allocated to the different groups in an
experiment. Types of design include repeated measures, independent groups, and matched pairs
designs.

Probably the most common way to design an experiment in psychology is to divide the participants
into two groups, the experimental group, and the control group, and then introduce a change to the
experimental group and not the control group.

The researcher must decide how he/she will allocate their sample to the different experimental
groups. For example, if there are 10 participants, will all 10 participants take part in both groups
(e.g., repeated measures) or will the participants be split in half and take part in only one group
each?

Repeated measures:

Repeated measures design is an experimental design where the same participants take part in each
condition of the independent variable. This means that each condition of the experiment includes
the same group of participants.

Repeated measures design is also known as within groups or within-subjects design.

Pros:

➢ As the same participants are used in each condition, participant variables (i.e., individual
differences) are reduced.

➢ Fewer people are needed as they take part in all conditions (i.e. saves time).

Cons:

➢ There may be order effects. Order effects refer to the order of the conditions having an
effect on the participants’ behavior. Performance in the second condition may be better
because the participants know what to do (i.e. practice effect). Or their performance might
be worse in the second condition because they are tired (i.e., fatigue effect). This limitation
can be controlled using counterbalancing.

Control: To combat order effects the researcher counter balances the order of the conditions for
the participants. Alternating the order in which participants perform in different conditions of an
experiment.

Independent groups:

Independent measures design, also known as between-groups design, is an experimental design where
different participants are used in each condition of the independent variable. This means that each
condition of the experiment includes a different group of participants.

This should be done by random allocation, which ensures that each participant has an equal chance
of being assigned to one group or the other.

Independent measures involve using two separate groups of participants, one in each condition.

Control: After the participants have been recruited, they should be randomly assigned to their
groups. This should ensure the groups are similar, on average (reducing participant variables).

Pros:

➢ Avoids order effects (such as practice or fatigue) as people participate in one condition only.
If a person is involved in several conditions, they may become bored, tired and fed up by the
time they come to the second condition, or become wise to the requirements of the
experiment!

Cons:

➢ More people are needed than with the repeated measures design (i.e., more
time-consuming).
➢ Differences between participants in the groups may affect results, for example; variations in
age, gender or social background. These differences are known as participant variables (i.e.,
a type of extraneous variable).

Matched pairs:

A matched pairs design is an experimental design where pairs of participants are matched in terms
of key variables, such as age or socioeconomic status. One member of each pair is then placed into
the experimental group and the other member into the control group.

One member of each matched pair must be randomly assigned to the experimental group and the
other to the control group.

Control: Members of each pair should be randomly assigned to conditions. However, matching does
not eliminate participant variables entirely.

Pros:

➢ Reduces participant variables because the researcher has tried to pair up the participants so
that each condition has people with similar abilities and characteristics.

➢ Avoids order effects, and so counterbalancing is not necessary.

Cons:

➢ Very time-consuming trying to find closely matched pairs.

➢ Impossible to match people exactly, unless identical twins!

Experimental Design Summary

Experimental design refers to how participants are allocated to the different conditions (or IV levels)
in an experiment. There are three types:

1. Independent measures / between-groups: Different participants are used in each condition of the
independent variable.

2. Repeated measures / within-groups: The same participants take part in each condition of the
independent variable.

3. Matched pairs: Each condition uses different participants, but they are matched in terms of
important characteristics, e.g., gender, age, intelligence, etc.

Operationalisation of variables, extraneous variables and confounding variables

Operationalisation: This term describes when a variable is defined by the researcher and a way of
measuring that variable is developed for the research.

When we conduct experiments there are other variables that can affect our results, if we do not
control them.

Extraneous variables are all variables, which are not the independent variable, but could affect the
results of the experiment.

The researcher wants to make sure that it is the manipulation of the independent variable that has
an effect on the dependent variable. Hence, all the other variables that could cause the dependent
variable to change must be controlled. These other variables are called extraneous or confounding
variables.

The use of control groups, counterbalancing, randomisation and order effects.

The control group is composed of participants who do not receive the experimental treatment.

➢ A control group is used to establish a cause-and-effect relationship by isolating the effect of
an independent variable.

Counterbalancing is a technique used to deal with order effects when using a repeated measures
design. With counterbalancing, the participant sample is divided in half, with one half completing
the two conditions in one order and the other half completing the conditions in the reverse order.

- The sample is split into two groups. For example, group 1 does ‘A’ then ‘B’,
while group 2 does ‘B’ then ‘A’; this eliminates order effects overall. Although
order effects occur for each participant, because they occur equally in both
groups, they balance each other out in the results.

➢ The goal of counterbalancing is to ensure internal validity by controlling the potential
confounds created by sequence and order effects.
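
As an illustration, the ABBA split described above can be sketched in a few lines of Python (a hypothetical helper written for these notes, not part of any psychology toolkit): alternate participants between the two condition orders so practice and fatigue effects fall equally on conditions A and B.

```python
def counterbalance(participants):
    """Assign each participant a condition order, alternating AB / BA.

    Half the sample completes condition A first, the other half B first,
    so any order effects (practice, fatigue) balance out across the group.
    """
    orders = {}
    for i, person in enumerate(participants):
        orders[person] = ["A", "B"] if i % 2 == 0 else ["B", "A"]
    return orders

# Example: four participants split evenly between the two orders.
allocation = counterbalance(["P1", "P2", "P3", "P4"])
# P1 and P3 do A then B; P2 and P4 do B then A.
```

Because each order occurs equally often, any improvement due to practice (or decline due to fatigue) affects both conditions equally.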

Randomisation is used in the presentation of trials in an experiment to avoid any systematic errors
that might occur as a result of the order in which the trials take place.

Random allocation of participants is an extremely important process in research. In order to assess
the effect of one variable on another, all variables other than the variable to be investigated need to
be controlled. Random allocation greatly decreases systematic error, so individual differences in
responses or ability are far less likely to consistently affect results.
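
A random allocation of a recruited sample to two groups might be sketched like this (an illustrative example using only Python's standard library; the function name is invented for these notes):

```python
import random

def randomly_allocate(sample, seed=None):
    """Shuffle the sample and split it into two equal-sized groups.

    Random allocation gives every participant an equal chance of ending up
    in either condition, spreading participant variables across groups.
    """
    pool = list(sample)
    rng = random.Random(seed)  # seed only to make the example repeatable
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]

# 10 participants -> two groups of 5, membership decided by chance.
group_a, group_b = randomly_allocate(["P%d" % i for i in range(1, 11)], seed=7)
```

Every participant appears in exactly one group, so individual differences are unlikely to pile up systematically in either condition.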

Order effects refer to differences in research participants' responses that result from the order
(e.g., first, second, third) in which the experimental materials are presented to them. Order effects
can occur in a repeated measures design and refer to how the positioning of tasks influences the
outcome, e.g. a practice effect or boredom effect on the second task.

➢ Order effects make it difficult to know whether change over time reflects legitimate
respondent differences or question effects.

Situational and participant variable

Situational variables are aspects of the environment or procedure that might affect behaviour,
including order effects. These can be controlled using counterbalancing, such as giving half the
participants condition 'A' first, while the other half get condition 'B' first. This prevents
improvement due to practice, or poorer performance due to boredom.

Participant variables refer to the ways in which each participant varies from the others, and how this
could affect the results, e.g. mood, intelligence, anxiety, nerves, concentration etc.

- For example, if a participant who performed a memory test was tired, dyslexic or had
poor eyesight, this could affect their performance and the results of the experiment. The
experimental design chosen can have an effect on participant variables.

Participant variables can be controlled using random allocation to the conditions of the independent
variable.

Objectivity, reliability and validity (internal, predictive and ecological)

Objectivity is a feature of science, and if something is objective it is not affected by the personal
feelings and experiences of the researcher. The researcher should remain value-free and unbiased
when conducting their investigations.

Reliability

The term reliability in psychological research refers to the consistency of a research study or
measuring test.

● Internal reliability assesses the consistency of results across items within a test.
● External reliability refers to the extent to which a measure varies from one use to another.

Validity

The concept of validity was formulated by Kelly (1927, p. 14) who stated that a test is valid if it
measures what it claims to measure.

➢ Internal validity refers to whether the effects observed in a study are due to the
manipulation of the independent variable and not some other factor.
○ In-other-words there is a causal relationship between the independent and
dependent variable.
○ Internal validity can be improved by controlling extraneous variables, using
standardized instructions, counterbalancing, and eliminating demand characteristics
and investigator effects.

➢ External validity refers to the extent to which the results of a study can be generalized to
other settings (ecological validity), other people (population validity) and over time (historical
validity).
○ External validity can be improved by setting experiments in a more natural setting
and using random sampling to select participants.

➢ Predictive validity: This is the degree to which a test accurately predicts a criterion that will
occur in the future.

For example, a prediction may be made, on the basis of a new intelligence test, that high
scorers at age 12 will be more likely to obtain university degrees several years later. If the
prediction is borne out, then the test has predictive validity.

➢ Ecological validity refers to the ability to generalize study findings to real-world settings. High
ecological validity means you can generalize the findings of your research study to real-life
settings. Low ecological validity means you cannot generalize your findings to real-life
situations.

Experimenter effects, demand characteristics and control issues

Experimenter Effects:
When scientists conduct experiments, influences and errors occur that affect the results of the
experiments. Those influences and errors that occur because of some characteristics of the
experimenter or because of something the experimenter did are called experimenter effects.
Experimenter effects reduce the validity of the experiment, because the results do not really tell
about the hypothesis; they show that the experimenter somehow (usually unwittingly) influenced or
changed the results.
Investigator effects occur when the presence of the investigator themselves affects the outcome of
the research, e.g. during an interview the participants might feel self-conscious or might be
influenced by behavioural cues from the researcher (nodding, smiling, frowning etc.).

Demand characteristics
Presence of demand characteristics in a study suggests that there is a high risk that participants will
change their natural behaviour in line with their interpretation of the aims of a study, in turn
affecting how they respond in any tasks they are set.
Participants may, for example, try to please the researcher by doing what they have guessed is
expected of them.

Alternatively, they may deliberately try to skew the results in one way or another, such as attempting
to do the opposite of what they think is expected (i.e. the 'screw you' effect).

Control issues

(List B) Decision making and interpretation of inferential statistics

Levels of measurement

In psychology, there are different ways that variables can be measured and psychologists typically
group measurements into one of four scales: nominal, ordinal, interval or ratio.

The simplest level of measurement is nominal data (frequency count data), followed by ordinal
(scores in rank order), then interval (a continuous scale with no absolute zero) and finally, ratio (a
continuous scale with an absolute zero).

Wilcoxon signed ranks test of difference (also covering Spearman’s rank correlation
coefficient (formula) and Spearman’s rank (critical values table) and Chi-squared distribution
once Unit 2 has been covered)

The Wilcoxon test is a non-parametric statistical test of difference that allows a researcher to
determine the significance of their findings. It is used in studies that have a repeated measures or
matched pairs design, where the data collected is at least ordinal.
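
To make the calculation concrete, here is a minimal hand-rolled sketch of the Wilcoxon T statistic in Python (for illustration only- in practice a statistics package would be used): difference scores are ranked by absolute size, and T is the smaller of the two signed rank sums.

```python
def wilcoxon_t(condition_a, condition_b):
    """Return the Wilcoxon T statistic for paired (ordinal or above) scores.

    Assumes at least one pair differs. Zero differences are discarded;
    the remaining absolute differences are ranked smallest first (ties
    share the mean rank), and T is the smaller of the positive and
    negative rank sums.
    """
    diffs = [a - b for a, b in zip(condition_a, condition_b) if a != b]
    ordered = sorted(abs(d) for d in diffs)
    ranks = {}
    i = 0
    while i < len(ordered):
        j = i
        while j < len(ordered) and ordered[j] == ordered[i]:
            j += 1
        ranks[ordered[i]] = (i + 1 + j) / 2  # mean of rank positions i+1 .. j
        i = j
    positive = sum(ranks[abs(d)] for d in diffs if d > 0)
    negative = sum(ranks[abs(d)] for d in diffs if d < 0)
    return min(positive, negative)

# Worked example with five matched pairs (the one tied pair is dropped):
t = wilcoxon_t([10, 12, 9, 15, 8], [8, 12, 11, 10, 9])  # T = 3.5
```

The observed T would then be compared with the critical value for N (the number of non-zero differences); for the Wilcoxon test, T must be equal to or less than the critical value for significance.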

Probability and levels of significance

Probability and significance are very important in relation to statistical testing. Probability refers to
the likelihood of an event occurring. It can be expressed as a number (0.5) or a percentage (50%).
Statistical tests allow psychologists to work out the probability that their results could have occurred
by chance, and in general psychologists use a probability level of 0.05. This means that there is a 5%
probability that the results occurred by chance.

The level of statistical significance is often expressed as a p-value between 0 and 1. The smaller the
p-value, the stronger the evidence that you should reject the null hypothesis.

● A p-value less than or equal to 0.05 (p ≤ 0.05) is statistically significant. It indicates strong
evidence against the null hypothesis, as there is less than a 5% probability of obtaining results
this extreme if the null hypothesis were correct (i.e. by chance). Therefore, we reject the null
hypothesis and accept the alternative hypothesis.
However, this does not mean that there is a 95% probability that the research hypothesis is true.
The p-value is conditional upon the null hypothesis being true and is unrelated to the truth or
falsity of the research hypothesis.

● A p-value higher than 0.05 (p > 0.05) is not statistically significant. This means we retain
(fail to reject) the null hypothesis and reject the alternative hypothesis. You should note that
we cannot prove the null hypothesis true; we can only reject the null or fail to reject it.
A statistically significant result cannot prove that a research hypothesis is correct (as this
implies 100% certainty).
Instead, we may state our results “provide support for” or “give evidence for” our research
hypothesis (as there is still a slight probability- e.g. less than 5%- that the results occurred
by chance and the null hypothesis was correct).

Observed and critical values, and sense of checking of data

Critical values are a numerical value which researchers use to determine whether or not their
calculated value (from a statistical test) is significant. Some tests are significant when the observed
(calculated) value is equal to or greater than the critical value, and for some tests the observed value
needs to be less than or equal to the critical value.

➢ Observed value: the result of the statistical test (in this case, the result of the sign test)
➢ Critical value: the table result (in which you will compare the observed value)

One- or two-tailed regarding inferential testing

The result of a statistical test (the observed/calculated value) must be compared to a critical value in
order for the result to be calculated as significant or not. If the statistical test has an ‘r’ in the name,
the observed value must be equal to or greater than the critical value for significance to be shown. If
not, the observed value must be equal to or less than the critical value for significance to be shown.
To work out what the critical value is, the researcher must know:

● Whether a one-tailed (directional) or two-tailed (non-directional) hypothesis is being used


● The number of participants in the study (N)- for some tests ‘degrees of freedom’ (df) are
used instead
● The level of significance- which will be (unless stated otherwise) 0.05 or 5%.
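
The decision rule above can be expressed as a small helper function (a hypothetical sketch of the 'r in the name' mnemonic used in these notes, not a standard library routine):

```python
def is_significant(observed, critical, test_name):
    """Decide significance by comparing observed and critical values.

    Mnemonic: if the test has an 'r' in its name (e.g. Spearman's rho,
    chi-squared), the observed value must be >= the critical value;
    otherwise (e.g. Wilcoxon T, sign test S) it must be <= the critical value.
    """
    if "r" in test_name.lower():
        return observed >= critical
    return observed <= critical

# A Wilcoxon T of 3 against a critical value of 5 (observed <= critical):
print(is_significant(3, 5, "Wilcoxon"))        # -> True (significant)
# A Spearman's rho of 0.42 against a critical value of 0.56:
print(is_significant(0.42, 0.56, "Spearman"))  # -> False (not significant)
```

The correct critical value still has to be looked up in the relevant table using the hypothesis direction, N (or df) and the significance level.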

Type I and Type II errors: As the probability level used is never 100%, there is always a chance that
the researcher may mistakenly accept one of the hypotheses.

● Type I error: the alternative/experimental hypothesis is mistakenly accepted, so the null is
mistakenly rejected. Therefore, the researcher says that there is a significant difference
between the groups, but in reality there isn’t (the null hypothesis should have been
accepted). The chance of this in psychological research is usually 5%, due to the conventional
significance level of 0.05. Type I errors are more likely when the significance level is too
lenient- for example, 10% (0.1).

● Type II error: the null hypothesis is mistakenly accepted, so the alternative/experimental is
mistakenly rejected. Therefore, the researcher says that there is not a significant difference
between the groups, but in reality there is (the alternative hypothesis should have been
accepted). Type II errors are more likely when the significance level is too strict- for example,
1% (0.01). A 5% level is a good balance of the risk of making a Type I or Type II error.

Case studies of brain-damaged patients related to research into memory,
including the case of Henry Molaison (HM)

What is amnesia?
Memory loss (inability to learn new information or retrieve information)
Two types:
1) Retrograde: memory loss of events before brain damage
2) Anterograde: memory loss of events after brain damage

Causes of amnesia:
- Trauma
- Alcohol
- ECT (electroconvulsive therapy)

Causes:
● Developmental issues
● Concussion
● Migraines
● Epilepsy
● Electroconvulsive shock therapy
● Specific brain lesions (ie: surgical removal)- HM
● Drugs- LSD, MM, MDMA
● Infection (Clive Wearing)
● Psychological (trauma)
● Nutritional deficiency
● Lack of sleep

Wilson (1985) studied Clive Wearing

Clive Wearing- 30 second memory


● He suffers from both anterograde and retrograde amnesia that resulted from a viral
infection that attacked the brain, damaging the hippocampus and associated areas.
● Before this infection Clive was a world-class musician and he can still play the piano
brilliantly
● Can remember some aspects of his life before the infection

● MRI scans show damage to the hippocampus and some of the frontal regions
● Episodic memory and some semantic memory are lost- cannot put new information in long
term memory. However, his procedural memory is intact
● Implicit memory and emotional memory still intact.
- Implicit- unconscious
- Explicit- conscious

Clive will play the same piece of music over and over again as it is part of his long term procedural
memory.
His procedural memory is intact, so he can still play the piano. But his episodic memory has been
badly damaged, so he has no recollection of events in his life. This includes playing a piece of music
just a few minutes earlier. So he will play the same piece because he can’t remember playing it the
last time.
Aim Reporting on Clive Wearing- anterograde and retrograde amnesia

Sample Born 1938; a gifted musician.

In 1985 he contracted herpes encephalitis, which resulted in extensive brain
damage to the hippocampus, affecting long-term memory

Research method 21-year longitudinal study- qualitative and quantitative data

Materials ➢ Neuropsychological tests (such as IQ tests, verbal fluency, digit
span, testing LTM and STM)
➢ MRI scans (amount and location of damage)

Findings Auditory hallucinations such as music playing, familiar sounds repeated


Episodic and some semantic memory lost,
Lives within recent two minutes
Retains implicit memory and emotional memory, musical skills.

Results LTM severely impaired- episodic deficits


Memory abnormalities: hippocampus, amygdala, temporal pole
Severe brain abnormalities- retrograde and anterograde amnesia- some
semantic loss, severe episodic memory loss.

Conclusion Sense of self disrupted


Effect of brain damage on memory, evidence for distributed memory
system
Brain damage- information can’t be rehearsed or passed onto LTM
Supports the MSM- STM and LTM have different stores
Supports the view that the hippocampus is involved in transferring memories to LTM

Evaluation High ecological validity

Distress- 21 years
Consent but damage caused
Misunderstanding

No confidentiality
Unusual case- hard to generalise

Temporal lobe: processes auditory information


Frontal lobe: planning and decision making

Hippocampus: conversion of STM to LTM

Henry Molaison

Case Study: HM anterograde amnesia


- Head injury when he was 9
- Epileptic seizures
- No drug treatment- surgery
- 27 years old
- Removed tissue from the temporal lobe, including the hippocampus and the amygdala
- Cured his seizures, gave him amnesia (anterograde)
- Able to: carry on a conversation
- Not able to: recognise people met after the surgery; rereads the same material without realising
- Can remember if rehearsed
- MRI Scanner in 1997 that supported what was suspected.

Could not form new EXPLICIT (declarative) memories (both semantic and episodic memory).

Aim
To investigate the extent and nature of H.M.'s memory deficits and how they relate to his
brain damage. In particular, Milner (and later, Corkin) investigated the structure of memory
as revealed in H.M.'s behaviour and the function of brain structures like the hippocampus.

Sample One adult male, H.M., aged 27 at the start of the first case study. H.M. suffered from both
retrograde amnesia (loss of memories from before his brain operation) and anterograde
amnesia (loss of memories after his brain operation).

Procedure
Brenda Milner's early tests were simple recall tasks, testing H.M.'s ability to recall events
from his childhood, from his adult life before the operation and from his experiences after
the operation. She also tested his short and long term memory recall. Finally, she tested his
other cognitive faculties, like IQ, perception and general knowledge.

Milner also tested H.M. with maze tasks. H.M. attempted to trace the correct route through
the maze with his finger. Milner then tested him over and over with the same maze to see if
H.M. would remember the route, even if he didn't remember having attempted the task
before.

In the 1963 case study, Milner asked H.M. to copy a five-pointed star by drawing between
the lines of the template. However, H.M. could only see the reflection of the star and his
hand in a mirror. This made the task difficult. As with the maze task, Milner asked H.M. to
re-attempt the task many times, to see if he grew more skilled at the procedure even
though he didn't remember doing it before.

Other tests were carried out on H.M. For example, the effect of reinforcement and
punishment was investigated, to see if mild electric shocks would help HM to remember
correct answers. Later, under the direction of Suzanne Corkin, brain scanning technology
was used to improve our understanding of H.M’s condition.

Results H.M. forgot all new experiences after about 30 seconds; however, he remembered a lot of
information from before his sixteenth birthday. His personality was consistent, he had
good language skills and an above-average IQ. His perception was normal, except for his
ability to identify smell, which was very poor.

H.M. loved to describe clear memories of his childhood, over and over, though he lacked a
context for them (like how long ago they happened). The face he saw in the mirror
surprised him. Every time H.M. met Milner (and later, Corkin), he introduced himself as
if they had never met before, and told his stories again.

H.M. had a knowledge of past events (the Wall Street Crash, World War II). H.M. could not
explain where he lived, who cared for him, what he ate for his last meal, what year it was,
who the president was, or even how old he was. In 1982, he failed to recognize a picture of
himself that had been taken on his 40th birthday in 1966. However, he did acquire some
knowledge from after his operation: he knew what an astronaut was, that someone named
Kennedy was assassinated and he learned what rock music was. He learned how to play
tennis, although he could not remember being taught the skills and denied that he knew
how when asked.

Over 252 attempts, H.M. never showed any improvement in the maze task.

However, H.M. did show improvement in the star-tracing task, making fewer
mistakes on each attempt. He started with 30 errors, dropping to 20 on his
second attempt and 10 by his seventh. Moreover, he kept these skills from one
day to the next, getting better and better at it: on Day 2 he started making only
25 mistakes, immediately dropping to fewer than 10; by Day 3, he was making
fewer than 5 mistakes each time.

Attempts to test punishment with electric shocks had to be abandoned. H.M. possessed
huge tolerance for electric shocks, barely noticing shocks that normal people found quite
painful. He also seemed to have difficulty noticing feelings of tiredness and hunger.

Conclusions Milner's qualitative data shows a clear difference between short term and long term
memory. It suggests that the hippocampus plays a vital role in transforming short term
memories into long term memories, because this was something H.M. (whose
hippocampus had been removed in the operation) couldn't do.
Milner's quantitative data and the later qualitative data suggest a more complex structure
for memory.

H.M. did not improve at the maze task because, when he figured out the correct route
through the maze, he immediately forgot it. However, he got better at the star task, despite
forgetting his previous attempts. Later in life, he learned to play tennis. This suggests H.M.
remembered skills even if he forgot events.

H.M. also remembered some items of general knowledge (the moon landings, the Kennedy
assassination), even though he couldn't remember the events taking place.

Milner termed this sort of memory "unconscious memory", but Endel Tulving later termed it
procedural memory (skills) and semantic memory (general knowledge).

Schmolck et al (2002) Semantic knowledge in patient HM and other patients with bilateral
medial and lateral temporal lobe lesions

Aim To see if there was a relationship in performance on semantic memory and


temporal lobe damage.

Background Studies have shown that having medial temporal lobe (MTL) lesions causes
lasting memory impairment, which affects acquisition of new information as well
as recalling material just learned. However, information from long ago can be
unaffected such as grammar rules and definitions of words as well as knowledge
of places that were experienced some time ago.
The damage to the MTL being talked about is bilateral, which means both sides
of the brain. HM showed stable performance in intelligence tests. This shows
that MTL is required for new knowledge to be acquired but not for material that
has been in memory a long time.

Sample 6 brain-damaged patients (temporal lobe or hippocampus damage), 8 healthy controls. MRI
brain scans were used to locate the damage.

Procedure The procedure consisted of giving semantic knowledge tests to patients with
medial temporal lobe lesions, who also had some damage to the lateral
temporal cortex (some with more damage than others). These patients were said
to have MTL+ damage. They also used patients with damage to the hippocampal
formation only. The hippocampal formation is within the MTL, so this means
they looked at the patients with just hippocampal formation damage and not
other MTL damage. They also tested HM and compared his performance with
the performance of others.

13 semantic memory tests were used, e.g. giving the definition of a pictured object. Tasks
included: object is named (point to the picture), object is described (point to the picture),
shown a picture (asked to name it), given a verbal description (asked to name it), and
yes/no questions.

Results Control: more or less 100% accuracy.

Hippocampal damage = could name, point out and answer questions with good
accuracy.
Temporal lobe damage = performed less well at naming, pointing out and
answering questions about objects.
Patient HM = worse at defining objects, grammar poor; however, did well at
naming.

Conclusion The temporal lobe is linked with semantic memory: the more extensive the damage,
the greater the impairment to semantic memory. The hippocampus is not involved.

Evaluation

Strengths:

➢ There is a great deal of evidence for the conclusions of this study, with many pointing to the
lateral temporal cortex being involved in semantic knowledge rather than the medial
temporal lobe. When there is a lot of evidence from different studies using careful controls
and experimental procedures, this adds strength to the findings.

➢ The study uses a scientific method- MRI and CT scans are used to measure the damage to
the brain and the location of the damage in a reasonably exact manner, which adds strength
to the findings. They then used controlled tests to find out about semantic ability. This means
they can draw cause-and-effect conclusions about which brain area is drawn on for semantic
knowledge; because variables are controlled and careful measuring has taken place, the
conclusions are credible and accurate.

➢ Its use of a healthy control group: these are people with no brain damage so there is a
baseline measure of ‘normal’ functioning when it comes to using semantic knowledge. As the
control group achieved more or less 100% success in the tests, this strengthens the claim
that the tests measured normal functioning.

➢ Reliability- standardised procedure and objective quantitative data, although some subjectivity
is involved in interpreting brain scans. Comparison with a control group was made.

Weaknesses:

➢ Even though scanning is done carefully, with set ways of measuring parts of the brain, there
is still interpretation required when analysing the scans, and parts of the brain are hard to
separate and measure. Scanning is not as exact as it might seem because someone has to
carry out the interpretation.

➢ Low number of participants- signals problems with generalisability. Only 8 healthy controls
were used, not that many, although there is no reason to think their performance would not
represent ‘normal functioning’. There were just three patients with MTL damage and two HF
patients, and HM as an individual. It could be argued that these are rather low numbers on
which to base firm conclusions. Evidence from the other studies does add weight to the
findings; however, bear in mind that studies often use the same patients (e.g. HM has been
widely studied).

➢ Lack of validity- scanning of the brain takes a reductionist approach by looking at the brain
as having separate functions. Looking at the MTL and the nearby lateral temporal cortex
separates brain functioning into parts. Also using semantic tests is separating ‘normal’
functioning into small areas, such as naming pictures. The measures in this study can be said
not to capture ‘normal’ functioning. However, the control group could do the tests. It is just
that validity can be questioned as the ‘whole’ picture is not seen by such methods.

Sacchi et al. (2007) Changing history: doctored photographs affect memory for past
public events

Aims ● Sacchi wanted to investigate whether doctored photographs of two well-known
events could change a person’s memory of an event.

● He also wanted to find out if viewing doctored images would change the
attitudes a person has towards an event.
● He also wanted to investigate if viewing doctored images of a past event
could change future behaviours.

Sample ➢ In total, there were 187 participants


➢ 31 of them were male participants and 156 of them were female
➢ All participants were undergraduates, with 92% (170) studying Psychology
and 8% (17) studying other courses, attending either the University
of Padua or the University of Udine, both in Italy.
➢ The participants were aged from 19-39 years old.
➢ Participants DID NOT receive any form of compensation/pay

Events 1: Beijing
● An image of a well-known event in Beijing was used. It depicted a student
standing in front of tanks in Tiananmen Square.

● However, the doctored image of the event was manipulated so that a
crowd was added on both sides of where the tanks were positioned. In
the original image, there was no such crowd.

Rome event:
● A very well-known image of a peaceful protest marching in front of the
Roman Colosseum was used.

● However, for the doctored photograph, police officers and aggressive
protestors were added into the midst of the peaceful protesters.

● To confirm that the manipulation raised the perceived level of violence,
the experimenters asked 8 judges to rate the photos on a peaceful-violent
scale. The doctored versions were rated as more violent.

Procedure ● Participants were shown a combination of two photographs, one for the
Beijing event and one for the Rome event. Each was either doctored or
original.

● Two original photos (N=48)


● Two doctored photos (N=44)
● The doctored Beijing photo and the original Rome photo (N=43)
● The original Beijing photo and the doctored Rome photo (N=52)
● The participants were shown three sets of multiple choice questions :
manipulation questions, critical questions and attitude questions.

● The questions were formatted in a questionnaire that participants were
to answer in groups and inside a classroom.

Questionnaire ● On the first page, participants were shown both pictures and were
questioned ‘Can you tell what public event in the past decade is shown
in each of the following photos?’ next to the image.
● On the next page, one of the two pictures was shown, this time with a
caption naming the event and when it took place. On the same page,
participants were also given the manipulation check questions and two
short exercises.
● The manipulation check questions assessed whether the photographs were believable
and how familiar the participants were with the events. Participants indicated whether
they had already seen the photograph and how familiar they were with
the event.
● On the next page, participants were to respond based on their
memories of the event (told not to look back at the photograph). They
then were asked the critical questions for that event and the attitude
questions.
● Critical questions asked about aspects of participants’ memories that
could be biased by the doctored photographs.
● Attitude questions tested whether the doctored image could affect the
participants’ attitudes towards the events, in this case by rating the
amount of violence.
● Finally, a blank page at the end for participants to add any other
comments regarding the events.

Results

Beijing event:

● Manipulation Check Questions: Modification did not seem to play a
role, as participants recognised the photos regardless. Ratings did not differ
between the two conditions.
● Critical Questions: Participants who viewed the modified picture
estimated that more people took part in the event and that more people
were near the tanks.
● Attitude Questions: ratings on whether it was peaceful/violent and
positive/negative did not differ from the two conditions

Rome Event:

● Manipulation Check Questions: Participants were more likely to recognise the real photo
than the modified one. Familiarity was 73.5% in the original condition
and 51.6% in the doctored condition.
● Critical Questions: These focused on violence. Participants were more likely to think a lot of
violence occurred in the doctored photo. 34% of participants said that there
were injuries in the original photo, whereas in the doctored photo that
rose to 73%.
● Attitude Questions: Participants who saw the doctored photos rated the event as more
violent and more negative than those who saw the original photo.

Study 2
● Sacchi wanted to test if the exposure to a doctored photograph would affect one’s
behavioural intentions. Therefore, a second experiment was conducted.

● 112 participants took part (35 male, 73 female, 4 did not specify gender), recruited in Italy.
Their ages ranged from 50-84. 56% of the participants were retired, 20% were still
working and the remainder gave no information regarding occupation. Again,
participants did not receive any pay.

● Identical photographs from the first experiment were used as stimulus material
(researchable/visual) and the participants only viewed one of the four possible
combinations. The questions were still the same, but an extra one was added for the Rome
event, asking participants to rate how likely they would be to take part in a similar situation.

● Results showed that, when asked whether they would take part in a similar situation,
participants who saw the doctored Rome picture gave lower ratings than those who
viewed the original photograph.

Conclusions:
● This experiment shows how the modifications of the pictures affected how people
remembered past public events, their attitudes and their behavioural intentions. This was for
both young and older adults.

● The doctored photos looked very authentic and therefore that may have led participants
into engaging more in the reconstructive process of remembering. They retrieved bits of
information that were consistent (bits they knew from before) but with a misleading
suggestion.

● Application to Real life - The results show that people can easily be deceived and their
opinions can be affected by using such (misleading) material, e.g. propaganda,
fake news articles that want to encourage you to think in a certain way, and social media.

Question: If misleading/false pictures affect the way we remember things then what happens when
we are exposed to misleading material from the very first time when we learn about an event?

Evaluation

Strengths:
➢ Sacchi’s task involved using doctored photographs of two famous events, the Tiananmen
Square protest in Beijing and the peaceful protest near the Colosseum in Rome, which are real
events that happened in society, therefore increasing task validity.

➢ Realism - people face images regularly and pilot studies were carried out to ensure validity
of photographs wasn’t questionable.

➢ Internal validity- it was a lab experiment- standardised instructions (controlled environment)

Weaknesses:

Validity:

➢ 35% of participants were completely unfamiliar with the Beijing event. Then they were
meant to answer critical and attitude questions according to their memory, but how could
they do so if they did not know the event at all? What if they gave fabricated answers?

➢ It is a non-invasive procedure, which means that we do not know if the participants responded
on the basis of a modified memory or simply based their answers on the photograph they
were shown.

➢ The mean age was 22.3 years, which is young. Participants would not have witnessed these
events directly and would have only very fragmented memories of reading/hearing about them,
if any.

➢ Low population validity as students used were all Italian.

Ethics:

➢ They were deceived as to the true nature of the study, but they were informed later in the
debriefing.
