Thinking, Reasoning, and Problem-Solving

The human brain is indeed a remarkable thinking machine, capable of amazing, complex,
creative, logical thoughts. Why, then, are we telling you that you need to learn how to think? Mainly
because one major lesson from cognitive psychology is that these capabilities of the human brain are
relatively infrequently realized. Many psychologists believe that people are essentially “cognitive
misers.” It is not that we are lazy, but that we have a tendency to expend the least amount of mental
effort necessary. Although you may not realize it, it actually takes a great deal of energy to think.
Careful, deliberative reasoning and critical thinking are very difficult. Because we seem to be
successful without going to the trouble of using these skills well, it feels unnecessary to develop
them. As you shall see, however, there are many pitfalls in the cognitive processes described in this
module. When people do not devote extra effort to learning and improving reasoning, problem
solving, and critical thinking skills, they make many errors.
As is true for memory, if you develop the cognitive skills presented in this module, you will be
more successful in school. It is important that you realize, however, that these skills will help you
far beyond school, even more so than a good memory will. Although it is somewhat useful to have a
good memory, ten years from now no potential employer will care how many questions you got right
on multiple-choice exams during college. All of them will, however, recognize whether you are a
logical, analytical, critical thinker. With these thinking skills, you will be an effective, persuasive
communicator and an excellent problem solver.
This material begins by describing different kinds of thought and knowledge, especially
conceptual knowledge and critical thinking. An understanding of these differences will be valuable
as you progress through school and encounter different assignments that require you to tap into
different kinds of knowledge. The second section covers deductive and inductive reasoning, which
are processes we use to construct and evaluate strong arguments. They are essential skills to have
whenever you are trying to persuade someone (including yourself) of some point, or to respond to
someone’s efforts to persuade you. The material ends with a section about problem solving. A solid
understanding of the key processes involved in problem solving will help you to handle many daily
challenges.
1. Different kinds of thought
2. Reasoning and Judgment
3. Problem Solving
Remember and Understand
By reading this material, you should be able to remember and describe:
• Concepts and inferences
• Procedural knowledge
• Metacognition
• Characteristics of critical thinking: skepticism; identify biases, distortions, omissions, and
assumptions; reasoning and problem-solving skills
• Reasoning: deductive reasoning, deductively valid argument, inductive reasoning,
inductively strong argument, availability heuristic, representativeness heuristic
• Fixation: functional fixedness, mental set
• Algorithms, heuristics, and the role of confirmation bias
• Effective problem-solving sequence
Factual and conceptual knowledge
Under the topic of memory, the idea of declarative memory was introduced, which is composed
of facts and episodes. If you have ever played a trivia game or watched Jeopardy on TV, you realize
that the human brain is able to hold an extraordinary number of facts. Likewise, you realize that
each of us has an enormous store of episodes, essentially facts about events that happened in our
own lives. It may be difficult to keep that in mind when we are struggling to retrieve one of those
facts while taking an exam, however. Part of the problem is that many students continue to try to
memorize course material as a series of unrelated facts (picture a history student simply trying to
memorize history as a set of unrelated dates without any coherent story tying them together). Facts
in the real world are not random and unorganized, however. It is the way that they are organized
that constitutes a second key kind of knowledge, conceptual.
Concepts are nothing more than our mental representations of categories of things in the
world. For example, think about dogs. When you do this, you might remember specific facts about
dogs, such as they have fur and they bark. You may also recall dogs that you have encountered and
picture them in your mind. All of this information (and more) makes up your concept of dog. You
can have concepts of simple categories (e.g., triangle), complex categories (e.g., small dogs that
sleep all day, eat out of the garbage, and bark at leaves), kinds of people (e.g., psychology
professors), events (e.g., birthday parties), and abstract ideas (e.g., justice).
Gregory Murphy (2002) refers to concepts as the “glue that holds our mental life together”.
Very simply, summarizing the world by using concepts is one of the most important cognitive tasks
that we do. Our conceptual knowledge is our knowledge about the world. Individual concepts are
related to each other to form a rich interconnected network of knowledge. For example, think about
how the following concepts might be related to each other: dog, pet, play, Frisbee, chew toy, shoe.
Or, of more obvious use to you now, how might these concepts be related: working memory, long-term
memory, declarative memory, procedural memory, and rehearsal? Because our minds have a natural
tendency to organize information conceptually, when students try to remember course material as
isolated facts, they are working against their strengths.
One last important point about concepts is that they allow you to instantly know a great deal
of information about something. For example, if someone hands you a small red object and says,
“here is an apple,” they do not have to tell you, “it is something you can eat.” You already know
that you can eat it by virtue of the fact that the object is an apple; this is called
drawing an inference, assuming that something is true on the basis of your previous knowledge (for
example, of category membership or of how the world works) or logical reasoning.
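Computer scientists model this kind of category-based inference with semantic networks. Here is a minimal sketch in Python (ours, not part of the original module; the categories and properties are invented for illustration):

    # A toy semantic network: properties are stored with categories, and
    # an "is_a" link lets a member inherit its parent categories' properties.
    CATEGORIES = {
        "fruit": {"edible": True},
        "apple": {"is_a": "fruit", "color": "red or green"},
    }

    def infer(category, prop):
        # Walk up the is_a chain until the property is found (or we run out).
        while category is not None:
            facts = CATEGORIES.get(category, {})
            if prop in facts:
                return facts[prop]
            category = facts.get("is_a")
        return None

    print(infer("apple", "edible"))  # True, inferred from "an apple is a fruit"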
Procedural knowledge
Physical skills, such as tying your shoes, doing a cartwheel, and driving a car (or doing all three
at the same time) are certainly a kind of knowledge. They are procedural knowledge, the same idea
as procedural memory. Mental skills, such as reading, debating, and planning a psychology
experiment, are procedural knowledge. In short, procedural knowledge is knowledge of how to do
something (Cohen & Eichenbaum, 1993).
Metacognitive knowledge
Floyd used to think that he had a great memory. Now, he has a better memory. Why? Because
he finally realized that his memory was not as great as he once thought it was. Because Floyd
eventually learned that he often forgets where he put things, he finally developed the habit of
putting things in the same place. Because he finally realized that he often forgets to do things, he
finally started using the To Do list app on his phone. And so on. Floyd’s insights about the real
limitations of his memory have allowed him to remember things that he used to forget.
All of us have knowledge about the way our own minds work. You may know that you have a
good memory for people’s names and a poor memory for math formulas. Someone else might realize
that they have difficulty remembering to do things, like stopping at the store on the way home.
Others still know that they tend to overlook details. This knowledge about our own thinking is actually
quite important; it is called metacognitive knowledge, or metacognition. Like other kinds of thinking
skills, it is subject to error. For example, in unpublished research, one of the authors surveyed about
120 General Psychology students on the first day of the term. Among other questions, the students
were asked to predict their grade in the class and to report their current Grade Point Average.
Two-thirds of the students predicted that their grade in the course would be higher than their GPA.
The reality is that students tend to earn lower grades in psychology than their overall GPA. Another
example: Students routinely report that they thought they had done well on an exam, only to
discover, to their dismay, that they were wrong. Both errors reveal a breakdown in metacognition.
The Dunning-Kruger Effect
In general, most college students probably do not study enough. For example, using data from
the National Survey of Student Engagement, Fosnacht, McCormick, and Lerma (2018) reported that
first-year students at 4-year colleges in the U.S. averaged less than 14 hours per week preparing for
classes. The typical suggestion is that you should spend two hours outside of class for every hour in
class; for a full-time student taking 12 to 15 credit hours, that works out to 24 to 30 hours per
week. Clearly, students in general are nowhere
near that recommended mark. Many observers, including some faculty, believe that this shortfall is
a result of students being too busy or lazy. Now, it may be true that many students are too busy,
with work and family obligations, for example. Others are not particularly motivated in school and
therefore might correctly be labeled lazy. A third possible explanation, however, is that some
students might not think they need to spend this much time. And this is a matter of metacognition.
Consider the scenario that is mentioned above, students thinking they had done well on an
exam only to discover that they did not. Justin Kruger and David Dunning examined scenarios very
much like this in 1999. Kruger and Dunning gave research participants tests measuring humor, logic,
and grammar. Then, they asked the participants to assess their own abilities and test performance
in these areas. They found that participants in general tended to overestimate their abilities, already
a problem with metacognition. Importantly, the participants who scored the lowest overestimated
their abilities the most. Specifically, students who scored in the bottom quarter (averaging in the
12th percentile) thought they had scored in the 62nd percentile. This has become known as
the Dunning-Kruger effect. Many individual faculty members have replicated these results with their
own students on their course exams. Think about it. Some students who just took an exam and
performed poorly believe that they did well before seeing their score. It seems very likely that these
are the very same students who stopped studying the night before because they thought they were
“done.” Quite simply, it is not just that they did not know the material. They did not know that they
did not know the material. That is poor metacognition.
In order to develop good metacognitive skills, you should continually monitor your thinking
and seek frequent feedback on the accuracy of your thinking (Medina, Castleberry, & Persky, 2017).
For example, get in the habit of predicting your exam grades. As soon as possible after taking an
exam, try to find out which questions you missed and try to figure out why. If you do this soon
enough, you may be able to recall the way it felt when you originally answered the question. Did you
feel confident that you had answered a question correctly, only to find out that you were wrong?
Then you have just discovered an opportunity to improve your metacognition: be on the lookout for
that feeling of confidence in the future and respond with caution.
Key Terms
concept: a mental representation of a category of things in the world
Dunning-Kruger effect: individuals who are less competent tend to overestimate their abilities
more than individuals who are more competent do
inference: an assumption about the truth of something that is not stated. Inferences come from
our prior knowledge and experience, and from logical reasoning
metacognition: knowledge about one’s own cognitive processes; thinking about your thinking
Critical thinking
One particular kind of knowledge or thinking skill that is related to metacognition is critical
thinking (Chew, 2020). You may have noticed that critical thinking is an objective in many college
courses, and it is particularly appropriate in psychology. As the science of behavior and mental
processes, psychology is well suited to be the discipline through which you should be
introduced to this important way of thinking.
More importantly, there is a particular need to use critical thinking in psychology. We are all,
in a way, experts in human behavior and mental processes, having engaged in them literally since
birth. Thus, perhaps more than in any other class, students typically approach psychology with very
clear ideas and opinions about its subject matter. That is, students already “know” a lot about
psychology. The problem is, “it isn’t so much the things we don’t know that get us into trouble. It’s
the things we know that just isn’t so” (Ward, quoted in Gilovich, 1991). Indeed, many of students’
preconceptions about psychology are just plain wrong. Randolph Smith (2002) wrote a book about
critical thinking in psychology called Challenging Your Preconceptions, highlighting this fact. On the
other hand, many of students’ preconceptions about psychology are just plain right! But, how do you
know which of your preconceptions are right and which are wrong? And when you come across a
research finding or theory in this class that contradicts your preconceptions, what will you do? Will
you stick to your original idea, discounting the information from the class? Will you immediately
change your mind? Critical thinking can help us sort through this confusing mess.
What is critical thinking? The goal of critical thinking is simple to state (but extraordinarily
difficult to achieve): it is to be right, to draw the correct conclusions, to believe in things that are
true and to disbelieve things that are false. This material will provide you with two definitions of
critical thinking. First, a more conceptual one: Critical thinking is thinking like a scientist in your
everyday life (Schmaltz, Jansen, & Wenckowski, 2017). The second definition is more operational;
it is simply a list of skills that are essential to be a critical thinker. Critical thinking entails solid
reasoning and problem-solving skills; skepticism; and an ability to identify biases, distortions,
omissions, and assumptions. Excellent deductive and inductive reasoning, and problem-solving skills
contribute to critical thinking.
Scientists form hypotheses, or predictions about some possible future observations. Then, they
collect data, or information. They do their best to make unbiased observations using reliable
techniques that have been verified by others. Then, and only then, they draw a conclusion about
what those observations mean. Note that “conclusion” is probably not the most appropriate word,
because any scientific conclusion is only tentative. A scientist is always prepared for the possibility
that someone else might come along and produce new observations that would require that a new
conclusion be drawn.
A Critical Thinker’s Toolkit
Good critical thinkers (and scientists) rely on a variety of tools to evaluate information.
Perhaps the most recognizable tool for critical thinking is skepticism (and this term provides the
clearest link to the thinking like a scientist definition). Some people intend it as an insult when they
call someone a skeptic. But if someone calls you a skeptic, if they are using the term correctly, you
should consider it a great compliment. Simply put, skepticism is a way of thinking in which you refrain
from drawing a conclusion or changing your mind until good evidence has been provided. As a skeptic,
you are not inclined to believe something just because someone said so, because someone else
believes it, or because it sounds reasonable. You must be persuaded by high quality evidence.
If that evidence is produced, you have a responsibility as a skeptic to change your belief.
Failure to change a belief in the face of good evidence is not skepticism; skepticism has open-
mindedness at its core. M. Neil Browne and Stuart Keeley (2018) use the term weak sense critical
thinking to describe critical thinking behaviors that are used only to strengthen a prior belief. Strong
sense critical thinking, on the other hand, has as its goal reaching the best conclusion. Sometimes
that means strengthening your prior belief, but sometimes it means changing your belief to
accommodate the better evidence.
Many times, a failure to think critically or weak sense critical thinking is related to a bias, an
inclination, tendency, leaning, or prejudice. Everybody has biases, but many people are unaware of
them. Awareness of your own biases gives you the opportunity to control or counteract them.
Unfortunately, however, many people are happy to let their biases creep into their attempts to
persuade others; indeed, it is a key part of their persuasive strategy.
Here are some common sources of biases:
• Personal values and beliefs. Some people believe that human beings are basically driven to
seek power and that they are typically in competition with one another over scarce resources.
These beliefs are similar to the world-view that political scientists call “realism.” Other people
believe that human beings prefer to cooperate and that, given the chance, they will do so.
These beliefs are similar to the world-view known as “idealism.” For many people, these
deeply held beliefs can influence, or bias, their interpretations of such wide-ranging situations
as the behavior of nations and their leaders or the behavior of the driver in the car ahead of
you. For example, if your worldview is that people are typically in competition and someone
cuts you off on the highway, you may assume that the driver did it purposely to get ahead of
you. Other types of beliefs about the way the world is or the way the world should be, for
example, political beliefs, can similarly become a significant source of bias.
• Racism, sexism, ageism and other forms of prejudice and bigotry. These are, sadly, a common
source of bias in many people. They are essentially a special kind of “belief about the way the
world is.” These beliefs—for example, that women do not make effective leaders—lead people
to ignore contradictory evidence (examples of effective women leaders, or research that
disputes the belief) and to interpret ambiguous evidence in a way consistent with the belief.
• Self-interest. When particular people benefit from things turning out a certain way, they can
sometimes be very susceptible to letting that interest bias them. For example, a company
that will earn a profit if they sell their product may have a bias in the way that they give
information about their product. A union that will benefit if its members get a generous
contract might have a bias in the way it presents information about salaries at competing
organizations. Home buyers are often dismayed to discover that they purchased their dream
house from someone whose self-interest led them to lie about flooding problems in the
basement or back yard. This principle, the biasing power of self-interest, is likely what led to
the famous phrase Caveat Emptor (let the buyer beware).
Knowing that these types of biases exist will help you evaluate evidence more critically. Do not
forget, though, that people are not always keen to let you discover the sources of biases in their
arguments. For example, companies or political organizations can sometimes disguise their support
of a research study by contracting with a university professor, who comes complete with a seemingly
unbiased institutional affiliation, to conduct the study.
People’s biases, conscious or unconscious, can lead them to make omissions, distortions, and
assumptions that undermine our ability to correctly evaluate evidence. It is essential that you look
for these elements. Always ask, what is missing, what is not as it appears, and what is being assumed
here? In order to be a critical thinker, you need to learn to pay attention to the assumptions that
underlie a message. Let us briefly illustrate the role of assumptions by touching on some people’s
beliefs about the criminal justice system. Some believe that a major problem with our judicial system
is that many criminals go free because of legal technicalities. Others believe that a major problem
is that many innocent people are convicted of crimes. The simple fact is, both types of errors occur.
A person’s conclusion about which flaw in our judicial system is the greater tragedy is based on an
assumption about which of these is the more serious error (letting the guilty go free or convicting
the innocent). This type of assumption is called a value assumption (Browne and Keeley, 2018). It
reflects the differences in values that people develop, differences that may lead us to disregard valid
evidence that does not fit in with our particular values.
skepticism: a way of thinking in which you refrain from drawing a conclusion or changing your mind
until good evidence has been provided
bias: an inclination, tendency, leaning, or prejudice

Reasoning and Judgment


An essential set of procedural thinking skills is reasoning, the ability to generate and evaluate
solid conclusions from a set of statements or evidence. You should note that these conclusions (when
they are generated instead of being evaluated) are one key type of inference. There are two main
types of reasoning, deductive and inductive.
Deductive reasoning
Suppose your teacher tells you that if you get an A on the final exam in a course, you will get
an A for the whole course. Then, you get an A on the final exam. What will your final course grade
be? Most people can see instantly that you can conclude with certainty that you will get an A for the
course. This is a type of reasoning called deductive reasoning, which is defined as reasoning in which
a conclusion is guaranteed to be true as long as the statements leading to it are true. The three
statements can be listed as an argument, with two beginning statements and a conclusion:
Statement 1: If you get an A on the final exam, you will get an A for the course
Statement 2: You get an A on the final exam
Conclusion: You will get an A for the course
This particular arrangement, in which true beginning statements lead to a guaranteed true
conclusion, is known as a deductively valid argument. Although deductive reasoning is often the
subject of abstract, brain-teasing, puzzle-like word problems, it is actually an extremely important
type of everyday reasoning. It is just hard to recognize sometimes. For example, imagine that you
are looking for your car keys and you realize that they are either in the kitchen drawer or in your
book bag. After looking in the kitchen drawer, you instantly know that they must be in your book
bag. That conclusion results from a simple deductive reasoning argument. In addition, solid deductive
reasoning skills are necessary for you to succeed in the sciences, philosophy, math, computer
programming, and any endeavor involving the use of logic to persuade others to your point of view
or to evaluate others’ arguments.
Cognitive psychologists, and before them philosophers, have been quite interested in
deductive reasoning, not so much for its practical applications, but for the insights it can offer them
about the ways that human beings think. One of the early ideas to emerge from the examination of
deductive reasoning is that people learn (or develop) mental versions of rules that allow them to
solve these types of reasoning problems (Braine, 1978; Braine, Reiser, & Rumain, 1984). The best
way to see this point of view is to realize that there are different possible rules, and some of them
are very simple. For example, consider this rule of logic:
p or q
not p
therefore q
Logical rules are often presented abstractly, as letters, in order to imply that they can be used in
very many specific situations. Here is a concrete version of the same rule:
I’ll either have pizza or a hamburger for dinner tonight (p or q)
I won’t have pizza (not p)
Therefore, I’ll have a hamburger (therefore q)
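To convince yourself that this rule is deductively valid, you can check it exhaustively. The short Python sketch below is our illustration, not part of the original text; it enumerates every truth assignment for p and q and looks for a case in which both premises are true but the conclusion is false:

    from itertools import product

    # Disjunctive syllogism: (p or q), (not p), therefore q. The rule is
    # deductively valid if no assignment makes both premises true while
    # the conclusion is false.
    counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                       if (p or q) and (not p) and not q]
    print(counterexamples)  # [] -- no counterexample exists, so the rule is valid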
This kind of reasoning seems so natural, so easy, that it is quite plausible that we would use
a version of this rule in our daily lives. At least, it seems more plausible than some of the alternative
possibilities—for example, that we need to have experience with the specific situation (pizza or
hamburger, in this case) in order to solve this type of problem easily. So perhaps there is a form of
natural logic (Rips, 1990) that contains very simple versions of logical rules. When we are faced with
a reasoning problem that maps onto one of these rules, we use the rule.
But be very careful; things are not always as easy as they seem. Even these simple rules are
not so simple. For example, consider the following rule. Many people fail to realize that this rule is
just as valid as the pizza or hamburger rule above.
if p, then q
not q
therefore, not p
Concrete version:
If I eat dinner, then I will have dessert
I did not have dessert
Therefore, I did not eat dinner
The simple fact is, it can be very difficult for people to apply rules of deductive logic correctly; as
a result, they make many errors when trying to do so. For example, is this a deductively valid argument or not?
Students who like school study a lot
Students who study a lot get good grades
Jane does not like school
Therefore, Jane does not get good grades
Many people are surprised to discover that this is not a logically valid argument; the conclusion
is not guaranteed to be true from the beginning statements. Although the first statement says that
students who like school study a lot, it does NOT say that students who do not like school do not
study a lot. In other words, it may very well be possible to study a lot without liking school. Even
people who sometimes get problems like this right might not be using the rules of deductive
reasoning. Instead, they might just be making judgments based on examples they know, in this case,
remembering instances of people who get good grades despite not liking school.
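Invalidity can be demonstrated the same way, by searching for a counterexample: a situation in which all three statements are true but the conclusion is false. Here is a brief sketch of ours, with the statements encoded as material conditionals:

    from itertools import product

    # Statement 1: likes school -> studies a lot
    # Statement 2: studies a lot -> good grades
    # Statement 3: Jane does not like school
    # Conclusion:  Jane does not get good grades
    # One counterexample is enough to show the argument is invalid.
    for likes, studies, grades in product([True, False], repeat=3):
        premises = ((not likes or studies) and    # statement 1
                    (not studies or grades) and   # statement 2
                    not likes)                    # statement 3
        if premises and grades:  # all statements true, conclusion false
            print("counterexample: studies =", studies, "grades =", grades)

Each case it prints is a possible world in which every statement holds and yet Jane gets good grades, for instance, by studying a lot without liking school.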
Making deductive reasoning even more difficult is the fact that there are two important
properties that an argument may have. One, it can be valid or invalid (meaning that the conclusion
does or does not follow logically from the statements leading up to it). Two, an argument (or more
correctly, its conclusion) can be true or false. Here is an example of an argument that is logically
valid, but has a false conclusion.
Either you are eleven feet tall or the Grand Canyon was created by a spaceship crashing into the
earth.
You are not eleven feet tall
Therefore, the Grand Canyon was created by a spaceship crashing into the earth
This argument has the exact same form as the pizza or hamburger argument above, making it
deductively valid. The conclusion is so false, however, that it is absurd (of course, the reason the
conclusion is false is that the first statement is false). When people are judging arguments, they tend
not to notice the difference between deductive validity and the empirical truth of statements or
conclusions. If the elements of an argument happen to be true, people are likely to judge the
argument logically valid; if the elements are false, they will very likely judge it invalid (Markovits &
Bouffard-Bouchard, 1992; Moshman & Franks, 1986). Thus, it seems a stretch to say that people are
using these logical rules to judge the validity of arguments. Many psychologists believe that most
people actually have very limited deductive reasoning skills (Johnson-Laird, 1999). They argue that
when faced with a problem for which deductive logic is required, people resort to some simpler
technique, such as matching terms that appear in the statements and the conclusion (Evans, 1982).
This might not seem like a problem, but consider what happens when reasoners believe that the
elements are true and they happen to be wrong: they will believe that they are using a form of
reasoning that guarantees they are correct, and yet they will be wrong.
deductive reasoning: a type of reasoning in which the conclusion is guaranteed to be true any time
the statements leading up to it are true
argument: a set of statements in which the beginning statements lead to a conclusion
deductively valid argument: an argument for which true beginning statements guarantee that the
conclusion is true
Inductive reasoning and judgment
Every day, you make many judgments about the likelihood of one thing or another. Whether
you realize it or not, you are practicing inductive reasoning on a daily basis. In inductive reasoning
arguments, a conclusion is likely whenever the statements preceding it are true. The first thing to
notice about inductive reasoning is that, by definition, you can never be sure about your conclusion;
you can only estimate how likely the conclusion is. For example, inductive reasoning may lead you to
focus on Memory Encoding and Recoding when you study for an exam, but it is possible the instructor will ask
more questions about Memory Retrieval instead. Unlike deductive reasoning, the conclusions you
reach through inductive reasoning are only probable, not certain. That is why scientists consider
inductive reasoning weaker than deductive reasoning. But imagine how hard it would be for us to
function if we could not act unless we were certain about the outcome.
Inductive reasoning can be represented as logical arguments consisting of statements and a
conclusion, just as deductive reasoning can be. In an inductive argument, you are given some
statements and a conclusion (or you are given some statements and must draw a conclusion). An
argument is inductively strong if the conclusion would be very probable whenever the statements
are true. So, for example, here is an inductively strong argument:
• Statement #1: The forecaster on Channel 2 said it is going to rain today.
• Statement #2: The forecaster on Channel 5 said it is going to rain today.
• Statement #3: It is very cloudy and humid.
• Statement #4: You just heard thunder.
• Conclusion (or judgment): It is going to rain today.
Think of the statements as evidence, on the basis of which you will draw a conclusion. So, based
on the evidence presented in the four statements, it is very likely that it will rain today. Will it
definitely rain today? Certainly not. We can all think of times that the weather forecaster was wrong.
Given the possibility that we might draw an incorrect conclusion even with an inductively strong
argument, we really want to be sure that we do, in fact, make inductively strong arguments. If we
judge something probable, it had better be probable. If we judge something nearly impossible, it
had better not happen. Think of inductive reasoning, then, as making reasonably accurate judgments
of the probability of some conclusion given a set of evidence.
We base many decisions in our lives on inductive reasoning. For example:
Statement #1: Psychology is not my best subject
Statement #2: My psychology instructor has a reputation for giving difficult exams
Statement #3: My first psychology exam was much harder than I expected
Judgment: The next exam will probably be very difficult.
Decision: I will study tonight instead of watching Netflix.
Some other examples of judgments that people commonly make in a school context include
judgments of the likelihood that:
• A particular class will be interesting/useful/difficult
• You will be able to finish writing a paper by next week if you go out tonight
• Your laptop’s battery will last through the next trip to the library
• You will not miss anything important if you skip class tomorrow
• Your instructor will not notice if you skip class tomorrow
• You will be able to find a book that you will need for a paper
• There will be an essay question about Memory Encoding on the next exam
Tversky and Kahneman (1983) recognized that there are two general ways that we might make
these judgments; they termed them extensional (i.e., following the laws of probability) and intuitive
(i.e., using shortcuts or heuristics). We will use a similar distinction between Type 1 and Type 2
thinking, as described by Keith Stanovich and his colleagues (Evans and Stanovich, 2013; Stanovich
and West, 2000). Type 1 thinking is fast, automatic, effortless, and emotional. In fact, it is hardly
fair to call it reasoning at all, as judgments just seem to pop into one’s head. Type 2 thinking, on
the other hand, is slow, effortful, and logical. So obviously, it is more likely to lead to a correct
judgment, or an optimal decision. The problem is, we tend to over-rely on Type 1. Now, we are not
saying that Type 2 is the right way to go for every decision or judgment we make. It seems a bit
much, for example, to engage in a step-by-step logical reasoning procedure to decide whether we
will have chicken or fish for dinner tonight.
Many bad decisions in some very important contexts, however, can be traced back to poor judgments
of the likelihood of certain risks or outcomes that result from the use of Type 1 when a more logical
reasoning process would have been more appropriate. For example:
Statement #1: It is late at night.
Statement #2: Albert has been drinking beer for the past five hours at a party.
Statement #3: Albert is not exactly sure where he is or how far away home is.
Judgment: Albert will have no difficulty walking home.
Decision: He walks home alone.
As you can see in this example, the three statements backing up the judgment do not really
support it. In other words, this argument is not inductively strong because it is based on judgments
that ignore the laws of probability. What are the chances that someone facing these conditions will
be able to walk home alone easily? And one need not be drunk to make poor decisions based on
judgments that just pop into one’s head.
The truth is that many of our probability judgments do not come very close to what the laws
of probability say they should be. Think about it. In order for us to reason in accordance with these
laws, we would need to know the laws of probability, which would allow us to calculate the
relationship between particular pieces of evidence and the probability of some outcome (i.e., how
much likelihood should change given a piece of evidence), and we would have to do these heavy
math calculations in our heads. After all, that is what Type 2 requires. Needless to say, even if we
were motivated, we often do not even know how to apply Type 2 reasoning in many cases.
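For the record, here is what that Type 2 computation looks like in its simplest form, Bayes’ rule. The sketch below is ours, in Python, and every number in it is hypothetical, chosen only for illustration:

    # Bayes' rule: how much should one piece of evidence (a "rain"
    # forecast) change the estimated probability of an outcome (rain)?
    p_rain = 0.30            # prior probability of rain (hypothetical)
    p_say_given_rain = 0.90  # forecaster says "rain" when it rains (hypothetical)
    p_say_given_dry = 0.20   # forecaster says "rain" when it stays dry (hypothetical)

    # Total probability of hearing a "rain" forecast.
    p_say = p_say_given_rain * p_rain + p_say_given_dry * (1 - p_rain)

    # Updated (posterior) probability of rain, given the forecast.
    p_rain_given_say = p_say_given_rain * p_rain / p_say
    print(round(p_rain_given_say, 2))  # 0.66 -- the forecast roughly doubles the prior

Few of us could, or would, run that calculation in our heads before deciding whether to grab an umbrella.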
So, what do we do when we don’t have the knowledge, skills, or time required to make the
correct mathematical judgment? Do we hold off and wait until we can get better evidence? Do we
read up on probability and fire up our calculator app so we can compute the correct probability? Of
course not. We rely on Type 1 thinking. We “wing it.” That is, we come up with a likelihood estimate
using some means at our disposal. Psychologists use the term heuristic to describe the type of
“winging it” we are talking about. A heuristic is a shortcut strategy that we use to make some
judgment or solve some problem. Heuristics are easy and quick, think of them as the basic procedures
that are characteristic of Type 1. They can absolutely lead to reasonably good judgments and
decisions in some situations (like choosing between chicken and fish for dinner). They are, however,
far from foolproof. There are, in fact, quite a lot of situations in which heuristics can lead us to make
incorrect judgments, and in many cases the decisions based on those judgments can have serious
consequences.
Imagine being asked to judge the likelihood (or frequency) of certain events and risks, and being
free to come up with your own evidence (or statements) to make these judgments. This is where a
heuristic crops up. As a judgment shortcut, we tend to generate specific examples of those very
events to help us decide their likelihood or frequency. For example, if we are asked to judge how
common, frequent, or likely a particular type of cancer is, many of our statements would be examples
of specific cancer cases:
Statement #1: Andy Kaufman (comedian) had lung cancer.
Statement #2: Colin Powell (US Secretary of State) had prostate cancer.
Statement #3: Bob Marley (musician) had skin and brain cancer.
Statement #4: Sandra Day O’Connor (Supreme Court Justice) had breast cancer.
Statement #5: Fred Rogers (children’s entertainer) had stomach cancer.
Statement #6: Robin Roberts (news anchor) had breast cancer.
Statement #7: Bette Davis (actress) had breast cancer.
Judgment: Breast cancer is the most common type.
Your own experience or memory may also tell you that breast cancer is the most common
type. But it is not (although it is common). Actually, skin cancer is the most common type in the US.
We make the same types of misjudgments all the time because we do not generate the examples or
evidence according to their actual frequencies or probabilities. Instead, we have a tendency (or bias)
to search for the examples in memory; if they are easy to retrieve, we assume that they are
common. To rephrase this in the language of the heuristic, events seem more likely to the extent
that they are available to memory. This bias has been termed the availability heuristic (Tversky
and Kahneman, 1974).
The fact that we use the availability heuristic does not automatically mean that our judgment
is wrong. The reason we use heuristics in the first place is that they work fairly well in many cases.
So, the easiest examples to think of sometimes are the most common ones. Is it more likely that a
member of the U.S. Senate is a man or a woman? Most people have a much easier time generating
examples of male senators. And as it turns out, the U.S. Senate has many more men than women (74
to 26 in 2020). In this case, then, the availability heuristic would lead you to make the correct
judgment; it is far more likely that a senator would be a man.
In many other cases, however, the availability heuristic will lead us astray. This is because
events can be memorable for many reasons other than their frequency. The earlier material on Encoding
Meaning suggested that one good way to encode the meaning of some information is to form a mental image of it. Thus,
information that has been pictured mentally will be more available to memory. Indeed, an event
that is vivid and easily pictured will trick many people into supposing that type of event is more
common than it actually is. Repetition of information will also make it more memorable. So, if the
same event is described to you in a magazine, on the evening news, on a podcast that you listen to,
and in your Facebook feed, it will be very available to memory. Again, the availability heuristic will
cause you to misperceive the frequency of these types of events.
Most interestingly, information that is unusual is more memorable. Suppose we give you the
following list of words to remember: box, flower, letter, platypus, oven, boat, newspaper, purse,
drum, car. Very likely, the easiest word to remember would be platypus, the unusual one. The same
thing occurs with memories of events. An event may be available to memory because it is unusual,
yet the availability heuristic leads us to judge that the event is common. In these cases, the
availability heuristic makes us think the exact opposite of the true frequency. We end up thinking
something is common because it is unusual (and therefore memorable).
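To see how this mechanism produces systematic error, here is a small simulation, ours rather than anything from the original studies; the event names and all of the weights are invented. Remembered examples are drawn in proportion to frequency times memorability, so vivid, heavily repeated events crowd out more common but blander ones:

    import random

    # True relative frequencies of three event types in this toy world.
    true_freq = {"event A": 0.60, "event B": 0.30, "event C": 0.10}
    # Hypothetical memorability: vividness and repetition make event B
    # far easier to recall than its actual frequency warrants.
    memorability = {"event A": 0.5, "event B": 4.0, "event C": 1.0}

    # Retrieving examples from memory samples events in proportion to
    # frequency x memorability, not frequency alone.
    weights = [true_freq[e] * memorability[e] for e in true_freq]
    sample = random.choices(list(true_freq), weights, k=10_000)

    for event in true_freq:
        print(event, round(sample.count(event) / len(sample), 2))
    # Event B now looks most common (about 0.75), even though event A
    # is actually twice as frequent: the availability heuristic at work.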
Although the availability heuristic is obviously important, it is not the only judgment heuristic
we use. Amos Tversky and Daniel Kahneman examined the role of heuristics in inductive reasoning
in a long series of studies. Kahneman received a Nobel Prize in Economics for this research in 2002,
and Tversky would have certainly received one as well if he had not died of melanoma at age 59 in
1996 (Nobel Prizes are not awarded posthumously). Kahneman and Tversky demonstrated repeatedly
that people do not reason in ways that are consistent with the laws of probability. They identified
several heuristic strategies that people use instead to make judgments about likelihood. The
importance of this work for economics (and the reason that Kahneman was awarded the Nobel Prize)
is that earlier economic theories had assumed that people do make judgments rationally, that is, in
agreement with the laws of probability.
Another common heuristic that people use for making judgments is the representativeness
heuristic (Kahneman & Tversky, 1973). Suppose we describe a person to you. He is quiet and shy, has
an unassuming personality, and likes to work with numbers. Is this person more likely to be an
accountant or an attorney? If you said accountant, you were probably using the representativeness
heuristic. Our imaginary person is judged likely to be an accountant because he resembles, or is
representative of the concept of, an accountant. When research participants are asked to make
judgments such as these, the only thing that seems to matter is the representativeness of the
description. For example, if told that the person described is in a room that contains 70 attorneys
and 30 accountants, participants will still assume that he is an accountant.
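The extensional (Type 2) answer would take the base rates into account. Here is a brief sketch with the 70/30 room from the text; the “fit” probabilities are hypothetical, chosen so that the description sounds three times more like an accountant:

    # Base rates vs. representativeness for the quiet, numbers-loving person.
    p_acct, p_atty = 0.30, 0.70  # base rates: 30 accountants, 70 attorneys
    p_fit_given_acct = 0.60      # hypothetical: description fits 60% of accountants
    p_fit_given_atty = 0.20      # hypothetical: but only 20% of attorneys

    p_fit = p_fit_given_acct * p_acct + p_fit_given_atty * p_atty
    p_acct_given_fit = p_fit_given_acct * p_acct / p_fit
    print(round(p_acct_given_fit, 2))  # 0.56 -- barely better than a coin flip

Even when the description strongly resembles an accountant, the lopsided base rate pulls the probability back toward the attorneys; that pull is exactly what the representativeness heuristic ignores.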
inductive reasoning: a type of reasoning in which we make judgments about likelihood from sets
of evidence
inductively strong argument: an inductive argument in which the beginning statements lead to a
conclusion that is probably true
heuristic: a shortcut strategy that we use to make judgments and solve problems. Although they
are easy to use, they do not guarantee correct judgments and solutions
availability heuristic: judging the frequency or likelihood of some event type according to how
easily examples of the event can be called to mind (i.e., how available they are to memory)
representativeness heuristic: judging the likelihood that something is a member of a category on
the basis of how much it resembles a typical category member (i.e., how representative it is of the
category)
Type 1 thinking: fast, automatic, effortless, and emotional thinking.
Type 2 thinking: slow, effortful, and logical thinking.

Problem Solving
Mary has a problem. Her daughter, ordinarily quite eager to please, appears to delight in being
the last person to do anything. Whether getting ready for school, going to piano lessons or karate
class, or even going out with her friends, she seems unwilling or unable to get ready on time. Other
people have different kinds of problems. For example, many students work at jobs, have numerous
family commitments, and are facing a course schedule full of difficult exams, assignments, papers,
and speeches. How can they find enough time to devote to their studies and still fulfill their other
obligations? Speaking of students and their problems: Show that a ball thrown vertically upward with
initial velocity v0 takes twice as much time to return as to reach the highest point (from Spiegel,
1981).
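(For the curious, that physics problem has a compact solution under the standard assumptions of constant gravitational acceleration g and no air resistance: the velocity is v(t) = v_0 − gt, so the highest point, where v = 0, is reached at t_up = v_0/g. The height reached is h = v_0^2/(2g), and falling from rest through h takes t_down, where h = (1/2)g(t_down)^2, so t_down = sqrt(2h/g) = v_0/g = t_up. The total time to return is therefore t_up + t_down = 2v_0/g, twice the time to the highest point.)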
These are three very different situations, but we have called them all problems. What makes
them all the same, despite the differences? A psychologist might define a problem as a situation
with an initial state, a goal state, and a set of possible intermediate states. Somewhat more
meaningfully, we might consider a problem a situation in which you are here, in one state (e.g.,
daughter is always late), you want to be there, in another state (e.g., daughter is not always late),
and there is no obvious way to get from here to there. Defined this way, each of the three situations we
outlined can now be seen as an example of the same general concept, a problem. At this point, you
might begin to wonder what is not a problem, given such a general definition. It seems that nearly
every non-routine task we engage in could qualify as a problem. As long as you realize that problems
are not necessarily bad, this may be a useful way to think about it.
Can we identify a set of problem-solving skills that would apply to these very different kinds
of situations? Let us try to begin to make sense of the wide variety of ways that problems can be
solved with an important observation: the process of solving problems can be divided into two key
parts. First, people have to notice, comprehend, and represent the problem properly in their minds
(called problem representation). Second, they have to apply some kind of solution strategy to the
problem. Psychologists have studied both of these key parts of the process in detail.
When you first think about the problem-solving process, you might guess that most of our
difficulties would occur because we are failing in the second step, the application of strategies.
Although this can be a significant difficulty much of the time, the more important source of difficulty
is probably problem representation. In short, we often fail to solve a problem because we are looking
at it, or thinking about it, the wrong way.
problem: a situation in which we are in an initial state, have a desired goal state, and there are a
number of possible intermediate states (i.e., there is no obvious way to get from the initial to the
goal state)
problem representation: noticing, comprehending and forming a mental conception of a problem
Defining and Mentally Representing Problems in Order to Solve Them
Often, the main obstacle to solving a problem is that we do not clearly understand exactly what the
problem is. Recall the problem with Mary’s daughter always being late. One way to represent, or to
think about, this problem is that she is being defiant. She refuses to get ready in time. This type of
representation or definition suggests a particular type of solution. Another way to think about the
problem, however, is to consider the possibility that she is simply being sidetracked by interesting
diversions. This different conception of what the problem is (i.e., different representation) suggests
a very different solution strategy. For example, if Mary defines the problem as defiance, she may be
tempted to solve the problem using some kind of coercive tactics, that is, to assert her authority as
her mother and force her to listen. On the other hand, if Mary defines the problem as distraction,
she may try to solve it by simply removing the distracting objects.
Unfortunately, however, changing a problem’s representation is not the easiest thing in the
world to do. Often, problem solvers get stuck looking at a problem one way. This is called fixation.
Once people settle on a particular representation of a problem, they rarely pause to
reconsider, and consequently change, that representation. A parent who thinks her daughter is being
defiant is unlikely to consider the possibility that her behavior is far less purposeful.
Problem-solving fixation was examined by a group of German psychologists called Gestalt
psychologists during the 1930s and 1940s. Karl Duncker, for example, discovered an important type
of failure to take a different perspective called functional fixedness. Imagine being a participant in
one of his experiments. You are asked to figure out how to mount two candles on a door and are
given an assortment of odds and ends, including a small empty cardboard box and some thumbtacks.
Perhaps you have already figured out a solution: tack the box to the door so it forms a platform, then
put the candles on top of the box. Most people are able to arrive at this solution. Imagine a slight
variation of the procedure, however. What if, instead of being empty, the box had matches in it?
Most people given this version of the problem do not arrive at the solution given above. Why? Because
it seems to people that when the box contains matches, it already has a function; it is a matchbox.
People are unlikely to consider a new function for an object that already has a function. This is
functional fixedness.
Mental set is a type of fixation in which the problem solver gets stuck using the same solution
strategy that has been successful in the past, even though the solution may no longer be useful. It is
commonly seen when students do math problems for homework. Often, several problems in a row
require the reapplication of the same solution strategy. Then, without warning, the next problem in
the set requires a new strategy. Many students attempt to apply the formerly successful strategy on
the new problem and therefore cannot come up with a correct answer.
The thing to remember is that you cannot solve a problem unless you correctly identify what
it is to begin with (initial state) and what you want the end result to be (goal state). That may mean
looking at the problem from a different angle and representing it in a new way. The correct
representation does not guarantee a successful solution, but it certainly puts you on the right track.
A bit more optimistically, the Gestalt psychologists discovered what may be considered the
opposite of fixation, namely insight. Sometimes the solution to a problem just seems to pop into
your head. Wolfgang Kohler examined insight by posing many different problems to chimpanzees,
principally problems pertaining to their acquisition of out-of-reach food. In one version, a banana
was placed outside of a chimpanzee’s cage and a short stick inside the cage. The stick was too short
to retrieve the banana, but was long enough to retrieve a longer stick also located outside of the
cage. This second stick was long enough to retrieve the banana. After trying, and failing, to reach
the banana with the shorter stick, the chimpanzee would make a couple of random-seeming attempts,
react with some apparent frustration or anger, then suddenly rush to the longer stick, the correct
solution fully realized at this point. This sudden appearance of the solution, observed many times
with many different problems, was termed insight by Kohler.
fixation: when a problem solver gets stuck looking at a problem a particular way and cannot
change his or her representation of it (or his or her intended solution strategy)
functional fixedness: a specific type of fixation in which a problem solver cannot think of a new
use for an object that already has a function
mental set: a specific type of fixation in which a problem solver gets stuck using the same solution
strategy that has been successful in the past
insight: a sudden realization of a solution to a problem
Solving Problems by Trial and Error
Correctly identifying the problem and your goal for a solution is a good start, but recall the
psychologist’s definition of a problem: it includes a set of possible intermediate states. Viewed this
way, a problem can be solved satisfactorily only if one can find a path through some of these
intermediate states to the goal. Imagine a fairly routine problem, finding a new route to school when
your ordinary route is blocked (by road construction, for example). At each intersection, you may
turn left, turn right, or go straight. A satisfactory solution to the problem (of getting to school) is a
sequence of selections at each intersection that allows you to wind up at school.
If you had all the time in the world to get to school, you might try choosing intermediate
states randomly. At one corner you turn left, the next you go straight, then you go left again, then
right, then right, then straight. Unfortunately, trial and error will not necessarily get you where you
want to go, and even if it does, it is not the fastest way to get there.
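To make the randomness concrete, here is a minimal sketch of pure trial and error on the route problem (ours, purely illustrative); notice that nothing in it steers the walk toward school:

    import random

    # Pure trial and error: at each intersection, pick a direction at random.
    def random_route(num_intersections=6):
        return [random.choice(["left", "right", "straight"])
                for _ in range(num_intersections)]

    print(random_route())  # e.g., ['left', 'straight', 'right', 'right', 'left', 'straight']
    # Nothing guarantees the sequence ends at school; finding one that does
    # by repeated random tries would be slow.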
Trial and error is not all bad, however. B.F. Skinner, a prominent behaviorist psychologist, suggested that people often behave
randomly in order to see what effect the behavior has on the environment and what subsequent
effect this environmental change has on them. This seems particularly true for the very young person.
Picture a child filling a household’s fish tank with toilet paper, for example. To a child trying to
develop a repertoire of creative problem-solving strategies, an odd and random behavior might be
just the ticket. Eventually, the exasperated parent hopes, the child will discover that many of these
random behaviors do not successfully solve problems; in fact, in many cases they create problems.
Thus, one would expect a decrease in this random behavior as a child matures. You should realize,
however, that the opposite extreme is equally counterproductive. If children become too rigid,
never trying something unexpected and new, their problem-solving skills can become too limited.
Effective problem solving seems to call for a happy medium, a balance between using well-founded
old strategies and breaking new ground. The individual who can recognize a situation in which an
old problem-solving strategy would work best, and who can also recognize a situation in which a
new, untested strategy is necessary, is halfway to success.
Solving Problems with Algorithms and Heuristics
For many problems there is a possible strategy available that will guarantee a correct solution.
For example, think about math problems. Math lessons often consist of step-by-step procedures that
can be used to solve the problems. If you apply the strategy without error, you are guaranteed to
arrive at the correct solution to the problem. This approach is called using an algorithm, a term that
denotes the step-by-step procedure that guarantees a correct solution. Because algorithms are
sometimes available and come with a guarantee, you might think that most people use them
frequently. Unfortunately, however, they do not. As the experience of many students who have
struggled through math classes can attest, algorithms can be extremely difficult to use, even when
the problem solver knows which algorithm is supposed to work in solving the problem. In problems
outside of math class, we often do not even know if an algorithm is available. It is probably fair to
say, then, that algorithms are rarely used when people try to solve problems.
Because algorithms are so difficult to use, people often pass up the opportunity to guarantee
a correct solution in favor of a strategy that is much easier to use and yields a reasonable chance of
coming up with a correct solution. These strategies are called problem solving heuristics. A problem
solving heuristic is a shortcut strategy that people use when trying to solve problems. It usually works
pretty well, but does not guarantee a correct solution to the problem. For example, one problem
solving heuristic might be “always move toward the goal” (so when trying to get to school when your
regular route is blocked, you would always turn in the direction you think the school is). A heuristic
that people might use when doing math homework is “use the same solution strategy that you just
used for the previous problem.”
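The contrast is easiest to see side by side. The sketch below is ours, not from the original text: on a small street grid, breadth-first search stands in for an algorithm, guaranteed to find a route to school if any exists, while “always move toward the goal” is the heuristic, and the grid is arranged so that the heuristic walks into a dead end:

    from collections import deque

    # 0 = open intersection, 1 = blocked by construction. The layout is
    # invented so that the direct path toward school ends in a cul-de-sac.
    GRID = [[0, 0, 0, 0, 0],
            [0, 1, 0, 1, 0],
            [0, 1, 1, 1, 0],
            [0, 0, 0, 0, 0],
            [0, 0, 0, 0, 0]]
    HOME, SCHOOL = (0, 2), (4, 2)

    def neighbors(cell):
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
                yield (nr, nc)

    def algorithm(start, goal):
        # Breadth-first search: guaranteed to find a shortest route if one exists.
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in neighbors(path[-1]):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

    def heuristic(start, goal):
        # "Always move toward the goal": step to the unvisited neighbor
        # closest to the goal. Quick and easy, but no guarantee.
        path = [start]
        while path[-1] != goal:
            options = [n for n in neighbors(path[-1]) if n not in path]
            if not options:
                return None  # stuck in the cul-de-sac
            path.append(min(options,
                            key=lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])))
        return path

    print("algorithm:", algorithm(HOME, SCHOOL))  # finds the route around the wall
    print("heuristic:", heuristic(HOME, SCHOOL))  # None -- it marched into the dead end

On most grids the heuristic does fine, and with far less work than the exhaustive search; the trade, as always, is speed and ease for the guarantee.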
Although it is probably not worth describing a large number of specific heuristics, two
observations about heuristics are worth mentioning. First, heuristics can be very general or they can
be very specific, pertaining to a particular type of problem only. For example, “always move toward
the goal” is a general strategy that you can apply to countless problem situations. On the other hand,
“when you are lost without a functioning GPS, pick the most expensive car you can see and follow it”
is specific to the problem of being lost. Second, not all heuristics are equally useful. One heuristic
that many students know is “when in doubt, choose c for a question on a multiple-choice exam.”
This is a dreadful strategy because many instructors intentionally randomize the order of answer
choices. Another test-taking heuristic, somewhat more useful, is “look for the answer to one question
somewhere else on the exam.”
You really should pay attention to the application of heuristics to test taking. Imagine that
while reviewing your answers for a multiple-choice exam before turning it in, you come across a
question for which you originally thought the answer was c. Upon reflection, you now think that the
answer might be b. Should you change the answer to b, or should you stick with your first impression?
Most people will apply the heuristic strategy of “sticking with your first impression.” What they do not
realize, of course, is that this is a very poor strategy (Lilienfeld et al, 2009). Most of the errors on
exams come on questions that were answered wrong originally and were not changed (so they remain
wrong). There are many fewer errors where we change a correct answer to an incorrect answer. And,
of course, sometimes we change an incorrect answer to a correct answer. In fact, research has shown
that it is more common to change a wrong answer to a right answer than vice versa (Bruno, 2001).
The belief in this poor test-taking strategy (stick with your first impression) is based on
the confirmation bias (Nickerson, 1998; Wason, 1960). People have a bias, or tendency, to notice
information that confirms what they already believe. Somebody at one time told you to stick with
your first impression, so when you look at the results of an exam you have taken, you will tend to
notice the cases that are consistent with that belief. That is, you will notice the cases in which you
originally had an answer correct and changed it to the wrong answer. You tend not to notice the
other two important (and more common) cases, changing an answer from wrong to right, and leaving
a wrong answer unchanged. Because heuristics by definition do not guarantee a correct solution to
a problem, mistakes are bound to occur when we employ them. A poor choice of a specific heuristic
will lead to an even higher likelihood of making an error.
algorithm: a step-by-step procedure that guarantees a correct solution to a problem
problem solving heuristic: a shortcut strategy that we use to solve problems. Although they are
easy to use, they do not guarantee correct judgments and solutions
confirmation bias: people’s tendency to notice information that confirms what they already
believe
An Effective Problem-Solving Sequence
You may be left with a big question: If algorithms are hard to use and heuristics often don’t work,
how am I supposed to solve problems? Robert Sternberg (1996), as part of his theory of what makes
people successfully intelligent, described a problem-solving sequence that has been shown to work
rather well:
• Identify the existence of a problem. In school, problem identification is often easy; problems
that you encounter in math classes, for example, are conveniently labeled as problems for
you. Outside of school, however, realizing that you have a problem is a key difficulty that you
must get past in order to begin solving it. You must be very sensitive to the symptoms that
indicate a problem.
• Define the problem. Suppose you realize that you have been having many headaches recently.
Very likely, you would identify this as a problem. If you define the problem as “headaches,”
the solution would probably be to take aspirin or ibuprofen or some other anti-inflammatory
medication. If the headaches keep returning, however, you have not really solved the
problem—likely because you have mistaken a symptom for the problem itself. Instead, you
must find the root cause of the headaches. Stress might be the real problem. For you to
successfully solve many problems it may be necessary for you to overcome your fixations and
represent the problems differently. One specific strategy that you might find useful is to try
to define the problem from someone else’s perspective. How would your parents, spouse,
significant other, doctor, etc. define the problem? Somewhere in these different perspectives
may lurk the key definition that will allow you to find an easier and permanent solution.
• Formulate strategy. Now it is time to begin planning exactly how the problem will be solved.
Is there an algorithm or heuristic available for you to use? Remember, heuristics by their very
nature offer no guarantee, so occasionally they will fail to solve the problem. One point to
keep in mind is that you should look for long-range solutions, which are more likely to address
the root cause of a problem than short-range solutions.
• Represent and organize information. Similar to the way that the problem itself can be
defined, or represented in multiple ways, information within the problem is open to different
interpretations. Suppose you are studying for a big exam. You have chapters from a textbook
and from a supplemental reader, along with lecture notes that all need to be studied. How
should you (represent and) organize these materials? Should you separate them by type of
material (text versus reader versus lecture notes), or should you separate them by topic? To
solve problems effectively, you must learn to find the most useful representation and
organization of information.
• Allocate resources. This is perhaps the simplest principle of the problem solving sequence,
but it is extremely difficult for many people. First, you must decide whether time, money,
skills, effort, goodwill, or some other resource would help to solve the problem. Then, you
must make the hard choice of deciding which resources to use, realizing that you cannot
devote maximum resources to every problem. Very often, the solution to a problem is simply to
change how resources are allocated (for example, spending more time studying in order to
improve grades).
• Monitor and evaluate solutions. Pay attention to the solution strategy while you are applying
it. If it is not working, you may be able to select another strategy. Another fact you should
realize about problem solving is that it never does end. Solving one problem frequently brings
up new ones. Good monitoring and evaluation of your problem solutions can help you to
anticipate and get a jump on solving the inevitable new problems that will arise.
