Journal of Applied Research on Learning

Vol. 3, Article 5, 2010

Problem-centered learning vs. teaching-centered learning in science at the secondary level: an analysis of the dynamics of doubt

by Patrice Potvin, Martin Riopel, Steve Masson, and Frédéric Fournier

ABSTRACT
This exploratory research describes and compares how doubt evolves while
learning about electricity in two different learning contexts: problem-centered
and teaching-centered. It provides descriptions of performance variations
when Secondary 2 to Secondary 4 students begin appropriating basic
electricity knowledge with attitudes of either certainty or uncertainty in the two
aforementioned contexts. A simple classification of every test answer under
five categories (“legitimate certainty”, “legitimate doubt”, “under-estimation”,
“over-estimation” and “do not know”) was used to track how students’ answers
migrated from one category to another, and allowed researchers to monitor how
students’ “certainty vs. doubt” profile evolved in conjunction with variations in
their performance. Results indicate, among other things, that problem-centered
learning approaches seem to be more profitable for students who express at
least a little certainty about their answers, and that teacher-centered approaches
seem to be more appropriate for students who express doubts. Results also
suggest that problem-centered approaches better prevent the development of
unexpected conceptions.

INTRODUCTION
This research explores how initial certainty and doubt influence knowledge acquisition. This exploration
was made in two different contexts, a problem-centered learning context and a teaching-centered learning
context, enabling descriptions and comparisons. The article first describes the general role of doubt in
the process of knowledge acquisition, and more specifically scientific knowledge acquisition, and then
concentrates on the differences between two fundamentally different ways to learn in school.

Problem-centered approaches for K-12 science classes


Meta-analyses of the effects of problem-centered approaches to student learning usually report that such
pedagogical approaches are generally beneficial for students (Hmelo-Silver, 2004; Levy, Minner, &
Jablonski, 2007; Vernon & Blake, 1993). While this type of research does not always clarify what
qualifies as “problem-centered”, it seems to have a positive effect on explanation construction
skills (Wu & Hsieh, 2006), knowledge about the nature of science (NOS) (Akerson & Hanuscin,
2007), motivation and responsibility (Galand & Frenay, 2005; Wu & Hsieh, 2006), and metacognitive
and reflective skills (Hmelo-Silver, 2004). The research also suggests that problem-centered approaches
prevent the development of misconceptions (Akinoglu & Tandogan, 2007).

Although much enthusiasm about problem-centered approaches can be noted, scant research exists that
can be used to elucidate the implementation of this type of pedagogy in the context of K-12 science
classes (Hmelo-Silver, 2004). Two reasons stand out to explain this state of affairs. First, comparing the
effects of problem-centered learning to other pedagogical approaches is a very difficult task because it
often doesn’t aim to develop the same competencies in learners. Where more traditional approaches to
teaching (such as direct teaching) are not as suitable for the development of high level skills (such as
open problem solving), problem-centered approaches appear to provide a fertile terrain. If the targeted
performances are different in nature, comparison between the two treatments might appear akin to
comparing apples with oranges, or will often lead to a debate about the purpose of education rather
than its effectiveness. Moreover, and quite understandably, K-12 teachers—many of whom work every
day with overcrowded classes and are constantly managing demanding situations (Perrenoud, 1996)—
might be very reticent to adhere to research results that cannot clearly demonstrate the superiority of
methods that can be more time consuming, over those they deem to have yielded “proven” results.
Secondly, research on problem-centered approaches to teaching has not focused primarily on
K-12 science education, but more on post-secondary education. Hmelo-Silver (2004) suggests that
constraints and difficulties typical of K-12 science education (and common across classrooms at this
level) such as classroom organization, subject-domain centered curricula, and instructional periods
of 50 to 75 minutes in duration, are likely to be factors explaining the lack of research in this area.
Therefore, more research is needed to shed light on the effects of pedagogy at these levels.

Science and doubt


The present study is based on two fundamental assumptions. The first was well articulated by Vygotsky
almost a century ago.
“The state of development is never defined alone by what has matured. If the gardener decides
only to evaluate the matured or harvested fruits of the apple tree, he cannot determine the state
of his orchard. The maturing trees must also be taken into consideration. Correspondingly, the
psychologist must not limit his analysis to functions that have matured; he must consider those
that are in the process of maturation” (Vygotski, 1997, p. 203).
This suggestion has not, in our opinion, been given enough consideration by the research community,
likely because the claim contains a subtle problem: What might constitute reliable clues for identifying
less-than-fully matured functions? Simple performance evaluations, such as content questions, are not
necessarily well suited to the task. At best, they can attribute partial scores for partially good answers,
but it could be argued that these merely reflect the existence of fully-developed sub-functions. This leads
into the second fundamental assumption of this research: in science, the idea of doubt (or skepticism)
has long been—and still is—often acknowledged as an intermediate and rather rarely assessed state of
understanding, a prerequisite for the development of scientific knowledge. Many classical scientists,
philosophers, and even poets (Aristotle, Bacon, Galileo, Descartes, Alain, Voltaire, Kant, and Feynman,
to name a few) have recognized (in famous and often well-known quotations) the role of doubt as
being at the source of scientific progress. Other epistemological arguments have carried on the idea

2
Journal of Applied Research on Learning Vol. 3, Article 5, 2010

that science is all about doubt and uncertainty (Barrette, 2005; Morin, 1977). Science is therefore often
depicted not as an activity that aims to discover truth, an endeavour which is often philosophically
presented as impossible, but rather as a process that essentially aims to discard inappropriate models
and hypotheses, leaving the most resistant ones standing (Popper, 1995). In this refutational perspective,
doubt and dissatisfaction (Strike, Posner, Hewson, & Gertzog, 1982) are perceived as active agents of
scientific progress, whereas certainty becomes an impediment to such progress.

Doubt and learning


Doubt is not only a condition for scientific progress and learning at the level of scientific communities.
It is also considered important for individuals. Our review of the literature suggests these conclusions:
1. Doubt is an important part of cognition in humans (Brown, Bransford, Ferrara, & Campione,
1983; Dunlosky & Nelson, 1992; Flavel, 1979; Koriat, 1993; Metcalfe, 1994; Nelson, 1992;
Nelson & Dunlosky, 1991; Reder & Shunn, 1996), and even in animals (Smith, Shield, &
Washburn, 2003), and is therefore an important part of learning.
2. Doubt and uncertainty (Gigerenzer & Todd, 1999) influence learning performance. Some
research suggests that since it is linked to self-worth, doubt can have dramatic negative effects
on performance (Thompson & Dinnel, 2007). But doubt is also related to self-efficacy (the
belief that one is capable of performing in a certain manner to attain certain goals), which has
been shown to have a positive effect on performance (Pajares, 1997; Schunk, 1989). There is, in
short, no established consensus at the moment about the contributions of doubt and uncertainty.
3. Doubt can influence learning performance positively or negatively, depending on other intervening factors.
Among these are the level of previous knowledge, the type of pedagogical treatment, and other
factors (that are beyond the scope of this research) such as expectations, motivation, attitudes, etc.

Nevertheless, it is important to distinguish doubt about one’s abilities or competence from doubt as a
driving and supportive mechanism through which scientific understanding and scientific knowledge can
be developed and acquired. The former can be assessed through standardized testing of self-esteem or
self-worth in science. Such tests have shown, for example, that females usually get lower scores than
males (Acker & Oatley, 1993; CCA/CCL, 2007; Thompson & Dinnel, 2007). It can also be assessed by
allowing students the opportunity to freely express their inability to produce an answer to a question
without penalty of any type. The latter, in which this research is interested, can occur when students
have the opportunity to propose answers while also being given the possibility to express doubt about
these answers. This particular form of doubt was studied by Merenluoto and Lehtinen (2002) in the
field of mathematics education. These researchers concluded that performance and certainty usually go
together, except for the students who perform best. For the latter, they found that certainty was almost
as low as for the weakest performers. This led them to argue for the use of pedagogies that favour a
tolerance for uncertainty. Personal doubt has also been studied by Hasan, Bagayoko and Kelley
(1990), who used a 6-level scale (0 = totally guessed answer, 1 = almost a guess, 2 = not sure, 3 = sure,
4 = almost certain, 5 = certain) called the certainty of response index (CRI), and used it to
measure doubt about expressed answers in order to distinguish between “lack of knowledge” and
“misconceptions”; the former being given by wrong-and-doubted answers, and the latter by wrong-
and-certain answers. Other research efforts in neurology (Burton, 2008; Koriat, 1993; Reder &
Shunn, 1996), and psychology and evaluation (Gilles, 1997; Leclercq, Poumay, & Gagnayre, 2003),
have shown that certainty of knowledge is often very different from evidence of knowledge and that
certainty and doubt can have an important impact on further learning (e.g. René de Cotret and Larose
(2007), who argued for the development of systematic vigilance toward one’s own conceptions).
According to Lee, Kwon, Park, and Kim (2003) (who developed a degree-of-confidence scale), doubt is
also an important part of cognitive conflict, a step oftentimes presented as important in science learning
(Dreyfus, Jungwirth, & Eliovitch, 1990; Hewson & Hewson, 1984; Limon, 2001; Nussbaum & Novick,
1982).

Although these research efforts have mainly concentrated on the diagnostics of what psychologists and
neurologists refer to as the “feeling of knowing” (Burton, 2008; Koriat, 1993; Modirrousta & Fellows,
2008; Schnyder, Verfaellie, Alexander, & Lafleche, 2004) that comes with the production of answers,
none (to our knowledge) have experimentally explored the effect of initial doubt and initial certainty
(i.e. the state preceding a task) on science learning, even if this question has been presented many times
as philosophically very important.

Doubt in different learning contexts


A consideration of the differences that can exist between different learning contexts, such as
teaching-centered learning and problem-centered learning contexts, suggests that doubt will not be
felt the same way in both. For example, teaching-centered learning, often presented as a “pedagogy
of answers” (Astolfi, Darot, Ginsburger-Vogel, & Toussaint, 1997), often provides students with
tools they need to resolve problems that “will be encountered only later” (p. 143). In turn, problem-
centered learning, often presented as a “pedagogy of problems”, usually presents knowledge as an “enigma”
(p. 144) or a puzzle that has to be solved through the discovery or the development of a solution. It
can be reasonably hypothesized that doubt will not be experienced in the same fashion when students
are being put in contact with answers relative to when they are exposed to an enigma. Furthermore,
doubt in each context will likely not influence learning the same way. The present research aims to
describe differences between the effects of initial doubt in the two contexts discussed above in light
of the fundamentally different nature of each pedagogical treatment. The main research question
explored through this research therefore relates to how initial certainty and doubt influence knowledge
acquisition in the contexts of problem-centered learning and teaching-centered learning.

The exploratory nature of the research has led us to consider two perspectives framing this question.
The first focuses on the different categories of initial given answers, while the second focuses on
students’ certainty/doubt profiles, understood here as their general attitude of certainty toward a test, as
indicated by the sum of certainty expressions (see methods section below).

We hope that answers to the aforementioned question will cast new light on what happens when problem-
centered and teaching-centered approaches are used in a classroom context. We hope further that it will help
increase understanding of which students might profit the most from the use of one pedagogical treatment
over the other, and of how these influence the development of certainty/doubt toward knowledge.

METHODS
Instrument
The pre-test (test) and post-test (retest) questionnaires were the same. They included eight multiple-
choice questions (three to eight response choices per question) about simple electricity concepts. We
chose to study learning about electricity because it is a typical topic addressed in Quebec’s provincial
curriculum, where the research was conducted. The questions evaluated the capacity of 13- to 15-year-
old, French-speaking students to solve simple electricity comprehension problems often involving
“unexpected conceptions” (Potvin, 2007). Unexpected conceptions are conceptions that are not expected
from a pedagogical standpoint. They contradict the scientific content found in science textbooks.

The questionnaire synthesized many classical multiple choice questions available in the conceptual
change literature regarding the learning of electricity (Chambers & Andre, 1997; Duit & von Rhoneck,
1998) and aimed to identify the following unexpected conceptions (among others):
• Q1) One wire is sufficient to light a bulb (sink theory) (Duit & von Rhoneck, 1998; Shipstone,
1984; Thouin, 2008).
• Q2) One of the poles (the positive one, most of the time) is a source of electrical current and
the other “unloads” the “excess” and used current.
• Q2) Two different currents collide in the bulb to light it (“clash” theory; Shipstone, 1984).
• Q2 and Q3) It is not necessary for the current to go back to the source to light a bulb
(Shipstone, 1984).
• Q4 and Q6) A bulb consumes current. A second one, wired in series with the first one, will
therefore light less (Shipstone, 1984; Thouin, 2008).
• Q5) A node or a long wire is a substantial obstacle to the circulation of current.
• Q6 and Q7) Current distributes itself equally among parallel-connected wires.
The pre-test was administered just before treatment and the post-test was given 25-30 days after
treatment. Figure 1 offers an example of an item from the questionnaire.1 In this question, students
had to choose between: a) Bulb X; b) Bulb Y; c) One of the two; and d) Both will light up equally. For
each question, students could also select a fifth option, namely e) I have no idea. If they proposed any
other answer, they could also indicate whether they were “certain of their answer” or “still had doubts”
about it. For the purpose of the research, and even though we were aware that expressing uncertainty
might be used by the students as a safety valve, we considered doubt to be present when a student chose
to express some uncertainty while still being certain enough to propose an answer.

Figure 1: A typical question of the test/retest

Participants
There were 251 secondary school students from eight classrooms who received treatment A (Group 1:
problem-centered treatment) and 265 students from another eight classrooms who received treatment B
(Group 2: teaching-centered treatment), for a total n=516 with an average of 27 students per classroom.
These students came from 18 different schools and a total of 12 teachers were involved. Schools freely
responded to an invitation to participate in the study made publicly on institutional websites. All those
manifesting interest in the project were accepted. Each participating classroom was randomly assigned
to one of the two treatments, forming groups 1 and 2. None of the students had prior formal training
in problem-centered learning, nor in electricity—this topic being absent from the previous curriculum—
although we can assume that some of them had acquired some knowledge of the subject informally.

Treatment conditions
Treatment A was a problem-centered approach called “The Electrical Challenge”, during which students
had to solve, in lab conditions and with no help from their teacher, as many problems as they could in a
75-minute period. Twenty-four hands-on electrical problems or challenges classified from the simplest
to the most complex were presented to students. These problems were all expressed in qualitative terms
(e.g., “Challenge No. 6: Switch A must light up bulb No. 1 while switch B must light up bulb No. 2”)
and addressed electrical circulation, the effects of parallel and serial circuits, as well as the effects
of electrical resistance. Students worked in pairs and had access to all available materials (electrical
sources, wires, bulbs, resistors, etc.). Every time a team thought it had solved a problem, one of its
members would raise his/her hand and a project assistant would confirm the solution by signing a form
and allowing the team to move on to the next challenge/problem. This treatment can be considered as
conforming to micro-PBL models where “PBL happens at a small scale, in a single course (50-180
minutes), where research of information is made on site […] by experiments, teamwork, using prior
experience or logical reasoning” (free translation from Guilbert & Ouellet, 1997 p. 77).

Treatment B involved a teaching-centered approach, specifically a more typical science class
featuring teacher-driven lessons, laboratories, textbook problems and exams that aimed at developing
understanding of electricity concepts in the context of the usual secondary physics course. This
treatment could be considered as an “explicit teaching” context (Bissonnette, Richard, & Gauthier,
2005) or “direct teaching” and was held within regular class activities. Hence, teachers were asked to
teach as they normally would. Treatment B also devoted more time than treatment A to the study of the
topic (several 75 minute periods). A total of six different teachers were involved in this treatment.

After the end of every treatment, a delay of 25 to 30 days was observed before administering the
retest. Teachers were asked not to discuss the topic during this period. We were aware that because of
this delay the difference between test and retest would be considerably reduced. However the
durability of learning was considered important. We did not retest the students immediately after the
treatment to avoid contamination of our delayed post-test (retest).

Figure 2: General organisation of the research

For the purpose of the analysis, we used a categorisation of answers inspired by earlier research such as
Merenluoto and Lehtinen (2002) and Hasan et al. (1990), who used multiple-level certainty scales,
but more closely by the work of Vachey et al. (2001), who used a simpler categorisation that we deemed
sufficient for our needs. These categories appear in the following table.

TABLE 1: Categories attributed to answers


Answer is… Expressed doubt or certainty Label given to the kind of answer
…right (1 point) Certain about answer Legitimate Certainty (LC)
…right (1 point) Still has a doubt Under-Estimation (UE)
…wrong (0 point) Still has a doubt Legitimate Doubt (LD)
…wrong (0 point) Certain about answer Over-Estimation (OE)
…unknown (0 point) Does not apply Do Not Know (DNK)
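
To make the categorisation concrete, the following short Python sketch (our own illustration, not the authors' analysis code) assigns one of the five labels of Table 1 to a single answer; the function and argument names are hypothetical.

    def categorize_answer(is_correct, certainty):
        """Assign one of the five labels of Table 1 to a single test answer.

        is_correct: True/False, or None if the student chose "I have no idea".
        certainty:  "certain" or "doubt" (ignored when is_correct is None).
        """
        if is_correct is None:
            return "DNK"          # Do Not Know
        if is_correct and certainty == "certain":
            return "LC"           # Legitimate Certainty
        if is_correct and certainty == "doubt":
            return "UE"           # Under-Estimation (right but doubted)
        if not is_correct and certainty == "doubt":
            return "LD"           # Legitimate Doubt
        return "OE"               # Over-Estimation (wrong but certain)

    # Example: a wrong answer given with certainty is labelled an over-estimation.
    assert categorize_answer(False, "certain") == "OE"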

Two different analyses of the data were made. The first one focused on answers and their migration
from one category to another, from testing to retesting, while the second focused on the students,
their initial categorisation, and the migration of this category from test to retest. These two analyses
were carried out for each of the two groups of students, allowing a comparison of the two pedagogical
treatments. Calculations and commentaries about validity and reliability of the obtained data follow in
the next section.

RESULTS AND ANALYSIS


Analysis of the initial answers and their migration
Tables 2 and 3 give the number of answers that went from one category to another for each one of the
two groups. Each table gives the number of answers that belong to one category and ended in another.
For example, the second column (the first one beginning with zero) indicates, for every question and
then for the total, how many answers were first given with legitimate certainty (i.e. in the pre-test) and
were later underestimated (i.e. in the post-test).

TABLE 2: Classification of the answers for treatment A (Group 1)

TABLE 3: Classification of the answers for treatment B (Group 2)

Figures 3 and 4 give a synthesized view of the percentage variation for each category for each of the
two groups. Each grey area in the graph represents the evolution of a different category. Percentage
variation for each category is also given. Increases in performance (% of right answers, including
legitimate certainty (LC) and underestimation (UE)) and increases in certainty (% of certain answers,
including legitimate certainty (LC) and overestimation (OE)) are given at the right.

Figure 3: Percentage variation of answers for each category of treatment A (Group 1)

Figure 4: Percentage variation of answers for each category of treatment B (Group 2)

The first relevant observation we can make about the two treatments pertains to the important
difference between the increase of performance (LC+UE) versus the increase of certainty (LC+OE).
While the two treatments produced comparable increases in performance (9.8% and 12.1%), treatment
B yielded a greater increase—almost double—in certainty than treatment A (22.4% vs. 12.2%).
These results are very important because 1) they allow us to extend our comparison between the two
treatments (increases in performance are comparable) and 2) they give an insight about the importance
of doubt evolution between the two treatments.
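
As a minimal sketch of how the two aggregate indices are defined from the category percentages (the input numbers below are hypothetical; only the definitions LC + UE and LC + OE come from the text):

    def performance(pct):
        """Share of right answers: Legitimate Certainty + Under-Estimation."""
        return pct["LC"] + pct["UE"]

    def certainty(pct):
        """Share of answers expressed with certainty: LC + Over-Estimation."""
        return pct["LC"] + pct["OE"]

    # Hypothetical category percentages for a test and a retest.
    pre  = {"LC": 20.0, "UE": 10.0, "LD": 30.0, "OE": 25.0, "DNK": 15.0}
    post = {"LC": 32.0, "UE":  8.0, "LD": 22.0, "OE": 26.0, "DNK": 12.0}

    performance_increase = performance(post) - performance(pre)  # +10.0 points here
    certainty_increase   = certainty(post) - certainty(pre)      # +13.0 points here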

The increase in certainty of Group 2 is manifested in both correct and incorrect answers. Legitimate
certainty (LC) and overestimation (OE) are both about 5 points higher in Group 2 than in Group 1.
This increase is compensated mostly by a decrease in legitimate doubt (LD) (-15.1% instead of -7.8%
for Group 1) and to a smaller degree by a decrease in under-estimation (UE) (-4.7% instead of -1.6%).
Moreover, a greater number of students in Group 2 were confirmed in their belief that their answers
were correct (LC). However, a greater proportion of students from Group 2, relative to Group 1,
also believed that their answers were correct when in fact they were not (OE). These findings may
suggest that Group 2 students were less responsive to or influenced by the treatment to which they were
exposed.

Figures 5 and 6 give a more detailed view of the migration of answers from one category to another,
for both treatments A and B. In these figures, the reader’s attention should be drawn to the arrows. For
example, in figure 5, the arrow marked “3.6%” indicates that the balance of answers migrated from
UE to LC. This does not signify that there was no migration from LC to UE, but rather that the overall
migration was toward LC, in this particular proportion. The black arrows show the migrations from the
do not know category (DNK) to each one of the other categories. The two bigger arrows depict general
performance and general certainty. The absence of an arrow reveals a null balance. Moreover, all the
arrows in these figures are depicted such that their area is proportional to the balance of migrations.

Figure 5: Detailed migrations of answers for treatment A (Group 1)

Figure 6: Detailed migrations of answers for treatment B (Group 2)

In addition to the observations already made about these figures, one can notice that the directions of
the arrows are nearly identical from group 1 to group 2. The difference lies in the values associated
with the migrations. For example, we notice that do not know (DNK) answers (black arrows) migrate
mainly to legitimate doubt (LD) for group 1 and to legitimate certainty (LC) for group 2,
which is in line with earlier observations.

Moreover, the figures give an interesting overview of the general dynamics of doubt, which goes mainly
from legitimate doubt (LD) to overestimation (OE) (preferentially) or underestimation (UE), and then to
legitimate certainty (LC); or directly from legitimate doubt (LD) to legitimate certainty (LC). One can
almost perceive a flux, from legitimate doubt to legitimate certainty, that in both cases seems to “prefer”
passing through overestimation rather than under-estimation.

An interesting difference can be observed with regard to the total balance of answers labelled
overestimation (OE). These answers could, as Hasan et al. (1990) suggested for wrong-and-
sure answers, be a good indication of the presence of misconceptions (instead of simple lack of
knowledge), so it is important to consider them. In Group 1, the total balance is almost null (3.4% -
2.6% + 0.1% = 0.9%), and every migration out of this category is made toward legitimate certainty (LC).
In Group 2, the migration to overestimation is clearly positive (6.1% - 1.8% - 1.0% + 0.2% = 3.5%) and
answers sometimes migrate out to under-estimation (UE). These results are important because they
suggest that treatment B might generate more misconceptions (positive migration balance) and that it
might sometimes lead students to undervalue their progress (some migrations toward UE). This result
is in line with Akinoglu and Tandogan’s (2007) positive results about the prevention of misconceptions
by problem-based active learning.
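
The net balances quoted above (e.g. 3.4% - 2.6% + 0.1% = 0.9% for Group 1) are simply inflows minus outflows for a category. A small sketch, assuming a migration table that maps (pre-test category, post-test category) pairs to percentages of all answers (the three example entries are hypothetical, chosen only to reproduce the quoted arithmetic):

    def net_balance(migrations, category):
        """Net migration balance for one category: inflow minus outflow."""
        inflow = sum(p for (src, dst), p in migrations.items()
                     if dst == category and src != category)
        outflow = sum(p for (src, dst), p in migrations.items()
                      if src == category and dst != category)
        return inflow - outflow

    # Hypothetical entries reproducing the arithmetic quoted for Group 1.
    migrations = {("LD", "OE"): 3.4, ("OE", "LC"): 2.6, ("DNK", "OE"): 0.1}
    print(net_balance(migrations, "OE"))  # 3.4 - 2.6 + 0.1 = 0.9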

Another interesting observation we can make of this data is about the migration of answers out of
legitimate doubt (LD). While migrations from this category to certainty (LC or OE) increase as much
in Group 1 as in Group 2, migrations to under-estimation (UE) decrease. Regarding this
interesting but difficult-to-interpret result, it appears that traditional pedagogical treatments do
not really encourage students to persist with doubting. We can also observe this while analysing the
migrations in and out of under-estimation. While in-migrations are comparable for both groups, “out-
migrations” mostly lead students to legitimate certainty (LC): this finding is also revealing because it
suggests that traditional pedagogy seems to be more capable of renewing students’ confidence in things
they already know.

Analysis of students’ attitudes and their migrations


We now turn to a different analysis. Until now, it was difficult to understand the effects of the two
treatments A and B on students’ general “certainty/doubt” attitudes, because we analysed the questions
separately, without paying attention to individuals. We now consider the students’ general initial
attitudes toward performance and doubt, as well as the migration of these individuals from test to
retest. Figure 7 gives an overview of all these migrations. The value on the X (doubt) axis was obtained
by adding one point for every “doubt” answer given (LD or UE), and subtracting one point for every
expressed certain answer (LC or OE). For example, using this technique, a student who expressed
doubt as many times as he expressed certainty was assigned a score of 0, while a student expressing
doubt for every answer would be assigned a score of 8. The value on the Y axis (score) was obtained
by simply recording the scores. The tail of every vector marks the initial position for every student
(score and general certainty) and the head marks the “retest position” of the same student. The width of
the vectors is proportional to the number of superposed arrows.
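
A minimal sketch of how one student's position in figure 7 could be computed from his or her eight categorized answers (an illustration of the scoring rule described above, not the authors' code; the text does not say how "I have no idea" answers count on the doubt axis, so they are scored 0 here):

    def student_position(categories):
        """Return the (doubt, score) coordinates of one student for one test.

        categories: list of the eight labels (LC, UE, LD, OE or DNK)."""
        doubt = sum(+1 if c in ("LD", "UE")       # doubted answers
                    else -1 if c in ("LC", "OE")  # certain answers
                    else 0                        # DNK: assumed neutral
                    for c in categories)
        score = sum(1 for c in categories if c in ("LC", "UE"))  # right answers
        return doubt, score

    # A student who proposes and doubts every one of the eight answers:
    print(student_position(["LD"] * 8))  # (8, 0)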

Figure 7: Migrations of all students from test to retest

Given the considerable difficulty of making sense of information as dense as that presented in this
figure, we further refined the representation of these migrations. Figure 8 gives such a representation
where vectors are regrouped in 16 areas given by initial test positions, and where resulting migrations
(X and Y) are represented by a single vector, centered in the square. The zones containing too few
vectors (less than 5) were not compiled. The solid vectors detail migrations for Group 1
students, while the dotted ones represent migrations for Group 2.
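
A sketch of the regrouping behind figure 8, assuming the 16 areas form a 4 x 4 grid over the initial (doubt, score) plane; the exact cell boundaries are not given in the article, so the ones below are our assumption:

    from collections import defaultdict

    def cell(doubt, score):
        """Map an initial (doubt, score) position to one of 16 areas (assumed 4 x 4 grid)."""
        col = min(3, int((doubt + 8) / 4.25))  # doubt axis assumed to run from -8 to +8
        row = min(3, int(score / 2.25))        # score axis runs from 0 to 8
        return row, col

    def mean_migrations(students, min_count=5):
        """Average migration vector per cell; cells with fewer than 5 students are dropped.

        students: list of ((doubt0, score0), (doubt1, score1)) test/retest pairs."""
        cells = defaultdict(list)
        for (d0, s0), (d1, s1) in students:
            cells[cell(d0, s0)].append((d1 - d0, s1 - s0))
        return {c: (sum(dx for dx, _ in vs) / len(vs), sum(dy for _, dy in vs) / len(vs))
                for c, vs in cells.items() if len(vs) >= min_count}

    # Figure 9 is then obtained by subtracting, cell by cell, one group's resultant
    # vector from the other group's.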

Figure 8: Resultants of students’ migrations for both treatments A (Group 1) and B (Group 2)

Even with this refined representation, it is quite difficult to discern appreciable tendencies, especially
when one considers how normal “edge effects” (an effect created by the impossibility for a value to be
outside the matrix, thereby moving means away from the edges) seem to push the performance/doubt
profiles toward the center of the figure. It is however still possible to observe that, surprisingly, students
who show the best initial scores and more doubt (upper right) decrease considerably in performance
while others generally increase. It would seem that an initial underestimating attitude is not the most
conducive to effective learning in either treatment, and especially not in treatment A (problem-centered).

A final and more interesting representation (see figure 9) can be produced from the data depicted in figure 8.
To obtain it, we carried out a subtraction between vectors from group 1 and vectors from group 2
(treatment A minus treatment B). Thus the arrows give an idea of the difference between the effects
of the two treatments. The numbers in each box represent the total number of students considered in
each subtraction. For example, the vector found in the lower right box was obtained by considering
115 students’ migrations (initial and final “performance/doubt” profiles). It points toward the right
indicating that doubt increased more (or was better preserved) in treatment A than in treatment B.
It also slightly points down indicating that performance gain was less in treatment A. This new
representation (figure 9) has the advantage of cancelling all edge effects (as these were present in both
subtracted treatments), thereby revealing more about interesting differences. Grey areas or “boxes”
give the standard error of measurement. The latter was obtained independently for both the X and Y
axes by considering only the points for which the number of students was greater than 5. Boxes are of
equal size because we have indicated only the most conservative one in every square. We can see that
all migrations are larger than the standard error boxes. The shortest arrow is about 1.5 standard error
and the longest is about 4 times the standard error. Assuming normal distributions, the probabilities that
these results could be randomly generated are respectively below 14% and 1%.
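
The quoted probabilities are consistent with two-sided tail probabilities of a normal distribution for arrows of about 1.5 and 4 standard errors; a quick check of that arithmetic (our own illustration, assuming a two-sided test):

    from math import erfc, sqrt

    def two_sided_p(z):
        """Probability that a standard normal variable exceeds |z| in either direction."""
        return erfc(z / sqrt(2))

    print(round(two_sided_p(1.5), 3))  # about 0.134, i.e. below 14% (shortest arrow)
    print(round(two_sided_p(4.0), 6))  # about 0.000063, well below 1% (longest arrow)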

Figure 9: Difference in student migrations between treatments A (Group 1) and B (Group 2)

A first observation can be made based on figure 9 about the general tendency for treatment A (a
problem-centered approach) to better preserve doubt than treatment B (a teaching-centered approach)
for most students, as revealed by most vectors heading toward the right. We can see that
students who combine poor initial performance and good certainty (lower left square, representing some
15% of students) would best preserve doubt if they were being taught in a traditional teaching-centered
way (left pointing arrows). Interestingly, this conforms to interpretations made from preceding figures.

A second interesting observation is about students who were initially most uncertain about their
answers (right column). It is quite striking to see that while almost every other group of students

18
Journal of Applied Research on Learning Vol. 3, Article 5, 2010

clearly benefited from the problem-centered approach (treatment A), the group of students whose
initial uncertainty about their answers was highest did not seem to benefit at all, regardless of their
initial performance. The difference here appears to be a question of “certainty/doubt” attitude. Several
explanations might account for this finding. While it could be argued that the observations reflect
students’ lack of familiarity with problem-centered approaches (treatment A), it could also be
argued that uncertain students might not get as involved as they should in the teamwork required for
problem solving, and that, therefore, they ultimately learned less. We could also argue that uncertain
students might not be able to learn as much because they do not possess clear hypotheses on which
they could build their comprehension and on which they could base their trial and error initiatives; or
that they may not be able to find sufficient solid (scientific or not) grounds upon which to formulate
or test hypotheses; etc. It would seem though that most students (a little less than two-thirds of them)
are sufficiently certain of their proposed answers to attempt to test them in such a way that they can
learn more from a “micro-problem-centered” approach (two-person teams). This is also an interesting
result because it requires consideration of the importance of “dissatisfaction” (Strike, Posner, Hewson,
& Gertzog, 1982) with one’s own unexpected conceptions in the process of conceptual change. Our
results (figure 9) seem to strengthen this claim because it is difficult to imagine how weakly endorsed
conceptions could be associated with strong and clear dissatisfaction; conversely, it is easy to imagine
higher levels of dissatisfaction when events confront conceptions deemed or felt to be certain.

An analogy involving a river and a waterfall may be useful in understanding the general dynamics of
doubt illustrated in figure 9. A generally “beneficial” river of “problem-centered learning” flows from
left to right but, at a certain point, the “waterfall of doubt” makes learning tumble down and leads to
wasted effort. This suggests that when students feel most uncertain while engaged in problem-centered
learning (or inquiry-based learning), a lecture at the right time may be beneficial, as Hmelo-Silver
(2004) suggested. Although it is clear that we need more research on this, it seems that having the
teacher provide, from time to time, at least a few knowledge elements presented as certain might
be appropriate for further learning in such exploratory contexts.

A third observation concerns students who initially have the lowest performance (lower row).
These students seem to be the ones for whom the difference in treatments has the smallest effect on
performance. We can observe a very slight and not statistically significant advantage for the most
confident of these students, but it would not be appropriate to draw further conclusions about this.
We can also observe that, for such students taking part in the problem-centered treatment, confident
students see their confidence grow while uncertain ones tend to become even more uncertain.

DISCUSSION
This research cannot draw conclusions about the general performance increases produced by one or the
other of our treatments because the two treatments were not strictly comparable (treatment B involved
more teaching time with students). However, what this research does allow is an important
comparison of the dynamics of doubt with individual variations in performance, given the
observation that general performance increases were comparable for both treatments. It helps provide
insight into what a teacher can expect to unfold following his or her decision to evolve from a “teacher-
centered” practice toward a more “problem-centered” approach. The research thus suggests that such
teachers should be alert to students who feel less certainty about their answers: they are the ones who
are less likely to benefit from problem-centered teaching.

Our other main conclusions from this exploratory study are summarized next. It is very important
to keep in mind that these conclusions can only be applied to the context of teaching physics and
particularly electricity. Generalisation to other fields or other content would be unwarranted. Finally,
considering the delay (25-30 days, with no mention of the topic) after which the retest was administered,
one should understand that our results are not meant to reflect immediate learning, but more
appropriately mid-term learning.
• In problem-centered approaches, students might develop more doubt, or better preserve doubt
(most of the time legitimate doubt) while learning, except for those students who combine very
poor initial knowledge and very good certainty (in this study, they represented about 15% of the
population studied). If we consider that doubt is an important part of science, this general result
could be interpreted as a better conformity of problem-centered approaches to the conditions of
real science. The observation that, in problem-centered approaches, doubt seems to accompany
learning has been reported elsewhere. Galand and Frenay (2005), for example, have already
recorded that students learning in these contexts feel they know less, while knowledge
tests show that their knowledge is equal, if not superior, and that they have developed more
competencies. We don’t know if this situation (feeling doubtful despite actually knowing)
is beneficial for learning, but our results suggest that while doubt seems to help
learning in problem-based approaches, an excess of it might be harmful. Our results suggest
that, when this happens, more traditional methods of teaching might be more appropriate, at
least for a time.
• Students might develop fewer unexpected conceptions in problem-centered approaches. Since
no content communication was made in treatment A, we could suppose that teachers or teacher-
centered methods or tools used in treatment B could be responsible for the development of
“misconceptions”. We could also suppose that more conformity of our problem-centered
treatment to real science could be an explanation. Being in direct contact with the reality of
phenomena while learning could also improve conceptual change.
• About a third of the students in this study did not feel sufficiently certain of their initial
conceptions to benefit from a problem-centered approach, while two-thirds did. Further
research is required to determine if it could be possible to “boost” learner certainty before entering
a problem-centered learning context, for example through classroom explanation of concepts, or
by giving students the opportunity to have their ideas about a topic confronted with those of their peers.
• Students who present the poorest initial performance are not the ones who will benefit the most
from a problem-centered approach. For them, the choice of treatment will make no measurable
difference.

This research provided new insight into the application of problem-centered learning by using a
perspective—the dynamics of doubt—that enabled further understanding of students’ reactions toward
different methods. We are, as a result, able to suggest that problem-centered approaches might be
considered more interesting in cases where it is not possible to give differentiated pedagogical treatments
in a class, because most of the students in this study seemed to benefit from such an approach.

The research also provided a good illustration that problem-centered approaches, given equal increases
in performance, can better avoid the development of wrong-and-sure answers that we believe can be
associated with unexpected conceptions.

CONCLUSION
This research allowed us to better understand the circumstances in which doubt plays a role in learning
about electricity. Our results suggest the importance of considering general “doubt/certainty” attitudes
in choosing the best pedagogical treatment for students. We believe our results reveal strengths, limits
and a paradox in the use of these treatments.

First, students expressing certainty are the ones that benefited the most from the problem-based treatment,
even when they were wrong about their initial answers (i.e. if they overestimated them). This might be a
product of the use of “challenges” in this study, which we suggest may have given them the opportunity to
confirm or invalidate initial hypotheses directly; in other words, to confront them with reality rather than
simply with contradictory arguments. Two-thirds of these students seemed better prepared to benefit from
the problem-based approach and developed fewer overestimated answers (fewer “misconceptions”).

Second, it is possible that students who expressed more doubt underperformed in the problem-based
treatment because they simply lacked solid hypotheses they could test themselves, or that they did
not hold them strongly enough, even if these were right. For these students, it appears that teaching-
centered approaches might be more beneficial for conceptual change and that providing them with
knowledge presented as validated by the scientific community might be a more suitable strategy. They
also might benefit more from laboratory manipulations if hypotheses are simply given to them, such as
in the teaching-centered approach.

As for doubt migration, it appears that, in problem-centered approaches, most students clearly moved
toward doubt, but it seems that too strong a move toward a doubting disposition leads to a reduction
in performance (cf. the “waterfall” metaphor discussed earlier, in relation to figure 9). The paradox is
that, in our problem-centered approach, students who profited the most from the treatment were the
ones who presented the highest level of certainty. Yet a problem-centered approach reduces certainty
and therefore does not favour further learning. Meanwhile, in the teaching-centered approach, students
seemed to develop certainty to the point that they appeared to have lost some of their capacity for
reflection (see the analyses associated with figures 5 and 6) about what they learned (wrong-and-
sure answers or unexpected conceptions), because they overestimated more often. One interesting
interpretation from this research is that the best pedagogical strategy may be to avoid using only one
or the other of the approaches described in this study, but rather to alternate them, even though such a
strategy was not explicitly the object of this study.

Acknowledgements
Our thanks to Amélie Perron-Singh, Maude-Bouchard Fortier, Éric Durocher, Guillaume Cyr, Jean-
Sébastien Renaud, Jean-Mathieu Lavoie-Lebeau and the Montreal Science Center (CSM) for their
participation in this study. This research was made possible through funding from the FQRSC.

........ Authors’ note ...................................................................................................................


1 Item translation: “Knowing that X and Y bulbs are identical, identify the one that lights up the most
(circle the right answer)”.

........ References .............................................................................................................


Acker, S., & Oatley, K. (1993). Gender issues in education for science and technology : current situation and prospects for
change. Canadian journal of education, 18(3), 255-272.

Akerson, V. L., & Hanuscin, D. L. (2007). Teaching nature of science through inquiry: Results of a 3-year professional development
program. Journal of research on science teaching, 44(5), 653-680.

Akinoglu, O., & Tandogan, R. O. (2007). The effects of problem-based active learning in science education on student’s
academic achievement, attitude and concept learning. Eurasia journal of mathematics, science and technology
education, 3(1), 71-81.

Astolfi, J.-P., Darot, É., Ginsburger-Vogel, Y., & Toussaint, J. (1997). Mots-clés de la didactique des sciences. Bruxelles:
DeBoeck-Université.

Barrette, C. (2005). Mystère sans magie, science, doute et vérité: notre seul espoir pour l’avenir. Montréal: Multimondes.

Bissonnette, S., Richard, M., & Gauthier, C. (2005). Échec scolaire et réforme éducative. St-Nicolas: Presses de l’Université Laval.
114 pages.

Brown, A. L., Bransford, J. D., Ferrara, R. A., & Campione, J. C. (1983). Learning, remembering, and understanding. In J. H.
Flavel & E. L. Markman (Eds.), Handbook of child psychology: Vol. 3. Cognitive development. New York: John Wiley & Sons.

Burton, R. A. (2008). On being certain: believing you are right even when you’re not. New-York: St. Martin’s Press.

CCA/CCL. (2007). Écart entre les sexes sur le plan du choix de carrière : Pourquoi les filles n’aiment pas les sciences. Rapport du
Conseil canadien de l’apprentissage

Chambers, S. K., & Andre, T. (1997). Gender, Prior Knowledge, Interest, and Experience in Electricity and Conceptual Change
text manipulations in learning about direct current. Journal of research in science teaching (34), 107-123.

Dreyfus, A., Jungwirth, E., & Eliovitch, R. (1990). Applying the “Cognitive Conflict” Strategy for Conceptual Change - Some
Applications, Difficulties and Problems. Science Education, 74(5), 555-569.

Duit, R., & von Rhoneck, C. (1998). Learning and understanding key concepts of electricity. In A. Thibergien, L. Jossem & B.
Jorge (Eds.), Connecting Research in Physics Education with Teacher Education: The International Commission on
Physics Education.

Dunlosky, J., & Nelson, T. O. (1992). Importance of the kind of cue for judgments of learning (JOL) and the delayed-JOL effect.
Memory and cognition, 20, 374-380.

Flavel, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American
Psychologist, 34, 906-911.

Galand, B., & Frenay, M. (2005). Impact sur les connaissances des étudiants. In L’approche par problèmes et par projets dans
l’enseignement supérieur (pp. 138-161). Louvain: Presses Universitaires de Louvain.

Gigerenzer, G., & Todd, P. M. (1999). Simple heuristics that make us smart. New York: Oxford university press.

Gilles, G. L. (1997). Impact de deux entraînements à l’utilisation des degrés de certitude chez les étudiants de 1ere candidature
de la Faculté de Psychologie et des Sciences de l’Education de l’ULg, Liege.

Guilbert, L., & Ouellet, L. (1997). Étude de cas. Apprentissage par problèmes. Sainte-Foy: Presses de l’Université du Québec.

Hasan, S., Bagayoko, D., & Kelley, E. L. (1990). Misconceptions and the certainty of response index (CRI). Physics education,
34(5), 294-299.

Hewson, P. W., & Hewson, M. G. (1984). The Role of Conceptual Conflict in Conceptual Change and the Design of Science
Instruction. Instructional Science, 13(1), 1-13.

Hmelo-Silver, C. E. (2004). Problem-based learning: what and how do students learn? Educational psychology review, 16(3),
236-262.

Koriat, A. (1993). How do we know that we know? The accessibility model of the feeling of knowing. Psychological review, 100,
609-639.

Leclercq, D., Poumay, M., & Gagnayre, E. A. (2003). La connaissance partielle chez l’apprenant : pourquoi et comment la
mesurer. In E.A. Gagnayre (Ed.) L’évaluation de l’Éducation Thérapeutique du Patient. Paris: IPCEM.

Lee, G., Kwon, J., Park, S.-S., & Kim, J. W. (2003). Development of an instrument for measuring cognitive conflict in secondary-
level science classes. Journal of research in science teaching, 40(6), 585-603.

Levy, A. J., Minner, D. D., & Jablonski, E. (2007). Inquiry-based science instruction and students’ science content knowledge: A
research synthesis. Proceedings of the National Association for Research in Science Teaching (NARST) Conference.

Limon, M. (2001). On the cognitive conflict as an instructional strategy for conceptual change : a critical appraisal. Learning and
instruction, 11, 357-380.

Merenluoto, K., & Lehtinen, E. (2002). Certainty bias as an indicator of problems in conceptual change: the case of the number
line. Paper presented at the Proceedings of the 26th annual conference of the international group for the psychology of
mathematics education, University of East Anglia, Norwich, UK.

Metcalfe, J. S. A. (1994). Metacognition: Knowing about knowing. Cambridge, Mass.: Bradford books.

Modirrousta, M., & Fellows, L. K. (2008). Medial prefrontal cortex plays a critical and selective role in “feeling of knowing” meta-
memory judgements. Neuropsychologia, 46(12), 2958-2965.

Morin, E. (1977). La méthode : 1. La nature de la nature. Paris: Éditions du Seuil.

Nelson, T. O. (1992). Metacognition: Core readings. Boston, MA: Allyn and Bacon.

Nelson, T. O., & Dunlosky, J. (1991). The delayed-JOL effect: When delaying your judgments of learning can improve the
accuracy of your metacognitive monitoring. Psychological science, 2, 267-270.

Nussbaum, J., & Novick, S. (1982). Alternative Frameworks, Conceptual Conflict and Accommodation : Toward a Principled
Teaching Strategy. Instructional science, 11, 183-200.

Pajares, F. (1997). Current directions in self-efficacy research. In M. Maehr & P. R. Pintrich (Eds.), Advances in motivation and
achievement, Volume 10 (pp. 1-49). Greenwich, CT: JAI press.

Perrenoud, P. (1996). Enseigner : agir dans l’urgence, décider dans l’incertitude. Paris: ESF Éditeur.

Popper, K. R. (1995). La logique de la découverte scientifique. Paris: Éditions Payot.

Potvin, P. (2007). Enseigner les sciences en considérant le rôle de l’intuition dans l’apprentissage des sciences. In P. Potvin, M.
Riopel & S. Masson (Eds.), Regards multiples sur l’enseignement des sciences. Québec: Multimondes.

Reder, L. M., & Shunn, C. D. (1996). Metacognition does not imply awareness: Strategy choice is governed by implicit learning
and memory. In L. M. Reder (Ed.), Implicit memory and metacognition. Florence, Kentucky: Erlbaum.

René de Cotret, S., & Larose. (2007). Les choses que l’on sait et les choses dont on se sert, Paper presented at the Journées
Internationales sur la communication, l’éducation et la culture scientifiques et industrielles, Chamonix.

Schnyder, D. M., Verfaellie, M., Alexander, M. P., & Lafleche, G. (2004). A role for right medial prefrontal cortex in accurate
feeling-of-knowing judgements: evidence from patients with lesions to frontal cortex. Neuropsychologia, 42(7), 957-966.

Schunk, D. H. (1989). Self efficacy and achievement behaviours. Educational psychology review, 1(3), 173-208.

Shipstone, D. M. (1984). A study of children’s understanding of electricity in simple DC circuits. European journal of science
education, 6, 185-198.

Smith, J. D., Shield, W. E., & Washburn, D. A. (2003). The comparative psychology of uncertainty monitoring and metacognition.
Behavioral and brain sciences, 26, 317-373.

Strike, K., Posner, G. J., Hewson, P. W., & Gertzog, W. A. (1982). Accommodation of a Scientific Conception: Toward a Theory of
Conceptual Change. Science Education, 66(2), 211-227.

Thompson, T., & Dinnel, D. L. (2007). Poor performance in mathematics: is there a basis for a self-worth explanation for women?
Educational psychology, 27(3), 377-399.

Thouin, M. (2008). Tester et enrichir sa culture scientifique et technologique. Québec: Multimondes.

Vachey, E., Miquel, J. L., & Quinton, A. (2001). Quel intérêt avons-nous à intégrer la notion de certitude en contrôle continu?
Odonto-stomatologie tropicale (95), 1-8.

Vernon, D. T. A., & Blake, R. L. (1993). Does problem-based learning work? A meta-analysis of evaluative research. Academic
medicine, 69(7), 550-563.

Vygotski, L. (1997). Pensée et langage. Paris: La dispute/Inédit.

Wu, H.-K., & Hsieh, C.-E. (2006). Developing sixth graders’ inquiry skills to construct explanations in inquiry-based learning
environments. International Journal of Science Education, 28(11), 1289-1313.

........ About the authors ...........................................................................................................


Dr. Patrice Potvin is an Associate Professor of Science Education at the Université du Québec
à Montréal. Awarded a number of research grants, his interests focus on conceptual change,
inquiry learning, the use of technology in education and teacher training.
Dr. Martin Riopel is an Assistant Professor of Science Education at the Université du Québec
à Montréal. The recipient of various research grants, he is principally interested in the use of
technology in science education.
Steve Masson is a doctoral student at Université du Québec à Montréal. His research interests
involve science education, neuroscience, and science learning at the elementary level.
Dr. Frédéric Fournier is an Associate Professor of Science Education at the Université du
Québec à Montréal. He is interested in the use of technology in science education and in
education in general.

........ How to cite this article ....................................................................................................


Potvin, P., Riopel, M., Masson, S. & Fournier, F. (2010). Problem-centered learning vs. teaching-centered
learning in science at the secondary level: An analysis of the dynamics of doubt. Journal of Applied
Research on Learning, 3, Article 5, pp. 1-24.
