A R T I C L E  I N F O

Keywords:
Computational thinking
Elementary education
Evaluation methodologies
Gender studies
Teaching/learning strategies

A B S T R A C T

The inclusion of computational thinking (CT) in the classroom is something that present and future society is demanding. However, this domain remains largely unexplored, especially in the first years of Primary Education. The purpose of this study is to evaluate whether the inclusion of so-called unplugged activities favours the development of CT in students in the early years of Primary Education. To this end, a quasi-experimental study was carried out to explore the potential benefit of a mixed approach that combines both unplugged and plugged-in activities. In particular, 84 second-grade students took part in the experiment. Three questions were evaluated: the development of their CT skills, their motivation towards the proposed instruction, and the influence of students' gender on the two previous areas. The intervention was designed around a selection of activities extracted from Code.org courses, and was divided into two phases: one in which one group worked with unplugged activities and the other with plugged-in activities, and a second in which both groups worked with plugged-in activities. Analysing the three proposed questions through tests performed before, in between and after the instruction, it is concluded that the inclusion of unplugged activities in the instruction appears beneficial in terms of CT, motivation and gender.
1. Introduction
Considered a pioneer in the computational thinking (CT) field (Ilic, Haseski, & Tugtekin, 2018), Wing (2008) defined CT as “taking
an approach to solving problems, designing systems and understanding human behaviour that draws on concepts fundamental to
computing’’ (p. 3717). A few years later she defined it as “the thought processes involved in formulating a problem and expressing its
solution(s) in such a way that a computer—human or machine—can effectively carry out” (Wing, 2014, para. 5).
In this regard, Bundy (2007, p. 67) asserted that a remarkable intellectual revolution is happening around CT, as it is influencing research in most disciplines, in both the sciences and the humanities. Accordingly, Wing (2006) argued that CT can today be considered a fundamental skill for everyone, not just for computer scientists, and that it should be added to reading, writing and arithmetic among every child's abilities. For Wing (2008, p. 3717), this scenario poses a new educational challenge for our society, especially for our children.
Barr and Stephenson (2011) argued that all of today's students will live lives heavily influenced by computing, and many will work in fields that involve or are influenced by computing, so they must begin to work with CT in K-12. Specifically, Djurdjevic-Pahl, Pahl, Fronza, and El Ioini (2017) stated that CT instruction can start at early stages, from the first year of primary school.
* Corresponding author.
E-mail address: [email protected] (R. Cózar-Gutiérrez).
https://doi.org/10.1016/j.compedu.2020.103832
Received 6 September 2019; Received in revised form 24 January 2020; Accepted 2 February 2020
Available online 3 February 2020
0360-1315/© 2020 Elsevier Ltd. All rights reserved.
J. del Olmo-Muñoz et al. Computers & Education 150 (2020) 103832
As for its research and implementation in the classroom, since the cognitive capacity of students varies depending on their age, the methods, contents and learning strategies for teaching CT should be adapted accordingly (Hsu, Chang, & Hung, 2018). In relation to this, Brackmann et al. (2017) pointed out that CT is nowadays being adopted and investigated in schools using two main approaches: computer programming exercises (plugged-in activities) and unplugged activities. These two types of activities differ in that the latter are based on exposing children to CT without using computers (Bell, Grimley, Bell, Alexander, & Freeman, 2009).
Some research has been done regarding students' motivation towards these approaches and their results. Brackmann et al. (2017), with students of the 5th and 6th grades of Primary Education, showed that the unplugged approach has a positive effect on motivation and may be effective for the development of CT, while Tsarava et al. (2017) and Tsarava, Moeller, and Ninaus (2018), with children of the 3rd and 4th grades, stated that playful unplugged activities should allow students to get a first grip on CT processes by actively engaging them. However, the scant documentation about the early years of Primary Education, such as the 1st and 2nd grades, suggests that this is a recent issue whose research still has a long way to go.
In terms of students' gender, Dagienė, Pelikis, and Stupurienė (2015) underlined how little research specifically explores gender differences in young children's problem-solving and programming abilities, as CT was then a scarcely investigated domain. Since then, despite several studies on the topic with students from secondary school and the last years of elementary school (Atmatzidou & Demetriadis, 2016; Jenson & Droumeva, 2016; Owen et al., 2016; Torres-Torres, Román-González, & Pérez-González, 2019, pp. 209–215), there is still a research gap in the early years of Primary Education. Regarding interest in aspects related to learning programming and CT, Espino and González (2015a, 2015b) observed greater homogeneity between boys and girls in the stages of Early Childhood Education and Primary Education than in the subsequent educational stages. However, Master, Cheryan, Moscatelli, and Meltzoff (2017) highlighted that girls in early elementary school report less interest in and liking for computers compared with boys. These contradictory results indicate the need for further studies on the gender gap, and eventually for approaches offering more motivating alternatives for girls in the early educational stages, since at that point the differences are still not too wide and can be more easily addressed.
With all this, the main aim of this work is to evaluate whether it is more appropriate to introduce CT in the early years of Primary Education through unplugged activities before plugged-in activities, rather than doing so exclusively through plugged-in activities. Specifically, to address this aim we study the following research questions:
RQ01.- What approach, unplugged-plugged-in activities or only plugged-in activities, promotes a greater acquisition of CT skills
when introducing CT in the first years of Primary Education?
RQ02.- What approach, unplugged-plugged-in activities or only plugged-in activities, produces better motivational outcomes when
introducing CT in the first years of Primary Education?
RQ03.- Are there significant differences in the effectiveness of the approaches in terms of CT skills or motivational outcomes related
to the gender of the students?
2. Method
2.1. Design
Based on the main objectives of this study, and given that it was not possible to randomize the composition of the groups under study through prior sampling, a quasi-experimental design with control and experimental groups was adopted. The experience was structured in two main phases of instruction interspersed with pre-, mid- and post-tests, making a total of eight 45-min lessons delivered over an eight-week period. Fig. 1 describes the process implemented in the development of the research.
In the first session the students performed the Pre-testCT for the evaluation of their previous level in the skills related to CT. After
that, the first phase of instruction took place over three sessions in which the control group (hereinafter, unplugged group) worked
with unplugged activities while the experimental group (hereinafter, plugged-in group) worked with complementary plugged-in activities. After the first phase of instruction, the Mid-testCT was performed by students to evaluate their CT skills, and the Mid-testMot to
measure their motivation towards the activities previously implemented.
Once the Mid-tests were done, the second phase of instruction began. This phase consisted of two sessions in which the type of activities was unified for both groups, all of them being plugged-in. At the end of the second phase of instruction, the Post-tests (Post-testCT and Post-testMot) were completed to evaluate again both the level of CT and motivation.
2.2. Participants
The sample of the research consisted of a total of 84 students in the 2nd year of Primary Education, divided into four groups from three different schools, all of them in the region of Castilla-La Mancha (Spain). The plugged-in group and the unplugged group each comprised 42 students. Regarding gender, there were 18 boys and 24 girls in the unplugged group, and 23 boys and 19 girls in the plugged-in group. The demographic data of the groups, summarized in Table 1, confirm the homogeneity of the groups in terms of gender.
2.3. Instruments
Table 1
Demographic data of the groups under study.
Group Role # Students Gender
Unplugged Control 42 M: 18 F: 24
Plugged-in Experimental 42 M: 23 F: 19
Fig. 2. Item 3 of the Computational thinking test (Bebras 2016 Edition (Blokhuis, Millican, Roffey, Schrijvers, & Sentance, 2016), Easy difficulty).
Fig. 3. Item 8 of the Computational thinking test (Bebras 2017 Edition (Blokhuis, Csizmadia, Millican, Roffey, Schrijvers, & Sentance, 2017),
Medium difficulty).
ARCS is named after an acronym referring to the categories that comprise it: attention, relevance, confidence and satisfaction. This model, defined by Keller (1987), is based on the idea that there are personal and environmental characteristics that influence motivation and, therefore, performance in educational tasks. In this work we have used the validated reduced version of Loorbach, Peters, Karreman, and Steehouder (2015).
As the participants were students in the 2nd year of Primary Education, their level of reading comprehension was basic. Therefore, to carry out the motivation test, the choice of answers was facilitated by a Likert scale ranging from 1 (Strongly disagree) to 5 (Strongly agree), supported by emoticons. The items of the adapted motivation test can be consulted in Table 2.

Table 2
Motivation test items.
Item Dimension Description
2.4. Procedure
As explained in the design section, there were five instructional sessions that allowed students to develop skills related to CT. The instruction was composed of two phases, the first comprising three 45-min sessions and the second two 45-min sessions.
For the design of the instruction sessions we drew on several coding and computer literacy development programmes from the website Code.org. This platform organises its courses in different programmes that, thanks to their characteristics, are a good option to help students lay the foundations of computer science (Kalelioğlu, 2015). Promoting and teaching these abilities in primary school through initiatives such as Code.org has become increasingly popular (Tsarava et al., 2017).
To plan the activities and their sequencing throughout these sessions, we made a selection from the different Code.org online courses following the guidelines of Code.org (2018b) and Code.org (2018c), since they offer a good combination and alternation of activities from which an equivalence can be established between the plugged-in and the unplugged activities. Specifically, we chose and adapted activities from courses designed for a 2nd grade level, as indicated in Code.org (2018b), that include simple computational concepts such as "directions", "sequences" and "loops", but not more complex ones such as "conditionals" or "functions" (Román-González, 2016). The courses are the following:
• Course 1 (Code.org (2019a)), intended for early-readers who have little or no previous computer science experience.
• Course 2 (Code.org (2019b)), intended for readers who have little or no previous computer science experience.
• Course B (Code.org (2018a)), developed with first-graders in mind, and tailored to a novice reading level.
Regarding the materials used, we prepared a teaching guide for each lesson to facilitate the development of the sessions, so that each teacher only had to follow the proposed steps. In addition, the teacher supported the explanation of the activities with the whiteboard or the blackboard. For their part, students used different manipulative materials (such as templates, plastic cups or a reflection journal) for the unplugged activities, and tablets for the plugged-in activities, which allowed them to work individually.
Regarding groupings, students worked individually most of the time, except when an occasional unplugged activity required working in pairs or groups.
Objectives:
• Understand the difficulties of translating human language problems into machine language.
• Convert a sequence of steps into a coded program.
• Practice the communication of ideas through codes and symbols.
1) Vocabulary (2 min): Two new and important concepts for this lesson are explained to the whole class: algorithm and program.
2) Introduce Graph Paper Programming (8 min): In this activity, the teacher explains to the whole class how we guide each other to make drawings. For this exercise, we use sheets of 4 × 4 graph paper. Starting in the upper left corner, we guide our partners with simple instructions. These instructions include: "Move one square right", "Move one square left", "Move one square up", "Move one square down", and "Fill in the square with colour". For example, Fig. 5 shows how we would write an algorithm to tell our partner (who pretends to be a drawing machine) to colour their grid so that it looks like the image, using the instructions in Fig. 4 (which the teacher projects onto the board).
3) Practice Together (5 min): At this point, the teacher fills in the grid for the class on the board, square by square, and then asks the students to help describe what he/she has just done, first reciting the algorithm out loud and then converting the verbal instructions into a program.
4) Four-by-Fours (20 min): At this point students work in pairs, first choosing one of the proposed grids of a worksheet and writing the
algorithm necessary to draw the chosen image, so that after the pair can interpret (as do the machines) and draw the grid of the
other.
5) Flash Chat (3 min): What did we learn? A series of questions are proposed to the class to reflect on what has been done:
• What did we learn today?
• What would happen if we used the same arrows, but replaced "Fill in the square with colour" with "Place brick"? What could we then be able to do?
• What else could we program if we changed what the arrows meant?
6) Vocab Shmocab (2 min): Different definitions are proposed to the whole class to be associated with the new vocabulary learned, in
order to settle it.
7) Graph Paper Programming Assessment (10 min): Students work individually with the same type of exercises in their ‘Assessment
Worksheet’ to continue practicing what they have learned.
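To make the logic of this lesson concrete, the "drawing machine" that students simulate can be sketched in a few lines of code. The following is an illustrative sketch only, not part of the study's materials; the command names are hypothetical text encodings of the arrow and fill symbols used in the activity.

```python
# Minimal sketch of the Graph Paper Programming activity: a pretend
# "drawing machine" interprets move/fill commands on a 4 x 4 grid.

def run_program(commands, size=4):
    """Execute a list of commands; return the grid ('#' = filled square)."""
    grid = [["." for _ in range(size)] for _ in range(size)]
    row, col = 0, 0  # start in the upper left corner
    moves = {"right": (0, 1), "left": (0, -1), "up": (-1, 0), "down": (1, 0)}
    for cmd in commands:
        if cmd == "fill":
            grid[row][col] = "#"
        else:
            dr, dc = moves[cmd]
            row, col = row + dr, col + dc
    return grid

# Colour the three leftmost squares of the top row.
program = ["fill", "right", "fill", "right", "fill"]
for line in run_program(program):
    print("".join(line))
```

Executing the sample program prints a grid whose top row reads "###." — exactly what a pair of students would produce when one reads the algorithm aloud and the other acts as the machine.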
1) Jigsaw: Learn to drag and drop (15 min): Students start by simply dragging the images on the screen and then dropping the pieces of the puzzle in the correct order (see Fig. 6). To avoid any student being left behind, we all work at the same time, waiting for everyone to finish each stage before moving on to the next; we also do so in all subsequent plugged-in and unplugged sessions. The lesson is available at https://studio.code.org/s/course1/stage/3/puzzle/1.
2) Maze Sequence (20 min): In this puzzle series with Angry Birds game characters, students develop sequential algorithms to move a bird from one side of the maze to the pig on the other side (see Fig. 7). The lesson is available at https://studio.code.org/s/course1/stage/4/puzzle/1.
1) Talking to Robots (5 min): First we play a video to give students context about the things robots can do (https://www.youtube.com/watch?v=kHBcVlqpvZ8). After that, we ask the students how they think the robot in the video knows what to do, and whether a robot really understands what you say. The objective of this quick discussion is to point out that while robots may seem to behave like people, in reality they only respond to their programming.
2) Introduction and Modelling (10 min): For this activity, organised in groups, the teacher explains to the students how to tell their "robot" friend to build a specific stack of plastic cups using only the four commands of Fig. 8, which the teacher shows on the board. The teacher shows a stack of plastic cups with a certain structure, asking the students what the algorithm (the step-by-step instructions) should be like for the robot to build it.
3) Programming Your Robots (20 min): The students work in groups of four, with each group divided into two pairs. Each pair develops its own program to be "executed" by the other pair. Once the two pairs in the group have completed their programs, they can take turns being "robots" by following the instructions that the other pair wrote. By way of example, Fig. 10 shows the stack of plastic cups resulting from the interpretation of the algorithm shown in Fig. 9.
4) Journaling (10 min): The students write in their reflection journal about what they have learned, why it is useful and how they felt about it, drawing an emoticon. They also draw the stack of cups they would like a robot to build, together with the corresponding algorithm to build it. This can help to consolidate knowledge.
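The "robot" role that one pair plays for the other can likewise be sketched as a tiny interpreter. This is an illustrative simplification, not the study's material: the four command names below are hypothetical stand-ins for the symbols of Fig. 8, and the "table" is reduced to a row of positions that each hold a count of cups.

```python
# Minimal sketch of the "My Robotic Friends" activity: a pretend robot
# executes four symbolic commands to place plastic cups along a row.

def run_robot(program, positions=10):
    """Execute commands; return the number of cups at each position."""
    stacks = [0] * positions  # cups resting at each spot on the table
    position = 0              # where the robot's hand currently is
    holding = 0               # cups currently held
    for cmd in program:
        if cmd == "pick up cup":
            holding += 1
        elif cmd == "put down cup":
            stacks[position] += holding
            holding = 0
        elif cmd == "step forward":
            position += 1
        elif cmd == "step backward":
            position -= 1
    return stacks

# Carry one cup two steps forward and set it down.
program = ["pick up cup", "step forward", "step forward", "put down cup"]
print(run_robot(program)[:3])  # → [0, 0, 1]
```

Running the interpreter against a pair's written program immediately reveals ambiguous or missing steps, which is precisely the discussion the activity aims to provoke.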
2.4.4. Phase 1, session 2, plugged-in: Maze Debugging and Bee & Artist Sequences
This session consists of three activities or lessons. In the first of them, students learn to debug, a process consisting of finding and solving errors (bugs) in the code. In the second activity we review the sequences seen in the previous session, adding some new elements in addition to the movement arrows already known. Finally, another lesson is carried out to consolidate the knowledge about sequences, in which some different elements are also added to vary the type of exercises.
Objectives:
Fig. 8. Symbol key to build plastic cups stacks. Source: Code.org (2018c)
• Predict where a program will fail and modify an existing program to solve its errors.
• Express the movement as a series of commands and arrange them as sequential steps in a program.
• Create a program to draw an image using sequential steps.
1) Maze Debugging (15 min): In this activity, students find puzzles that have been solved incorrectly. They must go through the existing code to identify errors, including missing blocks and misordered blocks, as in Fig. 11. The lesson is available at https://studio.code.org/s/course1/stage/5/puzzle/1.
2) Bee Sequence (15 min): In this activity the students help their bees to collect the nectar from the flowers and create honey in the honeycombs, as shown in Fig. 12. The students thus add action blocks to the movement blocks that they already knew from the previous session. The lesson is available at https://studio.code.org/s/course1/stage/7/puzzle/1.
3) Artist Sequence (15 min): In this activity students take control of the Artist to make simple drawings on the screen, as Fig. 13 illustrates. The lesson is available at https://studio.code.org/s/course1/stage/8/puzzle/1.
Objectives:
• Identify repeated patterns in the code that could be replaced with a loop.
• Write algorithms that use loops to repeat patterns.
1) My Robotic Friends Review (10 min): This review reminds the students how quickly the programs of the "My Robotic Friends" activity can become complex. To do this, we show one of the easy structures from the previous session (see Fig. 14) and program it together as a reminder of the rules and terminology. Next, we show a more difficult structure, which requires the repetition of many symbols to program it (see Fig. 15). Once the students have come up with the idea of "repeating" the code, the teacher introduces the related vocabulary: "loop".
2) Introduction and Modelling (10 min): As a demonstration for the whole class, the teacher shows a program for students to think
about in which part they can find a repeating instruction pattern. Using one of the repetition patterns that the class identified,
explain that this can be written in another way, as shown in Fig. 16.
3) Looping Your Robots (20 min): Similar to the "My Robotic Friends" activity, groups of four students are formed and each group is then divided into two pairs. Each pair writes an algorithm that the "robot" can read later, and the teacher reminds the students to pay attention to opportunities to replace a repeating pattern with a loop.
4) Journaling (5 min): The students write in their diary how they felt about the lesson by drawing an emoticon. They also write or
draw something in their diary that reminds them of what loops are, such as describing what “repeat” means to them, or drawing a
picture of themselves repeating something.
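The core idea of this lesson, replacing a run of identical symbols with a single repeated instruction, can be sketched as run-length encoding of a command list. This is an illustrative sketch, not part of the study's materials, and the command names are hypothetical.

```python
# Sketch of the "loop" idea from the session above: a repeated pattern
# of symbols is rewritten as one instruction with a repeat count.

def compress(program):
    """Run-length encode consecutive repeated commands,
    e.g. ['up', 'up', 'up'] becomes [('up', 3)] -- the essence of a loop."""
    compressed = []
    for cmd in program:
        if compressed and compressed[-1][0] == cmd:
            # same command as the previous one: extend the "loop"
            compressed[-1] = (cmd, compressed[-1][1] + 1)
        else:
            compressed.append((cmd, 1))
    return compressed

stack_program = ["up", "up", "up", "right", "down", "down", "down"]
print(compress(stack_program))
# → [('up', 3), ('right', 1), ('down', 3)]
```

The seven-symbol program collapses to three looped instructions, mirroring how the students shorten the cup-stacking algorithm of Fig. 15 into the looped form of Fig. 16.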
Fig. 14. Simple plastic cups stack example. Source: Code.org (2018b)
Fig. 15. Complex plastic cups stack example. Source: Code.org (2018b)
Fig. 16. Plastic cups stack simplified looped algorithm example. Source: Code.org (2018b)
• Use a combination of sequential commands and loops to move and perform actions to reach a target within a maze.
1) Maze Loops (22 min): Having worked in this workspace in the previous session, this stage requires students to use loops to move around the maze more efficiently, needing fewer programming blocks. Fig. 17 depicts a stage of the lesson, which is available at https://studio.code.org/s/course1/stage/13/puzzle/1.
2) Bee Loops (23 min): As in the previous session, in this activity students help their bees to collect the nectar from the flowers and create honey in the honeycombs, but in this case they use loops to help the bee collect more nectar and make more honey, as portrayed in Fig. 18. The lesson is available at https://studio.code.org/s/course1/stage/14/puzzle/1.
1) Jigsaw: Learn to drag and drop (10 min) (only unplugged group): This activity is explained in section 2.4.2.
2) Artist Loops (20 min): Returning to the Artist, students learn how to sketch more complex forms by looping simple sequences of instructions. Fig. 19 depicts a stage of the lesson in which students must draw the battlements of a castle. The lesson is available at https://studio.code.org/s/course1/stage/18/puzzle/4.
3) Maze Loops (Text blocks) (25 min): Students use loops to move more efficiently through the maze they already know, but this time with text blocks, adding difficulty to the lesson. Fig. 20 shows an example of this lesson, which is available at https://studio.code.org/s/course2/stage/6/puzzle/1.
2.4.8. Phase 2, session 5: Maze Sequence & Bee Loops (text blocks) and Play Lab
This last session consists of three activities. The first of them serves to refresh the concept of sequence, on which we worked during the first sessions, but this time with text blocks. In the second lesson we continue working to consolidate loops, also with text blocks. Finally, in the last activity, students continue practicing with the environment in a relaxed way, creating an interactive story with new characters.
Objectives:
• Express the movement as a series of commands and arrange them as sequential steps in a program.
• Use a combination of sequential commands and loops to move and perform actions to reach a target within a maze.
• Understand and know how to use other types of programming blocks such as text blocks, as well as their particularities.
• Create an animated and interactive story using sequences, loops and event handlers, and share the creations with classmates.
1) Maze Sequence (Text blocks) (15 min): Students review sequences by programming with text blocks. Fig. 21 illustrates an example of this lesson, which is available at https://studio.code.org/s/course2/stage/3/puzzle/1.
2) Bee Loops (Text blocks) (15 min): In this activity students again help their bees to collect the nectar from the flowers and create honey in the honeycombs by using loops and text blocks, as shown in Fig. 22. The lesson is available at https://studio.code.org/s/course2/stage/8/puzzle/1.
3) Play Lab: Create a Story (15 min): In this culminating activity, students have the opportunity to be creative and apply all the coding skills they have learned to create an animated story. Fig. 23 illustrates an example of this lesson, which is available at https://studio.code.org/s/course1/stage/16/puzzle/1.
3. Results
Considering the objectives of this work, this section presents the results obtained by both groups in each of the tests (Pre-, Mid- and Post-test) for each of the proposed areas. First, the results related to CT are presented; then, those corresponding to motivation towards the instruction; and finally, the previous two taking into account the gender of the students.
3.1. Computational thinking

Regarding CT skills, Table 3 shows the average score of the three tests on a scale from 0 to 10, with the standard deviation of each measure in parentheses. The descriptive data and a Mann-Whitney test indicated that, before the intervention, there were already slight differences in favour of the plugged-in group (Pre-testCT = 3.87) compared to the unplugged group (Pre-testCT = 3.26) that were statistically significant (U = 506.00, p = .009). Mann-Whitney tests were conducted since the assumptions for parametric tests were not fulfilled.
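For readers unfamiliar with the test, the Mann-Whitney U statistic used throughout this section can be computed from rank sums. The following sketch uses made-up scores (not the study's data) purely to illustrate the computation; in practice a statistics package would also supply the p-value.

```python
# Illustrative computation of the Mann-Whitney U statistic, the
# non-parametric comparison of two independent groups used in the paper
# when the assumptions of parametric tests are not met.

def mann_whitney_u(group_a, group_b):
    """Return the U statistic for group_a (ties receive average ranks)."""
    combined = sorted(group_a + group_b)
    # Assign each distinct value the average of its 1-based rank positions.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    rank_sum_a = sum(ranks[x] for x in group_a)
    n_a = len(group_a)
    return rank_sum_a - n_a * (n_a + 1) / 2

plugged_in = [3.5, 4.0, 4.5, 3.0, 4.2]  # hypothetical Pre-test scores
unplugged = [2.5, 3.0, 3.5, 2.8, 3.2]
print(mann_whitney_u(plugged_in, unplugged))  # → 22.0
```

The reported U is conventionally the smaller of U for the two groups (they sum to n_a × n_b), and is then compared against critical values or converted to a p-value.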
Table 4 presents the average difference between Pre-testCT and Mid-testCT (U = 369.00, p = .033), and between Pre-testCT and Post-testCT (U = 373.00, p = .003), evidencing the greater improvement experienced by the unplugged group throughout the study.

Table 3
Computational thinking tests average results.
Group Pre-testCT Mid-testCT Post-testCT

Table 4
Computational thinking tests average differences.
Group DiffPre-MidCT DiffPre-PostCT

Table 5
Computational thinking tests average (by difficulty).
Group Pre-testEasy Pre-testHard Mid-testEasy Mid-testHard Post-testEasy Post-testHard
Plugged-in 3.21 (0.77) 0.66 (0.78) 3.13 (1.74) 0.87 (0.81) 3.68 (0.85) 2.58 (1.16)
Unplugged 2.75 (0.59) 0.53 (0.93) 3.10 (1.12) 1.90 (1.52) 4.24 (0.71) 2.84 (1.05)
Table 5 also shows the results of the Pre-testCT, Mid-testCT and Post-testCT, but this time splitting every result by the difficulty level of the problem, depending on whether it is easy or hard. The plugged-in group was more successful both in easy (U = 500.00, p = .003) and hard (U = 645.00, p = .191) problems in the Pre-testCT, and in easy (U = 586.00, p = .653) problems in the Mid-testCT. On the other hand, the unplugged group was more successful in hard (U = 365.00, p = .003) problems in the Mid-testCT, and both in easy (U = 451.00, p = .004) and hard (U = 594.50, p = .313) problems in the Post-testCT.
Next, Table 6 summarizes the differences between Pre-testCT and the Mid-testCT and between the Pre-testCT and the Post-testCT for
both easy and hard problems. The comparison between Pre-testCT and the Mid-testCT showed higher gains for the unplugged group in
the easy problems (U = 466.50, p = .275) and, especially, in hard problems (U = 300.00, p = .002). In the comparison between the Pre-
testCT and the Post-testCT, again larger improvements were experienced by the unplugged group in both easy (U = 317.50, p < .001)
Table 6
Computational thinking tests average difference (by difficulty).
Group DiffPre-MidCT-Easy DiffPre-MidCT-Hard DiffPre-PostCT-Easy DiffPre-PostCT-Hard
Table 7
Mid-testMot and Post-testMot results.
Plugged-in Unplugged
3.2. Motivation
As shown in Table 7, no significant differences were identified between the groups in the results of the Mid-testMot (U = 718.00, p = .814) and the Post-testMot (U = 715.50, p = .413), although there is a slight decrease in the Post-test, more accentuated in the plugged-in group. The table also includes the results of both tests split by dimension (attention, relevance, confidence and satisfaction), in which again no significant differences were found.
3.3. Gender
In this section we organise the results of the computational and motivational areas through the prism of the students' gender, in order to analyse possible differences in this respect.
Regarding CT, Table 8 shows the average score in all three tests, differentiating between boys and girls, and indicating that the differences between the genders were not significant in terms of CT. In the Pre-test, the girls of both groups clearly obtained better scores, but this became more balanced in the subsequent tests as the instruction developed.
Table 9 shows the average difference between Pre-testCT and Mid-testCT (U = 86.00, p = .847), and between Pre-testCT and Post-
testCT (U = 127.50, p = .595), evidencing a greater improvement experienced by the boys throughout the study, although in this case
the difference is not statistically significant.
Regarding motivation, Table 10 shows that in the Mid-testMot the girls of the plugged-in group obtained lower results overall in comparison with the boys, especially in the dimensions of relevance and confidence, the difference in this case being statistically significant (U = 94.50, p = .015). The difference is even greater in the Post-testMot, where the girls scored even lower than in the Mid-testMot in comparison with the boys, again a statistically significant difference (U = 116.50, p = .030). However, in the unplugged group there were hardly any differences between the genders; there was even a slightly higher motivation in the case of the girls in both the Mid-testMot (U = 182.50, p = .900) and the Post-testMot (U = 155.00, p = .277), although without statistically significant differences.
4. Analysis of results
Following the same structure of the previous section, the results in each of the three proposed areas are now analysed.
Focusing on the computational area, the averages of the results obtained in the Pre-testCT for both groups are below 4, which indicates a deficient initial level in the field of CT. This result is not surprising, since these types of problem-solving processes are not currently taught at schools. Despite the low initial scores, both groups experienced an improvement in the other two tests. The plugged-in group improved slightly in the Mid-testCT, and more considerably in the Post-testCT, with a two-point improvement compared to the Mid-testCT. The unplugged group improved even more, gaining two points already in the Mid-testCT and another two in the Post-testCT. Table 4 shows more clearly that the unplugged group experienced this greater improvement, especially in the Mid-testCT, which was performed after the first phase of unplugged instruction,
Table 8
Computational thinking tests average (by gender).
Group | Gender | Pre-testCT | Mid-testCT | Post-testCT
J. del Olmo-Muñoz et al. Computers & Education 150 (2020) 103832
Table 9
Computational thinking tests average difference (by gender).
Group | Gender | DiffPre-MidCT | DiffPre-PostCT
Table 10
Mid-testMot and Post-testMot results by gender.
Plugged-in | Unplugged
5. Conclusions
In order to establish the objectives of this work the following question was posed: Is it more appropriate to introduce CT in early
years of Primary Education through unplugged activities before plugged-in activities rather than do it exclusively through plugged-in
activities? To answer this question, three research questions were set, consisting of analysing the development of CT skills, diagnosing
the possible loss or gain in terms of motivation, and identifying possible differences related to the gender of the students regarding the
two previous objectives. To do this, a comparison was made between two groups of the 2nd year of Primary Education, designating one
as a control group and the other as an experimental group. To carry out this comparison, an instruction was designed for both groups,
divided into two phases. The first phase was different for each group, and the second was the same for both groups.
In the case of the first research question, it was found that the students had a low initial level in terms of CT, something that was expected considering the little or no instruction previously received on the subject. The proposed instruction emphasized CT concepts suited to the students' educational level, such as "directions", "sequences" and "loops". From there, the level of CT skills increased, and the increase was more remarkable in the experimental group, which worked on unplugged activities in the first phase of instruction, as Brackmann et al. (2017) had already revealed. This finding also supports the conclusion of Città et al. (2019) on the connection between sensorimotor factors and high-level cognitive processes, which reveals the impact of an unplugged approach on the acquisition of computational thinking skills.
In reference to the second research question, related to motivation, the results obtained by the experimental group show a significant advantage for an instruction that includes unplugged activities, even when these are followed by plugged-in activities. As in Tsarava et al. (2017) and Tsarava et al. (2018), this leads us to conclude that CT instruction based on unplugged activities, or on unplugged activities followed by plugged-in ones, is beneficial not only in terms of skills acquisition but also in terms of student motivation, a clear double advantage.
In the case of the third research question, related to students' gender and its influence on the development of CT and on motivation, the results confirm what was previously asserted in Espino and González (2015a, 2015b), allowing us to infer that gender influences both areas, more evidently in the case of motivation. This study has made it possible to verify that, when working on CT with students in the early years of Primary Education (2nd year in this case), both genders respond better to an instruction composed of unplugged or mixed (unplugged and plugged-in) activities, both in the development of CT skills and in motivation towards them. Regarding the motivation of girls in particular, this work responds to the suggestion of Kong, Chiu, and Lai (2018) on attracting girls to this field of computer science; to the different interventions mentioned in that study, we add the promotion of unplugged activities.
Analysing the achievement of the objectives, we can answer the initial question by concluding that it is more appropriate to work on CT in the early years of Primary Education through a mixed approach that combines unplugged and plugged-in activities rather than doing it only through plugged-in activities.
In future studies the sample size could be increased, since although the one chosen for this study allows some conclusions to be drawn, it limits the generalization of the results. The length of the instruction could also be extended: a deeper instruction of twelve to eighteen sessions, like those collected in the Code.org courses, should provide more enlightening results for each of the proposed instruments. Regarding those instruments, there is a manifest shortage of validated tools for evaluating the development of CT skills that focus on the solution of 'real' problems rather than on problems exclusive to programming. In this sense, we find appropriate the proposal, already made by several authors, to further investigate the Bebras Tasks (e.g. Román-González et al., 2019; Wiebe et al., 2019; or Araujo, Andrade, Guerrero, & Melo, 2019). The other instrument used, the measure of motivation, constitutes one of the limitations of this work. Even though the wording of the questions was adapted to facilitate their understanding and the selection of answers was facilitated by a Likert scale, it is fair to recognize that the complexity of the questions is perhaps too high for the early years of Primary Education. A future improvement could be the development of a measurement instrument adapted to these ages. Besides, the configuration of the instruction model may have affected the results on motivation; future studies could assess different settings and analyse their impact.
Although the study developed in this work allows some conclusions to be drawn, such as those mentioned above, there is still a long way to go. The inclusion of CT in the classroom is inevitable, since present and future society demands people prepared to face its needs. This work aims to make its own contribution to this field of education.
CRediT authorship contribution statement
Javier del Olmo-Muñoz: Conceptualization, Methodology, Investigation, Data curation, Formal analysis, Writing - original draft,
Writing - review & editing. Ramón Cózar-Gutiérrez: Conceptualization, Methodology, Resources, Visualization, Writing - review &
editing. José Antonio González-Calero: Conceptualization, Methodology, Formal analysis, Writing - review & editing, Supervision.
Acknowledgments
This work was supported by the University of Castilla-La Mancha under Grant number 2019-GRIN-27150.
References
Araujo, A. L. S. O., Andrade, W. L., Guerrero, D. D. S., & Melo, M. R. A. (2019). How many abilities can we measure in computational thinking?. In Proceedings of the
50th ACM technical symposium on computer science education - sigcse ’19 (pp. 545–551). New York, New York, USA: ACM Press. https://doi.org/10.1145/
3287324.3287405.
Atmatzidou, S., & Demetriadis, S. (2016). Advancing students’ computational thinking skills through educational robotics: A study on age and gender relevant
differences. Robotics and Autonomous Systems, 75, 661–670. https://doi.org/10.1016/j.robot.2015.10.008.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12. ACM Inroads, 2(1), 48. https://doi.org/10.1145/1929887.1929905.
Blokhuis, D., Csizmadia, A., Millican, P., Roffey, C., Schrijvers, E., & Sentance, S. (2017). UK bebras computational challenge 2017 answers. Retrieved from http://www.
bebras.uk/uploads/2/1/8/6/21861082/uk-bebras-2017-answers.pdf.
Blokhuis, D., Millican, P., Roffey, C., Schrijvers, E., & Sentance, S. (2016). UK bebras computational challenge 2016 answers. Retrieved from http://www.bebras.uk/
uploads/2/1/8/6/21861082/uk-bebras-2016-answers.pdf.
Bell, T., Alexander, J., Freeman, I., & Grimley, M. (2009). Computer science unplugged: School students doing real computing without computers. New Zealand Journal of Applied Computing and Information Technology, 13(1), 20–29.
Brackmann, C. P., Román-González, M., Robles, G., Moreno-León, J., Casali, A., & Barone, D. (2017). Development of computational thinking skills through unplugged
activities in primary school. In Proceedings of the 12th workshop on primary and secondary computing education - WiPSCE ’17 (pp. 65–72). New York, New York, USA:
ACM Press. https://doi.org/10.1145/3137065.3137069.
Bundy, A. (2007). Computational Thinking is pervasive. Journal of Scientific and Practical Computing Noted Reviews, 1(2), 67–69. Retrieved from http://www.inf.ed.ac.
uk/research/programmes/comp-think/.
Chen, G., Shen, J., Barth-Cohen, L., Jiang, S., Huang, X., & Eltoukhy, M. (2017). Assessing elementary students’ computational thinking in everyday reasoning and
robotics programming. Computers & Education, 109, 162–175. https://doi.org/10.1016/j.compedu.2017.03.001.
Chiazzese, G., Arrigo, M., Chifari, A., Lonati, V., & Tosto, C. (2019). Educational robotics in primary school: Measuring the development of computational thinking skills with the Bebras tasks. Informatics, 6(4), 43. https://doi.org/10.3390/informatics6040043.
Città, G., Gentile, M., Allegra, M., Arrigo, M., Conti, D., Ottaviano, S., … Sciortino, M. (2019). The effects of mental rotation on computational thinking. Computers & Education, 141, 103613. https://doi.org/10.1016/j.compedu.2019.103613.
Code.org. (2018). CS fundamentals 2018 | course B. Retrieved May 7, 2019, from https://curriculum.code.org/csf-18/courseb/.
Code.org. (2018). CS fundamentals curriculum guide. Retrieved from https://code.org/curriculum/docs/csf/CSF_Curriculum_Guide_2018_smaller.pdf.
Code.org. (2018). Instructor handbook: Code studio lesson plans for courses one, two, and three. Retrieved from https://code.org/curriculum/docs/k-5/complete.pdf.
Code.org. (2019). Course 1. Retrieved May 7, 2019, from https://studio.code.org/s/course1/.
Code.org. (2019). Course 2. Retrieved May 7, 2019, from https://studio.code.org/s/course2/.
Csizmadia, A., Curzon, P., Humphreys, S., Ng, T., Selby, C., & Woollard, J. (2015). Computational thinking-A guide for teachers. Retrieved from http://www.
computacional.com.br/files/ComputingatSchool/ComputingatSchool-CAS-Csizmadia-ComputationalThinking-AGuideforTeachers.pdf.
Dagienė, V., Pelikis, E., & Stupurienė, G. (2015). Introducing computational thinking through a contest on informatics: Problem-solving and gender issues.
Informacijos Mokslai, 73, 55–63.
Dagienė, V., & Sentance, S. (2016). It's computational thinking! Bebras tasks in the curriculum. In International conference on informatics in schools: Situation, evolution,
and perspectives (Vol. 9973, pp. 28–39). Cham: Springer. https://doi.org/10.1007/978-3-319-46747-4_3.
Djurdjevic-Pahl, A., Pahl, C., Fronza, I., & El Ioini, N. (2017). A pathway into computational thinking in primary schools. In T. Wu, R. Gennari, Y. Huang, H. Xie, & Y. Cao (Eds.), Emerging technologies for education. SETE 2016. Lecture notes in computer science (pp. 165–175). Cham: Springer. https://doi.org/10.1007/978-3-319-52836-6_19.
Espino, E. E. E., & González, C. S. G. (2015a). Estudio sobre diferencias de género en las competencias y las estrategias educativas para el desarrollo del pensamiento
computacional. Revista de Educación a Distancia (Vol. 46). Retrieved from https://revistas.um.es/red/article/view/240171.
Espino, E. E. E, & González, C. S. G (2015b). Influencia del Género en el Pensamiento Computacional. In Proceedings in XVI international conference on human computer
interaction (pp. 7–9).
Hsu, T.-C., Chang, S.-C., & Hung, Y.-T. (2018). How to learn and how to teach computational thinking: Suggestions based on a review of the literature. Computers &
Education, 126, 296–310. https://doi.org/10.1016/j.compedu.2018.07.004.
Hubwieser, P., & Mühling, A. (2015). Investigating the psychometric structure of Bebras contest: Towards measuring computational thinking skills. In 2015 international conference on learning and teaching in computing and engineering (pp. 62–69). IEEE. https://doi.org/10.1109/LaTiCE.2015.19.
Ilic, U., Haseski, H. I., & Tugtekin, U. (2018). Publication trends over 10 Years of computational thinking research. Contemporary Educational Technology, 9(2),
131–153. https://doi.org/10.30935/cet.414798.
Jenson, J., & Droumeva, M. (2016). Exploring media literacy and computational thinking: A game maker curriculum study. Electronic Journal of e-Learning, 14(2),
111–121. Retrieved from http://www.alice.org/.
Jung, U., & Lee, Y. (2017). The applicability and related issues of Bebras challenge in informatics education. The Journal of Korean Association of Computer Education, 20(5), 1–14. Retrieved from http://www.koreascience.or.kr/article/JAKO201731063313588.page.
Kalelioğlu, F. (2015). A new way of teaching programming skills to K-12 students: Code.org. Computers in Human Behavior, 52, 200–210. https://doi.org/10.1016/j.
chb.2015.05.047.
Keller, J. M. (1987). Development and use of the ARCS model of instructional design. Journal of Instructional Development, 10(3), 2–10. https://doi.org/10.1007/
BF02905780.
Kong, S.-C., Chiu, M. M., & Lai, M. (2018). A study of primary school students’ interest, collaboration attitude, and programming empowerment in computational
thinking education. Computers & Education, 127, 178–189. https://doi.org/10.1016/J.COMPEDU.2018.08.026.
Lockwood, J., & Mooney, A. (2018). Developing a computational thinking test using Bebras problems. In Proceedings of the CC-tel 2018 and TACKLE 2018 workshops.
Leeds. Retrieved from http://ceur-ws.org.
Loorbach, N., Peters, O., Karreman, J., & Steehouder, M. (2015). Validation of the Instructional Materials Motivation Survey (IMMS) in a self-directed instructional
setting aimed at working with technology. British Journal of Educational Technology, 46(1), 204–218. https://doi.org/10.1111/bjet.12138.
Master, A., Cheryan, S., Moscatelli, A., & Meltzoff, A. N. (2017). Programming experience promotes higher STEM motivation among first-grade girls. Journal of
Experimental Child Psychology, 160, 92–106. https://doi.org/10.1016/j.jecp.2017.03.013.
Owen, C. B., Keppers, N., Dillon, L., Levinson, M., Dobbins, A., & Rhodes, M. (2016). Dancing computer: Computer literacy though dance. In ACM international
conference proceeding series (pp. 174–180). Association for Computing Machinery. https://doi.org/10.1145/3007120.3007131.
Román-González, M. (2015). Computational thinking test: Design guidelines and content validation. In 7th annual international conference on education and new learning technologies. Barcelona, Spain. https://doi.org/10.13140/RG.2.1.4203.4329.
Román-González, M. (2016). Codigoalfabetización y pensamiento computacional en Educación Primaria y Secundaria: validación de un instrumento y evaluación de
programas (Vol. 1).
Román-González, M., Moreno-León, J., & Robles, G. (2019). Combining assessment tools for a comprehensive evaluation of computational thinking interventions. In
S.-C. Kong, & H. Abelson (Eds.), Computational thinking education (pp. 79–98). Singapore: Springer Singapore. https://doi.org/10.1007/978-981-13-6528-7_6.
Román-González, M., Pérez-González, J.-C., & Jiménez-Fernández, C. (2017). Which cognitive abilities underlie computational thinking? Criterion validity of the
computational thinking test. Computers in Human Behavior, 72, 678–691. https://doi.org/10.1016/j.chb.2016.08.047.
Ruf, A., Mühling, A., & Hubwieser, P. (2014). Scratch vs. Karel-impact on learning outcomes and motivation. In Proceedings of the 9th workshop in primary and
secondary computing education (pp. 50–59). ACM. https://doi.org/10.1145/2670757.2670772.
Selby, C., & Woollard, J. (2013). Computational thinking: The developing definition. Retrieved from https://eprints.soton.ac.uk/356481/.
Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A systematic review of empirical studies. Computers & Education, 148,
103798. https://doi.org/10.1016/j.compedu.2019.103798.
Torres-Torres, Y.-D., Román-González, M., & Pérez-González, J.-C. (2019). Implementation of unplugged teaching activities to foster computational thinking skills in primary
school from a gender perspective. Association for Computing Machinery (ACM). https://doi.org/10.1145/3362789.3362813.
Tsarava, K., Moeller, K., Butz, M., Pinkwart, N., Trautwein, U., & Ninaus, M. (2017). Training computational thinking: Game-based unplugged and plugged-in
activities in primary school. In 11th European conference on game-based learning (pp. 687–695). Graz, Austria.
Tsarava, K., Moeller, K., & Ninaus, M. (2018). Training computational thinking through board games: The case of crabs. International Journal of Serious Games, 5(2).
https://doi.org/10.17083/ijsg.v5i2.248.
Wiebe, E., London, J., Aksit, O., Mott, B. W., Boyer, K. E., & Lester, J. C. (2019). Development of a lean computational thinking abilities assessment for middle grades
students. In Proceedings of the 50th ACM technical symposium on computer science education - sigcse ’19 (pp. 456–461). New York, New York, USA: ACM Press.
https://doi.org/10.1145/3287324.3287390.
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35. https://doi.org/10.1145/1118178.1118215.
Wing, J. M. (2008). Computational thinking and thinking about computing. Philosophical Transactions of the Royal Society A: Mathematical, Physical & Engineering
Sciences, 366(1881), 3717–3725. https://doi.org/10.1098/rsta.2008.0118.
Wing, J. M. (2014). Computational thinking benefits society. Retrieved December 24, 2018, from http://socialissues.cs.toronto.edu/index.html%3Fp=279.html.