
Received: 2 November 2022 | Accepted: 10 February 2024

DOI: 10.1111/bjet.13443

ORIGINAL ARTICLE

Assessing implicit computational thinking in game-based learning: A logical puzzle game study

Tongxi Liu

Hong Kong Baptist University, Hong Kong, Hong Kong

Correspondence
Tongxi Liu, Hong Kong Baptist University, AAB 834, 8/F, Academic and Administration Building, Baptist University Road Campus, Kowloon Tong, Hong Kong, Hong Kong.
Email: [email protected]

Funding information
U.S. Department of Education, Grant/Award Number: U411C190179

Abstract
To date, extensive work has been devoted to incorporating computational thinking in K-12 education. Recognizing students' computational thinking stages in game-based learning environments is essential to capture unproductive learning and provide appropriate scaffolding. However, few reliable and valid computational thinking measures have been developed, especially in games, where computational knowledge acquisition and computational skill construction are implicit. This study introduced an innovative approach to explore students' implicit computational thinking through various explicit factors in game-based learning, with a specific focus on Zoombinis, a logical puzzle-based game designed to enhance students' computational thinking skills. Our results showed that factors such as duration, accuracy, number of actions and puzzle difficulty were significantly related to students' computational thinking stages, while gender and grade level were not. Besides, findings indicated gameplay performance has the potential to reveal students' computational thinking stages and skills. Effective performance (shorter duration, fewer actions and higher accuracy) indicated practical problem-solving strategies and systematic computational thinking stages (eg, Algorithm Design). This work helps simplify the process of implicit computational thinking assessment in games by observing the explicit factors and gameplay performance. These insights will serve to enhance the application of gamification in K-12 computational thinking education, offering a more efficient method of understanding and fostering students' computational thinking skills.

© 2024 British Educational Research Association.

Br J Educ Technol. 2024;55:2357–2382. wileyonlinelibrary.com/journal/bjet



KEYWORDS
computational thinking assessment, data science applications, game-based learning analytics, implicit computational thinking behaviours, learning behaviour patterns

Practitioner notes

What is already known about this topic
• Game-based learning is a pedagogical framework for developing computational thinking in K-12 education.
• Computational thinking assessment in games faces difficulties because students' knowledge acquisition and skill construction are implicit.
• Qualitative methods have widely been used to measure students' computational thinking skills in game-based learning environments.

What this paper adds
• Categorizes students' computational thinking experiences into distinct stages and analyses recurrent patterns employed at each stage through sequential analysis. This approach serves as inspiration for advancing the assessment of stage-based implicit learning with machine learning methods.
• Gameplay performance and puzzle difficulty significantly relate to students' computational thinking skills. Researchers and instructors can assess students' implicit computational thinking by observing their real-time gameplay actions.
• High-performing students can develop practical problem-solving strategies and exhibit systematic computational thinking stages, while low-performing students may need appropriate interventions to enhance their computational thinking practices.

Implications for practice and/or policy
• Introduces a practical method, with the potential for generalization across various game-based learning environments, to better understand learning processes by analysing significant correlations between certain gameplay variables and implicit learning stages.
• Allows unproductive learning detection and timely intervention by modelling the reflection of gameplay variables in students' implicit learning processes, helping improve knowledge mastery and skill construction in games.
• Further investigations on the causal relationship between gameplay performance and implicit learning skills, with careful consideration of more performance factors, are expected.

INTRODUCTION

Computational thinking (CT) has received considerable attention as one of the most founda-
tional competencies for being an informed and competitive citizen in the twenty-­first century
(Grover & Pea, 2018). In 1980, Papert (1980) first used the term ‘computational thinking’ to
describe students' strategic thinking in a programming context called LOGO. Drawing from
computer science concepts, Wing (2006) defined CT as a cognitive process that involves multiple problem-solving steps, much as a computer would perform them. A critical aspect of this process is
devising and constructing appropriate strategies of computation with which to derive practi-
cal solutions (Aho, 2012). As such, CT enables students to efficiently develop computational
abilities and solve complex problems in the real world. Even though much effort has been
devoted to facilitating higher education students' CT skills and keeping them competitive in
the workplace (Lyon, 2020), CT in K-­12 education is a relatively new agenda. It is no longer
sufficient to introduce CT simply to college and graduate students (Kjällander et al., 2021).
All of today's students should develop a solid foundation of CT skills, preparing themselves to
solve real-­life problems with algorithmic and computational methods (Shute, Ke, et al., 2017).
Recently, there has been an increasing interest in incorporating CT into K-­12 education.
Shute, Ke, et al. (2017) described the role of CT in K-­12 as ‘the conceptual foundation re-
quired to solve problems effectively and efficiently (ie, algorithmically, with or without the
assistance of computers) with solutions that are reusable in different contexts’ (p. 151). To
introduce the core concepts of CT in K-­12 education, many studies have focused on inte-
grating CT practices in both elementary and advanced programming contexts. For example,
Scratch (Maloney et al., 2010; Resnick et al., 2009) is a block-­based programming environ-
ment that enables students to develop computer science concepts and skills by thinking
creatively, reasoning systematically and solving computationally. As an efficient program-
ming context, Scratch has proven beneficial in developing engaged CT practices and en-
hancing teaching and learning in K-­12 CT education (Marcelino et al., 2018). Besides, Noh
and Lee (2020) suggested that using robotics programming significantly improved students'
CT and creativity. These programming contexts (block-­based, text-­based and others) allow
students to computationally and algorithmically design and create personalized solutions.
While constructing CT concepts in computing programs plays an essential role, K-­12 stu-
dents can practice their CT competencies in a broader range of learning situations that
do not require programming projects (Kazimoglu et al., 2011). Barr and Stephenson (2011)
emphasized that CT in K-­12 education was not merely programming or computing; instead,
it was the process of knowledge acquisition and competency construction that involved for-
mulating and identifying relevant agencies to solve authentic problems.
Today, CT incorporates much more than programming activities in K-­ 12 education.
Game-­based learning has proven to be effective in motivating and improving students'
learning performance (Hooshyar et al., 2018; Zumbach et al., 2020). Research (Weintrop,
Beheshti, et al., 2016) suggested that game-­based learning showed promise to be the ped-
agogical framework for developing students' CT skills. With playful and dynamic learning
activities, game-­based learning introduces CT concepts and provides CT practices to fos-
ter students' CT competencies in an engaging and attractive way (Kazimoglu et al., 2012;
Pho & Dinscore, 2015). In game-­based learning environments, students intentionally con-
struct computational and algorithmic strategies to solve complex problems, which facilitates
CT skill development through various game mechanics (Zhao & Shute, 2019). In addition,
game-based learning creates cognitive intrinsic motivation, engaging students in mastery, aesthetics and autonomy (Sanmugam et al., 2014). Reward mechanisms encourage students to competitively solve problems by learning from real-life experiences, prompting internal
and external motivations. In an effort to develop K-­12 students' CT skills, many studies im-
part core knowledge and concepts of CT through game-­based learning environments. For
example, Jiang et al. (2019) assessed students' CT skills in a puzzle-­based game and found
that this non-­coding game significantly enhanced students' logical thinking and problem-­
solving ability. Guenaga et al. (2021) explored students' CT development in a maze-­based
learning game by analysing their gameplay interactions. This study provided deeper insights
into assessing students' CT skills in game-­based learning environments, as well as design-
ing more effective CT learning platforms.

Even though much attention has been poured into fostering K-­12 students' CT through
game-­based learning, CT assessment in this specific context remains challenging. According
to a review study (Tatar & Eseryel, 2019), most current research applied interviews, tests
and quizzes to understand students' game-­based learning outcomes related to CT skills.
In games, students' gameplay actions are automatically operationalized in digital systems,
capturing extensive details such as computational logic, strategic sequence and algorithmic
behaviours to reflect students' CT skills (Liu & Israel, 2022), which could not be identified
by surveys or exams (Rowe et al., 2021). Besides, assessing students' implicit CT in games is inherently difficult since it cannot be expressed through pre/post-tests or self-reports. Laden with terminology and formalism, traditional methods impede students from exhibiting their implicit knowledge (Arena & Schwartz, 2014). Many researchers have dedicated themselves
to assessing students' CT skills in game-­based learning environments. For example, a study
employed data mining methods to evaluate CT practices (De Souza et al., 2019), though the educational interpretation of CT measures based on students' external performance remained insufficient. Polat et al. (2021) statistically explored the relationships between certain variables
and CT skills while ignoring other significant aspects such as students' real-­time perfor-
mance. Therefore, the effort in this field is still scattered, leaving a clear space for additional
research.
Assessing students' CT in games is essential because CT competency development re-
flects the effectiveness of games and the efficiency of CT education. In a well-­designed
game, gameplay performance has construct validity, reflecting students' implicit knowledge
without obtrusive evaluation that may disrupt students' self-­controlled learning (Shute, 2011;
Van Der Meij et al., 2013). Investigating the relationship between gameplay performance and
CT competency in this learning setting offers a scalable and reliable assessment of students' CT
skills. Other objective variables, such as gender difference and puzzle difficulty, may also
be associated with students' CT competency and should be carefully considered (Ardito
et al., 2020; Atmatzidou & Demetriadis, 2016). Since gameplay performance and these ob-
jective variables are easily accessible and observable, examining the relationships between
these factors and students' CT skills provides unique opportunities for explicit assessment
of students' implicit CT. Researchers and instructors can capture students' CT processes
and provide appropriate interventions by observing these external factors, in such a way
enhancing students' knowledge acquisition and competency construction in game-­based
learning environments. Identifying the factors for distinguishing students' implicit CT skills in
games has the potential to bring great educational benefits by developing CT competency
with gamification.
In response to the absence of implicit CT assessment in games, this study employs statis-
tical and educational data mining methods to uncover the significance of various observable
variables in relation to students' CT skills. In doing so, we propose an efficient approach
for inferring and evaluating students' CT competence within the context of game-­based
learning. This study follows three key steps: First, we analyse students' gameplay log files
and establish evidence-­based measures of CT stages in Zoombinis. This puzzle-­based
game is designed to promote fundamental concepts of mathematics and computer science,
enabling students to develop implicit problem-­solving strategies and CT skills during their
gaming experiences. Second, we extract variables such as gender, grade level, duration of
each attempt, number of actions in each attempt, accuracy of each attempt, puzzle name
and puzzle difficulty from log files to examine which factors have a significant influence on
distinguishing students' CT stages. Third, we explore the relationship between students'
gameplay performance and their CT skills.
The contributions of this study are threefold: (1) illustrating how to build evidence-­based
measures of students' implicit CT by analysing their log files; (2) providing unique opportu-
nities for explicit assessment of students' implicit CT by uncovering the relations between
various factors and CT stages; (3) assisting researchers and instructors in providing appro-
priate interventions and evaluating the effectiveness of game-­based learning to enhance
students' CT competency development.
The central research questions addressed by this study are: (1) Which factors play im-
portant roles in distinguishing students' CT in Zoombinis? (2) What is the relationship be-
tween students' gameplay performance and CT skills in Zoombinis?

THEORETICAL FOUNDATIONS AND RELATED WORK

Game-­based learning

Game-based learning is a learning method that integrates gaming principles into real-life contexts to engage students and improve learning achievement (Liu et al., 2020). A study (Baptista & Oliveira, 2018) suggested that game-based learning significantly facilitates us-
ability, productivity and satisfaction. The educational goals of games emphasize building,
teaching and imparting knowledge in an efficient way (Tan et al., 2007). In a game-­based
learning environment, specific knowledge is embedded in virtual scenarios, in which stu-
dents connect abstract and logical thinking to authentic problems without conscious learn-
ing objectives (Dimitra et al., 2020). Acting as rich primers of attractive and contextualized
learning, games can involve students in an engaging problem-­based learning process (Ke
et al., 2016) and provide incentives for expected behaviours (Turan et al., 2016). Students
develop their knowledge and skills by testing hypotheses, interacting with system feedback,
adapting strategies and processing solutions. On the one hand, game-based learning requires students to solve authentic problems with strategic or planned gameplay actions
(Plass et al., 2015). On the other hand, students' gameplay behaviours can reflect their
knowledge states, cognitive ability and learning outcomes (Hicks et al., 2016). As a result,
well-­designed games affect students' learning motivation, cognitive ability and academic
performance (Su, 2016). Analysing gameplay actions is fundamental to understanding the
dynamic nature of students' learning processes and designing effective scaffoldings in the
future (Sawyer et al., 2018).
Game-based learning assessment aims to measure students' knowledge and skills through implicit test items free of jargon and extraneous materials (Rowe et al., 2021).
Traditional assessments, such as interviews and exams, are less effective since these methods cannot evaluate students' cognitive ability and learning competency (Rowe et al., 2021;
Shute, 2011). In games, assessing students' underlying knowledge is often grounded in
an evidence-centred assessment design (ECD) framework (Mislevy et al., 2003). The ECD framework enables researchers to capture the relationships among operations, evidentiary argument and assessment purposes by constructing, developing and arranging certain information elements (Mislevy et al., 2003). This framework has been used in various game-based learning assessments, an approach also known as stealth assessment (Ke & Shute, 2015). For
example, Kerr and Chung (2012) examined the key features of students' learning achieve-
ment in an educational video game, in which ECD was used to identify, extract and accu-
mulate features of students' performance. Even though the key features were embedded in students' log files and difficult to interpret, ECD made it possible to understand these complicated, large-scale data in an efficient and interpretable way. Grover et al. (2016)
combined the ECD framework with discovery-­driven data analytics to investigate students'
programming processes and problem-­solving strategies in a block-­based programming
game. In this research, ECD identified structural connections between CT constructs and
students' observable behaviours, providing evidence of students' problem-­solving patterns
and CT practices.

Computational thinking

First defined by Wing (2006), CT is characterized as a cognitive and analytical ability (Hooshyar et al., 2021) enabling students to solve problems efficiently. CT is a fundamental
skill for both K-­12 and higher education students. It encourages students to think recursively
by abstracting and decomposing a complicated problem, making the problem tractable
and its relevant components to be appropriately represented and modelled (Wing, 2006).
However, the definition of CT has been argued for many years since Wing did not precisely
describe what ‘computational thinking’ is in her research. Recently, many researchers at-
tempted to derive an appropriate definition of CT. For instance, Barr et al. (2011) proposed
that K-­12 students' CT process involved problem-­solving skills. In this research, CT was
defined as a problem-solving process that included six steps: formulating problems, analysing data, representing data, automating solutions, implementing possible solutions and
generalizing pattern. Selby and Woollard (2014) developed a CT definition that included
five components to facilitate appropriate assessment: abstraction, decomposition, algo-
rithmic thinking, evaluation and generalization. Basu et al. (2017) defined CT practices as
programming-­related concepts that included: iteration, problem decomposition, abstraction
and debugging. In sum, CT definitions vary depending on the learning environments and
research goals.
According to Brennan's research (Brennan & Resnick, 2012), a general CT framework
includes three dimensions: computational concepts, computational practices and computa-
tional perspectives. Engaging with computational concepts such as sequences, loops and
conditionals is necessary as students conduct CT activities (Lye & Koh, 2014). Computational
practices emphasize the problem-­solving processes that occur in students' CT activities
(Lye & Koh, 2014). Computational perspectives describe students' self-realized understanding of the real world (Kalelioglu et al., 2016), helping them learn from the world and practice their CT abilities to solve real-life problems. Brennan's study described a possible way to assess stu-
dents' CT skills, and many researchers followed this framework to enhance CT assessment
in various learning settings (Zhong et al., 2016). For example, Mouza et al. (2020) examined
the influence of repeated participation on students' CT concepts and the gender differences
on students' CT achievement by depending heavily upon this framework. Leveraging a part
of Brennan's framework, Weintrop, Holbert, et al. (2016) developed a more nuanced CT
framework that is situated in mathematics and science learning to understand high school
students' CT practices.
Assessing students' CT skills has drawn much attention in STEM education, espe-
cially in recent years as various educational contexts (eg, block-­based programming,
puzzle games and video games) were introduced in both K-­12 and higher education
(Tang et al., 2020). The methods and tools for assessing students' CT skills have to
rely on certain factors, such as learning settings, CT constructs, research purposes and
educational levels. The development of CT assessment in K-­12 education is just the
beginning; much current research is significantly tied to computer science frameworks,
opposing the definition and construction of CT itself (Rowe et al., 2021). For instance,
students develop their CT skills by solving complex puzzles in Zoombinis rather than
coding (Rowe et al., 2021). González (2015) focused on evaluating students' CT constructs with the Computational Thinking Test. Dagienė et al. (2016) applied Bebras Tasks (a
set of competition tasks) to explore the generalization of students' CT skills in real-­life
problems. However, these situational questions may be too peripheral and present a
barrier to neurodiverse students' CT assessment in specific work (Wiebe et al., 2019).
In this study, we therefore identify and follow a CT framework appropriate to our research questions.

Computational thinking in game-­based learning environments

To date, many studies (All et al., 2016, 2021; Ifenthaler et al., 2012) have investigated and emphasized the effectiveness of game-based learning, which has proved promising for facilitating students' CT concepts (Hsu et al., 2018). Generally, gameplay contents and ac-
tivities involve problem-­solving spaces and computational challenges (Qian & Clark, 2016).
Problems in game-­based learning environments are often ill-­structured, with no rigid formu-
lation or definitive solving trajectory (Turchi et al., 2019). Students' computational ability helps them design strategic and algorithmic gameplay actions to create effective
solutions (Pellas & Vosinakis, 2018). For instance, Tsarava et al. (2017) embedded a CT
framework including seven stages in a coding game to train students' programming skills.
The authors designed several lessons that were consistent with CT processes such as
decomposition, algorithms, patterns, abstraction and generalization to foster students' CT
skills. Lin et al. (2020) developed a CT framework in a digital game to enhance elementary
students' learning achievement and CT skills. In this study, CT was divided into three dimen-
sions: concepts, practices and perspectives. The authors mapped students' learning pro-
cesses to these three dimensions to evaluate and strengthen their CT abilities. By providing
an engaging and contextualized scenario of authentic problem-­solving (Gee, 2009), games
not only improve students' learning efficiency but also enhance their CT practices.
Zoombinis (Rowe et al., 2015) is a puzzle-­based game designed by TERC (2015) in the
1990s, aiming to develop students' mathematics concepts (eg, mapping, sorting and com-
paring) essential for CT skills. In this game, students have to design and implement efficient
CT patterns and problem-­solving strategies to bring Zoombini characters to safety (Rowe
et al., 2021). The process of building specific learning strategies in students' gameplay
behaviours is consistent with CT practices (Almeda et al., 2019), building evidence of stu-
dents' implicit learning constructs of CT skills. Aligning with the CT constructs introduced by
CSTA (2017) and Shute, Sun, et al. (2017), Zoombinis puzzles involved a learning progres-
sion of CT that was highly iterative with embedded loops among different CT stages. Previous
study (Rowe et al., 2020) employed hand-­labelling video analysis to understand students'
learning events and identify their CT stages, which might be time-­consuming. Asbell-­Clarke
et al. (2021) applied data mining approaches to detect students' CT practices. This research
found that students' gender and grade level were not significantly related to their CT achieve-
ment. However, other factors such as actions and accuracy were not considered.
In this study, we follow the CT framework introduced by previous research (Asbell-­Clarke
et al., 2021; Rowe et al., 2017) to assess students' implicit CT in Zoombinis: problem decomposi-
tion, pattern recognition and algorithm design. Students may develop various strategies or meth-
ods associated with one or more CT stages to solve certain puzzles. For example, a random trial
allows students to abstract and decompose a complicated problem, while systematically testing
hypotheses may help students understand underlying gameplay rules and construct useful CT
patterns. By leveraging this framework, we established evidence-­based measures of students'
implicit CT in Zoombinis. In addition, we included more factors and used efficient methods to
study the relationship between students' implicit CT and these explicit variables, making it more
practical to assess students' CT practices in game-­based learning environments.

METHODOLOGY

Learning environment and participants

The research context of this study is Zoombinis (Rowe et al., 2015; TERC, 2015). It is a
well-­designed puzzle game that aims to facilitate students' mathematical and computational
competency, in which gameplay achievement can reflect implicit CT skills. Each Zoombini has different physical attributes, such as shaggy hair, dot eyes and a blue nose (see Figure 1).
Students have to safely move Zoombini characters to Zoombiniville by identifying the correct
physical characteristics of each character (Rowe et al., 2015; TERC, 2015). To solve these
complex puzzles, students are required to abstract authentic problems, develop problem-­
solving strategies, design analytical solutions and generalize efficient algorithms. Students'
gameplay performance and sequential actions in log files reflect their CT practices, provid-
ing an evidence-­based assessment of implicit CT in this game.
The data used in our study was sourced from Data Arcade, a dataset created by TERC
(the developer of Zoombinis) to store students' log files. The selected sample for this re-
search comprises 158 students in grades 3–8 (88 male and 79 female) who had no prior
exposure to Zoombinis. The selection criteria require that each participant must have en-
gaged with at least one of the four Zoombinis puzzles (refer to Figure 2), and no external
interventions or scaffolding were implemented during the students' playtest.

• Allergic Cliffs: Players have to discover which attributes allow each Zoombini to cross
which bridge.
• Bubblewonder Abyss: Zoombini must move through a maze that has junctions and
switches triggered by Zoombini attributes.
• Mudball Wall: Players have to figure out how the colour and shape of each mudball corre-
spond to a position on the wall.
• Pizza Pass: Players have to find the combination of pizza (and at higher level ice cream)
toppings desired by a troll blocking the Zoombini's path.

The gameplay data generated from Zoombinis contains students' identifiers, demograph-
ics, timestamps, type of actions, puzzle information, objects' attributes, solution attributes
and other additional information (see Table 1). On the one hand, each action (eg, select a
topping) represents a student testing different attributes of a character to satisfy the target
problem. On the other hand, students' attempts (eg, deliver toppings) show the gameplay
feedback when they submit the character built in previous actions. According to the in-
terpretation, students' sequential actions describe how they test the required attributes of
a character with ordered or unordered gameplay behaviours, while attempts illustrate the
correctness of the designed character. Therefore, we applied students' attempts as the anal-
ysis granularity of gameplay performance and CT stages, and we used students' actions to
explore their problem-solving strategies. In total, the gameplay log data included 1845 rounds and 18,795 attempts. In each round, the first attempt has no dependent relationship with a previous one, and the last attempt merely combines all correct characters tested before. Thus, students' first and last attempts were excluded, leaving 15,105 attempts.
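As a concrete illustration of this filtering step, the following is a minimal sketch (not the authors' actual pipeline), assuming the logs are exported to CSV with the Table 1 columns plus a hypothetical Round identifier; the event names extend the Table 1 examples.

```python
# A minimal sketch of the attempt filtering described above; all file,
# column and event names are illustrative assumptions.
import pandas as pd

log = pd.read_csv("zoombinis_log.csv")  # hypothetical export of Data Arcade logs

# Submission events mark attempts; selection-type events are the actions
# leading up to them.
ATTEMPT_EVENTS = {"Deliver_Toppings", "Drop_Object"}

def middle_attempts(round_df: pd.DataFrame) -> pd.DataFrame:
    """Drop a round's first attempt (no dependency on a prior attempt) and
    last attempt (it only recombines characters already verified)."""
    attempts = round_df[round_df["Event"].isin(ATTEMPT_EVENTS)]
    return attempts.iloc[1:-1]

kept = (
    log.sort_values("Timestamp")
       .groupby(["PlayerID", "Puzzle", "Round"], group_keys=False)
       .apply(middle_attempts)
)
print(len(kept))  # the paper retains 15,105 of 18,795 attempts
```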

FIGURE 1 Zoombini's physical characteristics.



FIGURE 2 Screenshots of Zoombinis puzzles (top left going clockwise): Allergic Cliffs, Bubblewonder Abyss, Mudball Wall and Pizza Pass (Asbell-Clarke et al., 2021).

Measuring implicit computational thinking

Problem-­solving strategies

To measure students' implicit CT skills, we followed a widely accepted conceptual framework developed by prior research of Zoombinis (Asbell-Clarke et al., 2021; Rowe et al., 2017).
This framework emphasized that students' sequential gameplay actions included details
of how they developed computational concepts and established strategic solutions to sat-
isfy certain problems. In other words, the ordered and planned problem-­solving strategies
shown in students' gameplay actions efficiently reveal implicit CT stages. Since students' CT
process in Zoombinis is ‘hidden’, we should first examine their log files and extract meaning-
ful behavioural structures that indicate problem-­solving strategies. Our earlier work has il-
lustrated that sequence mining was a capable educational data mining approach to discover
students' problem-­solving strategies manifested over their gameplay (Liu & Israel, 2022). On
the strength of our previous research, we applied sequential pattern mining techniques to
detect students' ordered gameplay actions in their log files, and five problem-­solving strate-
gies were identified as follows:

• Testing one: Testing only one dimension at each trial.
• Winnowing: Removing dimensions one by one.
• Additive: Adding dimensions one by one.
• Subtracting: Testing the complementary set of the dimensions tested in the previous attempt.
• Replacing: Replacing one dimension while keeping other dimensions the same.

TABLE 1 Examples of gameplay log data.

PlayerID Timestamp Event Puzzle Level ObjectIDa SolutionID


100 12:19:46 Select_Topping Pizza_Pass 2 P10100000 P10110000
100 12:20:11 Select_Topping Pizza_Pass 2 P10110000 P10110000
100 12:21:27 Deliver_Toppings Pizza_Pass 2 P10110000 P10110000
100 12:22:03 Troll_Accepts Pizza_Pass 2 P10110000 P10110000
200 12:10:06 Lift_Object Allergic_Cliffs 1 Z5123 Z5123
200 12:10:58 Drop_Object Allergic_Cliffs 1 Z5123 Z5123
200 12:11:14 Object_Move_Accept Allergic_Cliffs 1 Z5123 Z5123
a ObjectID consists of a letter and several digits, which indicate the character's attributes (eg, Z5123, where Z represents 'Zoombini', 5 denotes that the Zoombini's hair is a hat, and 2 means that the Zoombini's nose is pink).


All these problem-­solving strategies were determined by analysing the ObjectID in stu-
dents' log files. The dependent relationships among students' gameplay actions offer an-
alytical evidence to uncover problem-solving strategies. For instance, Additive represents students adding new dimensions of a character one by one, manifested in their log files as the current action's ObjectID having exactly one more digit set to 1 than the previous one. Table 2 shows the patterns of each problem-solving strategy in stu-
dents' log files.
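To make the ObjectID comparison concrete, here is an illustrative classifier for the five strategies, assuming each ObjectID is a prefix letter plus a fixed-length bit string of tested dimensions (as in Table 2); this is a sketch of the idea, not the study's exact detector.

```python
# Sketch: classify the strategy linking two consecutive attempts from their
# ObjectIDs; the rule set is an assumption for demonstration.
def dims(object_id: str) -> set:
    """Indices of the dimensions switched on in an ObjectID like 'p00101100'."""
    return {i for i, c in enumerate(object_id[1:]) if c == "1"}

def classify(prev_id: str, curr_id: str) -> str:
    n = len(curr_id) - 1                      # total number of dimensions
    prev, curr = dims(prev_id), dims(curr_id)
    if len(curr) == 1:
        return "testing one"
    if curr < prev and len(prev - curr) == 1:
        return "winnowing"                    # one dimension removed
    if prev < curr and len(curr - prev) == 1:
        return "additive"                     # one dimension added
    if curr == set(range(n)) - prev:
        return "subtracting"                  # complement of the previous test
    if len(curr) == len(prev) and len(curr ^ prev) == 2:
        return "replacing"                    # one dimension swapped for another
    return "unordered"

print(classify("p00101100", "p00101000"))     # -> winnowing
```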

Computational thinking stages

Studies suggested that CT was a problem-solving process (Fagerlund et al., 2021;
Haseski et al., 2018; Voskoglou & Buckley, 2012). Analysing problem-­solving strategies
in students' gameplay offers unique opportunities for identifying CT stages and assess-
ing CT skills. In this study, the construction of problem-­solving strategies was viewed
as decisive evidence to reveal students' CT stages. Building upon the CT framework
introduced in Section “Game- ­Based Learning”, three CT stages were identified by the
following principles:

• Problem decomposition: No ordered or planned sequence of strategies was identified in students' attempts. Strategies were repeated no more than three times to reduce the complexity of the problem.
• Pattern recognition: Evidence showed that students revealed the underlying rules with or-
dered and planned patterns. One or more specific strategies were repeated at least three
times, and each attempt was dependent on the previous one.
• Algorithm design: Ordered combinations of two or more strategies were repeated across
multiple puzzles to solve new problems.

Table 3 illustrates how to determine students' CT stage at each attempt by analysing their problem-solving strategies. For instance, in the first example of Pattern Recognition,
students' gameplay actions exhibited dependent relationships. Winnowing strategy was re-
peated more than three times in one puzzle, indicating that students had already compre-
hended this problem-­solving strategy and systematically employed it to solve this puzzle.
Conversely, the actions in the second example of Problem Decomposition were labelled as
unordered behaviours. Even though two problem-­solving strategies were used coinciden-
tally, there was no evidence of continuous dependence in this case.
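A rough sketch of how such principles could be operationalized over a round's strategy sequence follows; the thresholds track the three bullet points above, but the exact rules (especially the cross-puzzle repetition required for Algorithm Design) are simplified assumptions.

```python
# Illustrative stage labelling from a round's per-attempt strategies; the
# cross-puzzle check for Algorithm Design is approximated within one sequence.
from collections import Counter

def label_stage(strategies):
    """strategies: ordered list such as ['winnowing', 'winnowing', ...]."""
    runs = Counter(s for s in strategies if s != "unordered")
    # Algorithm design: ordered combinations of two or more strategies
    # (approximated as at least two strategies, each repeated >= 3 times).
    if sum(1 for c in runs.values() if c >= 3) >= 2:
        return "algorithm design"
    # Pattern recognition: one or more strategies repeated at least three times.
    if any(c >= 3 for c in runs.values()):
        return "pattern recognition"
    # Problem decomposition: no strategy repeated more than three times.
    return "problem decomposition"

print(label_stage(["winnowing"] * 4))  # -> pattern recognition
```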

TABLE 2 Examples of problem-solving strategies.

Testing one Winnowing Additive Subtracting Replacing


Pattern p00001000a p00101100 p00100000 p11010111 p00101100
p00100000 p00101000 p00110000 p00101000 p01101000
p00100000 p00110010
a ObjectID of each action.

TABLE 3 Examples of computational thinking stages.

Problem decomposition Pattern recognition Algorithm design


puzzle_start puzzle_start puzzle_start
p00100010 p00111100 p00001100
p01100000 p00111000 p00001110
p00110011 p00110000 p00010000
p11110010 p00100000 p10000000
puzzle_end puzzle_end puzzle_end
Solution p01001001 p00100000 p10001100
puzzle_start puzzle_start puzzle_start
p11111000 p00000100 p00010000
p00000111 p00100000 p00011000
p11011101 p00000010 p00000010
p00000100 p10000000 p00000001
puzzle_end puzzle_end puzzle_end
Solution p00110000 p001001001 p00011001
Note: Red is Testing One, blue is Additive, green is Winnowing, purple is Subtracting and orange is Replacing.

Measuring gameplay performance and other objective factors

Duration

The duration of gameplay attempts stands as a pivotal proxy for assessing students' persistence
and strategy deployment. Building upon prior research (Rehbein et al., 2016), which has suggested
that gameplay duration holds potential as an indicator of students' learning performance, we aim to
uncover the nuanced implications of this temporal dimension. By exploring the temporal aspects
of students' interactions with Zoombinis, we seek to discern whether differing durations correlate
with distinct levels of CT competence. This exploration will elucidate whether students with well-­
developed CT skills tend to exhibit more efficient and expedited gaming behaviours, or if prolonged
durations signify unproductive trials and, in turn, unproductive CT performance.

Actions

Another pivotal variable in our analysis is the number of actions undertaken during each game-
play attempt. As illustrated in Table 1, students engage in various numbers of actions within
each attempt. Depending on their CT proficiency, students may employ distinct problem-­solving
strategies in their endeavours, leading to varying numbers of actions. Taking Pizza Pass as an example, a student may apply a streamlined approach by selecting only one topping or,
alternatively, may adopt a more experimental approach by choosing multiple toppings and then
refining their selections. Consequently, the count of actions serves as a valuable indicator of
students' problem-­solving strategies and their competency in applying CT skills.

Accuracy

Integral to the examination of students' CT performance is the assessment of accuracy in their gameplay attempts. Once students have successfully created a particular solution, they
submit it to Zoombinis for evaluation, and their attempts are categorized as either correct or
incorrect. By investigating the accuracy of their attempts, we endeavour to identify whether
students' proficiency in CT corresponds to an increased likelihood of producing accurate so-
lutions or whether a deficiency in CT skills leads to a higher incidence of incorrect attempts.
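Taken together, the three performance variables can be computed per attempt from the log rows between consecutive submissions. The sketch below assumes timestamped rows and success-feedback events extending the Table 1 examples; all names are illustrative assumptions.

```python
# A hedged sketch of deriving duration, action count and accuracy for one
# attempt; event names extend the Table 1 examples and are assumptions.
SUCCESS_EVENTS = {"Troll_Accepts", "Object_Move_Accept"}
ACTION_EVENTS = {"Select_Topping", "Lift_Object"}

def attempt_features(rows):
    """rows: dicts with 't' (seconds) and 'Event' keys, in temporal order."""
    duration = rows[-1]["t"] - rows[0]["t"]              # time elapsed
    actions = sum(r["Event"] in ACTION_EVENTS for r in rows)
    accuracy = int(rows[-1]["Event"] in SUCCESS_EVENTS)  # 1 = correct
    return duration, actions, accuracy

rows = [
    {"t": 0, "Event": "Select_Topping"},
    {"t": 25, "Event": "Select_Topping"},
    {"t": 101, "Event": "Deliver_Toppings"},
    {"t": 137, "Event": "Troll_Accepts"},
]
print(attempt_features(rows))  # -> (137, 2, 1)
```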

Gender and grade

Existing research (Gómez-­Gonzalvo et al., 2020; Rehbein et al., 2016) has underscored the
substantial impact of students' gender and age on their learning processes and outcomes.
This study also explores the potential influence of demographic variables such as gender
and grade level on students' CT skills.

Puzzle

Zoombinis encompasses a variety of puzzles, each designed to foster different CT mechanisms. For instance, in Allergic Cliffs, students must compare and differentiate the necessary features of each Zoombini and align them with the appropriate bridge—a process that builds fundamental mathematical concepts such as sorting and comparison. In con-
trast, Pizza Pass challenges students to grasp the concept of dimension by manipulating
toppings as discrete dimensions. Students have to develop algorithms to combine these
dimensions through ordered sequences. Since each puzzle in Zoombinis requires distinct
underlying CT practices and problem-­solving strategies, we regard the puzzle variable as
a potential influential factor in students' CT performance. Our analysis aims to examine
whether the choice of puzzle correlates with variations in students' CT abilities and whether
the nuances of these puzzles play a role in shaping CT competence.

Difficulty

Each puzzle presents itself with four levels of difficulty: easy, not easy, hard and very hard.
The inherent challenge posed by each puzzle escalates progressively across these difficulty
levels, demanding more intricate problem-­solving strategies and an elevated proficiency in
CT skills. For instance, in Pizza Pass, the easy level may feature a solitary troll with one or two
desired toppings, presenting relatively straightforward challenges. However, as the difficulty
level increases, the complexity of the puzzle intensifies. Two or three trolls may come into
play, each necessitating intricate combinations of toppings and ice creams. In essence, the
ascending levels of challenge impel students to extend their problem-­solving strategies from
single dimensions to encompass multi-­dimensions or even matrices. Consequently, puzzle
difficulty assumes significance as a potential factor interrelated with students' CT skills.

Data analysis

This study aimed to explore whether gender, grade, puzzle and its difficulty, and game-
play performance play statistically significant roles in distinguishing students' CT skills
in Zoombinis. Specifically, we applied multinomial logistic regression analysis (Kwak &
Clayton- ­Matthews, 2002) to identify the relationships between these factors and students'
CT stages. CT stages were recognized as an explained variable, while students' gameplay
performance, demographics and puzzle information were viewed as explanatory variables.
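For reference, the fitted model takes the standard multinomial logit form, with Algorithm Design as the reference stage (a standard formulation, not reproduced from the paper):

```latex
\[
\ln\frac{P(Y = k \mid X_1,\dots,X_7)}
        {P(Y = \text{Algorithm Design} \mid X_1,\dots,X_7)}
  = \beta_{k0} + \sum_{i=1}^{7}\beta_{ki}\,X_i,
\qquad
\operatorname{Exp}(B) = e^{\beta_{ki}},
\]
where $k$ ranges over Problem Decomposition and Pattern Recognition.
```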

It is worth noting that CT stages, gender, accuracy, puzzle and difficulty were represented as dummy variables in this analysis. Dummy variables are commonly employed in regression analysis to encode categorical features: when Xi = 1, the subject belongs to the ith category; otherwise, Xi = 0 (Hardy, 1993; Suits, 1957). These variables were incorporated into the regression analysis as valid predictors, given that the necessary encoding steps were taken to ensure that the solution of the normal equations would be determinate.
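The paper ran this analysis in IBM SPSS; as a hedged open-source sketch, the same dummy coding and multinomial fit could look roughly like the following, assuming an attempt-level CSV with the hypothetical column names shown.

```python
# A sketch of the dummy coding and multinomial logistic regression; file and
# column names are illustrative assumptions, not the authors' artifacts.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("attempts.csv")  # hypothetical attempt-level feature table

# Dummy-code the categorical predictors; drop_first avoids the dummy trap.
X = pd.get_dummies(
    df[["gender", "grade", "accuracy", "duration", "actions", "puzzle", "difficulty"]],
    columns=["gender", "accuracy", "puzzle", "difficulty"],
    drop_first=True,
).astype(float)
X = sm.add_constant(X)

# Code CT stages with Algorithm Design as 0 so it serves as the reference.
y = df["ct_stage"].map(
    {"algorithm design": 0, "problem decomposition": 1, "pattern recognition": 2}
)

fit = sm.MNLogit(y, X).fit()
print(fit.summary())  # coefficients B; odds ratios are Exp(B) = exp(B)
```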
In addition, we applied the structural equation model (Ullman & Bentler, 2012) to estimate
the multiple and interrelated dependence between these factors and CT stages. Absolute fit
χ2, df, comparative fit index (CFI), Non-­Normed Fit Index (NNFI) and root mean square error
of approximation (RMSEA) were used to measure model fitness.
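The SEM was estimated in IBM SPSS AMOS; a rough Python analogue with the semopy package might wire up the paths reported in Figure 3 as below (the package choice, variable names and file are all assumptions).

```python
# Sketch of an SEM fit mirroring the Figure 3 paths; illustrative only.
import pandas as pd
import semopy

df = pd.read_csv("attempts.csv")  # hypothetical attempt-level feature table

desc = """
ct_stage ~ accuracy + duration + actions + difficulty
duration ~ difficulty
duration ~~ actions
"""
model = semopy.Model(desc)
model.fit(df)
print(semopy.calc_stats(model).T)  # chi2, df, CFI, TLI (NNFI), RMSEA, ...
```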
Next, we provided insights into the relationship between students' gameplay performance
and their CT stages, which might be a promising method to assess implicit CT skills in
Zoombinis based on their explicit learning achievement. First, we used the K-means clustering algorithm to divide students into three groups based on their performance parameters.
Second, we employed histogram and cross-­tabulation with χ2 test analysis to statistically
describe the interrelated connection between gameplay performance and CT stages.
Students' gameplay actions were extracted from log files, which were used to identify the
problem-­solving strategies and CT stages with Python code. Other explanatory variables
were also obtained from students' log data. Multinomial logistic regression and other statistical analyses were conducted in IBM SPSS, and the structural equation model was estimated in IBM SPSS AMOS (Blunch, 2012).
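A compact sketch of this grouping and the subsequent independence test, assuming scikit-learn's K-means and SciPy's chi-square on an attempt-level table (the paper used IBM SPSS; column names here are illustrative):

```python
# Sketch: cluster attempts into three performance groups, then test the
# association between group membership and CT stage.
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("attempts.csv")  # duration, actions, accuracy, ct_stage

scaled = StandardScaler().fit_transform(df[["duration", "actions", "accuracy"]])
df["group"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Cross-tabulate performance groups against CT stages and test independence.
table = pd.crosstab(df["group"], df["ct_stage"])
chi2, p, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2={chi2:.2f}, df={dof}, p={p:.4f}")
```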

RESULTS

Which factors play important roles in distinguishing students' CT in Zoombinis?

Table 4 shows the descriptive results of explained and explanatory variables.

Multinomial logistic regression

A multinomial logistic regression model was built to explore the relationships between various factors and students' CT skills. It helps examine the variables that distinguish students' attempts with different CT stages. Explanatory variables, including gender, grade, accuracy, duration, actions, puzzle and difficulty, were included. Table 5 presents the developed model, and Table 6 is used to decide whether each explanatory variable meaningfully explains stu-
dents' CT stages. According to the results, accuracy (p < 0.05), duration (p < 0.05), actions
(p < 0.05) and difficulty (p < 0.05) were found to play significant roles in distinguishing stu-
dents' CT stages. Taking Algorithm Design as the reference value, lower accuracy increased
the probability of being in Problem Decomposition; lower accuracy, shorter duration, more
actions and greater difficulty increased the probability of identifying as Pattern Recognition.
Regarding model fit, the Nagelkerke R2 indicates the amount of variance explained by the model, with 1.00 representing a perfect fit. According to Table 5, the Nagelkerke R2 statistic was 0.273, indicating that the explanatory variables explained approximately 27% of the variance in students' CT stages. Table 7 shows more details of the model fitting information (χ2 = 1719.520; df = 14; p < 0.001), indicating that the developed model was statistically significant.

TABLE 4 Variables and descriptive statistics.

Variables Definitions and values Mean SD


Explained variable
CT (Y)a Students' CT stages
Problem decomposition (43.66%), pattern recognition (32.57%), algorithm
design (23.77%)
Explanatory variables
Students' demographics
Gender (X1)a Students' gender
Male (51.66%), female (48.34%)
Grade (X 2) Students' grade level 4.89 1.46
Gameplay performance
Accuracy (X3)a This attempt is correct or incorrect.
Incorrect (55.39%), correct (44.61%)
Duration (X4) Time elapsed at this attempt 104.12 87.49
Actions (X5) Total actions at this attempt 15.02 12.26
Puzzle information
Puzzle (X6)a The name of current puzzle
Allergic Cliffs (16.01%), Pizza Pass (38.42%); Mudball Wall (25.93%),
Bubblewonder Abyss (19.64%)
Difficulty (X7)a The difficult level of current puzzle.
Easy (80.29%), not easy (13.52%), hard (4.37%), very hard (1.82%)
a Dummy variables were utilized with percentages rather than the mean and standard deviation.

TABLE 5 Logistic regression analysis results.

Problem decomposition Pattern recognition

B Exp (B) p-­value B Exp (B) p-­Value


Intercept 4.814 0.000 3.130 0.000
X1 −0.046 0.955 0.578 −0.038 0.963 0.586
X2 −0.007 0.993 0.820 −0.001 0.999 0.958
X3 −2.987 0.050 0.000 −1.447 0.235 0.000
X4 −0.009 0.991 0.000 −0.005 0.995 0.000
X5 0.030 1.030 0.000 0.030 1.030 0.000
X6 −0.124 0.883 0.064 −0.029 0.971 0.600
X7 0.231 1.260 0.000 0.092 1.096 0.004
Nagelkerke R2 = 0.273
Note: The reference CT stage is Algorithm Design.

Structural equation model

A structural equation modelling approach was employed to test the associations among the
variables more holistically. Specifically, we only considered the explanatory variables such
as accuracy and duration that explained students' CT stages significantly based on the
results of multinomial logistic regression. Results showed that the structural equation model

TABLE 6 Likelihood ratio tests.

Effect −2 log likelihood of reduced model Chi-square p-Value


Intercept 11,798.518 333.889 0.000
Gender (X1) 11,464.996 0.367 0.832
Grade (X 2) 11,464.698 0.068 0.967
Accuracy (X3) 12,685.221 1220.592 0.000
Duration (X4) 11,661.395 196.766 0.000
Actions (X5) 11,527.518 62.889 0.000
Puzzle (X6) 11,468.911 4.282 0.118
Difficulty (X7) 11,503.594 38.965 0.000

TABLE 7 Model fitting information.

Model fitting criteria Likelihood ratio tests

Model −2 log likelihood Chi-square p-Value


Intercept only 13,184.150
Final 11,464.630 1719.520 0.000

FIGURE 3 Structural equation model with coefficients.

fit the data well (χ2 = 36.156; df = 5; NNFI = 0.989; CFI = 0.994; and RMSEA = 0.032). The proposed model with standardized coefficients is presented in Figure 3, where the solid black arrows represent statistically significant relationships and the values are the standardized coefficients.
According to Figure 3, duration and actions influenced each other mutually, and diffi-
culty affected duration. Duration, difficulty and accuracy were positively associated with
students' CT stages, while no significant correlation was found between actions and CT
stages. In terms of the coefficients, accuracy played the most important role in identifying
students' CT stages (β = 0.57), followed by duration (β = 0.26). Puzzle difficulty explained students' CT stages with a coefficient of 0.07, smaller than those of the other two variables.

TABLE 8 Cross-tabulation of gameplay performance and CT stages.

         Problem decomposition   Pattern recognition   Algorithm design   Total
High     1251 (−27.5)            881 (−5.6)            2504 (34.4)        4636
Medium   2566 (−23.7)            2943 (31.9)           751 (−10.7)        6260
Low      2778 (58.1)             1095 (−28.6)          336 (−28.3)        4209
Total    6595                    4919                  3591               15,105

χ2 = 4094.91, Cramer's V = 0.37, p < 0.001.
Note: Adjusted residuals appear in parentheses beside observed frequencies.

FIGURE 4 CT stages in different gameplay performance groups.

What is the relationship between students' gameplay performance and CT skills in Zoombinis?

According to the previous findings, students' gameplay performance showed a critical as-
sociation with their CT stages, and all three gameplay performance variables had significant
effects on CT stages. As a result, we further categorized gameplay attempts into three
groups based on the variables of gameplay performance and examined the relationship
between the performance levels and CT stages. The K-means algorithm was used to divide students' attempts into three groups: high-performing, medium-performing and low-performing. Attempts with shorter duration, fewer actions and higher accuracy were categorized into the high-performing group. Attempts with longer duration, more actions and lower accuracy were classified into the low-performing group. All other attempts were sorted into the medium-performing group. There were 4636 high-performing attempts, 6260 medium-performing attempts and 4209 low-performing attempts.
As shown in Table 8, a χ2 test was conducted to analyse the independence of gameplay performance and CT stages. The results revealed a significant relation between these two
variables, χ2(4, N = 15,105) = 4094.91, Cramer's V = 0.37, p < 0.001.

In Figure 4, we further examined the interrelated dependence between students' gameplay performance and CT stages. According to this histogram, every CT stage could appear in different performance groups. However, meaningful differences should be captured and analysed in detail. In the high-performing group, Algorithm Design occurred in 54% of attempts, more than the other two stages. In the low-performing group, Problem Decomposition appeared most frequently, at 66%. In the medium-performing group, Problem Decomposition and Pattern Recognition both accounted for substantial proportions, at 41% and 47%, respectively.

DISCUSSION

To assess students' implicit CT in Zoombinis, we examined the interrelated relations between students' demographics, gameplay performance, puzzle information and CT stages. According to the results, duration, accuracy, number of actions and puzzle difficulty played essential roles in distinguishing individual students' CT stages. Algorithm Design frequently occurred in the high-performing group, while Problem Decomposition often appeared in the low-performing group. This study was inspired by previous research (Asbell-Clarke et al., 2021), which applied implicit in-game measures to assess students' CT skills. We extended the prior work to consider more variables and the relationships between gameplay performance
and CT stages, illustrating a promising method to assess students' CT skills based on their
explicit game-­based learning achievement. The following subsections discuss our two re-
search questions, address primary limitations and suggest future research directions.

Which factors play important roles in distinguishing students' CT in Zoombinis?

Driven by the increasing interest in constructing CT in K-­12 education, the field dedicated
considerable effort to developing and evaluating students' CT skills (Tang et al., 2020). Even
though a body of literature has established practical assessment of CT skills in programming
or computing scenarios, more scalable and reliable CT measures need to be advanced in
games. In Zoombinis, students construct CT concepts by building effective problem-solving strategies to solve specific puzzles. The operationalization of the CT process in this game is implicit and hard to capture, which may be a barrier to CT assessment (Asbell-Clarke
et al., 2021). In this study, we utilized multinomial logistic regression and structural equation
model to determine which criteria effectively classify students' implicit CT. The findings are
expected to benefit implicit CT assessment based on the accessible and explicit gameplay
observations.
In congruence with previous work (Asbell-Clarke et al., 2021), students' gender and grade level did not relate to their CT activities. As expected, a significant relationship
was uncovered between students' gameplay performance and CT practices. Based on the
results of multinomial logistic regression, duration, accuracy, total number of actions and
puzzle difficulty were determined to distinguish students' CT stages. There is a 27% ex-
plained connection between CT stages and these explanatory variables. When Algorithm
Design was taken as the reference value, lower accuracy indicated a higher probability of
Problem Decomposition; lower accuracy, shorter duration, more actions and higher difficulty
level implied a higher probability of Pattern Recognition. Besides, the structural equation
model showed that the accuracy of each attempt played a critical role in distinguishing stu-
dents' CT stages, while duration and puzzle difficulty were also regarded as influential factors in implicit CT assessment.

Contributing to the rising work on CT assessment in game-based learning environments, we revealed the multiple and interrelated dependence between various factors
and students' implicit CT, which can be used as external detectors of implicit CT assess-
ment in Zoombinis. Based upon the results, students whose attempts exhibited shorter duration, fewer actions, higher accuracy and easier puzzles may perform at higher CT stages, indicating that their CT skill development was more confident and efficient. This helps researchers and instructors simplify the process of CT assessment,
especially in game-­based learning environments. As many studies (Doleck et al., 2017;
Haddad & Kalaani, 2015; Mindetbay et al., 2019) suggested, students' academic perfor-
mance was significantly associated with their CT skills, implying that students' CT skills
could be validly and reliably evaluated by the external and observable performance-­
based measurement. Therefore, it is believed that informing the relationships between
explicit factors and implicit CT for CT skill assessment in game-based learning will open
more auspicious doors.
We faced some limitations. First, more factors should be included and analysed, such as
students' learning abilities and emotional states. A study (Rose et al., 2013) suggested that
disabled students may acquire CT knowledge and concepts in a totally different way, indicat-
ing that their gaming performance may have different relations with CT skills. Establishing
the general CT assessment for diverse students is critical in this research field. Second,
the conceptual framework of CT assessment in games might be improved. In this study, we
included duration, actions and accuracy as variables to describe students' gameplay per-
formance. Altanis and Retalis (2019) introduced a different framework to measure students'
analysing and designing skills in a programming-­based game, in which more gameplay
factors were considered. We will devote ourselves to the following work to analyse more
variables of CT skills and games, constructing a more reliable and valid framework of implicit
CT assessment in Zoombinis.

What is the relationship between students' gameplay performance and CT skills in Zoombinis?

CT assessment in game-based learning environments faced difficulties due to the lack of observable evidence (Rowe et al., 2021; Tang et al., 2020; Weintrop et al., 2021). In games,
CT concepts are embedded in ill-­structured problem-­solving scenarios. Students developed
CT competency by constructing practical strategies to decompose and abstract problems,
recognizing problem-­solving patterns, and designing and generalizing algorithms. CT pro-
cess is concealed in students' sequential and structural gameplay actions, which is chal-
lenging to be captured by traditional measurements such as psychometric scale. Viewing
CT as a real-­life skill rather than the abstract knowledge, Çoban and Korkmaz (2021) em-
phasized that performance-­based gameplay measurement was an efficient alternative for
assessing students' implicit CT in games. The organization of algorithmic problem-­solving
strategies enables students to perform better, explicitly reflecting their CT skills. Depending
upon our prior work (Liu & Israel, 2022), we further investigated the relationship between
students' gameplay performance and CT skills, proposing a promising direction to assess
implicit CT from the explicit learning performance in Zoombinis.
To identify students' CT stages, we carefully analysed students' log files and uncovered
five problem-solving strategies. The structural arrangement of these problem-solving strat-
egies was employed as the dominant evidence describing students' CT stages. Applying
K-means clustering techniques, we divided students into three groups based on their
performing attempts. We then used cross-tabulation tables and histograms as visualization
tools to describe the relation between gameplay performance and CT stages.
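
As an illustration of this pipeline, the sketch below clusters synthetic per-attempt records into three groups with scikit-learn's KMeans and cross-tabulates the resulting performance groups against CT stages; all column names, stage labels and data are hypothetical stand-ins for the study's log-derived variables.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300

# Hypothetical per-attempt table; column and stage names are illustrative only.
df = pd.DataFrame({
    "duration":  rng.gamma(2.0, 30.0, n),
    "n_actions": rng.poisson(12, n),
    "accuracy":  rng.uniform(0.0, 1.0, n),
    "stage":     rng.choice(["Problem Decomposition", "Pattern Recognition",
                             "Algorithm Design"], size=n),
})

# Standardize before clustering: K-means distances are scale-sensitive.
scaled = StandardScaler().fit_transform(df[["duration", "n_actions", "accuracy"]])

km = KMeans(n_clusters=3, n_init=10, random_state=0)
df["perf_group"] = km.fit_predict(scaled)  # three performance clusters

# Row-normalized cross-tabulation: the CT-stage mix within each performance cluster.
print(pd.crosstab(df["perf_group"], df["stage"], normalize="index").round(2))
```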
The findings showed that students with high-performing attempts exhibited a higher probability
of working in Algorithm Design, while low-performing attempts often occurred in Problem
Decomposition and Pattern Recognition. Besides, results showed that accuracy played a
significant role in distinguishing individuals' CT stages, while duration and the number of
actions should also be considered influential factors. We extended prior research to include more
factors that might relate to CT practices and examined the details of the structural relations
between these factors and CT stages. Based on this interpretation, students' learning per-
formance is directly related to their CT skills, allowing researchers to develop external evi-
dence to infer and measure students' implicit CT in Zoombinis.
Bringing CT to K-12 education is a complicated task, requiring systematic course design,
thorough instruction, timely scaffolding and exhaustive assessment. Although much research
has been devoted to inspecting the relevance between students' learning performance and
CT skills (Haddad & Kalaani, 2015; Polat et al., 2021; Tang et al., 2020; Villalba-Condori
et al., 2018), few effective assessments of implicit CT in games have been established. Our
study suggested that achieving higher gaming scores indicated more analytical CT stages.
Researchers and instructors can infer students' CT skills by observing their gameplay per-
formance in Zoombinis. The results also help game developers and designers review the
effectiveness of educational games in promoting K-12 students' CT knowledge acquisition
and competency development. Assessing students' implicit CT according to their perfor-
mance is an encouraging way to measure students' CT practices in games, which may be
used to provide appropriate interventions and facilitate CT learning experiences in the future.
A significant study limitation is that the causal relationship between gameplay perfor-
mance and CT skills is far from established. On the one hand, Yildiz Durak et al. (2021)
suggested that students' affective states influence their CT and programming practices.
Students who obtained positive feedback from Zoombinis might improve their confidence
and engagement, which subsequently impacted their CT activities. On the other hand,
Haddad and Kalaani (2015) proposed that CT skills enhance students' learning success.
Students who mastered structural knowledge of mathematics and computer science could
develop efficient gameplay patterns to obtain higher achievement. In this study, no directional
relationship between gameplay performance and CT skills was established. In future
work, we should carefully consider the causal relations and construct equations to explain
the connection between gameplay performance and CT skills. Besides, CT assessment can
be extended to various game scenarios, such as puzzle-based games and
block-based programming games. The relations between gaming achievement and CT skills
may exhibit different interrelated dependencies in particular game scenarios. Much work
on the effects of different game-based learning environments on the relevance between
gameplay performance and CT practices is expected in the future.

Implications

Game-based learning enhances computational thinking skills

When compared to other learning environments, such as traditional classrooms, games
present a more intricate setting for tracking the dynamic processes of knowledge construc-
tion and skill development (Hainey et al., 2014). Much like many other game-­based learning
environments, Zoombinis incorporates CT concepts, such as mapping, sorting and compar-
ing, within problem-­solving scenarios that often involve implicit learning trajectories. The
features of this game allow students to develop diverse strategies and navigate toward so-
lutions through different pathways. Moreover, the inherent complexity of problem-­solving
scenarios necessitates that students draw upon their real-­world experiences and learn from
their past errors.
The fluid and adaptive nature of gameplay actions makes it a more efficient tool compared
to traditional surveys or quizzes when it comes to unveiling students' implicit thinking abili-
ties (Rowe et al., 2020). Gaming actions can provide meticulous records of the sequences
and patterns in students' deliberate behaviours. This, in turn, offers a valuable window into
how students approach problem decomposition, construct effective strategies and devise
efficient algorithms to surmount these challenges. Consequently, the outcomes of this study
hold the potential for broader application to other game-­based learning environments that
exhibit similar characteristics. This is especially pertinent in cases where CT and mathemat-
ical concepts are deeply integrated into problem-­solving scenarios and where capturing the
nuances of students' learning processes remains a formidable challenge.

Computational thinking assessment in game-­based learning environments

Although game-­based learning has demonstrated its efficacy in nurturing CT skills, the chal-
lenge of assessing students' implicit problem-­solving behaviours and CT performance persists.
This pilot study introduces a promising method for measuring students' implicit CT abilities based
on explicit variables within Zoombinis. The findings suggest a significant relationship between
students' gameplay performance, including accuracy, duration and the number of actions taken,
and their application of problem-­solving strategies. Importantly, these variables can be easily
derived from students' log files, providing a valuable and accessible resource for researchers
and instructors to infer the underlying CT processes in games. This research focused primarily
on the analysis of Zoombinis, and while our aim was to delve into the intricacies of this
specific game, we acknowledge the potential broader implications of the findings.
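
As a minimal illustration of how such variables can be derived from log files, the sketch below computes duration, number of actions and accuracy per attempt from a toy event log; the log schema here is a hypothetical stand-in, not the actual Zoombinis format.

```python
import pandas as pd

# Hypothetical log format: one row per student action with an attempt id, a
# timestamp and a correctness flag. Real Zoombinis logs will differ; this only
# illustrates the feature derivation.
log = pd.DataFrame({
    "attempt_id": [1, 1, 1, 2, 2, 2, 2],
    "timestamp": pd.to_datetime([
        "2024-01-01 10:00:00", "2024-01-01 10:00:08", "2024-01-01 10:00:20",
        "2024-01-01 10:01:00", "2024-01-01 10:01:05", "2024-01-01 10:01:30",
        "2024-01-01 10:01:41",
    ]),
    "correct": [True, False, True, True, True, False, True],
})

# Aggregate the event stream into the per-attempt explicit variables.
features = log.groupby("attempt_id").agg(
    duration=("timestamp", lambda t: (t.max() - t.min()).total_seconds()),
    n_actions=("correct", "size"),
    accuracy=("correct", "mean"),
)
print(features)
```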
For instance, consider a digital game designed to develop the CT concepts of sorting
and comparing. Instructors can assess students' underlying CT stages by observing factors
such as the time taken to construct desired solutions and the accuracy of their attempts.
Moreover, this study's findings suggest that more challenging game levels may demand
higher CT skills from students. Hence, the level of difficulty within each game can pro-
vide insights into students' CT processes. Furthermore, the types of concepts embedded in
games do not appear to impact students' learning performance significantly. In other words,
whether the gameplay content is centred on sorting or comparing concepts, there seem to
be no substantial differences in students' CT abilities.
Different games often emphasize distinct features and employ diverse learning mech-
anisms. Understanding how CT skills interact with this wide array of game-­based learning
environments is integral to the broader generalizability of our research. In this regard, con-
ducting comparative studies across various game genres holds significant potential for en-
hancing the comprehension of how to effectively assess CT skills by considering the unique
characteristics of games. Further exploration is required to establish the intricate relation-
ships between gaming factors and students' CT skills. More research should be dedicated
to examining how these factors may potentially influence CT practices, either by facilitating
or inhibiting them. Such insights will equip researchers with the means to offer timely and
efficient interventions when students exhibit unproductive performance in CT activities.

Data analytics in implicit learning assessment

While this study utilizes Zoombinis as a case study to exemplify the assessment of im-
plicit CT experiences, the insights gained can be extended to enhance implicit learning
assessments in various games. In Zoombinis, CT is structured within a stage-based learning
process, aligning with the prevailing CT framework in existing research (Shute, Sun,
et al., 2017). Throughout each CT stage, students engage in sequential gameplay actions,
exhibiting practical problem-­solving strategies that reflect their CT skills. Utilizing sequential
analysis, this study discerns recurring patterns by analysing the features of students' game-
play actions, introducing a promising way to understand hidden CT development. Similar
games designed to improve students' CT skills, mathematical understanding and problem-­
solving abilities can draw inspiration from this study to categorize implicit learning into dis-
tinct stages and apply efficient machine learning techniques to investigate how gameplay
actions can be leveraged to identify implicit learning stages.
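
A minimal sketch of such sequential pattern mining is shown below: it counts action bigrams across attempts with Python's standard library, assuming an invented action vocabulary rather than Zoombinis' actual event types.

```python
from collections import Counter

# Hypothetical action sequences, one list per puzzle attempt; the action
# vocabulary is invented for illustration.
attempts = [
    ["test", "observe", "test", "observe", "place"],
    ["place", "place", "test", "observe", "place"],
    ["test", "observe", "place", "test", "observe", "place"],
]

# Count action bigrams across attempts: frequently recurring pairs (eg, test
# followed by observe) suggest a systematic strategy rather than trial and error.
bigrams = Counter(
    pair
    for seq in attempts
    for pair in zip(seq, seq[1:])
)
for pattern, count in bigrams.most_common(3):
    print(pattern, count)
```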
Through the application of multinomial logistic regression and structural equation modelling, this
study further delves into the connections between explicit variables and CT stages. Based
on the findings, it is evident that certain gameplay variables hold a significant correlation
with specific CT stages. The broader application of these results involves an exploration of
how the explicit variables in different games serve as evidence of students' implicit learning
performance. This, in turn, can simplify implicit learning assessment by observing the
explicit factors in games. For future studies, there is potential for in-depth exploration into
modelling how various explicit factors, such as linguistic synchrony, reflect implicit
knowledge mastery and skill construction. This avenue could provide valuable insights into
implicit learning assessment and timely scaffolding within learning environments where
students' learning processes are challenging to capture.
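
For readers who wish to reproduce this style of analysis, the following is a minimal path-model sketch using the third-party semopy package on synthetic data; treating the CT stage as a continuous score is a simplification made for illustration, not the study's actual model specification, and all variable names are assumptions.

```python
import numpy as np
import pandas as pd
import semopy  # third-party SEM package; its use here is an assumption

rng = np.random.default_rng(2)
n = 300

# Synthetic observed variables (names illustrative only).
df = pd.DataFrame({
    "accuracy":   rng.uniform(0.0, 1.0, n),
    "duration":   rng.gamma(2.0, 30.0, n),
    "difficulty": rng.integers(1, 5, n).astype(float),
})
# A continuous stand-in for the CT stage, simplified for this sketch.
df["stage"] = (2.0 * df["accuracy"] - 0.01 * df["duration"]
               - 0.2 * df["difficulty"] + rng.normal(0.0, 0.5, n))

# A simple path model: CT stage regressed on the explicit gameplay factors.
desc = "stage ~ accuracy + duration + difficulty"
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())  # path coefficients, standard errors, p-values
```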

CONCLUSION

Drawing upon the developed conceptual framework of implicit CT assessment in games,
this study leveraged efficient approaches to determine which factors were capable of distin-
guishing students' CT stages. The findings revealed that students' implicit CT in Zoombinis
was substantially associated with duration, accuracy, actions and puzzle difficulty. Building
on this, we further examined the relationship between gameplay performance and CT
stages. The results suggested that higher performance (shorter duration, fewer actions
and higher accuracy) was linked to advanced CT stages, whereas lower performance was
associated with the two more basic CT stages. A comprehensive understanding of the re-
lationships between CT stages and various factors can serve as an effective method to
infer students' CT skills in game-based learning environments. By simplifying the assess-
ment process, this work enables researchers and instructors to assess students' implicit
CT in Zoombinis from explicit factors. We believe future work will soon develop reliable
and valid CT measures across various games to enhance CT practices in K-12
education.

ACKNOWLEDGEMENTS
This work was partially supported by the U.S. Department of Education (No. U411C190179).
TL thanks Professor Maya Israel and TERC for the support and comments on this work.

CONFLICT OF INTEREST STATEMENT
There is no conflict of interest resulting from the research work in this study.

DATA AVAILABILITY STATEMENT
All the data used in this study was obtained from Data Arcade, a private dataset developed
by TERC. Owing to privacy/ethical restrictions, people seeking access to this data should
submit requests to the authors to obtain permission.
ETHICS STATEMENT
All data was collected and analysed after approval by the Institutional Review Board.

ORCID
Tongxi Liu https://orcid.org/0009-0009-0532-1050

REFERENCES
Aho, A. V. (2012). Computation and computational thinking. The Computer Journal, 55(7), 832–835.
All, A., Castellar, E. N. P., & Van Looy, J. (2021). Digital game-­ based learning effectiveness assessment:
Reflections on study design. Computers & Education, 167, 104160.
All, A., Castellar, E. P. N., & Van Looy, J. (2016). Assessing the effectiveness of digital game-­based learning: Best
practices. Computers & Education, 92, 90–103.
Almeda, M. V., Rowe, E., Asbell-­Clarke, J., Scruggs, R., Baker, R., Bardar, E., & Gasca, S. (2019). Modeling
implicit computational thinking in Zoombinis mudball wall puzzle gameplay. In Proceedings of the 2019
Technology, Mind, and Society Conference.
Altanis, I., & Retalis, S. (2019). A multifaceted students' performance assessment framework for motion-­based
game-­making projects with scratch. Educational Media International, 56(3), 201–217.
Ardito, G., Czerkawski, B., & Scollins, L. (2020). Learning computational thinking together: Effects of gender dif-
ferences in collaborative middle school robotics program. TechTrends, 64(3), 373–387.
Arena, D. A., & Schwartz, D. L. (2014). Experience and explanation: Using videogames to prepare students for
formal instruction in statistics. Journal of Science Education and Technology, 23(4), 538–548.
Asbell-­Clarke, J., Rowe, E., Almeda, V., Edwards, T., Bardar, E., Gasca, S., Baker, R. S., & Scruggs, R. (2021).
The development of students' computational thinking practices in elementary-­and middle-­school classes
using the learning game, Zoombinis. Computers in Human Behavior, 115, 106587.
Atmatzidou, S., & Demetriadis, S. (2016). Advancing students' computational thinking skills through educa-
tional robotics: A study on age and gender relevant differences. Robotics and Autonomous Systems, 75,
661–670.
Baptista, G., & Oliveira, T. (2018). Gamification and serious games: A literature meta-­analysis and integrative
model. Computers in Human Behavior, 92, 306–315.
Barr, D., Harrison, J., & Conery, L. (2011). Computational thinking: A digital age skill for everyone. Learning &
Leading with Technology, 38(6), 20–23.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to k-­12: What is involved and what is the role of
the computer science education community? ACM Inroads, 2(1), 48–54.
Basu, S., Biswas, G., & Kinnebrew, J. S. (2017). Learner modeling for adaptive scaffolding in a computational
thinking-­based science learning environment. User Modeling and User-­Adapted Interaction, 27(1), 5–53.
Blunch, N. (2012). Introduction to structural equation modeling using IBM SPSS statistics and AMOS. Sage.
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of computational
thinking. In Proceedings of the 2012 Annual Meeting of the American Educational Research Association,
Vancouver, Canada (Vol. 1, p. 25).
Çoban, E., & Korkmaz, Ö. (2021). An alternative approach for measuring computational thinking: Performance-­
based platform. Thinking Skills and Creativity, 42, 100929.
CSTA. (2017). CSTA K-12 computer science standards. https://www.csteachers.org/page/standards
Dagienė, V., Stupurienė, G., & Vinikienė, L. (2016). Promoting inclusive informatics education through the bebras
challenge to all k-­12 students. In Proceedings of the 17th International Conference on Computer Systems
and Technologies (pp. 407–414).
De Souza, A. A., Barcelos, T. S., Munoz, R., Villarroel, R., & Silva, L. A. (2019). Data mining framework to analyze
the evolution of computational thinking skills in game building workshops. IEEE Access, 7, 82848–82866.
Dimitra, K., Konstantinos, K., Christina, Z., & Katerina, T. (2020). Types of game-­based learning in education: A
brief state of the art and the implementation in Greece. European Educational Researcher, 3(2), 87–100.
Doleck, T., Bazelais, P., Lemay, D. J., Saxena, A., & Basnet, R. B. (2017). Algorithmic thinking, cooperativity,
creativity, critical thinking, and problem solving: Exploring the relationship between computational thinking
skills and academic performance. Journal of Computers in Education, 4(4), 355–369.
Fagerlund, J., Häkkinen, P., Vesisenaho, M., & Viiri, J. (2021). Computational thinking in programming with
scratch in primary schools: A systematic review. Computer Applications in Engineering Education, 29(1),
12–28.
Gee, J. P. (2009). Deep learning properties of good digital games: How far can they go? In Serious games (pp.
89–104). Routledge.
Gómez-­Gonzalvo, F., Molina, P., & Devís-­Devís, J. (2020). Which are the patterns of video game use in Spanish
school adolescents? Gender as a key factor. Entertainment Computing, 34, 100366.
González, M. R. (2015). Computational thinking test: Design guidelines and content validation. In Proceedings of
EDULEARN15 Conference (pp. 2436–2444).
Grover, S., Bienkowski, M., Niekrasz, J., & Hauswirth, M. (2016). Assessing problem-­solving process at scale. In
Proceedings of the Third (2016) ACM Conference on Learning@ Scale (pp. 245–248).
Grover, S., & Pea, R. (2018). Computational thinking: A competency whose time has come. Computer Science
Education: Perspectives on Teaching and Learning in School, 19, 1257–1258.
Guenaga, M., Eguíluz, A., Garaizar, P., & Gibaja, J. (2021). How do students develop computational thinking?
Assessing early programmers in a maze-­based online game. Computer Science Education, 31(2), 259–289.
Haddad, R. J., & Kalaani, Y. (2015). Can computational thinking predict academic performance? In IEEE Integrated
STEM Education Conference (pp. 225–229). IEEE.
Hainey, T., Connolly, T., Boyle, E., Azadegan, A., Wilson, A., Razak, A., & Gray, G. (2014). A systematic literature
review to identify empirical evidence on the use of games-­based learning in primary education for knowledge
acquisition and content understanding. In 8th European Conference on Games Based Learning: ECGBL (p.
167).
Hardy, M. A. (1993). Regression with dummy variables (Vol. 93). Sage Publication Inc.
Haseski, H. I., Ilic, U., & Tugtekin, U. (2018). Defining a new 21st century skill-­computational thinking: Concepts
and trends. International Education Studies, 11(4), 29–42.
Hicks, D., Eagle, M., Rowe, E., Asbell-­Clarke, J., Edwards, T., & Barnes, T. (2016). Using game analytics to
evaluate puzzle design and level progression in a serious game. In Proceedings of the Sixth International
Conference on Learning Analytics & Knowledge (pp. 440–448).
Hooshyar, D., Pedaste, M., Yang, Y., Malva, L., Hwang, G.-­J., Wang, M., Lim, H., & Delev, D. (2021). From gaming
to computational thinking: An adaptive educational computer game-­based learning approach. Journal of
Educational Computing Research, 59(3), 383–409.
Hooshyar, D., Yousefi, M., Wang, M., & Lim, H. (2018). A data-­driven procedural-­content-­generation approach for
educational games. Journal of Computer Assisted Learning, 34(6), 731–739.
Hsu, T.-­C., Chang, S.-­C., & Hung, Y.-­T. (2018). How to learn and how to teach computational thinking: Suggestions
based on a review of the literature. Computers & Education, 126, 296–310.
Ifenthaler, D., Eseryel, D., & Ge, X. (2012). Assessment for game-­based learning. In Assessment in Game-­Based
Learning (pp. 1–8). Springer.
Jiang, X., Harteveld, C., Huang, X., & Fung, A. Y. (2019). The computational puzzle design framework: A design
guide for games teaching computational thinking. In Proceedings of the 14th International Conference on the
Foundations of Digital Games (pp. 1–11).
Kalelioglu, F., Gülbahar, Y., & Kukul, V. (2016). A framework for computational thinking based on a systematic
research review. Baltic Journal of Modern Computing, 4(3), 583.
Kazimoglu, C., Kiernan, M., Bacon, L., & Mackinnon, L. (2011). Understanding computational thinking before
programming: Developing guidelines for the design of games to learn introductory programming through
game-­play. International Journal of Game-­Based Learning (IJGBL), 1(3), 30–52.
Kazimoglu, C., Kiernan, M., Bacon, L., & Mackinnon, L. (2012). A serious game for developing computational
thinking and learning introductory computer programming. Procedia-­Social and Behavioral Sciences, 47,
1991–1999.
Ke, F., & Shute, V. (2015). Design of game-­based stealth assessment and learning support. In C. Loh, Y. Sheng,
& D. Ifenthaler (Eds.), Serious games analytics (pp. 301–318). New York, NY: Springer.
Ke, F., Xie, K., & Xie, Y. (2016). Game-­based learning engagement: A theory-­and data-­driven exploration. British
Journal of Educational Technology, 47(6), 1183–1201.
Kerr, D., & Chung, G. K. (2012). Identifying key features of student performance in educational video games and
simulations through cluster analysis. Journal of Educational Data Mining, 4(1), 144–182.
Kjällander, S., Mannila, L., Åkerfeldt, A., & Heintz, F. (2021). Elementary students' first approach to computational
thinking and programming. Education Sciences, 11(2), 80.
Kwak, C., & Clayton-­Matthews, A. (2002). Multinomial logistic regression. Nursing Research, 51(6), 404–410.
Lin, S.-­Y., Chien, S.-­Y., Hsiao, C.-­L., Hsia, C.-­H., & Chao, K.-­M. (2020). Enhancing computational thinking capability
of preschool children by game-­based smart toys. Electronic Commerce Research and Applications, 44, 101011.
Liu, T., & Israel, M. (2022). Uncovering students' problem-­solving processes in game-­based learning environ-
ments. Computers & Education, 182, 104462.
Liu, Z.-­Y., Shaikh, Z., & Gazizova, F. (2020). Using the concept of game-­based learning in education. International
Journal of Emerging Technologies in Learning (IJET), 15(14), 53–64.
Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through program-
ming: What is next for k-­12? Computers in Human Behavior, 41, 51–61.
Lyon, J. A., & Magana, J. (2020). Computational thinking in higher education: A review of the literature. Computer
Applications in Engineering Education, 28(5), 1174–1189.
Maloney, J., Resnick, M., Rusk, N., Silverman, B., & Eastmond, E. (2010). The scratch programming language and
environment. ACM Transactions on Computing Education (TOCE), 10(4), 1–15.
Marcelino, M. J., Pessoa, T., Vieira, C., Salvador, T., & Mendes, A. J. (2018). Learning computational thinking and
scratch at distance. Computers in Human Behavior, 80, 470–477.
Mindetbay, Y., Bokhove, C., & Woollard, J. (2019). What is the relationship between students' computational
thinking performance and school achievement? International Journal of Computer Science Education in
Schools, 2(5), 3–19.
Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). Focus article: On the structure of educational assess-
ments. Measurement: Interdisciplinary Research and Perspectives, 1(1), 3–62.
Mouza, C., Pan, Y.-­C., Yang, H., & Pollock, L. (2020). A multiyear investigation of student computational think-
ing concepts, practices, and perspectives in an after-­school computing program. Journal of Educational
Computing Research, 58(5), 1029–1056.
Noh, J., & Lee, J. (2020). Effects of robotics programming on the computational thinking and creativity of elemen-
tary school students. Educational Technology Research and Development, 68(1), 463–484.
Papert, S. A. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books.
Pellas, N., & Vosinakis, S. (2018). The effect of simulation games on learning computer programming: A com-
parative study on high school students' learning performance by assessing computational problem-­solving
strategies. Education and Information Technologies, 23(6), 2423–2452.
Pho, A., & Dinscore, A. (2015). Game-­based learning. In Tips and trends (pp. 1–5). Instructional technology committee.
Plass, J. L., Homer, B. D., & Kinzer, C. K. (2015). Foundations of game-­based learning. Educational Psychologist,
50(4), 258–283.
Polat, E., Hopcan, S., Kucuk, S., & Sisman, B. (2021). A comprehensive assessment of secondary school stu-
dents' computational thinking skills. British Journal of Educational Technology, 52(5), 1965–1980.
Qian, M., & Clark, K. R. (2016). Game-­based learning and 21st century skills: A review of recent research.
Computers in Human Behavior, 63, 50–58.
Rehbein, F., Staudt, A., Hanslmaier, M., & Kliem, S. (2016). Video game playing in the general adult population of
Germany: Can higher gaming time of males be explained by gender specific genre preferences? Computers
in Human Behavior, 55, 729–735.
Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., Millner, A., Rosenbaum,
E., Silver, J., Silverman, B., & Kafai, Y. (2009). Scratch: Programming for all. Communications of the ACM,
52(11), 60–67.
Rose, L., Rouhani, P., & Fischer, K. (2013). The science of the individual. Mind, Brain, and Education, 7(3), 152–158.
Rowe, E., Almeda, M. V., Asbell-­Clarke, J., Scruggs, R., Baker, R., Bardar, E., & Gasca, S. (2021). Assessing
implicit computational thinking in Zoombinis puzzle gameplay. Computers in Human Behavior, 120, 106707.
Rowe, E., Asbell-­Clarke, J., & Baker, R. S. (2015). Serious games analytics to measure implicit science learning.
In Serious games analytics (pp. 343–360). Springer.
Rowe, E., Asbell-­Clarke, J., Bardar, E., Almeda, M. V., Baker, R. S., Scruggs, R., & Gasca, S. (2020). Advancing
research in game-­ based learning assessment: Tools and methods for measuring implicit learning. In
Advancing educational research with emerging technology (pp. 99–123). IGI Global.
Rowe, E., Asbell-­Clarke, J., Gasca, S., & Cunningham, K. (2017). Assessing implicit computational thinking in
Zoombinis gameplay. In Proceedings of the 12th International Conference on the Foundations of Digital
Games (pp. 1–4).
Sanmugam, M., Abdullah, Z., & Zaid, N. M. (2014). Gamification: Cognitive impact and creating a meaningful
experience in learning. In IEEE 6th Conference on Engineering Education (ICEED) (pp. 123–128). IEEE.
Sawyer, R., Rowe, J., Azevedo, R., & Lester, J. (2018). Filtered time series analyses of student problem-­solving
behaviors in game-­based learning. International Educational Data Mining Society.
Selby, C., & Woollard, J. (2014). Computational thinking: The developing definition. Proceedings of the 45th ACM
technical symposium on computer science education, SIGCSE 2014. ACM.
Shute, V., Ke, F., & Wang, L. (2017). Assessment and adaptation in games. In Instructional techniques to facilitate
learning and motivation of serious games (pp. 59–78). Springer.
Shute, V. J. (2011). Stealth assessment in computer-­based games to support learning. Computer Games and
Instruction, 55(2), 503–524.
Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking. Educational Research
Review, 22, 142–158.
Su, C.-­H. (2016). The effects of students' motivation, cognitive load and learning anxiety in gamification software
engineering education: A structural equation modeling study. Multimedia Tools and Applications, 75(16),
10013–10036.
Suits, D. B. (1957). Use of dummy variables in regression equations. Journal of the American Statistical
Association, 52(280), 548–551.
Tan, P.-­H., Ling, S.-­W., & Ting, C.-­Y. (2007). Adaptive digital game-­based learning framework. In Proceedings of
the 2nd International Conference on Digital Interactive Media in Entertainment and Arts (pp. 142–146). ACM.
Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A systematic review of
empirical studies. Computers & Education, 148, 103798.
Tatar, C., & Eseryel, D. (2019). A literature review: Fostering computational thinking through game-­based learning
in k-­12. The 42nd Annual Convention of The Association for Educational Communications and Technology,
288–297.
TERC. (2015). Zoombinis [Game: Android, iOS, macOS, Windows, Web].
Tsarava, K., Moeller, K., Pinkwart, N., Butz, M., Trautwein, U., & Ninaus, M. (2017). Training computational think-
ing: Game-­based unplugged and plugged-­in activities in primary school. In European Conference on Games
Based Learning (pp. 687–695). Academic Conferences International Limited.
Turan, Z., Avinc, Z., Kara, K., & Goktas, Y. (2016). Gamification and education: Achievements, cognitive loads,
and views of students. International Journal of Emerging Technologies in Learning, 11(7), 64–69.
Turchi, T., Fogli, D., & Malizia, A. (2019). Fostering computational thinking through collaborative game-­based
learning. Multimedia Tools and Applications, 78(10), 13649–13673.
Ullman, J. B., & Bentler, P. M. (2012). Structural equation modeling. In Handbook of psychology (2nd ed.).
Van Der Meij, H., Leemkuil, H., & Li, J.-­L. (2013). Does individual or collaborative self-­debriefing better enhance
learning from games? Computers in Human Behavior, 29(6), 2471–2479.
Villalba-Condori, K. O., Cuba-Sayco, S. E. C., Chávez, E. P. G., Deco, C., & Bender, C. (2018). Approaches of
learning and computational thinking in students that get into the computer sciences career. In Proceedings of
the Sixth International Conference on Technological Ecosystems for Enhancing Multiculturality (pp. 36–40).
Voskoglou, M. G., & Buckley, S. (2012). Problem solving and computational thinking in a learning environment.
arXiv preprint arXiv:1212.0750.
Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining computa-
tional thinking for mathematics and science classrooms. Journal of Science Education and Technology,
25(1), 127–147.
Weintrop, D., Holbert, N., Horn, M. S., & Wilensky, U. (2016). Computational thinking in constructionist video
games. International Journal of Game-­Based Learning (IJGBL), 6(1), 1–17.
Weintrop, D., Wise Rutstein, D., Bienkowski, M., & Mcgee, S. (2021). Assessing computational thinking: An over-
view of the field. Computer Science Education, 31(2), 113–116.
Wiebe, E., London, J., Aksit, O., Mott, B. W., Boyer, K. E., & Lester, J. C. (2019). Development of a lean compu-
tational thinking abilities assessment for middle grades students. In Proceedings of the 50th ACM Technical
Symposium on Computer Science Education (pp. 456–461).
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Yildiz Durak, H., Saritepeci, M., & Durak, A. (2021). Modeling of relationship of personal and affective variables
with computational thinking and programming. Technology, Knowledge and Learning, 28, 165–184.
Zhao, W., & Shute, V. J. (2019). Can playing a video game foster computational thinking skills? Computers &
Education, 141, 103633.
Zhong, B., Wang, Q., Chen, J., & Li, Y. (2016). An exploration of three-­dimensional integrated assessment for
computational thinking. Journal of Educational Computing Research, 53(4), 562–590.
Zumbach, J., Rammerstorfer, L., & Deibl, I. (2020). Cognitive and metacognitive support in learning with a serious
game about demographic change. Computers in Human Behavior, 103, 120–129.

How to cite this article: Liu, T. (2024). Assessing implicit computational thinking in
game-­based learning: A logical puzzle game study. British Journal of Educational
Technology, 55, 2357–2382. https://doi.org/10.1111/bjet.13443
