Assessing implicit computational thinking in game-based learning: A logical puzzle game study
DOI: 10.1111/bjet.13443
ORIGINAL ARTICLE
Tongxi Liu
KEYWORDS
computational thinking assessment, data science applications,
game-based learning analytics, implicit computational thinking
behaviours, learning behaviour patterns
Practitioner notes
INTRODUCTION
Computational thinking (CT) has received considerable attention as one of the most founda-
tional competencies for being an informed and competitive citizen in the twenty-first century
(Grover & Pea, 2018). Papert (1980) first used the term ‘computational thinking’ to
describe students' strategic thinking in a programming context called LOGO. Drawing from
computer science concepts, Wing (2006) defined CT as a cognitive process that involves
multiple problem-solving steps, much as a computer would perform them. A critical aspect of this process is
devising and constructing appropriate strategies of computation with which to derive practi-
cal solutions (Aho, 2012). As such, CT enables students to efficiently develop computational
abilities and solve complex problems in the real world. Even though much effort has been
devoted to facilitating higher education students' CT skills and keeping them competitive in
the workplace (Lyon, 2020), CT in K-12 education is a relatively new agenda. It is no longer
sufficient to introduce CT simply to college and graduate students (Kjällander et al., 2021).
All of today's students should develop a solid foundation of CT skills, preparing themselves to
solve real-life problems with algorithmic and computational methods (Shute, Ke, et al., 2017).
Recently, there has been an increasing interest in incorporating CT into K-12 education.
Shute, Ke, et al. (2017) described the role of CT in K-12 as ‘the conceptual foundation re-
quired to solve problems effectively and efficiently (ie, algorithmically, with or without the
assistance of computers) with solutions that are reusable in different contexts’ (p. 151). To
introduce the core concepts of CT in K-12 education, many studies have focused on inte-
grating CT practices in both elementary and advanced programming contexts. For example,
Scratch (Maloney et al., 2010; Resnick et al., 2009) is a block-based programming environ-
ment that enables students to develop computer science concepts and skills by thinking
creatively, reasoning systematically and solving computationally. As an efficient program-
ming context, Scratch has proven beneficial in developing engaged CT practices and en-
hancing teaching and learning in K-12 CT education (Marcelino et al., 2018). In addition, Noh
and Lee (2020) suggested that using robotics programming significantly improved students'
CT and creativity. These programming contexts (block-based, text-based and others) allow
students to computationally and algorithmically design and create personalized solutions.
While constructing CT concepts in computing programs plays an essential role, K-12 stu-
dents can practice their CT competencies in a broader range of learning situations that
do not require programming projects (Kazimoglu et al., 2011). Barr and Stephenson (2011)
emphasized that CT in K-12 education was not merely programming or computing; instead,
it was the process of knowledge acquisition and competency construction that involved for-
mulating and identifying relevant agencies to solve authentic problems.
Today, CT incorporates much more than programming activities in K-12 education.
Game-based learning has proven to be effective in motivating and improving students'
learning performance (Hooshyar et al., 2018; Zumbach et al., 2020). Research (Weintrop,
Beheshti, et al., 2016) suggested that game-based learning showed promise to be the ped-
agogical framework for developing students' CT skills. With playful and dynamic learning
activities, game-based learning introduces CT concepts and provides CT practices to fos-
ter students' CT competencies in an engaging and attractive way (Kazimoglu et al., 2012;
Pho & Dinscore, 2015). In game-based learning environments, students intentionally con-
struct computational and algorithmic strategies to solve complex problems, which facilitates
CT skill development through various game mechanics (Zhao & Shute, 2019). In addition,
game-based learning creates intrinsic cognitive motivation, engaging students in mastery,
aesthetics and autonomy (Sanmugam et al., 2014). Reward mechanics encourage students
to competitively solve problems by learning from real-life experiences, prompting both internal
and external motivations. In an effort to develop K-12 students' CT skills, many studies im-
part core knowledge and concepts of CT through game-based learning environments. For
example, Jiang et al. (2019) assessed students' CT skills in a puzzle-based game and found
that this non-coding game significantly enhanced students' logical thinking and problem-
solving ability. Guenaga et al. (2021) explored students' CT development in a maze-based
learning game by analysing their gameplay interactions. This study provided deeper insights
into assessing students' CT skills in game-based learning environments, as well as design-
ing more effective CT learning platforms.
Even though much attention has been poured into fostering K-12 students' CT through
game-based learning, CT assessment in this specific context remains challenging. According
to a review study (Tatar & Eseryel, 2019), most current research applied interviews, tests
and quizzes to understand students' game-based learning outcomes related to CT skills.
In games, students' gameplay actions are automatically operationalized in digital systems,
capturing extensive details such as computational logic, strategic sequence and algorithmic
behaviours that reflect students' CT skills (Liu & Israel, 2022) but cannot be identified
by surveys or exams (Rowe et al., 2021). Moreover, assessing students' implicit CT in games
is inherently difficult because it cannot be expressed through pre/post-tests or self-reports.
Laden with terminology and formalism, traditional methods impede students from exhibiting their
implicit knowledge (Arena & Schwartz, 2014). Many researchers have dedicated themselves
to assessing students' CT skills in game-based learning environments. For example, a study
employed data mining methods to evaluate CT practices (De Souza et al., 2019), but the
educational interpretation of CT measures based on students' external performance remained
insufficient. Polat et al. (2021) statistically explored the relationships between certain variables
and CT skills while ignoring other significant aspects such as students' real-time perfor-
mance. Therefore, the effort in this field is still scattered, leaving a clear space for additional
research.
Assessing students' CT in games is essential because CT competency development re-
flects the effectiveness of games and the efficiency of CT education. In a well-designed
game, gameplay performance has construct validity, reflecting students' implicit knowledge
without obtrusive evaluation that may disrupt students' self-controlled learning (Shute, 2011;
Van Der Meij et al., 2013). Investigating the relationship between gameplay performance and
CT competency in this learning setting offers a scalable and reliable assessment of students' CT
skills. Other objective variables, such as gender difference and puzzle difficulty, may also
be associated with students' CT competency and should be carefully considered (Ardito
et al., 2020; Atmatzidou & Demetriadis, 2016). Since gameplay performance and these ob-
jective variables are easily accessible and observable, examining the relationships between
these factors and students' CT skills provides unique opportunities for explicit assessment
of students' implicit CT. Researchers and instructors can capture students' CT processes
and provide appropriate interventions by observing these external factors, thereby
enhancing students' knowledge acquisition and competency construction in game-based
learning environments. Identifying the factors for distinguishing students' implicit CT skills in
games has the potential to bring great educational benefits by developing CT competency
with gamification.
In response to the absence of implicit CT assessment in games, this study employs statis-
tical and educational data mining methods to uncover the significance of various observable
variables in relation to students' CT skills. In doing so, we propose an efficient approach
for inferring and evaluating students' CT competence within the context of game-based
learning. This study follows three key steps: First, we analyse students' gameplay log files
and establish evidence-based measures of CT stages in Zoombinis. This puzzle-based
game is designed to promote fundamental concepts of mathematics and computer science,
enabling students to develop implicit problem-solving strategies and CT skills during their
gaming experiences. Second, we extract variables such as gender, grade level, duration of
each attempt, number of actions in each attempt, accuracy of each attempt, puzzle name
and puzzle difficulty from log files to examine which factors have a significant influence on
distinguishing students' CT stages. Third, we explore the relationship between students'
gameplay performance and their CT skills.
The contributions of this study are threefold: (1) illustrating how to build evidence-based
measures of students' implicit CT by analysing their log files; (2) providing unique opportu-
nities for explicit assessment of students' implicit CT by uncovering the relations between
various factors and CT stages; (3) assisting researchers and instructors in providing appro-
priate interventions and evaluating the effectiveness of game-based learning to enhance
students' CT competency development.
The central research questions addressed by this study are: (1) Which factors play im-
portant roles in distinguishing students' CT in Zoombinis? (2) What is the relationship be-
tween students' gameplay performance and CT skills in Zoombinis?
Game-based learning
Game-based learning is a learning method that integrates certain gaming principles
into real-life contexts to engage students in achieving learning goals (Liu et al., 2020). A study
(Baptista & Oliveira, 2018) suggested that game-based learning significantly facilitates us-
ability, productivity and satisfaction. The educational goals of games emphasize building,
teaching and imparting knowledge in an efficient way (Tan et al., 2007). In a game-based
learning environment, specific knowledge is embedded in virtual scenarios, in which stu-
dents connect abstract and logical thinking to authentic problems without conscious learn-
ing objectives (Dimitra et al., 2020). Acting as rich primers of attractive and contextualized
learning, games can involve students in an engaging problem-based learning process (Ke
et al., 2016) and provide incentives for expected behaviours (Turan et al., 2016). Students
develop their knowledge and skills by testing hypotheses, interacting with system feedback,
adapting strategies and processing solutions. On the one hand, game-based learning re-
quires students to solve authentic problems with strategic or planned gameplay actions
(Plass et al., 2015). On the other hand, students' gameplay behaviours can reflect their
knowledge states, cognitive ability and learning outcomes (Hicks et al., 2016). As a result,
well-designed games affect students' learning motivation, cognitive ability and academic
performance (Su, 2016). Analysing gameplay actions is fundamental to understanding the
dynamic nature of students' learning processes and designing effective scaffoldings in the
future (Sawyer et al., 2018).
Game-based learning assessment aims to measure students' knowledge and skills by
embedding implicit, jargon-free test items rather than constructing irrelevant materials (Rowe et al., 2021).
Traditional assessments, such as interviews and exams, are less effective since these meth-
ods cannot evaluate students' cognitive ability and learning competency (Rowe et al., 2021;
Shute, 2011). In games, assessing students' underlying knowledge is often grounded in
an evidence-centred assessment design (ECD) framework (Mislevy et al., 2003). The ECD
framework enables researchers to capture the relationships among operations, eviden-
tiary argument and assessment purposes by constructing, developing and arranging cer-
tain information elements (Mislevy et al., 2003). This framework has been used in various
game-based learning assessments, an approach also known as stealth assessment (Ke & Shute, 2015). For
example, Kerr and Chung (2012) examined the key features of students' learning achieve-
ment in an educational video game, in which ECD was used to identify, extract and accu-
mulate features of students' performance. Even though the key features were buried
in students' log files and difficult to interpret, ECD made these large quantities of
complicated data understandable in an efficient and interpretable way. Grover et al. (2016)
combined the ECD framework with discovery-driven data analytics to investigate students'
programming processes and problem-solving strategies in a block-based programming
game. In this research, ECD identified structural connections between CT constructs and
students' observable behaviours, providing evidence of students' problem-solving patterns
and CT practices.
Computational thinking
To date, many studies (All et al., 2016, 2021; Ifenthaler et al., 2012) have investigated and em-
phasized the effectiveness of game-based learning, which has proved promising for
facilitating students' CT concepts (Hsu et al., 2018). Generally, gameplay contents and ac-
tivities involve problem-solving spaces and computational challenges (Qian & Clark, 2016).
Problems in game-based learning environments are often ill-structured, with no rigid formu-
lation or definitive solving trajectory (Turchi et al., 2019). Students' computational ability
helps them design strategic and algorithmic gameplay actions that create effective
solutions (Pellas & Vosinakis, 2018). For instance, Tsarava et al. (2017) embedded a CT
framework including seven stages in a coding game to train students' programming skills.
The authors designed several lessons that were consistent with CT processes such as
decomposition, algorithms, patterns, abstraction and generalization to foster students' CT
skills. Lin et al. (2020) developed a CT framework in a digital game to enhance elementary
students' learning achievement and CT skills. In this study, CT was divided into three dimen-
sions: concepts, practices and perspectives. The authors mapped students' learning pro-
cesses to these three dimensions to evaluate and strengthen their CT abilities. By providing
an engaging and contextualized scenario of authentic problem-solving (Gee, 2009), games
not only improve students' learning efficiency but also enhance their CT practices.
Zoombinis (Rowe et al., 2015) is a puzzle-based game designed by TERC (2015) in the
1990s, aiming to develop students' mathematics concepts (eg, mapping, sorting and com-
paring) essential for CT skills. In this game, students have to design and implement efficient
CT patterns and problem-solving strategies to bring Zoombini characters to safety (Rowe
et al., 2021). The process of building specific learning strategies in students' gameplay
behaviours is consistent with CT practices (Almeda et al., 2019), providing evidence of stu-
dents' implicit learning constructs of CT skills. Aligning with the CT constructs introduced by
CSTA (2017) and Shute, Sun, et al. (2017), Zoombinis puzzles involved a learning progres-
sion of CT that was highly iterative with embedded loops among different CT stages. A previous
study (Rowe et al., 2020) employed hand-labelling video analysis to understand students'
learning events and identify their CT stages, which might be time-consuming. Asbell-Clarke
et al. (2021) applied data mining approaches to detect students' CT practices. This research
found that students' gender and grade level were not significantly related to their CT achieve-
ment. However, other factors such as actions and accuracy were not considered.
In this study, we follow the CT framework introduced by previous research (Asbell-Clarke
et al., 2021; Rowe et al., 2017) to assess students' implicit CT in Zoombinis: problem decomposi-
tion, pattern recognition and algorithm design. Students may develop various strategies or meth-
ods associated with one or more CT stages to solve certain puzzles. For example, a random trial
allows students to abstract and decompose a complicated problem, while systematically testing
hypotheses may help students understand underlying gameplay rules and construct useful CT
patterns. By leveraging this framework, we established evidence-based measures of students'
implicit CT in Zoombinis. In addition, we included more factors and used efficient methods to
study the relationship between students' implicit CT and these explicit variables, making it more
practical to assess students' CT practices in game-based learning environments.
METHODOLOGY
The research context of this study is Zoombinis (Rowe et al., 2015; TERC, 2015). It is a
well-designed puzzle game that aims to facilitate students' mathematical and computational
competency, in which gameplay achievement can reflect implicit CT skills. Each Zoombini has
different physical attributes, such as shaggy hair, dot eyes and blue nose (see Figure 1).
Students have to safely move Zoombini characters to Zoombiniville by identifying the correct
physical characteristics of each character (Rowe et al., 2015; TERC, 2015). To solve these
complex puzzles, students are required to abstract authentic problems, develop problem-
solving strategies, design analytical solutions and generalize efficient algorithms. Students'
gameplay performance and sequential actions in log files reflect their CT practices, provid-
ing an evidence-based assessment of implicit CT in this game.
The data used in our study was sourced from Data Arcade, a dataset created by TERC
(the developer of Zoombinis) to store students' log files. The selected sample for this re-
search comprises 158 students in grades 3–8 (88 male and 79 female) who had no prior
exposure to Zoombinis. The selection criteria require that each participant must have en-
gaged with at least one of the four Zoombinis puzzles (refer to Figure 2), and no external
interventions or scaffolding were implemented during the students' playtest.
• Allergic Cliffs: Players have to discover which attributes allow each Zoombini to cross
which bridge.
• Bubblewonder Abyss: Zoombinis must move through a maze that has junctions and
switches triggered by Zoombini attributes.
• Mudball Wall: Players have to figure out how the colour and shape of each mudball corre-
spond to a position on the wall.
• Pizza Pass: Players have to find the combination of pizza (and, at higher levels, ice cream)
toppings desired by a troll blocking the Zoombini's path.
The gameplay data generated from Zoombinis contains students' identifiers, demograph-
ics, timestamps, type of actions, puzzle information, objects' attributes, solution attributes
and other additional information (see Table 1). On the one hand, each action (eg, select a
topping) represents a student testing different attributes of a character to satisfy the target
problem. On the other hand, students' attempts (eg, deliver toppings) show the gameplay
feedback when they submit the character built in previous actions. According to the in-
terpretation, students' sequential actions describe how they test the required attributes of
a character with ordered or unordered gameplay behaviours, while attempts illustrate the
correctness of the designed character. Therefore, we applied students' attempts as the anal-
ysis granularity of gameplay performance and CT stages, and we used students' actions to
explore their problem-solving strategies. In total, the gameplay log data included 1845 rounds
and 18,795 attempts. In each round, the first attempt has no dependent relationship with a
previous one, and the last attempt merely combines all the correct characters tested before.
Thus, students' first and last attempts in each round were excluded, leaving 15,105 attempts.
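To illustrate, the attempt-filtering step described above could be implemented roughly as follows; the file and column names (student_id, round_id, timestamp) are assumed placeholders rather than the actual Data Arcade schema.

```python
import pandas as pd

# Log rows, one per attempt; column names are hypothetical stand-ins for the
# actual Data Arcade schema.
logs = pd.read_csv("zoombinis_logs.csv", parse_dates=["timestamp"])
logs = logs.sort_values(["student_id", "round_id", "timestamp"])

# Rank attempts within each round so the first and last can be identified.
logs["attempt_rank"] = logs.groupby(["student_id", "round_id"]).cumcount()
logs["attempts_in_round"] = (
    logs.groupby(["student_id", "round_id"])["attempt_rank"].transform("size")
)

# Drop the first attempt of each round (no dependency on a previous attempt)
# and the last attempt (which only combines previously confirmed attributes).
keep = (logs["attempt_rank"] > 0) & (logs["attempt_rank"] < logs["attempts_in_round"] - 1)
attempts = logs[keep]
print(f"{len(attempts)} attempts retained for analysis")
```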
FIGURE 2 Screenshots of Zoombinis puzzles (top left going clockwise): Allergic Cliffs, Bubblewonder Abyss,
Mudball Wall and Pizza Pass (Asbell-Clarke et al., 2021).
Problem-solving strategies
• Subtracting: Testing the complementary set of dimensions that were tested in the previous
attempt.
• Replacing: Replacing one dimension while keeping other dimensions the same.
All these problem-solving strategies were determined by analysing the ObjectID in stu-
dents' log files. The dependent relationships among students' gameplay actions offer an-
alytical evidence to uncover problem-solving strategies. For instance, Additive represents
students adding new dimensions of a character one by one, which is manifested in their log
files when the ObjectID of the current action has one more digit set to 1 than that of the
previous action. Table 2 shows the patterns of each problem-solving strategy in stu-
dents' log files.
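A minimal sketch of how such ObjectID comparisons might be coded is shown below. It treats each ObjectID as a fixed-length string of 0/1 flags over the tested dimensions; this encoding is an assumption made for illustration and the actual log format may differ.

```python
def classify_strategy(prev_id: str, curr_id: str) -> str:
    """Classify the relation between two consecutive ObjectIDs.

    Each ObjectID is assumed (for illustration only) to be a fixed-length
    string of 0/1 flags, one flag per dimension tested in the attempt.
    """
    prev = [int(c) for c in prev_id]
    curr = [int(c) for c in curr_id]

    # Complementary set of the previously tested dimensions -> Subtracting.
    if all(c == 1 - p for p, c in zip(prev, curr)):
        return "Subtracting"

    added = sum(c > p for p, c in zip(prev, curr))
    removed = sum(c < p for p, c in zip(prev, curr))
    if added == 1 and removed == 0:
        return "Additive"    # exactly one more dimension set to 1
    if added == 1 and removed == 1:
        return "Replacing"   # one dimension swapped, the rest unchanged
    return "Other"


# '1100' -> '1110' adds exactly one dimension, so it is classified as Additive.
print(classify_strategy("1100", "1110"))
```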
Studies suggested that CT was a problem- solving process (Fagerlund et al., 2021;
Haseski et al., 2018; Voskoglou & Buckley, 2012). Analysing problem-solving strategies
in students' gameplay offers unique opportunities for identifying CT stages and assess-
ing CT skills. In this study, the construction of problem-solving strategies was viewed
as decisive evidence to reveal students' CT stages. Building upon the CT framework
introduced in Section “Game-Based Learning”, three CT stages were identified by the
following principles:
Duration
The duration of gameplay attempts stands as a pivotal proxy for assessing students' persistence
and strategy deployment. Building upon prior research (Rehbein et al., 2016), which has suggested
that gameplay duration holds potential as an indicator of students' learning performance, we aim to
uncover the nuanced implications of this temporal dimension. By exploring the temporal aspects
of students' interactions with Zoombinis, we seek to discern whether differing durations correlate
with distinct levels of CT competence. This exploration will elucidate whether students with well-
developed CT skills tend to exhibit more efficient and expedited gaming behaviours, or if prolonged
durations signify unproductive trials and correspondingly less productive CT performance.
Actions
Another pivotal variable in our analysis is the number of actions undertaken during each game-
play attempt. As illustrated in Table 1, students engage in various numbers of actions within
each attempt. Depending on their CT proficiency, students may employ distinct problem-solving
strategies in their endeavours, leading to varying numbers of actions. Taking Pizza Pass as an
example, a student may apply a streamlined approach by selecting only one topping or,
alternatively, may adopt a more experimental approach by choosing multiple toppings and then
refining their selections. Consequently, the count of actions serves as a valuable indicator of
students' problem-solving strategies and their competency in applying CT skills.
Accuracy
After constructing a candidate solution, students submit it to Zoombinis for evaluation, and their attempts are categorized as either correct or
incorrect. By investigating the accuracy of their attempts, we endeavour to identify whether
students' proficiency in CT corresponds to an increased likelihood of producing accurate so-
lutions or whether a deficiency in CT skills leads to a higher incidence of incorrect attempts.
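To make the three performance variables above (duration, number of actions and accuracy) concrete, the sketch below shows one plausible way to aggregate them per attempt from action-level log rows; the file and column names (attempt_id, timestamp, is_correct) are assumptions rather than the actual schema.

```python
import pandas as pd

# Action-level log rows; the file and column names are hypothetical.
actions = pd.read_csv("zoombinis_actions.csv", parse_dates=["timestamp"])

features = actions.groupby("attempt_id").agg(
    duration=("timestamp", lambda t: (t.max() - t.min()).total_seconds()),
    n_actions=("action_type", "size"),  # number of actions in the attempt
    accuracy=("is_correct", "max"),     # 1 if the submitted attempt was correct
)
print(features.head())
```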
Existing research (Gómez-Gonzalvo et al., 2020; Rehbein et al., 2016) has underscored the
substantial impact of students' gender and age on their learning processes and outcomes.
This study also explores the potential influence of demographic variables such as gender
and grade level on students' CT skills.
Puzzle
Difficulty
Each puzzle presents itself with four levels of difficulty: easy, not easy, hard and very hard.
The inherent challenge posed by each puzzle escalates progressively across these difficulty
levels, demanding more intricate problem-solving strategies and an elevated proficiency in
CT skills. For instance, in Pizza Pass, the easy level may feature a solitary troll with one or two
desired toppings, presenting relatively straightforward challenges. However, as the difficulty
level increases, the complexity of the puzzle intensifies. Two or three trolls may come into
play, each necessitating intricate combinations of toppings and ice creams. In essence, the
ascending levels of challenge impel students to extend their problem-solving strategies from
single dimensions to encompass multi-dimensions or even matrices. Consequently, puzzle
difficulty assumes significance as a potential factor interrelated with students' CT skills.
Data analysis
This study aimed to explore whether gender, grade, puzzle and its difficulty, and game-
play performance play statistically significant roles in distinguishing students' CT skills
in Zoombinis. Specifically, we applied multinomial logistic regression analysis (Kwak &
Clayton-Matthews, 2002) to identify the relationships between these factors and students'
CT stages. CT stages were recognized as an explained variable, while students' gameplay
performance, demographics and puzzle information were viewed as explanatory variables.
It is worth noting that CT stages, gender, accuracy, puzzle and difficulty were repre-
sented as dummy variables in this analysis. Dummy variables are commonly employed in
regression analysis to encode categorical features: a dummy variable X_i takes the value 1
when the subject belongs to the ith category and 0 otherwise (Hardy, 1993;
Suits, 1957). These variables were incorporated into the regression analysis as valid predic-
tors, given that the necessary encoding steps were taken to ensure that the solution of the
normal equations would be determinate.
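Although the regression itself was run in IBM SPSS, a rough Python sketch of how dummy-coded predictors enter a multinomial logistic regression may clarify the setup; the file, column and variable names below are illustrative assumptions rather than the study's actual data schema.

```python
import pandas as pd
import statsmodels.api as sm

# One row per attempt; the file and column names are illustrative only.
df = pd.read_csv("attempt_level_data.csv")

# Dummy-code the categorical predictors, dropping one level of each so that
# the reference category keeps the normal equations determinate.
X = pd.get_dummies(
    df[["duration", "actions", "grade", "gender", "accuracy", "puzzle", "difficulty"]],
    columns=["gender", "accuracy", "puzzle", "difficulty"],
    drop_first=True,
).astype(float)
X = sm.add_constant(X)

# CT stage (Problem Decomposition, Pattern Recognition, Algorithm Design)
# encoded as integer codes for the multinomial logit.
y = pd.Categorical(df["ct_stage"]).codes

result = sm.MNLogit(y, X).fit()
print(result.summary())
```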
In addition, we applied the structural equation model (Ullman & Bentler, 2012) to estimate
the multiple and interrelated dependence between these factors and CT stages. Absolute fit
χ2, df, comparative fit index (CFI), Non-Normed Fit Index (NNFI) and root mean square error
of approximation (RMSEA) were used to measure model fitness.
Next, we provided insights into the relationship between students' gameplay performance
and their CT stages, which might be a promising method to assess implicit CT skills in
Zoombinis based on their explicit learning achievement. First, we used the K-means
clustering algorithm to divide students' attempts into three groups based on their performance parameters.
Second, we employed histograms and cross-tabulation with χ2 test analysis to statistically
describe the interrelated connection between gameplay performance and CT stages.
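A minimal sketch of this clustering step is given below, assuming the per-attempt duration, action count and accuracy as inputs; feature standardization is added here as common practice rather than something reported in the paper, and the file name is hypothetical.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Per-attempt performance table (duration, number of actions, accuracy),
# e.g. produced by the feature-extraction sketch above; file name is hypothetical.
features = pd.read_csv("attempt_features.csv")

X = StandardScaler().fit_transform(features[["duration", "n_actions", "accuracy"]])
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
features["performance_group"] = kmeans.fit_predict(X)

# Inspect cluster centres (in standardized units) to decide which cluster
# corresponds to high-, medium- and low-performing attempts.
print(kmeans.cluster_centers_)
```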
Students' gameplay actions were extracted from log files, which were used to identify the
problem-solving strategies and CT stages with Python code. Other explanatory variables
were also obtained from students' log data. Multinomial logistic regression and other statistical
analyses were conducted in IBM SPSS, and the structural equation model was estimated
with IBM SPSS AMOS (Blunch, 2012).
RESULTS
A multinomial logistic regression model was built to explore the relationships between various
factors and students' CT skills. It helps examine the variables that distinguish students' at-
tempts at different CT stages. Explanatory variables, including gender, grade, accuracy,
duration, actions, puzzle and difficulty, were included. Table 5 presents the developed model,
and Table 6 is used to decide whether each explanatory variable meaningfully explains stu-
dents' CT stages. According to the results, accuracy (p < 0.05), duration (p < 0.05), actions
(p < 0.05) and difficulty (p < 0.05) were found to play significant roles in distinguishing stu-
dents' CT stages. Taking Algorithm Design as the reference value, lower accuracy increased
the probability of being in Problem Decomposition; lower accuracy, shorter duration, more
actions and greater difficulty increased the probability of identifying as Pattern Recognition.
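To make this reference-category interpretation explicit, the multinomial logistic model estimates a separate coefficient vector for each non-reference CT stage k in a log-odds equation of the standard form, where x denotes the vector of explanatory variables:

$$\ln\frac{P(\text{CT stage} = k \mid \mathbf{x})}{P(\text{CT stage} = \text{Algorithm Design} \mid \mathbf{x})} = \beta_{0k} + \boldsymbol{\beta}_k^{\top}\mathbf{x}, \qquad k \in \{\text{Problem Decomposition},\ \text{Pattern Recognition}\}.$$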
Regarding model fit, the Nagelkerke R² indicates the amount of variance explained by
the model, with 1.00 representing a perfect fit. According to Table 5, the
Nagelkerke R² statistic was 0.27, indicating that the explanatory variables accounted for
27% of the variance in students' CT stages. Table 7 shows
more details of model fitting information (χ2 = 1719.520; df = 14; p < 0.001), indicating the de-
veloped model was statistically significant.
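For reference, the Nagelkerke pseudo-R² reported above is conventionally computed from the log-likelihoods of the fitted model and the intercept-only model; this is the standard definition rather than anything specific to this study:

$$R^2_{\text{Nagelkerke}} = \frac{1 - \exp\!\left(\tfrac{2}{n}\,(\ell_0 - \ell_M)\right)}{1 - \exp\!\left(\tfrac{2}{n}\,\ell_0\right)}$$

where $\ell_0$ and $\ell_M$ denote the log-likelihoods of the intercept-only and fitted models and $n$ is the number of attempts.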
A structural equation modelling approach was employed to test the associations among the
variables more holistically. Specifically, we considered only the explanatory variables, such
as accuracy and duration, that significantly explained students' CT stages in the
multinomial logistic regression. Results showed that the structural equation model
fit the data well (χ2 = 36.156; df = 5; NNFI = 0.989; CFI = 0.994; and RMSEA = 0.032). The
proposed model with standardized coefficients is presented in Figure 3, where the solid
black arrows represent statistically significant relationships and the values are the standard-
ized coefficients.
According to Figure 3, duration and actions influenced each other mutually, and diffi-
culty affected duration. Duration, difficulty and accuracy were positively associated with
students' CT stages, while no significant correlation was found between actions and CT
stages. In terms of the coefficients, accuracy played the most important role in identifying
students' CT stages (β = 0.57), followed by duration (β = 0.26). The puzzle difficulty ex-
plained students' CT stages with a coefficient of 0.07, which is smaller than the two other
variables.
According to the previous findings, students' gameplay performance showed a critical as-
sociation with their CT stages, and all three gameplay performance variables had significant
effects on CT stages. As a result, we further categorized gameplay attempts into three
groups based on the variables of gameplay performance and examined the relationship
between the performance levels and CT stages. K-means algorithm was used to divide stu-
dents' attempts into three groups: high-performing, medium-performing and low-performing.
Attempts with shorter duration, fewer actions and higher accuracy were categorized into
high-performing group. Attempts with longer duration, more actions and lower accuracy were
classified as low-performing group. All other attempts were sorted into medium-performing
group. There are 4636 high-performing attempts, 6260 medium-performing attempts and
4209 low-performing attempts.
As shown in Table 8, χ2 test was conducted to analyse the independence of gameplay
performance with CT stages. The results revealed a significant relation between these two
variables, χ2(4, N = 15,105) = 4094.91, Cramer's V = 0.37, p < 0.001.
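As an illustration of this step, the cross-tabulation and χ2 test could be reproduced as follows, with Cramér's V derived from the χ2 statistic; the file and column names are again hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Per-attempt table carrying the K-means performance group and the CT stage.
features = pd.read_csv("attempt_features_labelled.csv")  # hypothetical file

table = pd.crosstab(features["performance_group"], features["ct_stage"])
chi2, p, dof, expected = chi2_contingency(table)

# Cramér's V for an r x c table: sqrt(chi2 / (n * (min(r, c) - 1))).
n = table.to_numpy().sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2({dof}, N={n}) = {chi2:.2f}, p = {p:.3g}, Cramér's V = {cramers_v:.2f}")
```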
DISCUSSION
Driven by the increasing interest in constructing CT in K-12 education, the field has dedicated
considerable effort to developing and evaluating students' CT skills (Tang et al., 2020). Even
though a body of literature has established practical assessment of CT skills in programming
or computing scenarios, more scalable and reliable CT measures need to be advanced in
games. In Zoombinis, students construct CT concepts by building effective problem-solving
strategies to solve specific puzzles. The operationalization of the CT process in this game
is implicit and hard to capture, which may be a barrier to CT assessment (Asbell-Clarke
et al., 2021). In this study, we utilized multinomial logistic regression and structural equation
model to determine which criteria effectively classify students' implicit CT. The findings are
expected to benefit implicit CT assessment based on the accessible and explicit gameplay
observations.
Consistent with previous work (Asbell-Clarke et al., 2021), students' gender
and grade level were not related to their CT activities. As expected, a significant relationship
was uncovered between students' gameplay performance and CT practices. Based on the
results of multinomial logistic regression, duration, accuracy, total number of actions and
puzzle difficulty were determined to distinguish students' CT stages; these explanatory
variables accounted for 27% of the variance in CT stages. When Algorithm
Design was taken as the reference value, lower accuracy indicated a higher probability of
Problem Decomposition; lower accuracy, shorter duration, more actions and higher difficulty
level implied a higher probability of Pattern Recognition. In addition, the structural equation
model showed that the accuracy of each attempt played a critical role in distinguishing stu-
dents' CT stages, while duration and puzzle difficulty were also regarded as influential factors
in implicit CT assessment.
The findings showed that students with high-performing attempts exhibited a higher probability
of working in Algorithm Design, while low-performing attempts often occurred in Problem
Decomposition and Pattern Recognition. Moreover, the results showed that accuracy played a
significant role in distinguishing individuals' CT stages, while duration and the number of
actions should also be considered contributing factors. We extended prior research to include more
factors that might relate to CT practices and examined the details of the structural relations
between these factors and CT stages. Based on this interpretation, students' learning per-
formance is directly related to their CT skills, allowing researchers to develop external evi-
dence to infer and measure students' implicit CT in Zoombinis.
Bringing CT to K-12 education is a complicated task, requiring systematic course design,
thorough instruction, timely scaffolding and exhaustive assessment. Although much research
has been devoted to inspecting the relevance between students' learning performance and
CT skills (Haddad & Kalaani, 2015; Polat et al., 2021; Tang et al., 2020; Villalba-Condori
et al., 2018), few effective assessments of implicit CT in games have been established. Our
study suggested that achieving higher gaming scores indicated more analytical CT stages.
Researchers and instructors can infer students' CT skills by observing their gameplay per-
formance in Zoombinis. The results also helped game developers and designers review the
effectiveness of educational games in promoting K-12 students' CT knowledge acquisition
and competency development. Assessing students' implicit CT in accordance with their per-
formance is an encouraging way to measure students' CT practices in games, which may be
used to provide appropriate intervention and facilitate CT learning experiences in the future.
A significant study limitation is that the causal relationship between gameplay perfor-
mance and CT skills is far from established. On the one hand, Yildiz Durak et al. (2021)
suggested that students' affective states influenced their CT and programming practices.
Students who obtained positive feedback from Zoombinis might improve their confidence
and engagement, which subsequently impacted their CT activities. On the other hand,
Haddad and Kalaani (2015) proposed that CT skills enhanced students' learning success.
Students who mastered structural knowledge of mathematics and computer science could
develop efficient gameplay patterns to obtain higher achievement. In this study, no directional
causal relationship between gameplay performance and CT skills was established. In terms of future
work, we should carefully consider the causal relations and construct equations to explain
the connection between gameplay performance and CT skills. In addition, CT assessment
may take place in various game scenarios, such as puzzle-based games and
block-based programming games. The relations between gaming achievement and CT skills
may exhibit different interrelated dependence regarding certain game scenarios. Much work
into the effects of different game-based learning environments on the relevance between
gameplay performance and CT practices is expected in the future.
Implications
scenarios necessitates that students draw upon their real-world experiences and learn from
their past errors.
The fluid and adaptive nature of gameplay actions makes them a more efficient tool than
traditional surveys or quizzes for unveiling students' implicit thinking abili-
ties (Rowe et al., 2020). Gaming actions can provide meticulous records of the sequences
and patterns in students' deliberate behaviours. This, in turn, offers a valuable window into
how students approach problem decomposition, construct effective strategies and devise
efficient algorithms to surmount these challenges. Consequently, the outcomes of this study
hold the potential for broader application to other game-based learning environments that
exhibit similar characteristics. This is especially pertinent in cases where CT and mathemat-
ical concepts are deeply integrated into problem-solving scenarios and where capturing the
nuances of students' learning processes remains a formidable challenge.
Although game-based learning has demonstrated its efficacy in nurturing CT skills, the chal-
lenge of assessing students' implicit problem-solving behaviours and CT performance persists.
This pilot study introduces a promising method for measuring students' implicit CT abilities based
on explicit variables within Zoombinis. The findings suggest a significant relationship between
students' gameplay performance, including accuracy, duration and the number of actions taken,
and their application of problem-solving strategies. Importantly, these variables can be easily
derived from students' log files, providing a valuable and accessible resource for researchers
and instructors to infer the underlying CT processes in games. This research primarily focused
on the analysis of Zoombinis, and while our primary aim was to delve into the intricacies of this
specific game, we acknowledge the potential broader implications of the findings.
For instance, consider a digital game designed to develop the CT concepts of sorting
and comparing. Instructors can assess students' underlying CT stages by observing factors
such as the time taken to construct desired solutions and the accuracy of their attempts.
Moreover, this study's findings suggest that more challenging game levels may demand
higher CT skills from students. Hence, the level of difficulty within each game can pro-
vide insights into students' CT processes. Furthermore, the types of concepts embedded in
games do not appear to impact students' learning performance significantly. In other words,
whether the gameplay content is centred on sorting or comparing concepts, there seem to
be no substantial differences in their CT abilities.
Different games often emphasize distinct features and employ diverse learning mech-
anisms. Understanding how CT skills interact with this wide array of game-based learning
environments is integral to the broader generalizability of our research. In this regard, con-
ducting comparative studies across various game genres holds significant potential for en-
hancing the comprehension of how to effectively assess CT skills by considering the unique
characteristics of games. Further exploration is required to establish the intricate relation-
ships between gaming factors and students' CT skills. More research should be dedicated
to examining how these factors may potentially influence CT practices, either by facilitating
or inhibiting them. Such insights will equip researchers with the means to offer timely and
efficient interventions when students exhibit unproductive performance in CT activities.
While this study utilizes Zoombinis as a case study to exemplify the assessment of im-
plicit CT experiences, the insights gained can be extended to enhance implicit learning
CONCLUSION
ACKNOWLEDGEMENTS
This work was partially supported by the U.S. Department of Education (No. U411C190179).
TL thanks Professor Maya Israel and TERC for the support and comments on this work.
CONFLICT OF INTEREST STATEMENT
There is no conflict of interest resulting from the research work in this study.
ETHICS STATEMENT
All data was collected and analysed after approval by the Institutional Review Board.
ORCID
Tongxi Liu https://orcid.org/0009-0009-0532-1050
REFERENCES
Aho, A. V. (2012). Computation and computational thinking. The Computer Journal, 55(7), 832–835.
All, A., Castellar, E. N. P., & Van Looy, J. (2021). Digital game-based learning effectiveness assessment:
Reflections on study design. Computers & Education, 167, 104160.
All, A., Castellar, E. P. N., & Van Looy, J. (2016). Assessing the effectiveness of digital game-based learning: Best
practices. Computers & Education, 92, 90–103.
Almeda, M. V., Rowe, E., Asbell-Clarke, J., Scruggs, R., Baker, R., Bardar, E., & Gasca, S. (2019). Modeling
implicit computational thinking in Zoombinis mudball wall puzzle gameplay. In Proceedings of the 2019
Technology, Mind, and Society Conference.
Altanis, I., & Retalis, S. (2019). A multifaceted students' performance assessment framework for motion-based
game-making projects with scratch. Educational Media International, 56(3), 201–217.
Ardito, G., Czerkawski, B., & Scollins, L. (2020). Learning computational thinking together: Effects of gender dif-
ferences in collaborative middle school robotics program. TechTrends, 64(3), 373–387.
Arena, D. A., & Schwartz, D. L. (2014). Experience and explanation: Using videogames to prepare students for
formal instruction in statistics. Journal of Science Education and Technology, 23(4), 538–548.
Asbell-Clarke, J., Rowe, E., Almeda, V., Edwards, T., Bardar, E., Gasca, S., Baker, R. S., & Scruggs, R. (2021).
The development of students' computational thinking practices in elementary-and middle-school classes
using the learning game, Zoombinis. Computers in Human Behavior, 115, 106587.
Atmatzidou, S., & Demetriadis, S. (2016). Advancing students' computational thinking skills through educa-
tional robotics: A study on age and gender relevant differences. Robotics and Autonomous Systems, 75,
661–670.
Baptista, G., & Oliveira, T. (2018). Gamification and serious games: A literature meta-analysis and integrative
model. Computers in Human Behavior, 92, 306–315.
Barr, D., Harrison, J., & Conery, L. (2011). Computational thinking: A digital age skill for everyone. Learning &
Leading with Technology, 38(6), 20–23.
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to k-12: What is involved and what is the role of
the computer science education community? ACM Inroads, 2(1), 48–54.
Basu, S., Biswas, G., & Kinnebrew, J. S. (2017). Learner modeling for adaptive scaffolding in a computational
thinking-based science learning environment. User Modeling and User-Adapted Interaction, 27(1), 5–53.
Blunch, N. (2012). Introduction to structural equation modeling using IBM SPSS statistics and AMOS. Sage.
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of computational
thinking. In Proceedings of the 2012 Annual Meeting of the American Educational Research Association,
Vancouver, Canada (Vol. 1, p. 25).
Çoban, E., & Korkmaz, Ö. (2021). An alternative approach for measuring computational thinking: Performance-
based platform. Thinking Skills and Creativity, 42, 100929.
CSTA. (2017). CSTA K-12 computer science standards. https://www.csteachers.org/page/standards
Dagienė, V., Stupurienė, G., & Vinikienė, L. (2016). Promoting inclusive informatics education through the bebras
challenge to all k-12 students. In Proceedings of the 17th International Conference on Computer Systems
and Technologies (pp. 407–414).
De Souza, A. A., Barcelos, T. S., Munoz, R., Villarroel, R., & Silva, L. A. (2019). Data mining framework to analyze
the evolution of computational thinking skills in game building workshops. IEEE Access, 7, 82848–82866.
Dimitra, K., Konstantinos, K., Christina, Z., & Katerina, T. (2020). Types of game-based learning in education: A
brief state of the art and the implementation in Greece. European Educational Researcher, 3(2), 87–100.
Doleck, T., Bazelais, P., Lemay, D. J., Saxena, A., & Basnet, R. B. (2017). Algorithmic thinking, cooperativity,
creativity, critical thinking, and problem solving: Exploring the relationship between computational thinking
skills and academic performance. Journal of Computers in Education, 4(4), 355–369.
Fagerlund, J., Häkkinen, P., Vesisenaho, M., & Viiri, J. (2021). Computational thinking in programming with
scratch in primary schools: A systematic review. Computer Applications in Engineering Education, 29(1),
12–28.
Gee, J. P. (2009). Deep learning properties of good digital games: How far can they go? In Serious games (pp.
89–104). Routledge.
Gómez-Gonzalvo, F., Molina, P., & Devís-Devís, J. (2020). Which are the patterns of video game use in Spanish
school adolescents? Gender as a key factor. Entertainment Computing, 34, 100366.
González, M. R. (2015). Computational thinking test: Design guidelines and content validation. In Proceedings of
EDULEARN15 Conference (pp. 2436–2444).
Grover, S., Bienkowski, M., Niekrasz, J., & Hauswirth, M. (2016). Assessing problem-solving process at scale. In
Proceedings of the Third (2016) ACM Conference on Learning@ Scale (pp. 245–248).
Grover, S., & Pea, R. (2018). Computational thinking: A competency whose time has come. Computer Science
Education: Perspectives on Teaching and Learning in School, 19, 1257–1258.
Guenaga, M., Eguíluz, A., Garaizar, P., & Gibaja, J. (2021). How do students develop computational thinking?
Assessing early programmers in a maze-based online game. Computer Science Education, 31(2), 259–289.
Haddad, R. J., & Kalaani, Y. (2015). Can computational thinking predict academic performance? In IEEE Integrated
STEM Education Conference (pp. 225–229). IEEE.
Hainey, T., Connolly, T., Boyle, E., Azadegan, A., Wilson, A., Razak, A., & Gray, G. (2014). A systematic literature
review to identify empirical evidence on the use of games-based learning in primary education for knowledge
acquisition and content understanding. In 8th European Conference on Games Based Learning: ECGBL (p.
167).
Hardy, M. A. (1993). Regression with dummy variables (Vol. 93). Sage Publication Inc.
Haseski, H. I., Ilic, U., & Tugtekin, U. (2018). Defining a new 21st century skill-computational thinking: Concepts
and trends. International Education Studies, 11(4), 29–42.
Hicks, D., Eagle, M., Rowe, E., Asbell-Clarke, J., Edwards, T., & Barnes, T. (2016). Using game analytics to
evaluate puzzle design and level progression in a serious game. In Proceedings of the Sixth International
Conference on Learning Analytics & Knowledge (pp. 440–448).
Hooshyar, D., Pedaste, M., Yang, Y., Malva, L., Hwang, G.-J., Wang, M., Lim, H., & Delev, D. (2021). From gaming
to computational thinking: An adaptive educational computer game-based learning approach. Journal of
Educational Computing Research, 59(3), 383–409.
Hooshyar, D., Yousefi, M., Wang, M., & Lim, H. (2018). A data-driven procedural-content-generation approach for
educational games. Journal of Computer Assisted Learning, 34(6), 731–739.
Hsu, T.-C., Chang, S.-C., & Hung, Y.-T. (2018). How to learn and how to teach computational thinking: Suggestions
based on a review of the literature. Computers & Education, 126, 296–310.
Ifenthaler, D., Eseryel, D., & Ge, X. (2012). Assessment for game-based learning. In Assessment in Game-Based
Learning (pp. 1–8). Springer.
Jiang, X., Harteveld, C., Huang, X., & Fung, A. Y. (2019). The computational puzzle design framework: A design
guide for games teaching computational thinking. In Proceedings of the 14th International Conference on the
Foundations of Digital Games (pp. 1–11).
Kalelioglu, F., Gülbahar, Y., & Kukul, V. (2016). A framework for computational thinking based on a systematic
research review. Baltic Journal of Modern Computing, 4(3), 583.
Kazimoglu, C., Kiernan, M., Bacon, L., & Mackinnon, L. (2011). Understanding computational thinking before
programming: Developing guidelines for the design of games to learn introductory programming through
game-play. International Journal of Game-Based Learning (IJGBL), 1(3), 30–52.
Kazimoglu, C., Kiernan, M., Bacon, L., & Mackinnon, L. (2012). A serious game for developing computational
thinking and learning introductory computer programming. Procedia-Social and Behavioral Sciences, 47,
1991–1999.
Ke, F., & Shute, V. (2015). Design of game-based stealth assessment and learning support. In C. Loh, Y. Sheng,
& D. Ifenthaler (Eds.), Serious games analytics (pp. 301–318). New York, NY: Springer.
Ke, F., Xie, K., & Xie, Y. (2016). Game-based learning engagement: A theory-and data-driven exploration. British
Journal of Educational Technology, 47(6), 1183–1201.
Kerr, D., & Chung, G. K. (2012). Identifying key features of student performance in educational video games and
simulations through cluster analysis. Journal of Educational Data Mining, 4(1), 144–182.
Kjällander, S., Mannila, L., Åkerfeldt, A., & Heintz, F. (2021). Elementary students' first approach to computational
thinking and programming. Education Sciences, 11(2), 80.
Kwak, C., & Clayton-Matthews, A. (2002). Multinomial logistic regression. Nursing Research, 51(6), 404–410.
Lin, S.-Y., Chien, S.-Y., Hsiao, C.-L., Hsia, C.-H., & Chao, K.-M. (2020). Enhancing computational thinking capability
of preschool children by game-based smart toys. Electronic Commerce Research and Applications, 44, 101011.
Liu, T., & Israel, M. (2022). Uncovering students' problem-solving processes in game-based learning environ-
ments. Computers & Education, 182, 104462.
Liu, Z.-Y., Shaikh, Z., & Gazizova, F. (2020). Using the concept of game-based learning in education. International
Journal of Emerging Technologies in Learning (IJET), 15(14), 53–64.
Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through program-
ming: What is next for k-12? Computers in Human Behavior, 41, 51–61.
Lyon, J. A., & Magana, J. (2020). Computational thinking in higher education: A review of the literature. Computer
Applications in Engineering Education, 28(5), 1174–1189.
Maloney, J., Resnick, M., Rusk, N., Silverman, B., & Eastmond, E. (2010). The scratch programming language and
environment. ACM Transactions on Computing Education (TOCE), 10(4), 1–15.
Marcelino, M. J., Pessoa, T., Vieira, C., Salvador, T., & Mendes, A. J. (2018). Learning computational thinking and
scratch at distance. Computers in Human Behavior, 80, 470–477.
Mindetbay, Y., Bokhove, C., & Woollard, J. (2019). What is the relationship between students' computational
thinking performance and school achievement? International Journal of Computer Science Education in
Schools, 2(5), 3–19.
Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (2003). Focus article: On the structure of educational assess-
ments. Measurement: Interdisciplinary Research and Perspectives, 1(1), 3–62.
Mouza, C., Pan, Y.-C., Yang, H., & Pollock, L. (2020). A multiyear investigation of student computational think-
ing concepts, practices, and perspectives in an after-school computing program. Journal of Educational
Computing Research, 58(5), 1029–1056.
Noh, J., & Lee, J. (2020). Effects of robotics programming on the computational thinking and creativity of elemen-
tary school students. Educational Technology Research and Development, 68(1), 463–484.
Papert, S. A. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books.
Pellas, N., & Vosinakis, S. (2018). The effect of simulation games on learning computer programming: A com-
parative study on high school students' learning performance by assessing computational problem-solving
strategies. Education and Information Technologies, 23(6), 2423–2452.
Pho, A., & Dinscore, A. (2015). Game-based learning. In Tips and trends (pp. 1–5). Instructional technology committee.
Plass, J. L., Homer, B. D., & Kinzer, C. K. (2015). Foundations of game-based learning. Educational Psychologist,
50(4), 258–283.
Polat, E., Hopcan, S., Kucuk, S., & Sisman, B. (2021). A comprehensive assessment of secondary school stu-
dents' computational thinking skills. British Journal of Educational Technology, 52(5), 1965–1980.
Qian, M., & Clark, K. R. (2016). Game-based learning and 21st century skills: A review of recent research.
Computers in Human Behavior, 63, 50–58.
Rehbein, F., Staudt, A., Hanslmaier, M., & Kliem, S. (2016). Video game playing in the general adult population of
Germany: Can higher gaming time of males be explained by gender specific genre preferences? Computers
in Human Behavior, 55, 729–735.
Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., Millner, A., Rosenbaum,
E., Silver, J., Silverman, B., & Kafai, Y. (2009). Scratch: Programming for all. Communications of the ACM,
52(11), 60–67.
Rose, L., Rouhani, P., & Fischer, K. (2013). The science of the individual. Mind, Brain, and Education, 7(3), 152–158.
Rowe, E., Almeda, M. V., Asbell-Clarke, J., Scruggs, R., Baker, R., Bardar, E., & Gasca, S. (2021). Assessing
implicit computational thinking in Zoombinis puzzle gameplay. Computers in Human Behavior, 120, 106707.
Rowe, E., Asbell-Clarke, J., & Baker, R. S. (2015). Serious games analytics to measure implicit science learning.
In Serious games analytics (pp. 343–360). Springer.
Rowe, E., Asbell-Clarke, J., Bardar, E., Almeda, M. V., Baker, R. S., Scruggs, R., & Gasca, S. (2020). Advancing
research in game-based learning assessment: Tools and methods for measuring implicit learning. In
Advancing educational research with emerging technology (pp. 99–123). IGI Global.
Rowe, E., Asbell-Clarke, J., Gasca, S., & Cunningham, K. (2017). Assessing implicit computational thinking in
Zoombinis gameplay. In Proceedings of the 12th International Conference on the Foundations of Digital
Games (pp. 1–4).
Sanmugam, M., Abdullah, Z., & Zaid, N. M. (2014). Gamification: Cognitive impact and creating a meaningful
experience in learning. In IEEE 6th Conference on Engineering Education (ICEED) (pp. 123–128). IEEE.
Sawyer, R., Rowe, J., Azevedo, R., & Lester, J. (2018). Filtered time series analyses of student problem-solving
behaviors in game-based learning. International Educational Data Mining Society.
Selby, C., & Woollard, J. (2014). Computational thinking: The developing definition. Proceedings of the 45th ACM
technical symposium on computer science education, SIGCSE 2014. ACM.
Shute, V., Ke, F., & Wang, L. (2017). Assessment and adaptation in games. In Instructional techniques to facilitate
learning and motivation of serious games (pp. 59–78). Springer.
Shute, V. J. (2011). Stealth assessment in computer-based games to support learning. Computer Games and
Instruction, 55(2), 503–524.
Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking. Educational Research Review, 22, 142–158.
Su, C.-H. (2016). The effects of students' motivation, cognitive load and learning anxiety in gamification software
engineering education: A structural equation modeling study. Multimedia Tools and Applications, 75(16),
10013–10036.
Suits, D. B. (1957). Use of dummy variables in regression equations. Journal of the American Statistical
Association, 52(280), 548–551.
Tan, P.-H., Ling, S.-W., & Ting, C.-Y. (2007). Adaptive digital game-based learning framework. In Proceedings of
the 2nd International Conference on Digital Interactive Media in Entertainment and Arts (pp. 142–146). ACM.
Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A systematic review of
empirical studies. Computers & Education, 148, 103798.
Tatar, C., & Eseryel, D. (2019). A literature review: Fostering computational thinking through game-based learning
in k-12. The 42nd Annual Convention of The Association for Educational Communications and Technology,
288–297.
TERC. (2015). Zoombinis [Game: Android, iOS, macOS, Windows, Web].
Tsarava, K., Moeller, K., Pinkwart, N., Butz, M., Trautwein, U., & Ninaus, M. (2017). Training computational think-
ing: Game-based unplugged and plugged-in activities in primary school. In European Conference on Games
Based Learning (pp. 687–695). Academic Conferences International Limited.
Turan, Z., Avinc, Z., Kara, K., & Goktas, Y. (2016). Gamification and education: Achievements, cognitive loads,
and views of students. International Journal of Emerging Technologies in Learning, 11(7), 64–69.
Turchi, T., Fogli, D., & Malizia, A. (2019). Fostering computational thinking through collaborative game-based
learning. Multimedia Tools and Applications, 78(10), 13649–13673.
Ullman, J. B., & Bentler, P. M. (2012). Structural equation modeling (2nd ed.). Handbook of Psychology.
Van Der Meij, H., Leemkuil, H., & Li, J.-L. (2013). Does individual or collaborative self-debriefing better enhance
learning from games? Computers in Human Behavior, 29(6), 2471–2479.
Villalba-Condori, K. O., Cuba-Sayco, S. E. C., Chávez, E. P. G., Deco, C., & Bender, C. (2018). Approaches of
learning and computational thinking in students that get into the computer sciences career. In Proceedings of
the Sixth International Conference on Technological Ecosystems for Enhancing Multiculturality (pp. 36–40).
Voskoglou, M. G., & Buckley, S. (2012). Problem solving and computational thinking in a learning environment.
arXiv preprint arXiv:1212.0750.
Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining computa-
tional thinking for mathematics and science classrooms. Journal of Science Education and Technology,
25(1), 127–147.
Weintrop, D., Holbert, N., Horn, M. S., & Wilensky, U. (2016). Computational thinking in constructionist video
games. International Journal of Game-Based Learning (IJGBL), 6(1), 1–17.
Weintrop, D., Wise Rutstein, D., Bienkowski, M., & Mcgee, S. (2021). Assessing computational thinking: An over-
view of the field. Computer Science Education, 31(2), 113–116.
Wiebe, E., London, J., Aksit, O., Mott, B. W., Boyer, K. E., & Lester, J. C. (2019). Development of a lean compu-
tational thinking abilities assessment for middle grades students. In Proceedings of the 50th ACM Technical
Symposium on Computer Science Education (pp. 456–461).
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35.
Yildiz Durak, H., Saritepeci, M., & Durak, A. (2021). Modeling of relationship of personal and affective variables
with computational thinking and programming. Technology, Knowledge and Learning, 28, 165–184.
Zhao, W., & Shute, V. J. (2019). Can playing a video game foster computational thinking skills? Computers &
Education, 141, 103633.
Zhong, B., Wang, Q., Chen, J., & Li, Y. (2016). An exploration of three-dimensional integrated assessment for
computational thinking. Journal of Educational Computing Research, 53(4), 562–590.
Zumbach, J., Rammerstorfer, L., & Deibl, I. (2020). Cognitive and metacognitive support in learning with a serious
game about demographic change. Computers in Human Behavior, 103, 120–129.
How to cite this article: Liu, T. (2024). Assessing implicit computational thinking in
game-based learning: A logical puzzle game study. British Journal of Educational
Technology, 55, 2357–2382. https://doi.org/10.1111/bjet.13443