Computational Thinking Through an Empirical Lens
Abstract
This article provides an overview of the diverse ways in which computational thinking has been
operationalised in the literature. Computational thinking has attracted much interest and
debatably ranks in importance with the time-honoured literacy skills of reading, writing, and
arithmetic. However, learning interventions in this subject have modelled computational thinking
in diverse ways. This review examined how computational thinking has been operationalised in
empirical studies, focusing on the explicitness and patterns of definitions of computational
thinking. Data analysis revealed that most studies defined computational thinking as a composite
of programming concepts. On the other hand, a substantial number of the studies did not establish
the meaning of computational thinking when theorising their interventions nor clearly
distinguished between computational thinking and programming. Based on these findings, this
article proposes a model of computational thinking anchored in programming concepts, which
advances conceptual clarity between computational thinking and programming.
Author Note
Ndudi O. Ezeamuzie is a doctoral student in the Faculty of Education, The University of Hong
Kong. He is also a computer scientist with research interest in computational thinking and
Hong Kong. Her current research interest, among other things, is epistemic practice and its
Introduction
Wing (2006) described computational thinking (CT) as a fundamental problem-solving skill for everyone that is rooted in computer science concepts.
Before Wing’s pioneering call for CT, Alan Perlis, the first recipient of the renowned A. M.
Turing Award from the Association for Computing Machinery, had advocated for the inclusion
of computer programming as part of liberal education for all students (Perlis, 1962, as cited in
Guzdial, 2008). A decade later, another renowned computer scientist, Donald Knuth, posited that
people develop a clearer understanding of tasks by teaching a computer to perform them (Knuth,
1974). Similarly, in his work on procedural thinking, Seymour Papert promoted a vision of
children teaching computers to think through programming (Papert, 1980). While Wing’s
conceptualisation has been explicitly acknowledged as the de facto reference for CT in the 21st
century (e.g., Grover & Pea, 2013; Shute et al., 2017), the works of Perlis, Knuth, and Papert
embodied the spirit of CT and rightly underscored the association between computer science and
problem solving. Positioning CT alongside the time-honoured skills of reading, writing, and
arithmetic was a brave claim. Subsequently, CT attracted interest from researchers, educators,
and policymakers, leading to significant progress in the teaching and learning of CT (Hsu et al.,
2018). One of the primary reasons for this spike in interest was the need to align education with
the socio-digital revolution, which has changed the ways we study, work, and live. By leveraging
CT skills, learners can design solutions for complex systems and instructors can teach the
systems as they are (Buitrago Flórez et al., 2017). In this regard, CT encapsulates unique
practices that are consistent with the objectives of 21st century science, technology, engineering,
and mathematics (STEM) education.
While the vision of CT for everyone may be open to scholarly debate, the importance of
learning it is supported by empirical evidence. In a meta-analysis of studies with 539 effect sizes,
Scherer et al. (2019) discovered that computer programming, which
is the predominant approach to developing students’ CT skills (Lye & Koh, 2014), was associated
with improvements in cognitive skills that transfer beyond programming. Similarly, a systematic
review reported positive changes in programming knowledge when students learned CT through game
programming (Denner et al., 2019). The benefits of learning CT extend beyond programming
knowledge to broader educational outcomes, such as problem solving,
communication, and self-management (Popat & Starkey, 2019). Other advantages include
positive attitudes and confidence (Denner et al., 2019). Therefore, it is not surprising that CT has
been included in national curricula, such as the Next Generation Science Standards in the United
States (National Research Council, 2013) and curricula set by European ministries of education
(Bocconi et al., 2016). It offers an invaluable means of tackling challenges facing 21st century
learners and has therefore been embedded in formal curricula (Repenning et al., 2015). Even
non-governmental organisations, such as the International Society for Technology in Education
(ISTE), have recognised the importance of CT and included it in their standards for students
(ISTE, 2016).
Nevertheless, the widespread interest in CT raises questions about how it has been
operationalised in empirical studies. What has been taught, learned, and assessed? These
fundamental questions concerning the definition of CT have persisted. Indeed, Grover and Pea
(2013) expressed similar concerns about the multiple interpretations of CT. It is difficult to compare
findings when faced with the multiple instructional strategies (see Buitrago Flórez et al., 2017;
Hsu et al., 2018; Lye & Koh, 2014), diverse assessment methods (see Cutumisu et al., 2019;
Shute et al., 2017; Tang et al., 2020), and ever-changing pedagogical tools (see Xia & Zhong,
2018; Yu & Roque, 2019; Zhang & Nouri, 2019) adopted in CT research. Furthermore, it is not
clear whether the meaning of CT varies across educational settings, such as teachers’
professional development (Menekse, 2015), higher education (Czerkawski & Lyman, 2015), and
K–12 classrooms. There is no single agreed-upon definition of CT. On the
contrary, CT discourse is laden with diverse models that reflect varying perspectives. Such
multifaceted definitions are not peculiar to CT but normal in educational research, where
phenomena are examined through different lenses. Moreover, stepping back from the perceived
challenges posed by a lack of consensus, the advantages of interpreting concepts through diverse
lenses, including CT, cannot be ignored. Each definition contributes to a deeper understanding of
the dimensions of CT. Standard conceptual models of CT found in studies include (a) the
operational definition and age-appropriate examples produced by the International Society for
Technology in Education and Computer Science Teachers Association (Barr et al., 2011; Barr &
Stephenson, 2011), (b) the Computing At School CT framework (Csizmadia et al., 2015), (c)
Brennan and Resnick’s (2012) CT framework, (d) the CT framework for mathematics and
science developed by Weintrop et al. (2016), and (e) the CT framework developed by Shute et al.
(2017). These models reflect the near consensus that CT goes beyond programming, is not
confined to computer science (Corradini et al., 2017; Lu & Fletcher, 2009; Nardelli, 2019), and is
transferable to problem solving beyond computing.
These models have supported CT research. In the current review, CT was interpreted
according to Wing’s (2006) theoretical proposition. Our choice was informed by the explicit
acknowledgement of Wing’s conceptualisation as the de facto reference for CT in the 21st century
(e.g., Grover & Pea, 2013; Shute et al., 2017). Wing framed CT primarily as a thinking style –
with or without programming a machine – rooted in computer
science concepts. How have these computer science concepts been interpreted – in terms of their
definition, consistency, and interaction across empirical studies? Certainly, any attempt to push
for a universally agreed definition of CT would be a precarious venture. Nevertheless, a critical
gap to be investigated relates to the ways in which CT has been operationalised in empirical
studies, specifically with respect to the constituent components of its definitions. Examining this
gap reveals
how educators have interpreted CT. The findings of this review strengthen knowledge of CT and
contribute to the evolving discourse. In addition, they offer valuable guidance for educators and
researchers designing CT interventions. The review was guided by the following question:
Research Question: How has CT been defined in the literature investigating CT learning and
assessment?
Shute et al. (2017) explored the nature of CT by examining the components, intervention
strategies, and assessment methods. To the best of our knowledge, their quest to demystify CT is
the closest to our study. However, our study is fundamentally different in two ways. (a) It
considered not only the components of CT but also how these components are defined and
interact with each other. (b) The findings of Shute et al. (2017) were based on a review of both
empirical and theoretical works. In contrast, we aimed to identify how CT and its constituent
elements have been operationalised in empirical interventions alone. In other words, we sought
to ascertain, guided by our research question, how CT has been interpreted in practice.
Method
We adopted the Preferred Reporting Items for Systematic Reviews and Meta-Analyses
(PRISMA) statement as the framing guideline. PRISMA was originally designed to provide a
harmonised guideline for researchers conducting systematic reviews and meta-analyses in the
field of medicine (Moher et al., 2009). However, this set of items also offers invaluable guidance
for systematic reviews of research in other fields. Figure 1 summarises our use of PRISMA to identify, screen, and select the eligible studies in this review.
Search Procedures
Our search parameters were influenced by the research question. We searched for articles
that contained the keyword “computational thinking” in one or more of the following four fields:
title, abstract, author keywords, and journal keywords. These fields represent significant parts of
scholarly works that depict the central theme of an article. Note that the enclosing quotation
marks were part of the search term, a convention that restricts the results to exact but case-
insensitive matches. The same search term has been used in previous CT reviews (Hsu et al.,
2018; Shute et al., 2017; Tang et al., 2020). While we found the methodological guidance paper
by Alexander (2020) instructive, its recommendation to use alternate terms for a major construct
did not apply to our review, because the research question was formulated to explore studies with
an explicit interest in CT intervention and assessment. Therefore, a single search term was an
intuitive starting point that matched the scope of our inquiry.
With the search term identified, we executed our search in three key databases that index scholarly
titles and proceedings: Web of Science (WoS), Scopus, and the Education Resources Information
Center (ERIC). Scopus indexes works from more than 24,600 serial titles from over 5,000
publishers (Elsevier, n.d.) and WoS indexes the highest-quality research from more than 12,000
journals across 256 subject domains (Clarivate Analytics, n.d.). ERIC is a respected digital
library that indexes and provides access to educational research for educators and the public.
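For illustration only, the field-restricted keyword search can be expressed as query strings of roughly the following form. This is a minimal sketch assuming the databases' standard field tags (TS in WoS; TITLE-ABS-KEY in Scopus), not a reproduction of the exact saved searches executed for this review.

```python
# Illustrative only: approximate query strings for the single-keyword search,
# assuming each database's standard field tags (not the review's saved searches).
queries = {
    "Web of Science": 'TS=("computational thinking")',    # topic: title, abstract, keywords
    "Scopus": 'TITLE-ABS-KEY("computational thinking")',  # title, abstract, keywords
    "ERIC": '"computational thinking"',                    # exact phrase search
}

for database, query in queries.items():
    print(f"{database}: {query}")
```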
Apart from restricting our search to the title, abstract, author keywords, and journal
keywords, no further restrictions, such as the date of publication or language, were applied
during this stage. Although our investigation was based on Wing (2006), we speculated that it
was unnecessary to set such restrictions. In fact, any pre-2006 articles, or articles written in
languages other than English, that survived the screening might have been useful to our
investigation. The cut-off
date of this identification stage was September 30, 2020, resulting in 4,686 documents: WoS (n
= 1,829), Scopus (n = 2,565), and ERIC (n = 292). We considered extending our search results
through supplementary procedures such as backward and forward citation searching.
However, embarking on such procedures with 4,686 articles was not pragmatic at this stage.
Screening Procedures
With the initial pool of documents identified, we embarked on screening. Tracking back
to the research question, documents were selected if they fulfilled the following two core
inclusion criteria: (a) empirical studies with evidence of learners’ engagement in a learning
intervention and (b) studies that included assessments of learners’ development of CT based on
the intervention.
These inclusion criteria mirrored our intention that studies that combined the learning and
assessment of CT would provide clearer information on how the authors operationalised CT. The
above inclusion criteria were designated ‘core’ because they were compulsory conditions for
selecting documents that answered the research question. However, this also suggests that other
inclusion and exclusion boundaries (e.g., peer-reviewed, source of document) were necessary to
validate the quality of the selected documents and ensure a pragmatic review. Alexander (2020)
referred to these boundaries as delimitations. The rationales for the delimitations are discussed in
the following sections concerning the three sequential screening stages: pre-processing, abstract
screening, and full-text screening.
Pre-processing Stage
Document type filters (for WoS and Scopus) were set to exclude other document types, including conference
proceedings, editorials, meetings, and reviews. The decision to exclude the conference
proceedings was based on their variable quality (Rothstein & Hopewell, 2009). Unfortunately,
some articles that met the inclusion criteria might have been omitted by this delimitation. For
example, an examination of CT development when playing a game (Rowe et al., 2018) and an
examination of the role of CT education in developing algorithmic thinking (Wong & Jiang, 2018)
met the inclusion criteria. However, considering the pragmatism of reviewing more than 2,000
additional documents, we retained this delimitation. Moreover, the risk of publication bias when
proceedings are excluded was diminished, since the question was not
focused on effect size. The decision to exclude reviews and other document types (e.g. Lye &
Koh, 2014; Repenning, 2012; Tissenbaum et al., 2019; Xia & Zhong, 2018) was relatively easy,
because they did not meet the core inclusion criteria. We excluded 1,602 documents: 631 from
WoS, 679 from Scopus, and 292 from ERIC. The final operation in the pre-processing stage was
the removal of duplicates. The articles were aggregated, and duplicates (n = 636) were removed.
Abstract Screening Stage
In this stage, we read the abstracts of the remaining 966 articles and screened out
documents that failed to meet the two core inclusion criteria. We excluded reviews/meta-
analyses (e.g. Scherer et al., 2019; Zhang & Nouri, 2019), theoretical discourse (e.g. Voogt et al.,
2015; Yadav et al., 2016), studies of CT instrument development (e.g. Korkmaz et al., 2017;
Román-González et al., 2017), studies of non-learning CT interventions (e.g. Israel et al., 2015),
and studies that did not assess CT (e.g. Chang, 2014). The abstract screening culminated in the
retention of 213 articles for full-text review.
Full-text Screening Stage
The last screening procedure, full-text screening, provided a finer layer of inspection –
an important process to assess the eligibility of articles when inclusion decisions could not be
made based on their abstracts. Of the 213 remaining articles, 68 were excluded because they did
not assess CT skills or focused substantially on programming. For example, Kalelioğlu (2015)
investigated how teaching K–8 introductory computer science based on code.org influenced the
development of students’ reflective thinking skills towards problem solving. Although CT was
mentioned approximately 12 times in Kalelioğlu’s study, the full-text screening excluded the
article because it assessed reflective thinking with a self-reported scale rather than CT. Similarly,
other studies that measured related
outcome variables but not CT skills were excluded. Examples included interest (e.g., Kong et al.,
2018), self-efficacy (e.g., Psycharis & Kallia, 2017), and engagement (e.g., Liu et al., 2017).
Another 16 studies were excluded because they used self-report measures to assess CT. Many of
the articles in this category used the CT scale developed by Korkmaz et al. (2017) or variants of
this scale. We excluded an additional 22 articles because their full texts were reported in
languages other than English or were not available. We excluded 26 studies because they did not
include a CT learning intervention.
Finally, after the full-text screening, we scoured the contents of journals that appeared
more than once and looked at the publication records of authors who had contributed two or
more studies. All the identified articles were found in our initial pool. This could be attributed to
the increasing sophistication of database search algorithms and our use of a single search term.
The screening phase culminated in the inclusion of 81 eligible studies (213 − 68 − 16 − 22 − 26 = 81)
to be charted and analysed.
After identifying eligible articles, we captured the information required to address our
research question. Arguably, this was the most important stage of the study. It entailed several
iterations of reading, analysis, and discussion of the 81 eligible articles by the research team. Due
to the large number of articles designated for comprehensive review, we charted our data using a
spreadsheet. The articles were catalogued to gain insight into their profiles and identify potential
differences in CT operationalisation.
The captured variables were (a) authors, (b) title of article, (c) year of publication, (d) type of study,
(e) number of participants, (f) duration of intervention, (g) research setting (i.e. in-class, after-school, or camp), (h)
academic domain, (i) pedagogical approach, (j) nature of learning task (i.e. programming, non-
programming, or both), (k) gender of participants (i.e. male, female, or mixed), (l) grade level of
participants (i.e. kindergarten, lower elementary, upper elementary, middle school, high school,
college), (m) nature of assessment task (i.e. programming, non-programming, or both), and (n) definition of CT.
With respect to the research setting, studies conducted on school premises and within the
regular class activities were coded as ‘in-class’. Studies conducted outside regular class hours
such as extracurricular activities but on school premises were coded as ‘after-school’. When the
studies were conducted during school holidays or outside the participants’ institution, the research
setting was coded as ‘camp’.
To examine study type, articles were coded into three categories: experimental, quasi-experimental,
and exploratory. In the first two categories, the participants were
split into separate groups (experimental versus control) and received different treatments. We
coded the studies as experimental if the participants were randomly assigned to the groups;
otherwise, the quasi-experimental label was assigned. The exploratory category comprised
studies that investigated certain constructs or practices but did not establish experimental or
control groups.
The nature of the learning and assessment tasks emerged as an important parameter.
Although we had not set this variable at the outset, the iterative analysis flagged it as a construct
of interest. We observed that learning and assessment tasks in the studies could be classified as
either programming or non-programming tasks. If the participants wrote computer code as part
of the learning activities (drag-and-drop or text), the nature of the learning task was coded as
programming. Likewise, when writing computer code formed part of the assessment, the nature of the assessment task was coded as programming.
Definitional Coding
Guided by the primary objective of our review – to investigate how CT has been defined,
we were confronted with the challenge of making sense of a large number of studies. Dinsmore
et al. (2008) encountered similar difficulty in their systematic review to disentangle the constructs of metacognition, self-regulation, and self-regulated learning.
Informed by Murphy and Alexander’s (2000) definitional coding scheme, which queried whether
key constructs were defined at all, Dinsmore et al. (2008) sorted definitions into two categories –
implicit or explicit. They sub-categorised implicit definitions as
conceptual definitions (i.e. the construct definition was inferred from words or phrases in the
text), referential definitions (i.e. the construct definition was deduced from a referenced work),
or measurement definitions (i.e. the construct definition was deduced from a measurement
instrument).
In contrast, Singer and Alexander (2017) were interested in deconstructing the meaning of reading
and coded definitions as conceptual (what is reading?), componential (what does reading entail?),
operational (how does reading occur?), or multifaceted (combinations of the other types of
definitions). Based on these definitional coding schemes (Dinsmore et al., 2008; Murphy &
Alexander, 2000; Singer & Alexander, 2017), we developed a three-tiered coding
scheme for definitions of CT in the reviewed studies, focusing on the explicitness, nature, and
position of definitions.
Explicitness was established if the researchers expressly stated the meaning or referenced
the definition of CT. Clearly and deliberately stated definitions were coded as explicit, while
inferential or implied definitions were coded as implicit. An additional code – referential – was
assigned to the definitions when the final meanings were derived from other cited works.
Definition nature, the second tier, was classified as conceptual, componential, or multifaceted.
The conceptual code was assigned when the definition broadly addressed the question ‘what is CT?’
The componential code was assigned when CT was interpreted as a
composite of other elements, constructs, concepts, or dimensions. The multifaceted code was
assigned when the definition combined conceptual and componential characteristics.
Definition position – the last tier in our three-tiered model – was classified as theoretical,
intervention, or assessment. An article was coded as theoretical when its definition of CT was
identified from the framework or literature review that supported the study; as intervention when
the definition was constructed from the learning or teaching experiment; and as assessment when
the definition was deduced from the measurement instrument. In an ideal scenario, the definition
in an article would be consistent across the three positions. As the positional values were not
mutually exclusive, a priority ranking of theoretical (rank = 1), intervention (rank = 2), and
assessment (rank = 3) was intuitive. We reasoned that definitions of the focal construct (CT) should ideally be
established in the theoretical framework of an article. Our rationale for this priority ranking was
connected to the general convention for reporting empirical studies of interventions. Researchers
typically define the focal construct in the theoretical framing, design an intervention to develop that
construct, and finally measure the effect of the intervention with instruments designed to assess
the construct. Following the above ranking, the code ‘theoretical’ indicated that the definition
was given in the theoretical background or literature review section of the article. However, this
code did not indicate whether the definition was also provided in the intervention or assessment
section. The code ‘intervention’ indicated that the definition was given in the section on
implementing the intervention. While this suggested that the definition was not provided in the
theoretical discussion, it did not indicate whether the definition was provided in the assessment
section. The code ‘assessment’ implied that CT was defined in the assessment section only.
To illustrate the scheme, consider a study in which, within the TangibleK curriculum, "four key
variables were observed and assessed: debugging, correspondence, sequencing, and control flow"
(Bers et al., 2014, p. 149). This statement was part of the framing of the study. We classified this
as an explicit definition. Although there was no direct ‘CT is …’ phrase, the meaning of CT could
be unambiguously decoded from the quotation, which was clearly aligned with the assessed
components (i.e. debugging, correspondence, sequencing, and control flow). Therefore, the nature
of the definition was coded as componential. The position of the definition was best described as
theoretical because it was part of the theoretical background of the article.
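To make the three-tiered scheme concrete, it can also be written down as a simple coding record. The sketch below is our own illustration in Python (the record and field names are not part of any published instrument), with the Bers et al. (2014) example coded as described above.

```python
from dataclasses import dataclass
from typing import Optional

# Allowed values for each tier of the coding scheme
EXPLICITNESS = {"explicit", "implicit", "undefined"}
NATURE = {"conceptual", "componential", "multifaceted"}
POSITION = {"theoretical", "intervention", "assessment"}

@dataclass
class DefinitionCode:
    """Definitional coding of one reviewed article (illustrative structure)."""
    study: str
    explicitness: str                # explicit | implicit | undefined
    referential: bool                # True if the meaning came from a cited work
    nature: Optional[str] = None     # conceptual | componential | multifaceted
    position: Optional[str] = None   # theoretical | intervention | assessment

    def __post_init__(self) -> None:
        # Guard against values outside the coding scheme.
        assert self.explicitness in EXPLICITNESS
        assert self.nature is None or self.nature in NATURE
        assert self.position is None or self.position in POSITION

# The worked example above, expressed in this structure.
bers_2014 = DefinitionCode(
    study="Bers et al. (2014)",
    explicitness="explicit",
    referential=False,
    nature="componential",   # debugging, correspondence, sequencing, control flow
    position="theoretical",
)
```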
In terms of reliability, the coding and analysis process of the eligible articles involved
several rounds of collaborative discussion and refinement. We considered this rigorous means of
achieving a consensus on the coding as a proper reflection of reliability when multiple values were plausible.
Results
Table 1 summarises the features of the studies (n = 81) in the final reviewed sample.
Although our focus was on understanding how CT has been operationalised in empirical studies
in the wake of Wing’s (2006) call, our attention was immediately drawn to the absence of studies
that implemented learning in tandem with the assessment of CT prior to 2013. Thereafter, the
focus of CT research expanded. As we did not delimit our literature search by period, the paucity
of studies that reported CT assessment before 2013 indicates that this period can be regarded as
the infancy phase of CT development. At this juncture, our interest was drawn towards
understanding the directions of research in the infancy stage as a precursor to making sense of
later developments. Studies in this early period mostly measured self-reported CT dispositions and
other educational outcomes. Most of the studies in our initial pool were proceedings or theoretical
works aimed at establishing the importance and meaning of CT, redesigning CT instructional
strategies (e.g. the three-phase ‘use, modify, create’ pedagogical
framework proposed by Lee et al. (2011), representing the different cognitive levels of learning
CT), and other plausible ways of introducing CT to students (e.g. Repenning’s (2012) scalable
game design).
We also noted the variety of instructional strategies deployed to facilitate the acquisition
of CT skills. Effective teaching is an art that combines multiple pedagogical approaches to suit
learners and learning content. Therefore, it would be difficult to infer the causative influence of
individual instructional strategies on CT. Nevertheless, the
majority of the studies reported positive learning outcomes and highlighted the adaptability of
CT to various teaching styles. Instructional strategies explicitly deployed in three or more studies
are shown in Table 1. Other pedagogical approaches used to teach CT included feedback,
portfolio design, peer-to-peer assessment, peer review, peer coaching, inquiry-based learning,
project-based learning, debate, jigsaws and other puzzles, prototyping, mentoring, metaphor use,
personalised learning, role play, role models, storytelling, and flipped learning.
With respect to gender, more than 90% of the studies were conducted with mixed-gender
samples. There was only one study with male-only participants (Munoz et al., 2018) and one
with female-only participants (Luo et al., 2020). Although we did not record the ratio of the male
to female participants in the individual studies in our chart, which limits the ability to infer
gender balance in CT, the deliberate effort to ensure equality is encouraging. A significant number
of studies (77.78%, n = 63) were situated in school settings, indicating the positive reception of
CT in formal education. More than half of the studies (59.26%, n = 48) did not report how CT
interfaced with academic domains or school
subjects, which extends the conceptualisation of CT as a generic skill. The studies that reported
how CT interfaced with academic domains were classified into the four domains of STEM:
technology (17.28%, n = 14), science (12.35%, n = 10), engineering (7.41%, n = 6), and
mathematics (2.47%, n = 2). Four studies explored CT acquisition outside the traditional STEM
domains: psychology (Yadav et al., 2014), dance (Leonard et al., 2020), story-writing (Price &
Price-Mohr, 2018), and professional development (Yadav et al., 2018). In summary, the studies
demonstrated the adaptability of CT to academic domains within and beyond STEM.
While the small number of investigations in kindergartens can be attributed to the probable
cognitive load associated with learning CT, we were surprised by the paucity of interventions
with high school students. A plausible explanation is the rigidity of curricula in high schools,
where most students have concrete career directions in mind and computer science is an elective subject.
Definition Explicitness
For clarity, the categorisation of definitions as explicit did not necessarily suggest that the
authors presented their own definitions of CT or used definitional phrases such as ‘CT is …’.
Rather, it indicated that the meaning of CT was clearly articulated in the authors’ descriptions of
CT – whether it was conceived by the authors or extended from other references. For example,
pointers such as "CT-test assesses the user’s computational thinking level on five dimensions"
(Taylor & Baek, 2019, p. 101) and “design for the programming for CT development curriculum
was based on a framework adapted from Brennan and Resnick (2012)” (Kong et al., 2020, p. 5)
were coded as explicit definitions.
Based on our coding, we found that 70.37% of the studies (n = 57) offered explicit
definitions. Implicit definitions were found in 25.93% of the studies (n = 21). The definitions in
the remaining 3.70% of the studies (n = 3) were unclear and designated as undefined. All the
implicit and undefined categories were logged as non-referential definitions because the meanings
of CT (or its absence) were deduced directly from the primary studies rather than from cited works.
Of the studies that explicitly defined CT (n = 57), 68.42% (n = 39) gave referential
definitions, in contrast with 31.58% (n = 18) that were coded as non-referential definitions. This
indicated that more than two thirds of the studies that explicitly defined CT adduced its
operational meaning from existing CT frameworks. While this finding may be encouraging and
aligned with the golden rule of hinging educational research on sound theory, other explanations
are possible. As we demonstrate in a later section, this trend may also be linked to the blurry
boundary between CT and programming.
Only four studies within the explicit category offered conceptual or multifaceted definitions
(Allsop, 2019; Kim et al., 2013; Rodríguez del Rey et al., 2020; Yadav et al., 2018). Kim et al.
(2013) conceptualised CT as a thinking ability that “every human being has” (p. 443) in an
experimental investigation of the effectiveness of paper-and-pencil programming over Logo in
preservice teachers’ classrooms.
skills acquired by the preservice teachers using the Group Assessment of Logical Thinking, an
instrument designed primarily to measure logical reasoning skills. This suggests that the authors
inferred a strong association between CT and logical thinking. Another explicit conceptual
definition was reported in a study of a professional development programme. Yadav et al. (2018)
defined CT as “the ways of thinking, or habits of mind, that computer scientists use” (p. 379),
which is consistent with Wing’s (2006) seminal interpretation of CT. This broad definition was
crafted to lead in-service teachers to integrate CT with mathematics and science while reflecting on
its meaning. Through vignette challenges, Yadav et al. (2018) deduced that CT is programming
and problem-solving; that it involves the use of logic, algorithms, data manipulation, and pattern
recognition.
The other two explicit definitions were most appropriately classified as multifaceted. Allsop
(2019) framed CT as a thinking process regulated by metacognitive practices with the aim of using
computational concepts to develop solutions to problems. According to Rodríguez del Rey et al.
(2020), “CT is a cognitive process
executed by humans for the resolution of diverse problems using computational concepts” (p. 3).
While both multifaceted definitions highlighted the cognitive aspect of CT, which is consistent
with the assumption of a relationship between CT and logical thinking (Kim et al., 2013), we noted
that Allsop (2019) and Rodríguez del Rey et al. (2020) implemented CT concepts differently, with
the latter focusing on data-related concepts.
Table 2 summarises the explicit componential definitions of CT that did not originate from other
works. These definitions expressed how the researchers operationalised CT in their
studies without wholly adopting pre-existing frameworks. Of course, this does not suggest that
the authors failed to situate their work in relation to existing frameworks. Rather, these
definitions reflected the meanings that the authors assigned to CT based on their review of the
literature. Although CT components varied across the studies, several CT concepts (also referred
to as components or constructs) recurred. As shown in Table 2, while there was no consensus on a
definition of CT, the factors considered were highly
consistent, signifying that most of the CT components identified in these studies were
meaningfully aligned with computer science concepts and practices. They were also consistent
with sets of components identified in previous review articles, such as the five components
(including generalisation) in Shute et al. (2017). Conceptual consistency denoted alignment with
computer science concepts, irrespective of how they were implemented in the studies. A notable
exception to the conceptual consistency was a study
that explored the effects of an identification, modelling, simulation, and prototyping teaching
model in a college design course (Chen et al., 2020). CT was operationalised in the assessment
through less conventional components, such as simplification and transformation.
We deduced implicit componential definitions in eight studies. For example, in a study investigating the relationship
between spatial reasoning and CT with primary school students (Città et al., 2019), CT was not
explicitly framed. However, the CT test measured the students’ ability to write and interpret
algorithms on paper and CT was implicitly coded as a composite of sequences and algorithms.
Although implicit definitions are less desirable for framing constructs in educational research, the implicit components of CT were generally consistent with
the explicit components (see Table 2). An exception to the consistency was González-González
et al. (2019), who included software and hardware amongst the components of CT. While these
are not part of the conventional corpus of computer science concepts associated with CT, it
appears that the authors included these components to make the learning intervention relatable to
the learners.
The articles in a further category (n = 13) asserted that participants engaged in CT learning
interventions and were assessed afterwards. However, we could not deduce any explicit
definitions or implied components of CT from these articles. The teaching and learning of
programming represented the common trend in these studies, such as the use of Python in a
college bioscience course (Libeskind-Hadas & Bush, 2013). Therefore, articles that focused on
programming without articulating CT dominated this category.
Basogain et al. (2018) reported the use of learning strategies such as blended learning and
peer-to-peer assessment in their CT study. As part of the learning and assessment, the participants
wrote computer programs.
Unfortunately, the features of the rubric used in the peer-to-peer assessment were not described.
Other related studies compared how programming via virtual and physical robotic platforms
supported CT development in middle school (Berland & Wilensky, 2015) and investigated the
roles of bodily engagement and computational perspectives in the learning of CT with a code-based
activity.
Although the authors’ intention in the studies in this category was to explore CT, they
focused on programming. While programming may be a viable approach to learning CT (Lye &
Koh, 2014), this focus blurred the boundary between programming and CT.
Referential Definitions
Referencing established frameworks situates CT definitions in systematic and meaningful ways. As stated earlier, the majority (68.42%, n = 39) of articles
within the explicitly defined category extracted CT definitions from previous studies or
frameworks. Of the 39 articles, 35.90% (n = 14) gave these definitions in the theory section. On
the other hand, 10.26% (n = 4) extracted their definitions from the implemented intervention,
while a remarkable 53.85% (n = 21) of the definitions were deduced from the assessment.
Table 3 lists the frameworks cited by the articles in the final pool. Brennan and Resnick’s
(2012) framework was the most frequently cited. They framed CT in three dimensions: concepts
(i.e. the knowledge that designers have and apply when they program), practices (i.e. what
designers do when programming), and perspectives (i.e., how designers see themselves and their
environments). This componential definition has been used to frame CT in diverse scenarios. For
example, it was adapted to design programming activities for teacher development programmes
in primary schools (Kong et al., 2020) and to understand differences by gender in eye tracking
when programming (Papavlasopoulou et al., 2020). The prevalence of Brennan and Resnick’s
(2012) framework may be linked to the wide adoption of Scratch. Scratch is a free block-based
visual programming platform for children, designed in compliance with the ‘low threshold, high
ceiling’ requirement for CT tools (Repenning et al., 2010), and has several million users. The
framework and Scratch are linked outputs of design-based learning research at the MIT Media Lab.
As well as framing empirical studies, Brennan and Resnick’s (2012) framework offers a
reliable model for interpreting CT in review articles (e.g. Lye & Koh, 2014; Zhang & Nouri,
2019). Underpinned by this framework, Lye and Koh (2014) reviewed 27 empirical studies to
unmask the connections between CT and programming in K–12 education. Zhang and Nouri
(2019) used the framework to identify CT skills learned via K–9 Scratch in their systematic
review of 55 empirical studies. These researchers found that all of the components of Brennan
and Resnick’s (2012) framework were successfully captured in the reviewed empirical studies.
While CT concepts were predominantly explored in the literature, CT perspectives were the least
studied. Zhang and Nouri (2019) identified additional skills in the literature missing from
Brennan and Resnick’s (2012) framework. They suggested a revision of the framework that would
add input and output to CT concepts; user interaction to CT perspectives; and multimodal design,
predictive thinking, and the ability to read, interpret, and communicate code to the CT practice
dimension. Whether or not the framework is revised, these suggestions revealed that there may still be unknown dimensions of CT.
One article (Newton et al., 2020) referenced Repenning et al. (2015) as the framework for
analysing CT in students’ games but assessed abstraction, algorithms, and learning transfer as
factors of CT. The authors presented these factors as the summative categories from the CT
pattern of Repenning et al. (2015). Other frameworks cited included Dr. Scratch (Moreno-León
et al., 2015), CT test (Román-González et al., 2017), and Bebras (Dagienė & Sentance, 2016).
These three models were used in 46.15% of the articles that defined CT referentially.
Collectively, their designs incorporated measurement instruments for CT, especially Dr. Scratch,
which automatically analyses Scratch projects. The substantial number of articles adopting these
assessment-oriented frameworks is in line with the 53.85% of referential articles whose definitions
were deduced from the assessment section. This supports our initial observation that referential
definitions may be linked to the blurry boundary between CT and programming.
Across the reviewed studies, CT was predominantly interpreted as a composite of
components. This interpretation was shared by most of the researchers but contrasted with the few
studies that offered explicit conceptual definitions of CT; 3 did not define CT. As most of the
researchers interpreted CT componentially, we classified components that appeared in more
than 5% of the articles as significant. In contrast, non-significant components were those that
appeared in fewer than 5% of the articles. Table 4 shows the aggregated frequency of the
observed components and Figure 2 is a simple word cloud visualisation of the components of
CT. In a word cloud, the font weight of a word is directly proportional to its frequency.
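For illustration, the frequency-to-font-weight mapping can be reproduced in a few lines of code. The sketch below is not the procedure used to produce Figure 2; it assumes the open-source wordcloud Python package and uses only a subset of the Table 4 frequencies.

```python
# Illustrative sketch: build a word cloud from component frequencies
# (a subset of Table 4) using the open-source `wordcloud` package.
from wordcloud import WordCloud

frequencies = {
    "abstraction": 38, "sequence": 31, "conditional": 26, "algorithm": 24,
    "loop": 23, "parallelism": 23, "debugging": 22, "data": 20,
    "decomposition": 16, "iteration": 15, "generalisation": 11, "variable": 5,
}

# Font size scales with frequency, so frequent components dominate the image.
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(frequencies)
cloud.to_file("ct_components_wordcloud.png")
```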
Significant components, such as abstraction, algorithm, decomposition, and debugging, are prominent dimensions of common CT models (e.g. Brennan
& Resnick, 2012; ISTE & CSTA, 2011; Shute et al., 2017). Although the significant components
seemed to overlap with the existing models, the sizeable number of non-significant components
explains the challenges in delineating a CT model. Given that computer science concepts are
numerous and diverse, such variation is unsurprising.
Several of the reviewed studies equated learning CT with programming, such as the use of
Python in a college bioscience course (Libeskind-Hadas & Bush, 2013) and the programming of robots as part of
the development of a CT instrument (Chen et al., 2017). The above finding is consistent with the
observation that programming is the main method of teaching and learning CT (Lye & Koh,
2014). However, the distinction between CT and programming was narrow in these studies,
especially when the most prominent feature of CT is “conceptualizing, not programming” (Wing,
2006, p. 35).
There is a near consensus among researchers that CT is a thinking style for solving
problems with or without programming. We discovered that the meaning of CT in the reviewed
articles’ theoretical discussions often did not match the operationalised meaning. Conceptually,
the researchers understood that CT is not equivalent to programming. However, several studies
assessed CT through programming artefacts, and some studies that did not require learners to write
code still framed their tasks around programming. Perhaps Wing’s (2006) description of CT as
solving problems by “drawing on the concepts fundamental to computer science” (p. 33)
illuminates the overlap between CT and programming. Computer
science is overly broad. Its branches include but are not limited to databases, networking,
software engineering, graphics, artificial intelligence, virtual reality, cryptography, and computer
security (Barr & Stephenson, 2011). Professionals in different branches of computer science
require different skill sets. For example, the knowledge and skills required for the effective
practice of cryptography versus artificial intelligence are different. Nevertheless, the different
branches of computer science require some common skills, such as programming, from
practitioners. Indeed, programming is an indisputable learning module and essential skill for computer scientists.
Considering the diverse nature of computer science, only a subset of the concepts and
practices in this domain may qualify as components of CT. Otherwise, various computer science
concepts that did not appear in our analysis of componential definitions (e.g. inheritance) would
have featured in the reviewed studies. Table 4 shows the components of CT that were extracted from the reviewed empirical
studies. In our view, these components closely reflect the programming concepts that computer
scientists use to solve problems, irrespective of the specific domain. Therefore, we deduce that
these programming concepts are most appropriate for describing CT. This also explains the close
association between CT and programming. By ‘programming’, we mean not only writing code
(coding) but also the series of activities involved in software engineering, from ideation to testing
and deployment.
Here, we summarise what the systematic review of the literature taught us about CT and
propose a working definition to explain and frame the nature of CT. This definition was deduced from two principles on which
researchers have generally agreed. The first principle is that CT is a skill underpinned by
computer science concepts. Concisely, programming concepts are the underlying support for CT.
The second principle positions CT as a relevant cognitive skill that is transferable to problem
solving in other domains. We emphasise ‘other domains’ because this differentiates CT from
programming. Undoubtedly, programming is a useful skill and will remain a core method of acquiring CT.
However, the ability to use this skill for everyday problem solving distinguishes CT from
programming.
Accordingly, we frame CT as the use of programming concepts to develop algorithmic
solutions for problems in different knowledge areas. By ‘algorithmic solution’, we refer to the
“series of steps that control some abstract machine or computational model without requiring
human judgement” (Denning, 2017, p. 33). This helps to distinguish an algorithm from just any
sequence of steps.
What constitutes programming concepts varies greatly (see Table 4) and cannot be
conclusively bound in a finite list. Moreover, some concepts can be operationalised in diverse
ways. For example, abstraction is a key concept in programming (Kramer, 2007), generally
understood as the practice of focusing on major details while concealing the peripherals.
However, abstraction occurs with different degrees of granularity, and practices such as
modularisation, problem decomposition, pattern generation, and problem formulation are all
manifestations of an abstraction-oriented approach.
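As a concrete, purely illustrative example of how these abstraction practices surface in code, consider the short sketch below. The scenario and function names are hypothetical and not drawn from any reviewed study; what it shows is the decomposition of a task into small, named steps that hide their internal details.

```python
# Hypothetical example: problem decomposition and modularisation as forms of
# abstraction. Each helper hides its details behind a descriptive name.

def parse_scores(raw: str) -> list[float]:
    """Abstract away the cleaning of raw, comma-separated input."""
    return [float(item) for item in raw.split(",") if item.strip()]

def mean(values: list[float]) -> float:
    """Abstract away the arithmetic of averaging."""
    return sum(values) / len(values) if values else 0.0

def report(raw: str) -> str:
    """The top level reads as the algorithm: parse, aggregate, present."""
    return f"average = {mean(parse_scores(raw)):.2f}"

print(report("3, 4.5, 5"))  # average = 4.17
```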
We embarked on this review with the goal of identifying ambiguities and gaining clarity
in the operationalisation of CT; a 21st century skill for navigating daily procedural tasks
efficiently and a cognitive ability transferable to problem solving in other domains (Denning,
2017). How have researchers operationalised CT in empirical studies? In our own practice, we
have long faced the challenge of distinguishing between CT and computer programming. Indeed,
this challenge is not peculiar to us and has often been acknowledged in the CT literature. For example,
consider how Nardelli (2019) described Wing’s (2006) seminal article as a call to “start rolling
the ball” towards the introduction of computer science and informatics for all school students (p.
33). While acknowledging Nardelli’s (2019) argument that it may be risky to emphasise a few
aspects of Wing’s (2006) model and promote CT as a distinct subject, alternative interpretations
– such as the idea that CT is for everyone, not only computer scientists – cannot be overlooked
when programming and computer science are distinct subjects with considerably high cognitive
demand.
Like any review, this study has limitations and delimitations. The major constraint we established was the exclusion of studies
that did not address the learning and assessment of CT. We also restricted our search to peer-
reviewed articles. Inevitably, these delimitations excluded reviews and theoretical literature.
These decisions led to the exclusion of some articles that may have offered additional insights
into the subject. For example, Zhong et al. (2016) investigated how pair programming aided in
the development of CT, but their study was excluded because it did not measure CT. However,
our decision to impose these delimitations was strongly tied to our core inquiry into how CT has
been operationalised in empirical studies. While we acknowledge the roles of reviews and
theoretical articles in framing this exercise, they were justifiably excluded from the analysis.
Papert’s (1980) constructionist approach suggests that children can develop a special reasoning style through the
epistemic practice of teaching computers to think, which can increase children’s confidence in
solving problems with different cognitive styles. In our interpretation, teaching computers to
think is analogous to computer programming, and the reasoning style described by Papert (1980)
is equivalent to CT as described by Wing (2006). CT derives its popularity from the consensus
that it is not limited to programming but a cognitive skill for everyone that is transferable to
other learning domains. However, most studies in this area have concentrated on programming.
Although it is indisputable that a strong association exists between CT and programming, future
research should explore other ways in which CT can be operationalised and transferred to other
domains.
Although the components of CT identified in the reviewed studies varied considerably, they were
aligned with the concepts that computer scientists use when writing programs. Future research
should examine whether the components of CT are consistently operationalised both within and
between studies. Relating definitional choices to learning outcomes will deepen understanding of
CT. Considering the sizeable number of definitions deduced from the interventions or assessment
instruments, researchers will add clarity to CT discourse by articulating their definitions when
theorising future studies.
Our overarching aim in seeking definitional clarity was to enhance the teaching, learning,
and assessment of CT. As stated earlier, our objective was not to seek a consensus on a definition
of CT. In fact, based on our analysis of the trends in componential definitions, there is no sign of
convergence towards a single definition. Rather, our findings frame CT as algorithmic problem
solving, supported by various programming concepts. Within this framing, CT can be effectively
taught, learned, and assessed. Thereby, researchers and instructors can avoid common pitfalls in conceptualising CT.
References
Alexander, P. A. (2020). Methodological guidance paper: The art and science of quality
https://doi.org/10.3102/0034654319854352
https://doi.org/10.1016/j.ijcci.2018.10.004
Angeli, C., & Valanides, N. (2019). Developing young children's computational thinking with
Atmatzidou, S., & Demetriadis, S. (2016). Advancing students’ computational thinking skills
through educational robotics: A study on age and gender relevant differences. Robotics
Atmatzidou, S., & Demetriadis, S. (2017). A didactical model for educational robotics activities:
Barr, D., Harrison, J., & Conery, L. (2011). Computational thinking: A digital age skill for
Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved
and what is the role of the computer science education community? ACM Inroads, 2(1),
48-54.
Basogain, X., Olabe, M. Á., Olabe, J. C., & Rico, M. J. (2018). Computational thinking in pre-
https://doi.org/10.1016/j.chb.2017.04.058
Berland, M., & Wilensky, U. (2015). Comparing virtual and physical robotics environments for
Bers, M. U., Flannery, L., Kazakoff, E. R., & Sullivan, A. (2014). Computational thinking and
Bocconi, S., Chioccariello, A., Dettori, G., Ferrari, A., & Engelhardt, K. (2016). Developing
https://doi.org/10.2791/792158
Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the
Buitrago Flórez, F., Casallas, R., Hernández, M., Reyes, A., Restrepo, S., & Danies, G. (2017).
Bull, G., Garofalo, J., & Nguyen, N. R. (2020). Thinking about computational thinking. Journal
https://doi.org/10.1080/21532974.2019.1694381
Calderon, A. C., Skillicorn, D., Watt, A., & Perham, N. (2020). A double dissociative study into
Carnegie Mellon Center for Computational Thinking. (n.d.). What is computational thinking?
Chang, C.-K. (2014). Effects of using Alice and Scratch in an introductory programming course
https://doi.org/10.2190/EC.51.2.c
Chen, G., He, Y., & Yang, T. (2020). An ISMP approach for promoting design innovation
capability and its interaction with personal characters. IEEE Access, 8, 161304-161316.
https://doi.org/10.1109/ACCESS.2020.3019290
Chen, G., Shen, J., Barth-Cohen, L., Jiang, S., Huang, X., & Eltoukhy, M. (2017). Assessing
https://doi.org/10.1016/j.compedu.2017.03.001
Città, G., Gentile, M., Allegra, M., Arrigo, M., Conti, D., Ottaviano, S., Reale, F., & Sciortino,
Clarivate Analytics. (n.d.). Web of Science Core Collection. Retrieved October 11, 2020 from
https://clarivate.com/webofsciencegroup/solutions/web-of-science/
Corradini, I., Lodi, M., & Nardelli, E. (2017). Conceptions and misconceptions about
computational thinking among Italian primary school teachers. Proceedings of the 2017
Csizmadia, A., Curzon, P., Dorling, M., Humphreys, S., Ng, T., Selby, C., & Woollard, J.
Cutumisu, M., Adams, C., & Chang, L. (2019). A scoping review of empirical research on recent
Czerkawski, B. C., & Lyman, E. W., III. (2015). Exploring issues about computational thinking
Dagienė, V., & Sentance, S. (2016). It’s computational thinking! Bebras tasks in the curriculum.
46747-4_3
Denner, J., Campe, S., & Werner, L. (2019). Does computer game design and programming
Dinsmore, D. L., Alexander, P. A., & Loughlin, S. M. (2008). Focusing the conceptual lens on
Elsevier. (n.d.). Getting the most out of published research. Retrieved October 11, 2020 from
https://www.elsevier.com/solutions/scopus/how-scopus-works
Computational thinking and Down syndrome: An exploratory study using the KIBO
Grover, S., & Pea, R. (2013). Computational thinking in K-12: A review of the state of the field. Educational Researcher, 42(1), 38-43.
Guzdial, M. (2008). Paving the way for computational thinking. Communications of the ACM,
Hsu, T.-C., Chang, S.-C., & Hung, Y.-T. (2018). How to learn and how to teach computational
thinking: Suggestions based on a review of the literature. Computers & Education, 126,
296-310. https://doi.org/10.1016/j.compedu.2018.07.004
International Society for Technology in Education. (2016). ISTE standards for students.
International Society for Technology in Education, & Computer Science Teachers Association.
http://www.iste.org/docs/ct-documents/computational-thinking-operational-definition-
flyer.pdf?sfvrsn=2
Israel, M., Pearson, J. N., Tapia, T., Wherfel, Q. M., & Reese, G. (2015). Supporting all learners
Kalelioğlu, F. (2015). A new way of teaching programming skills to K-12 students: Code.org.
Kim, B., Kim, T., & Kim, J. (2013). Paper-and-pencil programming strategy toward
Knuth, D. E. (1974). Computer science and its relation to mathematics. The American
https://doi.org/10.1080/00029890.1974.11993556
Kong, S.-C., Chiu, M. M., & Lai, M. (2018). A study of primary school students' interest,
https://doi.org/10.1016/j.compedu.2018.08.026
Kong, S.-C., Lai, M., & Sun, D. (2020). Teacher development in computational thinking: Design
and learning outcomes of programming concepts, practices and pedagogy. Computers &
Kong, S.-C., & Lao, A. C.-C. (2019). Assessing in-service teachers’ development of
Korkmaz, Ö., Çakir, R., & Özden, M. Y. (2017). A validity and reliability study of the
https://doi.org/10.1016/j.chb.2017.01.005
Kramer, J. (2007). Is abstraction the key to computing? Communications of the ACM, 50(4), 36-
42. https://doi.org/10.1145/1232743.1232745
Lee, I., Martin, F., Denner, J., Coulter, B., Allan, W., Erickson, J., Malyn-Smith, J., & Werner,
L. (2011). Computational thinking for youth in practice. ACM Inroads, 2(1), 32-37.
https://doi.org/10.1145/1929887.1929902
Leonard, A. E., Daily, S. B., Jörg, S., & Babu, S. V. (2020). Coding moves: Design and research
https://doi.org/10.1080/15391523.2020.1760754
Libeskind-Hadas, R., & Bush, E. (2013). A first course in computing with applications to
Liu, C.-C., Chen, W.-C., Lin, H.-M., & Huang, Y.-Y. (2017). A remix-oriented approach to
Looi, C.-K., How, M.-L., Longkai, W., Seow, P., & Liu, L. (2018). Analysis of linkages between
Lu, J. J., & Fletcher, G. H. L. (2009). Thinking about computational thinking. ACM SIGCSE
Luo, F., Antonenko, P. D., & Davis, E. C. (2020). Exploring the evolution of two girls’
Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking
through programming: What is next for K-12? Computers in Human Behavior, 41, 51-61.
https://doi.org/10.1016/j.chb.2014.09.012
Menekse, M. (2015). Computer science teacher professional development in the United States: A
review of studies published between 2004 and 2014. Computer Science Education, 25(4),
325-350. https://doi.org/10.1080/08993408.2015.1111645
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA Group. (2009). Preferred
reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS
Moreno-León, J., Robles, G., & Román-González, M. (2015). Dr. Scratch: Automatic analysis of
Mouza, C., Pan, Y.-C., Yang, H., & Pollock, L. (2020). A multiyear investigation of student
https://doi.org/10.1177/0735633120905605
Munoz, R., Villarroel, R., Barcelos, T. S., Riquelme, F., Quezada, A., & Bustos-Valenzuela, P.
https://doi.org/10.1109/access.2018.2877417
https://doi.org/10.1006/ceps.1999.1019
Nam, K. W., Kim, H. J., & Lee, S. (2019). Connecting plans to action: The effects of a card-
National Research Council. (2013). Next generation science standards: For states, by states. The
Newton, K. J., Leonard, J., Buss, A., Wright, C. G., & Barnes-Johnson, J. (2020). Informal
STEM: Learning with robotics and game design in an urban context. Journal of Research
https://doi.org/10.1080/15391523.2020.1713263
Noh, J., & Lee, J. (2019). Effects of robotics programming on the computational thinking and
Papavlasopoulou, S., Sharma, K., & Giannakos, M. N. (2020). Coding activities for children:
Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books.
Popat, S., & Starkey, L. (2019). Learning to code or coding to learn? A systematic review.
Price, C. B., & Price-Mohr, R. M. (2018). An evaluation of primary school children coding using
https://doi.org/10.1080/07380569.2018.1531613
Psycharis, S., & Kallia, M. (2017). The effects of computer programming on high school
Repenning, A. (2012). Programming goes back to school. Communications of the ACM, 55(5),
38. https://doi.org/10.1145/2160718.2160729
Repenning, A., Webb, D., & Ioannidou, A. (2010). Scalable game design and the development of
a checklist for getting computational thinking into public schools. Proceedings of the 41st
https://doi.org/10.1145/1734263.1734357
Repenning, A., Webb, D. C., Koh, K. H., Nickerson, H., Miller, S. B., Brand, C., Horses, I. H.
M., Basawapatna, A., Gluck, F., Grover, R., Gutierrez, K., & Repenning, N. (2015).
schools through game design and simulation creation. ACM Transactions on Computing
Rodríguez del Rey, Y. A., Cawanga Cambinda, I. N., Deco, C., Bender, C., Avello-Martínez, R.,
https://doi.org/10.1002/cae.22214
https://doi.org/10.1016/j.chb.2016.08.047
Rothstein, H. R., & Hopewell, S. (2009). Grey literature. In H. Cooper, L. V. Hedges, & J. C.
Valentine (Eds.), The handbook of research synthesis and meta-analysis, 2nd ed. (pp.
Rowe, E., Asbell-Clarke, J., Baker, R., Gasca, S., Bardar, E., & Scruggs, R. (2018). Labeling
implicit computational thinking in pizza pass gameplay. Extended Abstracts of the 2018
https://doi.org/10.1145/3170427.3188541
Scherer, R., Siddiq, F., & Viveros, B. S. (2019). The cognitive benefits of learning computer
Selby, C., & Woollard, J. (2013). Computational thinking: The developing definition.
https://eprints.soton.ac.uk/356481/
Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking.
Singer, L. M., & Alexander, P. A. (2017). Reading on paper and digitally: What the past decades
https://doi.org/10.3102/0034654317722961
Sung, W., & Black, J. B. (2020). Factors to consider when designing effective learning: Infusing
Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A
https://doi.org/10.1016/j.compedu.2019.103798
Taylor, K., & Baek, Y. (2019). Grouping matters in computational robotic activities. Computers
Tissenbaum, M., Sheldon, J., & Abelson, H. (2019). From computational thinking to
https://doi.org/10.1145/3265747
Tran, Y. (2019). Computational thinking equity in elementary classrooms: What third-grade
students know and can do. Journal of Educational Computing Research, 57(1), 3-31.
https://doi.org/10.1177/0735633117743918
Uzumcu, O., & Bay, E. (2020). The effect of computational thinking skill program design
Voogt, J., Fisser, P., Good, J., Mishra, P., & Yadav, A. (2015). Computational thinking in
compulsory education: Towards an agenda for research and practice. Education and
Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016).
9581-5
Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33-35.
https://doi.org/10.1145/1118178.1118215
Witherspoon, E. B., Schunn, C. D., Higashi, R. M., & Shoop, R. (2018). Attending to structural
Wong, G. K., & Jiang, S. (2018). Computational thinking education for children: Algorithmic
Xia, L., & Zhong, B. (2018). A systematic review on teaching and learning robotics content
https://doi.org/10.1016/j.compedu.2018.09.007
Yadav, A., Hong, H., & Stephenson, C. (2016). Computational thinking for all: Pedagogical
Yadav, A., Krist, C., Good, J., & Caeli, E. N. (2018). Computational thinking in elementary
https://doi.org/10.1080/08993408.2018.1560550
Yadav, A., Mayfield, C., Zhou, N., Hambrusch, S., & Korb, J. T. (2014). Computational thinking
Yin, Y., Hadad, R., Tang, X., & Lin, Q. (2019). Improving and assessing computational thinking
in maker activities: The integration with physics and engineering learning. Journal of
Yu, J., & Roque, R. (2019). A review of computational toys and kits for young children.
https://doi.org/10.1016/j.ijcci.2019.04.001
Zhang, L., & Nouri, J. (2019). A systematic review of learning computational thinking through
https://doi.org/10.1016/j.compedu.2019.103607
Zhong, B., Wang, Q., & Chen, J. (2016). The impact of social factors on pair programming in a
https://doi.org/10.1016/j.chb.2016.07.017
Table 1
Summary of the Profiles of the Charted Studies
Year of publication 2013 (2, 2.47), 2014 (3, 3.70), 2015 (1, 1.23), 2016 (6, 7.41),
2017 (6, 7.41), 2018 (10, 12.35), 2019 (24, 29.63), 2020 (29, 35.80)
Type of study Exploratory (50, 61.73), quasi-experimental (16, 19.75),
experimental (15, 18.52)
Number of participants 1–10 (4, 4.94), 11–30 (11, 13.58), 31–100 (35, 43.21),
101–441 (30, 37.04), n/a (1, 1.23)
Duration of intervention 0–4.9 (12, 14.81), 5–9.9 (15, 18.52), 10–14.9 (11, 13.58),
(hours) 15–24.9 (13, 16.05), 25–78 (11, 13.58), n/a (19, 23.46)
Research setting a School (63, 77.78), camp (11, 13.58), after school (9, 11.11), n/a (3, 3.70)
Academic Technology (14, 17.28), science (10, 12.35), engineering (6, 7.41),
domain a mathematics (2, 2.47), others (4, 4.94), n/a (48, 59.26)
Instructional strategy a Lecturing (39, 48.15), scaffolding (15, 18.52), collaboration (14, 17.28),
learning-through-activities (10, 12.35), game-based (7, 8.64),
problem-based (7, 8.64), simulation and modelling (7, 8.64),
constructionism (6, 7.41), pair programming (5, 6.17),
game-playing (4, 4.94), group work (4, 4.94), reflection (4, 4.94),
constructivism (3, 3.70)
Gender Female (1, 1.23), male (1, 1.23), mixed (74, 91.36), n/a (5, 6.17)
Grade level a Kindergarten (4, 4.94), lower elementary (15, 18.52),
upper elementary (30, 37.04), middle school (20, 24.69),
high school (7, 8.64), college (15, 18.52), in-service teacher (3, 3.70),
special needs (1, 1.23), n/a (3, 3.70)
Nature of learning tasks Programming (62, 76.54), non-programming (17, 20.99), both (1, 1.23),
n/a (1, 1.23)
Nature of assessment Non-programming (54, 66.67), programming (26, 32.10), both (1, 1.23)
tasks
Explicitness of definition Explicit (57, 70.37), implicit (21, 25.93), n/a (3, 3.70)
Nature of definition Componential (61, 75.31), conceptual (15, 18.52), multifaceted (2, 2.47),
n/a (3, 3.70)
Position of definition Assessment (28, 34.57), theoretical (27, 33.33), intervention (23, 28.40),
n/a (3, 3.70)
Referential definition Yes (39, 48.15), no (42, 51.85)
Note. Values are reported as frequencies and percentages, (n, %). a The values for the study characteristics are not mutually exclusive.
Table 2
Explicit Componential Definitions of Computational Thinking
Author(s) Definition
Table 3
Definition of Computational Thinking from the Referenced Literature
Note. CMCCT = Carnegie Mellon Center for Computational Thinking, CSTA = Computer Science Teachers Association.
Table 4
Components of Computational Thinking Extracted from the Reviewed Studies
Significant components (frequency): abstraction (38), sequence (31), conditional (26), algorithm (24), loop (23), parallelism (23), debugging (22), data (20), decomposition (16), event (15), iteration (15), modularity (14), testing (13), logic (12), operator (12), remixing (12), reusing (12), generalisation (11), flow control (9), evaluation (8), synchronisation (8), user interactivity (8), variable (5), function (4)
Non-significant components: transfer, design, efficiency, modelling, pattern, problem formulation, problem solving, simulation, system thinking, app originality, automation, causal inference, clear instruction, control, correspondence, creativity, data analysis, data collection, develop, embedding, experimentation, expressions, extraneous block, flowchart, functionality, hardware, instruction, keyboard input, list, object, operators, planning, problem identification, problem understanding, programming, random number, representation, robot programming, simplification, software, sprite customisation, sprite name, stage customisation, structuring, transformation, use, user interface, valuableness
Figure 1
Flow diagram of search strategies and screening phases for selecting eligible studies for review
Figure 2
Word cloud visualisation of the components of computational thinking