
Computers & Education 63 (2013) 380–392


Gamifying learning experiences: Practical implications and outcomes


Adrián Domínguez, Joseba Saenz-de-Navarrete, Luis de-Marcos*, Luis Fernández-Sanz, Carmen Pagés,
José-Javier Martínez-Herráiz
Computer Science Department, University of Alcalá, Dpto Ciencias Computación, Edificio Politécnico, Ctra Barcelona km 33.1, 28871 Alcalá de Henares, Madrid, Spain

Article history: Received 31 July 2012; received in revised form 20 December 2012; accepted 23 December 2012.

Keywords: Gamification; Games-based learning; Computer game; Game mechanic; Motivation; Engagement; e-learning.

Abstract

Gamification is the use of game design elements and game mechanics in non-game contexts. This idea has been used successfully in many web-based businesses to increase user engagement. Some researchers suggest that it could also be used in web-based education as a tool to increase student motivation and engagement. In an attempt to verify those theories, we have designed and built a gamification plugin for a well-known e-learning platform. We have run an experiment using this plugin in a university course, collecting quantitative and qualitative data in the process. Our findings suggest that some common beliefs about the benefits obtained when using games in education can be challenged. Students who completed the gamified experience got better scores in practical assignments and in overall score, but our findings also suggest that these students performed poorly on written assignments and participated less in class activities, although their initial motivation was higher.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Since the 1970s and '80s, video games have grown steadily in popularity as a form of entertainment. Initially oriented towards a male audience, the video game industry has made great efforts to expand its market and reach broader audiences, especially women and families. But it was not until recent years that the industry achieved this objective, with two clear examples: the Wii console and Facebook social games, both with millions of users around the world. Currently, video games are the most powerful entertainment industry in economic terms,1 and they are also considered an incipient form of art.2
Education researchers have viewed this kind of entertainment with great interest. Video games are interactive activities that continually provide challenges and goals to players, thus involving them in an active learning process to master the game mechanics (Koster, 2005). At the same time, video games provide a fictional context in the form of narrative, graphics and music which, if used appropriately, can encourage players' interest in non-gaming topics such as, for example, history (Watson, Mong, & Harris, 2011). Due to this potential, a lot of work has been done trying to unveil how video games could be used successfully for educational purposes. In the 1980s, Malone (1980) and Bowman (1982) theorized about what makes computer games so appealing to players, and how those aspects could be applied in education to improve student motivation and engagement. Over time, researchers conducted many theoretical and empirical studies on this subject. These studies have unveiled many potential advantages of videogames in education, such as immediate feedback, information on demand, productive learning, motivating cycles of expertise, self-regulated learning and team collaboration (Gee, 2003; Rosas, Nussbaum, & Cumsille, 2003), but also some issues related to educational content, learning transfer, learning assessment, teacher involvement and technological infrastructure (Facer, 2003; Squire, 2002, 2003). Recently, Connolly, Boyle, MacArthur, Hainey, and Boyle (2012) presented a systematic literature review on games-based learning and serious gaming focusing on positive outcomes. They also stress the need for more rigorous evidence of games' effectiveness and real impact.

* Corresponding author. Tel.: +34 918856656; fax: +34 918856646.


E-mail addresses: [email protected] (A. Domínguez), [email protected] (J. Saenz-de-Navarrete), [email protected], [email protected] (L. de-Marcos),
[email protected] (L. Fernández-Sanz), [email protected] (C. Pagés), [email protected] (J.-J. Martínez-Herráiz).
1. Factbox: A look at the $65 billion video games industry. June 6, 2011. Reuters. http://uk.reuters.com/article/2011/06/06/us-videogames-factbox-idUKTRE75552I20110606.
2. Arts in Media. http://arts.gov/grants/apply/AIM-presentation.html.

0360-1315/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.compedu.2012.12.020

Due to the issues mentioned above, some researchers do not focus on using videogames to educate, but on exporting the good aspects of video games to non-gaming educational contexts. This concept, which is not exclusive to education, is commonly called 'gamification'. Some researchers generically defined it as the use of game design elements and game mechanics in non-game contexts (Deterding, Dixon, & Khaled, 2011), although this broad definition has been further refined to reflect the most common objective of gamification: to improve user experience and engagement with a system. Another relevant fact is that, like videogames, gamification is still based on technology, and it is almost always applied to desktop, web or smartphone applications. Given these facts, it could be more narrowly defined as incorporating game elements into a non-gaming software application to increase user experience and engagement. This last definition is the one we will use for the rest of the paper.
Gamification has been incorporated with commercial success into software platforms,3 especially social ones, as a way to create closer relationships between the platform and its users, and to drive viral behaviors that increase platform popularity. This success has led some researchers to theorize that it could also be used in education as a tool to increase student engagement and to drive desirable learning behaviors (Lee & Hammer, 2011). Given its technological nature, one of the fields where gamification may have the greatest impact is online learning. Its potential benefits may address well-known issues such as the lack of student motivation caused by the limited capacity for interaction with the teacher and classmates (Liaw, 2008). In addition, the monitoring and communication infrastructure of e-learning platforms provides the necessary tools to incorporate different gamification mechanisms and to measure their usage by students.

This paper makes a contribution to the empirical evidence in the field by designing, implementing and evaluating a gamified learning experience in tertiary education. Our research tries to bridge the gap between theory and practice by studying the design and consequences of applying gamification in a real educational setting. The rest of the paper is structured as follows: Section 2 presents previous research on gamification in education. Section 3 presents a theoretical analysis of videogames and motivation. Section 4 presents the system's design and Section 5 briefly outlines the technological architecture. Section 6 presents the experimental design. Section 7 presents quantitative and qualitative results and a discussion of those results. Finally, conclusions and future research lines are outlined in Section 8.

2. Previous research

Although some researchers are already working on it, there is currently little work on this subject. Muntean made a theoretical analysis of gamification as a tool to increase engagement in e-learning platforms (Muntean, 2011). Based on Fogg's Behavior Model, the author states that gamification mechanics can be used to motivate and trigger desired behaviors in students. Although he provides a list of gamification elements explaining how they could be included in an e-learning course, there is no empirical research, so, in our opinion, more work is required to produce an implementation and obtain evidence about its effect on students.

Silva proposes another list of gamification elements, focusing specifically on social game mechanisms, that could be included in e-learning courses to increase student motivation through new mechanisms of interaction with classmates (Silva, 2010). Customization, community interaction and leaderboards are some of the proposed mechanisms, but the author provides little guidance on how to apply them in education, so more work is needed in this area.
Recently Simões, Díaz & Fernández (2013) presented a social gamification framework for http://schoooools.com, a social learning
environment, which “aims to assist educators and schools with a set of powerful and engaging educational tools to improve students’
motivation and learning outcomes”(p. 3). This framework enables teachers to deliver contents fitted to learning contexts and students’
profiles by choosing the appropriate social gamification tools, based on social games’ mechanics and dynamics. These authors also present
a scenario describing how a specific mechanic can be integrated using a point-based reward system, thus demonstrating the extensibility of
the framework, but there is no empirical evidence about the effectiveness of this approach.
One of the few empirical studies on this subject is the master's thesis "Game mechanic based e-learning" (Gåsland, 2011). In her work, Gåsland presents a detailed experiment in which she developed a web platform for a gamified e-learning experience and evaluated it with a university class. The platform served as a collaborative database where students could create and answer questions, using it as an alternative way to study and revise topics. Apart from the collaborative aspect, the only gamification mechanism is experience points, a classic video game mechanic used to keep track of progression. Results suggest that the platform is somewhat motivating, but that much more research is needed to test other gamification mechanisms and their combinations.
Our objective is to continue this line of work from an empirical point of view, also studying the motivational impact of different gamification mechanisms. To that end, we have created an e-learning gamification system that includes a limited set of those mechanisms, and we have tested it in a university course, obtaining qualitative and quantitative data from the students. This contribution will lead to a better understanding of the effects of gamification on e-learning.

3. Videogames and motivation

To create a gamification system that increases student motivation, it is necessary to focus on the fundamental elements that make videogames appealing to their players. According to Lee and Hammer (2011), games are motivating because of their impact on the cognitive, emotional and social areas of players; gamification in education should therefore also focus on those three areas.

In the cognitive area, a game provides a complex system of rules along with a series of tasks that guide players through a process to master those rules. These tasks are designed as cycles of expertise (Gee, 2003). A cycle consists of a series of short-term tasks which players repeatedly try to complete in a trial-and-error process until the necessary skill level is acquired. While the player is involved in this learning process, games try to ensure that players always know what to do next and that they have the necessary knowledge to do it. To make the learning process customizable, task sequences are usually non-linear, and players have a certain degree of freedom to choose which tasks to accomplish depending on their skill and personal preferences.

3. A notable example is Badgeville. http://www.badgeville.com/.

Fig. 1. Sample of hierarchical tree for course ‘Qualification for users of ICT’.

The impact on the emotional area works mainly around the concepts of success and failure. On the one hand, when players complete tasks they are expected to experience positive emotions from the mere fact of overcoming difficulties. Games try to ensure and reinforce those feelings with reward systems that give immediate recognition to players' success, awarding them points, trophies or items on task completion. On the other hand, when players fail, they are expected to feel anxiety. While some degree of anxiety is acceptable, it is not desirable for it to turn into frustration. To avoid that, sequences of tasks are carefully designed to fit players' skills at any level, and include low penalties on failure to promote experimentation and task repetition. If the difficulty of tasks is correctly balanced, it can drive players into a flow state, which is highly motivating (Csikszentmihalyi, 2008).
When multiple players interact through a game, these interactions have an impact on the players' social area. Videogames offer a wide range of multiplayer interaction mechanisms which are integrated into the rules of the system. These mechanisms make it possible for players to cooperate, helping each other towards a common goal; to compete, trying to impair other players or to perform better than them; or just to interact socially by talking, flirting, trading or gifting, for example. All these kinds of interaction let players build different in-game identities, taking on meaningful roles and obtaining recognition from other players (Lee & Hoadley, 2007).

These three areas (cognitive, emotional and social) seem to be the basis for player motivation, but their limits are blurry and game mechanics usually cover more than one at the same time. For example, many items that are awarded to players on success are just keys to new cycles of expertise that increase game complexity and difficulty, impacting both the emotional and cognitive areas. The social area mixes with the cognitive area when a task must be solved by means of player cooperation or competition, or with the emotional area when reward systems have an impact on players' social status.

The main objective behind gamification in education is to apply some of these ideas when designing educational initiatives and their contents in an attempt to make them more motivating. The fact that technology is necessary to implement most of the mechanisms described makes e-learning platforms an ideal environment for experimentation.

4. System design

Following the elements exposed in the previous section, we have designed a gamified educational experience in which some of those elements are adapted and applied to an e-learning platform used as a tool in a university course. The course "Qualification for users of ICT" is a transversal course in which students of different degrees learn how to effectively use common ICT tools. The course is aimed at promoting basic ICT competence at user level. It is inspired by the well-known ECDL (European Computer Driving License),4 a de-facto vendor-independent standard for ICT literacy in Europe, with millions of certified people. The syllabus includes modules on general ICT knowledge, basic use of the operating system, word processor, spreadsheet, presentation software, database, and communication skills with web browsers and email. The course has optional exercises designed to improve students' skills so that they perform better on final exams. These exercises are usually downloadable from a Blackboard e-learning platform as PDFs. Instead of providing them as downloadable text files, we have created a Blackboard plugin which provides the same exercises in a gamified way. The main objective of this plugin is to increase student motivation to complete optional exercises through the use of reward and competition mechanisms. In the following sections we describe the design of this plugin.

4.1. Cognitive area

The first step was the design of the cognitive area of the experience. In this case, the system of rules in which students must acquire skills is provided by the ICT tools used in the course, and the tasks that guide the player through the tool-mastery process are the optional exercises. Due to our research objectives, we decided that the gamification impact on this aspect should be limited, in order to keep gamified tasks as similar as possible to traditional optional exercises. Our solution was to create a hierarchical tree following the structure of course topics and optional exercises (Fig. 1). The first level of the tree matches the subject's list of topics; the second level matches the optional exercises for each topic, called 'challenges'; the third level matches specific tasks in each challenge, called 'trophies' or challenge activities; and the fourth level matches specific steps in each stage that provide students with a detailed description of the work they have to do in order to obtain the trophy. Students can freely access any topic and its challenges once it has been introduced in lectures. Trophies in a challenge are designed to increase in difficulty and build on the previous ones, so they are sequentially unlocked as the student completes them. In order to make this hierarchy clear to the students, we included two challenges per topic – intermediate and advanced – and at most four trophies per challenge – copper, silver, gold and platinum – each element with an appropriate visual representation (Fig. 2). Although these tasks are presented in a video game-like fashion, they are exactly the same as their traditional counterparts presented in PDF format.

4. http://www.ecdl.com/.
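The four-level hierarchy and the sequential unlocking of trophies can be sketched as a small data structure. This is an illustrative reconstruction under stated assumptions, not the authors' actual plugin code (the plugin was a JSP application); all class names, trophy names and task descriptions here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Trophy:
    name: str                 # copper, silver, gold or platinum
    steps: list[str]          # detailed description of the work required
    completed: bool = False

@dataclass
class Challenge:
    level: str                # 'intermediate' or 'advanced'
    trophies: list[Trophy] = field(default_factory=list)

    def unlocked(self) -> list[Trophy]:
        """Trophies unlock sequentially: each one becomes available
        only when all previous trophies in the challenge are done."""
        available = []
        for trophy in self.trophies:
            available.append(trophy)
            if not trophy.completed:
                break
        return available

# Usage: only 'copper' is available until it is completed.
challenge = Challenge("intermediate", [
    Trophy("copper", ["Open the word processor", "Create a new document"]),
    Trophy("silver", ["Apply paragraph styles"]),
    Trophy("gold", ["Insert a table of contents"]),
    Trophy("platinum", ["Record and run a macro"]),
])
print([t.name for t in challenge.unlocked()])   # ['copper']
challenge.trophies[0].completed = True
print([t.name for t in challenge.unlocked()])   # ['copper', 'silver']
```

A full tree would nest topics above challenges in the same fashion, with topics unlocked as they are introduced in lectures.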
Another important element of this area was task evaluation. Traditional exercises were not evaluated at all, but in order to reward task completion we required an evaluation mechanism. An ideal mechanism would be integrated into the e-learning platform, making it possible for students to self-evaluate their tasks. Nevertheless, this is not always possible, as in our case, where exercises had to be done using external software. The solution we arrived at was to use screenshots as the evaluation mechanism: we thought it was simple for students to capture and upload screenshots of their work while completing a task, and that those screenshots could provide enough information for teachers to evaluate whether the task was correctly completed. The problem with this solution is that if students needed to wait for the teacher to evaluate their work, it would be impossible to give immediate feedback on task completion in the form of a reward (more about rewards in the following section). To avoid this, we decided to immediately accept any uploaded screenshot as correct, leaving the evaluation as a verification mechanism to check whether students were being honest and their work was correctly done. In future initiatives we may consider computer-based testing (Santos, Hernández-Leo, Pérez-Sanagustín, & Blat, 2012; Santos, Pérez-Sanagustín, Hernández-Leo, & Blat, 2011) to overcome these problems.
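The accept-first, verify-later policy just described can be sketched as follows. This is a hypothetical reconstruction, not the plugin's code; all function and variable names are invented for illustration.

```python
pending_review = []   # submissions queued for later teacher verification
awarded = set()       # (student, trophy) pairs already rewarded

def submit_screenshot(student: str, trophy: str, screenshot: bytes) -> str:
    """Accept any uploaded screenshot as correct so that feedback
    (the reward) is immediate; honesty is checked afterwards."""
    awarded.add((student, trophy))
    pending_review.append((student, trophy, screenshot))
    return f"Trophy '{trophy}' awarded to {student}"

def verify(student: str, trophy: str, correct: bool) -> None:
    """Teacher's later verification: revoke the reward if the
    screenshot did not show correctly completed work."""
    if not correct:
        awarded.discard((student, trophy))

print(submit_screenshot("ana", "copper", b"<png bytes>"))
verify("ana", "copper", correct=False)
print(("ana", "copper") in awarded)   # False
```

The design trades a small risk of dishonest submissions for immediate feedback, which the authors consider essential for the reward mechanism.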

4.2. Emotional area

The next step was to design how to impact the emotional area of students. Our proposal was to include a virtual reward system that could create positive emotions on task completion, thus motivating students to complete more tasks. According to Wang and Sun's work on game reward systems, there are eight forms of rewards: score systems, experience points, items, resources, achievements, instant feedback messages, plot animations, and game content (Wang & Sun, 2011). Most of these rewards cannot be easily incorporated into gamification systems. The lack of virtual worlds, avatars and stories makes it difficult to include experience points, items, resources, plot animations or unlockable game content. Instant feedback messages seem well suited to creating positive emotions, but such a reward system is not feasible because it would require integration with the external software students use to complete tasks. After examining the remaining reward systems, points and achievements, we decided that achievements were the most appropriate form of reward for us. According to Wang's definition, "achievement systems consist of titles that are bound to player accounts; users collect them by fulfilling clearly stated conditions" (p. 4). In our gamified experience, students have to complete tasks in order to obtain achievements. Although a score system might also have fit our design, we left it out to keep the design as simple as possible.
Achievements may generate a wide range of positive emotions. One possible emotion is related to being immediately rewarded on task completion, as students will feel that they are performing well. To increase this feeling, we decided to represent some special achievements as medals, a typical representation of excellence (Fig. 3). Another is related to achievements being collectables. Non-completed achievements are shown to the player as a list of tasks to perform, with an empty space for the corresponding medal. Players motivated by collectables will be tempted to keep working in order to get all the medals. Finally, some achievements have been designed as hidden: they are awarded by surprise when some special conditions are met. In addition to surprising students with an award, these achievements may also serve to promote exploration of system features in order to discover the secret medals.
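As a sketch, the kinds of achievement described (immediate, collectable, hidden) reduce to a table of award conditions, where hidden achievements are simply excluded from the visible to-do list until earned. The achievement names and conditions below are hypothetical, not taken from the actual plugin.

```python
achievements = {
    # name: (hidden?, condition over a student's activity record)
    "first_trophy": (False, lambda s: s["trophies"] >= 1),
    "collector":    (False, lambda s: s["trophies"] >= 10),
    "night_owl":    (True,  lambda s: s["late_logins"] >= 3),  # hidden
}

def visible_list(student: dict) -> list[str]:
    """Non-hidden achievements are always listed (with an empty medal
    slot if pending); hidden ones appear only once their condition is met."""
    return [name for name, (hidden, cond) in achievements.items()
            if not hidden or cond(student)]

def earned(student: dict) -> list[str]:
    return [name for name, (_, cond) in achievements.items() if cond(student)]

student = {"trophies": 12, "late_logins": 3}
print(earned(student))   # ['first_trophy', 'collector', 'night_owl']
print(visible_list({"trophies": 0, "late_logins": 0}))  # ['first_trophy', 'collector']
```

Keeping hidden achievements out of the visible list is what creates the surprise effect and encourages exploration.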

4.3. Social area

Fig. 2. Screen capture showing a challenge and its four trophies. Copper, silver and gold trophies are completed, while the platinum trophy is unlocked but not yet completed (in Spanish). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 3. Sample of some special achievements represented as medals.

The final design step is related to the social area of the system. As previously explained, there are different kinds of student interaction: cooperative, competitive and social. Due to the individual design of the course exercises, cooperative interaction did not make sense in our system. Of the remaining two, we decided to include only competitive mechanisms, to be able to study their effect on students in isolation, thus leaving social mechanisms for future work. The most basic competition mechanic in many videogames is a leaderboard or ranking, so we opted to include this mechanic in our system. Leaderboards are usually score based, but due to the lack of a score system in our design we used achievements instead, ranking players by the number of achievements they own (Fig. 4). This leaderboard lets students compete for a higher ranking by completing more exercises and by participating in the overall experience. This could be a source of motivation for competitive students. Additionally, two other competition mechanisms are provided. One of them lets a player view a comparison between his achievement list and that of any other classmate. The comparison view could drive more direct competition between two specific players who are trying to beat each other. The other mechanism shows a list of all the achievements in the platform, along with the percentage of users who own each one. This lets players challenge themselves to obtain the most exclusive achievements.
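The three competition mechanisms (ranking by achievement count, pairwise comparison, and achievement rarity) can all be derived from a simple ownership table, sketched below. The data and names are hypothetical, not from the actual system.

```python
students = {
    "ana":   {"copper", "silver", "gold"},
    "bruno": {"copper", "silver"},
    "clara": {"copper"},
}

def leaderboard(owned: dict[str, set]) -> list[tuple[str, int]]:
    """Rank players by number of achievements (no score system exists)."""
    return sorted(((name, len(a)) for name, a in owned.items()),
                  key=lambda row: row[1], reverse=True)

def compare(owned: dict[str, set], a: str, b: str) -> set:
    """Pairwise comparison: achievements player a has that player b lacks."""
    return owned[a] - owned[b]

def rarity(owned: dict[str, set]) -> dict[str, float]:
    """Percentage of all users owning each achievement."""
    everyone = set().union(*owned.values())
    n = len(owned)
    return {ach: 100 * sum(ach in s for s in owned.values()) / n
            for ach in sorted(everyone)}

print(leaderboard(students))              # [('ana', 3), ('bruno', 2), ('clara', 1)]
print(compare(students, "ana", "bruno"))  # {'gold'}
print(rarity(students)["copper"])         # 100.0
```

Low rarity percentages mark the "most exclusive" achievements that players may challenge themselves to obtain.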

5. System architecture

In this section we briefly describe the system architecture. While the "Qualification for users of ICT" course used Blackboard 8 (BB8) as its e-learning platform for online content, custom plugin support only became available in Blackboard 9 (BB9). We solved this problem by implementing a gamification plugin for the BB9 platform and deploying it on a course parallel to the BB8 one. Students could use the same login credentials on both the BB8 and BB9 platforms; traditional content was available in the former and gamified content in the latter.

Several technologies were used to implement the system (Fig. 5). Blackboard 9 plugins are JSP web applications that can access student data and, consequently, do not require separate user authentication. Although the e-learning platform database could supposedly be used to store plugin data, several problems were found at development time, mainly related to the limited documentation available, so we decided to explore alternative solutions. We created a cloud-based web service on the Microsoft Azure platform linked to an SQL Azure database. This service was consumed from the client side using AJAX. It was designed following RESTful principles, and programmed in C# using Windows Communication Foundation. Lastly, Amazon S3 cloud-based persistent storage services were used to store screenshots and user avatars.
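The paper does not include code for the service, but a RESTful achievements resource of the kind described (consumed from the client via AJAX) could be dispatched roughly as follows. This is a language-neutral sketch in Python rather than the authors' C#/WCF implementation; the paths, payloads and data are hypothetical.

```python
import json

def handle(method: str, path: str, db: dict) -> tuple[int, str]:
    """Minimal dispatcher mimicking a REST achievements resource:
    GET /achievements lists names, GET /achievements/<name> gives detail."""
    parts = path.strip("/").split("/")
    if method == "GET" and parts == ["achievements"]:
        return 200, json.dumps(sorted(db))
    if method == "GET" and len(parts) == 2 and parts[0] == "achievements":
        if parts[1] in db:
            return 200, json.dumps({"name": parts[1], "owners": db[parts[1]]})
        return 404, json.dumps({"error": "not found"})
    return 405, json.dumps({"error": "method not allowed"})

db = {"copper": 42, "silver": 17}   # achievement -> number of owners
print(handle("GET", "/achievements", db))
print(handle("GET", "/achievements/copper", db))
```

Addressing each achievement as its own resource is what makes the design RESTful: the AJAX client only needs plain GETs and standard status codes.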

6. Experimental design

In order to assess the effectiveness of the gamified approach and to evaluate the attitude of students, we designed an experiment for two different groups of the course "Qualification for users of ICT" (6 ECTS, 150–180 h of student work). This course is based on the ECDL certification syllabus and has the following modules:

1. Introduction to the computer, the operating system, networks and communication
2. Word processor
3. Spreadsheets
4. Presentation software
5. Databases

Fig. 4. Leaderboard sample; each row shows the player's photo, ladder position, number of achievements, and percentage of total achievements.

Fig. 5. System architecture diagram.

The final score of the course is computed based on the following evaluation items:

- Initial activity (5%)
- Midterm assignment (30%)
- Final assignment (30%)
- Final examination (30%)
- Participation (5%)

The initial activity (week 1) is designed to introduce the course to the students, to get them used to the course materials and class dynamics, including the e-learning platform, and also to collect basic information about them. Students are asked to complete their personal profile, fill in two surveys about their knowledge and usage of ICT, and also to complete a short interactive test to assess their initial knowledge of modules 2–5 (word processor, spreadsheets, presentation software and databases). Questions in this test cover just the basic initial topics of each module, but they are interesting for two reasons: firstly, students get a first glimpse of the overall contents and skills required to pass the course; and secondly, initial scores for each module are collected, giving useful information to both student and teacher. Teachers consider this score an indicator of the initial motivation of the student rather than a precise measure of her initial knowledge. Activities are designed to motivate the student to participate and complete the course. In the midterm assignment (week 10) students hand in two exercises corresponding to modules 2 (word processor) and 3 (spreadsheets). In the final assignment (week 14) students submit their exercises for modules 4 (presentation software) and 5 (databases). The final examination (week 15) is a written test comprising multiple-choice as well as open-ended questions. Finally, students can get up to 5% of the final score based on their participation in the activities that take place in the classroom as well as in the virtual classroom (e-learning platform). This score is computed semi-automatically, taking the number of interactions on the e-learning platform (posts in the forum, elements opened, activities completed, messages read, etc., and also challenges, trophies, achievements, leaderboard, etc.) and a subjective assessment by the lecturer based on the student's attendance and participation in the classroom. With this evaluation system it is possible to have seven scores per student (initial activity, word processor, spreadsheet, presentation software, databases, final examination and participation) as well as the final score. All scores can be used to compare the performance of different groups. None of the evaluation instruments was gamified.
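The weighted computation of the final score can be sketched as follows. The weights come from the evaluation items listed in the text; the sample scores (assumed to be on a 0–10 scale) are hypothetical.

```python
# Weights taken from the evaluation items listed in the text;
# the sample scores (0-10 scale) are hypothetical.
WEIGHTS = {
    "initial_activity":   0.05,
    "midterm_assignment": 0.30,
    "final_assignment":   0.30,
    "final_examination":  0.30,
    "participation":      0.05,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9   # items total 100%

def final_score(scores: dict[str, float]) -> float:
    """Weighted final score; a missing evaluation item counts as 0."""
    return sum(w * scores.get(item, 0.0) for item, w in WEIGHTS.items())

scores = {"initial_activity": 8.0, "midterm_assignment": 7.0,
          "final_assignment": 6.5, "final_examination": 7.5,
          "participation": 10.0}
print(round(final_score(scores), 2))   # 7.2
```

Treating a missing item as 0 matches the observation that a student needs to complete at least one assignment to obtain a final score at all.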
During the spring semester the course "Qualification for users of ICT" is given to two distinct groups. As it is a transversal course, it is offered to a wide range of students majoring in different specialties. The control group consists of 1st and 2nd year university students (freshmen and sophomores) majoring in construction engineering, nursing, tourism, infant education, primary education, or business administration and management. 80 students enrolled initially and 73 completed at least one assignment so that they have a final score. The experimental group consists of 1st year university students (freshmen) majoring in economics, business administration and management, accounting and finance, or economics and international business. 131 students enrolled initially and 123 completed at least one assignment so that they have a final score. The groups have separate spaces in the virtual classroom, so a student in one group does not know about the activities of the other. The groups are also physically distant, as teaching takes place on different campuses and in different cities. The same instruments and evaluation criteria were used to compute the scores of students in both groups. The experimental and control groups were chosen randomly. Unfortunately, we were not able to assign individual students to each group, as they freely enroll in the group they prefer. This decision is mostly made based on their major, as the faculties/buildings schedule the groups for their students.
The gamified version of the course includes 36 challenge achievements (grouped in 9 challenges/activities), for which students get trophies after completion, and 7 participation achievements, for which participants get medals. All content challenges were created using exactly the same contents as the activities available in the traditional non-gamified version of the course. Students in the experimental group have access to both versions of every activity. Traditional activities are delivered as PDF files. Students of the experimental group received a 1-hour introduction to the gamification plug-in from the teacher. After that, they could decide freely which set of activities to use, and also combine them as they wished. Those wanting to use the gamified version are just asked to register
386 A. Domínguez et al. / Computers & Education 63 (2013) 380–392

and upload their own avatar (a picture) the first time they connect. On first connection students are shown an introductory screen with a text-based tutorial that highlights the plug-in features and explains how to use them. Technical support was also available throughout the course.
The plugin conveniently registers the activities of students in the gamified version. 58 students of the experimental group registered to use the gamified version of the course. 27 students got 8 or more trophies and medals (i.e. completed 8 or more achievements). Teachers indicated that 8 is the minimum number of achievements a student must complete to be considered to have followed the gamified version. This threshold ensures that the student completed activities from at least 3 different modules, and it also offers a new dimension for analyzing the results, as it permitted researchers to distinguish between those students who followed the gamified experience and those who did not. The plugin also permitted collecting information on academic results, as well as quantitative data and qualitative impressions about students’ interaction both with gamified activities and with traditional courseware. The decision to make participation in the gamified experience optional was partly based on these considerations, but also on institutional requirements. Making participation mandatory would have required doing so for all students in all groups, thus compromising the experimental design.
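The classification rule described above can be sketched as a simple predicate over the plugin's activity log. The function and record format below are hypothetical illustrations of the rule, not the actual plugin code:

```python
def followed_gamified(achievements, min_achievements=8, min_modules=3):
    """Label a student as having followed the gamified version.

    `achievements` is a hypothetical list of (module, achievement_id) pairs
    taken from the plugin log. The 8-achievement threshold chosen by the
    teachers was intended to imply coverage of at least 3 different modules,
    so both conditions are checked explicitly here.
    """
    modules = {module for module, _ in achievements}
    return len(achievements) >= min_achievements and len(modules) >= min_modules

# A student with 8 achievements spread across 3 modules qualifies
log = [("word", i) for i in range(4)] + [("sheet", i) for i in range(3)] + [("db", 0)]
print(followed_gamified(log))  # True
```

With a rule like this, the 58 registered students can be split into the 27 who followed the gamified experience and the 31 who did not, which is the three-group partition used in the ANOVA below.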

7. Results

All the experiments and grading were conducted during the 2011/2012 spring semester. The outcome data collected from the experimental and control groups are presented and discussed in this section. It must be borne in mind that teachers provided a grade for each evaluation item of each student, along with a final mark for the whole learning experience. Students’ opinions were also appraised in an attitudinal survey. All grades were normalized to the range 0–100 for statistical analysis.

7.1. Achievement of students

Independent two-sample t-tests indicate that there is no significant difference in the initial knowledge of students in each of the four modules that were assessed (Table 1). Post-test results suggest that there are significant differences in six scores (Table 2). Students of the experimental group get scores that are significantly higher in the initial activity (p = .004) and also in the practical exercises on spreadsheets (p = .007), presentation software (p = .000) and databases (p = .000). In contrast, students of the experimental group get significantly lower scores on the final examination (p = .006) and on the participation score (p = .000). Finally, there is no significant evidence to support that the experimental group performs better on the word-processing exercise (p = .090) or on the final score (p = .090). Results for the most significant scores are also presented graphically in Fig. 6.
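As a sanity check, the test statistics can be recomputed from the summary values reported in Table 2; with two groups, the squared t statistic equals the F value shown in the table. A minimal sketch using the pooled-variance (equal-variance) t statistic:

```python
import math

def pooled_t(n1, m1, s1, n2, m2, s2):
    """Independent two-sample t statistic with pooled variance."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

# Final-examination row of Table 2: control (n=68) vs experimental (n=106)
t = pooled_t(68, 64.12, 13.67, 106, 58.05, 14.21)
print(round(t * t, 2))  # close to the reported F = 7.78 (t^2 = F for two groups)
```

The small discrepancy against the published value comes only from rounding in the reported means and standard deviations.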
One-way analyses of variance (ANOVA) are used to determine the existence of significant differences considering three groups: the control group, the experimental non-gamified group and the experimental gamified group. The results (Table 3) are similar to those obtained when distinguishing just two groups. The difference in the final score is now statistically significant for at least one of the groups. Confidence
intervals (Fig. 7) show this graphically and suggest that students who got 8 or more achievements in the gamified system also get significantly higher final scores. The interval plots for the final examination score (Fig. 8) and for the participation score (Fig. 9) also suggest that the non-gamified experimental group has significantly lower scores than the other two groups; consequently, there is no evidence confirming that the gamified experience yields worse results in written examinations or somehow prevents students from participating in class activities.
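The three-group F values in Table 3 can likewise be recomputed from the per-group summary statistics alone, a useful check when raw data are unavailable. A minimal sketch of one-way ANOVA from (n, mean, sd) triples:

```python
def anova_f(groups):
    """One-way ANOVA F statistic from per-group (n, mean, sd) summaries."""
    n_tot = sum(n for n, _, _ in groups)
    grand = sum(n * m for n, m, _ in groups) / n_tot
    ss_between = sum(n * (m - grand) ** 2 for n, m, _ in groups)
    ss_within = sum((n - 1) * s**2 for n, _, s in groups)
    df_b, df_w = len(groups) - 1, n_tot - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

# Final-score row of Table 3: control / experimental non-gamified / experimental gamified
f = anova_f([(73, 56.27, 18.58), (96, 58.99, 23.43), (27, 70.71, 15.52)])
print(round(f, 2))  # close to the reported F = 4.85
```

Again, any small deviation from the published value is due to rounding in the reported summary statistics.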
Students in the experimental group performed better on all the items concerned with the practical application of concepts. On the other hand, students in the experimental group scored lower than the control group on the written examination and on participation. We think that such differences may be caused by the distinctive nature of the elements being assessed in these items and by the kind of learning fostered by each instrument. In the written examination students are asked mainly about concepts and about the relation of these concepts to practice. In all other evaluation items (except participation), assessment is based on competencies, and students are required to know how to complete different tasks using a given application. Considering the results obtained, we can argue that gamified activities help to develop practical competencies but, in contrast with traditional courseware, somehow also hinder the understanding of the underlying theoretical concepts. This conclusion was also drawn by previous work and has even been identified as a trend by Ke’s (2009) meta-analysis, which suggested that learning games foster higher-order thinking more than factual knowledge.
As for participation, this item was assessed mostly in an objective way, based on the number of interactions with the learning platform, contributions to forums and other participative media, and attendance and exercises completed both online and in the classroom. It is tempting to argue that the lower marks obtained by students in the experimental group are due to the alienating nature of videogames. This view is aligned not only with popular culture but also with Heideggerian philosophy on alienation through technology. Defenders of this standpoint would argue that the gamified activities, while fostering competence acquisition, also separate students from reality, thus reducing their overall interaction with other students and systems. It is worth mentioning that Heidegger’s (1977) perspective is that technology is not alienating per se, but only when enframing (i.e. when the other is treated as an object rather than as a subject) occurs as

Table 1
Scores in the initial activity for each module. Significance was computed using independent two-sample t-tests.

Evaluation item Group n Mean Std dev Significance


Word processor Control 62 44.13 17.68 F = 2.20
Experimental 111 49.92 16.53 p = .141
Spreadsheet Control 62 53.32 17.68 F = 0.62
Experimental 111 56.27 12.95 p = .432
Presentations Control 62 44.52 13.14 F = 0.49
Experimental 111 46.54 12.17 p = .487
Databases Control 62 52.76 17.19 F = 1.36
Experimental 111 56.01 17.75 p = .244

Table 2
Final scores for the experimental and control groups. Significance was computed using independent two-sample t-tests.

Evaluation item Group n Mean Std error Std dev Significance


Initial activity Control 73 77.29 2.41 20.63 F = 8.43
Experimental 123 88.46 2.59 28.75 p = .004
Word processor Control 64 56.33 2.34 18.73 F = 2.90
Experimental 113 64.01 2.63 27.98 p = .090
Spreadsheet Control 64 62.70 3.21 25.67 F = 7.48
Experimental 110 73.94 2.52 26.40 p = .007
Presentations Control 66 64.59 1.52 12.38 F = 178.48
Experimental 110 89.86 1.15 12.01 p = .000
Databases Control 65 40.25 2.84 22.86 F = 56.12
Experimental 106 69.65 2.53 26.09 p = .000
Final examination Control 68 64.12 1.66 13.67 F = 7.78
Experimental 106 58.05 1.38 14.21 p = .006
Participation Control 73 86.53 2.42 20.67 F = 97.47
Experimental 123 48.13 2.63 29.15 p = .000
Final Control 73 56.27 2.17 18.58 F = 2.90
Experimental 123 61.57 2.02 22.41 p = .090

Fig. 6. Boxplots of the most significant scores.



Table 3
Final scores for the control, experimental non-gamified and experimental gamified groups. Significance was computed using one-way ANOVA tests.

Evaluation item Group n Mean Std error Std dev Significance


Initial activity Control 73 77.29 2.41 20.63 F = 5.85
Experimental non-gamified 96 86.25 3.12 30.62 p = .003
Experimental gamified 27 96.30 3.70 19.25
Word processor Control 64 56.33 2.34 18.73 F = 2.53
Experimental non-gamified 88 61.70 3.06 28.68 p = .083
Experimental gamified 26 69.36 5.41 27.60
Spreadsheet Control 64 62.70 3.21 25.67 F = 4.46
Experimental non-gamified 83 72.25 2.90 26.37 p = .013
Experimental gamified 27 79.14 5.06 26.29
Presentations Control 66 64.59 1.52 12.38 F = 93.13
Experimental non-gamified 84 89.11 1.40 12.58 p = .000
Experimental gamified 26 92.31 1.67 8.53
Databases Control 65 40.25 2.84 22.86 F = 28.17
Experimental non-gamified 80 68.77 2.91 26.05 p = .000
Experimental gamified 26 72.37 5.21 26.55
Final examination Control 68 64.12 1.66 13.67 F = 3.99
Experimental non-gamified 81 57.67 1.60 14.39 p = .020
Experimental gamified 25 59.27 2.77 13.84
Participation Control 73 86.53 2.42 20.67 F = 82.14
Experimental non-gamified 96 40.52 2.51 24.64 p = .000
Experimental gamified 27 75.19 5.43 28.20
Final Control 73 56.27 2.17 18.58 F = 4.85
Experimental non-gamified 96 58.99 2.39 23.43 p = .009
Experimental gamified 27 70.71 2.99 15.52

a consequence of technological mediation. Our point is that such questions have a very strong philosophical underpinning and that further research and enquiry should be performed before drawing unsupported conclusions. In particular, studying approaches that circumvent enframing by carefully addressing social interaction seems promising. Furthermore, a closer examination of the data when considering three groups (experimental gamified, experimental non-gamified and control) reveals that the real difference is between the non-gamified experimental group (M = 40.52, SD = 24.64) and the control group (M = 86.53, SD = 20.67), and it is very substantial. The experimental gamified group also performs lower (M = 75.19, SD = 28.20) than the control group, but the difference is not statistically significant. Frankly, we cannot find an explanation for such an important difference. Courseware and methodology were exactly the same for the experimental and control groups. The only difference was in the participating teachers, as the number of students and groups required the participation of different teachers. Regular meetings were held to ensure consistency between groups. So, in our opinion, we can only infer that either the teachers of the control group managed to keep their students engaged or the students of the experimental group were genuinely under-participative.

7.2. Attitudinal survey

The students of the experimental group were also asked to answer a 10-item questionnaire designed to evaluate their attitude towards the learning tool and their satisfaction level. The instrument used was a questionnaire based on a five-point Likert scale with all the sentences scored on a positive scale. Similar instruments have been used by other researchers (Garrido, Grediaga, & Ledesma, 2008). The survey was answered anonymously. 45 students claimed to have followed the gamified experience and provided feedback. Questions and results are summarized in Table 4. The average for these questions is 3.64 on the five-point scale, indicating that the students’ attitude towards this experience was positive. The highest rated statements are items 6 and 7, suggesting that the activities were successfully designed according to students’ perception. The ratings of items 2, 9 and 10 are especially significant because they provide a generally positive estimation of

Fig. 7. Interval plot of the final score (95% CI for the mean). 0–Control group, 1–Experimental non-gamified group, 2–Experimental gamified group.

Fig. 8. Interval plot of final examination score (95% CI for the mean). 0–Control group, 1–Experimental non-gamified group, 2–Experimental gamified group.

students’ motivation and students’ attitude towards learning with this tool, not only during the learning experience but also in the future. In contrast, the lowest rated statement is item 4, which suggests that additional work to improve the usability of the tool should be undertaken. We can only conjecture to what extent the integration of the tool into the Blackboard system plays a role in this rating. The low rating on statement 8 indicates a low level of involvement. Regarding this, students were also asked to provide a percentage (0–100) estimating to what level they had completed the gamified activities. The results yield a mean of 55.56 (SD = 21.56). We can contrast this with real data, as the tool records every challenge and achievement completed by students. If we consider all the students who completed at least one gamified activity (N = 58), the mean is 22.65 (SD = 26.74); considering only the students (N = 27) who completed 8 or more gamified activities (18.6%), the mean is 40.91 (SD = 29.59). So, in our opinion, students’ estimation of their own work is (very) optimistic and participation rates are really low. We think that both researchers and teachers should try to find ways to design new experiments and learning actions in which participation and its promotion play a central role, since this is critical to evaluate learning activities and also to foster meaningful and efficient learning.
The variability of answers in the attitudinal survey is low, since the overall SD is 0.96, which represents less than 1/4 of the mean. So it can be said that the answers are homogeneous. Item correlations are examined to determine the relevance of each item in relation to the other items and the entire survey. All items returned correlation coefficients larger than 0.4, suggesting coherence in responses. A factor analysis returns a cumulative explained variance of 68.5%, suggesting that the instrument also presents factorial validity. However, we have to be careful with these values concerning validity, since the sample size (45) is considerably lower than the recommendations of standard benchmarks. To complete the analysis, Cronbach’s alpha is computed to measure the internal consistency of the survey. The overall Cronbach’s alpha is 0.8629, which is higher than the commonly used benchmark value of 0.7. This suggests that the items measure the same construct.
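Cronbach's alpha is computed from the per-item variances and the variance of each respondent's total score. A minimal sketch on a small hypothetical data set (not the actual survey responses, which are not reproduced here):

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one list of responses per survey item, respondents aligned by index."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Hypothetical 3-item, 4-respondent survey with perfectly consistent answers
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
print(round(alpha, 3))  # perfectly consistent items give alpha = 1.0
```

Values above the conventional 0.7 benchmark, such as the 0.8629 reported for this survey, indicate acceptable internal consistency.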
57 students acknowledged not having used the gamified version and were asked about the reason(s) that prevented them from taking part in the gamified experience. The results are summarized in Table 5. Time availability is the reason most frequently cited by students. Technical problems are the second most important reason. The reason cited least frequently is the difficulty of using or understanding the system, in marked contrast to the attitudinal survey, in which tool ease-of-use is the lowest rated item. Under ‘other reasons’, students point out additional problems. Some examples are: “Too many students”, “I have to visit too many web pages and applications in the university and I did not want a new one” and “I do not like competition between students and that everyone can see it.”
Another informal questionnaire was included in the e-learning platform, asking students whether they found it more motivating to complete gamified activities than the traditional version, and which specific elements of the plugin had the biggest

Fig. 9. Interval plot of the participation score (95% CI for the mean). 0–Control group, 1–Experimental non-gamified group, 2–Experimental gamified group.

Table 4
Questions and results of the attitudinal survey. Answers were provided in a five-point Likert scale (1–Strongly disagree, 2–Disagree, 3–Undecided, 4–Agree, 5–Strongly agree).

Item N Mean Std error Std dev


#1 Content was presented effectively 45 3.64 0.13 0.89
#2 I learned about the course topic 45 3.76 0.13 0.86
#3 I enjoyed the experience 45 3.49 0.15 0.99
#4 Using the tool was easy for me 45 3.24 0.18 1.15
#5 The proposed practical activities were useful 45 3.56 0.15 0.99
#6 There was a sufficient number of exercises 45 3.91 0.13 0.90
#7 There was sufficient time to complete the exercises 45 3.98 0.15 0.99
#8 My level of involvement was high 43 3.40 0.13 0.85
#9 I would like to learn more about the course topic 44 3.63 0.15 0.99
#10 This was a worthwhile learning experience 45 3.76 0.15 0.98
Average – 3.63 – –

motivational impact. 91 students provided feedback using this instrument. Concerning motivation, students were asked if they found the gamified activities more motivating, the traditional activities more motivating, or the gamified activities neither more nor less motivating than the traditional ones. 29 students (31.87%) found the gamified activities more motivating, 56 students (61.54%) found the traditional activities more motivating and 6 students (6.59%) felt no difference in their motivation. We can consider these figures to be consistent with the previous ones, since the number of students who found the gamified activities motivating is similar to the number of students (27) who completed a reasonable number of gamified activities. Thus, the students who followed the gamified course seem to be motivated, but further questions remain unanswered about the students who did not start or who quit. The reward-based system programmed in the e-learning plugin is designed to improve extrinsic motivation. Although it can be a powerful force to drive intrinsic motivation, several problems have been reported concerning extrinsic motivation. First, participants can feel manipulated. Second, little or no transfer can occur if behavior is only driven by rewards. And finally, if the reward vanishes, so does the behavior. In this way, the learner may become too dependent on the reward or may not be interested in it at all (Lepper, 1988). We conjecture that students who did not follow the gamified approach or who quit were partly unattracted by the reward mechanics implemented. Nonetheless, we think that this was not the only reason, but rather that the other reasons argued by students and the lack of immediate feedback also contributed to the low motivation level observed.

7.3. Qualitative analysis

Finally, students had different opportunities to provide additional feedback about their perceptions of and attitude towards the system and the learning experience. The anonymous attitudinal survey included an open question in which students were asked to provide any comment or suggestion. 17 students provided feedback using this mechanism. The e-learning platform also provided a channel of permanent communication between teachers, researchers and students. Forums were used as a source of feedback, and a specific feedback form was also available in the e-learning platform. Teachers and researchers analyzed all these elements to also create a qualitative appraisal of students’ perceptions and motivations.
In general, we received numerous positive responses. The following comment can be taken as an example. It stresses the importance of the leaderboard and also the fact that, as for all activities, completing them was a way to contribute to the participation score: “I have completed the gamified activities because by means of the leaderboard, global statistics, etc., I can know what is the amount of work that I have done with respect to other students. The fact that my activities were also contributing to the participation score also influenced me.” The following reflection is representative of the possibility of choosing between both versions of the activities. The student asserted that he had completed the gamified activities “because the leaderboard was motivating for me, and also as I was going to complete the activities in any case, I preferred the gamified version.” Another student interestingly commented: “I preferred to make the gamified ones. Decision that I have taken for the simple reason that by completing them, but previously done them in the traditional way as the instructions are better, and then submitted to the new virtual platform to win new points as it is fun and motivating in many ways, be it for the graphics, the trophies… and it is even more colorful and encouraging.” Here the student is presenting his experience as a combination of both approaches (traditional and gamified). He prefers the traditional approach to go through the activities, but finally completes them in the gamified version because he finds it motivating, encouraging and even ‘colorful’. These and similar comments stress the importance that competition has for some students, as well as the value of having the same contents in different formats which can be combined to create meaningful and motivating learning experiences.
Contrasting with the positive comments, we also found opposing opinions. We found especially interesting those that reflect the dislike and uneasiness created by the leaderboard and the feeling of competition among students. For instance, one student states: “I prefer traditional activities because I don’t think that leaderboards are a good representation of who gets more knowledge about the course”, while another states: “I think that it would be more interesting to improve the traditional version, instead of making competitions.”

Table 5
Reasons for not using the gamified version (N = 57). Students could indicate more than one reason.

Answer Frequency
I do not know about them 9
I am not interested in them 6
I do not have time to complete the activities 34
I find technical problems 13
The system is difficult to use/understand 3
Other reasons 17

We mentioned above that similar statements were given among the reasons for not using the gamified system. All other negative perceptions can be categorized in three groups: (1) preference for traditional-like activities because “they are easier” or “I feel more comfortable with them”, (2) “I did not have time, and I didn’t know what difficulties I was going to find”, and (3) “By having the option of the normal system, the game I thought it would take longer.”

8. Conclusions and future work

Gamification of e-learning platforms seems to have the potential to increase student motivation, but achieving that effect is not trivial: considerable effort is required in the design and implementation of the experience for it to be fully motivating for participants.
On the one hand, the qualitative analysis of the experiment suggests that gamification can have a great emotional and social impact on students, as reward systems and competitive social mechanisms seem to be motivating for them. Reward systems provide an innovative, fun and encouraging way to represent progress within an online educational experience. Leaderboards also serve as a source of motivation, because students see their work publicly and instantly recognized, and because they can compare their progress with that of their classmates. These good results do not apply to everyone, though. For many students, the system was not motivating enough to keep them participating throughout the course. In some cases the system was even discouraging, as some students do not find it fun to compete with their classmates for a rank on the leaderboard. Our work is influenced by studies on player profiles with respect to competition. For instance, Heeter, Lee, Medler, and Magerko (2011) identify four types of players based on performance and mastery levels of achievement goals, namely: performance-only players, mastery-only players, non-achievers and super-achievers. Arguably, other styles of players, like socializers or explorers, have to be considered. Our future work will try to address these issues, reducing the overall importance of competition and rewards, and introducing cooperative and social mechanisms which are currently being used in so-called “social games” (Hicks, 2010). We will also try to find new ways of gamification that are more meaningful to the students, not limiting the system to extrinsic rewards like achievements and badges, as suggested by Deterding in his presentation “Meaningful Play: Getting gamification right”,5 and by Nicholson in his User-Centered Theoretical Framework for Meaningful Gamification (Nicholson, 2012).
On the other hand, the quantitative analysis suggests that the cognitive impact of gamification on students is not very significant. Students who followed the traditional exercises performed similarly in overall score to those who followed the gamified exercises. From our point of view, the cognitive characteristics of videogames that create the so-called “cycles of expertise” (Gee, 2003), which further derive into “flow experiences” (Csikszentmihalyi, 2008), are in the very nature of the medium, and cannot be exported to traditional educational content in any way without entering the field of edutainment or serious games. Although the impact of gamification on the cognitive aspects of educational content is limited, we still think that changing content design and structure to make it more fun can have a great motivational impact. One suggestion is to design educational exercises embracing from the very beginning the concept of gameful design (Deterding, Dixon, Khaled, & Nacke, 2011) to make them more interesting for students. Additionally, we should consider a more systematic approach to the design and evaluation of gamified learning. We shall take previous work on evaluation frameworks in game-based learning, e.g. de Freitas and Oliver (2006), as a starting point. This will enable us to extract more solid conclusions about the reality of gamification in education.
Beyond the lines of work exposed above, students reported other design and technical issues that should be addressed in future work. Some of them complained about the Blackboard plugin because it was hard to use or did not work well. Although students were introduced to the plug-in by the teacher and by a textual tutorial, it seems that those introductions were not good enough for all students to be able to use the plug-in proficiently. In future versions, we might consider including an interactive introduction which not only explains, but also guides students step by step through the plug-in features. Some important technical problems may be related to the Blackboard platform, as it introduces network overhead that slowed down screenshot uploading, making it tiring and time consuming. The proprietary code of the Blackboard platform made it impossible for us to fix this, and we did not manage to find a workaround. Other potential issues might have been caught with an appropriate usability and software testing process. An important conclusion suggested by students’ reports is that a good testing process is essential when developing a gamification system; otherwise, its motivational effects can be dramatically diminished by unaddressed usability and technical issues.
Another important problem was task evaluation. Many students did not complete the gamified exercises because they thought that it was a waste of time to capture and upload screenshots of their work. This may also be related to the technical issues that slowed down screenshot uploading, but it was not the only problem. Participants also reported that they could upload empty screenshots to obtain achievements in an attempt to cheat. Finally, teachers had to make additional efforts to review all the screenshots in order to validate students’ achievements. All these facts indicate that gamification has some limitations when tasks cannot be automatically evaluated by the e-learning platform, as a conflict arises between immediate feedback, fair rewards and teacher effort. We think that immediate feedback would increase students’ motivation, yielding better results. This is a critical aspect of videogames that makes them compelling and engaging, so gamified initiatives must address it (Kapp, 2012). As future work, we have to design new methods to automate the work that teachers must do, and also develop tools to enable them to create and modify gamified learning experiences easily, making the underlying technological infrastructure transparent. Unsupervised scoring systems (Goldberg & Song, 2004) may also be an interesting solution to this problem, and response-driven feedback approaches (Fernández-Alemán, Palmer-Brown, & Jayne, 2011) can help teachers to produce meaningful and rapid feedback.

References

Bowman, R. F. (1982). A Pac-Man theory of motivation. Tactical implications for classroom instruction. Educational Technology, 22(9), 14–17.
Connolly, T. M., Boyle, E. A., MacArthur, E., Hainey, T., & Boyle, J. M. (2012). A systematic literature review of empirical evidence on computer games and serious games.
Computers & Education, 59(2), 661–686.
Csikszentmihalyi, M. (2008). Flow: The psychology of optimal experience. New York: HarperCollins.
Deterding, S., Dixon, D. & Khaled, R. (2011). Gamification: toward a definition. In The ACM CHI Conference on Human Factors in Computing Systems 2011 (pp. 12–15).

5 Meaningful Play: Getting Gamification Right. http://www.youtube.com/watch?v=7ZGCPap7GkY.

Deterding, S., Dixon, D., Khaled, R. & Nacke, L. (2011). From game design elements to gamefulness: defining gamification. In Proceedings of the 15th International Academic
MindTrek Conference (pp. 9–15).
Facer, K. (2003). Computer games and learning. Screen, 6, 35–46, December 2007. Available at: http://admin.futurelab.org.uk/resources/documents/discussion_papers/Computer_Games_and_Learning_discpaper.pdf Accessed 17.12.12.
Fernández-Alemán, J. L., Palmer-Brown, D., & Jayne, C. (2011). Effects of response-driven feedback in computer science learning. IEEE Transactions on Education, 54, 501–508.
de Freitas, S., & Oliver, M. (2006). How can exploratory learning with games and simulations within the curriculum be most effectively evaluated? Computers & Education,
46(3), 249–264.
Garrido, P. P., Grediaga, A., & Ledesma, B. (2008). VisualJVM: a visual tool for teaching Java technology. IEEE Transactions on Education, 51(1), 86–92.
Gåsland, M. (2011). Game mechanic based e-learning. Science and Technology, Master Thesis (June 2011). Available at: http://ntnu.diva-portal.org/smash/get/diva2:441760/FULLTEXT01 Accessed 27.07.12.
Gee, J. P. (2003). What video games have to teach us about learning and literacy. Computers in Entertainment, 1(1), 20–20.
Goldberg, K. & Song, D. (2004). Unsupervised scoring for scalable internet-based collaborative teleoperation. In Proceedings of the 2004 IEEE International Conference on
Robotics and Automation (pp. 4551–4556).
Heeter, C., Lee, Y. H., Medler, B., & Magerko, B. (2011). Beyond player types. In Proceedings of the 2011 ACM SIGGRAPH symposium on video games – Sandbox ’11. New York, New
York, USA: ACM Press, 43–43.
Heidegger, M. (1977). The question concerning technology and other essays. New York: Harper & Row.
Hicks, A. (2010). Towards social gaming methods for improving game-based computer science education. In Proceedings of the Fifth international Conference on the Foundations
of Digital Games – FDG ’10 (pp. 259–261). New York, New York, USA: ACM Press.
Kapp, K. M. (2012). The gamification of learning and instruction. San Francisco: Wiley.
Ke, E. (2009). A qualitative meta-analysis of computer games as learning tools. In R. E. Ferdig (Ed.), Effective electronic gaming in education (pp. 1–32). Hershey: Information
Science Reference.
Koster, R. (2005). A theory of fun for game design. Scottsdale, Arizona: Paraglyph Press.
Lee, J. J., & Hammer, J. (2011). Gamification in education: what, how, Why Bother? Definitions and uses. Exchange Organizational Behavior Teaching Journal, 15(2), 1–5.
Lee, J. J., & Hoadley, C. (2007). Leveraging identity to make learning fun: possible selves and experiential learning in massively multiplayer online games (MMOGs). Journal of
Online education, Available at http://www.innovateonline.info/index.php%3fview%3darticle%26id%3d348 Accessed 08.10.12.
Lepper, M. R. (1988). Motivational considerations in the study of instruction. Cognition and Instruction, 5(4), 289–309.
Liaw, S. (2008). Investigating students’ perceived satisfaction, behavioral intention, and effectiveness of e-learning: a case study of the Blackboard system. Computers &
Education, 51(2), 864–873.
Malone, T. W. (1980). What makes things fun to learn? Heuristics for designing instructional computer games. In Proceedings of the 3rd ACM SIGSMALL symposium and the first
SIGPC symposium on Small systems – SIGSMALL ’80 (pp. 162–169). New York, New York, USA: ACM Press.
Muntean, C. I. (2011). Raising engagement in e-learning through gamification. In The 6th International Conference on Virtual Learning ICVL 2012 (pp. 323–329).
Nicholson, S. (2012). A user-Centered theoretical framework for meaningful gamification GamesþLearningþSociety 8.0. Available at: http://scottnicholson.com/pubs/
meaningfulframework.pdf Accessed 08.10.12.
Rosas, R., Nussbaum, M., & Cumsille, P. (2003). Beyond Nintendo: design and assessment of educational video games for first and second grade students. Computers &Ed-
ucation, 40, 71–94.
Santos, P., Hernández-Leo, D., Pérez-Sanagustín, M., & Blat, J. (2012). Modeling the Computing Based Testing domain extending IMS QTI: framework, models and exemplary
implementations. Computers in Human Behavior, 28, 1648–1662.
Santos, P., Pérez-Sanagustín, M., Hernández-Leo, D., & Blat, J. (2011). QuesTInSitu: from tests to routes for assessment in situ activities. Computers & Education, 57, 2517–2534.
Silva, E. (2010). Gamifying learning with social gaming mechanics. In N. Payne, & F. Masie (Eds.), The Masie learning center perspectives 2010 (pp. 61–62), Available at http://
www.icde.org/filestore/Resources/Handbooks/Learningperspectives.pdf.
Simões, J., Díaz Redondo, R., & Fernández Vilas, R. (2013). A social gamification framework for a K-6 learning platform. Computers in Human Behavior, 29, 345–353.
Squire, K. (2002). Cultural framing of computer/video games. Game Studies, 2(1), Available at http://gamestudies.org/0102/squire/%3fref%3dHadiZayifla Accessed 27.07.12.
Squire, K. (2003). Video games in education. International Journal of Intelligent Games & Simulation, 2(1), 49–62.
Wang, H., & Sun, C. T. (2011). Game reward Systems: gaming experiences and social meanings. In C. Marinka, K. Helen, & W. Annika (Eds.), Proceedings of the DiGRA 2011
Conference: Think design play, Available at http://www.digra.org/dl/db/11310.20247.pdf.
Watson, W. R., Mong, C. J., & Harris, C. A. (2011). A case study of the in-class use of a video game for teaching high school history. Computers & Education, 56(2), 466–474.
