Computers & Education: Lijia Lin, Robert K. Atkinson
Article history: Received 30 September 2009; Received in revised form 7 October 2010; Accepted 8 October 2010

Keywords: Multimedia/hypermedia systems; Human–computer interface; Interactive learning environments

Abstract: The purpose of the study is to investigate the potential benefits of using animation, visual cueing, and their combination in a multimedia environment designed to support learners' acquisition and retention of scientific concepts and processes. Undergraduate participants (N = 119) were randomly assigned to one of the four experimental conditions in a 2 × 2 factorial design with visual presentation format (animated vs. static graphics) and visual cueing (visual cues vs. no cues) as factors. Participants provided with animations retained significantly more concepts than their peers provided with static graphics, and those afforded visual cues learned equally well but in significantly less time than their counterparts in uncued conditions. Moreover, taking into consideration both learning outcomes and learning time, cued participants displayed greater instructional efficiency than their uncued peers. Implications and future directions are discussed.

Published by Elsevier Ltd.
1. Introduction
As computer technologies advance, the use of graphics in computer-based educational environments has become commonplace and
appears to be gaining increasing popularity. In the past several decades, a large number of studies have been conducted to investigate
various issues concerning the benefits of using static and dynamic graphical representations in multimedia learning environments. Two
important issues are: (a) the relative effectiveness of the presentation format (i.e., animation versus static media) and (b) the potential instructional benefits of visual cueing. This study investigates how these two factors (i.e., presentation format and visual cues), either
separately or in combination with one another, influence the retention of science knowledge in a multimedia learning environment.
According to Bétrancourt and Tversky (2000, p. 313), animation is a visual representation that “generates a series of frames, so that each frame appears as an alteration of the previous one”. By its nature, animation can therefore vividly present events that change over time, such as motion, processes and procedures, and it provides more external support than static graphics for learners to construct dynamic internal representations. Several studies over the past decades have reported positive results favoring the use of instructional animations. For instance, Rieber (1990) provided participants with a computer-based lesson on Newton’s laws of motion, using either static or animated graphics. The results revealed that participants in the animated graphics condition had a better understanding of the concepts and rules of Newton’s laws than those in the static graphics condition. In another study, participants viewed either animations or
static diagrams of chemical reactions during a lecture (Yang, Andre, & Greenbowe, 2003). The researchers found that participants who
received the instructor-paced animation demonstrated better understanding of chemistry concepts than their counterparts studying static
diagrams. Kriz and Hegarty (2007) conducted a series of experiments using an animated diagram and a static diagram to teach participants
about a flushing system. They found that participants who learned from the animation had significantly better comprehension of the system
compared to those studying a static diagram, regardless of whether the animation was interactive or had signaling devices. In the meta-analysis conducted by Höffler and Leutner (2007), an overall effect in favor of animations was found across the dozens of reviewed studies. Other
studies also support animation’s effectiveness on learning (e.g., Arguel & Jamet, 2009; Ayres, Marcus, Chan, & Qian, 2009; Catrambone &
Seay, 2002; Large, Beheshti, Breuleux, & Renaud, 1996; Münzer, Seufert, & Brünken, 2009; Wong et al., 2009). It is of note that a positive
learning effect was found for animations in a wide range of domains including science (physics and chemistry concepts), engineering
(mechanical systems), and daily life skills (paper folding and knot making).
Cognitive load theory (Paas, Renkl, & Sweller, 2003; Schnotz & Kurschner, 2007; Sweller, van Merrienboer, & Paas, 1998) provides
a theoretical framework to explain the superiority of instructional animations over static graphics. It assumes that a human’s working
memory has limited capacity and considers learning as a process of schema acquisition. There are three subcomponents of cognitive
load: intrinsic load, extraneous load and germane load. Intrinsic load is determined by element interactivity and cannot be altered as long as the learners’ expertise and the learning material remain unchanged (Schnotz & Kurschner, 2007).
Extraneous load is caused by inappropriate instructional format and is irrelevant to learning, whereas appropriate instructional design
fosters learning-related cognitive activities, i.e., germane load. By viewing instructional animations, learners need not exert cognitive effort to mentally construct dynamic representations themselves. As a result, more cognitive resources are freed up, which could potentially be used
for learning-related activities and deep processing. On the other hand, learning with static graphical representations requires information
integration and inferential reasoning, which may impose considerable mental load on learners. These additional processing requirements
may cause learners to experience cognitive overload, as indicated in some research findings (Hegarty, 1992; Hegarty & Just, 1993).
Tversky, Morrison and Betrancourt (2002) concluded that the superiority of animation found in some reviewed studies (e.g., Park &
Gittelman, 1992; Thompson & Riding, 1990) should be attributed to the increased amount of information that is conveyed in animation
compared to static graphics, rather than the animation per se. By controlling for the information delivered by different visualizations in
a series of experiments, some researchers (Mayer, Hegarty, Mayer & Campbell, 2005) found that paper-based static media (i.e., illustrations accompanied by text) were as effective as, or even better than, computer-based system-paced animations with narration at promoting retention and transfer. A cognitive load approach provides one possible explanation for the failure of animations to outperform static media. Due to the animation’s transitory nature, learners need to study the current information delivered by the animation while at the same time referring back to the previous learning content. As a result, learners may experience a high level of extraneous load, which impedes learning. Mayer and Chandler (2001) found that learners who studied segmented, learner-controlled animations understood lightning formation more deeply than their peers who viewed the animation as a single continuous unit. Therefore, in order to mitigate the transitory nature of animation, learner control should be made available to learners (Ayres & Paas, 2007); segmentation and interactivity are specific techniques for providing learners with control over the learning environment (Mayer & Moreno, 2003).
As the reviewed literature shows, the results of research comparing the effectiveness of animations and static diagrams are mixed. Some studies revealed an advantage of instructional animations, other studies showed the effects of animations and static graphics to be equivalent with regard to learning, and a few studies even reported that static visualizations were superior. General comparisons between dynamic and static graphics that ignore specific moderating conditions are therefore unlikely to yield systematic conclusions. As Hegarty (2004, p. 344) indicated, researchers should investigate “what conditions must be in place for dynamic visualizations to be effective in learning”. Höffler and Leutner (2007) found that effect sizes in favor of animations differed for declarative, problem-solving and procedural knowledge, with procedural knowledge producing the largest effect size and problem-solving knowledge the smallest. Comparisons of animated and static graphics should therefore take the type of knowledge to be learned into consideration.
In a multimedia learning environment, information is presented through visual and/or auditory channels via multiple formats, such as
graphics, on-screen text and narrations. When graphics are presented with narrations, learners may need to search the visualizations for relevant information in order to build connections between what they see and what they hear. When complex visualizations are presented, visually searching for information that matches the narration held in working memory may become difficult for learners and lead to high extraneous load. Under such conditions, learners may perceive and comprehend information from salient yet irrelevant parts of the graphical representations, resulting in poor learning and performance (Lowe, 2003). Visual cueing is one technique for directing learners’ attention in the multimedia environment.
Visual cueing is the addition of non-content information (e.g., arrows, circles, and coloring) to visual representations. Research (de Koning, Tabbers, Rikers, & Paas, 2009; de Koning, Tabbers, Rikers, & Paas, 2010a) has shown that visual cues are effective in guiding learners’ attention within animations in multimedia environments. As a result, visual cueing has the potential to facilitate the selection of relevant information, one of the essential processes for active learning (Mayer, 2005). From the cognitive load perspective, a substantial number of
studies have found that visual cueing is an effective method to reduce extraneous load in multimedia learning environments (for reviews, see
Mayer & Moreno, 2003; Wouters, Paas, & van Merriënboer, 2008) and several studies supported the instructional benefits of visual cueing
(Atkinson, Lin, & Harrison, 2009; de Koning, Tabbers, Rikers, & Paas, 2007, 2010b; Jamet, Gavota, & Quaireau, 2008; Jeung, Chandler, & Sweller,
1997; Kalyuga, Chandler, & Sweller, 1999). For instance, de Koning et al. (2007) conducted a study to investigate the effectiveness of a cued
animated cardiovascular system (using a spotlight effect). The researchers compared learning outcomes for participants who viewed a cued
animation with those who viewed the animation without a visual cue. The results showed that participants in the cued animated condition
had significantly higher scores on both comprehension and transfer tests. Jamet et al. (2008) used a coloring technique as visual cues in their
study. They found that participants who studied saliently colored graphics of the human brain performed significantly better than the group
that viewed non-saliently colored graphics. In terms of efficiency (Paas & van Merriënboer, 1993; van Gog & Paas, 2008), Kalyuga et al. (1999) found that color-coded diagrams promoted more efficient learning than conventional diagrams without color coding.
One purpose of the current study was to investigate whether animations were more effective than static graphics in promoting learning, i.e., the retention of concepts and processes about the rock cycle, an earth science topic. The rock cycle is a model that involves formation,
breakdown, and reformation among the three main types of rocks on the earth: igneous rock, sedimentary rock and metamorphic rock. The content requires learners to learn the concepts and processes that comprise the rock cycle. In order to retain knowledge about concepts
and processes, learners need to accurately build dynamic mental models of how rocks are formed. Animations have the potential to facilitate
knowledge construction with this type of learning content (Höffler & Leutner, 2007; Rieber, 1990; Yang et al., 2003). Therefore,
we hypothesized that animations would enhance retention of both concepts and processes. The study also investigated the potential cognitive benefits of adding visual cues to visualizations to enhance science learning in a multimedia environment. Based on the literature reviewed in the previous section, we hypothesized that visual cueing would be effective in enhancing learning.
In addition to learning, cognitive load and motivation were also investigated. By providing learner control over animations, the transitory nature of animations could be mitigated. Therefore, we expected that, compared to static graphics, animations would reduce extraneous load and consequently foster germane load. We also expected visual cueing to reduce extraneous load in the multimedia learning environment, which is in line with Mayer and Moreno (2003) and Wouters et al. (2008). Only a few studies have investigated
learners’ motivation in multimedia learning, e.g., motivation in an agent-based environment (Moreno, Mayer, Spires, & Lester, 2001), in
an online animation-based environment (Rosen, 2009) or motivation with young children with low cognitive interest (Kim, Yoon, Whang,
Tversky, & Morrison, 2007). As motivation impacts learning (Boekaerts, 2007; Husman & Hilpert, 2007), this study explored the potential
effects of animations and visual cueing on learners’ intrinsic motivation in the multimedia environment.
Two independent variables were manipulated in the study: presentation format (animated vs. static graphics) and visual cueing (visual
cues vs. no visual cues). Other variables, such as the instructional content, the level of learner control and the number of presentation
segments were held constant. This study incorporated a number of dependent variables, including participants’ (a) learning outcomes,
(b) subjective cognitive load and (c) intrinsic motivation. As there was no time restriction in the learning phase, learning time was measured
as an en-route variable. Also, learners’ prior knowledge was statistically controlled in the study, as research revealed an interaction between
learners’ level of prior knowledge and the instructional presentation format (ChanLin, 1998, 2001; Kalyuga, 2007, 2008; Kalyuga, Ayres,
Chandler, & Sweller, 2003).
Learning efficiency scores were computed using the formula E = (z_performance − z_learning time)/√2, which was adapted from previous literature (Paas & van Merriënboer, 1993; van Gog & Paas, 2008) and was used by Gerjets, Scheiter, Opfermann, Hesse, and Eysink (2009).
By using this construct, the current study has taken into account both learning outcomes and learning time. The greater the value of the
learning efficiency score, the more efficient the instruction.
3. Method
One hundred and nineteen participants (61 males and 58 females) from a large southwestern university in the US participated in the
study. They were students recruited from the general campus population as well as from educational psychology and introductory computer
courses in the College of Education. They were all over 18 years old and their average age was 25.57 years (SD = 8.98). They were paid a small
stipend ($10) for their participation.
This study used a pretest–posttest, 2 (animation vs. static graphics) × 2 (visual cues vs. no visual cues) between-subjects design, in which
the participants were randomly assigned to one of the four conditions: (a) static graphics with visual cues, (b) static graphics without visual
cues, (c) animations with visual cues, and (d) animations without visual cues.
Due to technical problems, the data of seven participants were not recorded by the computer program. They were therefore excluded from the analysis, leaving 112 participants in total (28 in each condition).
The computer-based instructional materials were intended to deliver a lesson about the rock cycle. The characteristics of the three types
of rocks (i.e., igneous rock, sedimentary rock and metamorphic rock) were described. Processes, such as volcano eruption, weathering,
erosion and metamorphism, were also explained to show how different types of rocks transform into each other. By retaining concepts
and processes in this domain, learners construct their internal representations of the rock cycle.
The learning environment was created using Visual Basic and was embedded with two-dimensional graphics created using Adobe Flash.
In all four experimental conditions, participants listened to a female voice without a foreign accent narrating the content while the content-related graphics were presented simultaneously. The visual presentations differed among the four conditions. In the visually cued animation condition, participants viewed 20 segments of animations about the characteristics of the three main types of rocks and the transformation processes among them. Visual cues (i.e., red arrows) were added to these animations to highlight important information. Specifically, the arrows were used to highlight concepts (e.g., the name of a rock) or processes (e.g., weathering). No other types of visual
cues, such as circles, spotlight or hand pointing, were used. Participants assigned to the uncued animation condition viewed the same
number of animation segments in the same order as the cued condition and with the same narrations but without the addition of any visual
cues. In the cued-static-graphics condition (Fig. 1), 20 key frames taken from the corresponding segments of the cued animated graphics
were presented to participants; whereas 20 key frames taken from the corresponding uncued animation condition were presented to
participants assigned to uncued-static-graphics condition (Fig. 2). To keep the information delivered in the four conditions as equivalent
as possible, the narrations accompanying each of the 20 static graphics were exactly the same as those that accompanied the 20 segments of animations.
Before the lesson, a tutorial screen (Fig. 3) appeared and a brief description of the navigation features and the multimedia environment
was provided. Neither content-related graphics nor narration appeared in the tutorial. The content in the computer-based lesson was
segmented into 20 separate screens. In the animation conditions, learners could stop and start the narration and animation as often as
they needed using the two control buttons located at the bottom of the animation on each screen. These buttons controlled both the
animation and the narration in order to maintain visual and audio synchronization. In the two conditions where the graphics were static,
learners were provided with the two control buttons to stop and start the narration. Each screen also included navigation buttons to allow
learners to go back to the previous screen or go forward to the next screen to view visualizations. There was no time limit for learners
to complete the lesson. However, their learning times (in minutes) were recorded by the computer program.
A pretest consisting of 20 multiple choice questions was administered to measure participants’ prior knowledge about the content. Each
test question had four choices: one correct answer and three distracters. All items in the pretest were automatically scored by the computer
program according to the following rules: 0 points for an incorrect answer or 1 point for a correct answer. Therefore, a maximum total of 20
points could be achieved on the pretest. A posttest was used to measure participants’ comprehension of the material after instruction. The posttest was almost identical to the pretest except that the order of the questions was different; the question order in the pretest and posttest was determined using a random number table. Since the content involved learning about concepts and processes in the rock cycle and
knowledge retention was the learning goal, the test was divided into two sets of questions designed to measure both concept retention and
process retention. Specifically, there were 10 questions measuring learners’ concept retention and 10 questions measuring learners’ process
retention. A similar way of labeling different forms of tests was also used by Jamet et al. (2008) and Münzer et al. (2009). A sample test
question for concept retention is “What is the name of molten rock under the earth’s surface?” with four choices (A. Magma; B. Lava;
C. Sediments; D. Volcanic rock). A sample test question for process retention is “According to the rock cycle, which of the following is incorrect?”
with four choices (A. Igneous rocks may metamorphose into metamorphic rocks; B. Magma may crystallize to form igneous rocks;
C. Sedimentary rocks may weather to become igneous rocks; D. Metamorphic rocks may melt to become magma).
We assume that learners have the ability to reflect on their cognitive processes and provide their responses on numerical scales (Gopher
& Braune, 1984; Paas, Tuovinen, Tabbers, & Van Gerven, 2003). Therefore, self-report measures were used to measure participants’ cognitive
load and intrinsic motivation. Three subjective questions (i.e., task demands, effort and navigational demands; see Table 1) were used to measure the subcomponents of cognitive load (i.e., intrinsic load, germane load and extraneous load, respectively). They were adapted from the
NASA-TLX (Hart & Staveland, 1988) and were described in studies conducted by Gerjets, Scheiter, and Catrambone (2004) and Scheiter,
Gerjets, and Catrambone (2006). According to Scheiter et al. (2006), a mapping was assumed between the theoretical subcomponents
and the items of modified NASA-TLX. Participants rated each of the three questions on an 8-point Likert scale. For the first question, “1” was
labeled as easy in the rating scale and “8” as demanding; for the second question, “1” was labeled as not hard at all and “8” as very hard; for the
third question, “1” was labeled as low effort and “8” as high effort. Participants’ intrinsic motivation was also measured by an 8-point
Likert scale ranging from “1” (not at all true) to “8” (very true). There were a total of 15 statements, adapted from Ryan’s study (Ryan, 1982),
assessing intrinsic motivation with six subscales: interest, competence, value, effort, pressure and choice (see Table 2).
3.4. Procedure
The experiment was conducted in a laboratory setting. At the beginning of the experiment, a researcher asked participants to sign
a consent form for participation. Next, they were seated in an individual cubicle, facing a computer, and were briefed by the researcher about the procedure of the experiment. However, participants were unaware of the different conditions and the research questions involved in the
experiment. Then, they started the pretest on the computer with no time limit. After the completion of the pretest, each participant
Table 1
Cognitive load measurement.
Table 2
Intrinsic motivation items.
Item Subscale
1. I thought it was a boring activity. Interest
2. I think I was pretty good at this activity. Competence
3. I think that doing this activity could be useful. Value
4. I didn’t try very hard to do well at this activity. Effort
5. I did not feel nervous at all while doing this. Pressure
6. I believe I had some choice about doing this activity. Choice
7. It was important to me to do well at this task. Effort
8. I believe doing this activity could be beneficial to me. Value
9. I felt very tense while doing this activity. Pressure
10. I did this activity because I had no choice. Choice
11. This activity was fun to do. Interest
12. I put a lot of effort into this. Effort
13. This was an activity that I couldn’t do very well. Competence
14. I believe this activity could be of some value to me. Value
15. I would describe this activity as very interesting. Interest
was provided with a randomly assigned experiment ID number to start the computer-based lesson. Once the participants completed the
lesson, a posttest was administered followed by a questionnaire. Neither activity had a time limit. The questionnaire had two parts:
subjective cognitive load measures and intrinsic motivation measures. Upon completion of the posttest and the questionnaire, the
participants were thanked and paid. The participants needed approximately 30 min to complete the entire study.
4. Results
Table 3 presents the means, adjusted means (if available) and standard deviations for two types of learning outcome measures (i.e.,
concept retention and process retention), subjective cognitive load measures (i.e., task demands, effort and navigational demands), learning
time and learning efficiency for the four experimental conditions. All of the means for the learning outcome measures were transformed
from raw scores to percentages. An alpha level of .05 was used for all statistical analyses. Cohen’s f was used as an effect size index; accordingly, .02, .15 and .35 were taken as the values for small, medium and large effect sizes (Cohen, 1988).
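For readers who wish to verify the reported effect sizes, Cohen's f for an ANOVA or ANCOVA effect can be recovered from the F ratio and its degrees of freedom via partial eta-squared. The following minimal Python sketch illustrates the conversion; the helper function name is ours, and the use of partial eta-squared is an assumption about the computation rather than a detail reported in the article.

import math

def cohens_f(F, df_effect, df_error):
    # Partial eta-squared from the F ratio, then Cohen's f:
    #   eta_p^2 = F * df_effect / (F * df_effect + df_error)
    #   f = sqrt(eta_p^2 / (1 - eta_p^2)) = sqrt(F * df_effect / df_error)
    eta_p_sq = (F * df_effect) / (F * df_effect + df_error)
    return math.sqrt(eta_p_sq / (1.0 - eta_p_sq))

# Example: the concept retention ANCOVA reported below, F(1, 107) = 4.18,
# yields f of approximately .20, matching the value given in the text.
print(round(cohens_f(4.18, 1, 107), 2))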
A one-way analysis of variance (ANOVA) was conducted to evaluate whether participants’ prior knowledge significantly differed across the four experimental conditions. There was no significant difference in total pretest percentage scores across the four conditions, F(3, 108) = 1.18, MSE = .03, p = .32, f = .18. In addition, no significant difference was found in participants’ knowledge of concepts (F(3, 108) = .86, MSE = .05, p = .47, f = .15) or processes (F(3, 108) = 1.58, MSE = .04, p = .20, f = .21).
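A preliminary check of this kind can be reproduced with standard statistical software. The sketch below shows one way to run the one-way ANOVA in Python, assuming a per-participant data table with hypothetical column names (condition, pretest_pct) and a hypothetical file name; it is an illustration, not the original analysis script.

import pandas as pd
from scipy import stats

# Hypothetical data file: one row per participant, with the assigned condition
# and the pretest percentage score.
df = pd.read_csv("pretest_scores.csv")

# One group of scores per experimental condition (four groups in total).
groups = [g["pretest_pct"].to_numpy() for _, g in df.groupby("condition")]

# One-way ANOVA across the four conditions.
F, p = stats.f_oneway(*groups)
print(f"F({len(groups) - 1}, {len(df) - len(groups)}) = {F:.2f}, p = {p:.2f}")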
Table 3
Mean and standard deviations of test scores, cognitive load, time, instructional efficiency and six subscales of intrinsic motivation.
Note. Measures of cognitive load and intrinsic motivation were on 8-point scales. Adj. = adjusted. CR = concept retention. PR = process retention. E = efficiency. IM = intrinsic motivation. Means of CR/pretest, PR/pretest, CR/posttest and PR/posttest and adjusted means of CR/posttest and PR/posttest are percentage scores.
a. The unit of time is minutes.
A two-way analysis of covariance (ANCOVA) was conducted to evaluate the effects of presentation format and visual cueing on concept retention, with prior knowledge (pretest score) as a covariate. A preliminary analysis was conducted to evaluate the homogeneity-of-slopes assumption. It showed a non-significant interaction between presentation format and the covariate (F(1, 108) = 3.29, MSE = .02, p = .51, f = .07) as well as a non-significant interaction between visual cueing and the covariate (F(1, 108) = .25, p = .62, f = .04), indicating that the relationship between the covariate and the dependent variable (i.e., concept retention) did not differ significantly as a function of the two independent variables (i.e., presentation format and visual cueing). Therefore, the ANCOVA was conducted. The analysis revealed a significant difference between the animation conditions and the static graphics conditions, F(1, 107) = 4.18, MSE = .02, p = .04, f = .20, indicating a medium-to-large effect favoring animations (M = 86.4%, SD = .14) over static graphics (M = 82.5%, SD = .16). However, there was no significant visual cueing main effect between the cued condition (M = 84.8%, SD = .14) and the uncued condition (M = 84.1%, SD = .17) with regard to concept retention, F(1, 107) = .81, p = .37, f = .09. Neither was there an interaction effect, F(1, 107) = .03, p = .85, f = .02.
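The two-step procedure described above, a homogeneity-of-slopes check followed by the ANCOVA itself, can be sketched with a general linear model. The Python fragment below is a minimal illustration under the assumption of a per-participant table with hypothetical column names (pretest, concept_post, fmt, cue); it is not a reproduction of the original analysis.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("rock_cycle_data.csv")  # hypothetical per-participant data

# Step 1: homogeneity-of-slopes check. The covariate (pretest) is allowed to
# interact with each factor; non-significant interactions justify the ANCOVA.
slopes = smf.ols(
    "concept_post ~ pretest + C(fmt) * C(cue) + pretest:C(fmt) + pretest:C(cue)",
    data=df,
).fit()
print(anova_lm(slopes, typ=2))

# Step 2: the ANCOVA proper, with pretest as the covariate and presentation
# format, visual cueing and their interaction as between-subjects factors.
ancova = smf.ols("concept_post ~ pretest + C(fmt) * C(cue)", data=df).fit()
print(anova_lm(ancova, typ=2))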
Three two-way ANCOVAs were planned to evaluate the effects of presentation format and visual cueing on learners’ task demands, effort, and navigational demands, which were taken as indications of intrinsic, germane and extraneous cognitive load, respectively. The covariate was the percent correct score on all pretest questions, so that participants’ prior knowledge was statistically controlled. Before each ANCOVA was conducted, the homogeneity-of-slopes assumption was examined. These preliminary analyses showed: (a) for task demands, a non-significant interaction between presentation format and the covariate (F(1, 108) = .10, MSE = 2.64, p = .75, f = .03) as well as a non-significant interaction between visual cueing and the covariate (F(1, 108) = .01, p = .92, f = .01); (b) for effort, a non-significant interaction between presentation format and the covariate (F(1, 108) = .08, MSE = 3.22, p = .78, f = .03) as well as a non-significant interaction between visual cueing and the covariate (F(1, 108) = .97, p = .33, f = .09); and (c) for navigational demands, a non-significant interaction between presentation format and the covariate (F(1, 108) = .30, MSE = 3.17, p = .58, f = .05) as well as a non-significant interaction between visual cueing and the covariate (F(1, 108) = 2.16, p = .15, f = .14). Therefore, the planned ANCOVAs were conducted. With regard to task demands, an indication of intrinsic load, neither main effect nor the interaction was significant: for the presentation format main effect, F(1, 107) = .18, MSE = 2.63, p = .67, f = .04; for the visual cueing main effect, F(1, 107) = .07, p = .80, f = .03; for the interaction effect, F(1, 107) = 1.45, p = .23, f = .11. Neither main effect nor the interaction was significant for effort, an indication of germane load: for the presentation format main effect, F(1, 107) = .06, MSE = 3.24, p = .81, f = .03; for the visual cueing main effect, F(1, 107) = 1.09, p = .30, f = .10; for the interaction effect, F(1, 107) = .44, p = .51, f = .06. Moreover, neither main effect nor the interaction was significant for navigational demands, an indication of extraneous load: for the presentation format main effect, F(1, 107) = 1.39, MSE = 3.08, p = .24, f = .11; for the visual cueing main effect, F(1, 107) = 2.32, p = .13, f = .15; for the interaction effect, F(1, 107) = 2.90, p = .09, f = .16.
A two-way ANOVA was conducted to explore whether participants in the four experimental conditions spent significantly different amounts of time studying the visualizations. A significant difference was found between the visually cued conditions (M = 8.15 min, SD = .21 min) and the uncued conditions (M = 8.81 min, SD = .21 min), F(1, 108) = 4.83, MSE = 2.52, p = .03, indicating that participants in the uncued conditions spent more time on learning than those in the cued conditions. The effect size (f = .21) indicated a medium effect of visual cueing. No significant difference was found between the animation conditions (M = 8.40 min, SD = 1.10 min) and the static graphics conditions (M = 8.56 min, SD = 2.05 min), F(1, 108) = .29, p = .59, f = .05. In addition, there was a significant interaction, F(1, 108) = 5.60, p = .02, f = .23. Therefore, an analysis of simple main effects was conducted. To control for Type I error, the Bonferroni approach was used and alpha was set at .025 (.05/2). Participants spent significantly more time learning with uncued static graphics than those learning with cued static graphics, F(1, 108) = 10.41, MSE = 2.52, p = .002, with a medium-to-large effect size (f = .31). However, the times that participants spent studying cued and uncued animations were not significantly different, F(1, 108) = .01, p = .91, f = .01. No other significant results were found concerning learning times.
According to Paas and van Merriënboer (1993) and van Gog and Paas (2008), raw scores (performance and time) should be transformed
to z scores to compute the efficiency. In order to take into account participants’ prior knowledge, gain scores (posttest − pretest) were computed and standardized. Therefore, learning efficiency scores were computed using the formula E = (z_performance − z_learning time)/√2.
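As a concrete illustration of this computation, the sketch below standardizes the gain scores and learning times across the whole sample and combines them according to the formula above. The data file and column names (pretest_pct, posttest_pct, time_min, fmt, cue) are hypothetical placeholders, not the authors' actual variable names.

import numpy as np
import pandas as pd

df = pd.read_csv("rock_cycle_data.csv")  # hypothetical per-participant data

def z(x):
    # Standardize across the whole sample (mean 0, SD 1).
    return (x - x.mean()) / x.std(ddof=1)

# Performance is the gain score (posttest minus pretest); both performance and
# learning time are standardized, then combined as E = (z_perf - z_time) / sqrt(2).
df["gain"] = df["posttest_pct"] - df["pretest_pct"]
df["efficiency"] = (z(df["gain"]) - z(df["time_min"])) / np.sqrt(2)

# Condition means of the efficiency score, broken down by format and cueing.
print(df.groupby(["fmt", "cue"])["efficiency"].agg(["mean", "std"]))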
A two-way ANOVA was conducted to investigate whether participants’ learning efficiency differed significantly among the four
conditions. The presentation format main effect was non-significant for the animation conditions (M = .15, SD = .74) and the static graphics conditions (M = .03, SD = 1.04), F(1, 108) = 1.20, MSE = .74, p = .28, f = .11; whereas a significant visual cueing main effect was found between the visually cued conditions (M = .33, SD = .78) and the uncued conditions (M = .21, SD = .95), F(1, 108) = 11.24, p = .001, f = .32. No interaction effect was found, F(1, 108) = 3.04, p = .08, f = .17.
A two-way multivariate analysis of variance (MANOVA) was conducted to explore the effects of presentation formats and visual cueing
on the six subscales of intrinsic motivation: interest, competence, value, effort, pressure and choice. Means on each of the six subscales were computed and used as dependent variables. No significant difference was found on the six subscales for the presentation format main effect, Wilks’ lambda = .92, F(6, 103) = 1.51, p = .18, f = .30, nor for the visual cueing main effect, Wilks’ lambda = .90, F(6, 103) = 1.85, p = .10, f = .33. Neither was there a significant interaction, Wilks’ lambda = .93, F(6, 103) = 1.30, p = .26, f = .28.
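For completeness, a multivariate analysis of this kind can be expressed compactly with a model formula. The sketch below uses the MANOVA class from statsmodels with hypothetical column names for the six subscale means and the two factors; mv_test() reports Wilks' lambda among other multivariate statistics. It is an illustrative sketch only, not the original analysis.

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data file: one row per participant with the six subscale means
# and the two between-subjects factors (presentation format, visual cueing).
df = pd.read_csv("motivation_subscales.csv")

manova = MANOVA.from_formula(
    "interest + competence + value + effort + pressure + choice ~ C(fmt) * C(cue)",
    data=df,
)
print(manova.mv_test())  # includes Wilks' lambda for each effect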
5. Discussion
One purpose of the study was to investigate the superiority of animations over static graphics in a multimedia learning environment. No previous research has investigated this issue with rock cycle content, which is one of the significant contributions of the current study. We hypothesized that instructional animations would promote the retention of concept and process knowledge in the domain of the rock cycle. This hypothesis was partially supported: animations promoted the learning of concepts. In order to learn concepts about
the rock cycle, learners need to construct internal representations of the rock cycle. From the results, we conclude that animations facilitate
this knowledge construction. One possible explanation is that the changes over time that the animations showed corresponded to the
nature of the rock cycle concepts. For instance, learners may benefit from the animations showing that magma is the molten rock under
the earth’s surface while lava is the molten rock that has come out onto the earth’s surface. According to Tversky et al. (2002), this correspondence is a condition for the successful use of animations. The current finding is consistent with Höffler and Leutner’s (2007) results that revealed
a medium positive effect for learning declarative knowledge with animations. Because the degree of interactivity, the number of presentation segments and the accompanying narrations were identical across all four experimental conditions, we can conclude that the animation effect in the study is not due to any of these three factors. In addition, it is worth noting that no positive effect of animations was found on the intrinsic motivation scales in the study. Therefore, we should not attribute the animation effect to motivation; it appears to be the animation per se that facilitated concept retention in this domain.
The study revealed a non-significant trend for participants who studied animations to score higher, on average, on process retention test questions than those who studied static graphics. As retaining information from animations depends greatly on how the animations are perceived (Lowe, 2003), it is likely that the animations presenting rock cycle processes were not salient enough to be fully perceived by learners in either the cued or the uncued conditions. This implies that more visual cueing devices are needed for the visualizations. An eye tracking
technique, tracking learners’ eye movements on animations, could be considered in future research to identify specific diagrammatic
elements that need to be visually cued. With regard to cognitive load, we did not find an animation effect for any of the three items intended
to measure intrinsic, germane and extraneous load. This may be due to the fact that the subjective cognitive load measures were not
administered during instruction. As a result, no distinction between concept retention and process retention can be made concerning
cognitive load. In future research, we could consider modifying the cognitive load measures by specifying task demands, effort and navigational demands separately for learning concepts and for learning processes of the domain content, and administering them multiple times during the learning phase. Doing so would address this limitation of the current study. In addition, we acknowledge possible measurement error in assessing cognitive load, as current subjective rating scales make it difficult to distinguish the subcomponents of cognitive load (Schnotz & Kurschner, 2007).
In this study, we also investigated the effect of visual cueing. In the past decade, the literature (de Koning et al., 2007, 2010b; Jamet et al., 2008; Jeung et al., 1997; Kalyuga et al., 1999; Mayer & Moreno, 2003; Wouters et al., 2008) has supported the instructional benefits of visual cueing. Although no significant visual cueing effect was found for the learning outcome measures in the current study, significant differences favoring visual cueing were found in learning time and efficiency in an environment that imposed no time constraint on learning. Specifically, when studying visually cued graphics, learners spent less time and learned more efficiently than their peers studying uncued instructional materials. Furthermore, learners in the cued static graphics condition spent less time than their peers in the uncued static graphics condition. Therefore, the results of the current study partially confirmed our hypothesis that, from the perspective of learning time and efficiency, visual cueing enhances learning. Visual cues may reduce learners’ search activity on the graphics, leading to reduced learning time and enhanced efficiency in a learning environment that does not impose a time limit; learners ultimately reached the same level of knowledge in different amounts of learning time due to the visual cueing effect. However, no cueing effect on cognitive load was found in the current study. This is consistent with a few previous studies (de Koning et al., 2007, 2010a, 2010b). Learners’ low ratings on the three
cognitive load measures and fairly high learning outcomes shed some light on the possible explanations. It is possible that the instructional
content was not difficult for learners after they studied it for a certain amount of time. Consequently, this resulted in low self-report ratings
on those subjective cognitive load measures. In future studies, more measures and techniques may be used to determine learners’
perceptions of difficulty. For instance, we can consider adding more subjective questions to ask learners about their perceived difficulty of
the instruction and their frustration level. We may also consider using physiological measures in the future. Specifically, physiological sensors can be used to measure the pressure a learner exerts on a mouse and his or her movements in a chair, reflecting frustration and other emotional states (D’Mello, Picard, & Graesser, 2007). Also, electroencephalography (EEG) methodology can be used to assess variations in
cognitive load (Antonenko, Paas, Grabner, & Van Gog, in press).
Some empirical studies (e.g., Atkinson et al., 2009; Boucheix & Lowe, 2010; de Koning et al., 2007, 2010a, 2010b; Mautone & Mayer, 2001)
only investigated the visual cueing effect in learning with animations. The results of the current study revealed that there was a visual cueing
effect on learning time when comparing the two static graphics conditions, which is also one of the significant contributions to the literature. The uncued static graphics appeared to hinder learning, so that learners had to invest more time to compensate. Our explanation is that the reduced time spent learning with cued static graphics may be attributed to reduced visual search activity. This shows that adding visual cueing devices to static graphics, a simpler presentation format than animations, has instructional benefits. However, the cueing effect disappeared when the instructional visualizations were animated (i.e., the interaction effect), suggesting that the type of visualization is a potential moderator of the visual cueing effect in multimedia learning. Therefore, we recommend that researchers take the type of visualization into consideration when conducting future research on visual cueing.
References
Antonenko, P., Paas, F., Grabner, R., & Van Gog, T. (in press). Using electroencephalography to measure cognitive load. Educational Psychology Review.
Arguel, A., & Jamet, E. (2009). Using video and static pictures to improve learning of procedural contents. Computers in Human Behavior, 25(2), 354–359.
Atkinson, R. K., Lin, L., & Harrison, C. (2009). Comparing the efficacy of different signaling techniques. In Proceedings of world conference on educational multimedia, hypermedia
and telecommunications 2009 (pp. 954–962). Chesapeake, VA: AACE.
Ayres, P., Marcus, N., Chan, C., & Qian, N. (2009). Learning hand manipulative tasks: when instructional animations are superior to equivalent static representations.
Computers in Human Behavior, 25(2), 348–353.
Ayres, P., & Paas, F. (2007). Making instructional animations more effective: a cognitive load approach. Applied Cognitive Psychology, 21(6), 695–700.
Bétrancourt, M., & Tversky, B. (2000). Effect of computer animation on users’ performance: a review. Le Travail Humain, 63(4), 311–329.
Boekaerts, M. (2007). What have we learned about the link between motivation and learning/performance? Zeitschrift für Pädagogische Psychologie, 21, 263–269.
Boucheix, J., & Lowe, R. K. (2010). An eye tracking comparison of external pointing cues and internal continuous cues in learning with complex animations. Learning and
Instruction, 20(2), 123–135.
Catrambone, R., & Seay, A. F. (2002). Using animation to help students learn computer algorithms. Human Factors, 44(3), 495–511.
ChanLin, L. (1998). Animation to teach students of different knowledge levels. Journal of Instructional Psychology, 25(3), 166–175.
ChanLin, L. (2001). Formats and prior knowledge on learning in a computer-based lesson. Journal of Computer Assisted Learning, 17(4), 409–419.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: L. Erlbaum Associates.
de Koning, B. B., Tabbers, H., Rikers, R. M. J. P., & Paas, F. (2007). Attention cueing as a means to enhance learning from an animation. Applied Cognitive Psychology, 21(6), 731–746.
de Koning, B. B., Tabbers, H., Rikers, R. M. J. P., & Paas, F. (2009). Towards a framework for attention cueing in instructional animations: guidelines for research and design.
Educational Psychology Review, 21(2), 113–140.
de Koning, B. B., Tabbers, H. K., Rikers, R. M. J. P., & Paas, F. (2010a). Attention guidance in learning from a complex animation: seeing is understanding? Learning and
Instruction, 20(2), 111–122.
de Koning, B. B., Tabbers, H. K., Rikers, R. M. J. P., & Paas, F. (2010b). Learning by generating vs. receiving instructional explanations: two approaches to enhance attention
cueing in animations. Computers & Education, 55(2), 681–691.
D’Mello, S., Picard, R., & Graesser, A. (2007). Toward an affect-sensitive autotutor. IEEE Intelligent Systems, 22(4), 53–61.
Gerjets, P., Scheiter, K., & Catrambone, R. (2004). Designing instructional examples to reduce intrinsic cognitive load: molar versus modular presentation of solution
procedures. Instructional Science, 32(1–2), 33–58.
Gerjets, P., Scheiter, K., Opfermann, M., Hesse, F. W., & Eysink, T. H. S. (2009). Learning with hypermedia: the influence of representational formats and different levels of
learner control on performance and learning behavior. Computers in Human Behavior, 25(2), 360–370.
Gopher, D., & Braune, R. (1984). On the psychophysics of workload: why bother with subjective measures? Human Factors, 26, 519–532.
Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (task load index): results of experimental and theoretical research. In P. A. Hancock, & N. Meshkati (Eds.),
Human mental workload (pp. 139–183). Amsterdam: North-Holland.
Hegarty, M. (1992). Mental animation: inferring motion from static displays of mechanical systems. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18(5),
1084–1102.
Hegarty, M. (2004). Dynamic visualizations and learning: getting to the difficult questions. Learning and Instruction, 14(3), 343–351.
Hegarty, M., & Just, M. A. (1993). Constructing mental models of machines from text and diagrams. Journal of Memory and Language, 32(6), 717–742.
Höffler, T. N., & Leutner, D. (2007). Instructional animation versus static pictures: a meta-analysis. Learning and Instruction, 17(6), 722–738.
Husman, J., & Hilpert, J. (2007). The intersection of students’ perceptions of instrumentality, self-efficacy, and goal orientations in an online mathematics course. Zeitschrift für
Pädagogische Psychologie, 21, 229–239.
Jamet, E., Gavota, M., & Quaireau, C. (2008). Attention guiding in multimedia learning. Learning and Instruction, 18(2), 135–145.
Jeung, H., Chandler, P., & Sweller, J. (1997). The role of visual indicators in dual sensory mode instruction. Educational Psychology, 17(3), 329–343.
Kalyuga, S. (2007). Expertise reversal effect and its implications for learner-tailored instruction. Educational Psychology Review, 19(4), 509–539.
Kalyuga, S. (2008). Relative effectiveness of animated and static diagrams: an effect of learner prior knowledge. Computers in Human Behavior, 24(3), 852–861.
Kalyuga, S., Ayres, P., Chandler, P., & Sweller, J. (2003). The expertise reversal effect. Educational Psychologist, 38(1), 23–31.
Kalyuga, S., Chandler, P., & Sweller, J. (1999). Managing split-attention and redundancy in multimedia instruction. Applied Cognitive Psychology, 13(4), 351–371.
Kim, S., Yoon, M., Whang, S. M., Tversky, B., & Morrison, J. B. (2007). The effect of animation on comprehension and interest. Journal of Computer Assisted Learning, 23(3), 260–270.
Kriz, S., & Hegarty, M. (2007). Top-down and bottom-up influences on learning from animations. International Journal of Human–Computer Studies, 65(11), 911–930.
Large, A., Beheshti, J., Breuleux, A., & Renaud, A. (1996). Effect of animation in enhancing descriptive and procedural texts in a multimedia learning environment. Journal of the
American Society for Information Science, 47(6), 437–448.
Lowe, R. K. (2003). Animation and learning: selective processing of information in dynamic graphics. Learning and Instruction, 13(2), 157–176.
Mautone, P. D., & Mayer, R. E. (2001). Signaling as a cognitive guide in multimedia learning. Journal of Educational Psychology, 93(2), 377–389.
Mayer, R. E. (2005). Cognitive theory of multimedia learning. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 31–48). New York, NY, USA: Cambridge
University Press.
Mayer, R. E., & Chandler, P. (2001). When learning is just a click away: does simple user interaction foster deeper understanding of multimedia messages? Journal of
Educational Psychology, 93(2), 390–397.
Mayer, R. E., Hegarty, M., Mayer, S., & Campbell, J. (2005). When static media promote active learning: annotated illustrations versus narrated animations in multimedia
instruction. Journal of Experimental Psychology: Applied, 11(4), 256–265.
Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38(1), 43–52.
Moreno, R., Mayer, R. E., Spires, H. A., & Lester, J. C. (2001). The case for social agency in computer-based teaching: do students learn more deeply when they interact with
animated pedagogical agents? Cognition and Instruction, 19(2), 177–213.
Münzer, S., Seufert, T., & Brünken, R. (2009). Learning from multimedia presentations: facilitation function of animations and spatial abilities. Learning and Individual
Differences, 19(4), 481–485.
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: recent developments. Educational Psychologist, 38(1), 1–4.
Paas, F., Tuovinen, J. E., Tabbers, H., & Van Gerven, P. W. M. (2003). Cognitive load measurement as a means to advance cognitive load theory. Educational Psychologist, 38(1), 63–71.
Paas, F., & van Merriënboer, J. J. G. (1993). The efficiency of instructional conditions: an approach to combine mental effort and performance measures. Human Factors, 35(4),
737–743.
Park, O.-C., & Gittelman, S. S. (1992). Selective use of animation and feedback in computer-based instruction. Educational Technology, Research, and Development, 40, 27–38.
Rieber, L. P. (1990). Using computer animated graphics with science instruction with children. Journal of Educational Psychology, 82(1), 135–140.
Rosen, Y. (2009). The effects of an animation-based on-line learning environment on transfer of knowledge and on motivation for science and technology learning. Journal of
Educational Computing Research, 40(4), 451–467.
Ryan, R. M. (1982). Control and information in the intrapersonal sphere: an extension of cognitive evaluation theory. Journal of Personality and Social Psychology, 43, 450–461.
Scheiter, K., Gerjets, P., & Catrambone, R. (2006). Making the abstract concrete: visualizing mathematical solution procedures. Computers in Human Behavior, 22(1), 9–25.
Schnotz, W., & Kurschner, C. (2007). A reconsideration of cognitive load theory. Educational Psychology Review, 19(4), 469–508.
Sweller, J., van Merrienboer, J. J. G., & Paas, F. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296.
Thompson, S. V., & Riding, R. J. (1990). The effect of animated diagrams on the understanding of a mathematical demonstration in 11- to 14-year-old pupils. British Journal of
Educational Psychology, 60, 93–98.
Tversky, B., Morrison, J. B., & Betrancourt, M. (2002). Animation: can it facilitate? International Journal of Human–Computer Studies, 57(4), 247–262.
van Gog, T., & Paas, F. (2008). Instructional efficiency: revisiting the original construct in educational research. Educational Psychologist, 43(1), 16–26.
Wong, A., Marcus, N., Ayres, P., Smith, L., Cooper, G. A., Paas, F., et al. (2009). Instructional animations can be superior to statics when learning human motor skills. Computers
in Human Behavior, 25(2), 339–347.
Wouters, P., Paas, F. G. W. C., & van Merriënboer, J. J. G. (2008). How to optimize learning from animated models: a review of guidelines based on cognitive load. Review of
Educational Research, 78(3), 645–675.
Yang, E., Andre, T., & Greenbowe, T. J. (2003). Spatial ability and the impact of visualization/animation on learning electrochemistry. International Journal of Science Education,
25(3), 329.