The Effects of Reviews in Video Tutorials

doi: 10.1111/jcal.12136
Abstract

This study investigates how well a video tutorial for software training that is based on Demonstration-Based Training supports user motivation and performance. In addition, it is studied whether reviews significantly contribute to these measures. The Control condition employs a tutorial with instructional features added to a dynamic task demonstration. The Review condition additionally includes video reviews. Participants were 55 seventh graders who viewed task demonstrations (and reviews) followed by practice. Both tutorials increased motivation (i.e., task relevance and self-efficacy) and performance. In addition, the Review condition had significantly better results for training time, self-efficacy and scores on an immediate post-test. Reviews have rarely been studied in dynamic visualizations. The present study suggests that there may be important advantages to be gained from concluding a demonstration video with a summary of the main points.
Accepted: 10 January 2016
Correspondence: Hans van der Meij, University of Twente, Faculty of Behavioural, Management and Social sciences, Department of Instructional Technology, Drienerlolaan 5, 7522 AE Enschede, the Netherlands. Email: [email protected]

Introduction

Until recently, most of the instructional support for the beginning and moderate software user came from paper tutorials. Under the influence of YouTube's rapid growth in popularity, and supported by easy to use programmes for video production, editing and sharing, more and more software companies and third party vendors have begun to switch to video as the primary medium for their tutorials (van der Meij, Karreman, & Steehouder, 2009). This raises the question whether video tutorials for software training can be at least as effective as the paper tutorials that they are replacing. This issue brings us into the current discussion on the advantages of dynamic versus static visualizations, a debate that revolves around the critical boundary conditions (e.g., Lowe, Schnotz, & Rasch, 2011; Brucker, Scheiter, & Gerjets, 2014). The key question is when one or the other form can be expected to be more effective.

The most important criterion is that the depiction should be aligned with the type of mental representation required of the user. Dynamic representations such as video can be expected to benefit the user only when there is a fit between the content and structure of what the user sees and what must be remembered. In software training the aim is the acquisition of procedural knowledge. The user must get to know the sequence of steps that lead to task completion in a particular software programme. This requires the user to learn to perform a series of actions that lead to changes on the screen that must be observed. According to the congruence principle, dynamic visualizations (e.g., video) should be particularly beneficial for learning such a task (Tversky, Bauer-Morrison, & Bétrancourt, 2002).

A suitable means of instructing people about a procedure comes from demonstrating task performance (Smith & Ragan, 2005). In order to learn from such a demonstration, the user must carefully observe the modelled procedure. Bandura's (1986) social–cognitive learning theory provides fundamental insights in the processes involved in such observational learning.
According to this theory, learning from task demonstrations involves the interrelated processes of attention, retention, production and motivation. The construction of a video should cater for these processes.

The next section describes the four processes and design measures to support these. Special attention is given to retention, for which the study investigates a unique design measure, namely the inclusion of a review video. Just as with end summaries in paper texts, it is expected that a concise summary of task achievement after a demonstration contributes to the user's memory of a procedure. The remainder of the paper reports on an experiment in which the effectiveness of a video tutorial with demonstrations is compared with a video tutorial with demonstrations plus reviews.

Demonstration-Based Training

There is an extensive literature on the design and effectiveness of demonstration videos for motor skills development (e.g., Schwan & Riempp, 2004; Ayres, Marcus, Chan, & Qian, 2009; Akinlofa, O'Brian Holt, & Elyan, 2013) and problem solving (e.g., Spanjers, van Gog, Wouters, & van Merrienboer, 2012; Hoogerheide, Loyens, & van Gog, 2014). This literature is only partly relevant for the design of a video tutorial for software training, however, because of a difference in goals. Motor skills training primarily revolves around learning physical actions (e.g., hand movements). In contrast, the emphasis in software training lies on getting to know the software interface and learning the action-reaction patterns in its handling. The user must learn to apply a procedure on the interface, rather than learn how to act on an input device. In addition, task procedures are strictly defined. All steps must be included and each step is unambiguous. Therefore, procedures are sometimes qualified as algorithms (Smith & Ragan, 2005). This distinguishes procedures from problem solving, which is more heuristic in nature.

For the construction of the video tutorial in this study we therefore also looked at two other sources. One source was recent research on the design and effectiveness of software training with video (e.g., Lloyd & Robertson, 2012; van der Meij & van der Meij, 2013; van der Meij & van der Meij, 2014; van der Meij & van der Meij, 2015). The other source was research on observational learning. Bandura (1986) draws attention to the basic processes that should be supported in model-based learning: attention, retention, production and motivation.

We discuss these processes next and complement their descriptions with the design guidelines that were followed in the construction of the video tutorial in this study. This design approach, in which task demonstrations are coupled with instructional measures for promoting learning, is called Demonstration-Based Training (Rosen et al., 2010; Grossman, Salas, Pavlas, & Rosen, 2013).

Attention is an active process in which the demonstrated information is filtered or selected. Users must attend primarily to the salient information; they must concentrate on what is pertinent for task accomplishment and ignore other information. This process is made difficult by a combination of a complex user-interface and the transient nature of videos. The interface challenges the user to discover where and what to look for on the screen. The medium challenges the user to do so with continuous and rapid screen changes. This is a daunting task that calls for design measures that can support the user's attentional processes.

Two important design features that address the user's distribution of attentional resources are signalling and pacing.

A well-known design measure for directing attention in reading from text is signalling (Lemarié, Lorch, Eyrolle, & Virbel, 2008). In videos, two prevalent ways of supporting the user in allocating attention to pertinent screen information are highlighting and zooming. Both techniques draw the user's attention to the relevant place or object on the screen (van der Meij & van der Meij, 2013). Two recent empirical studies show that signalling techniques can positively affect learning from dynamic visualizations (Amadieu, Mariné, & Laimay, 2011; Jin, 2013).

Pacing is a slightly elusive design feature. The advice is that the pace should be moderate; it should not be too slow for risk of boredom, nor should it be too fast for risk of cognitive overload (Koumi, 2013). In short, the native pacing of the video should be adapted to what the audience can handle. Because it is hard to establish pace on the basis of design guidelines alone, it is best to pilot test the video for its pacing. A corollary design feature of pacing is the inclusion of a toolbar that the user can employ to play, rewind, pause or stop the video.
Such a toolbar enables the user to adapt the pace of the video depending on what is needed to process the information. In their study on videos for learning to tie nautical knots, Schwan and Riempp (2004) reported important advantages of user-pacing. Among others, they found that users made a heavier use of toolbar functions to adapt the pacing of the videos on more difficult knots. More generally, user-pacing appears to be an important design feature that affects learning from dynamic visualizations (e.g., Stiller, Freitag, Zinnbauer, & Freitag, 2009; Witteman & Segers, 2010; Merkt, Weigand, Heier, & Schwan, 2011).

Retention refers to the comprehension and storing of information for future behaviour. The demonstration should be designed in such a way that the user can understand how a task is performed, and it should support the user in remembering the procedure so that it can serve as a guide for future action. Three main measures for supporting retention are optimized segment length, simple-to-complex task sequencing and the inclusion of pauses.

Presenting tasks in manageable units or segments facilitates the user's understanding of a procedure. Complex or long tasks should therefore be split into smaller units or segments. Such splits are preferably based on a meaningful subtask division. An important boundary condition to keep in mind in this respect is video length. Research suggests that a maximum length of 3 min is acceptable, but that a duration of 1 min is best to keep all users aboard (Plaisant & Shneiderman, 2005; Wistia, 2012; Guo, Kim, & Rubin, 2014).

Another way of supporting understanding comes from organizing the tasks in a simple-to-complex sequence. Placing easier tasks before more difficult ones has the advantage that the user can keep up with increasing levels of task complexity. At each moment in training the user then faces a task that should be manageable (van Merriënboer, Kirschner, & Kester, 2003).

The user can be supported in remembering a procedure by the inclusion of brief, 2- to 5-s pauses at key moments in a demonstration. A recent empirical study by Spanjers et al. (2012) shows that such pauses can support retention of dynamic representations in two ways. One, pauses can demarcate key units or segments for the user and thereby contribute to understanding. The breaks signal the important units or building blocks of which a procedure consists. Two, pauses interrupt the continuous stream of information in a dynamic presentation. The user can benefit from such a break by engaging in maintaining activities. The pause gives the user time to engage in mental rehearsal (Rosen et al., 2010).

Production refers to the learner's actions taken to accomplish the modelled task performance. The main instructional feature advocated for supporting this process is the inclusion of complementary practice (Grossman et al., 2013; van der Meij & van der Meij, 2013).

Practice can serve as a check of understanding and recall. During practice the user may come to realize that a step in the procedure is forgotten or an error is made, leading to a need to restudy the video. This suggests that it is beneficial for the user to have easy access to the videos during practice. Practice can also consolidate a procedure. It can reinforce what the user remembers. Empirical studies on multimedia show that users usually benefit from practice, but that it may depend on their prior knowledge whether practice best occurs before or after a demonstration (e.g., Reisslein, Atkinson, Seeling, & Reisslein, 2006; Wouters, Paas, & van Merriënboer, 2010).

Motivation refers to the intensity, valence and persistence of one's learning-directed behaviour (Pintrich & Schunk, 2002). It is the driving force behind the processes of attention, retention and production. Earlier we mentioned the simple-to-complex sequencing of tasks as a facilitator of understanding. In addition, this design measure is likely to contribute to user motivation. Other features that can positively affect motivation are a task-oriented organization and the presence of a human narrator using a conversational style.

For software instructions, a distinction is often made between a function and a task orientation. The first refers to a presentation mode that concentrates on affordances. The user receives explanations of software functions, features and interface elements. Reference guides are sometimes organized in this fashion. In a task-oriented presentation, usage of the software by the audience is given a central role. The focus lies on selecting or creating tasks that the user instantly recognizes as genuine and meaningful (van der Meij & Carroll, 1998).

Another feature that can contribute to user motivation is the presence of a human voice that addresses the user in a conversational rather than formal style. Various empirical studies have found proof of this personalization effect (Kartal, 2010; Reichelt, Kämmerer, Niegemann, & Zander, 2014). A recent meta-study further substantiated this effect, noting that training time was an important moderator. When instructions take longer than 35 min the personalization effect disappeared (Ginns, Martin, & Marsh, 2013).
Summaries with text and video

Reviews, summaries of steps for task completion, can bring the principal solution steps that lead to task accomplishment back into the user's active memory. Thus, they would seem optimally suited to contribute to retention of the information needed to complete the task. Surprisingly, very little, if any, documentation exists for the design and effectiveness of video reviews. A literature search for empirical studies on the effectiveness of reviews in videos revealed no hits. The meta-analysis of expository animations by Ploetzner and Lowe (2012) also did not report a single case involving reviews among the 44 empirical studies that were analysed. When our search was extended to include summaries with texts the pursuit was only slightly more fruitful. Only a few older studies were discovered. Their findings are reported in the succeeding texts. Our discussion here concentrates on the studies that investigated summaries after a text, which we will call 'end summaries'.

Hartley, Goldie and Steen (1976) conducted an experiment in which they examined the influence of summary placement on text retention as indicated by recall. There were three conditions: (a) beginning summary, (b) end summary and (c) no summary. After reading the text, participants were asked a number of questions about the text. Recall was best for the end summary. No differences were found between the beginning summary and the no summary conditions.

McLaughlin Cook (1981) considered the argument that the finding by Hartley et al. (1976) might be related to the attention paid to the summaries. That is, the absence of a positive effect for the beginning summary might be caused by readers skipping over it. To investigate this possibility, he conducted an experiment with four conditions: (a) beginning summary on the same page as the text ('beginning summary – same page'), (b) beginning summary on a separate page from the text ('beginning summary – separate page'), (c) end summary and (d) no summary. Text recall was measured with a set of questions. No difference in recall was found between the 'beginning summary – separate page' and 'end summary' conditions, which both yielded significantly higher recall than the other conditions. The conclusion was that summaries could increase recall if their design is such that it stimulates readers to actively process them.

Hartley and Trueman (1982) reviewed the outcomes from the research conducted to that date. Besides the two previously mentioned studies, they reported having found one study from 1955 (i.e., Christensen & Stordahl) with no reliable effects, and one study from 1973 (i.e., Vezin, Berge, & Mavrelis) that reported a significant benefit from the inclusion of an end summary. Next, Hartley and Trueman gave an account of five consecutive empirical studies on the effect of the placement of summaries on text retention and recall. Four of the five experiments used texts that included (a) a beginning summary, (b) end summary or (c) no summary. Students were instructed to read a text (and summary) in order to make a judgment as to its readability. After reading the text (and summary) once, the students answered recall questions. The overall finding was that summaries consistently improved recall for cued information. There was no effect on recall of information not mentioned in the summary. Also, no difference was found for summary position. Beginning and end summaries were found to be equally supportive.

All in all, the few empirical studies on end summaries with texts show that these enhance retention and recall. The relative dearth of studies on the effectiveness of end summaries supports the contention of Hartley and Davies (1976) that their value 'seems to be so obvious that few people have felt any real need to subject the concept to empirical investigation' (p. 251).

Experimental design and research questions

This study compares a video tutorial on Word's formatting options that includes task demonstrations with and without reviews (Review and Control condition, respectively). Testing in software training usually revolves around the three main facets mentioned in ISO 9241, namely, engagement, effectiveness and efficiency. This study investigates these facets. More specifically, the following research questions are addressed:

Research question 1: Does condition influence training time?

Training time is checked to determine whether it is affected by the added presence of reviews. Because the review videos are very short, no difference between conditions is expected.
Research question 2: How well do the video tutorials support motivation, and is there an effect of condition?

The video tutorials are designed to make the training tasks meaningful and doable in participants' eyes. To assess their motivational impact, we examine task relevance and self-efficacy. Task relevance refers to present and future value of completing a task. It indicates the importance of a task to someone's goals or concerns (Pintrich & Schunk, 2002). Self-efficacy can be defined as a person's expectancy for success in novel tasks (Bandura, 1997). The two motivational constructs are measured before and after training. No effect of condition is expected for task relevance because both tutorials present the same task demonstrations. Self-efficacy, however, is likely to be positively affected by the reviews. As reviews summarize the main steps for task accomplishment, they may increase the participant's confidence about (future) task completion.

Research question 3: How well do the video tutorials support task performance and learning, and is there an effect of condition?

The video tutorials are expected to be equally effective in supporting task performance during training, if only because participants can always consult the demonstration videos. After training, video access is blocked when learning is assessed. Participants are tested immediately after training and again one week later. Because the review videos support retention, they should yield higher scores on both an immediate post-test and a delayed post-test taken one week after training.

Method

Participants

The participants consisted of 55 students (mean age 13 years; range 12.0–14.4) at two middle schools in Germany. Accordingly, all study materials were in German. The 33 male students and 22 female students were from two seventh grade classrooms. Individuals were randomly assigned to condition, after stratification for classroom.

Instructional materials

The video tutorials teach several formatting tasks in the German version of Microsoft Word 2007. The tasks are organized into three 'chapters'. Chapter 1 demonstrates how to adjust the left and right margins for an entire document, in two task videos. Chapter 2 models the formatting of paragraphs, citations and lists, in four videos. Chapter 3 demonstrates how Word can create a table of contents, in five videos. The tasks all revolve around improving the lay-out of reports that the participants must regularly produce for school. The tasks are presented in a simple-to-complex sequence.

The tutorials are presented on a website that is divided into two areas (Figure 1). The area on the left presents a clickable table of contents. The table of contents is always visible to afford easy and permanent access to the recorded demonstrations. Chapter titles (purple background) organize similar tasks. Task titles (light blue background) link to the videos, as signalled by an icon at the end.

After a participant clicks on a task title, the light blue background colour changes to orange and the demonstration video comes up on the right-hand side of the website. In addition, a transparent toolbar automatically appears at the bottom, allowing the participant to start the video playing, pause, resume and stop (Figure 1). The toolbar can also be used to increase or decrease the sound level. A progress bar shows how far the video has progressed.

Demonstration videos contain information about the goal, the required participant actions and the effects of these actions. The narrator, a native male speaker of German, begins by introducing the upcoming task. The participants are told about the nature of a formatting problem ('This text has margins that are too narrow') and the solution. This section of the narrative should have a positive effect on the participants' motivation, contributing to goal setting (Grossman et al., 2013; van der Meij & van der Meij, 2013). After that, the narrator consistently tells participants about required actions involving the input device (e.g., 'Click the left mouse button') and the interface (e.g., 'Drag the margin to 2.5 centimeters').
The effects of these actions on the interface are also shown, and the narrator regularly draws the participants' attention to these changes, using standard phrases such as, 'A window appears with the text …' and 'You now see …' Zooming-in and highlighting (Figure 2) occasionally complement these comments, in order to further emphasize pertinent screen objects or areas. Finally, after the task demonstration has been completed there is a brief pause of 2 s. In all, there were 11 task demonstration videos, with a mean length of 1.14 min (range 0.48–1.46).

Review videos summarize task demonstrations. They appear automatically a few seconds after a demonstration has finished. All reviews begin with the announcement 'You're finished now, but remember…' Thereafter, the narrator frames his comments in the 'I'-form. This indicates the difference between the review and the task demonstration and minimizes the need for participants to recode statements into personal action plans (e.g., 'First, I must click on […] Then, I should select […] Finally, I press the TAB-key'). The review video concentrates on the (sub)goals and the actions that need to be performed. Compared with the demonstrations, much less attention is given to the system reactions. Just as in the demonstrations, the reviews include animated screen displays with signalling. Reviews take from 13 to 26 s. The mean length of a task demonstration video with review is 1.31 min (range 1.02–2.06).

During training, participants are instructed to use practice files that have been created especially for these tasks. These files are accessible from a folder with the student's name that is on the computer desktop. Practice files minimize the need for task-irrelevant actions, such as typing, and they include few distracting formatting features (van der Meij & Carroll, 1998). In addition, these files standardize practice; they make task completion efforts comparable across conditions. The training time measure is based on saved modified practice files, as is task performance success during training. Inspection of the practice files later revealed that participants had forgotten to save about 10% of these task files. In data analyses these omissions were counted as incorrect solutions.

A paper instruction booklet provides participants with a task scenario and supports them in switching between viewing the video and engaging in practice. The booklet sets out a task sequence that is identical with that of the table of contents on the website. This task sequence guides participants to work through the materials in a fixed order. Chapter titles organize the task titles. Under each task title, the booklet instructs the participant first to watch the video, and then to engage in practice (Figure 3). The booklet also includes a few (repeated) questions to assess motivational mediators (i.e., mood and flow). The findings for these mediators favoured the Review condition, but are not reported here.
modify the format of pre-test files. An immediate post-test asks participants to complete the same formatting tasks addressed in the training, using post-test files that differ only in appearance from pre-test or practice files. The delayed post-test is similarly constructed. Correct task completion is worth 1 point and an incorrect attempt is worth 0 points. All performance test measures are timed (maximum 20 min). Test scores are converted to a percentage of possible points.

Procedure

The experiment was conducted in two sessions that took place in the schools' computer laboratories. Each computer was labelled with a number and equipped with the files and instruments for the study. In addition, there were headphones for the participants to use during training. The first session began with a 5-min introduction that informed participants about the Word training they would receive. After that, the IEMQ and pre-test were completed (maximum 20 min). After a short break, this was followed by another (10 min) introduction, in which the training procedure and the use of the instruction booklet were explained. Participants could also practice site navigation and file handling with a scaled-down version of the website. Furthermore, participants were instructed to wear headphones during training, to work individually and to ask for help only when they experienced technical problems. During training, participants could always consult the videos. The maximum training time was 40 min. After that, participants completed the Final Motivation Questionnaire and took the post-test (20 min).
During testing, participants were not allowed to consult the videos. The second session took place seven days later. In this session, participants took the delayed post-test (20 min).

Analysis

A check on random distribution of participants across conditions revealed no statistically significant differences for age, F(1,54) = 1.03, n.s., or gender, χ²(1,55) = 0.045, n.s. Conditions also did not differ on IEMQ scores. Repeated measures ANOVAs were computed to gauge changes over time within conditions. ANCOVAs were computed to examine the effect of condition on motivation, task performance and learning, using the IEMQ or pre-test score as covariate. Tests on the assumption of homogeneity of variance indicated no violations. Likewise, there was no violation of the assumption of homogeneity of regression slopes in the ANCOVAs. For some measures the degrees of freedom varied slightly, because of missing data. One outlier was removed for the motivation measures after training. The tables present only the data from students with complete data sets. Tests were one-tailed for directional predictions (indicated with the test result) and two-tailed for all other cases, with alpha set at 0.05. Cohen's (1988) d statistic is used to report effect sizes. These tend to be qualified as small for d = 0.2, medium for d = 0.5 and large for d = 0.8.
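For readers who wish to apply the same analytic approach to comparable data, the sketch below illustrates an ANCOVA with the pre-test as covariate and a pooled-SD Cohen's d. It is an illustration only, not the authors' analysis code; the data frame, its column names (condition, pre, post) and the randomly generated placeholder scores are assumptions made for the example.

# Illustrative sketch (not the authors' code): ANCOVA on post-test scores
# with the pre-test as covariate, plus Cohen's d with a pooled standard
# deviation. Column names and placeholder data are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d for two independent groups, using the pooled SD."""
    n_a, n_b = len(a), len(b)
    pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
# Hypothetical data frame: one row per participant (28 Review, 24 Control).
df = pd.DataFrame({
    "condition": ["Review"] * 28 + ["Control"] * 24,
    "pre": rng.uniform(0, 50, 52),      # placeholder pre-test scores (%)
    "post": rng.uniform(50, 100, 52),   # placeholder post-test scores (%)
})

# ANCOVA: effect of condition on the post-test, controlling for the pre-test.
model = smf.ols("post ~ pre + C(condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

d = cohens_d(df.loc[df.condition == "Review", "post"].to_numpy(),
             df.loc[df.condition == "Control", "post"].to_numpy())
print(f"Cohen's d = {d:.2f}")  # ~0.2 small, ~0.5 medium, ~0.8 large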
Results

There was a significant difference in training time between conditions, F(1,54) = 5.81, p = 0.019, d = 0.67. Participants in the Review condition completed training faster.

Motivation before and after training

The scores for task relevance before training indicate the presence of a relatively low level of prior task interest (Table 1). After training the task relevance appraisals were significantly higher, F(1,50) = 165.33, p < 0.001, d = 2.51. The resulting scores for task relevance were substantially above the scale midpoint. An analysis of covariance on task relevance-after, with task relevance-before as a covariate, showed no effect of condition, F(1,51) = 1.53, n.s., which was the predicted finding.

The initial scores for self-efficacy were around the scale midpoint, which suggests that participants began training with some degree of confidence in their capacities to deal with the training tasks (Table 1). After training self-efficacy belief was significantly higher, F(1,50) = 61.68, p < 0.001, d = 1.57, yielding scores that were substantially above the scale midpoint. An analysis of covariance for self-efficacy-after, with self-efficacy-before as a covariate, showed a significant effect of condition, F(1,51) = 4.29, p = 0.022 (one-sided). As predicted, participants in the Review condition showed a greater increase in self-efficacy ratings after training.
Table 1. Means (Standard Deviations) for Task Relevance and Self-efficacy by Condition

                      Task relevance                Self-efficacy
Condition             Before        After          Before        After
Review (n = 28)       2.70 (1.35)   6.07 (0.66)    4.10 (1.70)   5.97 (0.52)
Control (n = 24)      3.24 (1.65)   5.80 (0.94)    3.73 (1.57)   5.61 (0.67)
Total (n = 52)        2.95 (1.50)   5.94 (0.80)    3.93 (1.64)   5.81 (0.62)

Note. Scale maximum is 7. A higher score indicates higher appreciation.
Table 2. Mean Success Rates (Standard Deviations) for Pre-test, Training, Immediate Post-test and Delayed Post-test by Condition

Condition             Pre-test       Training       Immediate post-test   Delayed post-test
Review (n = 28)       23.7 (15.0)    88.4 (15.6)    86.2 (18.1)           89.3 (13.5)
Control (n = 24)      22.9 (18.3)    86.5 (19.1)    77.1 (19.7)           81.8 (23.3)
Total (n = 52)        23.3 (16.4)    87.5 (17.1)    82.0 (19.2)           85.8 (18.9)
Task performance and learning

data, a mean success score of 98.3% for training tasks (in both conditions) would have been obtained.

The findings for the effectiveness of the video tutorials as a support for learning are likewise positive. An average success rate of 82.1% was achieved on the immediate post-test (Table 2). Compared with the mean pre-test score of 23%, the increase was both statistically significant and substantial, F(1, 53) = 331.97, p < 0.001, d = 3.34. A slightly higher success rate was even seen on the delayed post-test. Compared with the mean pre-test score, this increase was also both statistically significant and substantial, F(1, 50) = 336.88, p < 0.001, d = 3.57.

An analysis of covariance on the immediate post-test, with pre-test scores as covariate, yielded a statistically significant effect for condition, F(1,54) = 2.90, p = 0.048 (one-sided), in favour of the Review condition, as predicted. However, contrary to prediction, the same analysis showed no difference between conditions for the delayed post-test, F(1,51) = 2.04, n.s.

Discussion and conclusion

Contrary to expectations, a significant effect in training time was found in favour of the Review condition. Perhaps the time difference reflects the benefit of having a short recap before practice. That is, participants who have just seen a review are likely to have better retention of the main steps in a procedure and would therefore need to check back less often to support task execution during practice.

The video tutorials had a strong and positive effect on motivation. Measures of the participants' task relevance indicated that training significantly increased this perception. A similar finding was obtained for self-efficacy, indicating that participants both found the formatting tasks meaningful and felt confident that they could deal with such tasks in the future. The data also revealed high post-training results for motivation. After training, the participants rated task relevance and self-efficacy at over 80% of the scale maximum. For task relevance this finding is even more remarkable, as participants started out with scores initially below the scale mid-point.

As expected, a significant difference between conditions was found for self-efficacy, favouring the Review condition. Presumably, this is an effect of reminding the user what it takes to achieve task completion. That is, when a review recapitulates the key steps and actions in a task completion process it conveys the impression that the task is manageable, requiring only a few actions to round it off successfully.

The video tutorials also significantly and substantially improved success on task performance. From an initial success rate of 23%, the scores increased to a success rate of 87% on the training tasks. Because participants could consult the videos during training, this score signals the effectiveness of the tutorials as a job-performance aid (van der Meij et al., 2009). In addition, significant and substantial learning effects were found. Test scores for the immediate and delayed post-test were 82% and 85%, respectively. In other words, the absolute level of participants' task and test performance was high, with and without review.

The prediction that the Review condition would result in greater learning was partly supported. A significant effect of condition favouring the Review condition was found, but only on the immediate post-test. We ascribe this effect to the retention process that the reviews set out to support. In retention, users must transform their observations of the demonstration videos into symbolic codes that are stored for future behaviour. The reviews address this retention process, as they present the steps towards task completion in condensed format. They show what a mental replay or cognitive rehearsal of the steps involved in task completion would look like.

While a positive effect of the review was found for the immediate post-test, there was no difference between conditions for the delayed post-test. The review is a
References

Hartley, J., & Davies, I. K. (1976). Preinstructional strategies: The role of pretests, behavioral objectives, overviews and advance organizers. Review of Educational Research, 46, 239–265. doi:10.2307/1170040.
Hartley, J., & Trueman, M. (1982). The effects of summaries on the recall of information from prose: Five experimental studies. Human Learning, 1, 63–82.
Hartley, J., Goldie, M., & Steen, L. (1976). The role and position of summaries: Some issues and data. Educational Review, 31, 59–65. doi:10.1080/0013191790310107.
Hoogerheide, V., Loyens, S. M. M., & van Gog, T. (2014). Comparing the effects of worked examples and modeling examples on learning. Computers in Human Behavior, 41, 80–91. doi:10.1016/j.chb.2014.09.013.
Jin, S.-H. (2013). Visual design guidelines for improving learning from dynamic and interactive digital text. Computers & Education, 63, 248–258. doi:10.1016/j.compedu.2012.12.010.
Kartal, G. (2010). Does language matter in multimedia learning? Personalization principle revisited. Journal of Educational Psychology, 102, 615–624. doi:10.1037/a0019345.
Koumi, J. (2013). Pedagogic design guidelines for multimedia materials: A call for collaboration between practitioners and researchers. Journal of Visual Literacy, 32(2), 85–114.
Lemarié, J., Lorch, R. F., Eyrolle, H., & Virbel, J. (2008). SARA: A text-based and reader-based theory of signaling. Educational Psychologist, 43, 27–48. doi:10.1080/00461520701756321.
Lloyd, S. A., & Robertson, C. L. (2012). Screencast tutorials enhance student learning of statistics. Teaching of Psychology, 39(1), 67–71. doi:10.1177/0098628311430640.
Lowe, R., Schnotz, W., & Rasch, T. (2011). Aligning affordances of graphics with learning task requirements. Applied Cognitive Psychology, 25, 425–459. doi:10.1002/acp.1712.
McLaughlin Cook, N. (1981). Summaries: Further issues and data. Educational Review, 33(3), 215–222. doi:10.1080/0013191810330305.
van der Meij, H., & Carroll, J. M. (1998). Principles and heuristics for designing minimalist instruction. In J. M. Carroll (Ed.), Minimalism beyond the Nurnberg funnel. Cambridge, MA: MIT Press.
van der Meij, H., & van der Meij, J. (2013). Eight guidelines for the design of instructional videos for software training. Technical Communication, 60, 205–228.
van der Meij, H., & van der Meij, J. (2014). A comparison of paper-based and video tutorials for software learning. Computers & Education, 78, 150–159.
van der Meij, J., & van der Meij, H. (2015). A test on the design of a video tutorial for software training. Journal of Computer Assisted Learning, 31, 116–132.
van der Meij, H., Karreman, J., & Steehouder, M. (2009). Three decades of research and professional practice on software tutorials for novices. Technical Communication, 56, 265–292.
Merkt, M., Weigand, S., Heier, A., & Schwan, S. (2011). Learning with videos vs. learning with print: The role of interactive features. Learning and Instruction, 21, 687–704. doi:10.1016/j.learninstruc.2011.03.004.
van Merriënboer, J. J. G., Kirschner, P. A., & Kester, L. (2003). Taking the load off a learner's mind: Instructional design for complex learning. Educational Psychologist, 38, 5–13. doi:10.1207/S15326985EP3801_2.
Pintrich, P. R., & Schunk, D. H. (2002). Motivation in education. Theory, research, and applications (2nd ed.). Upper Saddle River, NJ: Merrill Prentice Hall.
Plaisant, C., & Shneiderman, B. (2005). Show me! Guidelines for recorded demonstration. Paper presented at the 2005 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC'05), Dallas, Texas. http://www.cs.umd.edu/localphp/hcil/tech-reports-search.php?number=2005-02
Ploetzner, R., & Lowe, R. (2012). A systematic characterisation of expository animations. Computers in Human Behavior, 28, 781–794. doi:10.1016/j.chb.2011.12.001.
Reichelt, M., Kämmerer, F., Niegemann, H. M., & Zander, S. (2014). Talk to me personally: Personalization of language style in computer-based learning. Computers in Human Behavior, 35, 199–210. doi:10.1016/j.chb.2014.03.005.
Reisslein, J., Atkinson, R. K., Seeling, P., & Reisslein, M. (2006). Encountering the expertise reversal effect with a computer-based environment on electrical circuit analyses. Learning and Instruction, 16, 92–103. doi:10.1016/j.learninstruc.2006.02.008.
Rosen, M. A., Salas, E., Pavlas, D., Jensen, R., Fu, D., & Lampton, D. (2010). Demonstration-based training: A review of instructional features. Human Factors, 52, 596–609. doi:10.1177/0018720810381071.
Schwan, S., & Riempp, R. (2004). The cognitive benefit of interactive videos: Learning to tie nautical knots. Learning and Instruction, 14, 293–305. doi:10.1016/j.learninstruc.2004.06.005.
Smith, P. L., & Ragan, T. J. (2005). Instructional design (3rd ed.). Hoboken, NJ: Wiley.
Spanjers, I. A. E., van Gog, T., Wouters, P., & van Merrienboer, J. J. G. (2012). Explaining the segmentation effect in learning from animations: The role of pausing and temporal cueing. Computers & Education, 59, 274–280. doi:10.1016/j.compedu.2011.12.024.
Stiller, K. D., Freitag, A., Zinnbauer, P., & Freitag, C. (2009). How pacing of multimedia instructions can influence modality effects: A case of superiority of visual texts. Australasian Journal of Educational Technology & Society, 25, 184–203.
Tversky, B., Bauer-Morrison, J., & Bétrancourt, M. (2002). Animation: Can it facilitate? International Journal of Human-Computer Studies, 57, 247–262. doi:10.1006/ijhc.2002.1017.
Wistia. (2012). Does length matter? Retrieved from http://wistia.com/blog/does-length-matter-it-does-for-video-2k12-edition
Witteman, M. J., & Segers, E. (2010). The modality effect tested in children in a user-paced multimedia environment. Journal of Computer Assisted Learning, 26, 132–142. doi:10.1111/j.1365-2729.2009.00335.x.
Wouters, P., Paas, F., & van Merriënboer, J. J. G. (2010). Observational learning from animated models: Effects of studying-practicing alternation and illusion of control on transfer. Instructional Science, 38, 89–104. doi:10.1007/s11251-008-9079-0.
Yue, C. L., Bjork, E. L., & Bjork, R. A. (2013). Reducing verbal redundancy in multimedia learning: An undesired desirable difficulty? Journal of Educational Psychology, 105, 266–277. doi:10.1037/a0031971.