Web-Based Learning Environment
Executive Summary
Web-based courses and programs have increasingly been developed by many academic institu-
tions, organizations, and companies worldwide due to their benefits for both learners and educa-
tors. However, many of the developmental approaches lack two important considerations needed
for implementing Web-based learning applications: (1) integration of the user interface design
with instructional design and (2) development of the evaluation framework to improve the overall
quality of Web-based learning support environments. This study addressed these two weaknesses
while developing a user-centered, Web-based learning support environment for Global Position-
ing System (GPS) education: Web-based distance and distributed learning (WD2L) environment.
The research goals of the study focused on the improvement of the design process and usability of
the WD2L environment based on a theory-based Integrated Design Process (IDP) proposed in the
study. Results indicated that the proposed IDP was effective in that the study showed (1) the
WD2L environment’s equivalence to traditional supplemental learning, especially as a Web-based
supplemental learning program and (2) users’ positive perceptions of WD2L environment re-
sources. The study also confirmed that for an e-learning environment to be successful, various
aspects of the learning environment should be considered such as application domain knowledge,
conceptual learning theory, instructional design, user interface design, and evaluation about the
overall quality of the learning environment.
Keywords: Human-Computer Interaction, Usability Evaluation, Web-Based Distance and Dis-
tributed Learning (WD2L), Instructional Design, e-Learning
Introduction
As an increasingly powerful, interactive, and dynamic medium for delivering information, the
World Wide Web (Web) in combination with information technology (e.g., LAN, WAN, Internet,
etc.) has found many applications. One popular application has been for educational use, such as
Web-based, distance, distributed, or online learning. The use of the Web as an educational tool
has provided learners and educators with a wider range of new and interesting learning
experiences and teaching environments not possible in traditional in-class education (Khan,
1997). Web-based learning environments have been
developed mainly by instructional designers using traditional instructional design models such as
the instructional systems design (Dick & Carey, 1996), cognitive flexibility theory (Spiro, Fel-
tovich, Jacobson, & Coulson, 1991), and constructivist learning environment (Jonassen, 1999).
However, many of these approaches still lack two important considerations needed for imple-
menting learning applications based on the Web: (1) integration of the user interface design with
instructional design, and (2) development of the evaluation framework to improve the overall
quality of Web-based learning environments.
First, little attention has been paid to design issues of the human-computer interface, which are
critical factors to the success of Web-based instruction (Henke, 1997; Plass, 1998). Learners must
be able to easily focus on learning materials without having to make an effort to figure out how to
access them (Lohr, 2000). However, current instructional design principles and models do not
explicitly address usability issues of the human-computer interface. Second, the rapid growth of
Web-based learning applications has generated a need for methods to systematically collect con-
tinuous feedback from users to improve learning environments. Unfortunately, few attempts have
been made to develop such formative evaluation frameworks for Web-based learning environ-
ments whose foci are both the instructional system and user interface system. In addition, few
approaches take user interface design issues into account in their evaluation processes. A number
of evaluation frameworks that can be used to evaluate the user interfaces have been proposed
(e.g., Nielsen, 1993; Rubin, 1994). However, these models are intended for software environments
rather than for Web-based learning environments in which user interface systems should be de-
veloped to support users’ learning activities.
This study addressed these weaknesses while developing a user-centered, Web-based learning
support environment for Global Positioning System (GPS) education: a Web-based distance and
distributed learning (WD2L) environment. More specifically, there are two main research goals
addressed in this study, and these goals aimed to improve the design process and usability of the
WD2L environment. First, this study offered a systematic approach to the design, development,
and evaluation of a user-centered, WD2L environment for supporting engineering courses. Sec-
ond, this study evaluated the design process model by assessing the overall quality of the WD2L
environment prototype in terms of 1) students’ learning performance and 2) the quality of re-
sources implemented in the WD2L environment.
We first give an overview of relevant literature that guided the design, development, and evalua-
tion of the WD2L environment supporting GPS education. The development process will then be
briefly summarized. In addition, evaluation processes through the proposed formative evaluation
framework will be outlined. Finally, relationships between the design process framework and the
effectiveness of the WD2L environment will be discussed.
Background
Overview of GPS Education
To understand the application domain, a GPS course was analyzed and used as the testbed. As
shown in Table 1, there is educational demand for a new learning environment to effectively
support the course while meeting the societal demand for engineers educated in GPS fundamen-
tals.
However, there are also developmental challenges that should be considered. This identified do-
main knowledge also served as a basis from which to draw practical implications from the litera-
ture.
Savery & Duffy’s Problem-Based Learning (1995), Schank & Cleary’s goal-based scenarios (1995), and
the Cognition and Technology Group’s microworlds and anchored instruction (1992).
come the shortcomings of structured software engineering methods that ignore issues involved
in human-computer interaction and user interface design. The technologist approach claims that
designers produce poor quality interfaces because they have to spend more time in performing
time-consuming tasks, such as programming an interface, than in doing design activity during
development (Cockton, 1988). To allow designers to concentrate on design, the technologist ap-
proach attempts to provide automated development tools (e.g., the User Interface Management
System) and rapid prototyping tools (e.g., HyperCard and Multimedia Toolkit). The cognitive
approach applies psychological knowledge, such as theories of information processing and prob-
lem solving to the interface design (Barnard, 1991). This most theoretical approach to interface
design is characterized by an attempt to build precise and accurate users’ cognitive models that
represent their interaction with computers.
In order to design user interfaces that are easy to use and intuitive to anyone, it is important to
have good design skills as well as some knowledge of psychology, methodologies and prototyp-
ing. Therefore, all four approaches are fundamental to successful design of Web-based learning
environments. However, designing a usable interface that is also learner-centered is not trivial.
Thus, this study suggests employing a user-centered design process that takes human factors into
account. Gould & Lewis (1985) provide three principles of user-centered design: 1) an early fo-
cus on users and tasks, 2) empirical measurement of product usage, and 3) iterative design
whereby a product is designed, modified, and tested repeatedly. Rubin (1994) also suggests sev-
eral techniques, methods, and practices that can be used for the user-centered design. Some of
the examples include participatory design, focus group research, surveys, design walkthroughs,
expert evaluations, and usability testing.
tify and remove prominent errors. Various tools provided to support an instructor in Web-based
learning environments can be evaluated with the instructor, such as a course management system
(e.g., WebCT or Blackboard). Participants are also asked to evaluate the system in terms of
screen design, information structure, and menu structure. In the Small-group evaluation, group
learning activities (e.g., group discussion) and the multi-user interface system (e.g., Discussion
Board) can be evaluated by a group of people representative of the target population.
This study offered the Design Process Template to help implement each step of the design proc-
ess (Figure 2). There were two main reasons for providing this template. First, the template was
intended to provide factors that should be considered in each design process, such as process ob-
jectives, inputs, design steps, outputs, methods and tools. Another reason was that information
and developmental factors needing to be considered are not constant because of changes in tech-
nology, course structure, and users’ needs. Although it is not intended to be exhaustive, the tem-
plate helped to address such issues when developing the WD2L environment prototype.
Distributed Learning: Allow downloading and printing the materials from the WD2L environment
and any other Web sources (GPS Resources, GPS Glossary)
Collaborative Learning: Create a medium of collaboration, conversation, discussion, exchange,
and communication of ideas (Discussion Board, by group)
The Design Goals Setting process describes the determination of design goals and principles that
drive all design decisions throughout the development, which also serve as evaluation criteria for
usability testing in the Formative Evaluation Phase. Table 3 shows examples of design goals that
will govern all design decisions throughout the development of the WD2L environment.
Table 3. Design Goals for WD2L Environment Development
System | Design Goal | Description
As design goals of the instructional system, this study followed Dick and Carey’s (1996) evalua-
tion criteria: clarity of instruction and impact on learner. Clarity is a design goal to make sure that
what is being presented is clear to individual target learners. Impact is intended to improve an
individual learner’s attitude. The primary goal of the user interface was to design the interface so
the user can easily complete tasks by allowing simple, natural interactions with the WD2L envi-
ronment. For example, this study employed Norman's (1987) four principles of good design:
visibility, good conceptual model, good mapping, and feedback. Visibility indicates that the use
of a device should be as visible as possible to a user by clearly indicating the state of the device,
functionality, and the alternatives of action. A good conceptual model refers to consistency in the
presentation of user operations and results, which in turn allows the user to predict the relation-
ships between his/her actions and subsequent results (i.e., good mapping principle). Finally, the
feedback principle refers to informative feedback that users receive on their actions.
After logging into the GPS Theory & Design Website, John, who is taking the GPS
(ECE4164) course, checks the Announcements and finds a new announcement that a quiz
on corrections to Keplerian orbits for precise positioning (Chapter 5) has been posted by
the instructor. He selects Ch. 5 in the Lecture Notes sub-menu of the Classroom menu. At
the top of the page, the objectives of Chapter 5 are provided, describing what students
will learn and what they will be able to do after completing the chapter. He also reviews
the “Table of Contents,” where each topic is hyperlinked to the corresponding learning
unit. He clicks the Introduction link and studies it. To make sure that he has a full
understanding of the basic knowledge of Chapter 5, he clicks the Practice 1 link, which
allows him to practice what he has learned and get feedback on his performance.
Information content identified for the user interface and instructional system was integrated, re-
sulting in the Content Outline Document as an output of the process. The Content Outline Docu-
ment describes a list of the content identified for key user tasks in terms of page titles, page ele-
ments, and brief descriptions. The Structure Design process describes the main structure of the
WD2L environment. The main objective of the process was to specify the presentation and stor-
age structure of the WD2L environment. The structure of information in a Web site is important
in that well-structured information allows users to effectively perform necessary tasks or access
the required information. The Page Design process described the determination of content lay-
outs or schematics of main pages, displaying rough navigation and the layout of elements that
need to appear on a page. The main objective of the process was to specify the content layout and
navigational organization of a few key pages. This study adapted the Wireframing process pro-
vided by Goto & Cotler (2002) for Web redesign. To determine the content layout of a page, all
page content identified in the previous process was reviewed.
Phase 3: Development
The Development phase aimed to construct a high-fidelity (hi-fi) prototype of the WD2L en-
vironment, based on results of the initial user evaluation of low-fidelity (low-fi) prototypes. This
phase consisted of three design processes, which translate the conceptual user interface and in-
structional design into the hi-fi prototype of the WD2L environment: low-fidelity prototyping, de-
sign walk-through, and high-fidelity prototyping.
The Low-Fidelity Prototyping process describes the development of the low-fi prototypes of the
WD2L environment. The main goal of the process was to build a rough interface and instructional
system by integrating design ideas developed in the previous processes. The Design Walk-
Through process was concerned with soliciting initial feedback from users by having them walk
through the low-fi prototypes of the WD2L environment. The goals of the process were 1) to con-
firm that the proposed design of the WD2L environment (i.e., the low-fi prototype) is consistent
with target users’ expectations and skill levels, and 2) to use initial feedback to revise the low-fi
prototypes early in the design process before the full functionality is implemented. The High-
Fidelity Prototype process described the development of the hi-fi WD2L environment prototype,
in which full functionality is completed.
Method
Participants: Three SMEs who exhibited a high level of expertise in three main areas were se-
lected: instructional design (a 34-year-old Ph.D. candidate), user interface design (a 32-year-old
human factors Ph.D. student), and GPS content (a 27-year-old master’s candidate).
Equipment/Apparatus: To review and suggest recommendations to improve the first version of
the WD2L environment prototype, the SMEs were asked primarily to utilize their expertise in
their specialties. In addition, to help the SMEs review important aspects of the WD2L environ-
ment prototype, this study developed and provided three types of expert review forms: User In-
terface Review Form, Instructional Design Review Form, and Content Review Form.
Procedures: The three SMEs were given written instructions asking them to review the prototype
and provide design comments or recommendations that would help revise it. The user
profile specified in the Requirement Specification Document was also given to help the SMEs
have a better understanding of the target user group. It took about two hours for each expert to
complete the evaluation of the WD2L environment prototype.
mented and provided design recommendations for the modification of the instructional system.
The overall quality of the instructional design was good (e.g., Design for Target Audience (6.0),
Match to Learning Objectives (4.0), and Clear to be Self-Instructional (5.0)). The GPS content
expert also reviewed learning units and provided design recommendations for the modification.
Learning units received relatively high scores, ranging from 4.8 (Practice Unit – practice 4) to 5.4
(Quiz Review Unit – Quiz 1 Review).
Method
Participants: A new pool of four participants took part in the second session of the One-to-
One Evaluation. There were three male and one female participants (mean age, hereafter M, =
23.0 years; standard deviation, hereafter SD, = 0.82 years). Most participants classified their
computer skill level as somewhere between intermediate and experienced.
Experimental Materials and Benchmark Tasks: To evaluate the main functions of the interface and
instructional system, this study developed eight “benchmark” tasks representing users’ most
common tasks on the WD2L environment. For the interface system, for example, this study de-
veloped four benchmark tasks, which were searching information, uploading assignments, finding
GPS resources, and sending email. Another four different benchmark tasks were developed for
the instructional system, which were studying the learning content (i.e., Chapter 5), performing
practice sessions, reviewing the quiz, and performing prelaboratory activities.
Evaluation Criteria: As evaluation criteria for determining the overall quality of the instructional
system, this study used both clarity and impact of instruction. The overall quality of the user in-
terface system was determined in terms of the effectiveness, efficiency, and user satisfaction. To
measure user satisfaction with user interfaces, this study employed the Questionnaire for User
Interface Satisfaction (QUIS™ 7.0), consisting of five categories: initial satisfaction,
screen, terminology and system information, learning, and system capabilities.
Procedure: Participants were given written instructions for the task and asked to review the Site
Map page of the WD2L environment to familiarize themselves with the prototype. The partici-
pants then performed the eight benchmark tasks representing users’ most common tasks on the
WD2L environment, presented in random order. Beforehand, the participants were asked to think
aloud throughout the whole session and talk about what they were doing, why they were doing it,
and what they expected to happen when they performed an action. After benchmark tasks #4, #5,
#7, and #8, evaluation-of-instruction questionnaires were administered to capture participants’
evaluations of the clarity and impact of instruction. At the end of the evaluation, participants
completed the satisfaction questionnaire.
Results
As shown in Table 5, several measures were employed to investigate the overall quality of the
WD2L environment prototype from users’ perspective.
Table 5. A Summary of Usability Specifications: Evaluation 2
(each entry lists the value to be measured, followed by Target Level / Observed Results)

Initial Performance
  Benchmark Task #1 (Searching Information): number of features 4 / 4.50; time on task (s) 15 / 14.50; number of errors 0 / 0.25; frequency of Help use ≤1 / 0.00
  Benchmark Task #2 (Uploading Assignments): number of features 8 / 8.00; time on task (s) 40 / 36.25; number of errors 0 / 0.00; frequency of Help use ≤1 / 0.25
  Benchmark Task #3 (Finding GPS Resources): number of features 4 / 4.00; time on task (s) 30 / 29.50; number of errors 0 / 0.00; frequency of Help use ≤1 / 0.75
  Benchmark Task #6 (Sending Email): number of features 7 / 5.50; time on task (s) 50 / 48.00; number of errors 0 / 0.00; frequency of Help use ≤1 / 0.25

Clarity & Impact of Instruction
  Benchmark Task #4 (Studying Content): clarity of instruction 5.10 / 5.50; impact of instruction 5.10 / 5.25
  Benchmark Task #5 (Performing Practice): clarity of instruction 5.10 / 5.20; impact of instruction 5.10 / 5.17
  Benchmark Task #7 (Reviewing Quiz): clarity of instruction 5.10 / 5.60; impact of instruction 5.10 / 5.58
  Benchmark Task #8 (Performing Prelaboratory): clarity of instruction 5.10 / 5.17; impact of instruction 5.10 / 5.25

Satisfaction (QUIS 7.0)
  Initial satisfaction 8.10 / 8.13; screen 8.10 / 8.19; system information 8.10 / 8.13; system capabilities 8.10 / 8.15; multimedia 8.10 / 8.17
The “Target Level” values in Table 5 represent the performance goals. Target levels for the
number-of-features and time-on-task measures were derived by having an expert user (i.e., the
researcher of this study) complete each benchmark task in the fewest steps and recording the
completion times; for example, the expert user finished one task in about 30 seconds. Target
levels for the clarity and impact of instruction measures were set at 85% of a perfect score (i.e.,
5.1 out of 6.0). Ninety percent of a perfect score on the QUIS™ (i.e., 8.1 out of 9.0) was set as
the target level for the satisfaction measures. Target levels for the number of positive/negative
remarks, on the other hand, were set somewhat more arbitrarily (≤ 5), but were intended to be
rigorous enough to catch major usability problems.
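As a concrete illustration of this target-setting arithmetic, the following Python sketch reproduces the calculations described above; the constant names are ours, introduced only for illustration.

```python
# A minimal sketch of the target-level arithmetic described above.
# Constant names are illustrative and not from the original study.

PERFECT_INSTRUCTION_SCORE = 6.0  # clarity/impact rated on a 6-point scale
PERFECT_QUIS_SCORE = 9.0         # QUIS 7.0 items rated on a 9-point scale

# 85% of a perfect instruction score
instruction_target = 0.85 * PERFECT_INSTRUCTION_SCORE  # 5.1

# 90% of a perfect QUIS score
satisfaction_target = 0.90 * PERFECT_QUIS_SCORE        # 8.1

print(instruction_target, satisfaction_target)  # 5.1 8.1
```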
Effectiveness of the User Interface System: The percent of tasks completed was computed as the
ratio of completed tasks to total tasks (n = 8), reflecting overall task performance. Results
showed that participants completed all four user-interface benchmark tasks successfully.
Efficiency of the User Interface System: The efficiency of the user interface system was deter-
mined through three metrics: time on task, number of errors, and frequency of Help use. As shown
in Table 5, participants spent less time completing tasks than the target levels allowed. Results
also showed that participants made no errors while performing the tasks, except for the
information-searching task (i.e., Benchmark Task #1; mean number of errors = 0.25).
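A minimal sketch of how per-task efficiency measures of this kind can be aggregated and checked against the targets in Table 5 is shown below; the trial data and variable names are hypothetical, not the study’s.

```python
# Hypothetical per-participant logs for one benchmark task (searching
# information); values and names are illustrative only.
from statistics import mean

trials = [
    {"time_s": 13, "errors": 0, "help_uses": 0},
    {"time_s": 16, "errors": 1, "help_uses": 0},
    {"time_s": 14, "errors": 0, "help_uses": 0},
    {"time_s": 15, "errors": 0, "help_uses": 0},
]

# Target levels for Benchmark Task #1, taken from Table 5.
targets = {"time_s": 15, "errors": 0, "help_uses": 1}

for metric, target in targets.items():
    observed = mean(trial[metric] for trial in trials)
    verdict = "meets" if observed <= target else "misses"
    print(f"{metric}: observed {observed:.2f}, target {target} -> {verdict}")
```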
Clarity and Impact of Instructional system: The degree of clarity of instruction was rated higher
than target levels. The content received a mean of 5.50 (SD = 0.51), while practice sessions, quiz
review, and prelaboratory received mean values of 5.20 (SD = 0.77), 5.60 (SD = 0.60), and 5.17
(SD = 0.58), respectively. The degree of impact of instruction was also rated higher than target
levels set as 85% from a perfect score. The content received a mean of 5.25 (SD = 0.75), while
practice sessions, quiz review, and prelaboratory received mean values of 5.17 (SD = 0.72), 5.58
(SD = 0.67), and 5.25 (SD = 0.62), respectively.
Design Changes and Discussion
Results of the two One-to-One Evaluation sessions showed that almost all evaluation criteria
were met. However, some changes were still necessary for the third version of the WD2L
environment prototype, as reflected by participants’ design comments.
Research questions
The study sought to answer the following research questions concerning the effectiveness of the
Web-based GPS supplemental learning program developed in the present study.
1. Are there any differences in students’ learning performance between the Web-based GPS
supplemental learning program and traditional supplemental learning?
2. How do users evaluate the quality of resources implemented in the Web-based GPS sup-
plemental learning program?
Experimental design
Procedure: Participants were asked to take a short essay-type test (pretest). To learn the
content, all participants attended traditional classroom instruction on three separate days.
Right after the class, the short essay-type test was repeated (posttest). Participants who
were randomly assigned to the “Web-supplemental” condition (10 students) were instructed to
use the WD2L environment as a GPS supplemental learning program for further study. They were
told to visit the site at least once a day for 30 minutes. The other half of the participants (10 students),
who were randomly assigned to the “Traditional” condition, were told to study the learning con-
tent (i.e., Chapter 5) further using their normal method (e.g., reading books or asking instructor).
After 5 days, all participants took the transfer of knowledge test. The “Web-supplemental” group
completed the “Evaluation of Web-based GPS supplemental learning program” questionnaire.
Participants: Twenty students who took the GPS course in Fall 2003 volunteered for the study.
There were 4 female and 16 male participants (M = 23.13 years, SD = 2.9).
Independent Variables: This study employed Campbell & Stanley’s (1966) true experimental
design (Figure 4) in that the study included a purposively created control group (participants in
the “traditional” condition), common measured outcome (learning performance), and random as-
signment (participants were randomly assigned into each condition). There was one independent
variable manipulated in the study: supplemental learning type. The supplemental learning type
condition was manipulated as a 2 level condition: Web-supplemental and traditional conditions.
Participants in each condition were pre-tested and post-tested on their recall and assessed on their
transfer of knowledge.
Dependent Variables
Recall Test: A recall assessment should be used in order to gauge how much of the pre-
sented material the learner can remember (Mayer, 2001). Participants were assessed on their ini-
tial knowledge prior to and recall following the lecture. The test question was “What physical
effects (including the most important one) produce perturbations on satellite orbits predicted by
the basic Kepler orbital theory?” Accuracy was based on the occurrence of acceptable ideas in the
participant’s responses. To compute a score for a participant, initial knowledge and recall were
measured by the participant’s ability to remember the following idea units in their pre- and post-
test responses: Non-sphericity of the Earth; Tidal forces; Solar radiation; Relativistic effects. Per-
formance was expressed as the number of idea units reported divided by the total possible.
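To make the scoring rule concrete, here is a small Python sketch of idea-unit scoring under the rule just described; the keyword lists are a simplification we introduce for illustration (in the study, responses were scored by a human rater).

```python
# A sketch of the recall scoring rule described above: the score is the
# number of idea units mentioned divided by the total possible (here 4).
# The keyword matching is a simplification introduced for illustration.

IDEA_UNITS = {
    "non-sphericity of the earth": ["non-spheric", "oblate"],
    "tidal forces": ["tidal", "tide"],
    "solar radiation": ["solar radiation", "radiation pressure"],
    "relativistic effects": ["relativ"],
}

def recall_score(response: str) -> float:
    """Fraction of idea units present in a free-text response."""
    text = response.lower()
    hits = sum(
        any(keyword in text for keyword in keywords)
        for keywords in IDEA_UNITS.values()
    )
    return hits / len(IDEA_UNITS)

print(recall_score("Tidal forces and solar radiation pressure perturb the orbit."))
# 0.5  (2 of 4 idea units)
```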
Transfer Test: Transfer test questions were developed on the basis of Mayer & Chandler
(2001) and McFeeters (2003). The test sought to measure “meaningful understanding in which
participants are required to use the presented information in ways beyond what was presented”
(Mayer & Chandler, 2001, p. 393). The transfer test contained the following three questions.
The teaching assistant graded each question by using a separate rubric. In order for a participant’s
response to be considered accurate, each rubric included specific ideas from each question that
should have been included in the participant’s response. The rubric contained four acceptable
ideas per question. Each acceptable idea was given a point value. The most specific acceptable
idea was given the highest points (3 points). Less specific acceptable ideas were given a lower
score (2 points). Vague answers were given the lowest score (1 point). If an answer was consid-
ered unacceptable it was given a score of zero. Students received credit for an answer if they ex-
pressed any of the four categories of ideas provided in the rubrics, regardless of writing style or use
of terminology. Each participant’s transfer performance is expressed as the number of acceptable
answers generated divided by a total of 9.
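The rubric logic can be summarized in a short Python sketch, assuming the 3/2/1/0 point scheme and the normalization by 9 stated above; the level labels are placeholders rather than the actual rubric entries.

```python
# A sketch of the transfer-test rubric described above. Point values
# (3/2/1/0) follow the text; the normalization by 9 reflects the stated
# "divided by a total of 9". Level labels are placeholders, not the rubric.

RUBRIC_POINTS = {
    "most_specific": 3,   # most specific acceptable idea
    "less_specific": 2,   # less specific acceptable idea
    "vague": 1,           # vague but acceptable answer
    "unacceptable": 0,    # answer not in the rubric
}

def transfer_score(idea_levels_per_question: list[str]) -> float:
    """Total points across the three questions, normalized by 9."""
    return sum(RUBRIC_POINTS[level] for level in idea_levels_per_question) / 9

# One participant: a specific, a vague, and a less-specific answer.
print(transfer_score(["most_specific", "vague", "less_specific"]))  # ~0.67
```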
Web-based Learning Program Questionnaire: To investigate how users evaluated the
quality of resources implemented in the Web-based GPS supplemental learning program, this
study modified and used Felix’s (1998) questionnaire for the evaluation of Web-based learning
programs. The questionnaire included 8 dimensions: objectives/directions, content/structure, interac-
tivity, navigation, text, sound, graphics, and interface.
Results
Descriptive Analysis: The collected data included pre- and post-test scores on a one-question
essay test, scores on a three-question essay transfer test, and responses to a 47-item learner
preference questionnaire. Mean pretest scores were 0.70 (SD = 0.37) for the Web-supplemental
group and 0.65 (SD = 0.29) for the traditional group. The mean post-test score was 0.95 (SD =
0.15) for the Web-supplemental group and 0.95 (SD = 0.16) for the traditional group. Mean
transfer scores were 0.57 (SD = 0.26) for the Web-supplemental group and 0.63 (SD = 0.25) for
the traditional group. Descriptive statistics alone do not indicate whether the differences in mean
pretest, post-test, and transfer scores between the two groups are statistically significant.
Therefore, a series of t-tests was conducted to investigate whether there was any significant
difference in participants’ initial knowledge and learning performance.
Validity Test: Although a small sample size was used in the study, t-tests were performed because
the data obtained met several assumptions underlying the t-test. For example, we could assume
that the variances were approximately equal, given Levene’s test of homogeneity of variance at
α = 0.05 (p > 0.05). The Mann-Whitney test, a nonparametric test for comparing two groups, was
also conducted and showed the same results as the t-tests; therefore, results from the t-tests are
reported in the study. The t-test assesses whether the means of two groups are statistically
different from each other. To test whether the difference between the means is significant, the
p-value is compared with a significance level; if it is smaller, the result is significant. That is, if
the null hypothesis (i.e., the hypothesis that there is no difference in the means of the two groups)
were rejected at α = 0.05, this would be reported as p < 0.05.
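The following Python sketch shows how analyses of this kind can be run with SciPy: Levene’s test for homogeneity of variance, an independent-samples t-test, and the Mann-Whitney U test as a nonparametric check. The score lists are placeholders, not the study’s data.

```python
# A sketch of the group-comparison analyses described above, using SciPy.
# The score lists are placeholders, not data from the study.
from scipy import stats

web = [0.95, 0.90, 1.00, 0.85, 1.00, 0.95, 0.90, 1.00, 0.95, 1.00]
trad = [0.95, 0.90, 0.95, 1.00, 0.90, 1.00, 0.95, 0.95, 0.90, 1.00]

# Homogeneity of variance (p > 0.05 -> equal variances assumed)
lev_stat, lev_p = stats.levene(web, trad)

# Independent-samples t-test (df = n1 + n2 - 2 = 18 here)
t_stat, t_p = stats.ttest_ind(web, trad, equal_var=True)

# Nonparametric alternative, useful as a check with small samples
u_stat, u_p = stats.mannwhitneyu(web, trad, alternative="two-sided")

alpha = 0.05
print(f"Levene p = {lev_p:.3f}; t(18) = {t_stat:.2f}, p = {t_p:.3f}; "
      f"Mann-Whitney p = {u_p:.3f}; significant at alpha: {t_p < alpha}")
```

For the within-group pretest-to-posttest comparisons reported below, a paired test (e.g., scipy.stats.ttest_rel) would be the corresponding choice.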
The result showed no significant differences in participants’ initial knowledge between the Web-
supplemental group and traditional group: t (18) = -0.34, p = .741. On the other hand, significant
differences were found between pretest and posttest scores for both groups (t (9) = 2.37, p < 0.05
for the Web-supplemental group; t (9) = 2.45, p < 0.05 for the traditional group). This result indi-
cates a significant increase in scores after the lecture. However, gain scores (the difference
between pretest and posttest scores) did not differ significantly between the two groups after the
exposure to the lecture (t(18) = 0.31, p = 0.761).
Transfer of knowledge between the two groups: To test the hypothesis that there is no signifi-
cant difference in students’ learning performance between the Web-based GPS supplemental
learning and traditional supplemental learning program, a t-test on transfer of knowledge was
conducted. It was concluded that there were no significant differences in students’ learning per-
formance between the Web-based GPS supplemental learning group and traditional supplemental
learning group (t (18) = 0.59, p = .563).
Learner Preference: Students were asked to indicate their preferred ways of using GPS learning
materials on the Web. Almost all participants considered the best way to use Web materials to be
as an addition to face-to-face teaching, used in their own time (6 out of 8 responses). Participants
in the Web-supplemental group were asked to evaluate various aspects of the programs they used
for GPS learning. Responses were favorable, with 70% to 90% agreeing that the objectives were
clear, the content was logical, the program was interactive, and the navigation was easy. Between
60% and 100% rated the quality of the text, graphics, and interface as 6 or above on a scale of 1
to 9 (the first dimension in the text category is reverse-scored, since lower ratings represent
better readability). On the other hand, more than 60% of the participants did not consider voice
recordings of learning material useful to their GPS learning.
It is also true that there are many researchers who discredit studies referred to as media compari-
son studies (e.g., Lockee, Burton, & Cross, 1999; Russell, 1999). They argue that measuring the
impact of media on learning is futile in comparison studies. For example, Lockee et al. maintain
that media comparison studies are badly flawed because of a lack of randomization in the sample
selection, an assumption that grades actually measure student achievement, and no assumption of
homogeneity of groups. However, the present study is not a media comparison study and does not
exhibit any of these threats to internal validity. This study did not compare face-to-face/campus-
based learning and distance-learning programs as mentioned in Lockee et al.’s study, but com-
pared Web-based supplementation to students’ traditional supplementation activities. Further-
more, this study used only on-campus students and compared students who were randomly as-
signed to one of two conditions. A validity test in the study showed that the groups were homo-
geneous (e.g., no significant differences in participants’ initial knowledge between the Web-
supplemental group and traditional group).
improvement in the overall quality of the instructional system evaluated by instructional design
experts. This clearly suggests that the theory-based design of the instructional system may play an
important role in developing effective learning content.
It can be further noted that there are implications for usability studies for educational applica-
tions. Since concerns for usability have not been truly addressed when designing and developing
educational applications, more usability studies should be conducted (Levi & Conrad, 2000; Pav-
lik, 2000). Learners in the WD2L environment must be able to easily focus on learning materials
without having to make an effort to figure out how to access them (Lohr, 2000). The findings of
the study confirmed that the user interface system that supports students’ learning activities can
fulfill that requirement.
There were several potential limitations to the study, which may hinder generalization of the re-
sults. For example,
• The WD2L environment prototype in the study was developed by focusing on only one
GPS chapter (i.e., Chapter 5) and a small sample of the student user group. Replication of
the findings using a fully developed WD2L environment for other user groups (e.g., the
instructor and system administrator) and a larger number of participants is needed before
strong conclusions are warranted. The WD2L environment was also custom built at the
time this study was conducted, but results of the study can also be used to improve any
course management and delivery systems such as WebCT and Blackboard, which are not
designed to fully support students’ various learning activities.
• The present study identified students’ traditional activities for supplemental learning
(reading a book, questioning the instructor, and discussing with classmates) in an infor-
mal way (e.g., through conversations with a teaching assistant and students). Had the
study identified more information about students’ traditional activities for supplemental
learning, subjective ratings of the traditional to the Web-based supplemental learning
could have been compared.
• The evaluation activity takes place either formatively or summatively (Rubin, 1994). This
study focused only on the formative evaluation of the Web-based supplemental learning
environment, because the evaluation in Web-based learning environments is a continuing
process throughout the development lifecycle (Belanger & Jordan, 2000). A summative
evaluation is also needed to fully investigate the effectiveness of the program with a lar-
ger sample of participants.
• It is often more valid to evaluate learning and instructional design using action-research
methods even during the formative evaluation stage. The external validity of this study
could have been enhanced by implementing portions of the prototype in the actual learn-
ing environment and, in parallel, conducting formative evaluations. Given the time-cycle
of the actual course used in this study, it was difficult to synchronize the research and
classroom schedules to apply an action-research approach.
• Given that usability engineering and instructional design are both emerging specialty ar-
eas, the integrated framework is constrained by the knowledge domain. Thus, it is ex-
pected that the framework that has emerged from this study will require updating in the
future on the basis of new theories and empirical evidence relevant to usability and in-
structional design.
References
Andrew, M. (2003). Should we be using Web-based learning to supplement face-to-face teaching of under-
graduates? In Proceedings of the 6th International Conference on Computer-Based Learning in Sci-
ence (pp. 478-488). Cyprus.
Barnard, P. (1991). Bridging between basic theories and the artifacts of human-computer interaction. In J.
M. Carroll (Ed.), Designing interaction: Psychology at the human-computer interface (pp. 103-127).
Cambridge University Press.
Belanger, F., & Jordan, D. H. (2000). Evaluation and implementation of distance learning: Technologies,
tools and techniques. Hershey, PA: Idea Group.
Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chi-
cago: Rand McNally.
Chadwick, S. A. (1999). Teaching virtually via the Web: Comparing student performance and attitudes
about communication in lecture, virtual Web-based, and Web-supplemented courses. The Electronic
Journal of Communication, 9, 1-13.
Cockton, G. (1988). Generative transition networks: A new communications control abstraction. In D. M.
Jones, & R. Winder (Eds.), People and computers IV (pp. 509-525), Cambridge University Press.
Cognition and Technology Group at Vanderbilt. (1992). The Jasper series as an example of anchored in-
struction: Theory, program description, and assessment data. Educational Psychologist, 27, 291-315.
Davidson, K. (1998). Education in the internet--linking theory to reality. Retrieved October 3, 2002, from
http://www.oise.on.ca/~kdavidson/cons.html
Dayton, T. (1991). Cultivated eclecticism as the normative approach to design. In J. Karat (Ed.), Taking
software design seriously (pp. 21-44). Academic Press.
Dick, W., & Carey, L. (1996). The systematic design of instruction. New York, NY: Harper Collins.
Driscoll, M. P. (2000). Psychology of learning for instruction (2nd ed.). Needham Heights, Massachusetts:
Allyn & Bacon.
Felix, U. (1998). Evaluation of Web-based language learning program. Retrieved September 5, 2003, from
http://www.arts.monash.edu.au/lc/sill/evalqst.htm
Gagne, R. M., Briggs, L. J., & Wager, W. W. (1992). Principles of instructional design (4th ed.). New
York: Harcourt Brace Jovanovich.
Gould, J. D., & Lewis, C. (1985). Designing for usability: Key principles and what designers think. Com-
munications of the ACM, 28, 300-311.
Hannafin, M., Land, S., & Oliver, K. (1999). Open learning environments: Foundations, methods, and
models. In C. Reigeluth (Ed.), Instructional Design Theories and Models (pp. 115-140). Mahwah, NJ:
Lawrence Erlbaum Associates.
Henke, H. A. (1997). Evaluating Web-based instruction design. Retrieved September 5, 2001, from
http://scis.nova.edu/~henkeh/story1.htm
Jonassen, D. H. (1991). Objectivist vs. constructivist: Do we need a new philosophical paradigm? Educa-
tional Technology Research and Development, 39, 5-14.
Jonassen, D. H. (1999). Designing constructivist learning environments. In C. M. Reigeluth (Ed.), Instruc-
tional-design theories and models: A new paradigm of instructional theory (Vol. II, pp. 215-239).
Mahwah, NJ: Lawrence Erlbaum Associates.
Jonassen, D. H., McAleese, T. M. R., & Duffy, T. M. (1993). A Manifesto for a constructivist approach to
technology in higher education. In T. M. Duffy, J. Lowyck, & D. H. Jonassen (Eds.), The design of
constructivistic learning environments: Implications for instructional design and the use of technology.
Heidelburg, FRG: Springer-Verlag.
Khan, B. H. (Ed.). (1997). Web-based instruction. Englewood Cliffs, NJ: Educational Technology Publica-
tions.
Kirkpatrick, D. L. (1994). Education training programs: The four levels. San Francisco: Berrett-Kohler.
Goto, K., & Cotler, E. (2002). Web redesign: Workflow that works. New Riders.
Levi, M. D., & Conrad, F. G. (2000). Usability testing of World Wide Web. Retrieved March 5, 2003, from
http://stats.bls.gov/ore/htm_papers/st960150.htm
Lockee, B. B., Burton, J. K., & Cross, L. H. (1999). No comparison: Distance education finds a new use for
“no significant difference.” Educational Technology Research & Development, 47, 33-42.
Lohr, L. L. (2000). Designing the instructional interface. Computers in Human Behavior, 16, 161-182.
Marshall, V., & Schriver, R. (1994). Using evaluation to improve performance. Technical and Skills Train-
ing, January, 6-9.
Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
Mayer, R. E., & Chandler, P. (2001). When learning is just a click away: Does simple user interaction fos-
ter deeper understanding of multimedia messages? Journal of Educational Psychology, 93, 390-397.
McFeeters, F. E. (2003). The effects of individualism vs. collectivism on learner’s recall, transfer and atti-
tudes toward collaboration and individualized learning. Unpublished dissertation, Virginia Polytechnic
Institute and State University, Blacksburg, VA.
Moallem, M. (2001). Applying constructivist and objectivist learning theories in the design of a Web-based
course: Implications for practice. Educational Technology & Society, 4, 113-125.
Nielsen, J. (1993). Usability engineering. New York, NY: Academic Press.
Norman, D. A. (1987). Design principles of human-computer interfaces. In R. M. Baecker & W. A. S. Bux-
ton (Eds.), Readings in human-computer interaction: A multidisciplinary approach (pp. 492-501).
Morgan Kaufmann.
Pavlik, P. (2000). Collaboration, sharing and society – Teaching, learning and technical considerations
from an analysis of WebCT, BSCW, and Blackboard. Retrieved September 25, 2002, from,
http://members.fortunecity.com/pgp5/Collaboration.Learning.and.Society.htm
Plass, J. L. (1998). Design and evaluation of the user interface of foreign language multimedia software: A
cognitive approach. Language Learning & Technology, 2, 35-45.
Reigeluth, C. M. (1996). A new paradigm of ISD? Educational Technology, 36, 13-20.
Rubin, J. (1994). Handbook of usability testing: How to plan, design, and conduct effective tests. New
York, NY: John Wiley & Sons.
Russell, T. L. (1999). The no significant difference phenomenon. Chapel Hill, NC: Office of Instructional
Telecommunications, North Carolina State University.
Saettler, P. (1990). The evolution of American educational technology. Englewood, CO: Libraries Unlim-
ited.
Savery, J. R., & Duffy, T. M. (1995). Problem based learning: An instructional model and its constructivist
framework. Educational Technology, 35, 31-38.
Schank, R. C., & Cleary, C. (1995). Engines for education. Hillsdale, NJ: Lawrence Erlbaum As-
sociates.
Schwier, R. A. (1995). Issues in emerging interactive technologies. In G. J. Anglin (Ed.), Instructional
technology: Past, present, and future (2nd Ed., pp. 119-127), Englewood, CO: Libraries Unlimited.
Shneiderman, B. (1993). Designing the user interface: Strategies for effective human-computer interaction
(2nd ed.). Reading, MA: Addison-Wesley.
Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1991). Cognitive flexibility, constructiv-
ism, and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured
domains. Educational Technology, 31, 24-33.
Wallace, M. D., & Andersen, T. J. (1993). Approaches to interface design. Interacting with computers, 5,
259-278.
Biographies
Chang S. Nam is an assistant professor in the Department of Industrial
Engineering at the University of Arkansas. He received his Ph.D. in
Industrial and Systems Engineering from Virginia Polytechnic Institute
and State University in the United States. His research interests include
brain-computer interface, cognitive and cultural ergonomics, adaptive
and intelligent human-computer interaction, and haptic virtual envi-
ronments.