Effectiveness of Virtual Laboratories in Terms of Achievement, Attitudes, and Learning Environment Among High School Science Students
Doctor of Philosophy
of
Curtin University
January 2013
Declaration
This thesis contains no material which has been accepted for the award of any other
degree or diploma in any university. To the best of my knowledge and belief, this
thesis contains no material previously published by any person except where due
acknowledgement has been made.
Abstract
Data analysis supported the LAG’s factorial validity, internal consistency reliability,
discriminant validity, and ability to differentiate between the perceptions of students
in different classrooms. All six learning environment scales correlated significantly
and positively with students’ attitudes and some of those scales (Integration,
Material Environment, Teacher Support, Differentiation) also correlated
significantly with students’ achievement. Most learning environment scales were
also found to be independent predictors of attitudes.
No significant differences were found between instructional groups for any criterion
of effectiveness, indicating that the promise of such technological interventions in
the classroom might not be fulfilled. However, the use of virtual laboratories did
not negatively impact on students. Significant interactions were found between
instructional method and sex for three dependent variables (Material Environment,
Teacher Support, and the Attitude to Inquiry), with virtual laboratories being more
effective for males than females. The results of this study have the potential to
inform educational practitioners, add to the body of knowledge in the field of
learning environments, and stimulate further investigations into the effectiveness of
virtual laboratories as an instructional method.
Acknowledgements
They say that the process to become a Doctor of Philosophy (PhD) is 10%
intelligence and 90% persistence. While persistence might be internal, my
persistence to complete this dissertation was catalyzed by the support, advice, and
encouragement from others. First and foremost, I’d like to thank all the participants
in my study: the students who responded to the questionnaire, some of whom
allowed me to interview them over their summer break, the students’ parents who
offered their consent and many of whom expressed their support for my endeavor,
and, of course, the teachers who volunteered their precious time implementing this
study in their classrooms. Additionally, these teachers cooperated gracefully with
my onerous instructions and numerous requests for feedback. Their insight and
encouragement throughout this process have really made a difference to me – thank
you! The administration in each of these schools also deserves my appreciation for
allowing this study to take place under their auspices. In particular, the
administration of the school in which I was based was helpful and encouraging.
I would also like to take this opportunity to thank all the people who continually
asked about my progress and took an interest in my completion of this process: Dr.
Frommer, Barbie, Nicole, Allison, Aliza and many other friends and colleagues.
Without your pressure, I would not have come this far!
Naturally, but most significantly (may I use that word now?), words cannot express
my appreciation for my supervisor, Professor Barry Fraser. Your support,
dedication, tireless efforts, and meticulous editing have been outstanding! My
appreciation goes to your whole team at the Science and Mathematics Education
Centre, especially Dr. Rekha Koul, for helping me process and understand the data
numerous times!
Last, but certainly not least, my appreciation goes to my own kin. Thank you to my
family in Australia who accommodated me during my visit to Curtin University.
All along the dissertation writing process, I kept envisioning what it would look like
at the end and I would often find myself composing the ‘Acknowledgments’ in my
head. However, now that I have to put words on paper, I am at a loss. All I can
muster is: thank you from the bottom of my heart to the love of my life, Asher, who
pushed me to always pursue further and tolerated my unavailability, and thank you
to the miniature loves of my life, Mordechai, Aryeh, and Mira, who have egged me on
and have had the patience to wait until ‘Mommy has finished working’.
Abbreviations
CES Classroom Environment Scale
CCEI Computerized Classroom Ergonomic Inventory
CLES Constructivist Learning Environment Survey
COLES Constructivist-Oriented Learning Environment Survey
CUCEI College and University Classroom Environment Inventory
DOLES Distance and Open Learning Environment Scale
DELES Distance Education Learning Environments Survey
ICEQ Individualized Classroom Environment Questionnaire
LAG Laboratory Assessment in Genetics
LEI Learning Environment Inventory
MCI My Class Inventory
OLLES Online Learning Environment Survey
QTI Questionnaire on Teacher Interaction
SAI Scientific Attitude Inventory
SLEI Science Laboratory Environment Inventory
TOSRA Test of Science-Related Attitudes
TROFLEI Technology-Rich Outcomes-Focused Learning Environment
Inventory
VL Virtual Laboratory
WEBLEI Web-Based Learning Environment Instrument
WIHIC What Is Happening In this Class? Questionnaire
Table of Contents
DECLARATION .................................................................................................................. II
ABSTRACT ........................................................................................................................ III
ACKNOWLEDGEMENTS ................................................................................................. V
ABBREVIATIONS ........................................................................................................... VII
LIST OF TABLES ............................................................................................................... X
LIST OF FIGURES ........................................................................................................... XI
CHAPTER 1 .......................................................................................................................... 1
INTRODUCTION ................................................................................................................. 1
1.1 INTRODUCTION ...................................................................................................... 1
1.2 RATIONALE ............................................................................................................ 2
1.3 RESEARCH QUESTIONS, DESIGN AND METHOD ..................................................... 5
1.4 CONTEXT ................................................................................................................ 7
1.4.1 The State of Science Education Today on a National Scale .............................. 8
1.4.2 Reforming Science Curricula ............................................................................ 9
1.4.3 Science Laboratories ....................................................................................... 11
1.5 LIMITATIONS ........................................................................................................ 15
1.6 OVERVIEW OF THESIS .......................................................................................... 16
CHAPTER 2 ........................................................................................................................ 18
LITERATURE REVIEW ................................................................................................... 18
2.1 INTRODUCTION .................................................................................................... 18
2.2 THEORETICAL FRAMEWORK: LEARNING ENVIRONMENTS RESEARCH ................ 19
2.2.1 History and Development of Learning Environments Research ..................... 20
2.2.2 Instruments for Assessing the Learning Environment ..................................... 22
2.2.3 Past Applications of Learning Environment Scales ........................................ 43
2.3 STUDENT ATTITUDES ........................................................................................... 60
2.3.1 Definition of Attitude ....................................................................................... 61
2.3.2 Assessment of Student Attitudes ....................................................................... 63
2.3.3 Impact of Educational Interventions on Students’ Attitudes ........................... 65
2.4 GENDER DIFFERENCES IN SCIENCE EDUCATION ................................................. 69
2.5 VIRTUAL LABORATORIES IN SCIENCE EDUCATION ............................................. 74
2.5.1 The Proponents: Rationale for Integrating Educational Technology ............. 74
2.5.2 Virtual Laboratories ........................................................................................ 78
2.5.3 Virtual Learning Environments ....................................................................... 84
2.5.4 Overview of Studies Employing Virtual Laboratories ..................................... 88
2.5.5 The Critics: The No Significant Difference Phenomenon Regarding
Educational Technology ............................................................................................... 93
2.6 SUMMARY ............................................................................................................ 99
CHAPTER 3 ...................................................................................................................... 101
METHODOLOGY ............................................................................................................ 101
3.1 INTRODUCTION .................................................................................................. 101
3.2 RESEARCH QUESTIONS ...................................................................................... 101
3.3 SAMPLE SELECTION AND CHARACTERIZATION ................................................. 102
3.4 INSTRUMENTATION AND RESOURCES USED TO IMPLEMENT THE STUDY.......... 105
3.4.1 Instrumentation: Development of LAG Questionnaire .................................. 105
3.4.2 Other Resources ............................................................................................ 114
3.5 PROCEDURES ...................................................................................................... 115
3.5.1 Treatment Conditions .................................................................................... 115
3.5.2 Design and Delivery of Virtual Laboratories ................................................ 118
3.5.3 Timetable ....................................................................................................... 120
3.5.4 Administration of LAG Questionnaire ........................................................... 123
3.5.5 Ethical Issues ................................................................................................. 124
3.6 DATA COLLECTION, ENTRY, AND ANALYSIS..................................................... 124
3.6.1 Collection of Data ......................................................................................... 125
3.6.2 Entry of Data ................................................................................................. 127
3.6.3 Statistical Methods for Analysis of Data ....................................................... 129
3.7 LIMITATIONS ...................................................................................................... 133
3.7.1 Loss of Sample ............................................................................................... 133
3.7.2 Treatment Conditions .................................................................................... 134
3.7.3 Technical Issues............................................................................................. 135
3.7.4 Instrument Administration ............................................................................. 136
3.8 SUMMARY .......................................................................................................... 136
CHAPTER 4 ...................................................................................................................... 139
DATA ANALYSES AND RESULTS .............................................................................. 139
4.1 INTRODUCTION .................................................................................................. 139
4.2 VALIDITY AND RELIABILITY OF LEARNING ENVIRONMENT, ATTITUDE, AND
ACHIEVEMENT SCALES COMPOSING THE LAG ................................................................... 140
4.2.1 Factor Structure of Learning Environment and Attitude Scales ................... 141
4.2.2 Internal Consistency Reliability of Learning Environment and Attitude Scales ........ 143
4.2.3 Discriminant Validity of Learning Environment and Attitude Scales ........... 145
4.2.4 Ability of Learning Environment to Differentiate Between Classrooms ....... 146
4.2.5 Validation of Achievement Section of the LAG.................................................. 146
4.3 ASSOCIATIONS BETWEEN LEARNING ENVIRONMENT, ATTITUDES, AND
ACHIEVEMENT ..................................................................................................................... 148
4.3.1 Associations Between Learning Environment and Attitudes ......................... 149
4.3.2 Associations Between Learning Environment and Achievement ................... 150
4.4 EFFECTIVENESS OF VIRTUAL LABORATORIES AND THEIR DIFFERENTIAL
EFFECTIVENESS FOR DIFFERENT SEXES IN TERMS OF LEARNING ENVIRONMENTS,
ATTITUDES, AND ACHIEVEMENT......................................................................................... 152
4.4.1 Overview of Results for Effectiveness for Virtual Laboratories and Differential
Effectiveness of Virtual Laboratories for Males and Females ................................... 154
4.4.2 Effectiveness of Instruction Using Virtual Laboratories in Terms of Learning
Environment Perceptions, Attitudes, and Achievement .............................................. 156
4.4.3 Sex Differences in Learning Environment Perceptions, Attitudes, and
Achievement ................................................................................................................ 165
4.4.4 Differential Effectiveness of Virtual Laboratories for Males and Females... 169
4.5 SUMMARY .......................................................................................................... 176
CHAPTER 5 ...................................................................................................................... 179
DISCUSSION .................................................................................................................... 179
5.1 INTRODUCTION .................................................................................................. 179
5.2 OVERVIEW OF THESIS ........................................................................................ 179
5.2.1 Research Question 1 ...................................................................................... 181
5.2.2 Research Question 2 ...................................................................................... 183
5.2.3 Research Questions 3 and 4 .......................................................................... 184
5.2.4 Summary of Qualitative Data ........................................................................ 188
5.2.5 Teachers’ Perspectives Regarding the Learning Environment and Student
Outcomes .................................................................................................................... 191
5.3 SIGNIFICANCE AND IMPLICATIONS .................................................................... 192
5.3.1 Implications for Educational Research ......................................................... 193
5.3.2 Implications for Educational Practitioners ................................................... 195
5.4 LIMITATIONS AND SUGGESTIONS FOR FURTHER RESEARCH ............................. 198
5.5 CONCLUSION ...................................................................................................... 202
APPENDICES ................................................................................................................... 236
APPENDIX A: LABORATORY ASSESSMENT IN GENETICS (LAG) .................................... 237
APPENDIX B: SEMI-STRUCTURED INTERVIEW QUESTIONS FOR STUDENTS .................... 246
APPENDIX C: SEMI-STRUCTURED INTERVIEW QUESTIONS FOR TEACHERS.................... 248
APPENDIX D: LIST OF VIRTUAL LABORATORIES AVAILABLE FOR TEACHERS ............... 250
APPENDIX E: INSTRUCTIONS TO TEACHERS FOR PARTICIPATING IN MY STUDY ........... 252
APPENDIX F: EXAMPLE OF A VIRTUAL LABORATORY WORKSHEET .............................. 253
List of Tables
TABLE 2.1 OVERVIEW OF SCALES USED IN SOME LEARNING ENVIRONMENT INSTRUMENTS
(CUCEI, MCI, QTI, SLEI, CLES, WIHIC, AND COLES) ............................................ 24
TABLE 2.2 SCALE DESCRIPTION, MOOS’ DIMENSION, AND SAMPLE ITEM FOR EACH
TECHNOLOGY-RICH OUTCOMES-FOCUSED LEARNING ENVIRONMENT INVENTORY
(TROFLEI) SCALE........................................................................................................ 39
TABLE 2.3 SOME STUDIES OF ASSOCIATIONS BETWEEN THE LEARNING ENVIRONMENT
AND STUDENT OUTCOMES ............................................................................................ 46
TABLE 2.4 FRASER'S (1981) TOSRA SCALES AND KLOPFER'S (1971) CLASSIFICATION .... 64
TABLE 3.1 SCALE DESCRIPTION AND SAMPLE ITEM FOR EACH LEARNING ENVIRONMENT
SCALE IN THE LAG ..................................................................................................... 110
TABLE 3.2 SCALE DESCRIPTION, JUSTIFICATION, AND SAMPLE ITEM FOR EACH TOSRA
SCALE USED IN THE LAG ............................................................................................ 112
TABLE 3.3 TITLE, TYPE, DESCRIPTION AND SOURCE FOR EACH VIRTUAL LABORATORY
.................................................................................................................................... 119
TABLE 3.4 IMPLEMENTATION OF CONDITIONS OF THE STUDY BY EACH TEACHER
INCLUDING CLASS COMPOSITION, DURATION OF STUDY, THE ADMINISTRATION OF
THE VIRTUAL LABORATORIES (VL), AND INFORMATION ABOUT COVARIATES......... 121
TABLE 4.1 FACTOR ANALYSIS RESULTS FOR ATTITUDE AND LEARNING ENVIRONMENT
SCALES ........................................................................................................................ 142
TABLE 4.2 SCALE MEAN, STANDARD DEVIATION, INTERNAL CONSISTENCY (CRONBACH
ALPHA RELIABILITY), DISCRIMINANT VALIDITY (MEAN CORRELATION WITH OTHER
SCALES), AND ABILITY TO DIFFERENTIATE BETWEEN CLASSROOMS (ANOVA
RESULTS) FOR LEARNING ENVIRONMENT AND ATTITUDE SCALES ........................... 144
TABLE 4.3 ASSOCIATIONS BETWEEN LEARNING ENVIRONMENT QUESTIONNAIRE SCALES
AND ATTITUDES AND ACHIEVEMENT IN TERMS OF SIMPLE CORRELATIONS (R),
MULTIPLE CORRELATIONS (R) AND STANDARDIZED REGRESSION COEFFICIENTS (β) ....
.................................................................................................................................... 149
TABLE 4.4 TWO-WAY ANALYSIS OF VARIANCE (ANOVA) FOR INSTRUCTIONAL METHOD
AND SEX FOR EACH SCALE OF THE LAG .................................................................... 155
TABLE 4.5 ITEM MEAN, ITEM STANDARD DEVIATION AND DIFFERENCE BETWEEN
INSTRUCTIONAL METHODS (ANOVA RESULTS AND EFFECT SIZE) FOR EACH
LEARNING ENVIRONMENT AND STUDENT OUTCOME MEASURED BY THE LAG........ 157
TABLE 4.6 ITEM MEAN, ITEM STANDARD DEVIATION AND SEX DIFFERENCE (ANOVA
RESULTS AND EFFECT SIZE) FOR EACH LEARNING ENVIRONMENT SCALE AND
STUDENT OUTCOME MEASURED BY THE LAG........................................................... 166
TABLE 4.7 DIFFERENTIAL EFFECTIVENESS (INSTRUCTIONAL METHOD × SEX INTERACTION)
OF VIRTUAL LABORATORIES FOR MALES AND FEMALES FOR EACH LEARNING
ENVIRONMENT SCALE AND STUDENT OUTCOME MEASURED BY THE LAG .............. 170
TABLE 5.1 SUMMARY OF STUDENT INTERVIEW RESULTS FOR STUDENTS EXPERIENCING
EACH INSTRUCTIONAL METHOD FOR EACH LEARNING ENVIRONMENT AND OUTCOME
VARIABLE (BASED ON SECTION 4.4) .......................................................................... 189
TABLE 5.2 SUMMARY OF STUDENT INTERVIEW RESULTS FOR SEX DIFFERENCES FOR EACH
LEARNING ENVIRONMENT AND OUTCOME VARIABLE ............................................... 191
TABLE 5.3 SUMMARY OF TEACHERS’ OBSERVATIONS FOR EACH LEARNING ENVIRONMENT
AND OUTCOME VARIABLE, AND GENDER .................................................................. 192
List of Figures
FIGURE 3.1 NUMBERS OF STUDENTS IN EXPERIMENTAL AND CONTROL CLASSES FOR EACH
TEACHER ..................................................................................................................... 104
FIGURE 3.2 NUMBERS OF FEMALE AND MALE STUDENTS FOR EACH TEACHER................ 104
FIGURE 3.3 SCREENSHOT FROM A SAMPLE VIRTUAL LABORATORY ................................ 116
FIGURE 4.1 FREQUENCY DISTRIBUTION FOR ACHIEVEMENT ............................................ 147
FIGURE 4.2 PROFILE OF MEANS FOR INSTRUCTIONAL GROUPS AS MEASURED BY LAG .. 158
FIGURE 4.3 PROFILE OF MEANS FOR DIFFERENT SEXES AS MEASURED BY LAG ............. 167
FIGURE 4.4 DIFFERENTIAL EFFECTIVENESS OF VIRTUAL LABORATORIES FOR FEMALES AND
MALES FOR THE LEARNING ENVIRONMENT SCALE OF MATERIAL ENVIRONMENT ... 172
FIGURE 4.5 DIFFERENTIAL EFFECTIVENESS OF VIRTUAL LABORATORIES FOR FEMALES
AND MALES FOR THE LEARNING ENVIRONMENT SCALE OF TEACHER SUPPORT ....... 173
FIGURE 4.6 DIFFERENTIAL EFFECTIVENESS OF VIRTUAL LABORATORIES FOR FEMALES
AND MALES FOR THE ATTITUDE SCALE OF INQUIRY ................................................. 174
Chapter 1
Introduction
“If we knew what it was we were doing, it would not be called research, would it?”
– Albert Einstein
1.1 Introduction
This chapter introduces the components of this study. The rationale for this study is
explained in Section 1.2. The research questions, design, and method are described
in Section 1.3. The context, which describes the setting in which the study was
implemented, as well as the curriculum on which the study was based, is explored in
Section 1.4. Limitations and boundaries regarding this study are delineated in
Section 1.5. This chapter concludes with an overview of the remaining chapters that
review relevant literature, discuss an appropriate framework, explain the methods of
the study, describe methods for analyzing the data, report the results, and provide
implications for practical applications and future research.
1.2 Rationale
Achievement scores in the sciences for American students have raised alarms about
the abilities, skills, and knowledge base of the nation’s future work force (see
Section 1.4.1). As decried by Thomas Friedman in The World is Flat (2006), the US
has entered an era of ‘outsourcing’ low-skilled jobs to developing countries because
the cost is less. Outsourcing also occurs for high-skilled jobs involving Science,
Technology, Engineering, and/or Mathematics (STEM) for which the American
workforce is ill-equipped; this is referred to as the ‘brain drain’ or “the chronic
decline in homegrown STEM talent” (Dugger, 2010). A 2005 report of the US
Bureau of Labor Statistics predicted that, by 2012, the number of jobs in STEM
occupations would grow by 47%, which is three times the rate for all other
occupations (Russell & Siley, 2005). Fortunately, Friedman argues, educational
systems are dynamic and can be enhanced to better train American youth and
prevent such outsourcing (Friedman, 2006).
being virtual laboratories (Borgman et al., 2008). The integration of technology into
science laboratories has begun, but several researchers note the lack of empirical
evidence concerning its effectiveness in general (Russell, 1999), and the
effectiveness of using virtual laboratories in particular (Harms, 2000; Hofstein &
Lunetta, 2004; Javidi & Sheybani, 2006). Ma and Nickerson (2006) acknowledge
the necessity to further evaluate the educational effectiveness of laboratory
simulations by conducting controlled studies. While there are a number of studies
that have assessed such educational innovations from the field of Information
Technology, there is hardly any evaluative research on virtual laboratories from an
educational perspective. The purpose of this study, then, was to evaluate the
effectiveness of the use of such virtual laboratories in science classes.
The researcher chose the field of learning environments as the foundation for the
current study. Classroom learning environment research focuses on interactions that
take place within a classroom, between students, and between teachers and students
(Fraser, 2012). Learning environment instruments can be used to assess student
perceptions of what is taking place in the classroom and these assessments can guide
future directions to improve the learning environment. Because associations have
been established between the learning environment and student attitudes towards
science, as well as with achievement in science (Fraser, 2012), enhancing the
learning environment through an educational innovation (such as virtual
laboratories) might also improve students’ attitudes and achievement levels.
Attitudes towards science amongst middle to early high school students have been
found to decline relative to their earlier schooling experiences (Oliver & Venville,
2011). Students who lose interest in the sciences are less likely to further explore
the field in higher education and tend not to pursue such lines of work (Tytler &
Osborne, 2012). If educational researchers can uncover evidence for the
effectiveness of instructional media that engage students in science at this critical
age of development, it might inform current and future practices for improving
attitudes towards science and science-related careers. Therefore, in addition to
assessing students’ perceptions of the learning environment, this study also
examined students’ attitudes towards science, especially because robust and
economical instruments are available to assess such attitudes.
whole picture of the effect of virtual laboratories, especially because students cannot
indicate their opinions outside of the specific questions about which they are asked
on an instrument. For this reason, the collection of qualitative data through semi-
structured interviews was an important element in this study. A triangulation of
quantitative and qualitative methods of data collection in learning environment
research has been recommended by Tobin and Fraser (1998). Elaboration of the
research design is described in Section 1.3 and further details are furnished in
Chapter 3.
This is the first study of its kind to evaluate the effectiveness of virtual laboratories
in science education in terms of students’ perceptions of the learning environment
and the student outcomes of attitudes and achievement. Therefore, findings have the
potential to usefully inform future researchers in science education, practitioners
such as administrators and teachers, and policy-makers, and eventually impact on
students.
1.3 Research Questions, Design and Method
Once the purpose of this study was conceived, it was further divided into four
separate aims for exploring various aspects of the educational intervention. Each
aspect of my investigation was guided by a research question, as appears below.
To check whether the instruments used in this study were valid and reliable, the first
research question was constructed:
Research Question 1:
Are scales from the Test Of Science Related Attitudes (TOSRA), Science
Laboratory Environment Inventory (SLEI), and Technology-Rich Outcomes-
Focused Learning Environment Inventory (TROFLEI), as well as
achievement items, valid and reliable when used with a sample of high
school students taking biology in the US?
To uncover associations between the three criteria used to assess the effectiveness of
virtual laboratories, the second research question was written:
Research Question 2:
Are there associations between students' perceptions of the learning
environment and the student outcomes of attitudes and achievement?
Research Question 3:
Is instruction using virtual laboratories effective in terms of students'
perceptions of the learning environment, attitudes, and achievement?
Research Question 4:
Is the use of virtual laboratories differentially effective for males and females
in terms of students' perceptions of the learning environment, attitudes, and
achievement?
Chapter 3 describes the research design and method in detail; the following is a brief
overview of Chapter 3. This study used a quasi-experimental design to compare
students in 11 high school classes who engaged in virtual laboratories with students
in 10 high school classes who did not (they continued learning and experimenting in
their normal fashion). Eight different virtual laboratories related to the topic of
genetics were chosen by the researcher for their design and use of inquiry. Teachers
used at least four of these virtual laboratories. The treatment period lasted from two
to twelve weeks.
1.4 Context
The field of science education provided the general context for this study. This
section surveys the landscape of science education today regarding recent trends and
future directions on a national scale, with respect to global circumstance (Section
1.4.1). Next, Section 1.4.2 delves into the particulars of the science curriculum, on
which the content of the virtual laboratories used in this study was based. The role
of the science laboratory is also explored because it provided the setting for the
intervention evaluated in this study (Section 1.4.3).
1.4.1 The State of Science Education Today on a National Scale
This study took place in the United States of America and involved public high
school students from four different states along the eastern coast. The sciences are
considered a ‘high need’ area in education because there is a shortage of qualified
teachers and because students have been losing interest in this area (Baird, 2012).
With the globalization of the economy as well as other aspects of society and
culture, STEM education must involve global collaboration as it prepares the future
workforce (Friedman, 2006). In the last decade or two, several international
assessments were developed to compare the achievement of students in different
countries. The most notable are the Third (or, renamed ‘Trends in’) International
Mathematics and Science Study (TIMSS) and the Programme for International
Student Assessment (PISA) for reading, mathematics, and science literacy. Data for
TIMSS have been collected from 4th and 8th graders every four years since 1995 and
data for PISA have been collected from 15 year-olds every three years since 2000
(Programme for International Student Assessment (PISA), 2009; Trends in
International Mathematics and Science Study (TIMSS), 2007).
large role in daily life” (Organization for Economic Co-operation and Development
(OECD), 2010, p. 24). Similar findings were reported from the TIMSS (Gonzales,
Williams, Jocelyn et al., 2008).
1.4.2 Reforming Science Curricula
While the examples offered in Section 1.4.1 refer to the how of engaging students in
science, what science content is being delivered also requires upgrading. As the US
became aware of how poorly its students were achieving in science relative to other
developed countries, it sought to establish clear learning statements as goals for its
students to attain. Historical examples of such science learning goals include
Benchmarks for Science Literacy (American Association for the Advancement of
Science (AAAS), 1989), the National Science Education Standards (National
Research Council (NRC), 1996, 2005), and the Next Generation Science Standards
(NGSS, 2011). In accordance with the new framework’s scientific practices to
promote scientific inquiry, to focus on cross-cutting concepts (concepts fundamental
to different disciplines of science), and to deepen core content, virtual laboratories
(the intervention in the current study) are one medium through which such scientific
inquiry can be practiced and a greater emphasis placed on cross-cutting concepts,
without the distraction of time-consuming hands-on tasks.
The specific topic on which the virtual laboratories in this study were based was
genetics, the study of inheritance. Disciplinary core ideas introduced by the NGSS
are addressed by such virtual laboratories, including ‘LS1: From molecules to
organisms: Structures and processes’, ‘LS3: Heredity: Inheritance and variation of
traits’, and ‘LS4: Biological evolution: Unity and diversity’ (National Research
Council (NRC), 2011, p. ES-3). Such ideas are considered to be difficult to learn
(Bahar, Johnstone, & Hansell, 1999) because they require multilevel thinking: an
organism is at the macro-level, while cells, chromosomes and DNA are at the micro-
and molecular level, and genotypes are at the symbolic level (Johnstone, 1991). For
instance, to master the topic of genetics at the high school level, students must be
able to discuss the structure and function of key molecules in the cell, explain the
process and purpose of DNA replication, meiosis, gene expression, cellular
regulation, and mutations, predict the impact of environmental factors on these
processes, discuss how they lead to diversity, and identify the role of genetics in
evolution. The teaching of genetics proves to be complex, as well. Controversy
exists over what order in which the sub-topics should be taught and at which point in
the curriculum genetics should be presented (Redfield, 2012) in order to maximize
understanding.
Since Watson and Crick’s (1953) discovery of the structure of DNA, the area of
genetics, one that began in the mid-1800s as classic ‘Mendelian genetics’, took on a
new direction and was renamed ‘molecular genetics’. From this historic event,
entire new fields within molecular genetics were born (Marbach-Ad, Rotbain, &
Stavy, 2008), including genetic engineering, which is the intentional modification of
an organism’s characteristics by manipulating its genetic material. From an
economic perspective, the burgeoning pharmaceutical industry provides the impetus
to raise standards in the learning of genetics because it relies on a skilled workforce,
which will be drawn from today’s students.
the use of models and visualization might prove to be helpful, especially at the
molecular level at which students have difficulty understanding this topic simply
based on textual presentations (Marbach-Ad, Rotbain, & Stavy, 2008). A number of
researchers note the potential of computer animations/simulations to facilitate the
visualization of abstract concepts and processes at the molecular level (Marbach-Ad,
Rotbain, & Stavy, 2008; Tsui & Treagust, 2004; Wu, Krajcik, & Soloway, 2001).
Virtual laboratories, which are such examples of computer-based simulations, are
capable of reducing the logistic load required in both the classroom and laboratory
when learning molecular genetics, so that students might better focus on its
demanding cognitive aspects.
The National Assessment of Educational Progress (NAEP), colloquially referred to
as the ‘Nation’s Report Card’, assesses whether learning goals are achieved and
whether interventions are beneficial. It showed that science scores improved
between 2009 and 2011, and that the achievement gap for minorities also narrowed
(National Center for Educational Statistics (NCES), 2012a).
that, over the course of a decade (from 2000–2010), more high school students were
enrolled in science and mathematics courses but that achievement had not improved
(Aud, 2012). However, the stagnancy of these scores might even be commended
because it indicates that standards have not been artificially lowered in order to
entice students to engage in more science and mathematics (Campbell, 2012).
These findings suggest that the interventions aimed at improving science education
in the last decade might hold promise. Therefore, one of the aims of this study was
to investigate whether using an intervention (i.e. virtual laboratories) would lead to
increased student science achievement.
1.4.3 Science Laboratories
The laboratory has been a prominent feature of science education since the inception
of teaching science systematically in the 19th century. A laboratory refers to
“experiences in school settings in which students interact with equipment and
materials or secondary sources of data to observe and understand the natural world”
(Hofstein & Kind, 2012, p. 190). However, in the early years of science
experimentation in school, laboratories were simply environments in which to
practice or confirm information learned during lectures or from textbooks. The
evolution of laboratories as being the environment through which exploration and
inquiry occur took decades, and is a process that is still ongoing.
they must internalize their own thought processes as well as those of their peers
(Hofstein & Kind, 2012).
Despite all of these reform efforts over the years, challenges still remain; Hofstein
and Lunetta (2004) pose serious questions about the efficiency and benefits of the
science laboratory. The laboratory in science education has been shown to be
effective in the development of practical, manipulative skills related to handling
equipment, but it has failed to enhance concept-building, critical thinking, and an
understanding of the nature of science; in essence, the laboratory has become a place
for “manipulating equipment and materials, but not ideas” (Hofstein & Kind, 2012,
p. 192).
Some reasons for the lack of evidence regarding the effectiveness of laboratories
include inadequate assessment and research procedures (Lazarowitz & Tamir,
1994), such as insufficient control over laboratory procedures (e.g. laboratory
manuals, teacher behavior, teachers’ assessments of student achievement),
inappropriate samples, and the use of measures that were not sensitive to the
laboratory learning environment (Hofstein & Lunetta, 1982). Since then, a number
of instruments to measure dimensions specific to the science laboratory have been
developed, such as the Science Laboratory Environment Inventory (SLEI), discussed
further in Section 2.2.2.
importance of incorporating metacognition into all activities; this is also considered
to be a way to develop independent learners (NRC, 1996, 2005, 2011). Four
conditions are necessary in order to foster an environment of inquiry, in which
metacognition can occur: time, opportunity, guidance, and support (Baird & White,
1996). Time can be afforded by reducing the amount of time spent on tasks that can
be handled by technology. Similarly, Hofstein and Lunetta (2004) present a way to
overcome the challenges of a lack of inquiry in science laboratories: investing in the
training and use of ‘inquiry empowering technologies’. Already in the early 1980s,
digital technologies were recognized as important tools for the science laboratory
(further discussion about the history of educational technology is found in Section
2.5). Essentially, such technologies can be used to perform time-consuming tasks
such as gathering and analyzing data. This allows students more time to observe,
reflect and construct conceptual knowledge; conduct, interpret, and report more
accurate and relevant data; and focus on student collaboration, development of a
community of inquirers, and engagement in argumentation (Hofstein & Kind, 2012).
All of these features are outcomes of laboratory investigation steeped in the
concepts of inquiry, constructivism, and social learning.
1.5 Limitations
The intention of this study was to compare the instructional effectiveness of virtual
laboratories relative to instruction without virtual laboratories. Therefore, virtual
laboratories, in the context of this study, were meant to supplement current methods
of instruction, rather than replace traditional methods (i.e. hands-on experiments)
with more innovative and technological ones. In other words, the intention of this
study was not to compare virtual laboratories with their hands-on counterparts. The
researcher was not interested in investigating whether virtual laboratories were more
effective than hands-on laboratories for the same experiment because research
(Bredderman, 1982; Johnson, Wardlow, & Franklin, 1997; Ma & Nickerson, 2006)
already has indicated the effectiveness of hands-on experiences with regard to
experimentation. In fact, the small body of past research on physical (hands-on)
versus virtual laboratories is inconclusive regarding which method is more
beneficial for students (de Jong, Linn, & Zacharia, 2013). This point is further
expanded upon in Section 2.5 where the literature regarding the effectiveness of
virtual laboratories, and educational technology in general, is reviewed.
Rather, the researcher simply noted a lack of opportunities for students to engage in
complex experimentation and techniques with which they are expected to become
familiar, according to newer standards of science education (see Section 1.4.1).
Therefore, the hypothesis of my study was that the introduction of virtual
laboratories would help students in this regard more than current instructional
methods that only involve verbal explanations or textbook illustrations. Essentially,
the researcher did not wish to run similar physical experiments with the comparison
group because such experiments are not usually possible in a high school setting.
High school laboratories simply do not have the safety precautions in place for
conducting such experiments; nor do high schools have the resources, such as costly
equipment and long periods of time for conducting these experiments.
Thus, this study was limited to high school classrooms that do not have the
capability to conduct complex experiments; the comparison of virtual experiments
with similar physical ones was beyond the scope of this investigation. As long as
the results of this study do not suggest a negative impact of virtual laboratories on
students’ educational experience (including perceptions of their learning
environment, attitudes, and achievement), then they suggest the effectiveness of
virtual laboratories as an instructional method.
1.6 Overview of Thesis
Background information about this study, its implementation, and its results are
presented in five chapters. Chapter 1 introduced the background (Section 1.1),
rationale and purpose (Section 1.2), research questions and research design (Section
1.3), educational context (Section 1.4), and limitations (Section 1.5) of the study, as
well as an overview of the rest of the thesis (Section 1.6).
Chapter 2 reviews the literature relevant to the current study, and is organized into
several sections and sub-sections. Section 2.2 describes the theoretical framework
for the evaluation of the intervention, namely, learning environments, including its
history and development, instruments used to assess the learning environment, and
the application of learning environment scales to current educational research.
Section 2.3 deals with students’ attitudes towards science, another measure of the
effectiveness of virtual laboratories in my study, by defining the term ‘attitude’,
describing how attitudes are assessed, and reviewing research on the impact of
educational interventions on attitudes. Gender differences in science education are
also considered in Section 2.4. The intervention in this study, virtual laboratories, is
discussed in Section 2.5, including its definition, history, and benefits, and the
possibility that educational technology might not offer any advantages. Finally,
Section 2.6 summarizes the reviewed literature.
The methodological aspects of this study are described in Chapter 3. Section 3.2
delineates the research questions that guided the methods, while Section 3.3
describes the sample selection, and Section 3.4 discusses the assessment instruments
and other resources. The procedures for the study’s implementation are elucidated
in Section 3.5, and a description of how the data were collected, entered, and
analyzed is presented in Section 3.6. Errors and other general limitations are
pointed out in Section 3.7.
The next chapter, Chapter 4, reports results for validation of the various parts of the
LAG instrument in Section 4.2, for associations between perceptions of the learning
environment (SLEI, TROFLEI) and attitudes (TOSRA) and achievement in Section
4.3, and for the effectiveness of virtual laboratories in Section 4.4, including results
for the differential effectiveness of virtual laboratories for males and females.
The final chapter summarizes the earlier chapters regarding research methods and
results (Section 5.2), explicates the significance of the results and implications for
educational research and practice (Section 5.3), points out the limitations of this
study as well as suggesting directions for further research (Section 5.4), and
provides a conclusion for the study (Section 5.5).
Chapter 2
Literature Review
“If I have seen further it is by standing on the shoulders of giants.” – Isaac Newton
2.1 Introduction
This chapter reviews literature that supports the various aspects of this study. The
aim of this study was to investigate the effectiveness of virtual laboratories in terms
of students’ perceptions of the learning environment, their attitudes towards science,
and their achievement in science. Additionally, it examined the differential
effectiveness of virtual laboratories for different sexes using the same measures.
First, Section 2.2 focuses on the literature that provided the theoretical framework
for the evaluation of the intervention: the field of learning environments provided a
framework for evaluating the effectiveness of virtual laboratories. Included in this
section is a review of the literature regarding the historical background for the
development of the field (Section 2.2.1), the instruments used to assess the learning
environment (Section 2.2.2), and the application of learning environment scales to
current research in classrooms (Section 2.2.3).
Next, Section 2.3 reviews the literature that deals with students’ attitudes towards
science, another measure of the effectiveness of virtual laboratories. The term
‘attitude’ is defined in Section 2.3.1, methods of assessment are presented in Section
2.3.2, and literature concerning the impact of educational interventions on students’
attitudes is reviewed in Section 2.3.3.
Gender differences in science education are considered in Section 2.4. Finally,
literature about the subject of the intervention in this study, virtual
laboratories, is reviewed in Section 2.5. More specifically, this section reviews
literature that describes the history of as well as the rationale for integrating
educational technology into classrooms (Section 2.5.1), that defines virtual
laboratories and portrays their advantages and application (Section 2.5.2), and
literature that provides a critical voice against such interventions (Section 2.5.5).
2.2 Theoretical Framework: Learning Environments Research
This study was couched in an area of educational research that has grown from its
infancy to prominence over the last 40 years. How does one measure the effects of
educational reform? Traditionally, educational research has focused on the learning
outcomes, especially achievement scores, of students experiencing an educational
intervention. However, evidence for the effectiveness of education is broader than a
mean score on achievement tests. This is the focus of the learning environments
framework; ‘learning environments’ is an area of research that involves not only the
learning outcome of achievement, but also a complex web of psychosocial factors
that impact on students, classrooms, and schools. More specifically, it explores
intangible aspects that give the classroom a characteristic tone (Fraser, 2001).
In fact, Fraser (2001, 2012) claims that the students, in contrast to external
observers, are the best evaluators of the classroom setting because they have been
observers in a multitude of classrooms during their entire lives. He states that, by
the time a student graduates from university, s/he will have been experiencing
classrooms for over 20,000 hours! Therefore, the perspective that is taken into
account in the field of learning environments is that of the student. That is, the field
uses students’ perceptions of the classroom environment, assessed by quantitative
surveys (Fraser, Giddings, & McRobbie, 1995), as criteria of effectiveness and
predictors of students’ cognitive and affective outcomes (Walberg & Anderson,
1968). Because these perceptions might in turn impact upon their attitudes and
achievement, the field of learning environments indirectly involves learning
outcomes, even though the real focus is the student’s perception of the classroom
environment.
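Chapter 4 reports such environment–outcome associations as simple correlations (r), multiple correlations (R), and standardized regression coefficients (β). As a rough illustration of how these three statistics relate, the following sketch uses synthetic data; the scale names, effect sizes, and sample size are invented for the example and do not reflect any result of this study.

```python
import numpy as np

# Synthetic data for illustration only: 100 students, two learning
# environment scale scores per student, and one attitude score.
rng = np.random.default_rng(0)
env = rng.normal(size=(100, 2))  # e.g. columns for Integration, Teacher Support
attitude = 0.5 * env[:, 0] + 0.3 * env[:, 1] + rng.normal(size=100)

def zscore(a):
    """Standardize to zero mean and unit variance (per column)."""
    return (a - a.mean(axis=0)) / a.std(axis=0)

# Simple correlation (r) of each environment scale with the attitude score.
for j in range(env.shape[1]):
    r = np.corrcoef(env[:, j], attitude)[0, 1]
    print(f"scale {j}: r = {r:.2f}")

# Standardized regression coefficients (beta): regress the z-scored
# attitude score on the z-scored environment scales (no intercept needed
# because all variables have mean zero after standardization).
beta, *_ = np.linalg.lstsq(zscore(env), zscore(attitude), rcond=None)
print("betas:", np.round(beta, 2))

# Multiple correlation (R): correlation of predicted with observed attitude.
predicted = zscore(env) @ beta
print("R =", round(float(np.corrcoef(predicted, zscore(attitude))[0, 1]), 2))
```

A simple correlation treats each scale in isolation, whereas a standardized regression coefficient indicates the unique contribution of a scale when the other scales are controlled; this is the sense in which a scale can be an ‘independent predictor’ of an outcome.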
This section reviews literature concerning various aspects of the field of learning
environments. First, Section 2.2.1 provides the historical background of the
development of the field. Next, instruments used to assess the learning environment
are explored in Section 2.2.2. Finally, Section 2.2.3 reviews how learning
environment scales have been applied in research in classrooms.
2.2.1 History and Development of Learning Environments Research
The field of learning environments has foundations that date back to Lewin’s (1936)
seminal study in a business setting that led to the formula, Behavior = f(Person,
Environment), in which behavior is defined as a function of the person and the
environment; this idea was applied to human behavior in any setting. His work was
followed by Murray (1938) who advocated a needs–press model in which personal
needs are either supported or frustrated by the environmental press. In line with this
model, Murray also coined the terms ‘alpha press’, referring to the perspective of an
objective observer, and ‘beta press’, which is the perspective of the participant of the
environment. Furthermore, Stern, Stein, and Bloom (1956) distinguished between the
individual’s perception of the environment (private beta press) and the shared
group’s perception of the environment (consensual beta press), a distinction
important to researchers when deciding upon the perception scores of the individual,
the group, or an external observer. Work in learning environments was furthered by
Stern (1970) who expanded upon the notion of person–environment fit.
Within a decade, the pioneering research on learning environments that began in the
US became international. Wubbels and Levy (1993) in the Netherlands
developed the Questionnaire on Teacher Interaction (QTI) to assess student–teacher
interactions. This work with the QTI was furthered by others in countries such as
Brunei Darussalam (Scott & Fisher, 2004), Singapore (Quek, Wong, & Fraser,
2005), Korea (Lee, Fraser, & Fisher, 2003), and Indonesia (Fraser, Aldridge, &
Soerjaningsih, 2010). Barry Fraser and his colleagues established Australia as a
center of research for learning environments and initially constructed the
Individualized Classroom Environment Questionnaire (ICEQ), which differed from
previous questionnaires that assessed teacher-centered classrooms by focusing on
classrooms that were more student-centered (Fraser & Butts, 1982). He was also
involved in the development and cross-validation of numerous other instruments
applied to learning environments around the world as described in this and the next
section.
The field was further established by the creation of specific research groups,
journals, and books devoted to learning environments, in addition to the
accumulation of studies conducted by individual researchers. In the mid-1980s, the
American Educational Research Association formed a Special Interest Group (SIG)
on Learning Environments. The launch of the Learning Environments Research: An
International Journal (Fraser, 1998a) carried the field of learning environments to
the next echelon in its rich history and development spanning the last few decades.
As well, a new book series, Advances in Learning Environments Research (Aldridge
& Fraser, 2008), has emerged to cater for topics in greater depth and breadth than
journals allow.
Research in the burgeoning field of learning environments still continues and new
instruments to assess the student’s perspective are currently being conceived at the
same time that scales from historically-significant questionnaires are still being
adapted to new circumstances. While designing studies that evaluate classroom
environments, researchers must select the appropriate instrument that best fits the
scope of the intended study, while also taking care to choose the appropriate unit of
analysis (e.g. the student, the class, the teacher) for scores from the questionnaire
responses to ensure statistically-accurate results (Dorman, 2012). A review of the
historically-significant instruments to assess learning environments ensues, with a
focus on the relevant questionnaires from which scales were selected and adapted
for my study.
2.2.2 Instruments for Assessing the Learning Environment
There are several advantages in the use of quantitative questionnaires to collect data.
In general, gathering data through the administration of questionnaires provides a
snapshot of the classroom environment (Fraser, 1998a, 1998b). The nature of these
quantitative instruments allows for data collection from several large groups at one
time and for comparisons to made across these groups and between subgroups
(Fraser, Fisher, & McRobbie, 1996); it is therefore an efficient method for gathering
a large data set in a short amount of time, in contrast to the amount of time required
to collect, record, transcribe, and organize qualitative data. This is particularly
relevant to classrooms where research-based improvements need to be implemented
swiftly before the environment changes. Additionally, questionnaires enable an
examination of multiple aspects of a learning environment to be assessed at a single
time (Fraser, 1998a, 1998b), as opposed to the limited field of view of an external
observer. Fraser (2012) also notes that gleaning perspectives from the participants
in the environment, namely, the students and teachers, can capture information
which an external observer can miss or consider insignificant. Naturally, gathering
data through quantitative measures introduces less bias than a researcher observing
the classroom environment or interviewing students himself or herself (Anderson &
Arsenault, 1998). Finally, in comparison with the effort required to train an external
agent in observation or interviewing techniques, teachers who administer
quantitative surveys do not require specialized training, ensuring greater efficiency
in data collection (Fraser, 1998a, 1998b).
Needless to say, while quantitative data collection through the use of surveys allows
all of the aforementioned benefits in the research process, it also lacks the ability to
grasp the nuances in students’ perceptions of the environment. In particular, the
researcher could be unable to understand the rationale behind students’ perceptions
and lack the information necessary to explain anomalies in the data (Duit &
Confrey, 1996). For this reason, while quantitative data collection via
questionnaires dominated the field of learning environments in the past, the method
of triangulation, in which quantitative and qualitative approaches to data collection
are productively combined, characterizes the field today (Aldridge, Fraser, &
Huang, 1999; Fraser & Tobin, 1991; Mathison, 1988; Tobin & Fraser, 1998).
Some brief explanations are in order regarding the general structure of such
questionnaires. The scales within a questionnaire (e.g. Student Cohesiveness and
Independence) are the dimensions by which the learning environment can be
quantitatively measured. The scales comprise specific items that address the
particularities of that dimension; for example, an item under Student Cohesiveness
might ask respondents to indicate their agreement with the statement “I know other
students in this class”. Most questionnaires contain a Likert or frequency scale
where responses range from ‘strongly disagree’ to ‘strongly agree’ or from ‘almost
never’ to ‘almost always’, respectively.
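To make the scoring of such instruments concrete, the sketch below shows how Likert responses are typically converted into scale scores, with each scale score taken as the mean of its item responses. The items, scale names, and 1–5 coding here are invented for illustration and are not the scoring procedure used with the LAG in this study.

```python
# Minimal sketch of Likert-scale scoring (hypothetical items and scales;
# a 1-5 coding is assumed, with 1 = strongly disagree ... 5 = strongly agree).

# One student's responses, keyed by item wording.
responses = {
    "I know other students in this class": 4,
    "I make friends easily in this class": 5,
    "I work independently in this class": 2,
}

# Hypothetical mapping of questionnaire items to scales.
scales = {
    "Student Cohesiveness": [
        "I know other students in this class",
        "I make friends easily in this class",
    ],
    "Independence": [
        "I work independently in this class",
    ],
}

def scale_score(items, student_responses):
    """Return the mean of one student's responses to the items in a scale."""
    values = [student_responses[item] for item in items]
    return sum(values) / len(values)

for name, items in scales.items():
    print(name, scale_score(items, responses))
# Student Cohesiveness 4.5
# Independence 2.0
```

When the class rather than the individual student is chosen as the unit of analysis (see the discussion of Dorman, 2012, above), these per-student scale scores would simply be averaged within each class before any between-class comparison is made.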
According to Moos’ (1974) scheme, there are three general dimensions that
characterize all human environments. Scales within any specific instrument can be
classified under the relationship dimension (i.e. the strength and type of personal
relationships within the environments, and the extent to which people are involved
in the environment and support one another), the personal development dimension
(i.e. the extent to which self-reflection and personal growth occur), or system
maintenance and change dimensions (i.e. the extent to which the environment is
orderly, clear in expectations, maintains control, and is responsive to change)
(Moos, 1974).
Table 2.1 Overview of Scales used in some Learning Environment Instruments (CUCEI, MCI, QTI, SLEI, CLES, WIHIC, and COLES), with scales classified according to Moos’ dimensions

College and University Classroom Environment Inventory (CUCEI)
Level: Higher Education. Items per scale: 7.
Relationship: Personalisation, Involvement, Student Cohesiveness, Satisfaction
Personal development: Task Orientation
System maintenance and change: Innovation, Individualisation

My Class Inventory (MCI)
Level: Elementary. Items per scale: 6–9.
Relationship: Cohesiveness, Friction, Satisfaction
Personal development: Difficulty
System maintenance and change: Competitiveness

Questionnaire on Teacher Interaction (QTI)
Level: Secondary/Primary. Items per scale: 8–10.
Relationship: Leadership, Helpful/Friendly, Understanding, Student Responsibility and Freedom, Uncertain, Dissatisfied, Admonishing, Strict

Science Laboratory Environment Inventory (SLEI)
Level: Upper Secondary/Higher Education. Items per scale: 7.
Relationship: Student Cohesiveness
Personal development: Open-Endedness, Integration
System maintenance and change: Rule Clarity, Material Environment

Constructivist Learning Environment Survey (CLES)
Level: Secondary. Items per scale: 7.
Relationship: Personal Relevance, Uncertainty
Personal development: Critical Voice, Shared Control
System maintenance and change: Student Negotiation

What Is Happening In this Class? (WIHIC)
Level: Secondary. Items per scale: 8.
Relationship: Student Cohesiveness, Teacher Support, Involvement
Personal development: Investigation, Task Orientation, Cooperation
System maintenance and change: Equity

Constructivist-Oriented Learning Environment Survey (COLES)
Level: Secondary. Items per scale: 11.
Relationship: Student Cohesiveness, Teacher Support, Involvement, Young Adult Ethos, Personal Relevance
Personal development: Task Orientation, Cooperation
System maintenance and change: Equity, Differentiation, Formative Assessment, Assessment Criteria

Adapted from Fraser (2012)
Table 2.1 displays important questionnaires used in the learning environments field,
and categorizes the scales of these questionnaires according to Moos’ three
dimensions. Older questionnaires, as well as less commonly used ones, are not included in this table. Additionally, the TROFLEI is omitted
because it is described in a separate table (see Table 2.2).
form than on the class form (Fraser, Giddings, & McRobbie, 1995). Therefore, the major advantage of the personal form is its increased sensitivity to the perceptions of subgroups (e.g. gender) within the classroom, in contrast to the traditional class form, to which students could respond inconsistently. For instance, when asked
about whether the work in the class is difficult, some students might consider
whether the whole class thinks that the work is difficult, while others perhaps
perceive that certain students think that the work is difficult, and still others reflect
on whether the work is difficult for themselves. In this confusion, it would be
difficult to extract the perspectives of subgroups. This distinction between personal
and class forms accommodates the distinction between ‘private’ beta press and
‘consensual’ beta press (Section 2.3.1).
The following sections review the learning environment instruments that have been
developed in the field (including the instruments from which scales have been
adapted for this study): the Learning Environment Inventory (LEI) and My Class
Inventory (MCI) (Section 2.2.2.1), the Classroom Environment Scales (CES)
(Section 2.2.2.2), the Individualized Classroom Environment Questionnaire (ICEQ)
(Section 2.2.2.3), the College and University Classroom Environment Inventory
(CUCEI) (Section 2.2.2.4), the Questionnaire on Teacher Interaction (QTI) (Section
2.2.2.5), the Constructivist Learning Environment Survey (CLES) (Section 2.2.2.6),
the Science Laboratory Environment Inventory (SLEI) (Section 2.2.2.7), the What Is
Happening In this Class? (WIHIC) survey (Section 2.2.2.8), the Technology-Rich
Outcomes-Focused Learning Environment Inventory (TROFLEI) (Section 2.2.2.9),
the Constructivist-Oriented Learning Environment Survey (COLES) (Section
2.2.2.10), and a few other questionnaires (Section 2.2.2.11). The summaries in these
sections provide a more detailed description of the learning environment
questionnaires outlined in Table 2.1. Each instrument’s synopsis below includes
information about the number of scales and items within each scale, the age level for
which the questionnaire is designed, past studies that have validated the
questionnaire, and the fit or lack thereof with the current study.
As part of an evaluation of Harvard Project Physics in the late 1960s, Walberg
formulated the Learning Environment Inventory (LEI), which was widely used in
the United States for secondary classrooms (Walberg & Anderson, 1968). The LEI
includes 15 scales (Cohesiveness, Friction, Favoritism, Cliqueness, Satisfaction,
Apathy, Speed, Difficulty, Competitiveness, Diversity, Formality, Material
Environment, Goal Direction, Disorganization, Democracy), each containing 7 items answered on a four-point agreement scale, with some items reverse-scored. Because this instrument is geared towards a teacher-centered style of classroom, it is better suited to traditional educational environments at the secondary level; furthermore, its length and complexity made it unsuitable for the students involved in this study.
Later, the LEI was adapted for use with younger students (ages 8–12 years): the item wording was simplified, the number of scales was trimmed to just five (Cohesiveness, Friction, Satisfaction, Difficulty, Competitiveness) containing 25 items in the short form, and responses were reduced to a Yes–No format. This adaptation became known as the My Class Inventory (MCI) (Fisher & Fraser, 1981; Fraser, Anderson, & Walberg, 1982). Swee Chiew Goh and Barry Fraser (1998) expanded the MCI’s response format to a three-point frequency scale (Seldom, Sometimes and Most of the Time), added a Task Orientation scale, and then used the revised MCI in research among primary mathematics students in Singapore. The MCI has also been successfully employed in Brunei Darussalam with just three scales (Cohesiveness, Difficulty and Competition) to reveal sex differences in students’ perceptions (Majeed, Fraser, & Aldridge, 2002). In the US,
the MCI has been used in Florida to evaluate a K–5 mathematics program that integrates children’s literature, called Project SMILE (Science and Mathematics Integrated with Literature Experiences) (Mink & Fraser, 2005), in Texas to evaluate
the use of science kits in primary school (Houston, Fraser, & Ledbetter, 2008), and
in Washington as an accountability tool for elementary-school counselors (Sink &
Spencer, 2005). Because this instrument is geared towards primary school students
and the response options are limited, it was not considered relevant for this study.
Moos (1974) developed the Classroom Environment Scales (CES) after evaluating
and researching diverse human environments such as psychiatric hospitals, prisons,
universities, and work settings in the US. The CES includes 9 scales (Involvement,
Affiliation, Teacher Support, Task Orientation, Competition, Order and
Organization, Rule Clarity, Teacher Control, Innovation) each containing 10 items
answered in a True–False response format (Trickett & Moos, 1973). While some of the CES’s scales were used in the current study because they have been integrated into more contemporary questionnaires (i.e. Teacher Support and Task Orientation), the instrument in its entirety was not appropriate for use in my study because it is geared towards a teacher-centered setting, is lengthy and complex, and has a limited response format.
presented challenges, even though the scale of Investigation was used because it was
borrowed from more recent questionnaires.
Support. While Teacher Support is a relevant dimension to assess in the current study, a more economical and contemporary version of this scale was adopted from another questionnaire (see Section 2.2.2.9).
A growing trend since the early part of this century has been constructivist learning theory, which postulates that learning is a proactive, cognitive process in
which the learner makes sense of the world in relation to prior constructed
knowledge through negotiation and consensus building. To assess the degree to
which constructivist epistemology is reflected in the learning environment,
including the teachers’ epistemological assumptions and the students’ awareness of
the invisible forces that affect their thinking, the Constructivist Learning
Environment Survey (CLES) was developed (Taylor, Fraser, & Fisher, 1997).
The validated CLES has been used in the evaluation of educational innovations (see
Section 2.2.3.2 for further detail). For instance, data from the CLES revealed the
success of novel teaching strategies in middle-school mathematics classrooms
(Ogbuehi & Fraser, 2007) and of a new mathematics program called the Class
Banking System (Spinner & Fraser, 2005). Additionally, when a teacher
professional development program based on the Integrated Science Learning
Environment (ISLE) was evaluated using the CLES, the results showed that
changing teachers’ learning environment at the university level enhanced their
students’ middle-school classroom environments (Nix, Fraser, & Ledbetter, 2005).
The final version of the CLES was a revision of the original version (Taylor & Fraser, 1991), which had focused on students as co-constructors of knowledge but ignored the cultural context of the classroom environment. It contains 7 items per scale
(Personal Relevance, Uncertainty, Critical Voice, Shared Control, Student
Negotiation) with responses on a 5-point frequency scale. Its advantages include the arrangement of items in blocks, which aids respondents, and minimal use of negative wording. While constructivism is a desirable dimension featured in virtual
laboratories, none of the CLES scales seemed relevant for assessing their
implementation and so this instrument was of limited use for this study.
In addition, students and teachers were interviewed to provide comments to guide
revisions to the survey during the various stages. Furthermore, student data
collected using the SLEI were subjected to item and factor analysis, which resulted
in the final version containing 7 items per scale (Student Cohesiveness, Open-
Endedness, Integration, Rule Clarity, Material Environment) with responses on a 5-
point frequency scale (Newby & Fisher, 1997).
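As a rough illustration of what internal consistency reliability means computationally, the sketch below implements Cronbach’s alpha, the statistic conventionally reported for such scales. This is a minimal illustration using simulated data, not the analysis actually performed in the SLEI studies:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Simulated responses of 100 students to a 7-item scale (values 1-5): a shared
# latent trait plus item-specific noise, so the items cohere as a scale should.
rng = np.random.default_rng(0)
latent = rng.normal(3, 0.8, size=(100, 1))
noise = rng.normal(0, 0.7, size=(100, 7))
scale = np.clip(np.round(latent + noise), 1, 5)
print(cronbach_alpha(scale))
```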
A sample of 5,447 students in 269 classes in the USA, Canada, England, Israel, Australia, and Nigeria was used to field test and validate the SLEI (Fraser & McRobbie, 1995). Simultaneous testing with 1,594 students in 92 classes revealed sound internal consistency reliability and discriminant validity (Fraser, Giddings, & McRobbie, 1995), as well as predictive validity when the SLEI was used along with attitude scales to predict student outcomes (Fraser, Giddings, & McRobbie, 1992). Further validation was accomplished through a
study of 489 senior high-school biology students in Australia by Fisher, Henderson
and Fraser (1997).
The SLEI was also translated into Korean for use in a study of differences between
the classroom environments of three streams (science-independent, science-oriented
and humanities), consisting of 439 high-school students in total. This version of the
SLEI exhibited sound factorial validity and internal consistency reliability, and was
able to differentiate between the perceptions of students in different classes.
Generally students in the science-independent stream perceived their laboratory
classroom environments more positively than did students in either of the other two
streams (Fraser & Lee, 2009).
elementary teachers (Martin-Dunlop & Fraser, 2007) and the effect of
anthropometric activities on a classroom learning environment (Lightburn & Fraser,
2007). Each of these studies is explored in detail in Section 2.2.3.2.
The SLEI has also been adapted to more specific environments, such as the
Chemistry Laboratory Environment Inventory (CLEI), which was found to be valid
when used in Singapore to uncover associations between the learning environment
and attitudes (Wong & Fraser, 1996) and to assess the differences in chemistry
laboratory environments between streams (gifted versus non-gifted) and sexes
(Quek, Wong, & Fraser, 2005).
At around the same time, adaptations were made to the SLEI for use in courses in
which computing technology is a fundamental tool. The Computer Laboratory
Environment Inventory (CLEI) was developed to assess the learning environment of
a computer laboratory in higher education and was tested with 80 college-level
students (Newby & Fisher, 1997). The survey contains 5 scales (Student
Cohesiveness, Open-Endedness, Integration, Technology Adequacy, Material
Environment) and responses are on a 5-point frequency scale.
As a whole, the SLEI, with its focus on laboratory classroom environments, seemed
to be an appropriate instrument for use in the current study of the effectiveness of an
alternative laboratory. However, most of the scales are geared towards hands-on
experimentation in a whole-class setting involving social aspects of the classroom
(i.e. Student Cohesiveness, Open-Endedness, Rule Clarity) and therefore are
irrelevant to the setting of the study, which focused on the individual student.
Therefore, only the scales of Integration and Material Environment were borrowed
from the SLEI for use in the current study’s instrument because they pertain to
various aspects of virtual laboratories.
The personal form of the SLEI was more appropriate for this study to ensure that
students provided their own perspectives rather than their perspectives of the whole
class. Also, the actual version of the SLEI scales was applicable to the
circumstances because changing the environment in light of student preferences was
not part of my study. For use among high-school science students, the language was
modified to include only positively-worded items to avoid confusion. Also, an item
was added to each SLEI scale to maintain consistency with the number of items in the other scales in this study’s instrument.
The original What Is Happening In this Class? (WIHIC) was developed by Fraser,
McRobbie, and Fisher (1996) to combine previous questionnaires and incorporate
contemporary educational concerns such as equity and constructivism. It was found
to be reliable and valid when tested with a sample of 50 high school classes each in
Australia and Taiwan (where a Chinese translation was used). Interestingly, even though Australian students
viewed their learning environments more favorably, Taiwanese students had more
positive attitudes towards science. Many of the studies employing the WIHIC
investigated associations between perceptions of the learning environment and
attitudes towards learning; for a more comprehensive description of these studies,
see Fraser’s review of Classroom Learning Environments (Fraser, 2012).
Originally consisting of 90 items in nine scales, the WIHIC was field tested with
355 middle school students. Factor analysis and interviews resulted in a revised
form containing 56 items in 7 scales (Student Cohesiveness, Teacher Support,
Involvement, Investigation, Task Orientation, Cooperation, Equity) with a 5-point
frequency scale (Aldridge, Fraser, & Huang, 1999). In addition to its wide use and
validity, the WIHIC’s items are organized in blocks, there are no reverse-scored
items (to minimize confusion), and personal and class forms are available to accommodate differences in individualized perceptions of the classroom (Aldridge,
Fraser, & Huang, 1999; Fraser, 1998a). This instrument has been extensively
applied to various subject areas, age levels, and countries, and is available in many
languages, as described below.
In a second round of field testing of the WIHIC with 1,081 students in Australia and
1,879 students in Taiwan using a Chinese version, Aldridge and colleagues reported
strong factorial validity and internal consistency reliability and that each scale was
capable of differentiating significantly between the perceptions of students in
different classrooms (Aldridge, Fraser, & Fisher, 2000). In fact, these sound
psychometric qualities have been replicated in every study using the WIHIC.
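The recurring claim that a scale can differentiate between the perceptions of students in different classrooms is conventionally checked with a one-way ANOVA using class membership as the grouping variable. A minimal sketch follows, assuming a hypothetical file students.csv whose column names are invented for illustration:

```python
import pandas as pd
from scipy import stats

# Hypothetical data file with one row per student; columns are invented.
df = pd.read_csv("students.csv")          # columns: class_id, scale_score

# Group scale scores by class and test whether class means differ.
groups = [g["scale_score"].to_numpy() for _, g in df.groupby("class_id")]
f_stat, p_value = stats.f_oneway(*groups)  # a significant F statistic means
print(f_stat, p_value)                     # the scale differentiates classes
```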
The WIHIC has been translated into Mandarin for use in Taiwan (Aldridge, Fraser,
& Fisher, 2000; Aldridge, Fraser, & Huang, 1999), Indonesian for use in Indonesia
(Fraser, Aldridge, & Adolphe, 2010; Wahyudi & Treagust, 2004), Korean for use in
Korea (Kim, Fisher, & Fraser, 2000), Arabic for use in the UAE (Afari, Aldridge,
Fraser et al., in press; MacLeod & Fraser, 2010), and Spanish for use in Miami in
the US (Allen & Fraser, 2007; Helding & Fraser, in press; Robinson & Fraser, in
press). Additionally, countries where the instrument has been validated in English,
besides the aforementioned studies in Australia, include the US (Allen & Fraser,
2007; den Brok, 2006; Helding & Fraser, in press; Martin-Dunlop & Fraser, 2007;
Ogbuehi & Fraser, 2007; Pickett & Fraser, 2009; Robinson & Fraser, in press; Wolf
& Fraser, 2008), Canada (Zandvliet & Fraser, 2004, 2005), Singapore (Chionh &
Fraser, 2009; Khoo & Fraser, 2008), India (Koul & Fisher, 2005), South Africa
(Aldridge, Fraser, & Ntuli, 2009), and Turkey (den Brok, Telli, Cakiroglu et al.,
2010).
Because of its robustness, the WIHIC’s scales have been adapted for use with other
instruments in particular environments in many different areas of research. For
instance, a sample of 2,638 grade 8 science students from 50 schools in the Limpopo Province of South Africa was used to develop and validate a classroom environment instrument in the Sepedi language for monitoring the implementation of outcomes-based classroom environments. The resulting Outcomes-Based Learning Environment Questionnaire (OBLEQ) contains four scales from the WIHIC, in addition to three other scales (Aldridge, Laugksch, Seopa et al., 2006).
Greek versions of two scales of the WIHIC, namely, Involvement and Teacher
Support, were incorporated into a new questionnaire entitled How Chemistry Class
is Working (HCCW), which was validated with over 1,600 students in Greece and
Cyprus. A more positive classroom environment was perceived among Cypriot
students than among Greek students (Giallousi, Gialamas, Spyrellis et al., 2010).
rather than on content (Fraser, 2012); this approach is also often referred to as
‘backward planning’ (Wiggins & McTighe, 2005). Appropriate instruments are
necessary in order to evaluate this approach to education. As well, the integration of
technology into education is a contemporary dimension of classroom environments.
This reflects the view that the classroom environment is dynamic rather than static
and that instrumentation to evaluate new dimensions needs to be continually
devised. Rather than using a generalized instrument ‘off the shelf’, it is now
common to validate context-specific instruments when conducting classroom
environment research (Dorman, Aldridge, & Fraser, 2006).
The TROFLEI has been applied across all learning areas using both the personal and
class forms and actual and preferred forms (Aldridge & Fraser, 2008). A unique
aspect of the TROFLEI is that it employs a side-by-side response format, which
enables students to provide their separate perceptions of actual and preferred
classroom environment in an economical way. To provide contextual cues and to
minimize confusion to students (Aldridge et al., 2000), TROFLEI items that belong to the same scale are grouped together rather than being arranged randomly or cyclically. Only positively-worded items are employed to ease students’
understanding of the statements, as indicated in past studies (Barnette, 2000).
The TROFLEI was originally validated in a study of 1,035 students in grades 10 and 11 at Seven Oaks Senior College in Western Australia, a school that emphasizes outcomes-focused education and the use of Information and Communication Technology (ICT) (Aldridge & Fraser, 2003). During its first year of operation, this new school was subjected to formative and summative evaluation that included use of the TROFLEI. More extensive validation was carried out using a larger sample of 2,317 students from 166 grade 11 and 12 classes in Western Australia and Tasmania.
The study revealed strong factorial validity and internal consistency reliability for
both the actual and preferred forms of the TROFLEI. As well, the actual form of
each scale was capable of differentiating between the perceptions of students in
different classrooms. Results after four years supported the efficacy of the school’s
educational programs and offered insights regarding differences in the classroom
environment perceptions between males and females and between students enrolled
in university-entrance examinations and in wholly school-assessed subjects
(Aldridge & Fraser, 2008). Furthermore, Aldridge, Dorman and Fraser (2004) used
multi-trait-multi-method modeling with a sub-sample of 1,249 students, of whom
772 were from Western Australia and 477 were from Tasmania, to support the
TROFLEI’s construct validity and sound psychometric properties, including that the
actual and preferred forms share a common structure.
affective outcomes scales (attitude to subject, attitude to computers, and academic
efficacy) (Koul, Fisher, & Shaw, 2011).
Table 2.2 Scale Description, Moos’ Dimension, and Sample Item for Each Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI) Scale

Student Cohesiveness (Relationship): the extent to which students know, help, and are supportive of one another. Sample item: “I am friendly to members of this class.”

Teacher Support (Relationship): the extent to which the teacher helps, befriends, trusts, and is interested in students. Sample item: “The teacher takes an interest in me.”

Involvement (Relationship): the extent to which students have attentive interest, participate in discussions, do additional work and enjoy the class. Sample item: “I explain my ideas to other students.”

Task Orientation (Personal Development): the extent to which it is important to complete activities planned and stay on the subject matter. Sample item: “I know how much work I have to do.”

Investigation (Personal Development): the extent to which skills and processes of enquiry and their use in problem solving and investigation are emphasized. Sample item: “I carry out investigations to test my ideas.”

Cooperation (Personal Development): the extent to which students cooperate rather than compete with one another on learning tasks. Sample item: “I share my books and resources with other students when doing assignments.”

Equity (System Maintenance and Change): the extent to which students are treated equally by the teacher. Sample item: “I get the same opportunity to answer questions as other students.”

Differentiation (System Maintenance and Change): the extent to which teachers cater for students differently on the basis of ability, rate of learning and interests. Sample item: “I do work that is different from other students’ work.”

Computer Usage (System Maintenance and Change): the extent to which students use their computers as a tool to communicate with others and to access information. Sample item: “I use the computer to take part in online discussion with other students.”

Young Adult Ethos (Relationship): the extent to which teachers give students responsibility and treat them as young adults. Sample item: “I am encouraged to take control of my own learning.”

(Koul, Fisher, & Shaw, 2011)
Another validation study was conducted cross-culturally with 980 Turkish and 130
American high school science students in grades 9–12. This study revealed sound
psychometric properties of the TROFLEI for use with both populations (Welch,
Cakir, Peterson et al., 2012). The TROFLEI was also validated in Thailand
involving tertiary-level students in electronics laboratories (Promratrak & Malone,
2006).
Table 2.2, adapted from Koul, Fisher, and Shaw (2011), displays, for each of the 10
scales of the TROFLEI, a scale description, its categorization under Moos’
dimensions, and a sample item. The scales chosen for use in the current study are
discussed at greater length in Chapter 3.
activities are relevant to the student’s everyday out-of-school experiences) was also borrowed from the CLES for inclusion in the COLES.
Data analysis supported the sound factorial validity and internal consistency
reliability of both actual and preferred versions of the COLES for a sample of 2,043
grade 11 and 12 students in Western Australian schools. In addition, the actual form
of the COLES was capable of differentiating between the perceptions of students in
different classrooms. In order to provide feedback as a basis for reflection and teacher action, results from the COLES were also complemented by students’ reflective journals, written feedback, discussion at a forum, and teacher interviews. The experiences of these teachers concerning the viability of using feedback from the COLES were considered as part of their action research aimed at improving their classroom environments (Aldridge et al., 2012).
Instruments that have been developed to assess remote learning environments at the
post-secondary level include the Distance and Open Learning Environment Scale
(DOLES) for higher education (Jegede, Fraser, & Fisher, 1995) and the Distance
Education Learning Environments Survey (DELES) (Walker & Fraser, 2005). The
DOLES contains the five core scales of Student Cohesiveness, Teacher Support,
Personal Involvement and Flexibility, Task Orientation and Material Environment,
and Home Environment, as well as the two optional scales of Study Center
Environment and Information Technology Resources. The DELES was developed for online delivery and includes six scales (Instructor Support, Student Interaction and
Collaboration, Personal Relevance, Authentic Learning, Active Learning and
Student Autonomy).
The Web-Based Learning Environment Instrument (WEBLEI) was developed to
assess students’ perceptions of online learning environments for higher education.
The online mode of education represents a paradigm shift in learning environments
as it involves a separation of time and place between teacher and learner, between
learners, and between learners and learning resources. A study conducted with
university students in Australia, the majority of whom were new to the concept of an
online mode for coursework, validated the questionnaire’s four scales of Access,
Interaction, Response, and Results (Chandra & Fisher, 2009).
While such questionnaires for assessing alternative classroom environments were somewhat relevant to the current study, each is too specific to the particular environment that it assesses. Therefore, they were used as a reference to modify the wording of
specific items within scales that were adopted from more generalized questionnaires
as described in Sections 2.2.2.7 and 2.2.2.9. Specifically, the OLLES informed
modifications necessary for the Material Environment scale adopted from the SLEI.
In order to maximize the validity and reliability of this study, the author chose to balance the need to customize instrumentation to the specific aspects of this study against the robustness of more standardized questionnaires such as the SLEI and TROFLEI detailed above.
The learning environment instruments described above have been used to pursue
numerous lines of past research. Specific lines of past research within the field of
learning environments, which are briefly reviewed below, are associations between
student outcomes and the learning environment (Section 2.2.3.1), teachers’ efforts to
improve the classroom environment (Section 2.2.3.2), comparison of actual and
preferred environments (Section 2.2.3.3), cross-national studies (Section 2.2.3.4),
and other lines of research (Section 2.2.3.5). Finally, Section 2.2.3.6 singles out the
line of research that pertains to this study, namely, using learning environment
dimensions as criterion variables in the evaluation of educational innovations.
Past research has consistently linked the nature of the learning environment with students’ cognitive (i.e. achievement) and affective (e.g. attitudes) learning outcomes. In fact, a multitude of factors have a multiplicative, diminishing-returns effect on educational productivity, as theorized in Walberg’s (1981) model, which is analogous to economic models of agricultural, industrial, and cultural productivity: age, ability, motivation, quality and quantity of instruction, and the psychosocial environments of the home, classroom, peer group, and mass media. The effect of these factors is multiplicative in that any factor at the zero point (e.g. motivation) will result in zero learning; therefore, it is better to improve a limiting factor that is low than to improve a factor that is already functioning well. While there is this multitude of factors that
affect educational productivity, the psychosocial learning environment has emerged
as a strong predictor of both achievement and attitudes even when other factors are
held constant (Fraser, 2007, 2012). In other words, students’ perceptions of their classroom environments are more closely associated with learning outcomes than are other influential forces such as students’ backgrounds.
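A small numerical illustration of this multiplicative, diminishing-returns logic follows; all factor names and values are invented for the sketch and are not Walberg’s estimates:

```python
import numpy as np

# In a multiplicative model, learning is proportional to the product of the
# factors, so any factor near zero caps the outcome regardless of the others.
factors = {"ability": 0.8, "motivation": 0.1,
           "instruction_quality": 0.9, "classroom_environment": 0.7}

def productivity(f: dict) -> float:
    return float(np.prod(list(f.values())))

print(productivity(factors))                       # 0.0504: motivation limits

# Raising the weakest factor pays off far more than raising a strong one.
print(productivity({**factors, "motivation": 0.5}))           # 0.252
print(productivity({**factors, "instruction_quality": 1.0}))  # 0.056
```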
Even though many past learning environment studies have employed techniques such as multiple regression analysis, this method overlooks the fact that classroom environment data are typically derived from students grouped in pre-formed classes and are therefore inherently hierarchical. Multilevel analysis is more appropriate under such conditions because it avoids aggregation bias and imprecision. Two
studies of outcome-environment associations compared the results obtained from
multiple regression analysis with those obtained from an analysis involving the
hierarchical linear model (HLM). The multiple regression analyses were performed
separately at the individual student level and the class mean level. In the HLM
analyses, the environment variables were investigated at the individual level and
also they were aggregated at the class level. In a study involving 1,592 grade 10
students in 56 chemistry classes in Singapore, associations were investigated
between three student attitude measures and a modified version of the SLEI (Wong,
Young, & Fraser, 1997). In Goh, Young and Fraser’s (1995) study with 1,512 grade
5 mathematics students in 39 classes in Singapore, scores on a modified version of
the MCI were related to student achievement and attitude. The two methods
produced results that were consistent in strength and in direction.
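The methodological contrast can be sketched in code: a single-level regression treats every student as independent, whereas a multilevel (random-intercept) model respects the nesting of students within classes. A hedged sketch using the statsmodels library, with an invented data file and variable names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per student; 'env' is a learning environment
# scale score, 'attitude' an outcome, 'class_id' the pre-formed class.
df = pd.read_csv("students.csv")

# Single-level regression ignores the clustering of students in classes.
ols_fit = smf.ols("attitude ~ env", data=df).fit()

# A two-level model adds a random intercept for each class, avoiding the
# aggregation bias described above.
hlm_fit = smf.mixedlm("attitude ~ env", data=df, groups=df["class_id"]).fit()

print(ols_fit.params["env"], hlm_fit.params["env"])
```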
Using a large sample of high school students in Turkey, a translated version of the
QTI was administered in conjunction with an attitude questionnaire to explore
associations between teacher–student interpersonal behavior and students’ attitudes
to science. The use of multilevel analysis revealed that the influence dimension of
the QTI was related to student enjoyment, while proximity was associated with
attitudes to inquiry (den Brok et al., 2010). In another study involving the TROFLEI, structural equation modeling using LISREL was applied to investigate the relationships among the classroom environment, antecedent variables (gender, grade level, and home computer and Internet access), and student affective outcomes (attitude to the subject, attitude to computer use and academic efficacy) for 4,146 high-school students from Western Australia and Tasmania. Results revealed that: improving
classroom environment had the potential to improve student outcomes; antecedents
did not have any significant direct effect on outcomes; and academic efficacy
mediated the effect of several classroom environment dimensions on attitude to
subject and attitude to computer use (Dorman & Fraser, 2009).
Table 2.3 Some Studies of Associations Between the Learning Environment and Student Outcomes

Studies involving the MCI
Fraser & Fisher (1982b). Outcome measures: inquiry skills; understanding of the nature of science; attitudes. Sample: 2,305 Grade 7 science students in 100 classes in Tasmania, Australia.
Goh, Young, & Fraser (1995). Outcome measures: attitudes. Sample: 1,512 primary school students in Singapore.
Majeed, Fraser, & Aldridge (2002). Outcome measures: attitudes. Sample: 1,565 mathematics students in 81 classes in Brunei Darussalam.

Studies involving the SLEI
Fraser & McRobbie (1995); McRobbie & Fraser (1993). Outcome measures: attitudes. Sample: approximately 80 senior high school chemistry classes in Australia.
Fisher, Henderson, & Fraser (1997). Outcome measures: attitudes. Sample: 489 senior high school biology students in Australia.
Wong & Fraser (1996). Outcome measures: attitudes. Sample: 1,592 Grade 10 chemistry students in Singapore.
Lightburn & Fraser (2007). Outcome measures: attitudes. Sample: 761 high-school students in the US.
Quek, Wong, & Fraser (2005). Outcome measures: attitudes. Sample: 497 secondary school students in Singapore (using an adaptation, the CLEI).

Studies involving the CLES
Kim, Fisher, & Fraser (1999). Outcome measures: attitudes. Sample: 1,083 Grade 10 and 11 science students in 24 classes in Korea.
Aldridge, Fraser, Taylor, & Chen (2000). Outcome measures: attitudes. Sample: 1,081 Grade 8–9 science students in Taiwan and 1,879 Grade 7–9 science students in Australia.
Aldridge, Fraser, & Sebela (2004). Outcome measures: attitudes. Sample: 1,843 Grade 4–9 students in 29 mathematics classes in South Africa.
Nix, Fraser, & Ledbetter (2005). Outcome measures: attitudes. Sample: 1,079 high school students in 59 classes in Texas, USA.

Studies involving the WIHIC
Aldridge et al. (1999); Aldridge & Fraser (2000). Outcome measures: enjoyment. Sample: 1,081 junior high school students in Australia and 1,879 such students in Taiwan.
Kim, Fisher, & Fraser (2000). Outcome measures: attitudes. Sample: 543 Grade 8 students in 12 schools in Korea.
Telli, Çakıroğlu, & Brok (2006). Outcome measures: attitudes. Sample: 1,983 students in 57 classrooms in Turkey.
Wolf (2008). Outcome measures: attitudes. Sample: 1,434 middle-school science students in 71 classes in the US.
Fraser & Chionh (2009). Outcome measures: achievement, attitudes, and self-esteem. Sample: 2,310 Grade 10 geography and mathematics students in Singapore.
Fraser, Aldridge, & Adolphe (2010). Outcome measures: attitudes. Sample: 567 high-school science students in Australia and 594 such students in Indonesia.

Studies involving the TROFLEI
Dorman & Fraser (2009). Outcome measures: attitudes. Sample: 4,146 Grade 8–13 students in Western Australia and Tasmania.
Koul, Fisher, & Shaw (2011). Outcome measures: attitude to subject, attitude to computers, and academic efficacy. Sample: 1,027 high-school students in New Zealand.
A meta-analysis conducted by Haertel, Walberg and Haertel (1981), involving 734 correlations from 12 studies that spanned 823 classes, eight subject areas, 17,805 students and four nations, revealed associations between various dimensions of the learning environment and student outcomes. Additionally, correlations were
generally higher in samples of older students and in studies employing collectivities
such as classes and schools (in contrast to individual students) as the units of
statistical analysis. In particular, higher achievement on a variety of outcome
measures was found consistently in classes perceived as having greater
Cohesiveness, Satisfaction and Goal Direction and less Disorganization and
Friction. Other meta-analyses also provide further evidence supporting the link
between educational environments and student outcomes (Fraser, Walberg, Welch et
al., 1987).
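For readers unfamiliar with how correlations are pooled across studies in such meta-analyses, one standard recipe (not necessarily the procedure Haertel et al. used) averages the correlations on Fisher’s z scale, weighted by sample size. A sketch with invented study values:

```python
import numpy as np

r = np.array([0.30, 0.45, 0.25])   # environment-outcome correlations (invented)
n = np.array([120, 800, 65])       # per-study sample sizes (invented)

z = np.arctanh(r)                  # Fisher r-to-z transformation
z_pooled = np.average(z, weights=n - 3)   # weight each study by n - 3
print(np.tanh(z_pooled))           # back-transform to a pooled correlation
```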
Table 2.3 displays details of some of the well-known studies that have established
associations between learning environment scales and various measures of student
outcomes. These studies are organized in the table according to which specific learning environment instrument was employed in the study. Other
information about each study, such as the sample size, grade level, and location, is
also shown.
Feedback information based on student or teacher perceptions can be employed as a
basis for reflection upon, discussion of, and systematic attempts to improve
classroom and school environments. Therefore, this section reviews this specific
line of research involving teachers’ use of learning environment perceptions in
guiding practical attempts to improve their own classrooms and schools.
Fraser and Fisher’s (1986) case studies of teachers attempting to improve their
classroom environment involved five steps: assessment when students were given
the preferred form of the CES and one week later the actual form; feedback to the
teachers from students’ responses regarding the gap between the preferred and the
actual environment; private reflection and informal discussion that helped the
teacher to consider which dimensions require intervention; intervention for about a
two-month period during which specific strategies to address dimensions of concern
were implemented; and reassessment at which point students again responded to the
actual form of the CES to determine whether the changes they preferred indeed had
occurred. Some changes in the actual environment did occur as a result, and two of
the dimensions on which significant changes were recorded were those that the teacher had specifically attempted to change.
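Computationally, the feedback step of this five-step cycle reduces to comparing class means on the preferred and actual forms scale by scale. A minimal sketch with invented scale names and scores:

```python
import pandas as pd

# Hypothetical class-mean scores on the preferred and actual forms of a
# questionnaire; the scale names and values are invented for illustration.
preferred = pd.Series({"Teacher Support": 4.2, "Order and Organization": 3.9,
                       "Innovation": 3.5})
actual = pd.Series({"Teacher Support": 3.1, "Order and Organization": 3.8,
                    "Innovation": 2.4})

# The actual-preferred discrepancy identifies dimensions for intervention.
gap = (preferred - actual).sort_values(ascending=False)
print(gap)   # largest gaps (here Teacher Support, Innovation) come first
```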
This practical approach to learning environments research has been used with pre-
service teacher education students in their own university settings and in their
students’ school classrooms (Yarrow, Millwater, & Fraser, 1997), as well as with in-
service teachers (Aldridge, Fraser, & Ntuli, 2009). For instance, in South Africa,
two such studies were conducted using action research in an attempt to improve
teachers’ classroom learning environments. In Aldridge, Fraser and Ntuli’s (2009)
study, 31 in-service teachers undertaking a distance-education program administered
an adapted version of the WIHIC in the IsiZulu language to 1,077 primary school
students, which enabled some of the teachers to use feedback from the questionnaire
to improve their classroom environments with varying degrees of success. In
Aldridge, Fraser and Sebela’s (2004) study, a group of 29 mathematics teachers in
South Africa administered the English version of the CLES to their primary-level
students and some of the teachers were able to improve the constructivist orientation
of their classrooms.
In the US, Sinclair and Fraser (2002) used the actual and preferred forms of a
questionnaire based on the WIHIC to guide changes in their classrooms’ learning
environments. Results generally supported the success of teachers’ attempts to
change their classroom environments based on feedback from the students, but they
also indicated that efforts to change the learning environment should involve
different interventions for students of different genders.
Most recently, the COLES was used as a basis for reflection for teachers’ attempts
to bridge the gap between preferred and actual classroom environments over a six-
week period. The COLES was administered as a pre-test and post-test with the aim
being for teachers to use the feedback from the pre-test to reduce the actual–
preferred discrepancies on selected COLES scales by the time of the post-test. In
this particular study, the authors created a novel method, using circular profiles, to
communicate to the teachers feedback information based on students’ responses to
the COLES. Qualitative data were also collected from reflective journals, written
feedback, forum discussions and teacher interviews. Teachers felt that this process
enabled them to reflect on their teaching practices and ultimately to help them to
improve their classroom environments (Aldridge et al., 2012).
might be enhanced by attempting to change the actual classroom environment in
order to increase congruence with student preferences.
With the recent globalization of the economy, new vistas have opened for education in the international arena as well. Approaching educational environments from a cross-national perspective is advantageous in that variation (such as in teaching methods, student attitudes, and nationalities) is increased within the sample for a study, and in that standard practices in one country can be called into question in an unbiased manner in another country (Fraser, 2012).
In a separate study, the WIHIC was validated in two languages, in Indonesia (in Bahasa Indonesia) and in Australia (in English), and some differences were found between
countries, as well as for different sexes. This study also confirmed associations
between the learning environment and several attitude scales (Fraser, Aldridge, &
Adolphe, 2010).
In designing new instruments, it is important to validate them across nations
simultaneously, and many such questionnaires were developed in this way. For
instance, the SLEI was validated across the USA, Canada, Australia, England,
Israel, and Nigeria (Fraser, Giddings, & McRobbie, 1992) and the WIHIC was
validated using students in Australia, England, and Canada (Dorman, 2003). As
well, the TROFLEI was cross-culturally validated in the US and Turkey for high
school (grades 9–12) students. Differences were noted across national borders in
each study, suggesting the role of culture in perceptions of the learning environment.
Other lines of research involve the use of triangulation, or the combination of quantitative and qualitative methods, which permeates the field today (Aldridge, Fraser, & Huang, 1999; Fraser & Tobin, 1991; Mathison, 1988; Tobin & Fraser, 1998).
Quantitative data collection is accomplished most often through the use of a questionnaire, while qualitative data usually encompass student and teacher interviews (and sometimes interviews of administrators and parents), classroom observations, and students’ written work. Unique contributions of mixed-methods studies in the learning environments field include the use of qualitative data to complement quantitative results, which clarified patterns in Taiwanese and Australian classrooms and identified the differences between them (Aldridge, Fraser, & Huang, 1999), the investigation of higher-level
cognitive learning in US classrooms (Tobin, Kahle, & Fraser, 1990), and a
multilevel exploration of the learning environment to judge whether a certain
teacher was typical of other teachers within her school and of other schools within
the state in Australia (Fraser, 1999). In a mostly qualitative study comparing
exemplary teachers with non-exemplary teachers, data from questionnaires were
also obtained and the merging of these two methods helped to shed light on the
differences between classrooms of such teachers (Fraser & Tobin, 1989). Currently, many evaluations of educational innovations using a learning environment framework include at least semi-structured interviews in their design, in addition to questionnaires as the main method of data collection.
from a study employing a Turkish translation of the WIHIC in Turkey were: self-
directed learning; task-orientated cooperative learning; mainstream; task-orientated
individualized; low-effective learning; and high-effective learning (den Brok et al.,
2010). As well, using cluster analysis for results from the TROFLEI on a large
Australian sample of students, five relatively homogeneous groups of classes
became apparent: exemplary; safe and conservative; non-technological teacher
centered; contested technological; and contested non-technological (Dorman,
Aldridge, & Fraser, 2006).
This section discusses another line of past and current research involving learning
environment scales that is directly relevant to my study and therefore deserves separate, more detailed treatment. More recently, educational innovations have
changed the dynamic of traditional classrooms and their evaluation has created a
new subgenre of the learning environment framework. Learning environment scales
have been useful in providing criteria of effectiveness for evaluating educational
innovations in the numerous past studies described below. Thus, in my study,
learning environment variables were used both as criteria of instructional
effectiveness and as predictors of student outcomes such as attitudes and
achievement. In this manner, educational innovations influence learning environments, which in turn influence attitudes and achievement. This constitutes
the specific research approach for my study because the use of virtual laboratories is
considered an educational innovation that requires evaluation.
environment questionnaires has often been within the context of a need to evaluate a
particular educational innovation. An evaluation of Harvard Project Physics, a
national curriculum introduced in the late 1960s to utilize new instructional media
that emphasize the philosophical, historical, and humanistic aspects of physics,
resulted in the development of the first learning environment questionnaire, the LEI,
as described in Section 2.2.2.1. In one particular study, which was part of a series of investigations about the classroom as a social system based on Getzels and Thelen’s (1960) theory, 1,700 US high school students who completed the project were surveyed (within-class design) with instruments that included the Physics Achievement Test, the Science Process Inventory, the Semantic Differential for Science Students, the Pupil Activity Inventory, and the Classroom Climate Questionnaire. The study showed that there were significant and complex relations
between 18 structural and affective climate measures and nine learning criteria; for
instance, characteristics such as ‘isomorphism’, ‘organization’, and ‘synergism’
predicted learning variables more frequently than ‘coaction’ and ‘syntality’
(Walberg & Anderson, 1968).
Another seminal study was the evaluation of the Australian Science Education Project (ASEP), which, between 1969 and 1974, produced learning materials for high school science classes. The sample involved 300 schools, and the evaluation used case studies as well as questionnaires (Owen, 1979). At that time, because few instruments existed, the half of the LEI scales that were relevant were selected and some new scales were developed, including a new scale of Individualization. ASEP students perceived their classrooms as more satisfying and individualized, and as having a better material environment, compared with a control group (Fraser, 1979).
The difference between these historical, founding studies and more recent evaluations of educational innovations lies in the evolving role of learning environment variables, which can serve either as independent variables or as dependent variables (i.e. criteria of effectiveness).
Inquiry-based learning encourages students to ask questions, share ideas, and engage
in dialogue to investigate information. A key component is whole-group
collaboration, although individuals participate equally and are held accountable.
Many studies have supported the effectiveness of inquiry-based programs (Wolf &
Fraser, 2008). For example, evaluation of a computer-assisted learning course in
which students used a database to explore birds of Antarctica, a study which is
described in the section on Technology Integration below, revealed positive student
perceptions of dimensions such as Investigation and Open-Endedness, both of which are hallmarks of inquiry-based learning (Maor & Fraser, 1996). In another study,
which differed from prior evaluations of inquiry-based learning in that it utilized a
control group, inquiry-based laboratory teaching was evaluated in terms of
perceptions of the class learning environment, students’ attitudes towards science,
and cognitive achievement. The data from 1,434 middle-school physical science
students in the US were collected using the WIHIC to measure the perceptions of
the learning environment, selected items from the TOSRA to measure attitudes
towards science, a 9-item scale to assess achievement based on a standardized state
test, and interviews. The instructional method was differentially effective for males
(higher with inquiry) and females (higher with non-inquiry) (Wolf & Fraser, 2008).
In two separate studies, the CLES was used in Korean high schools to assess novel
constructivist approaches. One study involved longitudinal action research with 136
earth science students and revealed that students’ perceptions became increasingly
positive over time (changes on the Personal Relevance scale were also associated
with improved attitudes towards science) (Oh & Yager, 2004). Another study
involved teachers who attended a professional development program at the
University of Iowa involving the implementation of constructivist approaches (Cho
et al., 1997).
Although many studies reviewed in Section 2.2.3.2 involved the school subject of
science, many instruments have been adapted for mathematics classes as the two
subjects are often related. For instance, one particular innovation relies on mathematics media (numbers and measurements) within an innovative science course that uses anthropometric activities. This innovation was evaluated using four scales from the SLEI, attitude scales from the TOSRA and the Fennema-Sherman instrument, and an achievement test together with report card grades. This study was carried out
with 761 high school biology students in the US, including a control group for
learning environment perceptions and attitudes (Lightburn & Fraser, 2007).
attitudes towards mathematics. For each dimension, the efficacy of the innovative
teaching model was supported (Ogbuehi & Fraser, 2007).
A number of innovative programs have been aimed at teachers, who are responsible
for transmitting science content and promoting positive attitudes towards science,
and who have an important role in the learning environment. An evaluation of a
long-term, teacher professional development program in the US, based on the
Integrated Science Learning Environment (ISLE), involved a combination of
methods: constructivist concept-mapping, psychosocial cognition, and Information
Technology (IT). The evaluation of this program was novel in that the researchers
assessed the effectiveness of the teacher-training program using a new form of the CLES (Comparative Student or CLES-CS), which has the same scales as the original CLES but includes two separate, side-by-side frequency scales for each item to rate ‘this class’ and ‘another class’ (whose teachers have not been trained through the ISLE program). In a sample of 1,079 students, those whose teachers participated in the ISLE program perceived higher levels of Personal Relevance and Uncertainty in their classes compared with other science and non-science classes in the same school (Nix, Fraser, & Ledbetter, 2005; Nix & Fraser, 2011).
was used to assess student perceptions of classroom learning environment as a
pretest and as a posttest. The use of MANOVA and effect sizes supported the
efficacy of the mentoring program in terms of some improvements over time in the
learning environment, as well as in students’ attitudes and achievement (Pickett &
Fraser, 2009).
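As an aside on the effect sizes mentioned here, the pre-test/post-test logic is often summarized with Cohen’s d; the sketch below uses simulated scores, not the Pickett and Fraser data:

```python
import numpy as np

def cohens_d(pre: np.ndarray, post: np.ndarray) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
    return (post.mean() - pre.mean()) / pooled_sd

rng = np.random.default_rng(1)
pre = rng.normal(3.2, 0.6, size=30)    # simulated pretest scale scores
post = rng.normal(3.6, 0.6, size=30)   # simulated posttest scale scores
print(cohens_d(pre, post))             # roughly a medium-sized improvement
```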
Since the advent of the computer and the Internet, there has been much pressure to
incorporate information technology into science classrooms; as well, there is an
increasing interest in evaluating the effects of this technology on students in terms of
learning environments. An evaluation of a micro-PROLOG-based Computer-Assisted Learning (CAL) program involved developing and validating a new instrument, called the Geography Class Environment Inventory (GCEI). A sample of 671 high school students in Singapore was given the GCEI, which includes four scales (Gender Equity, Investigation, Innovation, and Resource Adequacy) to measure perceptions of the learning environment, the Semantic Differential Inventory (SDI) to measure attitudes towards the subject, and a Geography Aptitude Test (GAT) to measure
achievement. Relative to non-CAL students, CAL students had higher scores for
achievement, attitudes, and perceptions of classroom environment (Teh & Fraser,
1994).
As well, Maor and Fraser (1996) evaluated inquiry-based CAL with 120 high-school
students in Western Australia who interacted with a computerized database
associated with a program entitled The Birds of Antarctica. A new questionnaire
based on the LEI, ICEQ, and SLEI, called the Computerized Classroom
Environment Inventory (CCEI), was developed to include five scales (Investigation,
Open-Endedness, Organization, Material Environment, Satisfaction). Questionnaire items were re-worded from ‘I’ statements to ‘students’ statements to suit whole-class observations. The results showed increases in student-perceived Investigation and Open-Endedness, and the teachers’ perceptions were more positive still.
actual and preferred forms), one scale from the Computer Aptitude Survey (CAS) to
measure attitudes towards computers, and one scale from the TOSRA (Enjoyment of
Lessons). While there were positive associations between perceptions of the
learning environment and students’ attitudes towards science and mathematics, there
were statistically significant differences between perceptions of the actual and
preferred environments, differences for males and females, and differences between
science and mathematics (Raaflaub & Fraser, 2002).
Regarding networked use of computers, Zandvliet and Buker (2003) considered the
relationship between technology and instruction as they evaluated Internet
classrooms in terms of the physical and psychosocial environments and student
satisfaction. They argued that technology brings more diversity to the factors that
influence the learning environment; these factors are divided into three major categories that comprise the person’s learning experience and thus satisfaction: the ecosphere (the physical surroundings, for example, lighting and space); the sociosphere (the person’s interactions with all other people within that environment, e.g. autonomy and cohesion); and the technosphere (all the man-made objects available). In one of their studies, 358 high school students in British Columbia, Canada,
responded to the actual form of the WIHIC, items from the TOSRA, and the
Computer Classroom Environment Checklist (CCEC) for physical factors (for which
the unit of analysis was the classroom and the scales included Workspace
Environment, Computer Environment, Visual Environment, Spatial Environment,
and Air Quality Rating). In another study, the physical and psychosocial learning
environments of computer-networked classrooms were evaluated for their effects on
student satisfaction. The data, collected from 1,404 students in Australian and Canadian high schools, comprised scores from the CCEI, WIHIC (actual and personal forms), and TOSRA, as well as systematic observation and case studies. These data indicated that the psychosocial environment (specifically Independence and Task Orientation) was significantly associated with satisfaction with learning, although learning satisfaction was not associated with the physical classroom environment.
However, there were statistically significant associations between the physical and
psychosocial learning environment variables in classes using new information
technology and, thus, the physical environment indirectly impacted students’
satisfaction with learning (Zandvliet & Fraser, 2005).
Aldridge and Fraser’s (2003, 2008, 2012) longitudinal study, which also involved the TROFLEI’s development and validation, evaluated a technology-rich environment that focused on outcomes-based learning. Over the four-year investigation involving 1,918 students, student perceptions became more positive on seven of the ten TROFLEI scales, although the degree of change in the learning environment varied across learning areas.
In this study, the effectiveness of virtual laboratories was evaluated in terms of not
only students’ perceptions of their learning environment (see Section 2.2) but also
students’ attitudes towards science. Below, I consider aspects of the affective
domain of learning and its relationship to the cognitive domain of learning. First,
the term ‘attitude’ is defined in Section 2.3.1. Then methods of assessment are
presented in Section 2.3.2, and this is followed by a review of the literature about
the impact of educational interventions on students’ attitudes (Section 2.3.3).
For decades, the attempt to clarify the term ‘attitude’ has engendered much
controversy because it incorporates a broad range of dimensions that are only loosely defined. Examples of such dimensions are interest,
engagement, motivation, mindfulness, flow, self-efficacy, identity, perceived ability,
the degree of fun, personal relevance, and the like. This haziness is further clouded
by the inclusion of sub-topics under the all-encompassing ‘science’ umbrella, such
as various careers, formal and informal education, perceptions of scientists and
science media (Aldridge & Fraser, 2008; Olitsky & Milne, 2012; Oliver & Venville,
2011; Tytler & Osborne, 2012). More recently, Koballa and Glynn (2007) defined an
attitude as “a general and enduring positive or negative feeling about some person,
object, or issue”, in this case, science (p. 78). This definition maintains the
neutrality of the term ‘attitude’, whereas many of the aforementioned dimensions
refer to only the positive form of the affective domain; for instance, ‘interest’
denotes a positive feeling about the subject.
Because of the lack of clarity concerning the term attitude, Klopfer (1971) began to
distinguish between ‘attitudes towards science’, the subject of this section, and
‘scientific attitudes’, a mindset committed to evaluating evidence, harboring
skepticism, and requiring rational explanations for phenomena. However, ‘attitudes
towards science’ can still encompass attitudes towards scientists, school science,
science learning experiences and activities, as well as the pursuit of science-related
careers (Tytler & Osborne, 2012). Later, Klopfer (1976) further classified the
affective domain, specific to science education, into four categories of attitudes:
towards events in the natural world (awareness and emotional responses to
experiences), towards activities (school science and informal science), towards
science in general (the nature of science as a means of knowing about the world),
and towards inquiry (the adoption of inquiry processes including methodical
assessment of phenomena).
OECD’s (2009) Programme for International Student Assessment (PISA) assesses
students every three years in a variety of subject areas. Its definition of attitudes
towards science is based on the belief that a student's scientific literacy includes
certain attitudes, beliefs, motivational orientations, sense of self-efficacy, values,
and ultimate actions, which builds upon Klopfer’s (1976) structure for the affective
domain in science education as well as other reviews of attitudinal research
(Gardner, 1975; Osborne, Simon, & Collins, 2003).
Considering other definitions for the affective domain, some educational researchers
refer to ‘engagement’, a positive feeling or a passion, as an indicator for attitudes
(Olitsky & Milne, 2012). Engagement can be further broken down into its various
components, such as behavioral engagement (e.g. on-task actions in a science
classroom or participation in extra-curricular activities), emotional engagement
(interests and values evident from students’ reactions to their environment), and
cognitive engagement (motivation, self-efficacy, and behavior) (Fredricks,
Blumenfeld, & Paris, 2004; McCarty, Hope, & Polman, 2010). A concept closely
related to 'engagement' is 'flow', defined as "the feeling generated by total
engagement with an activity” (Tytler & Osborne, 2012, p. 605). According to a
pioneering study by Csikszentmihalyi and Schneider (2001), tests, quizzes, and
concrete tasks, including laboratory work, all produced above-average levels of
‘flow’ while the presentation of lectures and video clips produced little ‘flow’. The
current study involved virtual laboratories, whose use was anticipated to produce
greater ‘flow’.
introduced to students by embedding them into supportive learning conditions
including student motivation and interests, as well as the beliefs of learners and
teachers. Furthermore, the authors asserted the need for students to be engaged in
mindful learning. According to Salomon and Globerson (1987), mindfulness
involves “volitional, meta-cognitively guided employment of non-automatic, usually
effort-demanding processes” (p. 623). Accordingly, the learning benefits of being
motivated and mindful are expected to be long-term because they are related to
higher levels of learning that engage all faculties and produce stronger impressions
in the minds of learners.
Similarly, Fraser (1978) noted three major limitations of existing instruments used
to assess attitudes toward science: low statistical reliability, a lack of economy of
items, and the combination of different attitude dimensions into a single scale, which
confounds distinct variables. In response, Fraser (1981) developed the Test of
Science-Related Attitudes (TOSRA). This is the instrument that was selected for the
current study because some of its scales were deemed highly suitable for the
investigation of how students’ attitudes towards science changed as a result of using
virtual laboratories.
Table 2.4 Fraser's (1981) TOSRA Scales and Klopfer's (1971) Classification

TOSRA Scale Name                  Klopfer Classification
Social Implications of Science    H.1 Manifestation of favorable attitude towards science and scientists
Normality of Scientists           H.1 Manifestation of favorable attitude towards science and scientists
Attitude to Scientific Inquiry    H.2 Acceptance of scientific enquiry as a way of thought
Adoption of Scientific Attitudes  H.3 Adoption of scientific attitudes
Enjoyment of Science Lessons      H.4 Enjoyment of science learning experiences
Leisure Interest in Science       H.5 Development of interest in science and science-related activities
Career Interest in Science        H.6 Development of interest in pursuing a career in science
Items in the TOSRA are arranged on a five-point Likert scale with the responses
Strongly Agree, Agree, Undecided, Disagree, and Strongly Disagree. Approximately
half of the items in the TOSRA are negatively worded, thus challenging the
respondent to think carefully about each statement.
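As an aside on scoring, negatively worded Likert items are conventionally reverse-scored before a scale total is computed. The following minimal sketch (in Python, with entirely hypothetical item names and responses, not TOSRA data) illustrates that arithmetic:

```python
# Minimal sketch (hypothetical data): scoring a 5-point Likert scale in
# which some items are negatively worded. Assumed coding: 1 = Strongly
# Disagree ... 5 = Strongly Agree; item names are illustrative only.

responses = {"item1": 4, "item2": 2, "item3": 5, "item4": 1}
negatively_worded = {"item2", "item4"}  # assumed reverse-keyed items

def score(item: str, value: int) -> int:
    # Reverse-score negative items so that 1<->5, 2<->4, and 3 stays 3.
    return 6 - value if item in negatively_worded else value

scale_total = sum(score(i, v) for i, v in responses.items())
print(scale_total)  # higher totals reflect a more positive attitude
```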
For the current study, attitudes were assessed using a modified version of the
Enjoyment of Science Lessons scale as well as the Attitude to Scientific Inquiry
scale. Sample items of the former include “I look forward to this class” and “This
class is among the most interesting at this school”, whereas examples of the latter
scale are “I would prefer to do experiments than to read about them” and “It is better
to create my own hypothesis than to be given a hypothesis to test out”. To avoid
confusion in responses, the items were all worded positively, as recommended by
Barnette (2000). As well, because each TOSRA scale contains 10 items, items that
were highly similar to others in the same scale were removed so that each scale
matched the eight-item length of the other scales in my study's questionnaire.
Recently, research about students’ science-related attitudes has been on the rise
because of a decrease in student enrolment in the sciences at the secondary and
tertiary levels of education, especially in Western countries (Osborne, Simon, &
Collins, 2003). In fact, there seems to be an inverse relationship between the
economic advancement of a country and its students' interest in school science.
In general, attitudes towards science tend to decline with age: students in the
younger grade levels report enjoyment of science lessons, middle-school students
begin to lose interest, and high school students enjoy science the least of all
age groups. Similarly, gender differences in attitudes are less apparent in the
younger years and emerge during middle school, especially in relation to the
compartmentalization of the sub-topics within science, such as physical science and
chemistry (Oliver & Venville, 2011; Tytler & Osborne, 2012). Nevertheless,
despite the decrease seen in science attitudes, the overall interest in science remains
predominantly positive (Tytler & Osborne, 2012).
However, the decline in attitudes towards science is troubling because attitudes
correlate with achievement. The Trends in International Mathematics and Science
Study (TIMSS) showed a consistent relationship between attitudes and achievement
over the years, with students with more positive attitudes having higher achievement
in science than those with medium or low attitudes in science (Nasr & Soltani, 2011;
Neuendorf, 2002). The PISA study in 2006 reported that most students agree that
science is important to learn and that science and technology improve living
conditions, but that fewer students found science personally relevant and that even
fewer students expressed an interest in pursuing a science-related career. The study
also showed a correlation between socio-economic status and interest in science-
related careers (Organization for Economic Co-operation and Development
(OECD), 2009).
and pre-adolescent experiences are the most notable determinants, but others include
self-evaluation of science ability, parental expectation and level of guidance,
exposure to career guidance and goals, exposure to inspirational teachers, and
teacher expectation of success (Osborne, Simon, & Collins, 2003; Shibeci, 1984;
Tytler & Osborne, 2012).
Once such determinants are identified, researchers and educators can implement
proactive strategies that address such issues in order to improve students’ interest in
science. For instance, enrichment experiences in school science have been shown to
be effective in raising students’ positive attitudes towards science (Quek, Wong, &
Fraser, 2005; Tytler & Osborne, 2012). Olitsky and Milne (2012) propose the
development of programs that focus on engagement in science, provide
opportunities for students to construct their own meanings in science through direct
experience, and engage students at an emotional level. A study exploring Olympiad
(honors-level) students' attitudes towards and passion for science observed more
positive attitudes as a result of this enrichment program, even though school
science originally had decreased the students' interest in science (Oliver & Venville, 2011).
Nasr and Soltani (2011) conducted a longitudinal study to examine the relationship
between attitudes towards science and achievement in science in a grade 10 biology
course in Isfahan, Iran. They found no statistically significant differences between
the sexes. However, using the Simpson–Troost Attitude Questionnaire–revised
(STAQ–R), meaningful positive associations were uncovered between achievement
and the dimension of ‘biology is fun for me’. The other dimensions, which lacked
significant associations with achievement, included Motivating Biology Class, Self-
Directed Efforts, Family Models, and Peer Models.
course, observing their levels of motivation, and through photographs and
videotapes (Barak & Asad, 2012).
as the SLEI (Fraser, Giddings, & McRobbie, 1995; Kijkosol, 2005; Martin-Dunlop
& Fraser, 2007), the QTI (Fisher, Henderson, & Fraser, 1995; Kijkosol, 2005; Quek,
Wong, & Fraser, 2005), the WIHIC (Khoo & Fraser, 2008; Martin-Dunlop & Fraser,
2007; Raaflaub & Fraser, 2002; Wolf & Fraser, 2008), and the TROFLEI (Aldridge
& Fraser, 2003, 2008; Koul, Fisher, & Shaw, 2011; Koul & Fisher, 2005).
Adaptations of the TOSRA to mathematics classes, called the TOMRA, also showed
positive associations with learning environment scales from the CLES, WIHIC,
ICEQ, and MCI (Mink & Fraser, 2005; Ogbuehi & Fraser, 2007; Spinner & Fraser,
2005).
The current study investigated the effectiveness of virtual laboratories (the third
research question) as well as their differential effectiveness for males and females
(the fourth research question). While a full review of literature on gender issues in
science education is beyond the scope of this thesis, a review of the perceptions,
attitudes, and achievement of the different sexes in science education is necessary to
provide a context for this investigation of whether virtual laboratories assist or
hinder gender equity. If virtual laboratories assist in closing the gender gap in
science education, they could be utilized in the classroom with greater confidence
about their many benefits. On the other hand, if virtual laboratories are
differentially beneficial for one sex over another, such differences would have to be
taken into account when implementing their use in the classroom.
The National Assessment of Educational Progress (NAEP) reported in 2011 that American males
in grade eight scored on average five points higher than females in science
achievement examinations, which is consistent with the same study conducted in
2009 (National Center for Educational Statistics (NCES), 2012a). However, recent
research has pointed to the absence of such a gender gap in the sciences (Koul,
Fisher, & Shaw, 2011; Scantlebury, 2012). Whether the absence of a gender gap
naturally exists or whether it exists as a result of interventions intended to create
equality between the sexes is reviewed below.
In 2009, the Programme for International Student Assessment (PISA) revealed small
gender differences amongst 15 year-old science students regarding attitudes and
achievement, but the results were inconsistent in that they varied with different
countries, types of schools, and socio-economic levels (Organization for Economic
Co-operation and Development (OECD), 2009). The Trends in International
Mathematics and Science Study (TIMSS, 2007) reported gender differences in favor of girls at the fourth and
eighth grade levels. As well, female students in grades 4, 8, and 10 scored higher
than males on hands-on science tasks, though males scored higher on the traditional
paper-and-pencil science assessment. In the same study, there was no gender gap in
interactive computer tasks in science (National Center for Educational Statistics
(NCES), 2012b). In an individual study of grade 10 biology students in Isfahan,
Iran, no significant differences between males and females were reported for
attitudes, but females scored higher in achievement (Nasr & Soltani, 2011).
Numerous studies using various learning environment questionnaires have
replicated a pattern in which females scored more highly than males on scales such
as Rule Clarity, Task Orientation, Cooperation, Equity, and Teacher Support.
However, for scales such as Involvement, Investigation, Differentiation, and Young
Adult Ethos, variable results have been reported for differences between the sexes
(Aldridge & Fraser, 2008; Khoo & Fraser, 2008; Kijkosol, 2005; Koul, Fisher, &
Shaw, 2011; Quek, Wong, & Fraser, 2005; Raaflaub & Fraser, 2002; Wolf & Fraser,
2008). Overall, females tend to perceive most aspects of their science learning
environment more favorably than their male counterparts. Furthermore, a number of
these same studies showed more positive attitudes towards science for males relative
to females (Khoo & Fraser, 2008; Raaflaub & Fraser, 2002; Wolf & Fraser, 2008).
Similarly, gender differences in interest in science-related careers have been
consistent. Females tend to perceive a lack of relevance of the physical
sciences to their personal lives and often avoid choosing careers that are heavily
based on such a subject. Instead, they tend to show interest in science careers that
involve nurturing (e.g. nursing). As well, life demands have a larger impact on
women than on men, which ultimately might cause women to neglect or
underachieve in science-related careers. The opposite is generally true for males
who show more interest in demanding science-related careers (Beede, Julian,
Langdon et al., 2011; Oakes, 1990; Scantlebury, 2012).
In general, students’ attitudes towards science decline as they go through the science
‘pipeline’ from preschool to their careers, but this decline is greater for girls than for
boys. Female interest in science typically begins to decrease in the middle school
years (Scantlebury, 2012).
Why is the extent of the decline in interest in science with grade level unbalanced
between the sexes? One contributing factor might be teachers' preconceived notions about
the difference in abilities between males and females. For instance, some research
has revealed that some teachers call on boys to answer more-challenging questions
and encourage them towards science-related careers (Oakes, 1990; Scantlebury,
2012). Huang and Fraser (2009) conducted a study involving 818 Taiwanese male
and female science teachers’ perceptions of the school environment. A critical
finding from this study was that male science teachers reported that science is a
subject more suitable for boys and that they encouraged boys more than girls in this
area, while female teachers viewed science as equally important for boys and girls.
If teachers instill more confidence in males than in females in the sciences, then
males are more likely to excel. In fact, Thompson (2008) claims that gender
differences in science are due to differences in levels of self-confidence in learning
science, rather than intellectual ability, and, because males have more self-
confidence, they tend to outperform females.
Thus, it seems that, naturally, little difference exists between the sexes regarding
their attitudes and achievement in science, but that these gender differences could be
created by teachers or other educational interventions that tip the scale in favor of
male interest and achievement in science. This could also explain why traditional
classrooms have a narrower gender gap than classrooms with an innovative
intervention (Wolf & Fraser, 2008).
If such gender differences, whether natural or contrived, exist, how might education
be reformed to encourage more female interest in the sciences in order to reduce or
eliminate the gender gap? This question was addressed in the early 1980s with the
rise of feminism by the introduction of ‘girl-friendly’ curricula that highlight
women’s contributions to science and other female-focused themes (Scantlebury,
2012).
In another study, grade nine Israeli Arabs were exposed to an integrative Science,
Technology, Engineering and Mathematics (STEM) intervention about image
processing using computers. In this case, the pretest for interest in learning
computers in school showed sex differences, with males outperforming females, but
no significant differences were found between the sexes for the posttest (Barak &
Asad, 2012). An intervention such as this can help to decrease the gender gap in
science education.
2.5 Virtual Laboratories in Science Education
The use of technology for instruction is not a new idea; rather, what the term
'technology' refers to changes with time. In the early part of the 20th century,
'technology' might have referred to phonographs and radios; it later progressed to
sound recordings, television and computers (Russell, 1999), and more recently has
included interactive whiteboards (Moss, Jewitt, Levačić et al., 2007), Personal
Response Systems (Herrmann, 2012), iPads (Nooriafshar, 2011), and other mobile
devices (Milrad & Spikol, 2007). Naturally, these technologies have been adapted
to the educational realm and, in parallel, their educational effectiveness has been
evaluated. A full review of the integration of technology in education is beyond the
scope of the current study; this section merely examines the general role of
technology in science education.
Many technological advances are quickly revolutionizing the rate of discovery and
youngsters are expected to be familiar with such innovations. As Javidi (1999)
notes: "To allow educational tools to fall behind the pace of technological
advance is to sell out a generation of learners” (p. 1). Because students learn better
from processes which are sensory, visual, inductive, and active (Felder & Silverman,
1988), they benefit from lessons that are interspersed with technology-rich activities
that contain digital images and animations, activities that involve the use of
simulations and databases, and research via the internet (Beichner, Bernold,
Burniston et al., 1999; Trindade, Fiolhais, & Almeida, 2002).
Whether or not the evidence supports its use (see Section 2.5.5), technology is the
comfort zone for many students today. This idea is most eloquently summarized by
the terms ‘digital natives’, referring to those born into an era surrounded by
technology and are thus conferred with the ability to manage it, and ‘digital
immigrants’, referring to those who need to adjust to technological innovation; it is
argued that the thought processes of the former are fundamentally different from
those of the latter (Prensky, 2001). Therefore, the modernization of presentation
modes in education might be of benefit to students and to teachers who could use
more tools to reach the young minds that have been trained by popular entertainment
media to seek constant stimulation.
As applied to the natural sciences, specifically regarding the topic of genetics, one
study showed that the use of multiple representations dynamically linked in an
interactive multimedia program called BioLogica enhanced students' learning of
introductory genetics. In this case, the intervention enabled teachers “to increase the
use of visual-graphical representations, thus making genetics more interesting and
easier to learn and understand” (p. 285). The authors underscore the role of the
teacher in encouraging students to engage with such multimedia programs (Tsui &
Treagust, 2004).
Overall, innovations that alter the dynamic of the traditional classroom, from
collaborative teaching to the incorporation of technology such as online textbooks
and virtual laboratories, to instances of ‘learning without walls’ such as fully online
classes or distance education, initiate a paradigm shift in defining the learning
environment. With such innovations, the teacher’s role as director diminishes and a
new model of teacher as facilitator emerges that allows for more student-focused
learning; the focus is on ‘learning’ and not necessarily on ‘teaching’ (Chang &
Fisher, 2003; Rogers, 2000). Another byproduct of such innovations, especially
concerning online and distance education, is the globalization of communication
within education, which allows trans-cultural exchange (van de Bunt-Kokhuis,
2001).
Zandvliet and Fraser (2004) note a number of challenges that prevent the successful
integration of technology into classrooms. They point out that the use of ICT in
schools is partially attributable to technological, commercial and societal pressures
but that, once a school invests in ICT, there is little support to make it educationally
beneficial. To do so, schools need to better integrate ICT with their curriculum and
instruction, which might be augmented by the physical learning environment. The
authors discuss the need for a healthy balance of all spheres of influence: the
ecosphere (e.g. equipment, network), the sociosphere (interactions with other people,
perceptions, outcomes, learning, attitudes), and the technosphere (technical
factors that impact instruction, such as the goals of teachers) (Zandvliet & Fraser, 2005).
Another significant factor that affects the usefulness of ICT is the technological
experience of the teacher. The National Center for Educational Statistics (Smerdon,
Cronen, Lanahan et al., 2000) revealed that 99% of teachers in public schools in the
US had access to computers or the Internet, and that 84% had at least one computer in
the classroom, but that only 20% felt well prepared to integrate technology into their
teaching. Similarly, another study showed that the primary use of ICT by teachers
was email for communicating with students' homes, while students used it mainly for
word processing and Internet research; therefore, neither group was engaging with
the full range of tasks and advantages that ICT offers (The California Educator,
2003). A common frustration for teachers using ICT is the amount of time spent on
technical issues rather than instructional ones (i.e. the technosphere is too large)
(Zandvliet & Fraser, 2004).
Perhaps, owing to some of these challenges, Jones (2012) argues that the impact of
technology on the teaching and learning of science "has probably not reached the
potential we thought it might when we began exploring its introduction
25 years ago" (p. 820). Either studies are simply not producing evidence that
technology integration is beneficial (see Section 2.5.3), or the process of integrating
technology into classrooms must be refined by incorporating a broader spectrum of
technological programs, training teachers in their use, providing better spaces in
which to use technology, and designing more accurate studies to evaluate their
effectiveness. My study represents one such attempt to add to the body of research
on the effectiveness of integrating technology in science classes.
2.5.2 Virtual Laboratories
This section examines the literature that specifically addresses aspects of virtual
laboratories, such as their definition (Section 2.5.2.1), history (Section 2.5.2.2), and
benefits (Section 2.5.2.3).
2.5.2.1 Definition
The specific attempt to integrate technology into science classrooms that was
assessed in this study concerns virtual laboratories, which are interactive
environments for conducting simulated experiments. In more general terms, a
virtual laboratory is defined as “an electronic workspace for distance collaboration
and experimentation in research or other creative activity, to generate and deliver
results using distributed information and communication technologies”, according to
the International Institute of Theoretical and Applied Physics at the Expert Meeting
on Virtual Laboratories in Iowa, USA in 1999 (Rauwerda, Roos, Hertzberger et al.,
2006, p. 230). Essentially, such modalities make use of networked content to
provide a rich immersive learning environment using visualizations, graphics, and
interactive applications.
The term ‘virtual laboratories’ is often used loosely amongst software developers
who wish to entice educators into their usage. Indeed, the concept encompasses five
different categories, according to Harms (2000), only three of which are currently
relevant to this study (Borgman et al., 2008; Nedic, Machotka, & Nafalski, 2003)
and whose boundaries also become somewhat blurred (Ma & Nickerson, 2006):
Simulations that contain certain elements of laboratory experiments but are
mainly used for visualization and are available online. These are referred to as
classical simulations and 'CyberLabs' and are further discussed in Section 2.5.3.1.
Real experiments that are controlled via a network, the settings and output of
which are accessible through the Internet. These are known as Remote Labs.
The benefits of Remote Labs are discussed by Alhalabi (1998). Remote Labs were
first used most commonly for robotics and then expanded to other areas of
engineering. Examples include the University of South Australia's NetLab (Nedic,
Machotka, & Nafalski, 2003), MIT's iLabs project, which offers microelectronics test
equipment and the like (http://icampus.mit.edu/ilabs/), Second Best to Being There
(SBBT) from Oregon State University that provides remote students with complete
access to a control engineering laboratory (Bohus, Aktan, Crowl et al., 1996), and
the Virtual Lab at Carnegie Mellon University
(http://users.ece.cmu.edu/~stancil/virtual-lab/concept.html). For example, the iLabs
inverted pendulum experiment at the University of Queensland permitted users to
access the experiment beyond laboratory hours and increased the rate at which
students successfully balanced the pendulum from 5% to 69.5% (Borgman et al., 2008).
Another category delineated by the NSF Task Force on Cyberlearning is a mixed-
reality environment that combines digital content and real-world spaces, allowing
users to see the machinery involved while interpreting output electronically (Borgman et
al., 2008). However, further discussion about these types of remote virtual
laboratories is beyond the scope of this review and better explored under an
Information and Communications Technology (ICT) framework. The remainder of
this section explores the second category of virtual laboratories (i.e. simulations that
closely represent laboratory experiments).
2.5.2.2 History
While the concept of virtual laboratories (as encompassing remote laboratories and
simulations) dates back to the 1970s, the development of true virtual laboratories
specifically related to the life sciences is of greater relevance to the current study.
One of the first such initiatives in the 1980s was the Genetics Construction Kit
(GCK) that illustrates classical Mendelian genetics by simulating fruit fly variations.
Similarly, simulations of genetic transmission of traits in cats, called CATLAB
(http://www.emescience.com/sci-genetics-catlab.html), in fruit flies, called the
Virtual FlyLab (http://biologylab.awlonline.com/), and in pea plants and dragons,
called BioLogica (http://biologica.concord.org/), were developed in the 1990s and
were widely used in science classrooms. Later, ViBE: Virtual Biology Experiments
(http://www.ece.rutgers.edu/~marsic/books/SE/projects/ViBE/) was created in 2001
to allow students to discover biological processes and practice laboratory skills. All
of these programs served as the inspiration for the Virtual Genetics Lab, developed
in 2007, to test predictions of genetic crosses for various traits in a hypothetical
insect (http://vgl.umb.edu/). It enabled students to “practice the logic of genetic
analysis without the distractions of wet labs" but was not intended to "replace a wet
lab” (White, Bolker, Koolar et al., 2007, p. 30).
A myriad of such software emerged in the 21st century for medical students and
university and high school students in the sciences (Yu, Brown, & Billet, 2005), but
the ones most commonly used in the current study include: the Howard Hughes
Medical Institute virtual laboratories (http://www.hhmi.org/biointeractive/vlabs/) for
exploring topics in molecular genetics, cardiology, neurophysiology and the immune
system (HHMI, 2003), and the University of Utah’s virtual laboratories
(http://learn.genetics.utah.edu/) that prepare students with basic skills in molecular
genetics experiments and involve investigation of the molecular basis of cancer
(University of Utah, 2004). A full list of the virtual laboratories used in this
investigation is provided in Appendix D, and a description of their implementation
is included in Chapter 3.
2.5.2.3 Benefits
Because of the recent rise in the biotechnology industry, and the job opportunities
thus afforded, innovations in teaching biotechnology and molecular biology
concepts have become vital (Toth, Morrow, & Ludvico, 2009). Virtual experiments
enable users to focus on conceptual explanations because the program keeps track of
detailed data, freeing users to concentrate on the 'big
picture'. Moreover, state education standards are becoming increasingly demanding,
as noted in Section 2.2.2, particularly regarding the molecular focus of biology with
which students often have difficulty. To address this concern, the use of virtual
laboratories in the classroom can help to make these molecular concepts more
concrete for students without requiring complex and costly equipment (Marbach-
Ad, Rotbain, & Stavy, 2008; Raineri, 2001), and thus assist in narrowing the gap
between lagging levels of student achievement and the proposed higher standards to
which students are held accountable. Therefore, the use of virtual laboratories can
aid in both the conceptual and constructivist realms, allowing students to learn
by doing and become more engaged in their studies (Clancy, Titterton, Ryan et al.,
2003; Felder & Silverman, 1988; Gallagher et al., 2005; Marchevsky, Relan, &
Baillie, 2003; Yu, Brown, & Billet, 2005).
Similarly, Toth et al. (2009) describe their efforts to develop a tool that preserves the
beneficial aspects of hands-on laboratory work while deepening the quality of
inquiry learning in a complex, error-prone environment; according to them, virtual
laboratories “allow the user to conduct the same scientific inquiry afforded by
hands-on investigation but at a reduced expense, with increased safety, and within
the time constraints of a…classroom” (p. 334). They also describe the benefits of
virtual laboratory equipment that automates routine tasks, such as mixing solutions
and forming agarose gels, and allows students to focus on the inquiry aspects of an
experiment rather than the technical tasks. Additionally, many virtual laboratories
contain visual representations or animations that explain the mechanism of the
virtual equipment, an interactive feature unavailable in hands-on laboratories where
only the end state of a reaction occurring inside complex machinery is revealed to
students (Toth, Morrow, & Ludvico, 2009).
way of instant feedback from data manipulations, form more accurate mental
models of phenomena, and can even use these virtual simulations as practice to
prepare them conceptually for complex hands-on experiments (Zacharia, 2007). Yu
et al. (2005) constructed a system that draws on the instant feedback feature by
providing an intelligent tutoring agent that offers advice for students to correct their
mistakes while conducting a virtual experiment. Naturally, virtual experiments are
repeatable within and outside of the classroom, a feature that serves to prepare
students prior to beginning a hands-on experiment and allows them to review an
experiment after it has been conducted (Cobb et al., 2009; Reising, 2010).
On the other hand, disadvantages of utilizing virtual laboratories include the use of
idealized data, lack of collaboration, and the absence of interaction with real
equipment (Hofstein & Lunetta, 2004; Nedic, Machotka, & Nafalski, 2003). Waight
and Abd-El-Khalick (2007) add that true inquiry in virtual experiments can also be
affected because of the perceived authority of technology. Winn et al. (2006) point
out that such technological tools can favor students who have more prior knowledge.
Ultimately, many of these disadvantages can be avoided with the application of
good design principles for the implementation of virtual laboratories (Annetta,
Klesath, & Meyer, 2009; Toth, Morrow, & Ludvico, 2009). To summarize, hands-
on laboratory advocates emphasize design skills (Ma & Nickerson, 2006) and the
importance of making and learning from errors (Toth, Morrow, & Ludvico, 2009),
while virtual and remote laboratory advocates focus on the benefits gained in
conceptual understanding (Marbach-Ad, Rotbain, & Stavy, 2008; Marchevsky,
Relan, & Baillie, 2003; Raineri, 2001; Toth, Morrow, & Ludvico, 2009).
2.5.3.1 Simulations
enriching a science learning experience (Thurmond, Holmesa, Annetta et al., 2011),
but a full review of SEGs is beyond the scope of this discussion. The former, the
subject of the current study, is meant to either replace or supplement essential
experiences that could not otherwise be had in science classrooms. However, to
make virtual laboratories attractive, they are often designed similarly to SEGs
because “as the Net Generation (currently the leading population playing online
games) reaches college age, the adaptation of a three-dimensional, game-like
environment into a virtual classroom seems to be the natural evolution in online
learning” (Annetta, Klesath, & Meyer, 2009, p. 27).
In the current study, simulations were considered only as a component of virtual laboratories.
Therefore, the review of literature concerning their effectiveness is limited in this
chapter but many other studies contain a more in-depth discussion of the benefits of
simulations (Bell & Trundle, 2008; Burkholder, Purser, & Cole, 2008; Dori &
Barak, 2001; Finkelstein, Adams, Keller et al., 2005; Marbach-Ad, Rotbain, &
Stavy, 2008; Winn et al., 2006).
One of the areas in which virtual laboratories have the potential to be most useful is
online education. This also happens to be the fastest growing area in education
today. In the US, enrolment in full-time virtual schools has increased 40% in the
last three years and, according to the International Association for K–12 Online
Learning, nearly two million students take at least one online class in the US alone
(Banchero & Simon, 2011; International Association for K–12 Online Learning
(iNACOL), 2012). Well-known American universities (e.g. Harvard University and
Stanford University) are beginning to invest in a venture that offers free classes
online despite the lack of economic gain (Perez-Pena, 2012). To ensure that their
students are well prepared for the world of online education and the future job
market, some school districts and states require the successful completion of an
online course in order to graduate (Brown, 2012).
In some cases, schools are entirely online and there are no bricks-and-mortar
buildings. However, this is mostly frowned upon and the most beneficial
arrangement is a learning model that blends traditional instruction with online
activities or vice versa, as one professor of education and editor of The American
Journal of Distance Education stated: “There is no doubt that blended learning can
be as effective and often more effective than a classroom” (Herrera, 2011, Paragraph
20). In order to create a viable and effective arrangement for blended instruction,
Herrera describes three requirements: proper design of the virtual course (or aspect
thereof), the inclusion of direct teacher instruction within physical classrooms, and
an appropriate maturity level among students taking the course.
2.5.4 Overview of Studies Employing Virtual Laboratories
Although some projects using virtual laboratories have only recently begun in
schools, and started to show positive results, several researchers note the lack of
empirical evidence concerning their effectiveness (Harms, 2000; Hofstein &
Lunetta, 2004; Javidi, 1999; Javidi & Sheybani, 2006). Ma and Nickerson (2006)
acknowledge the necessity to further evaluate, via controlled studies, the educational
effectiveness of laboratory simulations developed by software companies.
Conversely, Chandra and Fisher (2009) urge teachers, albeit often untrained in ICT, to
become more proactive in helping to develop educational technology because they
possess valuable knowledge and experience for designing and sequencing such
activities.
seek to imitate a real laboratory experiment using inquiry skills and which
involve students observing phenomena, formulating hypotheses, setting up
controls, following procedures, testing hypotheses, and analyzing results.
(Virtual experiences to clarify a concept through simulation/modeling are not
included.)
While formal evaluative analysis has yet to be completed, anecdotal and preliminary
evidence gathered over four semesters from university students using virtual
laboratories from iLabs in their biology course points to a gradual increase in class
performance. More promising was the significant decrease in the number of students
failing the course (Raineri, 2001). In a similar study of 39
college students taking an introductory biology course, using a crossover design to
compare hands-on and virtual laboratory activities, quantitative data showed no
effect of the order of the instructional methods, but revealed that integrating
virtual and hands-on laboratories was more effective than hands-on laboratories alone.
Qualitative data indeed pointed to the efficacy of engaging in virtual laboratories
before the hands-on ones (Toth, Morrow, & Ludvico, 2009).
In another study that used ‘presence’ (the ability to perceive virtual representations
as real people or objects despite not being able to touch them directly) as a measure
of effectiveness, entomology students reported high levels of such ‘presence’ when
creating and manipulating a virtual ‘bug farm’ as a supplemental activity in their
course. The activity used a multi-user, three-dimensional format similar to video
games. In this case, males experienced a greater sense of
‘presence’ than females (Annetta, Klesath, & Meyer, 2009).
noted for the second trial; therefore, virtual laboratories were shown to be as
effective as, if not more effective than, physical laboratories (Pyatt & Sims, 2012).
Because the topic of dissection in science classes has aroused much controversy
(Orlans, 1988), virtual laboratories that involve dissecting ‘specimens’ online
provide a viable alternative to real dissections. Studies of the value of virtual frog
dissections compared with traditional dissections using real specimens have revealed
mixed results; some suggested that real dissections are more effective (Cross &
Cross, 2004), while others suggested the supremacy of simulated dissections for
improved achievement (Akpan & Strayer, 2010). It should be noted that these
studies used small sample sizes and contained other methodological limitations.
In reality, science classes should blend real and virtual experiments so that students
acquire the skills necessary to perform the required technical tasks; virtual
simulations are useful for transferring knowledge and skills from an idealized
(virtual) environment into physical reality (Yu, Brown, & Billet, 2005). Indeed, a
number of studies suggest the desirability of integrating hands-on laboratories with
virtual ones and the effectiveness of engaging in virtual experiences prior to the real,
hands-on investigation (Akpan & Strayer, 2010; Cobb et al., 2009; Toth, Morrow, &
Ludvico, 2009). As well, Nedic et al. (2003) recommended concentrating on virtual
laboratories in the first year of a four-year engineering program and then slowly
working towards physical laboratories in the remaining years. In general, skill
acquisition through virtual environments is expected to be more successful if it is
scheduled on an interval basis, including the alternation of physical laboratories and
regular lessons, rather than amassed into a short period of intense practice
(Gallagher et al., 2005).
While this section examines the merits and demerits of virtual experiments that
cannot be conducted in real, physical laboratories, it is important to distinguish
between virtual and physical laboratory environments. The laboratory has been a
prominent feature of science education since the inception of teaching science
systematically in the 19th century. A laboratory refers to “experiences in school
settings in which students interact with equipment and materials or secondary
sources of data to observe and understand the natural world” (Hofstein & Kind,
2012, p. 190). However, in the early years of science experimentation in schools,
laboratories were simply environments in which to practice or confirm information
learned from lectures or textbooks. The laboratory's evolution into a space in which
exploration and inquiry can occur took decades, and that process is still ongoing.
Ultimately, science learning environments that are rich in practical experiences, as
compared to those with few laboratory experiences, have been shown to be
beneficial for student attitudes and learning, a benefit that might ultimately
contribute to choosing a career in science (Hofstein & Kind, 2012; Hofstein &
Lunetta, 2004).
similar to how ‘real scientists’ practise, helps in forming positive impressions early
on. The hands-on interaction with materials and equipment, and the trouble-
shooting involved, expose students to some of the challenges that real scientists
encounter. Additionally, the tactile experiences in a physical setting might enhance
conceptual development. In comparison, virtual laboratories manipulate reality. As
previously mentioned, a virtual environment allows idealized data, as well as
unobservable data, and avoids technical problems associated with equipment.
Virtual laboratories allow interactions with equipment and materials, and so the
definition of 'hands-on' takes on a new meaning beyond the tactile realm. In line
with the handful of studies described above, that review also concludes that a
blend of physical and virtual environments is the most effective method for allowing
both physical interaction and conceptual development in science. In fact, the
determining factor in the effectiveness of any method is not the context in which the
experience takes place, but the degree to which inquiry is fostered (de Jong, Linn, &
Zacharia, 2013).
The term ‘inquiry’ was originally described by Kempa and Ward (1975) as 1)
planning an experiment, 2) carrying out the experiment, 3) observations, and 4)
analysis, applications, and explanation of results. More recently, Hofstein and Kind
(2012) stress the importance of incorporating metacognition into all activities so that
students are engaged in planning how to approach a task, monitoring their
comprehension of a task, and evaluating their progress as they execute the task.
Four conditions are necessary in order to foster an environment of inquiry where
metacognition can occur: time, opportunity, guidance, and support (Baird & White,
1996). Regarding the first condition, time can be afforded by reducing the amount
of time spent on tasks that can be handled by technology, as in virtual experimentation.
studies above evaluating virtual laboratories from an educational standpoint were
based on small sample sizes and did not adhere to strict standards of research.
Consequently, there is a dearth of solid evaluative research on virtual laboratories
from an educational perspective, and especially within a learning environments
framework. Therefore, the aim of my study was to evaluate the effectiveness of
virtual laboratories used in educational settings at the high school level, in terms of
the learning environment, attitudes, and achievement.
Starting with the advent of new media technologies in the early part of the 20th
century, overly hopeful inventors envisioned a future without textbooks. In 1913,
Thomas Edison stated, "Books will soon be obsolete in the schools.... Our school
system will be completely changed in 10 years" (Saettler, 2004, p. 98), referring to
the emergence of the motion picture as a new medium for education. Contrary to this
claim, textbooks are still used widely in classrooms today.
One of the first academic evaluations of the application of technology to education
focused on correspondence education involving the use of media such as
loudspeakers (Loder, 1937) and phonographic recordings (Rulon, 1943). The
achievement scores of students who were face-to-face with their instructors were
compared with scores of students who were not; neither study showed significant
differences. Nor were any significant differences found between students learning
via instructional radio and students being taught by traditional methods (Woelfel &
Tyler, 1945). In 1950, a study of 9th grade biology students compared three
instructional methods: sound films, sound films plus study guides, and a
standard lecture demonstration. Again, no significant differences were revealed in
achievement scores between the three groups (Van der Meer, 1950).
summed up as no significant difference” (p. 5). In reviewing educational
technology, Thompson, Simonson, and Hargrave (1996) indicated that, for every
study showing educational benefits of a medium, there was another that suggested
the opposite. Yet again, nearly 20 years
ago, Salomon and Perkins (1996, p. 3) observed that “computers, in and of
themselves, do very little to aid learning. Their presence in the classroom along with
relevant software does not automatically inspire teachers to rethink their teaching or
students to adopt new modes of learning”.
With the advent of the Internet, the quantitative and qualitative increase of
instructional media provided a new focus for educational research. From the
integration of online software into classrooms (Goldberg, 1997; Klass & Crothers,
2000) to classes conducted entirely online (Hiltz & Wellman, 1997; Horn, 1994;
Johnson, 2002; Martin & Rainey, 1993; Mock, 2000), a new focus for evaluation
was born, but results were generally consistent with the 'no significant difference'
trend.
substitution of digital textbooks for hardcover ones within a five-year time span,
citing that South Korea planned to have such an initiative in place for its students
by 2013 (Hiltzik, 2012).
Russell, in his original article (1992), questioned why empirical research results for
educational technologies are ignored, often to the detriment of the students.
Professional educators and, of course, technologists and product developers, adhere
to the myth that increased technological interaction, often the more appealing,
newsworthy, costly type, improves education. In fact, many new technologies are
claimed to produce statistically significant results. How is that possible?
Critics, while admitting some potential effectiveness, also point to other downsides
of media: “Well-produced multimedia features can improve students' understanding
of difficult or recondite concepts. But there's a fine line between an enhancement
and a distraction” (Hiltzik, 2012, para. 21). Also, funds spent on multimedia drain
the financial resources available to recruit, hire, and train high-quality teachers, an
important determining factor in students’ attitudes and achievement.
Nearly 30 years ago, Clark stated: “The best current evidence is that media are mere
vehicles that deliver instruction but do not influence achievement any more than the
truck that delivers our groceries causes changes in nutrition...only the content of the
vehicle can influence achievement” (Clark, 1983, p. 445). In order to sway public
perception, a battle between the message and the media ensues and, usually, the
commercialized, sensationalized, and often irrationalized ideas of the media prevail.
Clark argues that adequate learning results will be produced regardless of the
medium and that we must choose the less expensive media to avoid wasting limited
educational resources.
(Chapter 4). Ultimately, “Good teaching cannot be replaced by good technology,
but the merger of the two holds the promise for truly effective [online] instruction”
(Annetta, Klesath, & Meyer, 2009, p. 32).
2.6 Summary
Chapter 2 reviewed literature that provides the context for the current study that
sought to evaluate the effectiveness of virtual laboratories in terms of perceptions of
the learning environment, attitudes towards science, and achievement.
First, relevant literature that provides the learning environments framework for the
current study was reviewed. Included in this section was a review of questionnaires
for measuring perceptions of the learning environment from the perspective of the
student. This field of research has grown over the last 40 years beginning with
Lewin’s (1936) and Murray’s (1938) monumental ideas of connecting personality
and environmental influences to behaviour and accounting for personal needs,
environmental presses, and differences perceived by observers and participants.
Moos (1974) characterized human interactions into the three dimensions of
relationship, personal development, and system maintenance and change, which
have served as the basis for various constructs assessed by the burgeoning, valid,
and economical learning environment questionnaires. Two of these widely-used
questionnaires were selected for this study on the basis of their validity, reliability,
and applicability, namely, the Science Laboratory Environment Inventory (SLEI)
and the Technology-Rich Outcomes-Focused Learning Environment Inventory
(TROFLEI).
Literature was also reviewed for attitudes towards science, another measure of
effectiveness of virtual laboratories in my study. The literature describing the
development and application of the Test of Science-Related Attitudes (TOSRA) was
explored because two of its scales, Enjoyment of Science Lessons and Attitude to
Scientific Inquiry, which have been validated in many other studies, were adopted
for the assessment instrument in the current study.
Next, literature that featured and characterized the intervention in this study was
reviewed, including literature concerning the integration of technology into
classrooms in general, and the practical benefits of such integration. An example of
such educational technology is virtual laboratories, the intervention in my study.
The literature describes virtual laboratories as being interactive, concept-friendly,
skill building, highly instructive, economical, efficient, safe, and viable alternatives
to experiments that would not otherwise be possible in a high-school classroom.
Results of various studies that employed virtual laboratories were presented, but
their methodological approaches were questioned. The lack of research into the
effectiveness of virtual laboratories was noted and used to justify the significance
of their evaluation in this study.
The following chapter outlines the methods of the current study and describes the
approaches used to answer the research questions concerning the validity of the
instrument used, associations between student outcomes and the environment, and
the effectiveness of virtual laboratories, as well as their differential effectiveness
for males and females, in terms of perceptions of the learning environment, attitudes,
and achievement.
Chapter 3
Methodology
3.1 Introduction
This chapter describes and justifies the methodological aspects of this study in terms
of the research questions guiding the methods (Section 3.2), the sample selection
(Section 3.3), the materials used including assessment instruments and other
resources (Section 3.4), the procedures followed (Section 3.5), data collection, entry,
and analysis (Section 3.6), and limitations of the study (Section 3.7).
The aim of the study was four-fold: to validate a new questionnaire, to investigate
associations between the learning environment and student outcomes, to determine
the effectiveness of virtual laboratories in general, and to examine the differential
effectiveness of virtual laboratories for males and females. These research aims are
delineated in more detail below; they guided the design, implementation, and data
analysis of this study.
1. Are scales from the Test of Science-Related Attitudes (TOSRA), Science
Laboratory Environment Inventory (SLEI), and Technology-Rich Outcomes-
Focused Learning Environment Inventory (TROFLEI) questionnaires, as
well as achievement items, valid and reliable when used with a sample of
high school students taking biology in the US?
2. Are there associations between students' perceptions of the learning environment
and the student outcomes of attitudes and achievement?
3. Is the use of virtual laboratories effective in terms of students' perceptions of the
learning environment, attitudes towards science, and achievement?
4. Is the use of virtual laboratories differentially effective for males and females
in terms of students' perceptions of the learning environment, attitudes towards
science, and achievement?
To select participants, an electronic request was sent out over various teacher
networks (email lists and listservs from science education organizations). While
over 20 teachers initially expressed interest, six teachers followed through on
implementation of the treatment procedure with their students. Participating
teachers then obtained informed consent from the respective principals at their
schools and from students in their classes.
As in all quasi-experimental designs (Campbell & Stanley, 1963), the sample was
divided between two treatment conditions. The two treatment groups were
‘naturally occurring’ in that they were already organized into classes in their
respective schools. Each teacher implemented this study with at least one class that
used virtual laboratories and one class that did not, thus maintaining consistent
instruction from the same teacher between the experimental and control group,
except for the intervention. Therefore, while students were subjected to different
treatment groups, other variables, such as the teachers, the physical classrooms, the
content delivered, and the level of ability of the students, were controlled for in that
they were present in both the experimental and control groups. This was
accomplished through stratified random sampling procedures (Gibson & Chase,
2002) in which the variables were equally spread amongst ‘strata’ or sub-groups.
This design allowed for more accurate results because the effects of confounding
variables were equally distributed throughout the study’s sample.
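As a rough illustration of the kind of balance check implied by this design (the data below are hypothetical; this is not the study's analysis code), treatment-group membership can be cross-tabulated within each stratum:

```python
# Minimal sketch (hypothetical data): verifying that treatment groups
# are balanced within each teacher 'stratum' of a quasi-experiment.
import pandas as pd

students = pd.DataFrame({
    "teacher": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "group":   ["VL", "VL", "non-VL", "non-VL"] * 2,
    "sex":     ["F", "M", "F", "M"] * 2,
})

# Roughly equal cell counts suggest confounds such as teacher and sex
# are spread evenly across the experimental and control conditions.
print(pd.crosstab(students["teacher"], students["group"]))
print(pd.crosstab(students["sex"], students["group"]))
```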
To address the third research question about the effectiveness of virtual laboratories,
students were divided between experimental classes that used virtual laboratories
and control classes that did not. The experimental group included 169 students and
the control group totalled 153 students. Students in VL and non-VL classes were
spread fairly equally amongst the teachers as shown in Figure 3.1.
[Bar chart: numbers of non-VL and VL students for Teachers A–F]
Figure 3.1 Numbers of Students in Experimental and Control Classes for Each Teacher
Out of the 322 students, 171 were females and 151 were males. This delineation is
relevant for the fourth research question about the differential effectiveness of
virtual laboratories for males and females. The different sexes were distributed
equally amongst the participating teachers, as shown in Figure 3.2. As well, males
and females were fairly well distributed amongst experimental and control classes.
The control group had 79 females and 74 males, while the experimental group had
90 females and 76 males.
[Bar chart: numbers of female and male students for Teachers A–F]
Figure 3.2 Numbers of Female and Male Students for Each Teacher
Other background information supplied by student participants included their age,
class type, main language of communication, familiarity with technology, and future
career plans. Although the ages of students ranged from 13–18 years, the majority
(60%) of students were ages 14–15. Regarding the main language of
communication, 94% of students reported using English, so the sample was fairly
‘Americanized’. Also, most (81%) students were enrolled in standard-level biology
classes, while 11% were in honors-level biology and 7% were in inclusion classes.
Between 94% and 98% of students reported having a computer and Internet access at
home and around 80% of students reported spending at least two hours a week
occupied with such technology; thus, the sample was drawn from a largely digitally-
literate population, an important factor for this study that utilized such technology.
Nearly all students (92%) expected to enroll in post-secondary institutions. Finally,
as another indication of students’ interests in science, 39% responded that they
intend to pursue a science or technology-related career, while 54% planned to
pursue other careers in the arts and humanities. This background information is
relevant because it provides a context for the current study, as well as helping to
establish the validity of generalizing the results of this study to other student
populations.
The assessment instrument for this study consisted of scales from learning
environment questionnaires and from standardized achievement examinations as
described in Section 3.4.1. Other resources are noted in Section 3.4.2.
The duration for administration of the LAG was 30–45 minutes. The following
sections describe the nature of the instruments from which each of the LAG’s scales
was obtained and how those instruments were developed.
Scales to assess the learning environment were obtained from two different
instruments: the Science Laboratory Environment Inventory (SLEI) and the
Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI),
as described below.
The SLEI was developed specifically to assess the unique role of the laboratory in
high school and university science classes. In particular, this instrument was meant to
be useful in addressing concerns about the effectiveness of laboratories and whether
the associated costs are justified. This goal is particularly significant for the current
investigation because virtual laboratories may offer a more cost-efficient
alternative to traditional laboratories. In developing the SLEI, relevant literature
was reviewed to identify dimensions important in the unique environment of a
science laboratory class, dimensions in existing instruments were considered,
students and teachers were interviewed to guide revisions of the survey during
various stages, and the instrument was subjected to item and factor analyses. This
resulted in the final version containing 7 items per scale (Student Cohesiveness,
Open-Endedness, Integration, Rule Clarity, Material Environment) with a 5-point
frequency response scale (Fraser, Giddings, & McRobbie, 1992, 1995).
A sample of 5,447 students in 269 classes in the USA, Canada, England, Israel,
Australia, and Nigeria was used to field test and validate the SLEI. Simultaneous
testing revealed consistent scores on internal consistency reliability and discriminant
validity when used with 1,594 students in 92 classes (Fraser, Giddings, &
McRobbie, 1995), as well as predictive validity when used along with attitude scales
to predict the effect on student outcomes (Fraser, Giddings, & McRobbie, 1992).
Further validation was accomplished through a study of 489 senior high-school
biology students in Australia by Fisher, Henderson and Fraser (1997).
Advantages of this instrument include its economy (its brevity and easy hand-
scoring), its cyclic design, and the availability of the personal and class versions and
the actual and preferred forms. However, it does contain some reverse-scored items
(Fraser et al., 1992). To illustrate its
application in the evaluation of educational innovations, the SLEI, or adaptations
thereof, have been employed in various studies including the assessment of an
innovative science course for prospective elementary teachers (Martin-Dunlop &
Fraser, 2007), an inquiry-based, computer-assisted learning class (Maor & Fraser,
1996), and the use of anthropometric activities (Lightburn & Fraser, 2007). More
details about the SLEI are described in Section 2.2.2.7.
For the purposes of this study, modified versions of the Integration and Material
Environment scales were used, as described below and in Table 3.1. Because Fraser
and Tobin (1991) argued that personal forms of scales are likely to be more sensitive
in detecting differences between within-class subgroups, the personal form was
chosen to examine differences between subgroups, such as males and females. One
item, modeled after the original items, was added to each scale to create a uniform
version of eight items for each scale on the LAG. To be consistent with responses
for scales borrowed from other instruments, response alternatives were also
modified to a Likert scale of Strongly Disagree, Disagree, Not Sure, Agree, and
Strongly Agree. As well, reverse-scored items were re-worded for clarity and
consistency throughout the LAG, as recommended by Barnette (2000). The
Integration and Material Environment scales appear as questions 17 through 32 in
the LAG (Appendix A).
Integration measures the extent to which the laboratory activities are integrated with
non-laboratory and theory classes (see Table 3.1). This was an important aspect in
the current study because virtual laboratories are content-based and they can be
easily integrated with material learned in class; therefore, it was expected that
students would perceive increased integration as a result of using virtual
laboratories. The dimension of integration is key to maximizing the retention of
knowledge that can be solidified by experience, including experience associated
with virtual laboratories. This scale is categorized under Moos’ Personal
Development Dimension.
Material Environment measures the extent to which laboratory equipment and
materials are adequate (see Table 3.1). It is characterized by Moos’ System
Maintenance and System Change Dimension. Because virtual laboratories use
technological materials, it was important to determine whether perceptions of the
use of both technological materials and hands-on materials were favorable or not.
Therefore, there are two aspects assessed by this scale: 1) the perception of virtual
versus real laboratory materials and 2) the inclusion of technological equipment
(through which virtual laboratories are accessed) amongst laboratory materials. It
was expected that students would have less favorable perceptions of hands-on
materials as a result of using virtual laboratories because virtual materials are
designed to function perfectly, in order to minimize disruptions to experimentation.
The scales adopted for the LAG were originally found to be reliable and valid for
assessing students’ perceptions of their psychosocial environment when the
TROFLEI was administered to 1,035 students in grades 10 and 11 at Sevenoaks
Senior College in Western Australia (Aldridge & Fraser, 2003). During the first
year of the school’s operation, the TROFLEI was designed as part of the formative
and summative evaluation of this new school. Strong factorial validity and internal
consistency reliability was found for both the actual and preferred forms of the
TROFLEI. As well, the actual form of each scale was capable of differentiating
between the perceptions of students in different classrooms. Results after four years
of the school’s operation supported the efficacy of the school’s educational
programs and revealed differences between the classroom environment perceptions
of males and females and between students enrolled in university-entrance
examinations and in wholly school-assessed subjects (Aldridge & Fraser, 2008).
Since then, the TROFLEI has successfully been modified into two different forms
(Aldridge & Fraser, 2008), applied to studies with different methods (Aldridge,
Dorman, & Fraser, 2004; Dorman, Aldridge, & Fraser, 2006; Dorman & Fraser,
2009), and adapted for use in other countries (Gupta & Koul, 2007; Koul, Fisher, &
Shaw, 2011; Promratrak & Malone, 2006; Welch et al., 2012). More details about
the TROFLEI are described in Section 2.2.2.9.
Four scales from the TROFLEI were chosen for incorporation into the LAG because
of their relevance in assessing important aspects of a technology-rich learning
environment, as described below and in Table 3.1. Each scale contains eight items
and responses are recorded using a Likert scale of Strongly Disagree, Disagree, Not
Sure, Agree, and Strongly Agree, which are scored 1 to 5, respectively. The
wording of some items was modified to fit the conditions of the current study, but
the style and content of these modifications were modeled on the original items.
Scales adapted from the TROFLEI appear as questions 33 through 64 on the LAG
(Appendix A).
Teacher Support is a measure of the extent to which the teacher is helpful to the
students and shows interest in them (see Table 3.1). Individual student–teacher
interactions are assessed with this scale. The student reports the frequency with
which the teacher approaches them or shows interest in their problems. Adopted
from the CES and categorized under Moos’ Relationship Dimension, Teacher
Support is also used in the WIHIC and COLES. This scale was considered
appropriate for this study because the use of fairly autonomous virtual laboratories is
likely to impact on the frequency with which the teacher approaches the student and
the extent to which the teacher is needed to support the student. Therefore, it was
anticipated that student perceptions of teacher support would decrease.
Task Orientation measures the extent to which it is important to complete planned
activities and to stay on the subject matter (see Table 3.1). The scale grew from
similar scales in the CES and CUCEI, and it is also featured in the
WIHIC and COLES. This scale measures a salient quality of virtual laboratories,
namely, the student’s self-motivation to complete the laboratory in a virtual setting
and remain engaged with the activity despite the lack of ‘hands-on’ experimentation.
Because virtual laboratories are interactive and student-centered, it was anticipated
that students’ perceived motivation to complete work would increase.
Table 3.1 Scale Description and Sample Item for Each Learning Environment Scale in the LAG

Integration (SLEI)
  Description: Extent to which regular science lessons and laboratory activities are related
  Sample item: “My laboratory activities and regular science class work are related.”

Material Environment (SLEI)
  Description: Efficiency and functionality of laboratory materials
  Sample item: “The materials I need for both laboratory activities and technology are in good working order.”

Teacher Support (TROFLEI)
  Description: Extent to which the teacher helps, befriends, trusts, and shows interest in students
  Sample item: “The teacher goes out of his/her way to help me.”

Task Orientation (TROFLEI)
  Description: Extent to which it is important to complete planned activities and to stay on the subject matter
  Sample item: “I do as much as I set out to do regarding the activities in this class.”

Investigation (TROFLEI)
  Description: Emphasis on the skills and processes of inquiry and their use in problem solving and investigation
  Sample item: “I am asked to think about the evidence for statements in this class.”

Differentiation (TROFLEI)
  Description: Extent to which work assigned is individualized for the pace and level of each student
  Sample item: “I work at my own speed regarding the activities I do in this class.”
Differentiation, a scale originating from the ICEQ, was included in the TROFLEI to
measure the extent to which teachers tailor their instruction and activities for
students according to their abilities, rates of learning, and interests (see Table 3.1).
It is characterized by Moos under the System Maintenance and Change Dimension.
This scale was included in the current study because, as students work
independently on virtual laboratories, which are self-paced, it was anticipated that
student perceptions of differentiation would improve as a result of using virtual
laboratories.
Overall, the TROFLEI was a useful instrument for this study in that it focuses on
student outcomes, a feature sought by the implementation of virtual laboratories, and
is specific to technologically-integrated environments such as the one in the current
study. Most importantly, the validity and reliability of this instrument and its
antecedent, the WIHIC, have been established numerous times. Therefore, the
TROFLEI provides an economical assessment of key aspects of the classroom
learning environment, namely, student interactions with their teacher, the
environment, the class, and other students.
The TOSRA has been shown to be valid and useful in many studies in different
countries (Fraser, Aldridge, & Adolphe, 2010; Welch et al., 2012; Wong & Fraser,
1996), and in the evaluation of educational innovations (Lightburn & Fraser, 2007;
Martin-Dunlop & Fraser, 2007; Raaflaub & Fraser, 2002; Wolf & Fraser, 2008;
Zandvliet & Fraser, 2005). More details about the TOSRA are described in Section
2.3.2.
Table 3.2 Scale Description, Justification, and Sample Item for Each TOSRA Scale Used in the LAG

Inquiry
  Description: Extent to which science activities are student-centered and curiosity-provoking
  Justification: Because virtual laboratories are intended to be student-centered and provoke curiosity, attitudes towards scientific inquiry are likely to increase.
  Sample item: “I would prefer to find out why something happens by doing an experiment than by being told.”

Enjoyment
  Description: Extent to which students enjoy science lessons
  Justification: Because virtual laboratories are interactive and meant to stimulate students using audio and visual effects, enjoyment is likely to increase.
  Sample item: “The technology used in activities makes the science lessons more exciting.”
For the purposes of this study, modified versions of the following scales were
incorporated into the LAG as shown in Table 3.2: Attitude to Scientific Inquiry
(herein abbreviated as Inquiry) and Enjoyment of Science Lessons (herein
abbreviated as Enjoyment). Two items were removed from each scale to achieve a
consistent length of eight items for each scale of the LAG. The TOSRA’s response
alternatives were maintained as a five-point Likert scale with response categories
ranging from Strongly Disagree to Strongly Agree. As well, reverse-scored items
were re-worded for clarity and consistency throughout the LAG (Barnette, 2000),
and the wording on some items was adjusted to incorporate technical terminology
necessary for classes using virtual laboratories. The Inquiry and Enjoyment scales
appear as Questions 1 through 16 in the LAG (Appendix A).
The scale for assessing students’ achievement in the Genetics portion of their
biology classes was composed of items borrowed from various state-level
examinations. The researcher selected 10 items from standardized science
examinations that have already been validated, administered, and scored. Specific
questions were chosen from these examinations to correspond with the content of
the virtual laboratories used in this study; this was made possible by the availability
of public, searchable, electronic databases containing these validated test-bank
items. The standardized examinations from which the achievement items were
selected include the New York State Regents Examination for Living Environment
courses, the Massachusetts Comprehensive Assessment System (MCAS) for
Biology courses, and the Virginia Standards of Learning (SOL) in Biology. The
selected items measure the extent to which students understand various concepts in
genetics, including Mendelian inheritance, the structure of DNA, mutations, cloning,
and genetic engineering.
For ease of administration and scoring, all achievement items utilized a multiple-
choice answer format with four possible responses from which to choose. Scoring
was based on the number of items correctly answered and ranged from zero (0) for
no correct answers to ten (10) for all correct answers. The score was then divided in
half for meaningful comparison with scores from other sections of the LAG, which
ranged from zero (0) to five (5). The use of a multiple-choice answer format limited
the range of responses from students; however, while an open-response format
would have reduced this limitation, it might also have led to inconsistency and bias
in scoring and/or it could have discouraged students from responding. Achievement
questions appear as items 65 to 74 in the LAG (Appendix A).
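To make the scoring arithmetic concrete, the following minimal sketch (in Python, with a hypothetical answer key and hypothetical responses, not the researcher’s actual scoring script) shows how a raw multiple-choice score out of 10 is halved to the 0–5 range used by the other LAG scales.

```python
# Hypothetical answer key for the 10 multiple-choice achievement items
ANSWER_KEY = ["B", "D", "A", "C", "A", "B", "D", "C", "A", "B"]

def score_achievement(responses):
    """Return the halved achievement score (0-5) for one student's responses."""
    raw = sum(given == correct for given, correct in zip(responses, ANSWER_KEY))
    return raw / 2  # rescale 0-10 to 0-5 for comparison with other LAG scales

# A student answering 8 of the 10 items correctly receives a score of 4.0
print(score_achievement(["B", "D", "A", "C", "B", "B", "D", "C", "A", "A"]))
```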
To ensure that students aged 13–18 years could easily read and comprehend each
item on the LAG instrument used in this study, a pilot study was conducted. An
earlier form of the LAG was administered to 96 students taking biology in grade
nine (ages ranged from 13–15 years) during the year prior to the implementation of
this study. This sample was from one school in the state of Massachusetts but its
population was quite diverse and representative of the larger sample used for the
current study.
Students were instructed to highlight words and questions that they did not
understand and comment on the clarity of items. Some students thought that the
original instrument was too lengthy and some did not understand certain terms as
they were intended by the researcher. Based on students’ comments and patterns in
item responses, the researcher modified some of the wording, eliminated the use of
reverse items, and narrowed down the scales to the current eight scales used for the
LAG.
3.4.2 Other Resources
The LAG instrument to assess the effectiveness of virtual laboratories was available
in both soft and hard copies. The soft version was administered via a Google
Document Survey Form and the link was provided to participating teachers with a
teacher-specific code. Responses from the electronic questionnaire were
automatically entered into a Microsoft Excel file, available to the researcher
immediately upon submission. The paper version was printed, copied, and mailed to
the participating teachers who returned the questionnaires via mail at the end of the
semester. Responses from the paper version of the questionnaire were entered by
hand into the same Microsoft Excel file created by the electronic version, and the
hard copies were then stored at the Science and Mathematics Education Centre
facilities on the Curtin University campus in Perth, Western Australia. Data files
were password-protected and accessible only by authorized
users. Raw qualitative data, such as recordings and transcripts of interviews, were
also stored securely by the researcher in electronic files locked with a password and
in hard copies locked in a cabinet.
3.5 Procedures
This section describes how the effectiveness of virtual laboratories was evaluated by
explicating the treatment conditions (Section 3.5.1), and the implementation of the
educational intervention, including design and delivery of virtual laboratories
(Section 3.5.2), the timetable for the execution of the study (Section 3.5.3),
administration of the questionnaire (Section 3.5.4), and some ethical issues (Section
3.5.5). The high school science classes involved in this study were divided into two
treatment groups: one group engaged in virtual laboratories; and the other group
continued to learn in the way in which students had been learning all along.
However, both groups covered the same content. At the end of the semester, all the
classes were given the LAG questionnaire to assess students’ perceptions of their
learning environment, their attitudes towards science, and their understanding of the
science content. Results for the two groups were compared for significant
differences. Further details about the procedure and implementation of my study are
elaborated below.
Because of the quasi-experimental design of the study, the 322 student participants
in 21 different classes studying genetics were divided amongst 10 experimental and
nine control classes. Efforts were made to ensure that the two groups were
comparable overall with respect to the range of academic capabilities, socio-
economic status, gender (Section 3.3) and the physical classroom environment, such
as features of the room and the time of day at which students were taught. This was
accomplished through stratified random sampling procedures (Gibson & Chase,
2002) in which the variables were equally spread amongst ‘strata’ or sub-groups.
Thus, the two treatment groups were ‘naturally occurring’ in that they were already
organized into classes in their respective schools. Each of the six teachers who
volunteered for the implementation of the study taught at least one class with the
intervention and one class without the intervention, thus maintaining consistent
instruction from the same teacher between the experimental and control group,
except for the intervention.
The experimental group learned the topic of genetics supplemented with virtual
laboratories. A virtual laboratory is broadly defined as “an electronic workspace for
distance collaboration and experimentation in research or other creative activity, to
generate and deliver results using distributed information and communication
technologies”, according to the International Institute of Theoretical and Applied
Physics at the Expert Meeting on Virtual Laboratories in Iowa, USA in 1999
(Rauwerda et al., 2006, p. 230).
Supporting documents provided to participating teachers appear in
Appendices D and F. As well, I created a blog for the participating teachers to share
experiences and a forum through which to ask questions. However, most teachers
did not utilize the blog and, instead, preferred to correspond via email.
Students in the control group continued learning and experimenting in their normal
fashion, without the use of virtual experiments. Instructional methods for these
classes included lectures, textbook readings, hands-on experiments, projects, and/or
other activities normally employed in a science classroom. While teachers were not
provided with specific instructions for teaching students in the control condition,
they were directed to ensure that the same content (i.e. genetics) was taught as in the
experimental classes.
While a more effective and pure experimental design would have involved
comparing an experimental group using virtual laboratories with a control group
conducting parallel hands-on experiments for the very same investigation, such a
setup was neither possible nor ideal for this study for a number of reasons. First,
much of the equipment necessary for complicated experiments in molecular genetics
is not available in high school laboratories because of cost and safety issues. As
well, many of these experiments require lengths of time not provided in a typical
biology class, which usually meets for only 4–5 hours weekly. Second, the
rationale for evaluating the effectiveness of virtual laboratories is that such an
innovation provides an opportunity for students to learn about skills, procedures, and
an environment to which they would not otherwise normally be exposed. Virtual
laboratories were suggested for use in situations in which such parallel hands-on
experiments cannot be conducted. Therefore, the intention of my study was to
evaluate the effectiveness of using virtual laboratories as a supplemental method,
rather than as a method of substituting virtual laboratories for traditional ones.
This section explains how the researcher selected the virtual laboratories for use in
this study and instructed teachers regarding their delivery. More than 20 different
virtual laboratories, related to the topic of genetics, were chosen by the researcher
for their design and use of inquiry. Table 3.3 shows the title, type, description, and
source for eight of the most commonly used virtual laboratories; the respective
sample worksheets are included in Appendix F.
The virtual laboratories were all web-based and accessible via a URL provided to
participants. Software companies, as delineated in Table 3.3, designed them but the
researcher carefully reviewed and picked appropriate experiments in addition to
providing participating teachers with some suggestions regarding their use in the
classroom (See Appendix E). More specifically, the researcher selected virtual
laboratories featuring equipment and associated skill-acquisition not usually
available in a typical high school laboratory. Most laboratories involved testing a
hypothesis elicited from the student, including the analysis of evidence and other
Table 3.3 Title, Type, Description and Source for Each Virtual Laboratory

Bacterial Identification
  Type: Mostly skill-based but follows an experimental method
  Description: Guides students through identifying the bacterial source of an infection by matching a specific DNA sequence; includes procedures such as PCR, DNA sequencing, sequence analysis, and entry of DNA sequences into BLAST (Basic Local Alignment Search Tool), which searches the public database of DNA sequences to determine the correct bacterial species from which the DNA sequence originates.
  Source: Howard Hughes Medical Institute, http://www.hhmi.org/biointeractive/vlabs/

Create a DNA Fingerprint
  Type: Mostly experimental but focuses on a specific procedural technique
  Description: Asks students to hypothesize about the culprit of a crime and then leads them through the process of creating a DNA fingerprint to verify the suspect they chose.
  Source: NOVA, http://www.pbs.org/wgbh/nova/sheppard/analyze.html

DNA Extraction
  Type: Skills-based, in order to learn a technique
  Description: Students learn the procedure of extracting DNA from human cheek cells.
  Source: University of Utah, http://learn.genetics.utah.edu/content/labs/extraction/

PCR, or Polymerase Chain Reaction
  Type: Skills-based, in order to learn a technique
  Description: Students learn the procedure and concept behind the polymerase chain reaction (PCR). In the real laboratory world, this procedure is used in almost every process using DNA for research, forensics, etc., so it is the beginning step of a larger procedure.
  Source: University of Utah, http://learn.genetics.utah.edu/content/labs/pcr/

Gel Electrophoresis
  Type: Skills-based, in order to learn a technique
  Description: Students learn the procedure of gel electrophoresis to visualize and sort DNA fragments by size. In the real world, this procedure is used to check that the materials one works with (be they DNA, RNA, or proteins) are not lost at key points during a complicated experiment; in forensics, gel electrophoresis would be used to compare DNA samples.
  Source: University of Utah, http://learn.genetics.utah.edu/content/labs/gel/

DNA Microarray
  Type: Experimentally-based; combines three techniques explored in the activities above
  Description: Students learn the procedure and concepts that underlie the use of a DNA microarray in the field of genomics; includes an investigative piece in which students make a real-life application to the differences between healthy cells and cancer cells.
  Source: University of Utah, http://learn.genetics.utah.edu/content/labs/microarray/

Genetics of Organisms
  Type: Experimental
  Description: Allows students to cross Drosophila to obtain new generations of fruit flies, observe the numbers of phenotypes, and eventually determine the genotypes of the original parental generation. Students then compare their observations against a Punnett square that they construct.
  Source: APBioLabs, http://www.ucopenaccess.org/courses/APBioLabs/course/index.html

Transgenic Fly Lab
  Type: Experimental but also teaches some significant techniques
  Description: First guides students through constructing transgenic flies that “glow” and then through experimenting with those flies to understand circadian rhythms through patterns of light emission. A number of experiments investigate how light/dark cycles affect patterns of light emission (the measure for the presence of a biological clock) and eventually lead to locating the biological clock in the fly.
  Source: Howard Hughes Medical Institute, http://www.hhmi.org/biointeractive/vlabs/
However, some virtual
laboratories were linked in a series so that the aims of the first few ‘laboratories’
were to acquire the skills and concepts needed to proceed with a virtual experiment
at a later point. The researcher was careful to avoid virtual laboratories that lack
elements of a true experiment, such as so-called ‘virtual laboratories’ that were
essentially computer games or a simple list of questions for students to research
about a particular topic in science.
Worksheets were provided for many of the virtual laboratories to guide students
through the activity and to enable them to record data and answer questions related
to the experiment (see Appendix F). These worksheets also allowed teachers to hold
students accountable for their work because they could be given a score, which
could have been incorporated into their semester grade.
3.5.3 Timetable
This section reports the logistical aspects of the application of virtual laboratories,
namely, the duration of implementation of the virtual laboratories, the frequency
with which virtual laboratories were administered, and the time intervals between
each virtual laboratory. The selected virtual laboratories were generally meant to
occupy one class period. If teachers suspected that their students would require more
time to complete the virtual laboratory, teachers were advised to assign students a
pre-laboratory designed by the researcher to prime students’ knowledge about the
topic before beginning the actual laboratory. Some skill-only virtual laboratories
required no more than 20 minutes and could be integrated into another lesson or
completed at home.
Virtual laboratories and their associated worksheets were made available in
February 2010 and teachers were given until the end of the semester, a duration of
four to five months, to integrate them into their classes.
The frequency with which virtual laboratories were utilized, the interval between
their use, and the duration of implementation of the entire study by each teacher are
detailed in Table 3.4.
Table 3.4 Implementation of Conditions of the Study by Each Teacher, including Class Composition, Duration of Study, Administration of the Virtual Laboratories (VL), and Information about Covariates

Teacher A (five classes, 127 students, Grade 8, standard level; experimental group = 3 classes, control group = 2 classes)
  Duration of study: 2 weeks
  VLs completed (4): DNA Extraction, PCR, Gel Electrophoresis, DNA Fingerprinting
  Frequency and intervals: Two VLs a week; the first week they were one day apart, and the second week, two days apart
  Control group activities: “I did a paper lab with one, some other hands-on work with another and lecture for the other two”
  Notes about covariates: “The students were heterogeneous, the same topics were covered, classes were both in the morning and later in the school day… I tried very hard to provide the same material to each group.” All students did the hands-on laboratory for gel electrophoresis.

Teacher M (two classes, 29 students, Grade 10, standard level; experimental group = 1 class, control group = 1 class)
  Duration of study: 2 weeks
  VLs completed (4): DNA Extraction, PCR, Gel Electrophoresis, Transgenic Fly Lab
  Frequency and intervals: Three VLs in one week and one the following week
  Control group activities: “‘paper labs’, where we simulated some of the steps; hands-on lab for gel electrophoresis; some other computer activities”
  Notes about covariates: “time of day [for classes] differed”; “Of the 2 classes, the class that did the virtual labs had a slightly higher academic ability and fewer students with [special] ‘ed’ plans (10% vs. 18%)”. Did not do any hands-on laboratories with the experimental class.

Teacher G (three classes: 47 students in Grade 9 Honors and Honors Prep, and 13 students in Grades 10–12 ELL Biology; experimental group = 1 class, control group = 2 classes)
  Duration of study: 12 weeks
  VLs completed (~8–10): Bacterial Identification, DNA Extraction, PCR, Gel Electrophoresis, DNA Microarray, Peppered Moth Simulation, Mitosis & Meiosis Labbench, Stem Cells, Cloning, Transgenic Mice
  Frequency and intervals: About once a week
  Control group activities: “no hands-on laboratories; a DNA model activity with plastic pieces & a Punnett square activity with 4 different colored beads [for dihybrid crosses]”
  Notes about covariates: Used many VLs as demonstrations in the classroom (not only as individual student investigations). Used some VLs that did not contain investigative experiments but simply taught concepts and skills.

Teacher R (six classes, 129 students, standard level; experimental group = half of each of the six classes, control group = half of each of the six classes)
  Duration of study: 10 weeks
  VLs completed (5): DNA Extraction, PCR, Gel Electrophoresis, DNA Microarray, DNA Fingerprinting
  Frequency and intervals: About once a week
  Control group activities: “I am having all classes participate. Half of the class will do the lab, while the other half complete an alternative, unrelated assignment.”

Teacher D (three classes, 84 students, Grade 10, standard level; experimental group = 1 class, control group = 2 classes)
  Duration of study: 10 weeks
  VLs completed: At least 4; no other information available
While teachers were allowed a certain degree of freedom regarding which virtual
laboratories to implement and the frequency of their implementation, the researcher
suggested interspersing their delivery with the teacher’s normal methods of
instruction throughout the semester. This systematic integration of virtual
experimentation with traditional instruction was recommended by Gallagher et al.
(2005) because it “is more likely to be successful if the training schedule takes place
on an interval basis rather than massed into a short period of extensive practice” (p.
364). After completion of at least four virtual laboratories, or whenever their use
was no longer applicable, teachers were instructed to inform the researcher, at which
point access to the questionnaire was granted, as described in Section 3.5.4.
The last item on the questionnaire asked students to record their email addresses to
enter into a raffle. Email addresses were compiled into the electronic database and a
random number generator was used to select the winner of the raffle prize. More
useful to this study, the researcher used these email addresses to send out a request
asking students to participate in interviews via telephone or Skype because school
was no longer in session. Further elaboration of the selection and collection of
qualitative data sources is provided in Section 3.6.1.
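As a minimal illustration of the raffle draw (with hypothetical placeholder email addresses, not the study’s actual list or script):

```python
import random

# Pool of entrants compiled from the questionnaire's final item (placeholders)
emails = ["student1@example.com", "student2@example.com", "student3@example.com"]

# A random number generator selects the index of the winning entry
winner = emails[random.randrange(len(emails))]
print(winner)
```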
All participants and their parents, in addition to those in the school, such as teachers
and principals, were fully informed of the purposes of this study, including the
potential risks and benefits, before collecting data from any students. Each student
received an information sheet describing the study in plain English and was also
informed verbally via a YouTube broadcast. Opportunities for questions and concerns
were given to students, who were reassured that they could withdraw from the study at
any time without prejudice or other negative consequences, such as affecting
students’ school grades. Finally, informed consent was obtained for each class and
school involved in the study.
3.6 Data Collection and Analysis
To embellish the quantitative data, qualitative methods of data collection were
employed in this study, as recommended by a number of researchers in the field of
learning environments who extol the merits of triangulation (Fraser & Tobin, 1991;
Tobin & Fraser, 1998). A study of technology-based materials by Russek and
Weinberg (1993) revealed that more insight was gained from a mixed-method
approach, than could be obtained from either type of analysis alone. Moreover, Duit
and Confrey (1996) proposed that interviews allow contextualization of students’
responses and a more complete image of students’ ideas.
After the LAG questionnaire had been administered to both the experimental and
control group, the responses from these two groups were compared for significant
differences. As well, semi-structured interviews were conducted with students who
took the LAG and with their teachers. This section deals with the collection
(Section 3.6.1), coding and entry (Section 3.6.2), and statistical methods of analysis
(Section 3.6.3) of quantitative and qualitative data.
Quantitative data were collected using scales from the four instruments included in
the Laboratory Assessment in Genetics (LAG), namely, the SLEI, TROFLEI,
TOSRA, and achievement examinations. The LAG was administered to 322
students in 21 classes in six different US schools in the states of Massachusetts, New
York, Pennsylvania, and Virginia.
All of a given teacher’s classes completed the same
version of the questionnaire; in other words, there were no situations in which some
students of a particular teacher filled out the paper version and other students of the
same teacher filled out the electronic version. The two different versions were only
provided for teachers’ ease of use, depending on whether the Internet was easily
accessible in their particular school. Teachers B, F, and A utilized the electronic
versions of the questionnaire, while Teachers C, D, and E used the paper versions.
The list of email addresses, supplied by students, was stored in the same file as the
quantitative data and provided the pool of potential volunteers for gathering
qualitative data through interviews. Therefore, student interviewees were self-
selected from the same sample of students who completed the LAG questionnaire.
For the purposes of this study, 10 open-ended questions were constructed based on
the LAG questionnaire for semi-structured interviews using standard protocols
(Anderson & Arsenault, 1998; Cohen, Manion, & Morrison, 2007; Drever, 1995;
Erickson, 1998). While the quantitative data were limited to the personal form (i.e.
the use of ‘I’ statements, as described in Section 2.2.2) of the questionnaire, the
collection of qualitative data through semi-structured interviews allowed the
researcher to extend the respondents’ perspective from the personal to the whole class. For
instance, after the interviewee answered a question about whether the class work
was difficult, the researcher was able to further ask whether the whole class
perceived the work as being difficult, in addition to the interviewee’s personal
perspective. This distinction between personal and whole-class perspectives was
noted earlier when reviewing the concepts of ‘private’ beta press and ‘consensual’
beta press (Section 2.3.1).
Once the researcher had determined that additional insight was needed to explain the
quantitative results, the possibility of gathering qualitative data materialized. An
email request was sent out to all student email addresses stored in the database
asking for volunteers to participate in the interview process. When a total of six
students followed through on their initial expression of interest to be interviewed,
telephone or Skype appointments were set up for this purpose. Face-to-face
interviews were not possible because the interviewer and interviewees were not
located in the same geographic area. Informed consent was obtained from students
and their parents. Each interview lasted 20–30 minutes and students seemed eager
to contribute to a better understanding of the quantitative results of this study.
Selected statements from student responses to the interview questions are presented
in Chapter 4.
Additionally, participating teachers were also asked for input, via email, using the
same open-ended questions that had been presented to student interviewees. First,
when teachers filled out a form indicating what actually took place during the
implementation of the study, the information contained in Table 3.4 emerged. All
teachers provided this information, but not all teachers chose to answer the questions
for the semi-structured interviews. Therefore, the comments of the three teachers
who contributed to this effort are embedded throughout Chapter 4.
Data from both the paper and electronic forms of the questionnaire were organized
using Microsoft Excel 2007. Responses to the electronic version were entered
automatically into an Excel spreadsheet as they became available. Responses to the
paper version of the questionnaire were entered into the same Excel spreadsheet by
the researcher personally and were double-checked for accuracy.
The researcher assigned each paper questionnaire a unique identification code for
tracking purposes that aligned with the number of the row in the Excel spreadsheet.
Email addresses shared by the students were stored along with their responses, in
case I needed to contact students for further clarification. For the purposes of
statistical analysis, responses were coded by transforming descriptive data into
numerical values. For instance, personal information regarding students’ career
choices was recorded in the following manner: careers related to the sciences were
given the value ‘1’ while non-scientific careers received a value of ‘2’. The method
of coding was stored in a separate document.
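A minimal sketch of this coding step (with hypothetical field names; the study’s actual coding scheme was stored in a separate document):

```python
# Descriptive responses are transformed into numerical values for analysis
CAREER_CODES = {"science or technology": 1, "other": 2}  # as described above
SEX_CODES = {"female": 1, "male": 2}                     # hypothetical assignment

record = {"career": "science or technology", "sex": "female"}
coded = {"career": CAREER_CODES[record["career"]],
         "sex": SEX_CODES[record["sex"]]}
print(coded)  # {'career': 1, 'sex': 1}
```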
Some patterns of responses indicated that students had not completed the
questionnaire with integrity, such as selecting Strongly Disagree for every item or
leaving all items other than the personal background questions blank. Such data
were discarded; this phenomenon, together with the absence of many students on
the day of administration, accounts for the number of usable questionnaires being
lower than the number of participating students reported by teachers.
Regarding the recording and entry of qualitative data, each student interview was
recorded using GarageBand, an application on a Macintosh notebook computer.
Auditory clarity was enhanced because telephone and Skype calls to interviewees
were conducted from the same computer. Teachers’ interview responses were
provided in writing via email, as they preferred.
3.6.3 Statistical Methods for Analysis of Data
3.6.3.1 Research Question 1: Are scales from the Test Of Science Related Attitudes
(TOSRA), Science Laboratory Environment Inventory (SLEI), and
Technology-Rich Outcomes-Focused Learning Environment Inventory
(TROFLEI), as well as achievement items valid and reliable when used with
a sample of high school students taking biology in the US?
Next, the revised scales of the LAG measuring perceptions of the learning
environment (SLEI, TROFLEI), attitudes (TOSRA), and achievement were checked
for internal consistency reliability to determine the extent to which items in the same
scale measured a common dimension. To accomplish this, Cronbach alpha
coefficients, for two units of analysis (the individual student and the class mean)
were calculated. Scales with a Cronbach alpha coefficient greater than 0.60 were
considered to have satisfactory internal consistency reliability, as suggested by De
Vellis (1991).
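The Cronbach alpha coefficient can be computed directly from the item-score matrix. The sketch below (Python/NumPy, with hypothetical Likert responses for one eight-item LAG scale) illustrates the calculation and the 0.60 criterion; it is an illustration, not the software actually used in this study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_students, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses: 5 students x 8 items, scored 1-5
scores = np.array([[4, 5, 4, 4, 5, 4, 4, 5],
                   [2, 2, 3, 2, 2, 3, 2, 2],
                   [5, 5, 5, 4, 5, 5, 4, 5],
                   [3, 3, 2, 3, 3, 3, 3, 2],
                   [4, 4, 4, 5, 4, 4, 5, 4]])
alpha = cronbach_alpha(scores)
print(alpha, alpha > 0.60)  # satisfactory by the De Vellis (1991) criterion
```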
To ensure that each scale measured a unique aspect of the learning environment or
attitude, an index of discriminant validity (Campbell & Fiske, 1959), namely, the
mean correlation of a scale with all other scales, was determined for two units of
analysis – the student and the class.
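As an illustration, the mean correlation of each scale with the other scales can be derived from the scale-score correlation matrix, as in this sketch (random placeholder data standing in for the eight LAG scale scores):

```python
import numpy as np

# Placeholder scale scores: 100 students x 8 scales (random for illustration)
scale_scores = np.random.default_rng(0).normal(size=(100, 8))
corr = np.corrcoef(scale_scores, rowvar=False)  # 8 x 8 correlation matrix

# Mean absolute correlation of each scale with the other seven scales
k = corr.shape[0]
mean_corr = (np.abs(corr).sum(axis=1) - 1) / (k - 1)  # drop the self-correlation
print(mean_corr)  # lower values suggest better discriminant validity
```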
The final method for validating the questionnaire involved confirming the ability of
the learning environment scales of the LAG to differentiate between classrooms.
The perceptions of students in the same class ought to be relatively similar as
compared with the perceptions of students in different classes. An ANOVA, with
class membership as the main effect, was used to check differences in the
perceptions of the students in different classrooms. Results for this test are reported
as an eta² value, which represents the proportion of variance in scale scores
accounted for by class membership.
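The eta² statistic is the ratio of the between-class sum of squares to the total sum of squares. A minimal sketch (with hypothetical scores for two classes):

```python
import numpy as np

def eta_squared(scores, classes):
    """Proportion of variance in scale scores accounted for by class membership."""
    scores, classes = np.asarray(scores, float), np.asarray(classes)
    grand_mean = scores.mean()
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_between = sum(len(scores[classes == c]) *
                     (scores[classes == c].mean() - grand_mean) ** 2
                     for c in np.unique(classes))
    return ss_between / ss_total

# Hypothetical scale scores for six students in each of two classes
scores = [4.1, 4.3, 4.0, 4.4, 4.2, 4.3, 2.9, 3.1, 3.0, 3.2, 2.8, 3.0]
classes = ["A"] * 6 + ["B"] * 6
print(eta_squared(scores, classes))  # near 1: the scale differentiates classrooms
```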
Because the researcher selected the achievement items, additional validation of this
scale was determined by calculating a frequency distribution of the students’ scores
on this scale to check for a normal distribution, an indication of its ability to produce
the same pattern of scores in a larger population (Herrnstein & Murray, 1996).
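One way to check such a frequency distribution against normality is a formal test such as the D'Agostino-Pearson test available in SciPy; the sketch below uses simulated placeholder scores rather than the study's data:

```python
import numpy as np
from scipy import stats

# Placeholder halved achievement scores (0-5) for a sample of 322 students
rng = np.random.default_rng(1)
achievement = np.clip(rng.normal(loc=2.5, scale=1.0, size=322), 0, 5)

# A non-significant p suggests the scores are consistent with a normal distribution
statistic, p_value = stats.normaltest(achievement)
print(round(p_value, 3))
```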
For the second research aim regarding associations between perceived classroom
learning environment and the student outcomes of achievement in and attitudes
towards science, simple correlation and multiple regression analyses were used with
the individual student as the unit of analysis. Simple correlation (r) was used to
describe the bivariate relationship between each student outcome (attitude or
achievement) with each learning environment scale. Multiple regression analysis
was used to investigate the combined influence of the whole set of learning
environment scales on each student outcome, with the standardised regression
coefficient (β) being used to indicate the contribution of each learning environment
scale to the variance in student attitudes or achievement when other learning
environment scales were mutually controlled. The multiple correlation (R)
represented the multivariate association between student attitudes or achievement
(the criterion variables) and the set of all learning environment scales (the predictor
variables). The strength of associations was measured by the coefficient of multiple
determination (R²).
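The following sketch (NumPy, with simulated placeholder data for six learning environment scales and one outcome) illustrates the quantities described above: simple correlations (r), standardised regression coefficients (β), and the multiple correlation (R):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(322, 6))  # six learning environment scale scores
y = X @ np.array([0.4, 0.1, 0.3, 0.0, 0.2, 0.1]) + rng.normal(size=322)

# Simple correlation (r) of each scale with the outcome
r = [np.corrcoef(X[:, j], y)[0, 1] for j in range(6)]

# Standardise predictors and criterion, then regress to obtain the betas
Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
ys = (y - y.mean()) / y.std(ddof=1)
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)  # standardised coefficients

# Coefficient of multiple determination R^2 and multiple correlation R
R2 = 1 - ((ys - Xs @ beta) ** 2).sum() / (ys ** 2).sum()
print(np.round(r, 2), np.round(beta, 2), round(np.sqrt(R2), 2))
```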
To analyze data from the third and fourth research aims concerning the effectiveness
of using virtual laboratories in terms of academic achievement, attitudes towards
science, and perceptions of the learning environment, data were subjected to a two-
way multivariate analysis of variance (MANOVA) with the learning environment
scales from the SLEI and TROFLEI and student outcomes (attitudes and
achievement) as the dependent variables, and with instructional method and sex as
the independent variables. Because the multivariate test using Wilks’ lambda
criterion yielded statistically significant differences for the set of dependent
variables, the individual, univariate two-way ANOVA was interpreted separately for
each dependent variable (students’ perceptions of their learning environment, their
attitudes, and achievement), with the student as the unit of analysis. This analysis
enabled an exploration of all possible interactions between both independent
variables (instructional method and sex) for all three types of dependent variables
(students’ perceptions of their learning environment, their attitudes, and
achievement).
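A two-way MANOVA of this kind can be specified, for example, with the statsmodels library; the sketch below uses hypothetical variable names and random placeholder data, not the study's dataset:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
df = pd.DataFrame({"method": rng.choice(["VL", "non-VL"], size=322),
                   "sex": rng.choice(["Female", "Male"], size=322)})
for dv in ["integration", "enjoyment", "achievement"]:  # placeholder DVs
    df[dv] = rng.normal(size=322)

# Instructional method, sex, and their interaction as independent variables
manova = MANOVA.from_formula(
    "integration + enjoyment + achievement ~ method * sex", data=df)
print(manova.mv_test())  # reports Wilks' lambda for each effect
```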
Effect sizes were also reported for each comparison to quantify the magnitude of the
difference between two groups (i.e. either between instructional methods, or
between males and females). According to Vacha-Haase and Thompson (2004),
effect sizes indicate a more important aspect of a between-group difference than its
statistical significance. Because this difference between means is expressed in
standard deviation units, the effect size indicates that the average score in the
experimental group is different from the average score in the control group by a
certain number of standard deviations. In this study, two different types of effect
sizes were utilized: Cohen’s d and eta-squared (η²). Cohen’s d is the difference
between two sample means divided by the pooled standard deviations. Eta squared
is a measure of the strength of association (or effect size) based on the proportion of
variance accounted for by the effect of the independent variable on the dependent
variable.
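For instance, Cohen's d as defined above can be computed as follows (with hypothetical group scores):

```python
import numpy as np

def cohens_d(group1, group2):
    """Difference between two sample means divided by the pooled standard deviation."""
    g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
    n1, n2 = len(g1), len(g2)
    pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
                        / (n1 + n2 - 2))
    return (g1.mean() - g2.mean()) / pooled_sd

# Hypothetical scale means for experimental (VL) and control (non-VL) students
print(round(cohens_d([3.8, 4.1, 3.9, 4.0], [3.5, 3.7, 3.6, 3.8]), 2))
```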
Overall, analyses of data from interviews can complement the results of quantitative
analyses and provide a richer understanding by filling in gaps perceived in the
questionnaire data. In this study, qualitative data consisted of student and teacher
responses to semi-structured interview questions.
Therefore, responses from interviews, which were recorded and fully transcribed as
described in Section 3.6.2, constituted the raw qualitative data for further analysis.
These transcripts were then subjected to content analysis (Neuendorf, 2002) in
which content was coded, tallied, ranked, and analyzed for emergent themes. More
specifically, raw data were ‘chunked’ into color-coded categories and reported
statistically through well-accepted procedures, such as frequency counts, averages,
and percentages for recurring themes (Erickson, 2012; Wolcott, 1994). More
specifically, responses to questions from the same scales of the LAG were grouped
together; however, the researcher also considered themes that emerged from
interviews that were beyond the dimensions measured by LAG scales. Responses
from interviews were analyzed as they became available and then re-analyzed as a
whole for emerging patterns. Analytic induction (Lindesmith, 1947) was also
undertaken in which the qualitative data were viewed and reviewed with various
lenses. As a result of analytic induction, the researcher modified some questions
during the interview and/or focused on certain questions more than others.
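A minimal sketch of the tallying step (with hypothetical theme labels standing in for the study's coded categories):

```python
from collections import Counter

# Interview 'chunks', each coded with an emergent theme (placeholders)
coded_chunks = ["engagement", "teacher support", "engagement",
                "technical problems", "engagement", "teacher support"]

counts = Counter(coded_chunks)         # frequency count per theme
total = sum(counts.values())
for theme, n in counts.most_common():  # themes ranked by frequency
    print(f"{theme}: {n} ({100 * n / total:.0f}%)")
```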
Wolcott (1994) distinguishes between analysis and interpretation, with the former
referring to the description of the results of content analysis and the latter referring
to “efforts at sense-making, a human activity that includes intuition, past experience,
emotion – personal attributes of human researchers that can be argued endlessly but
neither proved nor disproved to the satisfaction of all” (p. 30). Thus, the results of
the content analysis, in the form of statements from interviews that added insight to
the questionnaire results, are embedded throughout the report of the
quantitative results in Chapter 4. Additionally, the emergent themes stemming from
responses to interview questions, as interpreted by the researcher, are summarized in
the discussion included in Chapter 5.
3.7 Limitations
Even when much time and effort are invested in carefully planning and designing a
study, methodological limitations can be unavoidable. This section discusses deviations
from the original design and how accommodations were incorporated. Section 3.7.1
describes methodological issues related to loss of sample, while Section 3.7.2
explores ambiguity concerning treatment conditions, Section 3.7.3 notes technical
difficulties, and Section 3.7.4 explains issues regarding administration of the LAG
questionnaire.
3.7.1 Loss of Sample
In general, a larger sample allows for both the increased detection of statistically
significant effects and the generalization of these effects for larger populations.
Therefore, if the sample for the current study had been larger, results from
quantitative data possibly might have provided more accurate insights into the
effectiveness of virtual laboratories.
Originally, the study was designed for a larger sample (~800 students), which was
made logistically possible due to placement of the researcher in a large school
environment with 21 equally diverse classes all following the same curriculum.
However, permission to conduct the study was overturned by the superintendent of
the district after implementation had already begun. Therefore, the researcher was
transferred to a new school with only 43 eligible students, in place of the original
640 eligible, and thus potential, student participants.
Additionally, as described in Section 3.5.4 regarding electronic versions of the
questionnaire, the researcher used a form available for free through the Internet to
any Google user. While this initially worked well, on the day when many students
(at least 100) were to complete the LAG, the link was dysfunctional. Google
acknowledged this error and fixed it within two days’ time, which allowed some
students to complete the survey. However, it was too late for most of the students to
complete the questionnaire because school was no longer in session and it was
difficult to track students down via email. Unfortunately, this error also affected the
timetable for collecting qualitative data because the last few days of the semester
were spent trying to sort out the technological issue and copying and mailing paper
versions of the questionnaire instead of contacting students to interview them.
Nevertheless, the current sample size was large enough to determine validity and
reliability of the LAG questionnaire even though a larger sample size could have
better informed the quantitative results.
3.7.2 Ambiguity Concerning Treatment Conditions
Another issue confounding the results for the effectiveness of virtual laboratories
was a certain degree of ambiguity about the nature of the treatment conditions.
While the demarcation of teaching methods for the experimental and control groups
was clear to the researcher, it was perhaps less clear to the participating teachers.
The researcher wished to grant the participating teachers as much freedom and
independence as possible in the implementation of the study so as not to interfere
with their standards of teaching and preparation of students for end-of-year
standardized examinations in biology. However, the lack of uniformity both in how
teachers taught classes in the control condition (i.e. without use of virtual
laboratories) and in the experimental condition (i.e. use of virtual laboratories)
proved to confuse students’ perceptions of the definition of a ‘virtual laboratory’.
For instance, if a teacher also used an educational computer game with students in
the experimental group, the students might have thought that such a computer game
was also a ‘virtual laboratory.’
In some
instances, hands-on laboratories were conducted with students in experimental
classes who were only supposed to complete virtual laboratories. Two teachers had
their VL classes complete four virtual laboratories within two weeks, which perhaps
caused fatigue and boredom in students because they were overexposed to the same
medium.
At one point, a participating teacher stated that “perhaps the line between virtual and
actual is getting blurry!” This statement indicates the lack of clarity about the
definition of a virtual laboratory amongst participants. While not wanting to burden
participating teachers with theoretical discussions concerning the definition of a
virtual laboratory, perhaps the researcher should have more clearly restricted what
sorts of activities should have been employed or avoided in VL classes and non-VL
classes. This is further discussed in Chapter 5 as part of suggestions for further
research.
3.7.3 Technical Difficulties
A key factor that might have affected the outcomes of this study was the availability
of resources. Participants in the experimental condition required computers in good
working order with an uninterrupted Internet connection in order to complete virtual
laboratories. Although participating teachers initially indicated that their schools
provided these resources, as often transpires in large school environments with
constrained resources, access to these materials was not without problems. Some
teachers and students reported that a particular computer or the Internet link to a
virtual laboratory was not in good working order, so that students had to work in
pairs. In other instances, some students could not
complete the virtual laboratory because of a lapse in the Internet connection.
Perhaps the experience of completing a virtual laboratory in this manner might have
influenced the students’ responses to LAG items measuring the learning
environment, attitudes towards science, and achievement.
3.7.4 Instrument Administration
Most detrimental to the administration of the questionnaire was the lack of clarity
about the terms used to refer to virtual laboratories in various items. The subject of
the questionnaire item was often generalized so that students in both VL and non-
VL classes could respond. However, in trying to avoid introducing bias, the
researcher overcompensated by generalizing terms (such as ‘activity’ instead of
‘virtual laboratory’), which possibly gave rise to confusion amongst students, whose
recorded responses might have differed had a more specific term been used in the
item. Therefore, a degree of clarity might have been sacrificed for the sake of neutrality.
Perhaps a different version of the questionnaire should have been administered to
participants in the VL classes and non-VL classes for simplification purposes, as
will be suggested in Section 5.4.
3.8 Summary
This study combined quantitative and qualitative methods of data collection.
Quantitative methods included the use of a questionnaire called the Laboratory
Assessment in Genetics (LAG) administered to all participants at the end of the
treatment period. The scales were adopted from previously validated questionnaires
that measure students’ perceptions of the learning environment, such as the Science
Laboratory Environment Inventory (SLEI) and the Technology-Rich Outcomes-
Focused Learning Environment Inventory (TROFLEI), in addition to scales
measuring students’ attitudes towards science from the Test Of Science Related
Attitudes (TOSRA) and an achievement scale with items borrowed from
standardized biology examinations. For a sample of 322 US biology students,
learning environment and attitude scales were tested for validity and reliability,
including factor analysis, internal consistency reliability (Cronbach alpha
coefficients), discriminant validity (mean correlation with other scales), and the
ability of the learning environment scales to differentiate between classrooms
(ANOVA).
Finally, concerning the effectiveness of using virtual laboratories, data from the
LAG questionnaire were subjected to a two-way multivariate analysis of variance
(MANOVA) with the learning environment scales and student outcomes (attitudes
and achievement) as the dependent variables, and with instructional method and sex
as the independent variables. Then, when Wilks' lambda criterion revealed
statistically significant findings for the set of dependent variables as a whole, the
univariate two-way ANOVA was interpreted separately for each dependent variable
(students’ perceptions of their learning environment, their attitudes, and
achievement). To quantify the magnitude of the difference between two groups (i.e.
either between instructional methods, or between males and females), effect sizes
were also calculated. Analyses explored all possible interactions between the two
independent variables (instructional method and sex) for each type of dependent
variable (learning environment, attitudes, and achievement).
After quantitative data analysis, qualitative data were collected from six students
and three teachers who were interviewed to explore underlying themes that lent
further insight into the quantitative data. For the purposes of this study, ten open-
ended questions were constructed based on the LAG questionnaire to use in semi-
structured interviews. Responses from interviews were recorded, fully transcribed,
and subjected to content analysis and analytic induction.
Chapter 4
“It is a capital mistake to theorize before one has data. Insensibly one begins to
twist facts to suit theories, instead of theories to suit facts.” – Arthur Conan Doyle
4.1 Introduction
This chapter reports and interprets the findings of this study. Each of the research
questions is addressed by analyzing data and then determining whether the
hypothesis for that question is supported.
As described in Chapter 3, the majority of this study was based on quantitative data
collected using the Laboratory Assessment in Genetics (LAG). Qualitative data
stemming from semi-structured interviews were used in an attempt to fill gaps in the
quantitative data, and to provide a more holistic view of the effectiveness of virtual
laboratories.
This chapter first presents results for validation of the instrument used to collect
quantitative data, the LAG. The LAG contains 74 items in nine scales adapted from
several other validated questionnaires: the Science Laboratory Environment
Inventory (SLEI), the Technology-Rich Outcomes-Focused Learning Environment
Inventory (TROFLEI), the Test of Science-Related Attitudes (TOSRA), and
achievement items from state standardized examinations in Biology. More
specifically, two scales (Enjoyment and Inquiry) were adapted from the TOSRA
(Fraser, 1981) to assess students’ attitudes towards science, in general. These scales
originally included some negative items but were modified to be positively worded
in order to increase the readability and clarity of the LAG for students. Some items
were also replaced with new items modeled after the original items contained in the
TOSRA and wording was generalized to include all types of activities in science
lessons.
In order to assess readability, the LAG was first given to a pilot sample of students
and, based on their comments, the number of items and the item wording were
adjusted. In the main study that took place one year later, the LAG was
administered to 322 students, aged 13–18 years, in 12 US public school classes from
Massachusetts, Pennsylvania, and Virginia.
Qualitative data were obtained from this same sample; students and teachers from
these 12 classes were given the opportunity to be interviewed by the researcher and
their responses were recorded, transcribed and analyzed. These comments
accompany the quantitative data, in an attempt to further explain the results, and
they are interspersed throughout Section 4.4.
Therefore, this chapter reports results for validation of the various parts of the LAG
in Section 4.2, for associations between perceptions of the learning environment
(SLEI, TROFLEI) and attitudes (TOSRA) and achievement in Section 4.3, and for
the effectiveness of virtual laboratories in Section 4.4, including results for the
differential effectiveness of virtual laboratories for males and females.
4.2 Validation of the Laboratory Assessment in Genetics (LAG)

In order to address the first research question below, the scales composing the LAG
were administered to 322 US students, aged 13–18 years, in 12 classes.
Research Question 1: Are scales from the Test Of Science Related Attitudes
(TOSRA), Science Laboratory Environment Inventory (SLEI), and
Technology-Rich Outcomes-Focused Learning Environment Inventory
(TROFLEI) questionnaires valid and reliable when used with a sample of
high school students taking biology in the US?
This section reports the factor structure (4.2.1), internal consistency reliability
(4.2.2), and discriminant validity (4.2.3) for learning environment scales and attitude
scales. Section 4.2.4 focuses on the ability of the learning environment scales to
differentiate between classrooms. Validation of the achievement section of the
LAG, comprising the last 10 items, is also reported (Section 4.2.5).
4.2.1 Factor Structure

Because items were modified from the original scales from which they were
adapted, the internal structure of the various learning environment and attitude
scales was examined to ensure validity. Principal axis factoring with varimax
rotation (using Kaiser normalization) was employed to inspect the internal structure
of the 64-item survey containing learning environment and attitude scales when used
with the sample in this study. Principal axis factoring analyzes the inter-relationships
(shared variance) between all items in the questionnaire and groups them by their
common underlying dimensions or factors. Each dimension serves as a construct for
further analysis in this study. The criteria for retention of any item in its scale were
a factor loading greater than 0.40 on its own scale and less than 0.40 on all other
scales. Varimax rotation was applied because it is a commonly used scheme for
orthogonal rotation; it minimizes the complexity of the components by making the
large loadings larger and the small loadings smaller in order to identify each
variable with a single factor.
Table 4.1 provides the factor loadings for these eight attitude and learning
environment scales. Item numbers shown in the table refer to the question numbers
in the questionnaire (Appendix A). Table 4.1 also reports the percentages of
variance and eigenvalues for each scale.
141
Table 4.1 Factor Analysis Results for Attitude and Learning Environment Scales
Factor Loadings
Item No.   Inquiry   Enjoyment   Integration   Material Environment   Teacher Support   Task Orientation   Investigation   Differentiation
Q1 .722
Q3 .413
Q4 .670
Q5 .576
Q6 .658
Q8 .686
Q10 .712
Q11 .685
Q12 .617
Q13 .520
Q14 .547
Q15 .664
Q16 .651
Q18 .560
Q19 .494
Q20 .662
Q22 .672
Q23 .489
Q24 .572
Q25 .589
Q26 .512
Q27 .401
Q28 .613
Q29 .717
Q30 .426
Q31 .452
Q34 .721
Q35 .741
Q36 .735
Q37 .656
Q38 .549
Q39 .724
Q40 .717
Q41 .702
Q42 .695
Q43 .636
Q44 .751
Q45 .670
Q46 .594
Q47 .650
Q48 .721
Q49 .588
Q50 .641
Q51 .719
Q52 .481
Q54 .744
Q55 .646
Q56 .656
Q59 .489
Q60 .774
Q61 .436
Q62 .797
Q63 .861
Q64 .783
% Variance 24.75 7.48 5.58 4.47 3.67 3.44 2.88 2.78
Eigenvalue 15.84 4.72 3.57 2.86 2.35 2.20 1.84 1.78
N = 322 students in 12 Classes.
Factor loadings less than 0.40 have been omitted from the table.
Items 2, 7, 9, 17, 21, 32, 33, 53, 57, and 58 were removed from this analysis.
Factor analysis resulted in the retention of the original eight learning environment
and attitude scales of the LAG. No more than two items were removed per scale.
Therefore, the items retained supported the factorial validity of the scales modified
from the TOSRA, SLEI, and TROFLEI when used with the sample of 322 students
in this study.
Ten questions were eliminated from the learning environment and attitude scales for
further analysis because they had a factor loading lower than 0.40 on their own scale
and/or greater than 0.40 on any other scale. The following items were removed in
order to improve the internal consistency reliability and factorial validity: Questions
2 and 7 from Inquiry, Question 9 from Enjoyment, Questions 17 and 21 from
Integration, Question 32 from Material Environment, Question 33 from Teacher
Support, Question 53 from Investigation, and Questions 57 and 58 from
Differentiation. Only for the Task Orientation scale were all eight items from the
original version retained.
Table 4.1 indicates that the optimal factor solution occurred for the set of 54 items.
The percentage of variance for the different scales ranged from 2.78% for
Differentiation to 24.75% for Inquiry, with a total variance of 55.05% for all scales.
The eigenvalues ranged from 1.78 to 15.84. Results from the factor analysis
strongly supported the factorial validity of the scales from the TOSRA, SLEI, and
TROFLEI for this study’s sample of 322 students. These findings replicate other
validation studies (Aldridge & Fraser, 2003; Fraser, 1981; Fraser, Giddings, &
McRobbie, 1992, 1995), as discussed previously in Chapter 2.
4.2.2 Internal Consistency Reliability

Internal consistency reliability is a measure of the extent to which items in the same
scale measure a common construct. Cronbach’s alpha coefficient was used as the
index of internal consistency for this study. After the removal of invalid items from
the factor analysis, the alpha coefficient was calculated for the revised 54-item
questionnaire measuring learning environment perceptions and attitudes towards
science, for two units of analysis (the individual student and the class mean). Scales
with a Cronbach alpha coefficient greater than 0.60 were considered to have
adequate internal consistency reliability (De Vellis, 1991).
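For reference, Cronbach's alpha for a scale of $k$ items is

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\]

where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of the total scale score; alpha approaches 1 as the items share more of their variance.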
Table 4.2 Scale Mean, Standard Deviation, Internal Consistency (Cronbach Alpha
Reliability), Discriminant Validity (Mean Correlation with other Scales), and
Ability to Differentiate between Classrooms (ANOVA Results) for Learning
Environment and Attitude Scales
Scale   No. of Items   Unit of Analysis   Mean   SD   Alpha Reliability   Mean Correlation with other Scales   ANOVA Eta²
Integration 6 Individual 3.76 0.60 0.83 0.40 0.12***
Class Mean 3.90 0.22 0.96 0.64
Material 7 Individual 3.76 0.61 0.81 0.36 0.07***
Environment Class Mean 3.87 0.20 0.85 0.41
Teacher Support 7 Individual 3.67 0.80 0.91 0.36 0.17***
Class Mean 3.91 0.35 0.98 0.58
Task 8 Individual 3.92 0.71 0.91 0.30 0.07***
Orientation Class Mean 3.99 0.29 0.97 0.25
Investigation 7 Individual 3.45 0.74 0.90 0.41 0.14***
Class Mean 3.64 0.30 0.98 0.63
Differentiation 6 Individual 2.79 0.85 0.86 0.16 0.23***
Class Mean 2.85 0.36 0.95 0.20
Inquiry 6 Individual 3.53 0.74 0.81 0.23
Class Mean 3.61 0.25 0.93 0.43
Enjoyment 7 Individual 3.51 0.80 0.90 0.40
Class Mean 3.73 0.34 0.96 0.54
Achievement 10 Individual 2.83 1.38 0.76
Class Mean 2.96 0.99 0.96
***p<0.001
N = 322 students in 12 classes.
Table 4.2 shows that the reliability of the scales measuring students' perceptions of
their learning environment, as measured by the Cronbach alpha coefficient, ranged
from 0.81 to 0.91 with the individual as the unit of analysis, and from 0.85 to 0.98
with the class mean as the unit of analysis. Cronbach alpha coefficients for the two
attitude scales adapted from the TOSRA were 0.81 and 0.90 with the individual as
the unit of analysis, and 0.93 and 0.96 with the class as the unit of analysis. These
high reliability estimates
are in agreement with past studies using scales from the TOSRA (Fraser, 1981; Teh
& Fraser, 1994).
These internal consistency reliability results are consistent with other studies using
scales from the SLEI (Fraser, 1998a; Fraser, Giddings, & McRobbie, 1995;
Lightburn & Fraser, 2007; Maor & Fraser, 1996; Martin-Dunlop & Fraser, 2007),
the TROFLEI (Aldridge, Dorman, & Fraser, 2004; 2003; Gupta & Koul, 2007) and
the TOSRA (Aldridge & Fraser, 2003; Fraser, 1981; Fraser, Giddings, & McRobbie,
1995; Koul, Fisher, & Shaw, 2011; Wolf & Fraser, 2008).
In general, reliability estimates in Table 4.2 are higher when the class mean was
used as the unit of analysis, as evidenced in other studies (Zandvliet & Fraser,
2005). Because all scales had Cronbach alpha coefficients greater than 0.60, they
demonstrated satisfactory internal consistency reliability for learning environment
and attitude scales.
4.2.3 Discriminant Validity

Discriminant validity results, in Table 4.2, show that most scales were reasonably
unique in the dimension that each assessed. For the classroom learning environment
scales, the mean correlation of a scale with the other scales varied from 0.16 to 0.41
with the individual as the unit of analysis and from 0.20 to 0.64 with the class mean
as the unit of analysis. For scales that measured attitudes towards science, the mean
correlations varied from 0.23 to 0.40 with the individual as the unit of analysis and
from 0.43 to 0.54 with the class mean as the unit of analysis. These findings suggest
that raw scores on these scales measure relatively unique aspects of the learning
environment and attitudes, despite some overlap. However, the factor analysis
results reported in Section 4.2.1 attest to the independence of factor scores.
Discriminant validity results are in agreement with findings from past studies using
some of the same scales from the SLEI (Fraser, 1998a; Fraser, Giddings, &
McRobbie, 1992, 1995; Lightburn & Fraser, 2007; Maor & Fraser, 1996; Martin-
Dunlop & Fraser, 2007), TROFLEI (Aldridge, Dorman, & Fraser, 2004; 2003;
Gupta & Koul, 2007), and TOSRA (Fraser, 1981; Teh & Fraser, 1994; Wolf &
Fraser, 2008).
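As an aside, the discriminant validity index used here (the mean correlation of each scale with the other scales) is simple to compute from a table of scale scores; in the sketch below, the DataFrame name `scale_scores` is a hypothetical placeholder:

```python
import pandas as pd

# scale_scores: hypothetical DataFrame with one column per LAG scale and one
# row per student (or per class when the class mean is the unit of analysis)
scale_scores = pd.read_csv("lag_scale_scores.csv")

corr = scale_scores.corr()
# Average each scale's correlations with the other scales, excluding the
# diagonal (its correlation of 1.0 with itself).
mean_corr_with_others = (corr.sum() - 1.0) / (len(corr.columns) - 1)
print(mean_corr_with_others.round(2))
```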
4.2.4 Ability to Differentiate between Classrooms

An ANOVA, with class membership as the main effect, was used to determine the
ability of each learning environment scale to differentiate between the perceptions of
the students in different classrooms. Students in the same class should have scores
on learning environment scales that are relatively similar to each other, but which
are different from the scores of students who are in different classes. Table 4.2
reports the ANOVA results, including eta² values to represent the proportion of
variance in scale scores amongst individual students accounted for by class
membership. Eta² values ranged from 0.07 to 0.23 for scales measuring students'
perceptions of the learning environment as measured by the SLEI and TROFLEI.
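In this one-way ANOVA, eta squared is simply the between-class share of the total variation,

\[
\eta^{2} = \frac{SS_{\text{between}}}{SS_{\text{total}}},
\]

so, for example, the value of 0.23 for Differentiation indicates that class membership accounted for 23% of the variance in students' Differentiation scores.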
4.2.5 Validation of the Achievement Scale

The achievement scale was developed by the researcher to assess students' overall
content knowledge of genetics. The scale included 10 items from valid and reliable
standardized examinations in Biology from states in which the majority of students
sampled in this study attended school: New York, Massachusetts, and Virginia (see
Appendix A).
To check the achievement scale for internal consistency reliability, an alpha
coefficient was calculated. This analysis resulted in an alpha reliability coefficient
of 0.76 with the individual as unit of analysis and of 0.96 with the class mean as unit
of analysis, as shown in Table 4.2. These results indicate that the 10-item
achievement scale was reliable.
Other methods were also employed to determine the validity of the achievement
scale. According to the ‘Bell Curve’ theory, scores on any measure of achievement
result in normal distributions for large populations (Herrnstein & Murray, 1996).
Therefore, if valid, this scale should show a relatively normal distribution for the
group of students in this study.
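One way to probe this assumption (a hedged sketch, not part of the original analysis) is a formal normality test on the 322 achievement scores; the array name `scores` and file name are hypothetical:

```python
import numpy as np
from scipy import stats

# scores: hypothetical array holding the 322 achievement scores (0-10)
scores = np.loadtxt("achievement_scores.txt")

stat, p = stats.normaltest(scores)  # D'Agostino-Pearson omnibus test
print(f"k2 = {stat:.2f}, p = {p:.3f}")
# A large p-value is consistent with approximate normality; a small one
# flags divergence, such as the excess of perfect scores of 10 noted below.
```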
[Histogram omitted: number of students (vertical axis) at each achievement score from 0 to 10 (horizontal axis).]
Figure 4.1 Frequency Distribution for Achievement (Mean = 5.67, SD = 2.76, N = 322)
The histogram in Figure 4.1 shows the distribution of achievement scores for all 322
students in this study. The pattern illustrated in the histogram is similar to typical
patterns of normal distribution (Herrnstein & Murray, 1996), except that more
students than expected received an achievement score of 10. The divergence from a
normal distribution might be explained by the relatively small sample size in this
study.
As well, statistical data are available online for students who took the biology
Massachusetts Comprehensive Assessment System (MCAS) throughout the state of
Massachusetts. As two items from the achievement scale were borrowed from this
examination, the researcher compared the percentage of students in this study’s
sample that correctly answered the questions with the percentage of students in
Massachusetts that correctly answered these same questions, as another measure of
validity. Results indicate that 68% of students (n = 53,296) taking the biology
MCAS in 2009 correctly answered a genetics-related item (Massachusetts
Comprehensive Assessment System (MCAS), 2009), whereas 70% of participants in
my study correctly answered the same item taken from that examination. For
another genetics item borrowed from the MCAS, 61% of those taking the
examination answered correctly, as did 61% of students in my study. These results show that
student responses in my study were similar to those of a larger population. This
finding coupled with the near normal distribution of achievement scores displayed in
Figure 4.1 supports the validity of the achievement scale.
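If one wished to formalize this comparison (the study reports only the raw percentages), a one-sample proportion test against the statewide value would suffice; the count below is an illustrative round number consistent with 70% of 322, not an exact figure from the data:

```python
from statsmodels.stats.proportion import proportions_ztest

# Roughly 70% of the 322 participants answered the item correctly;
# 225/322 is a hypothetical count consistent with that percentage.
stat, p = proportions_ztest(count=225, nobs=322, value=0.68)
print(f"z = {stat:.2f}, p = {p:.3f}")
# A non-significant result would support the claim that the sample
# responded similarly to the statewide MCAS population (68% correct).
```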
4.3 Associations between the Learning Environment and Student Outcomes

To answer the second research question, simple correlation and multiple regression
analyses, with the individual as the unit of analysis, were used to investigate the
relationship between student perceptions of the classroom learning environment and
the student outcomes of attitude towards science and achievement in genetics.
Simple correlation (r) was used to consider the bivariate relationship between each
student outcome (attitude or achievement) with each learning environment scale of
the Laboratory Assessment in Genetics (LAG). Multiple regression analysis was
applied to investigate the combined influence of the whole set of learning
environment scales on each student outcome, with the multiple correlation (R)
indicating the multivariate association between an outcome and the set of learning
environment scales. The standardized regression coefficient (β) was used to indicate
the contribution of each learning environment scale to the variance in student
attitude or achievement when other learning environment scales were controlled.
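A minimal sketch of these two analyses follows; the DataFrame `df` and its column names are hypothetical placeholders for the per-student scale scores. Standardizing all variables before fitting makes the ordinary least-squares coefficients the standardized betas (β):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("lag_scale_scores.csv")  # hypothetical per-student scores
env_scales = ["Integration", "MaterialEnvironment", "TeacherSupport",
              "TaskOrientation", "Investigation", "Differentiation"]

# Simple correlations (r) of each environment scale with one outcome
print(df[env_scales].corrwith(df["Enjoyment"]))

# Multiple regression: z-score every variable so coefficients are betas
z = (df - df.mean()) / df.std()
model = sm.OLS(z["Enjoyment"], sm.add_constant(z[env_scales])).fit()
print(model.params)     # standardized regression coefficients (beta)
print(model.rsquared)   # squared multiple correlation (R^2)
```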
Data for these analyses were collected using the LAG, a learning
environment questionnaire with an additional achievement scale (as described in
Section 3.4.1). For these analyses, the attitude and achievement scores served as the
dependent (criterion) variables, while the learning environment scales served as the
independent (predictor) variables.
This section reports the results for associations between the learning environment
and student attitudes (Section 4.3.1) and achievement (Section 4.3.2). Table 4.3
shows the simple correlations (r), standardized regression coefficients (β), and
multiple correlations (R) used to determine the extent of these associations.
4.3.1 Associations with Attitudes

Table 4.3 shows that each learning environment scale correlated significantly
(p<0.01) and positively with each of the student attitudes (Inquiry and Enjoyment),
indicating that positive perceptions of the learning environment are aligned with
improved students’ attitudes towards science. The learning environment scale of
Teacher Support showed the highest correlation with both attitude scales of Inquiry
(0.51) and Enjoyment (0.58) and the scale of Differentiation showed the lowest
correlation with both attitude scales of Inquiry (0.22) and Enjoyment (0.17).
As shown in Table 4.3, the multiple correlation coefficient (R) between the six
learning environment scales and attitude was 0.45 for the Inquiry scale and 0.70 for
the Enjoyment scale. These values were statistically significant (p<0.001),
suggesting that student attitudes toward science were related to student perceptions
of their learning environment. The coefficient of determination (R²), which is a
measure of the proportion of variance in attitudes explained by the learning
environment scales, was 0.20 (0.45²) for Inquiry and 0.49 (0.70²) for Enjoyment. This means that
learning environment scales were stronger predictors of Enjoyment than of Inquiry.
In order to further identify which of the six learning environment scales accounted
for variance in student attitudes, when the other five scales were controlled, the
standardized regression coefficients (β), shown in Table 4.3, were examined. Three
learning environment scales (Material Environment, Teacher Support, and
Investigation) were statistically significant (p<0.05), positive, independent
predictors of both attitude scales, whereas two scales (Integration and Task
Orientation) were statistically significant, positive, independent predictors of only
the Enjoyment attitude scale. The learning environment scale of Differentiation was
a statistically significant independent predictor of neither attitude scale.
4.3.2 Associations with Achievement

The simple correlation analysis reported in Table 4.3 reveals statistically significant
(p<0.01) and positive associations between three learning environment scales
(Integration, Material Environment, and Teacher Support) and achievement, while
the scale of Differentiation had a statistically significant (p<0.01) and negative
correlation with achievement. The learning environment scales of Task Orientation
and Investigation showed no statistically significant correlation with achievement.
As shown in Table 4.3, the multiple correlation between the six learning
environment scales and achievement was 0.34. This value was statistically
significant (p<0.001), suggesting that there is a multivariate relationship between
achievement and student perceptions of their learning environment.
In order to identify which of the six learning environment scales accounted for the
variance in student achievement, when the other five scales were controlled,
regression coefficients were inspected. Standardized regression coefficients (β)
indicated that the learning environment scales of Integration, Material Environment,
and Differentiation uniquely accounted for a significant (p<0.05) amount of variance
in academic achievement. On the other hand, Teacher Support, Task Orientation,
and Investigation scales were not statistically significant independent predictors of
achievement.
In another attempt to explain this finding, the six teachers involved in the study were
consulted regarding the amount and type of actual differentiation in their classrooms
during the implementation of the study. They admitted that not much differentiation
was provided. Therefore, perhaps the questionnaire items asking about
differentiation confused students, producing the mixed results reported in this
section.
As reported above, Differentiation had a statistically significant negative simple
correlation (r) with achievement but a positive standardized regression
coefficient (β). Thus, the bivariate relationship between differentiation and
achievement and the multivariate contribution for differentiation on achievement
present conflicting results. This is known as the ‘Suppressor Effect’, often found
with the addition of predictor variables that increase the value of R² and lower the
error term, resulting in inaccurate statistical significance of a prediction; this effect
is characteristic of low sample power (Thompson & Levine, 1997). Therefore,
results from this study concerning the relationship between Differentiation and
achievement are inconclusive.
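To make the suppressor effect concrete, the toy simulation below (illustrative only, not the study's data) shows how a predictor nearly uncorrelated with the outcome can still receive a sizeable regression coefficient and raise R² by absorbing nuisance variance from another predictor; the sign pattern observed for Differentiation corresponds to the related 'negative suppression' variant:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 322                             # same size as this study's sample
construct = rng.normal(size=n)      # the part of x1 genuinely related to y
nuisance = rng.normal(size=n)       # variance shared by x1 and the suppressor
x1 = construct + nuisance
x2 = nuisance                       # suppressor: tied to x1's noise, not to y
y = construct + rng.normal(size=n)

print(f"r(x2, y) = {np.corrcoef(x2, y)[0, 1]:.2f}")   # near zero
fit = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()
print(fit.params)      # yet x2 receives a clearly negative coefficient
print(fit.rsquared)    # and R^2 exceeds that of a model with x1 alone
```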
Overall, the results of correlation analyses in Table 4.3, show that most learning
environment scales were positively correlated with the student outcomes of attitude
and achievement, which means that positive perceptions of the learning environment
are linked with improved attitudes towards science and better achievement. Such
links between the learning environment and students' attitudes and achievement
replicate past studies (Fraser, 2012; Lightburn & Fraser, 2007; Martin-Dunlop &
Fraser, 2007).
4.4 Effectiveness of Virtual Laboratories

To answer the third and fourth research questions regarding the effectiveness of
using virtual laboratories and its differential effectiveness for different sexes, data
were gathered from classes that engaged in virtual laboratories (the intervention) and
classes that did not.
Research Question 3: Is the use of virtual laboratories in high school science classes
effective in terms of students’
a) perceptions of their learning environment, b) attitudes towards science, and
c) achievement?
Among the six teachers who volunteered for the implementation of this study, each
teacher taught at least one class with the intervention and one class without the
intervention. The total sample for the study comprised 322 American
students from Grades 8–10. Over a treatment period of about 2–12 weeks, students
in the experimental group completed between four and eight virtual laboratory
experiments in genetics using computers that employed 'point-and-click' techniques
for manipulating various laboratory materials. Each of these virtual experiments
simulated a real, hands-on experiment and followed a typical experimental format
for which students observe phenomena, formulate hypotheses, set up controls,
follow procedures, test hypotheses, and analyze results. Students in the control
group continued learning and experimenting in their normal fashion, without the use
of virtual experiments; instructional methods for these classes included lectures,
textbook reading, hands-on experiments, and/or other activities. Further detail
regarding the sample, data collection, treatment conditions, and procedures followed
to implement this study are described in Sections 3.3 and 3.5.
Differences in LAG scale scores between instructional methods and sexes were
examined using a two-way multivariate analysis of variance (MANOVA) with the
learning environment scales from the SLEI and TROFLEI and student outcomes
(attitudes and achievement) as the dependent variables, and with instructional
method and sex as the independent variables. Because the multivariate test using
Wilks’ lambda criterion yielded statistically significant differences for the set of
dependent variables, the individual, univariate two-way ANOVA was interpreted
separately for each dependent variable (students’ perceptions of their learning
environment, their attitudes, and achievement), with the student as the unit of
analysis. This analysis enabled an exploration of all possible interactions between
both independent variables (instructional method and sex) and all three dependent
variables (students’ perceptions of their learning environment, their attitudes, and
achievement).
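As an illustration of this pipeline, the sketch below uses statsmodels; the column names (`method`, `sex`, and the scale scores in `df`) are hypothetical stand-ins rather than the study's actual variable names:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("lag_scale_scores.csv")  # hypothetical per-student data

# Two-way MANOVA: all nine dependent variables against instructional
# method, sex, and their interaction; Wilks' lambda appears in mv_test().
dvs = ("Integration + MaterialEnvironment + TeacherSupport + TaskOrientation"
       " + Investigation + Differentiation + Inquiry + Enjoyment + Achievement")
print(MANOVA.from_formula(f"{dvs} ~ method * sex", data=df).mv_test())

# Follow-up univariate two-way ANOVA, interpreted for each dependent
# variable only because the multivariate test was significant.
fit = ols("Enjoyment ~ method * sex", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))
```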
To quantify the size of instructional differences and sex differences, effect sizes
were also calculated to describe the ratio of variance in the dependent variable
attributable to the independent variable, while controlling for other independent
variables. The size of an effect is particular to the sample with which the test is
applied and is purported to be an important aspect of an intervention in addition to
statistical significance alone (Vacha-Haase & Thompson, 2004). In this study, two
different types of effect sizes were utilized: Cohen's d and eta squared (η²).
Cohen's d is the difference between two sample means divided by the pooled
standard deviation. Eta squared (η²) is a measure of the strength of association (or
effect size) based on the proportion of variance accounted for by the effect of the
independent variable on the dependent variable. The methods of statistical analysis
are also reviewed in Section 3.6.3.
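In standard notation, the two indices are

\[
d = \frac{\bar{X}_1 - \bar{X}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^{2} + (n_2 - 1)s_2^{2}}{n_1 + n_2 - 2}},
\qquad
\eta^{2} = \frac{SS_{\text{effect}}}{SS_{\text{total}}},
\]

where $\bar{X}_1$ and $\bar{X}_2$ are the two group means, $s_1^{2}$ and $s_2^{2}$ the group variances, and $n_1$ and $n_2$ the group sizes.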
First, a general overview of the results is provided (Section 4.4.1), covering the
effectiveness of virtual laboratories as well as the interaction between the
two independent variables of instructional method and sex. Then, Sections 4.4.2
and 4.4.3 detail the results for each independent variable (instructional method and
sex) separately, while Section 4.4.4 reports the interaction effects that involve the
differential effectiveness of virtual laboratories for different sexes.
4.4.1 Overview of Results

The results of the two-way ANOVAs for instructional method, student sex, and the
interaction between independent variables (instructional method and sex) are
displayed in Table 4.4 for the six learning environment and three student outcome
scales.
Table 4.4 Two-Way Analysis of Variance (ANOVA) for Instructional Method and
Sex for each Scale of the LAG
Scale   Instructional Method (F, Eta²)   Student Sex (F, Eta²)   Instructional Method × Sex (F, Eta²)
[Rows for the six learning environment scales and the three outcome scales omitted.]

Statistically significant interactions between instructional method and sex were
found for the learning environment scales of Material Environment
and Teacher Support and for the attitude scale of Inquiry. The effect sizes for
all three of these scales were small.
Detailed results for each independent variable (Instructional Method and Sex) are
discussed in Sections 4.4.2 and 4.4.3, respectively. As well, a more detailed report
of the interactions from the ANOVAs appears in Section 4.4.4.
4.4.2 Differences between Instructional Methods

This section reports in greater detail results for the third research question
concerning the effectiveness of virtual laboratories as tested on classes that used
these virtual laboratories and classes that did not.
To further clarify the instructional differences presented in Table 4.4 above, more
details are furnished in Table 4.5, including the mean score, standard deviation, and
effect size for the difference in scores between VL and non-VL classes for each
learning environment scale and student outcome (attitudes and achievement). The
mean was obtained by dividing the original scale mean by the number of items in
each scale to allow for meaningful comparison of average scores across scales of
varying lengths. F values from the ANOVA in the first column in Table 4.4 are
repeated in Table 4.5 below. Effect sizes (Cohen’s d values) displayed in Table 4.5
illustrate the number of standard deviations from the mean for any differences found
between classes that had the intervention and classes that did not.
The mean scores represent the average of students’ scores on each scale which
ranged from 1 (Strongly Disagree) to 5 (Strongly Agree). Because achievement
scores were measured from 0 to 10, with each score representing the number of
items each student answered correctly out of 10 items, the final score was divided by
2 (e.g., a raw score of 7 became 3.5) to allow for consistent and meaningful
comparisons of scores between all scales.
Table 4.5 Item Mean, Item Standard Deviation and Difference Between Instructional
Methods (ANOVA Results and Effect Size) for each Learning Environment
and Student Outcome Measured by the LAG
Scale   Mean (Non-VL, VL)   Standard Deviation (Non-VL, VL)   Difference (F, Effect Size)
Learning Environment
[Rows omitted.]
Outcomes
[Rows omitted.]
Sample size = 322 (control group = 153 and experimental group = 169).
Differences in the means between classes using virtual laboratories and classes that
did not use virtual laboratories are illustrated in Figure 4.2. While the mean is
reported on a scale from 1 (Strongly Disagree) to 5 (Strongly Agree), the graph only
shows the scale of 2 (Disagree) to 4 (Agree) in order to magnify the difference
between the means. No mean scores fell below 2 or above 4. The first six scales
measure students’ perceptions of the learning environment, the next two scales
measure students’ attitudes, and the last scale measures achievement.
According to the results shown in Table 4.5 and Figure 4.2, students in the
experimental group, using virtual laboratories, did not perceive their learning
environment markedly differently from students in the control group who did not engage
in virtual laboratories. Statistically significant differences were not found for any of
the learning environment, attitude, or achievement scales. Furthermore, effect sizes
for using virtual laboratories were small, ranging from 0.03 to 0.12
standard deviations across the different dependent variables. Although these findings do
not support the effectiveness of virtual laboratories, they also provide no evidence
that using virtual laboratories negatively impacted on students’ perceptions of the
learning environment, attitudes, or achievement.
[Figure 4.2: item means for each LAG scale, shown separately for non-VL and VL classes.]
For most of the learning environment scales in Table 4.5 (namely, Teacher Support,
Task Orientation, Investigation, and Differentiation), as well as for the attitude
scales of Inquiry and Enjoyment, the mean for the experimental group using virtual
laboratories was slightly greater than the mean for the control group for which no
virtual laboratories were used. These patterns are also demonstrated in Figure 4.2.
Conversely, the means for the VL classes for the dimensions of Integration, Material
Environment, and Achievement were slightly lower than the means for the non-VL
classes.
These patterns are broadly consistent with findings from other studies of virtual
laboratories (Machotka & Nafalski, 2003; Raineri, 2001; Toth, Morrow, & Ludvico,
2009; Yu, Brown, & Billet, 2005).
The quantitative difference between the two comparison groups for Teacher Support
was almost negligible. Similarly, replies about Teacher Support during interviews
indicated no differences between classes that used virtual laboratories and the
classes that did not. Teacher A noted, “Assistance [between the two groups] was
about the same. Maybe a little more explanation [was required for VL classes just]
to get started.” As well, Teacher M agreed but added, “I would say that the non-VL
students needed more teacher assistance. The virtual labs that I chose had very clear
directions and stepped students through processes at a good pace for them. The
main questions from the VL group were more to do with navigation of the site,
rather than content.” Students tended to agree that teacher assistance was similar for
the two treatment groups. In response to being asked whether she needed help with
the virtual laboratories, Lara answered “Usually it was just because I put the website
in wrong, but it was never just to get things done." Therefore, the type of support
needed differed between treatment conditions: in the non-VL classes, more
instructional assistance was needed while, in the VL classes, more technical
assistance was needed; the overall amount of teacher support, however, was roughly the same.
Additionally, questions about Teacher Support in both the written questionnaire and
the semi-structured interview caused mixed understanding among students about
whether they referred to the support from the physical teacher or from the virtual
program. For instance, Lara stated “I think it’s not as easy understanding science
when you have one teacher per 20-something students and I think it’s easier when
you have one computer working with you one-on-one; I think it helps a lot more and
you get a lot more out of it.” In this instance, virtual laboratories represent the
teacher and the personalized feedback is equivalent to the support that a teacher
would offer. Perhaps this misunderstanding of the term ‘Teacher’ (i.e. either the
actual teacher or instruction from a computer program) caused the absence of clear
quantitative results; in future research, the wording of items on the Teacher Support
scale ought to be reconsidered when the scale is applied to settings in which other
methods of instruction are used.
The highest score for students’ perceptions of their learning environment was for the
scale of Task Orientation, even though the difference between the two groups was
small and non-significant. Regardless of instructional method, students in these
science classes seemed motivated to complete the tasks set. Interest in the aspect of
Task Orientation originally motivated the researcher to initiate this study because
virtual laboratories contain an extrinsic motivational element that lends itself to task
completion, as explained in Section 1.2. However, quantitative and qualitative data
showed no differences between students who used virtual laboratories and students
who did not in terms of Task Orientation.
Responses from students and teachers during qualitative data collection reflected the
high quantitative score for Task Orientation amongst both groups. All four students
in the experimental group and two students in the control group noted that they were
motivated to complete their work. As well, teachers noted that they did not observe
any differences between the classes regarding motivation to complete the activities,
as indicated by the quantitative data. Thus, it can be inferred that motivation to
complete tasks, as measured by Task Orientation, is not an outcome of some
extrinsic factor, such as virtual laboratories; rather, it is intrinsic motivation that
might be a predictor of the degree of task completion for any activity, whether
innovative or traditional. As Teacher M said, “I think that the motivation differs
among students, not between the two classes [VL versus non-VL].” As such,
perhaps the scale of Task Orientation could be further delineated into extrinsic
motivation (the intended measurement in this study) and intrinsic motivation (the
measurement perceived by students and teachers in this study) when applied to
measuring the effectiveness of an innovative intervention.
Comments from student and teacher interviews also reflected the lack of a
significant instructional difference for Investigation. Amongst both treatment
groups, students at this maturity level seem to prefer, or have been conditioned to
prefer, prescribed instructions and clear guidelines, allowing them to feel more
control and preventing them from straying too far from the expected result of the
experiment. As Lara in a VL class confided “I’d rather not have to go back and do
things a million times because I messed up; I’d rather get it right the first time and
learn from it.” Erica in the control group also related: “I prefer the teacher giving us
a set of instructions.” These observations suggest that the implementation of
innovative interventions that aim to increase students’ sense of Investigation might
be more successful with more senior students and/or in non-traditional environments
where students are already encouraged to investigate independently.
According to Table 4.5, the difference of 0.12 standard deviations between the
means of VL classes and non-VL classes was the greatest for the scale of
Differentiation, albeit still not statistically significant. No major differences
between the groups were noted during interviews with students and teachers.
However, students in the VL classes commented that they were allowed to go on to
the next task once they had completed the previous one; this practice is part of the
self-paced nature of virtual laboratories. Teacher A observed, “They [virtual
laboratories] also allowed the more advanced students to move more quickly
through the labs.”
Qualitative data were also obtained for the two attitudes scales, for which means
were higher (albeit not significant) for VL classes than non-VL classes. Teachers
and students did not observe any differences regarding the level of inquiry between
instructional methods. However, the researcher noted a theme that emerged from
student interviews based on the Inquiry scale: students preferred hands-on activities
and the opportunity to collaborate with other students, both being features present in
traditional ‘wet-labs’ and absent from virtual laboratories; these features are both
aspects of Inquiry but such inquiry-driven activities might not necessarily have
resulted in mastery of concepts or skills. Hayley gave numerous examples of messy,
shock-provoking hands-on activities that piqued her sense of Inquiry, such as "you
take the egg and you either put it in vinegar or in syrup…the egg was huge, …it
was disgusting!” However, Hayley was unable to explain the concept learned from
such activities. Lara’s comment also revealed this theme: “…not me [but] a lot of
people enjoy doing the [hands-on] labs like mixing the chemicals and dissecting and
it wouldn’t be as enjoyable for them to just be on the computer clicking on things.
But I actually thought it was better because the computer helped [me] to understand
things and it would say ‘good job, you understand this now’ or it would say ‘no you
didn’t do this right, try again’…”. Therefore, while higher levels of Inquiry were
aligned with hands-on laboratories, according to student interviews, the level of
inquiry did not necessarily result in greater learning, which was a separately
measured dimension.
Regarding Enjoyment, Table 4.5 shows a mean score of 3.53 for VL classes and
3.48 for non-VL classes (effect size of 0.06). As opposed to traditional ‘chalk and
talk’ instruction, investigative laboratory activities, whether hands-on or virtual, are
likely to promote feelings of enjoyment as suggested in Section 2.5, which justifies
the tendency of both groups to score closer to the ‘agree’ side of the scale.
Teachers’ assessments of students’ enjoyment in using virtual laboratories showed a
different perspective, one that did not necessarily offer any advantage for virtual
laboratories with regard to Enjoyment. Teacher A related, “I think the students liked
the VL classes because they added some variety to the usual classroom
environment.” Similarly, Teacher M agreed, “In my classroom, I would use virtual
labs as another tool in addition to hands-on-labs, class work, and lecture. Virtual
labs are great for labs where you might not have the equipment to do the labs, and
they are a way to preview/review other work that you have done in class.”
Conversely, the means for the VL classes for the learning environment dimensions
of Integration and Material Environment were slightly lower than the means for the
non-VL classes. The finding concerning Integration (albeit not significant) might
suggest that the successful implementation of virtual laboratories depends on how
well the particular teacher integrates the intervention with the content of the
curriculum, but it might not necessarily indicate anything about the integrative
nature of virtual laboratories themselves. That is, students’ perceptions for the
dimension of Integration might be more affected by differences amongst teachers,
than by the instructional method. Comments from student interviews did not differ
all that greatly between those who used virtual laboratories and those who did not,
thus supporting the quantitative results. As well, all participating teachers claimed
that they fully integrated the laboratory activities into the topics explored at the time,
irrespective of instructional method.
The difference in the means for Material Environment was slightly negative but
nearly negligible. Qualitative data obtained from interviews also supported this
finding. Responses from students, regarding the equipment used in science
laboratories, were mixed. Students in VL classes reported that computers were
“slow” or that the number of available computers was insufficient for the number of
students in the class, while Teacher G mentioned, “there were not enough working
laptops”. Even if there was ample computer access, Teacher M explained that
“there were times when the websites that we were trying to access were jammed up,
and so they had trouble getting to a lab.” The functionality of equipment in non-VL
classes was also variable. Lara mentioned that wet-lab equipment was inadequate
and that “microscopes definitely were something we had a problem with
because…[they] were pretty old…and it took away from our learning time so that
was a bit of a pain.” Therefore, for schools where the condition of digital equipment
far surpasses the condition of traditional laboratory equipment, a phenomenon more
common in recent years, the use of virtual laboratories might be beneficial. Teacher
M agreed: “The biggest difficulty with hand-on labs in genetics is the expense and
technical expertise to use more sophisticated equipment.”
Students in the VL classes also scored negligibly lower than students in the non-VL
classes in terms of achievement. Therefore, the quantitative data suggest that both
instructional methods were equally effective with regard to content retention and
understanding. These findings replicate results from the other small number of
studies using virtual laboratories (Cobb et al., 2009; Cross & Cross, 2004; Javidi &
Sheybani, 2006; Stuckey-Mickell & Stuckey-Danner, 2007).
Qualitative data showed that all four students interviewed from VL classes reported
that they had a good understanding of genetics (the content for the virtual
laboratories), scored highly on their particular class examinations, and were able to
explain these concepts to the interviewer orally. Out of the two students in the non-
VL classes, one reported that she had a good understanding of genetics and the other
did not. Student interview responses from the two groups did not seem to indicate
any advantage in using virtual laboratories with regard to achievement. As well,
Teacher M noted, “I’m not sure it [VLs] made a difference. The larger factors may
be student ability and motivation.”
The theme noted in the discussion of the Inquiry scale resurfaced in interview
responses concerning achievement: the understanding of content did not correlate
with the sense of intrigue from ‘hands-on’ investigations. For instance, Teacher M
observed students “…doing less mental processing of hands-on labs and being more
partner-dependent. In the VL [virtual laboratory], they had to do the thinking on
their own.” In this way, virtual laboratories might have required students to reflect
on the content and engage in higher-level inquiry-based skills, as opposed to the
more hands-on approach of traditional laboratories that were devoid of such higher-
level skills. Virtual laboratories provided an environment free from ‘hands-on’
distractions. This theme is supported by the literature: simulations and virtual
laboratories are likely to increase conceptual understanding (Marbach-Ad, Rotbain,
& Stavy, 2008; Raineri, 2001; Toth, Morrow, & Ludvico, 2009; Tsui & Treagust,
2004) and traditional laboratories focus more on design skills and the scientific
process (Ma & Nickerson, 2006; Toth, Morrow, & Ludvico, 2009).
Therefore, to conclude the findings obtained from qualitative data, it seems there are
two components to laboratories (whether innovative or traditional) that might
necessitate separate measurements in future studies: 1) exploration, which includes
investigation, use of physical tools and techniques (‘hands-on’), and getting dirty,
and 2) understanding what the laboratory is investigating and how it relates to the
content learned in class.
4.4.3 Differences between Sexes

Table 4.6 Item Mean, Item Standard Deviation and Sex Difference (ANOVA Results and
Effect Size) for Each Learning Environment Scale and Student Outcome
Measured by the LAG
Scale   Mean (Female, Male)   Standard Deviation (Female, Male)   Difference (F, Effect Size)
Learning Environment
Integration   3.70, 3.83   0.61, 0.58   3.83*, 0.22
[Remaining learning environment rows omitted.]
Outcomes
Inquiry (Attitude)   3.46, 3.60   0.72, 0.75   3.06, 0.19
[Remaining outcome rows omitted.]
To further understand the differences presented in Table 4.4, more details are
furnished in Table 4.6, including the mean score, standard deviation, and difference
between males and females for each learning environment scale and student
outcome (attitudes and achievement). F values for sex differences from the
ANOVAs in Table 4.4 are repeated in Table 4.6. As for instructional method
differences, effect sizes are displayed in Table 4.6 to illustrate the magnitude of
differences found between female and male scores expressed in standard deviation
units. These mean scores are also displayed graphically in Figure 4.3 to show sex
differences in learning environment, attitudes, and achievement scales.
Table 4.6 reveals statistically significant differences (p<0.05) between males and
females for the learning environment scales of Integration and Differentiation.
Males perceived these aspects of their learning environment to be more positive than
females. These differences were associated with small effect sizes (0.22 and 0.24
standard deviations, respectively). A statistically significant difference (p<0.01)
also emerged between males and females for the attitude scale of Enjoyment, with
males reporting more enjoyment in science than females, and with magnitude that
can be considered small to medium (0.33 standard deviations).
[Figure 4.3: item means for each LAG scale, shown separately for females and males.]
The magnitude of differences for those scales for which sex differences were non-
significant ranged from 0.04 to 0.20 standard deviations (all small). Examination of
the means in Table 4.6 also clarifies the direction of these differences. Although
most differences between the sexes were small and non-significant, a pattern still
emerged: males scored higher than females on nearly all scales (i.e. Integration,
Material Environment, Teacher Support, Investigation, Differentiation, Inquiry,
Enjoyment, and Achievement) except for Task Orientation, for which females
scored higher than males.
Integration measures the extent to which regular science classes and laboratories are
related. In this case, males perceived the laboratory activities to be more relevant to
the content learned in class than did females. If males enjoy the laboratories more,
as indicated by the results of this study as well as other studies (see below regarding
Enjoyment), then they might perceive a stronger connection between the
laboratories and their science classes than do females. However, this finding is
inconsistent with other studies of the SLEI, which indicated that females perceived
more Integration than males (Fraser, Giddings, & McRobbie, 1995; Kijkosol, 2005).
Differentiation measures the extent to which work assigned is individualized for the
pace and level of each student. Males in this sample perceived that they completed
tasks at a different pace and level from their female peers, contributing to the
significant difference found in this study. Differentiation can be an aspect of the
broader phenomenon present in male behavior during laboratory activities, as
explained by the qualitative data below and in Section 4.4.4.
The attitude scale of Enjoyment measures the extent to which students enjoy science
lessons. According to the results of this study, males enjoyed their science classes
significantly more than females. This phenomenon is well documented in the
literature (Neathery, 1997; Oakes, 1990; Raaflaub & Fraser, 2002; Wolf & Fraser,
2008), suggesting that males typically derive greater enjoyment from science, and
the ensuing laboratory activities, than females.
Qualitative data echoed these patterns. One interviewee observed that the boys
were "just joking around and girls were more quiet" and focused, so girls "got more
answers than boys.” She also commented that the boys both create more
distractions but can also work better with distractions, whereas the girls “need quiet
to concentrate.”
4.4.4 Differential Effectiveness of Virtual Laboratories for Different Sexes

Statistically significant interactions between instructional method and sex were
found for three dependent variables. Table 4.7 repeats the results from the two-way ANOVAs
(previously reported in Table 4.4) for the interaction between instructional method
and sex. The presence of a statistically significant instruction-by-sex interaction
was used to identify the differential effectiveness of virtual laboratories for males
and females.
Table 4.7   Interaction between Instructional Method and Sex (ANOVA Results and Effect Size), with Means and Standard Deviations for each Group
Scale   Group   Mean (Non-VL, VL)   SD (Non-VL, VL)   F (Interaction)   Eta²
Inquiry (Attitude)   Female   3.51, 3.41   0.68, 0.75   5.03*   0.03
                     Male     3.47, 3.74   0.80, 0.68
[Remaining rows omitted.]
For each scale, Table 4.7 also displays the mean and standard deviation separately
for four groups, namely, males in the control group (Non-VL), males in the
experimental group (VL), females in the control group (Non-VL), and females in the
experimental group (VL).
The average item means reported in Table 4.7 can be used in the interpretation of
the statistically significant interactions between method of instruction and sex.
Means also have been graphed in Figures 4.4–4.6 for the three significant
interactions.
[Line graph, Material Environment: mean scores (approximately 3.3–3.8) for females and males in non-VL and VL classes.]
Figure 4.4 Differential Effectiveness of Virtual Laboratories for Females and Males for
the Learning Environment Scale of Material Environment
This pattern suggests that males in VL classes might feel that laboratory equipment
and materials, such as the technology required for virtual laboratories, were
adequate, whereas females perceived this less positively. Conversely, males in non-VL classes
perceived the functionality of equipment used in traditional laboratories slightly less
favorably than females. This finding is supported by results from other studies that
show a significant difference between instructional methods for Material
Environment (Lightburn & Fraser, 2007; Maor & Fraser, 1996) and by the
differential effectiveness reported for an intervention for males and females in terms
of Material Environment (Quek, Wong, & Fraser, 2005).
Qualitative data also confirmed the more positive perceptions of learning media (i.e.
materials) amongst males in the VL classes compared to females. As Teacher A
observed, "Perhaps there was a slightly greater interest on the boys' part [rather than
the girls], simply because some of the [virtual] labs were much like a video game.”
Literature suggests that boys are more engaged with interfaces that mimic video
games (Brotman & Moore, 2008; Farenga & Joyce, 1997; Hanson, 2009) and,
because virtual laboratories share a similar interface, males might be more open to
and perceive greater functionality in equipment that engages them. The virtual
laboratory interface, as in gaming, gives the user more control over the results and,
as Wolf quipped, "males prefer to have a sense of control over the experience and
that such control is a motivating factor for them" (2006, p. 118). On the other hand,
females did not seem to be as affected by the medium for learning.
[Line graph, Teacher Support: mean scores (approximately 3.5–3.9) for females and males in non-VL and VL classes.]
Figure 4.5 Differential Effectiveness of Virtual Laboratories for Females and Males for
the Learning Environment Scale of Teacher Support
This finding might be a reflection of the fact that males are more willing to explore
innovations than females and will ask for, and therefore receive, more assistance
from their teachers in so doing. In contrast, females might be more comfortable
eliciting and consequently receiving teachers’ assistance in the traditional
environment to which they are more accustomed. Such a pattern for perceptions of
increased Teacher Support by females (in traditional classrooms) replicates past
research (Raaflaub & Fraser, 2002; Wong & Fraser, 1996).
Responses from student interviews did not seem to identify any differences between
VL and non-VL classes in sex differences for the dimension of Teacher Support.
Out of the six students interviewed, three females and two males reported that they
felt a high degree of Teacher Support, regardless of instructional method. Only one
student in the non-VL class admitted that she felt the teacher was unclear in his
instruction. Teachers stated that they did not notice any difference between the
different sexes or between the classes (VL versus non-VL).
[Line graph, Inquiry: mean scores (approximately 3.4–3.9) for females and males in non-VL and VL classes.]
Figure 4.6 Differential Effectiveness of Virtual Laboratories for Females and Males for
the Attitude Scale of Inquiry
Whereas males in VL classes held more positive perceptions for Inquiry,
females perceived relatively less Inquiry with virtual
laboratories than with traditional methods. Support for this finding is evident from
Wolf and Fraser’s (2008) study that reported the same pattern of more positive
attitudes for males than for females in an inquiry setting, as compared to slightly
more positive attitudes for females than for males in a non-inquiry setting.
Qualitative data also supported this finding. Students and teachers alike agreed that
males seemed to engage in experiential, inquiry-driven activities and therefore
perceived more Inquiry, but that females were liable to follow through with the
work required and gain more of an understanding from the activities, as demanded
by more traditional environments. The delineation between initial interest in an
activity and the motivation to understand the content of the activity, as well as
follow through with task completion, was a theme previously noted in qualitative
data at the conclusion of Sections 4.4.2 and 4.4.3. In this section, the delineation is
divided along sex differences. Interviewees observed that males tended to engage
because of initial interest of a novel activity (i.e. virtual laboratories), while females
were more motivated to understand content and complete tasks, regardless of the
activity, and that females might even be intimidated by such novel activities.
The trend for all three significant interactions is that there were greater differences
between males and females in VL classes than in non-VL classes. Males
consistently scored higher in the VL classes than did the females, whereas females
consistently scored higher in the non-VL classes than did the males. This is a
noteworthy pattern in that virtual laboratories seemed to be more beneficial for
males than females with regard to perceptions of the learning environment (on two
scales, Material Environment and Teacher Support) and attitudes (Inquiry), but
females tended to fare better in more traditional learning environments without such
technological interventions, as indicated by numerous studies (Aldridge & Fraser,
2008; Kijkosol, 2005; Koul, Fisher, & Shaw, 2011; Wolf & Fraser, 2008; Wong &
Fraser, 1996).
4.5 Summary
The Laboratory Assessment in Genetics (LAG), the instrument used for this study,
contains scales from two learning environment questionnaires (the SLEI and
TROFLEI) and an attitude questionnaire (the TOSRA), and some achievement
items. Validation of the LAG was based on 322 US students in 12 grade 8–10
classes.
Principal axis factor analysis with varimax rotation and Kaiser normalization led to
a reduction in the number of items on the LAG from 64 to 54, which increased the
validity and reliability of the six learning environment and two attitude scales. All
remaining items had a factor loading of 0.40 or higher on their own scale and lower
than 0.40 on any other scale; the total variance was 55.05% for all scales. Use of
Cronbach’s alpha reliability coefficient confirmed strong reliability for each of the
SLEI, TROFLEI, and TOSRA scales, as well as for the achievement items;
Cronbach alpha coefficients ranged from 0.76–0.91 with the individual as the unit of
analysis and 0.85–0.97 with the class as the unit of analysis. Discriminant validity
analysis supported the unique nature of each learning environment and attitude
scale. ANOVA results also indicated that all the learning environment scales could
differentiate between the perceptions of students in different classrooms. All these
results supported the validity and reliability of these scales for use with this sample
and add to past research that also validated scales from the SLEI (Fraser, Giddings,
& McRobbie, 1995; Lightburn & Fraser, 2007; Martin-Dunlop & Fraser, 2007), the
TROFLEI (Aldridge, Dorman, & Fraser, 2004; 2003; Gupta & Koul, 2007) and the
TOSRA (Aldridge & Fraser, 2003; Fraser, 1981; Fraser, Giddings, & McRobbie,
1995; Koul, Fisher, & Shaw, 2011; Wolf & Fraser, 2008).
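To illustrate how this reliability index is computed, the following minimal Python sketch applies the standard Cronbach's alpha formula to the item responses of a single scale; the data here are randomly generated placeholders, not the actual LAG responses.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]                            # number of items in the scale
    item_vars = items.var(axis=0, ddof=1).sum()   # summed variance of each item
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses: 322 students x 8 items, on a 1-5 frequency scale.
rng = np.random.default_rng(42)
teacher_support = pd.DataFrame(rng.integers(1, 6, size=(322, 8)),
                               columns=[f"item_{i}" for i in range(1, 9)])
print(f"alpha = {cronbach_alpha(teacher_support):.2f}")  # the real data gave 0.76-0.91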
Some learning environment scales were also positive, independent predictors
of achievement, even though Differentiation resulted in a significant negative
bivariate association with achievement. Overall, these results show strong links
between learning environment and attitude scales, and moderate links with
achievement; this is supported by past research (Fraser, 2012; Lightburn & Fraser,
2007; Martin-Dunlop & Fraser, 2007).
Finally, the effectiveness of virtual laboratories was investigated for LAG scales.
Differences in LAG scale scores between instructional methods and sexes were
examined using a two-way multivariate analysis of variance (MANOVA). Because
the multivariate test using Wilks’ lambda criterion yielded statistically significant
differences for the set of dependent variables, individual univariate two-way
ANOVAs were interpreted separately for each dependent variable (students’
perceptions of their learning environment, their attitudes, and achievement), with the
student as the unit of analysis. Effect sizes were also calculated to quantify the size
of instructional differences and sex differences. This analysis revealed no
significant differences for instructional method, and moderate significant sex
differences, with males reporting more positively for the scales of Integration,
Differentiation, and Enjoyment.
Chapter 5
Discussion
“Intuition becomes increasingly valuable in the new information society precisely
because there is so much data.” – John Naisbitt
5.1 Introduction
The aim of the current study was to evaluate the effectiveness of virtual laboratories,
an educational innovation, in terms of students’ perceptions of the learning
environment, their attitudes towards science and their achievement in science. The
differential effectiveness of such virtual laboratories was also explored for males
versus females.
Previous chapters included the rationale for this study in Chapter 1, the literature
that provided the context for this study in Chapter 2, the research methods used to
implement the study in Chapter 3, and the results for the four research questions that
guided this study in Chapter 4.
This chapter will first summarize the earlier chapters regarding research methods
and results (Section 5.2), explicate the significance of the results and implications
for educational research and practice (Section 5.3), point out the limitations of this
study and suggest directions for further research (Section 5.4), and provide a final
conclusion for the study (Section 5.5).
This study was first conceptualized based upon the researcher’s anecdotal
observation that the interest of students not normally engaged in science classes was
piqued by the use of virtual laboratories. Therefore, the researcher set out to test this
initial observation methodically to determine if virtual laboratories were indeed
effective in increasing students’ positive perceptions of the classroom, their
attitudes, and levels of achievement. Because this phenomenon seemed to initially
manifest especially for males, the researcher also wished to test the differential
effectiveness of virtual laboratories for male and female students.
The rationale for this study is based on a combination of improved standards in
science education, particularly for the topic of genetics, and the lack of improvement
in the resources necessary to enable students to attain those higher standards.
Virtual laboratories represent a possible method to narrow the gap between lack of
resources and higher standards in science education in that they allow students to
experience laboratory environments and experiments that would not otherwise be
possible in a high school classroom but with which students are required to be
familiar.
First, the relevant literature was reviewed concerning learning environments, which
provided both the framework for this study and one of the measures of effectiveness
for virtual laboratories. The field of learning environments seeks to understand the effects of
the psychosocial aspects of the classroom on learning, from the student’s
perspective. Over the last 40 years, the field of learning environments has become
more important in educational research and, along with its development, numerous
important questionnaires have emerged.
Next, the role of students’ attitudes towards science was explored by defining the
term ‘attitude’, explaining the assessment of attitudes, and reviewing the effect of
various educational interventions on students’ attitudes. Attitudes constituted
another criterion of effectiveness for virtual laboratories. The issue of student sex in
science education was also considered because girls and boys might respond
differently to virtual laboratories in terms of their perceptions of and attitudes
towards their classes. As well, because research reveals a gender gap in science
achievement (Hill, Corbett, & St. Rose, 2010; National Center for Educational
Statistics (NCES), 2012a; Scantlebury, 2012), it was deemed appropriate to
investigate the differential effectiveness of virtual laboratories for different sexes.
Finally, the topic of virtual laboratories was addressed within the context of
educational technology. Virtual laboratories are defined as electronic workspaces
that are based on interactive simulations of scientific experiments. Benefits include
an increased emphasis on conceptual understanding and the easing of constraints
such as time, safety hazards, geographic distance, and cost. While
technological interventions in the classroom are often predicted to be more useful
than studies have shown (Russell, 1999), they are not generally detrimental to
students’ learning and they are therefore considered to be effective alternatives for
certain educational experiences.
The remainder of this section reviews the research methods and key findings for
each research question (Sections 5.2.1–5.2.4) and also summarizes the qualitative
data gathered from students (5.2.5) and from teachers (5.2.6).
Research Question 1: Are scales from the Test Of Science Related Attitudes
(TOSRA), Science Laboratory Environment Inventory (SLEI), and
Technology-Rich Outcomes-Focused Learning Environment Inventory
(TROFLEI) questionnaires valid and reliable when used with a sample of
high school students taking biology in the US?
Scales to measure the learning environment were taken from the Science Laboratory
Environment Inventory (SLEI) (Fraser et al., 1992) and the Technology-Rich
Outcomes-Focused Learning Environment Inventory (TROFLEI) (Aldridge &
Fraser, 2003), both of which have been validated in numerous countries, in different
content areas, and with various age levels, as described in Section 2.2.2 (Aldridge &
Fraser, 2003; Fraser, 2012; Fraser, Giddings, & McRobbie, 1992). Scales to
measure students’ attitudes were borrowed from the Test Of Science Related
Attitudes (TOSRA), which also has been validated in numerous countries, in
different content areas, and with various age levels, as described in Section 2.3.2
(Fraser, 1981; Fraser, Aldridge, & Adolphe, 2010; Ogbuehi & Fraser, 2007; Welch
et al., 2012; Wong & Fraser, 1996). Items in each of these scales were modified; for
instance, negatively-worded TOSRA items were reworded positively, wording in the
learning environment scales was generalized to cover their application to virtual
laboratories, and some items in all scales were removed or added to ensure a
consistent number of eight items per scale. Validity and reliability analyses were
therefore necessary to check these modifications.
To assess the validity and reliability of the scales, the factor structures of the SLEI,
TROFLEI, and TOSRA items were checked using principal axis factoring with varimax rotation
for the sample of 322 students in 12 classes. Next, the internal consistency
reliability for each SLEI, TROFLEI, and TOSRA scale was used to measure the
extent to which items in a given scale assess the same construct. As well, the mean
correlation of a scale with the other learning environment and attitude scales was
used as an index to assess the uniqueness of each scale and ensure discriminant
validity. Furthermore, the ability of each SLEI and TROFLEI scale to distinguish
between different classrooms was assessed using an ANOVA.
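As an illustration of this factor-analytic procedure, the sketch below uses the third-party factor_analyzer package; the file name and column layout are hypothetical, and method='principal' is used as an approximation to principal axis factoring.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# Hypothetical file: one row per student, one column per questionnaire item.
items = pd.read_csv("lag_item_responses.csv")

fa = FactorAnalyzer(n_factors=8, rotation="varimax", method="principal")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=[f"factor_{i}" for i in range(1, 9)])
# Flag items that load at 0.40 or above on exactly one factor (their own scale).
clean = (loadings.abs() >= 0.40).sum(axis=1) == 1
print(loadings[clean].round(2))
```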
Key findings for the validity and reliability of scales used for the LAG reported in
Section 4.2 are summarized below:
• The optimal factor solution occurred for the set of 54 items in 8 scales from the SLEI, TROFLEI, and TOSRA, after the removal of 10 items to increase validity, with a total variance of 55.05% for all scales.
• The 54 remaining items from the SLEI, TROFLEI, and TOSRA showed high reliability and satisfactory discriminant validity for two units of analysis (individual and class mean).
• The learning environment scales (SLEI, TROFLEI) were able to differentiate between the perceptions of students in different classrooms.
• Achievement scores showed a close-to-normal distribution, and scores on selected items were similar to scores for a larger population for the same items.
As with past research, modified scales from the SLEI (Fraser, Giddings, &
McRobbie, 1992), TROFLEI (Aldridge & Fraser, 2003), and TOSRA (Fraser, 1981)
showed strong validity and reliability. The findings suggest that these scales can be
effectively utilized to assess student perceptions and attitudes in high school
classrooms in the US. The almost normal distribution of achievement scores is in
line with patterns of scores from most standardized examinations (Herrnstein &
Murray, 1996) and scores on selected items were similar to those of a larger
population (Massachusetts Comprehensive Assessment System (MCAS), 2009),
therefore suggesting that such items are appropriate measures of achievement in
genetics for high school students in the US.
Associations between the learning environment and student outcomes (attitudes and
achievement) were investigated using simple correlation and multiple regression
analyses with a sample of 322 students in 12 classes, with the individual as the unit
of analysis.
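A minimal sketch of these two analyses in Python follows; the file and column names are hypothetical stand-ins for the LAG scale scores.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lag_scale_scores.csv")  # hypothetical: one row per student
env_scales = ["integration", "material_env", "teacher_support",
              "task_orientation", "investigation", "differentiation"]

# Simple (bivariate) correlations of each environment scale with an outcome.
print(df[env_scales].corrwith(df["enjoyment"]))

# Multiple regression: significant coefficients indicate independent predictors.
model = smf.ols("enjoyment ~ " + " + ".join(env_scales), data=df).fit()
print(model.summary())
```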
Key findings for the associations between the learning environment (as measured by
the SLEI and TROFLEI, a total of six scales) and attitudes (as measured by two
TOSRA scales), reported in Section 4.3.1, are summarized below:
• All six learning environment scales correlated significantly and positively with both attitude scales.
• Five learning environment scales (Integration, Material Environment, Teacher Support, Task Orientation, Investigation) were positive, independent predictors of the Enjoyment attitude scale.
Key findings for the associations between the learning environment (as measured by
six SLEI and TROFLEI scales) and achievement reported in Section 4.3.2 are listed
below:
e) attitudes towards science, and
f) academic achievement in genetics?
The intervention investigated in this study involved six teachers each teaching at
least one class that used virtual laboratories and at least one class that did not, over a
period of about 2–10 weeks. Altogether, there were 322 students, who were diverse
in ability and socio-economic status, in 12 US grade 8–10 classes. The virtual
laboratories available for application in the classroom were chosen by the researcher
for their emphasis on inquiry skills as well as complex conceptual understanding of
techniques not otherwise available in a high school classroom.
To explore the differences between modes of instruction, and also between males
and females, as well as to find interactions between instructional method and sex, a
two-way MANOVA was used for the set of learning environment, attitude, and
achievement scales. The multivariate test using Wilks’ lambda criterion yielded
significant differences, and so the univariate ANOVA was interpreted for each scale.
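For readers who wish to reproduce this two-step procedure, a minimal Python sketch using statsmodels is given below; the data file and variable names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("lag_scale_scores.csv")  # columns: scale scores, method, sex

# Step 1: two-way MANOVA over the set of dependent variables (Wilks' lambda).
mv = MANOVA.from_formula(
    "integration + teacher_support + enjoyment + achievement ~ method * sex",
    data=df)
print(mv.mv_test())

# Step 2: if the multivariate test is significant, interpret univariate
# two-way ANOVAs separately for each dependent variable.
for dv in ["integration", "teacher_support", "enjoyment", "achievement"]:
    fit = ols(f"{dv} ~ method * sex", data=df).fit()
    print(dv, "\n", sm.stats.anova_lm(fit, typ=2))
```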
Key findings for the differences between the two instructional methods in terms of
learning environment and student outcomes from Section 4.4.2 are summarized
below:
• No significant differences between the instructional methods (with or without virtual laboratories) emerged for any learning environment, attitude, or achievement scale.
These findings replicate those from other studies reporting that virtual laboratories
offered neither advantages nor disadvantages over other methods of instruction
(Cobb et al., 2009; Cross & Cross, 2004; Javidi & Sheybani, 2006; Russell, 1999;
Stuckey-Mickell & Stuckey-Danner, 2007), and suggest that virtual laboratories
might be useful as a supplementary tool in science classrooms, rather than as a substitute
for more traditional methods, such as hands-on laboratories (Nedic, Machotka, &
Nafalski, 2003; Raineri, 2001; Toth, Morrow, & Ludvico, 2009; Yu, Brown, &
Billet, 2005). Qualitative data were consistent with the quantitative results; a more
detailed summary of this can be found in Section 5.2.4. However, a subtle pattern
emerged from the qualitative data: higher levels of Inquiry were perceived with
hands-on laboratories than with virtual laboratories, but this level of inquiry did not
necessarily translate into greater understanding, whereas several students who used
virtual laboratories did show such understanding.
Key findings for the differences between males and females, regardless of
instructional method (see Section 4.4.3), were:
• Significant but moderate differences were found between males and females for the learning environment scales of Integration (0.22 standard deviations) and Differentiation (0.24 standard deviations) and for the attitude scale of Enjoyment (0.33 standard deviations).
• All significant differences revealed scores that were higher for males than for females.
• For the remaining scales, which did not show significant differences, males also scored higher than females, except for the scale of Task Orientation (-0.14 standard deviations).
• Modest effect sizes for other differences between the sexes occurred for the scales of Material Environment (0.20 standard deviations), Investigation (0.16 standard deviations), and Inquiry (0.19 standard deviations). Other scales had effect sizes of less than 0.10 standard deviations. (A sketch of this effect-size calculation appears after this list.)
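The effect sizes quoted above are expressed in standard-deviation units; assuming they follow the usual definition (difference between group means divided by the pooled standard deviation), a minimal sketch is:

```python
import numpy as np

def effect_size(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Mean difference in pooled-standard-deviation units (Cohen's d)."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * group_a.var(ddof=1) +
                  (nb - 1) * group_b.var(ddof=1)) / (na + nb - 2)
    return (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

# Hypothetical Enjoyment scale scores for males and females.
males = np.array([4.1, 3.8, 3.9, 3.6, 4.0])
females = np.array([3.6, 3.5, 3.8, 3.4, 3.7])
print(f"d = {effect_size(males, females):.2f}")
```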
The finding that males perceived the learning environment more positively than
females can be contrasted with past research (Fraser, Giddings, & McRobbie, 1995;
Kijkosol, 2005) and requires further investigation. However, past research indicates
more positive attitudes for males towards science than for females (Neathery, 1997;
Oakes, 1990; Raaflaub & Fraser, 2002; Wolf & Fraser, 2008), and this is consistent
with my findings. The finding that no significant differences existed for different
sexes regarding achievement is also consistent with recent research suggesting a
narrowing of the gender gap in science achievement (Gupta & Koul, 2007;
Neathery, 1997; Oakes, 1990; Osborne, Simon, & Collins, 2003). Qualitative data
revealed that, although males enjoy scientific, investigative activities more than
females, females might be the ones who are more motivated to complete the work
(as measured by Task Orientation). More details of the qualitative data are
summarized in Section 5.2.4.
Key findings for the differential effectiveness of the instructional methods for males
and females in terms of learning environment and student outcomes (Section 4.4.4)
are summarized below:
• Virtual laboratories were more effective for males than for females for Material Environment, Teacher Support, and Inquiry, but instruction without the use of virtual laboratories was nearly equally effective for males and females on all scales.
Similar patterns were described by Wolf and Fraser (2008) in that males perceived a
more positive learning environment and attitudes in the class with an inquiry-based
intervention than in the class without the intervention, but that generally the opposite
was true for females. Other studies also reported differential effectiveness of an
intervention for males over females for the dimensions of Material Environment
(Quek, Wong, & Fraser, 2005), Teacher Support (Khoo & Fraser, 2008; Raaflaub &
Fraser, 2002), and Inquiry (Wolf & Fraser, 2008). In general, more positive
perceptions of the learning environment for females in traditional classrooms have
been noted (Fraser, Giddings, & McRobbie, 1995; Kijkosol, 2005; Raaflaub &
Fraser, 2002; Wong & Fraser, 1996).
Qualitative data indicated that males were keen to plunge into experiments that they
perceived to contain high levels of inquiry, whereas females were somewhat
apprehensive. Both students and teachers observed that, in general, males are more
accepting of, and excited by, interventions, especially technological ones, than are
females, which is supported by past research (Brotman & Moore, 2008; Farenga &
Joyce, 1997; Hanson, 2009).
Table 5.1 Summary of Student Interview Results for Students Experiencing each Instructional Method for each Learning Environment and Outcome Variable (Based on Section 4.4)

Integration
  VL classes: Students cited numerous examples of laboratory activities that connected to concepts recently learned in the classroom or served as an introduction to concepts learned subsequently.
  Non-VL classes: Mixed responses revealed that most laboratory activities were related to concepts learned in class, but some were not. Some students were not able to explain how the activity fitted with the topic they learned.

Material Environment
  VL classes: Students cited examples of how some traditional laboratory equipment (microscopes) was old and caused problems, which took away time from learning. Many students also commented on the state of technological equipment (computers, Internet) with mixed responses as to their functionality.
  Non-VL classes: Students reported that equipment was in fine working order, except that sometimes the Internet connection was slow.

Teacher Support
  VL classes: Students recounted that the teacher was always helpful whenever students had questions but that the teacher did not tell them exactly what to do. Examples included evidence of forming personal relationships.
  Non-VL classes: Some reported that the teacher was helpful. Others felt that, while the teacher was knowledgeable, knowledge was not transmitted clearly. They wished for the teacher to provide more instruction.

Task Orientation
  VL classes: Students reported their desires to finish what they started and finish work on time, and described feeling positive as a result.
  Non-VL classes: Students reported their desires to finish what they started and finish work on time, and described feeling positive as a result.

Investigation
  VL classes: Students reported that they were given diagrams and graphs to interpret the evidence for investigations, and that they had control over their experiments.
  Non-VL classes: This dimension was not addressed by the interviewees.

Inquiry (Attitude)
  VL classes: Students stated their preferences to experiment themselves rather than be told about a result. However, they preferred to be given a hypothesis to test, rather than construct one on their own.
  Non-VL classes: Students stated their preferences to experiment themselves rather than be told about a result, as well as the opportunity to find solutions together with other students.

Enjoyment (Attitude)
  VL classes: All students described their enjoyment of science classes and looked forward to them. As examples, some cited VLs and others cited hands-on laboratories. All students reported satisfaction about being placed in the VL class. Some students admitted to trying VLs at home.
  Non-VL classes: Students reported that, because the teacher was boring and the laboratory activities were not clear, the class was not much ‘fun.’ Students stated a preference to be placed in the VL class, even though they enjoyed the ‘hands-on’ factor of experiments. No students reported trying experiments at home.

Achievement
  VL classes: Students found the content challenging; some admitted needing the teacher’s help, and not all were able to explain the concepts. Students reported that they generally understood the material in genetics and achieved well in this topic. Some students pointed to VLs as assisting their understanding because of the instant feedback.
  Non-VL classes: Students found the content challenging and had difficulty explaining the concepts. However, they stated that they generally understood the material in genetics and achieved well in this topic.
At the end of each interview, the researcher informed the interviewee that the
quantitative data did not show major differences between VL and non-VL classes,
and asked the interviewee for his or her thoughts about why no such differences
appeared. The following is a summary of the key points that the interviewees
mentioned as explanations for the lack of evidence for the effectiveness of virtual
laboratories (see Section 4.4.2).
Students were also asked to comment on differences between males and females that
they perceived during laboratory activities (see Section 4.4.3). Additionally, the
researcher noted whether statements were made by males or females. These
responses were categorized and summarized by the researcher into the dimensions
listed in Table 5.2.
Mostly, the qualitative data supported the quantitative results but provided some
insight regarding patterns of differences between the sexes, such as the observation
that males tended to initiate experiments and relish handling equipment and
specimens. In contrast, while initially apprehensive, females would follow through
on the task required and focus on the purpose of the experiment. It follows that
technological interventions, such as virtual laboratories, might serve to distract
females from the work that they set out to do, while new media engage males,
causing the former to have more positive perceptions in a traditional learning
environment and the latter to have more positive perceptions in an altered learning
environment. However, there were not enough male interviewees to reach a solid
conclusion from the qualitative data, and these results should be verified with a
larger sample in a future study.
Table 5.2 Summary of Student Interview Results for Sex Differences for each Learning Environment and Outcome Variable

Integration: No differences between the sexes were noted for this scale.

Material Environment: Males mentioned the audio and visual effects as a positive feature of VLs, but no other differences were noted between the sexes regarding the functionality of equipment.

Teacher Support: Females preferred to have the teacher more involved in any activity to better guide them, whereas males tended to go it alone. Females also described their personal relationships with teachers, whereas males simply stated whether or not teachers were helpful. This finding tentatively explains the differences found in the non-VL classes but not in the VL classes.

Task Orientation: Females reported that males would dive right into an activity but often leave the follow-through work unfinished, which the females would complete.

Investigation: No differences between the sexes were noted for this scale.

Inquiry (Attitude): Males seemed motivated by activities that allowed them to jump in and test things out themselves, whereas females preferred a set of prescribed instructions. This was also noted by female students about their male peers.

Enjoyment (Attitude): Males were reported as being noisy, which can be interpreted as evidence of their enjoyment of laboratory activities. Otherwise, both sexes seemed to enjoy VLs and non-VLs.

Achievement: Students reported that males and females achieved at equal levels. Scores posted by the teachers for all to see also revealed this.
While teachers were neither the subjects of my study nor the unit of analysis, their
feedback about the implementation of the study adds valuable insight to the current
data. Six teachers, including the researcher, were involved in the evaluation of the
effectiveness of virtual laboratories and were asked to comment on the various
logistical aspects of this study and to note their own observations about students’
perceptions of the learning environment, attitudes, and achievement, as well as
gender issues. Four of the six teachers responded (excluding the researcher, to
avoid introducing bias), and their comments were categorized according to the
dimensions in Table 5.3.
When teachers were asked why greater differences between the two groups were not
apparent in the quantitative data, they responded that confounding variables could
include the amount of previous exposure that students had had to other laboratory
experiences. Some schools or teachers allow more hands-on investigations, whereas
other schools or teachers lack the resources to do so and might use texts, lectures, or
videos instead. As well, some teachers used other computer-based activities while
implementing virtual laboratories, which might have confused students when they
provided their perceptions of virtual laboratories. In short, the boundaries defining
virtual laboratories, hands-on laboratories, and other computer-based activities were
somewhat blurred. In future studies, clearer instructions about which activities
should or should not be considered when providing feedback about each treatment
condition might produce different results.
Table 5.3 Summary of Teachers’ Observations for each Learning Environment and Outcome Variable, and Gender

Integration: All teachers mentioned that they tried to align the laboratory activities with the content of what was being learned in class.

Material Environment: Some teachers noted difficulty with accessing the websites for the VLs because of a slow Internet connection or because computers were in short supply. One teacher mentioned that VLs were advantageous for the topic of genetics because of the expense and technical expertise needed to use more sophisticated equipment for hands-on laboratories in genetics.

Teacher Support: Teachers agreed with the students that assistance for the VL group was mainly about getting started but that, otherwise, the VLs were self-guided. Some teachers observed that more help was needed in the non-VL group.

Task Orientation: No differences were observed between the two classes. One teacher commented that the ability to complete a task depends more on the student’s motivation and ability than on the instructional method. Another teacher mentioned that student motivation was a predictor of the effectiveness of VLs rather than the other way around.

Investigation: No differences between the groups were noted for this scale.

Differentiation: Teachers observed that students in the VL group were able to advance through the activities at their own pace and review parts, as necessary.

Inquiry (Attitude): One teacher commented that, because students could progress at different paces with VLs, the more skilled ones were able to progress further and experience more inquiry. Teachers agreed with students that males tended to take action right away, with VLs and non-VLs, leading to more inquiry.

Enjoyment (Attitude): All teachers noted that VLs are a valuable addition to regular classroom activities because students seemed to enjoy them, but that VLs should not replace other activities. Several teachers perceived the males to be particularly engaged in VLs, more so than the females, which might be because of their familiarity with other virtual environments online and with video games.

Achievement: One teacher reported that her classes, regardless of the instructional method, perceived the genetics achievement items as being too easy. Another teacher observed that males were required to do more mental processing with VLs, as opposed to non-VLs, in which they simply explored and left the mental processing to their female partners.
5.3 Significance and Implications

A recent report called for bringing together
researchers, developers, entrepreneurs from the gaming industry, education
practitioners, and policy makers to facilitate “rich intellectual collaboration”
(National Research Council (NRC), 2011, p. 3). The results of my study add one
more piece to the body of evidence amassed by these professionals about the
effectiveness of virtual environments in education. The findings herein pose
important implications for both the field of educational research (Section 5.3.1) and
for practitioners in education (Section 5.3.2).
5.3.1 Implications for Educational Research

A leading authority in the field of educational research, Fraser (2012) advocates the
incorporation of learning environment scales in evaluating the effectiveness of
educational innovations because traditional measures of effectiveness (such as
achievement) do not provide a complete picture of the educational process. Despite
a number of recent studies (see Section 2.3), the amount of research assessing the
impact of educational innovations on the classroom learning environment is small
relative to the speed at which such innovations are being incorporated into
classrooms. Thus, this study was the first of its kind to
adopt a learning environment framework in which the classroom environment, in
addition to achievement and attitudes, served as a criterion of effectiveness in
evaluating educational innovations. Its findings contribute to the growth in research
into evaluating educational innovations within the increasingly rich and diverse field
of learning environments.
The findings of this study also confirmed positive associations between learning
environment dimensions and attitudes as reported in previous studies (Aldridge &
Fraser, 2003; Fraser, 2012; Lightburn & Fraser, 2007; Wolf & Fraser, 2008).
Addressing student attitudes towards science in the early high school years (grades
8–10) is important because studies have pointed to the decline in such attitudes at
this time (Oliver & Venville, 2011; Tytler & Osborne, 2012). Based on the results
of this study, males who engaged in VLs exhibited more positive attitudes
(regarding inquiry) towards the class. Because Material Environment, Teacher
Support, and Investigation were positive, independent predictors of the Inquiry
attitude scale, these findings further highlight the importance of considering the field
of learning environments in future research.
Of interest were significant differences between the perceptions and attitudes of
males and females in this study. Males perceived greater levels of Integration,
Differentiation, and Enjoyment than females. These differences build upon the
well-studied topic of gender imbalance in science education (Scantlebury, 2012) and
could provide direction for future research in this area, especially with regard to
technological innovations in the classrooms.
A degree of effectiveness for virtual laboratories was indeed suggested by the results
of this study, but only for a subsample: for males in VL classes versus males in non-
VL classes. The positive value of virtual laboratories, however, was not evident for
females in VL classes relative to non-VL classes; this could be an area of
investigation in future research.
Finally, the findings from this study as well as those from similar studies (Raineri,
2001; Toth, Morrow, & Ludvico, 2009) suggest the need for expanded development
of virtual laboratories, especially regarding the aspects of inquiry, resources, and
teacher support, as well as further evaluative research regarding their effects among
students in secondary and post-secondary classrooms.
5.3.2 Implications for Educational Practice

The outcomes of this study have the potential to inform policy-makers who call for
technological advancements in education and for administrators and teachers who
could implement these technological tools in their classrooms.
Innovations that alter the dynamic of the traditional classroom, from collaborative
teaching, to the incorporation of technology (such as online textbooks and virtual
laboratories), to instances of ‘learning without walls’ (such as fully online classes or
distance education), have been heralded as a solution for increasing student
motivation and for initiating a paradigm shift in defining the learning environment.
However, the results of this study do not fulfill this promise. The results of this
study simply point to the value of virtual laboratories in providing an equally
beneficial experience for students in alternative educational environments, such as
online or distance education, or for students in schools that lack resources for hands-
on laboratories.
Perhaps the most important implication of this study is that it provides a practical
model for teachers for integrating virtual laboratories into traditional high school
classrooms. The results of this study suggest that virtual laboratories can be
incorporated confidently into science curricula without detrimental effects, in
contrast to fears that virtual learning is disadvantageous to students. Added benefits
include that virtual laboratories are an efficient, safe, and cost-effective alternative
to running physical laboratories, that students are able to learn independently and,
more importantly, that they are exposed to laboratory equipment, procedures, and
skills that they could not otherwise access because of limited funding and
maintenance.
Based on the findings in this study, males who engaged in virtual laboratories
exhibited significantly more positive attitudes (Inquiry) toward the class than males
in non-VL classes. Also, because Material Environment, Teacher Support, and
Investigation were positive, independent predictors of the Inquiry attitude scale,
improving these aspects of the learning environment could result in improved
attitudes amongst males in classrooms with such technological interventions, a
valuable observation for educational practitioners to note. Furthermore, by
redesigning virtual laboratories to incorporate the preferences of females, who
appreciate certain aspects of VLs, such as personalized and immediate feedback, it
is possible that the attitudes among females could also improve in inquiry classes.
To further engage females, perhaps product developers could merge virtual
experimentation with the realm of social media to allow for greater collaboration
and interpersonal interactions as well as interactions with inanimate objects. In
general, improving students’ attitudes toward science at this stage might lead to
increased overall interest in science that influences the rest of their science courses
throughout high school and beyond.
Significant differences also emerged between males and females in this study,
regardless of the instructional method. Males perceived greater Integration,
Differentiation, and Enjoyment in science classes. Teachers can utilize these
findings in their own classrooms to ensure a more gender-fair environment by
stressing the integration of laboratory work with class work with females, by
providing females with more opportunities for differentiated learning, and by
incorporating activities that are of greater interest to females.
With improvement in the perceptions of the learning environment and attitudes for
males, and without less positive perceptions of the learning environment, attitudes,
and achievement for males or females, it would seem that using virtual laboratories
could be an effective method for teaching laboratory-based content by introducing
students to specialized techniques not otherwise experienced in a high school
classroom setting. This allows teachers to expose students to scientific inquiry in
the real world without sacrificing numerous class periods to attempting the
techniques themselves (if they are even feasible or affordable at the high school
level), and without the safety hazards associated with such activities. Ultimately,
while it is possible that this educational innovation can be disregarded as being of
limited benefit to students in today’s technological society, further research into the
development and evaluation of virtual laboratories is necessary.
5.4 Limitations and Suggestions for Further Research
Human error affects all experiments, and my study, which not only involved a
human researcher but also human subjects, was no less error-prone. This section
revisits and summarizes the limitations of this study that were described in greater
detail in Section 3.7. As a result of the quantitative and qualitative data, other
limitations also arose, which were not addressed in Section 3.7 but are described in
this section. Additionally, this section recommends suggestions for future research
on the effectiveness of virtual laboratories based on each limitation noted for this
study.
The sample for this study consisted of 322 American high school students. While
there was much diversity amongst the students in this sample, the size of the sample
was relatively small. An even larger sample would have increased statistical power
and could have permitted differences to be identified more confidently. As well, a
larger sample would have reduced individual idiosyncrasies that could have existed
with this group of students. Similarly, a sample of interviewees greater in number
and diversity would have been desirable and likely to increase insight into the
quantitative results.
Part of the reason why the sample size was limited was a loss of opportunities that
would have allowed more students to respond to the questionnaire. As noted in
Section 3.7.1, the link to the online questionnaire was non-operational at the time
when two teachers intended to administer it. Because the school year was over, time
limitations prevented students from responding to the questionnaire when the link
was fixed or when paper versions could have been provided. This error also limited
the researcher’s ability to recruit interviewees because, during the summer break,
students (and teachers) are apt to neither respond to school-related requests nor
remember the details of what occurred during the school year. In the future, it
would be advisable for the researcher to note the closing date for the school year for
each teacher, in order to ensure that the implementation of the study is completed
well before that date and to allow extra time to fix any errors. Indeed, at the outset,
more time should be allotted to enable increased efforts in finding participants
before the implementation of the study. The suggested timetable for implementation
of such a study, assuming the experimental design and preparation of materials is
complete, is 8–10 months of an academic year.
Regarding the sample, the original research proposal included a second grouping, in
addition to sex, for which the differential effectiveness of virtual laboratories was to
be investigated: minority status. However, the data collected and analyzed for this
purpose were disregarded because of contradictory results, which would have
decreased the validity of the conclusions based on this research. Future studies
should attempt to investigate the differential effectiveness of virtual laboratories for
minorities with a sample that better represents minority students in both the
experimental and control groups.
Controlling the treatment conditions was also a limiting factor in this study (see
Section 3.7.2). Ideally, all conditions between the experimental and control groups
should have been identical, apart from the use of virtual laboratories. Naturally,
such a setup is impossible in a school setting. Nevertheless, certain conditions could
have been controlled better, such as the uniformity of teaching resources in the
control group and the consistency of the frequency with which VLs were
administered.
Other aspects of this study also pointed to the importance of the role of the teacher
over the instructional method, as noted in Section 4.4 from students’ responses to
the interview questions. Each teacher taught both VL and non-VL classes, thereby
controlling for differences between teachers. However, the precise manner in which
the VL activities were integrated into the traditional classes depended on the teacher.
Additionally, the degree of enthusiasm and commitment of the teacher to an
alternative teaching method could have influenced student perceptions. Similarly, in
another study about a web-based learning environment, researchers highlighted the
role of the teacher in affecting students’ perceptions and, ultimately, in the
educational effectiveness of the environment (Chandra & Fisher, 2009; Eklund,
Kay, & Lynch, 2003). The inclusion of both pretest and posttest administrations of
a questionnaire in future studies that seek to repeat such an evaluation might
alleviate some of the issues concerning differences between teachers and differences
amongst laboratory activities.
Another issue related to the different treatment groups was the ‘John Henry effect’
mentioned in Section 2.5.5. According to the quantitative data (see Table 4.2), the
mean scores measuring students’ perception of the learning environment, attitudes,
and achievement ranged from 2.79 to 3.92 with the student as the unit of analysis.
These results demonstrate that, overall, regardless of instructional method, students
tended to agree with the questionnaire statements, indicating their positive
perceptions of the learning environment, positive attitudes, and above-average
achievement in their science classes. Therefore, the ‘John Henry effect’ might
explain the lack of significant differences for instructional method; the control group
might have worked harder to improve their learning experience because these
students (and their teachers) knew that they were competing against the group using
virtual laboratories, which was assumed to produce better results.
In fact, while most teachers taught at least one class with the use of virtual
laboratories and at least one class without, one teacher divided each of her classes so
that half of the students in each class used virtual laboratories and the other half did
not. In this instance, the potential for the ‘John Henry effect’ was stronger because
the students in the control group saw what the students in the experimental group
were doing, and they might have over-compensated for the expected difference
when responding to the questionnaire.
To account for this issue in future studies, a double-blind design might produce
more accurate results. Participating teachers should not be informed about the exact
purpose of the study, and they should be given more precise instructions for the
control group. For example, the researcher could provide alternative activities for
the control group so that the comparison of students across different teachers would
be uniform. Furthermore, an improved design would involve students in answering
the questionnaire before the implementation of the study, in addition to answering
the questionnaire upon completing the virtual laboratories or comparison
instructional method.
The questionnaire itself might also be improved in a future study to enable the
emergence of more accurate results. As reported by participating teachers, a number
of their students complained about the length of the questionnaire. Based on the
results of this study, the dimension of Differentiation could be removed from the
LAG because it did not produce any significant differences for the instructional
method or for the instructional method x sex interaction, and because its items were
poorly understood by students, as evidenced by the interview process. Also, the
clarity of terminology in certain items could be improved by defining the terms
used in each scale. For instance, before presenting the items for Teacher Support,
instructions could have delineated what is or is not included in the reference to
‘teacher’.
The researcher chose to borrow and adapt scales from previously-validated and
often-used questionnaires in the field of learning environments but, in retrospect, the
novel research presented in this thesis begged for the creation of a new instrument
or, at least, some new scales that could more accurately measure the defining
features emerging from virtual technology. Also, the 10 achievement items could
have been better mapped to reflect how simulations affect students’ understanding
of genetics. Future studies could evaluate the validity of newly-created scales that
might be adapted to the implementation of diverse educational technology such as
Content and Learning Management Systems, social media, and virtual
experimentation.
Finally, to validly assess the effectiveness of virtual laboratories, future studies
might aim to compare three groups: classes with no virtual and no physical
experiments; classes with only physical experiments; and classes with only virtual
experiments. A number of studies have already compared physical and virtual
laboratories and many of them conclude that virtual laboratories enhance the
effectiveness of physical laboratories, relative to the effectiveness of physical
laboratories alone (Akpan & Strayer, 2010; Cobb, Heaney, Corcoran et al., 2009; de
Jong, Linn, & Zacharia, 2013; Pyatt & Sims, 2012; Toth, 2009; Yu, Brown, &
Billet, 2005; Zacharia, Olympiou, & Papaevripidou, 2008), but most of these studies
did not involve lower-secondary classrooms (grades 8–10). Because secondary
schools invest in better technological equipment for science experiments, it would
be wise to enrich future research with studies involving such a three-way
comparison.
5.5 Conclusion
Learning environment and attitude scales adapted from the Science Laboratory
Environment Inventory (SLEI) and Technology-Rich Outcomes-Focused Learning
Environment Inventory (TROFLEI) questionnaires and from the Test Of Science
Related Attitudes (TOSRA) were found to be valid and reliable when used with a
sample of US high school students taking biology. These scales have been employed
in the past and can continue to be adapted to a wide variety of samples and situations.
This study also identified associations between students’ perceptions of the learning
environment and their attitudes and achievement. All six learning environment
scales correlated significantly and positively with both attitude scales, and a number
of those scales were positive, independent predictors of the attitude scales,
indicating that a more positive learning environment could lead to more positive
attitudes. Associations with achievement were significant for three learning
environment scales (Integration, Material Environment, and Teacher Support), and
two of those scales were positive, independent predictors of achievement,
suggesting that greater integration between laboratory work and class lessons and
better equipment might lead to improved achievement.
Further analysis revealed that virtual laboratories were somewhat more effective for
males than for females, as compared to males and females in the control group.
Males who engaged in virtual laboratories, compared with males who did not,
perceived better equipment (Material Environment) and greater support from
teachers (Teacher Support), and experienced more inquiry (Inquiry), while females either perceived
negligible differences between the instructional methods for these aspects, or
perceived them to be more positive in the traditional environment without virtual
laboratories.
might be better invested in evaluating other aspects of the learning environment in
science classes.
References
Afari, E., Aldridge, J. M., Fraser, B. J., & Khine, M. S. (in press). Students’
perceptions of the learning environment and attitudes in game-based
mathematics classrooms. Learning Environments Research.
Akpan, J., & Strayer, J. (2010). Which comes first: The use of computer simulation
for frog dissection or conventional dissection as an academic exercise?
Journal of Computers in Mathematics and Science Teaching, 29, 113-138.
Aldridge, J. M., Fraser, B. J., Bell, L., & Dorman, J. (2012). Using a new learning
environment questionnaire for reflection in teacher action research. Journal
of Science Teacher Education, 1-32.
Aldridge, J. M., Fraser, B. J., & Laugksch, R. C. (2011). Relationship between the
school-level and classroom-level environment in secondary schools in South
Africa. South African Journal of Education, 31, 127-144.
Aldridge, J. M., Fraser, B. J., & Ntuli, S. (2009). Utilising learning environment
assessments to improve teaching practices among in-service teachers
undertaking a distance education programme. South African Journal of
Education, 29, 147-170.
Aldridge, J. M., Fraser, B. J., & Sebela, M. P. (2004). Using teacher action research
to promote constructivist learning environments in South Africa. South
African Journal of Education, 24, 245-253.
Aldridge, J. M., Fraser, B. J., Taylor, P. C., & Chen, C. C. (2000). Constructivist
learning environments in a cross-national study in Taiwan and Australia.
International Journal of Science Education, 22, 37-55.
Alhalabi, B., Hamza, M. K., Hsu, S., & Romance, N. (1998, November). Virtual
labs vs. remote labs: Between myth & reality. Paper presented at the Florida
Higher Education Consortium 7th Statewide Conference, Deerfield Beach,
FL.
Allen, D., & Fraser, B. J. (2007). Parent and student perceptions of classroom
learning environment and its association with student outcomes. Learning
Environments Research, 10, 67-82.
American Association for the Advancement of Science (AAAS). (1989). Science for
all Americans: A project 2061 report on literacy goals in science,
mathematics, and technology. Washington, DC: AAAS.
Annetta, L., Klesath, M., & Meyer, J. (2009). Taking science online: Evaluating
presence and immersion through a laboratory experience in a virtual learning
environment for entomology students. Journal of College Science Teaching,
39, 27-33.
Aud, S., Hussar, W., Johnson, F., Kena, G., Roth, E., Manning, E., Wang, X.,
Zhang, J. (2012). The condition of education. Washington, DC: U.S.
Department of Education, National Center for Education Statistics.
Bahar, M., Johnstone, A., & Hansell, M. (1999). Revisiting learning difficulties in
biology. Journal of Biological Education, 33, 84-86.
Banchero, S., & Simon, S. (2011, November 12). My teacher is an app, Wall Street
Journal.
Barak, M., & Asad, K. (2012). Teaching image-processing concepts in junior high
school: boys’ and girls’ achievements and attitudes towards technology.
Research in Science & Technological Education, 30, 81-105.
Barnette, J. J. (2000). Effects of stem and Likert response option reversals on survey
internal consistency: If you feel the need, there is a better alternative to using
those negatively worded stems. Educational and Psychological
Measurement, 60, 361-370.
Beard, M. H., Lorton, P. V., Searle, B. W., & Atkinson, T. C. (1973). Comparison of
student performance and attitude under three lesson-selection strategies in
computer-assisted instruction. Stanford, CA: Defense Technical Information
Center.
Beck, J., Czerniak, C. M., & Lumpe, A. T. (2000). An exploratory study of teachers’
beliefs regarding the implementation of constructivism in their classroom.
Journal of Science Teacher Education, 11, 323-343.
Beede, D., Julian, T., Langdon, D., McKittrick, G., Khan, B., & Doms, M. (2011).
Women in STEM: A gender gap to innovation. Washington, DC: US
Department of Commerce, Economics and Statistics Administration.
Beichner, R., Bernold, L., Burniston, E., Dail, P., Felder, R., Gastineau, J., et al.
(1999). Case study of the physics component of an integrated curriculum.
American Journal of Physics, 67, S16-S24.
Bell, R. L., & Trundle, K. C. (2008). The use of a computer simulation to promote
scientific conceptions of moon phases. Journal of Research in Science
Teaching, 45, 346-372.
Blalock, C. L., Lichtenstein, M. J., Owen, S., Pruski, L., Marshall, C., &
Toepperwein, M. (2008). In pursuit of validity: A comprehensive review of
science attitude instruments. International Journal of Science Education, 30,
961-977.
Bohus, C. A., Aktan, B., Crowl, L. A., & Shor, M. A. (1996). Distance learning
applied to control engineering laboratories. IEEE Transactions on
Education, 3, 320-326.
Borgman, C. L., Abelson, H., Dirks, L., Johnson, R., Koedinger, K., Linn, M. C., et
al. (2008). Fostering learning in the networked world: The cyberlearning
opportunity and challenge. Office of Cyberinfrastructure and Directorate for
Education and Human Resources of the National Science Foundation.
Retrieved from http://www.nsf.gov/publications/pub_summ.jsp.
Brotman, J. S., & Moore, F. M. (2008). Girls and science: A review of four themes
in the science education literature. Journal of Research in Science Teaching,
45, 971-1002.
Brown, E. (2012, April). Virginia's new high school graduation requirement: One
online course, Washington Post.
Burkholder, P. R., Purser, G. H., & Cole, R. S. (2008). Using molecular dynamics
simulation to reinforce student understanding of intermolecular forces.
Journal of Chemical Education, 85, 1071.
Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by
the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.
Campbell, H. (2012, May 24). Why science test scores being 'stagnant' is a good
thing. Retrieved from
http://www.science20.com/science_20/why_science_test_scores_being_stag
nant_good_thing-90396
Campuzano, L., Dynarski, M., Agodini, R., Rall, K. (2009). Effectiveness of reading
and mathematics software products: Findings from two student cohorts—
Executive summary (NCEE 2009-4042). (1422325237). Washington, DC:
National Center for Education Evaluation and Regional Assistance, Institute
of Education Sciences, U.S. Department of Education.
Chang, K. (2009, November 23). White house pushes science and math education,
New York Times.
Chang, V., & Fisher, D. L. (2003). The validation and application of a new learning
environment instrument for online learning in higher education. In M. S.
Khine & D. L. Fisher (Eds.), Technology-rich learning environments: A
future perspective (pp. 1-20). River Edge, NJ: World Scientific Publishing
Company.
Cho, J. I., Yager, R. E., Park, D. Y., & Seo, H. A. (1997). Changes in high school
teachers’ constructivist philosophies. School Science and Mathematics, 97,
400-405.
Clancy, M., Titterton, N., Ryan, C., Slotta, J., & Linn, M. (2003). New roles for
students, instructors, and computers in a lab-based introductory
programming course. ACM SIGCSE Bulletin, 35, 132-136.
Cobb, S., Heaney, R., Corcoran, O., & Henderson-Begg, S. (2009). The learning
gains and student perceptions of a second life virtual lab. Health and
Bioscience Education, 13, 1-8.
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education.
New York: Routledge.
Cross, T., & Cross, V. (2004). Scalpel or mouse? A statistical comparison of real &
virtual frog dissections. The American Biology Teacher, 66, 409-411.
de Jong, T., Linn, M. C., & Zacharia, Z. C. (2013). Physical and virtual laboratories
in science and engineering education. Science, 340, 305-308.
den Brok, P., Fisher, D., Rickards, T., & Bull, E. (2006). Californian science
students’ perceptions of their classroom learning environments. Educational
Research and Evaluation, 12, 3-25.
den Brok, P., Telli, S., Cakiroglu, J., Taconis, R., & Tekkaya, C. (2010). Learning
environment profiles of Turkish secondary biology classrooms. Learning
Environments Research, 13, 187-204.
Dori, Y. J., & Barak, M. (2001). Virtual and physical molecular modeling: Fostering
model perception and spatial understanding. Educational Technology &
Society, 4, 61-74.
Dorman, J. P., Aldridge, J. M., & Fraser, B. J. (2006). Using students' assessment of
classroom environment to develop a typology of secondary school
classrooms. International Education Journal, 7, 906-915.
Dorman, J. P., Fraser, B. J., & McRobbie, C. (1997). Relationship between school-
level and classroom-level environments in secondary schools. Journal of
Educational Administration, 35, 74-91.
Duit, R., & Confrey, J. (1996). Reorganizing the curriculum and teaching to
improve learning in science and mathematics. In D. F. Treagust, R. Duit &
B. J. Fraser (Eds.), Improving teaching and learning in science and
mathematics (pp. 79-93). New York: Teachers College Press.
Edelson, D. C., Gordin, D. N., & Pea, R. D. (1999). Addressing the challenges of
inquiry-based learning through technology and curriculum design. Journal of
the Learning Sciences, 8, 391-450.
Eklund, J., Kay, M., & Lynch, H. M. (2003). E-learning: Emerging issues and key
trends. Retrieved July 26, 2012, from www.flexiblelearning.net.au
Farenga, S. J., & Joyce, B. A. (1997). What children bring to the classroom:
Learning science from experience. School Science and Mathematics, 97,
248-252.
Ferguson, P. D., & Fraser, B. J. (1998). Changes in learning environment during the
transition from primary to secondary school. Learning Environments
Research, 1, 369-383.
Finkelstein, N., Adams, W., Keller, C., Kohl, P., Perkins, K., Podolefsky, N., et al.
(2005). When learning about the real world is better done virtually: A study
of substituting computer simulations for laboratory equipment. Physical
Review Special Topics-Physics Education Research, 1, 1-8.
Fisher, D. L., & Cresswell, J. (1998). Actual and ideal principal interpersonal
behaviour. Learning Environments Research, 1, 231-247.
Fisher, D. L., & Fraser, B. J. (1981). Validity and use of My Class Inventory.
Science Education, 65, 145-156.
Fisher, D. L., Henderson, D., & Fraser, B. J. (1995). Interpersonal behaviour in
senior high school biology classes. Research in Science Education, 25, 125-
133.
Fisher, D. L., Henderson, D., & Fraser, B. J. (1997). Laboratory environments &
student outcomes in senior high school biology. American Biology Teacher,
59, 214-219.
Fisher, D. L., & Khine, M. S., (Eds.). (2006). Contemporary approaches to research
on learning environments: Worldviews. Singapore: World Scientific.
Fraser, B. J. (1978). Some attitude scales for ninth grade science. School Science
and Mathematics, 78, 379-384.
New directions for teaching practice and research (pp. 285-296). Berkeley,
CA: McCutchan.
Fraser, B. J., & Fisher, D. L. (1986). Using short forms of classroom climate
instruments to assess and improve classroom psychosocial environment.
Journal of Research in Science Teaching, 23, 387-413.
Fraser, B. J., Giddings, G. J., & McRobbie, C. J. (1992). Assessment of the
psychosocial environment of university science laboratory classrooms: A
cross-national study. Higher Education, 24, 431-451.
Fraser, B. J., Giddings, G. J., & McRobbie, C. J. (1995). Evolution and validation of
a personal form of an instrument for assessing science laboratory classroom
environments. Journal of Research in Science Teaching, 32, 399-422.
Fraser, B. J., & Kahle, J. B. (2007). Classroom, home and peer environment
influences on student outcomes in science and mathematics: An analysis of
systemic reform data. International Journal of Science Education, 29, 1891-
1909.
Fraser, B. J., & Tobin, K. (1987). Use of classroom and school climate scales in
evaluating alternative high schools. Teaching and Teacher Education, 3,
219-231.
Fraser, B. J., & Tobin, K. (1991). Combining qualitative and quantitative methods in
classroom environment research. In B. J. Fraser & H. J. Walberg (Eds.),
Educational environments: Evaluation, antecedents and consequences (pp.
271–292). Elmsford, NY: Pergamon Press.
Fraser, B. J., & Treagust, D. F. (1986). Validity and use of an instrument for
assessing classroom psychosocial environment in higher education. Higher
Education, 15, 37-57.
Fraser, B. J., Walberg, H. J., Welch, W. W., & Hattie, J. A. (1987). Syntheses of
educational productivity research. International Journal of Educational
Research, 11, 145-252.
Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement:
Potential of the concept, state of the evidence. Review of Educational
Research, 74, 59-109.
Friedman, T. L. (2006). The world is flat: A brief history of the twenty-first century.
New York: Farrar Straus & Giroux.
Gallagher, A. G., Ritter, E. M., Champion, H., Higgins, G., Fried, M. P., Moses, G.,
et al. (2005). Virtual reality simulation for the operating room: Proficiency-
based training as a paradigm shift in surgical skills training. Annals of
Surgery, 241, 364-372.
Getzels, J. W., & Thelen, H. A. (1960). The classroom group as a unique social
system. In N. B. Henry (Ed.), The dynamics of instructional groups: Socio-
psychological aspects of teaching and learning (pp. 53-82). Chicago:
University of Chicago Press.
Giallousi, M., Gialamas, V., Spyrellis, N., & Pavlaton, E. (2010). Development,
validation, and use of a Greek-language questionnaire for assessing learning
environments in grade 10 chemistry classes. International Journal of Science
and Mathematics Education, 8, 761-782.
Goh, S. C., & Fraser, B. J. (1996). Validation of an elementary school version of the
Questionnaire on Teacher Interaction. Psychological Reports, 79, 512-522.
Goh, S. C., & Khine, M. S., (Eds.). (2002). Studies in educational learning
environments. Singapore: World Scientific.
Goh, S. C., Young, D. J., & Fraser, B. J. (1995). Psychosocial climate and student
outcomes in elementary mathematics classrooms: A multilevel analysis.
Journal of Experimental Education, 64, 29-40.
Goldberg, F. (1997). Constructing physics understanding in a computer-supported
learning environment. San Diego: The Learning Team.
Gonzales, P., Williams, T., Jocelyn, L., Roey, S., Kastberg, D., & Brenwald, S.
(2008). Highlights from TIMSS 2007: Mathematics and science achievement
of U.S. fourth- and eighth-grade students in an international context.
(NCES 2009-001). Washington, DC: National Center for Education
Statistics (NCES), U.S. Department of Education.
Hanson, S. (2009). Swimming against the tide: African American girls and science
education. Philadelphia, PA: Temple University Press.
Harms, U. (2000, June). Virtual and remote labs in physics education. Paper
presented at the Second European Conference on Physics Teaching in
Engineering Education, Budapest.
Harwell, S. H., Gunter, S., Montgomery, S., Shelton, C., & West, D. (2001).
Technology integration and the classroom learning environment: Research
for action. Learning Environments Research, 4, 259-286.
Helding, K. A., & Fraser, B. J. (in press). Effectiveness of NBC (National Board
Certified) teachers in terms of learning environment, attitudes and
achievement among secondary school students. Learning Environments
Research.
Herrera, L. (2011, January 17). In Florida, virtual classrooms with no teachers, New
York Times.
Herrnstein, R. J., & Murray, C. A. (1996). The bell curve: Intelligence and class
structure in American life. New York, NY: Free Press Paperbacks.
Hill, C., Corbett, C., & St. Rose, A. (2010). Why so few? Women in science,
technology, engineering and mathematics. Washington, DC: AAUW.
Hiltzik, M. (2012, February 4). Who really benefits from putting high-tech
gadgets in classrooms?, The Los Angeles Times, pp. B-1. Retrieved from
http://www.latimes.com/business/la-fi-hiltzik-20120205,0,639053.column
Hofstein, A., & Kind, P. M. (2012). Learning in and from science laboratories. In B.
J. Fraser, K. Tobin & C. McRobbie (Eds.), Second international handbook of
science education (pp. 189-207). New York: Springer Verlag.
Hofstein, A., & Lunetta, V. N. (1982). The role of the laboratory in science
teaching: Neglected aspects of research. Review of Educational Research,
52, 201-217.
Huang, S., & Fraser, B. J. (2009). Science teachers' perceptions of the school
environment: Gender differences. Journal of Research in Science Teaching,
46, 404-420.
International Association for K–12 Online Learning (iNACOL). (2012). Fast facts
about online learning. Retrieved August 29, 2012, from
http://www.inacol.org/press/docs/nacol_fast_facts.pdf
Javidi, G. (1999). Virtual reality and education. University of South Florida, Tampa,
FL.
Javidi, G., & Sheybani, E. (2006, October). Virtual engineering lab. Paper presented
at the 36th ASEE/IEEE Frontiers in Education Conference, San Diego, CA.
Jegede, O. J., Fraser, B. J., & Fisher, D. L. (1995). The development and validation
of a distance and open learning environment scale. Educational Technology
Research and Development, 43, 90-93.
Johnson, B., & McClure, R. (2004). Validity and reliability of a shortened, revised
version of the Constructivist Learning Environment Survey (CLES).
Learning Environments Research, 7, 65-80.
Johnstone, A. H. (1991). Why is science difficult to learn? Things are seldom what
they seem. Journal of Computer Assisted Learning, 7, 75-83.
Judd, W., Bunderson, C., & Bessent, E. (1970). An investigation of the effects of
learner control in computer-assisted instruction prerequisite mathematics.
Austin, TX: University of Texas.
Kahle, J. B. (2004). Will girls be left behind? Gender differences and accountability.
Journal of Research in Science Teaching, 41, 961-969.
Kanner, J. H., Runyon, R. P., & Desiderato, O. (1954). Television in army training:
Evaluation of television in army basic training. Washington, DC: George
Washington University.
Karplus, R., & Butts, D. P. (1977). Science teaching and the development of
reasoning. Journal of Research in Science Teaching, 14, 169-175.
Kempa, R. F., & Ward, J. E. (1975). The effect of different modes of task
orientation on observational attainment in practical chemistry. Journal of
Research in Science Teaching, 12, 69-76.
Kim, H. B., Fisher, D. L., & Fraser, B. J. (2000). Classroom environment and
teacher interpersonal behaviour in secondary science classes in Korea.
Evaluation and Research in Education, 14, 3-22.
Klahr, D., Triona, L. M., & Williams, C. (2007). Hands on what? The relative
effectiveness of physical versus virtual materials in an engineering design
project by middle school children. Journal of Research in Science Teaching,
44, 183-203.
Koul, R. B., Fisher, D., & Shaw, T. (2011). An application of the TROFLEI in
secondary-school science classes in New Zealand. Research in Science &
Technological Education, 29, 147-167.
Koul, R. B., & Fisher, D. L. (2005). Cultural background and students’ perceptions
of science classroom learning environment and teacher interpersonal
behaviour in Jammu, India. Learning Environments Research, 8, 195-211.
Kroemer, K., & Grandjean, E. (1997). Fitting the task to the human: A textbook of
occupational ergonomics (5th ed.). London: Taylor and Francis.
Loder, J. E. (1937). A study of aural learning with and without the speaker present.
Lincoln, NE: University of Nebraska.
Logan, K. A., Crump, B. J., & Rennie, L. J. (2006). Measuring the computer
classroom environment: Lessons learned from using a new instrument.
Learning Environments Research, 9, 67-93.
Ma, J., & Nickerson, J. V. (2006). Hands-on, simulated, and remote laboratories: A
comparative literature review. ACM Computing Surveys, 38, 1-24.
Majeed, A., Fraser, B. J., & Aldridge, J. M. (2002). Learning environment and its
association with student satisfaction among mathematics students in Brunei
Darussalam. Learning Environments Research, 5, 203-226.
Marbach-Ad, G., Rotbain, Y., & Stavy, R. (2008). Using computer animation and
illustration activities to improve high school students' achievement in
molecular genetics. Journal of Research in Science Teaching, 45, 273-292.
Martin, E. D., & Rainey, L. (1993). Student achievement and attitude in a satellite-
delivered high school science course. American Journal of Distance
Education, 7, 54-61.
McCarty, G., Hope, J., & Polman, J. L. (2010, March). The youth engagement with
science and technology survey: Informing practice and measuring outcomes.
Paper presented at the Annual Meeting of the National Association for
Research on Science Teaching, Philadelphia, PA.
McRobbie, C., & Fraser, B. J. (1993). Associations between student outcomes and
psychosocial science environment. The Journal of Educational Research, 87,
78-85.
Midgley, C., Eccles, J. S., & Feldlaufer, H. (1991). Classroom environment and the
transition to junior high school. In B. J. Fraser & H. J. Walberg (Eds.),
Educational environments: Evaluation, antecedents and consequences (pp.
113-139). London, UK: Pergamon.
Milrad, M., & Spikol, D. (2007). Anytime, anywhere learning supported by smart
phones: Experiences and results from the MUSIS project. Journal of
Educational Technology and Society, 10, 62-70.
Moore, R. W., & Sutman, F. X. (1970). The development, field test and validation
of an inventory of scientific attitudes. Journal of Research in Science
Teaching, 34, 327-336.
Moos, R. H. (1974). Social climate scales: An overview. Palo Alto, CA: Consulting
Psychologists Press.
Moss, G., Jewitt, C., Levačić, R., Armstrong, V., Cardini, A., & Castle, F. (2007).
Interactive whiteboards, pedagogy, and pupil performance: An evaluation of
the schools whiteboard expansion project (London Challenge). London:
Department for Education and Skills/Institute of Education, University of
London.
Nasr, A., & Soltani, K. A. (2011). Attitude towards biology and its effects on
students' achievement. International Journal of Biology, 3, 100-104.
National Center for Education Statistics (NCES). (2012a). The nation's report
card. Science 2011. (NCES 2012–465). Washington, DC: National Center
for Education Statistics, Institute of Education Sciences, U.S. Department of
Education.
National Center for Education Statistics (NCES). (2012b). The nation's report
card. Science in action: Hands-on and interactive computer tasks from the
2009 science assessment. (NCES 2012–468). Washington, DC: National
Center for Education Statistics, Institute of Education Sciences, U.S.
Department of Education.
Nedic, Z., Machotka, J., & Nafalski, A. (2003, November). Remote laboratories
versus virtual and real laboratories. Paper presented at the 33rd ASEE/IEEE
Frontiers in Education Conference, Boulder, CO.
Newby, M., & Fisher, D. L. (1997). An instrument for assessing the learning
environment of a computer laboratory. Journal of Educational Computing
Research, 16, 179-190.
NGSS. (2011). Next Generation Science Standards. Retrieved from
http://www.nextgenscience.org/.
Nix, R. H., Fraser, B. J., & Ledbetter, C. E. (2005). Evaluating an integrated science
learning environment using the Constructivist Learning Environment
Survey. Learning Environments Research, 8, 109-133.
Norton, S. J., McRobbie, C. J., & Ginns, I. S. (2007). Problem solving in a middle
school robotics design classroom. Research in Science Education, 37, 261-
277.
Oliver, M., & Venville, G. (2011). An exploratory case study of Olympiad students'
attitudes towards and passion for science. International Journal of Science
Education, 33, 2295-2322.
Organization for Economic Co-operation and Development (OECD). (2009).
Equally prepared for life? How 15-year-old boys and girls perform in school.
Retrieved June 25, 2012, from
http://www.pisa.oecd.org/pages/0,3417,en_32252351_32235907_1_1_1_1_1
,00.html
Orlans, F. B. (1988). Should students harm or destroy animal life? The American
Biology Teacher, 50, 6-12.
Osborne, J., Simon, S., & Collins, S. (2003). Attitudes towards science: A review of
the literature and its implications. International Journal of Science
Education, 25, 1049-1079.
Partin, G. R., & Atkins, E. L. (1984). Teaching via the electronic blackboard. In L.
Parker & C. Olgren (Eds.), Teleconferencing and electronic communication
III. Madison, WI: University of Wisconsin Extension, Centre for Interactive
Programs.
Perez-Pena, R. (2012, July). Top universities test the online appeal of free, New
York Times.
Perpich, J. (2012). Howard Hughes Medical Institute: The Virtual Immunology Lab.
Retrieved May 30, 2012, from
http://www.hhmi.org/biointeractive/vlabs/immunology/index.html
Piaget, J. (1970). Structuralism. New York: Basic Books.
Programme for International Student Assessment (PISA). (2009). PISA 2009 key
findings. Retrieved July 17, 2012, from
http://www.oecd.org/pages/0,3417,en_32252351_32235731_1_1_1_1_1,00.
html
Promratrak, L., & Malone, J. (2006, July). The development and evaluation of a CAI
package for use in Thai tertiary electronics laboratories. Paper presented at
the 7th International Conference on Information Technology Based Higher
Education and Training, Sydney.
Psotka, J. (1995). Immersive training systems: Virtual reality and education and
training. Instructional Science, 23, 405-431.
Pyatt, K., & Sims, R. (2012). Virtual and physical experimentation in inquiry-based
science labs: Attitudes, performance and access. Journal of Science
Education and Technology, 21, 133-147.
Rauwerda, H., Roos, M., Hertzberger, B. O., & Breit, T. M. (2006). The promise of
a virtual lab in drug discovery. Drug Discovery Today, 11, 228-236.
Redfield, R. J. (2012). Perspective — "Why do we have to learn this stuff?"— A
new genetics for 21st century students. PLoS Biology, 10, e1001356.
Rich, M. (2012, July 6). 'No Child' law whittled down by White House, The New York
Times.
Rickards, T., den Brok, P., & Fisher, D. L. (2005). The Australian science teacher:
A typology of teacher-student interpersonal behaviour in Australian science
classes. Learning Environments Research, 8, 267-287.
Robinson, E., & Fraser, B. J. (in press). Kindergarten students’ and parents’
perceptions of science classroom environments: Achievement and attitudes.
Learning Environments Research.
Russell, A., & Siley, C. (2005). Strengthening the science and mathematics pipeline
for a better America. American Association of State Colleges and
Universities, 2, 1-4.
Association of Research in Science Teaching annual international
conference, Orlando, FL.
Salomon, G., & Globerson, T. (1987). Skill may not be enough: The role of
mindfulness in learning and transfer. International Journal of Educational
Research, 11, 623-637.
Saretsky, G. (1972). The OEO PC experiment and the John Henry effect. Phi Delta
Kappan, 53, 579-581.
Sere, M. G. (2002). Towards renewed research questions from the outcomes of the
European project labwork in science education. Science Education, 86, 624-
644.
Smerdon, B., Cronen, S., Lanahan, L., Anderson, J., Iannotti, N., & Angeles, J.
(2000). Teachers’ tools for the 21st century: A report on teachers’ use of
technology. Education Statistics Quarterly, 2, 48-53.
Sticht, T. G. (1971). Failure to increase learning using the time saved by the time
compression of speech. Journal of Educational Psychology, 62, 55.
Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F.
(2011). What forty years of research says about the impact of technology on
learning: A second-order meta-analysis and validation study. Review of
Educational Research, 81, 4-28.
Taylor, P. C., & Fraser, B. J. (1991, April). CLES: An instrument for assessing
constructivist learning environments. Paper presented at the Annual Meeting
of the National Association for Research in Science Teaching, Fontana, WI.
The California Educator. (2003). The power lies in giving students some control.
Retrieved from http://legacy.cta.org/media/publications/educator/
Thomas, R., & Hooper, E. (1991). Simulations: An opportunity we are missing.
Journal of Research on Computing in Education, 23, 497-513.
Thompson, S., Wernert, N., Underwood, C., & Nicholas, M. (2008). TIMSS 07:
Taking a closer look at mathematics and science in Australia. Melbourne:
Australian Council for Educational Research.
Thornton, J. W., & Brown, J. W. (1968). New media & college teaching:
Instructional television. Washington, DC: National Education Association,
Department of Audiovisual Instruction.
Thurmond, B., Holmes, S. Y., Annetta, L. A., Folta, E., Sears, M., Cheng, R., et al.
(2011, April). Student perceptions of learning and engagement with
scientific concepts through Serious Educational Game (SEG) development.
Paper presented at the National Association of Research in Science Teaching
annual international conference, Orlando, FL.
Tobin, K., Kahle, J. B., & Fraser, B. J., (Eds.). (1990). Windows into science
classes: Problems associated with higher-level cognitive learning. London,
UK: Falmer Press.
Toth, E. (2009). "Virtual inquiry" in the science classroom: What is the role of
technological pedagogical content knowledge? International Journal of
Information and Communication Technology Education, 5, 78-87.
Toth, E., Morrow, B., & Ludvico, L. (2009). Designing blended inquiry learning in
a laboratory context: A study of incorporating hands-on and virtual
laboratories. Innovative Higher Education, 33, 333-344.
Trickett, E. J., & Moos, R. H. (1973). Social environment of junior high and high
school classrooms. Journal of Educational Psychology, 65, 93-102.
Trindade, J., Fiolhais, C., & Almeida, L. (2002). Science learning in virtual
environments: A descriptive study. British Journal of Educational
Technology, 33, 471-488.
Tsui, C. Y., & Treagust, D. F. (2004). Motivational aspects of learning genetics with
interactive multimedia. The American Biology Teacher, 66, 277-285.
Tytler, R., & Osborne, J. (2012). Student attitudes and aspirations towards science.
In B. J. Fraser, K. Tobin & C. McRobbie (Eds.), Second international
handbook of science education (pp. 597-625). New York: Springer Verlag.
University of Utah, Genetic Science Learning Center. (2004). Learn.Genetics.
Retrieved July 24, 2012, from http://learn.genetics.utah.edu/
Vacha-Haase, T., & Thompson, B. (2004). How to estimate and interpret various
effect sizes. Journal of Counseling Psychology, 51, 473-481.
Van Petegem, P., Deneire, A., & De Maeyer, S. (2008). Evaluation and participation
in secondary education: Designing and validating a self-evaluation
instrument for teachers to solicit feedback from pupils. Studies in
Educational Evaluation, 34, 136-144.
Van Rooy, W. S. (2011, April). Transforming and enhancing the learning and
teaching of senior biology via digital technologies. Paper presented at the
National Association of Research in Science Teaching annual international
conference, Orlando, FL.
Wahyudi, & Treagust, D. F. (2004). The status of science classroom learning
environments in Indonesian lower secondary schools. Learning
Environments Research, 7, 43-63.
Welch, A. G., Cakir, M., Peterson, C. M., & Ray, C. M. (2012). A cross-cultural
validation of the Technology-Rich Outcomes-Focused Learning
Environment Inventory (TROFLEI) in Turkey and the USA. Research in
Science & Technological Education, 30, 49-63.
Westendarp, C., & Westendarp, H. (2009). Race to the Top. Retrieved July 17,
2012, from http://racetotop.com/
White, B., Bolker, E., Koolar, N., Ma, W., Maw, N., & Yu, C. (2007). The virtual
genetics lab: A freely-available open-source genetics simulation. The
American Biology Teacher, 69, 29-32.
Winn, W., Stahr, F., Sarason, C., Fruland, R., Oppenheimer, P., & Lee, Y. L. (2006).
Learning oceanography from a computer simulation compared with direct
experience at sea. Journal of Research in Science Teaching, 43, 25-42.
Woelfel, N., & Tyler, I. K. (1945). Radio and the school. Tarrytown, NY: World
Books.
Wolf, S. J., & Fraser, B. J. (2008). Learning environment, attitudes and achievement
among middle school science students using inquiry-based laboratory
activities. Research in Science Education, 38, 321-341.
Wu, W., Chang, H. P., & Guo, C. J. (2009). The development of an instrument for a
technology-integrated science learning environment. International Journal of
Science and Mathematics Education, 7, 207-233.
Wubbels, T., & Levy, J. (1993). Do you know what you look like? Interpersonal
relationships in education. London: Falmer Press.
Yarrow, A., Millwater, J., & Fraser, B. J. (1997). Improving university and primary
school classroom environments through preservice teachers’ action research.
International Journal of Practical Experiences in Professional Education, 1,
68-93.
Yu, J., Brown, D., & Billet, E. (2005). Development of a virtual laboratory
experiment for biology. European Journal of Open, Distance and E-Learning.
Retrieved July 26, 2012, from
http://www.eurodl.org/?p=archives&year=2005&halfyear=2&article=195
Zandvliet, D. B., & Buker, L. (2003). Learning environments in new contexts: Web-
capable classrooms in Canada. In M. S. Khine & D. L. Fisher (Eds.),
Technology-rich learning environments: A future perspective (pp. 133-156).
Singapore: World Scientific.
Every reasonable effort has been made to acknowledge the owners of copyright
materials. I would be pleased to hear from any copyright owner who has been
omitted or incorrectly acknowledged.
Appendices
Appendix A: Laboratory Assessment in Genetics (LAG)
This survey contains questions about your thoughts on science, your perceptions about
science laboratories, and your understanding of the concepts illustrated through laboratory
activities. Part I refers to background information about yourself and your class (14
Questions), Part II refers to your attitudes toward science and perception of the laboratory
environment (Questions #1-64), and Part III refers to your understanding of the concepts
illustrated through the laboratory activities in your class (Questions #65-74).
When you complete this survey, you will be given the opportunity to provide your email
address, which enters you into a raffle to win a $50 gift certificate, to thank you for your
participation.
I. In this part of the questionnaire you will answer simple background questions
about yourself and your class.
II. This part of the questionnaire asks questions about student attitudes towards
science and student perceptions of the learning environment.
PLEASE NOTE: The word "laboratory" in this survey refers to any experiment
you have done in your science class whether it was "hands-on" or virtual (on a
computer). Thank you.
Part I. Background Information

Personal Details:
1. Gender: Female / Male
2. Is English the main language you use to communicate? Yes / No
3. Ethnicity: White / Hispanic / Black (non-Hispanic) / Asian / Other:_________
4. Age:_____

Class Details:
5. Grade: 8th / 9th / 10th / 11th / 12th / Other:________
6. Type of class: Standard/College Preparatory / Honors / Inclusion / Advanced Placement / Other:___________
7. Teacher Code:________

Computer Usage:
8. Do you have a computer at home? Yes / No
9. How many hours a week do you spend on the computer? 0-2 / 2-5 / 5-10 / 10-15 / More than 15
10. Do you have Internet access at home? Yes / No
11. How many hours a week do you spend on the Internet? 0-2 / 2-5 / 5-10 / 10-15 / More than 15

Future Plans:
12. Do you plan on going to college? Yes / No
13. Which type of job would you like when you leave school?
Doctor / Lawyer / Politician / Scientist / Accountant / Mechanic / Programmer / Psychologist / Actor / Nurse / Athlete / Teacher / Model / Banker / Chef / Fashion Designer / Journalist / Businessman / Designer / Other:_____
Part II. Student Attitudes towards Science and Student Perceptions of the Learning
Environment

Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

1. I would prefer to find out why something happens by doing an experiment than by being told. 1 2 3 4 5
2. I would prefer to do experiments than to read about them. 1 2 3 4 5
3. It is better to create my own hypothesis than to be given a hypothesis to test out. 1 2 3 4 5
4. I would prefer to do my own experiments than find out information from a teacher. 1 2 3 4 5
5. It is better to try out different ways of setting up an experiment than to be told exactly how to set it up. 1 2 3 4 5
6. It is better to find an answer by doing experiments than to ask the teacher the answer. 1 2 3 4 5
7. I would prefer to guess the results than to be told the expected results before doing an experiment. 1 2 3 4 5
8. It is better to find out scientific facts from experimenting than to be told them. 1 2 3 4 5
Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

17. The laboratory activities are related to the topics that I am studying in my science class. 1 2 3 4 5
18. My regular science class work is integrated with laboratory activities. 1 2 3 4 5
19. I use the theory from my regular science class sessions during laboratory activities. 1 2 3 4 5
20. The topics covered in regular science class work are quite similar to topics in laboratory activities. 1 2 3 4 5
21. What I do in the laboratory helps me to understand the theory covered in regular science classes. 1 2 3 4 5
22. My laboratory activities and regular science class work are related. 1 2 3 4 5
23. The concepts addressed in the laboratory are those I need to know for my science class. 1 2 3 4 5
24. The skills used in laboratory activities are similar to the skills addressed in my science class. 1 2 3 4 5
Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

33. The teacher takes a personal interest in me. 1 2 3 4 5
34. The teacher goes out of his/her way to help me. 1 2 3 4 5
35. The teacher helps me when I have trouble with my work. 1 2 3 4 5
36. The teacher is interested in my problems related to schoolwork. 1 2 3 4 5
37. The teacher moves about the class to talk with me. 1 2 3 4 5
38. The teacher's questions help me to understand the topic. 1 2 3 4 5
39. The teacher guides me through activities when I am stuck. 1 2 3 4 5
40. The teacher helps me with problems related to schoolwork. 1 2 3 4 5
Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

49. I carry out investigations to test my ideas in this class. 1 2 3 4 5
50. I am asked to think about the evidence for statements in this class. 1 2 3 4 5
51. I carry out investigations to answer questions during the activities in this class. 1 2 3 4 5
52. I explain the meaning of statements, diagrams, and graphs during activities in this class. 1 2 3 4 5
53. I carry out investigations to answer questions that puzzle me in this class. 1 2 3 4 5
54. I carry out investigations to answer the teacher's questions in this class. 1 2 3 4 5
55. I find out answers to questions by doing investigations in this class. 1 2 3 4 5
56. I solve problems by using information obtained from my own investigations in this class. 1 2 3 4 5
Part III. Understanding of Concepts in Genetics
1. Which of the following features of DNA is most important in determining the
phenotype of an organism?
A) The direction of the helical twist
B) The number of deoxyribose sugars
C) The sequence of nitrogenous bases
D) The strength of the hydrogen bonds
2. Fireflies produce light inside their bodies. The enzyme luciferase is involved in the
reaction that produces the light. Scientists have isolated the luciferase gene.
A scientist inserts the luciferase gene into the DNA of cells from another organism. If
these cells produce light, the scientist knows that which of the following occurred?
A) The luciferase gene mutated inside the cells.
B) The luciferase gene was transcribed and translated.
C) The luciferase gene destroyed the original genes of the cells.
D) The luciferase gene moved from the nucleus to the endoplasmic reticulum.
3. Steps in a reproductive process used to produce a sheep with certain traits are listed
below.
Step 1 — The nucleus was removed from an unfertilized egg taken from sheep A; Step
2 — The nucleus of a body cell taken from sheep B was then inserted into this
unfertilized egg from sheep A; Step 3 — The resulting cell was then implanted into the
uterus of sheep C; Step 4 — Sheep C gave birth to sheep D. Which sheep would be
most genetically similar to sheep D?
A) Sheep A, only
B) Sheep B, only
C) Both sheep A and B
D) Both sheep A and C
4. Bacteria in culture A produce slime capsules around their cell walls. A biologist removed
the DNA from some of the bacteria in culture A and injected it into bacteria in culture B,
which normally do not produce slime capsules. After the injection, bacteria with slime
capsules began to appear in culture B. What conclusion can best be drawn from this
investigation?
A) a ribosome
B) transfer RNA
C) recombinant DNA
D) a male gamete
6. Which process is illustrated in the diagram below?
A) chromatography
B) direct harvesting
C) meiosis
D) genetic engineering
7. After a culture of cells is allowed to multiply and is viewed through a microscope, the
cells are x-rayed with high-energy radiation for less than 1/100th of a second. After the
radiation, many newly reproduced cells appear different. What has probably occurred?
A) mutation
B) speciation
C) contamination
D) bacterial infection
8. In 1910, Thomas Morgan discovered traits linked to sex chromosomes in the fruit fly.
The Punnett square below shows the cross between red-eyed females and white-eyed
males. Fruit flies usually have red eyes. If a female and male offspring from the cross
shown above are allowed to mate, what would the offspring probably look like?
9. The chances of developing cancer, diabetes, or sickle-cell anemia are higher if a family
member also has the disorder because they are —
A) Genetically based
B) Passed through blood contact
C) Highly infectious
D) Related to diet
10. The picture below shows a segment of DNA from a cat. Which of these is most likely the
kitten of this cat?
A) 1
B) 2
C) 3
D) 4
Please write your email address here if you wish to be entered into a raffle to win a $50 gift
certificate to be drawn at the end of June:
____________________________________________________________
In Part II of this questionnaire, items 1–16 are based on the Test of Science-Related
Attitudes (TOSRA) (Fraser, 1981) as described in Section 2.3.2, items 17–32 are based
on the Science Laboratory Environment Inventory (SLEI) (Fraser, Giddings, &
McRobbie, 1992) described in Section 2.2.2, and items 33–64 are based on the
Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI)
(Aldridge & Fraser, 2008) described in Section 2.2.2. Modification of these items from
their original scales is described in Section 3.4.1. The questionnaire items were used in
this study and included in this thesis with the authors' permission.
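To make the scoring of Part II concrete, the sketch below (Python) computes each scale score as the mean of the eight 5-point items in its block. Only the four item blocks reproduced in this appendix are mapped, and the scale labels attached to them are an illustrative reading of those blocks; the authoritative 64-item mapping is the one described in Section 3.4.1, which is not reproduced here.

```python
# Minimal sketch of one way to score Part II of the LAG, assuming each scale
# is the mean of eight 5-point Likert items (1 = Strongly Disagree ... 5 = Strongly Agree).
# The block-to-scale labels below are illustrative; the full 64-item mapping
# is described in Section 3.4.1.

SCALE_ITEMS = {
    "Attitude to Inquiry (TOSRA-based)": range(1, 9),   # items 1-8 shown above
    "Integration (SLEI-based)": range(17, 25),          # items 17-24 shown above
    "Teacher Support (TROFLEI-based)": range(33, 41),   # items 33-40 shown above
    "Investigation (TROFLEI-based)": range(49, 57),     # items 49-56 shown above
}

def score_lag_part2(responses):
    """responses: dict mapping item number (1-64) to a rating of 1-5.
    Returns a dict of scale means, skipping any unanswered items."""
    scores = {}
    for scale, items in SCALE_ITEMS.items():
        answered = [responses[i] for i in items if i in responses]
        if answered:
            scores[scale] = sum(answered) / len(answered)
    return scores

# Example: a student who circled 4 ("Agree") for every item in the four blocks.
example = {i: 4 for block in SCALE_ITEMS.values() for i in block}
print(score_lag_part2(example))  # every scale mean comes out to 4.0
```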
Appendix B: Semi-structured Interview Questions for Students
Introduction to students: Before we get started, do you have your parents' consent to participate
in this interview and to have this interview recorded? Hi! Thank you for agreeing to participate
in this study on science education. The purpose of this research is to help me understand how
experiences in the science classroom affect students' attitudes towards science, how students
perceive their environments, and how students achieve in science. There are no right or wrong
answers; only your opinions count, and what you say will not be reported back to your teacher!
The results will inform teachers in general on how best to teach science so that students will be
able to learn better. I will start recording now - please say your name when I pause during my
introduction. This is an audio recording on [date, time, place] between Rachel Oser and
__________. I want to remind you that you may stop this interview at any time. Let's begin.
[ENJ] Do you find the activities you did in your science classes to be fun?
o Can you give an example of a memorable activity? Such as games, demonstrations, labs,
puzzles, virtual labs, etc.
o Were there any laboratory activities you liked doing?
o Was it useful?
o Did you look forward to such activities?
o What did that make you think about your science class in general - was it your favorite
subject?
o Would you ever try any activities from class at home?
[MTE] Please tell me about the materials you used for labs and technology.
o Were they useful, available, in good working order?
o What about computers?
o How did you find the audio and visual effects of the activities you did?
[TSP] How did you find the attitude of your teacher towards helping you?
o Did s/he help you when you had trouble with your work? How so?
o Did your teacher guide you through activities when you were stuck?
o In what way do you think the teacher should be involved?
o In your opinion, what helps you learn more from computer activities – more or less
teacher involvement?
[INQ] Do you prefer to learn scientific facts by experimenting or by listening to the teacher
tell you about them? Why?
o What do you like/not like about experimenting?
o Do you like to create your own hypothesis (guess the results) or test out a hypothesis
given to you by your teacher?
o What types of experiments did you like doing best? Why?
[INT] Were the labs that you did related to the topics you studied in your science class?
o How so? Can you give me an example?
o Did what you learned in class help your labs or vice versa?
[TOR] Did you feel that it was important to complete a certain amount of work in class? In
general, how motivated are you to get stuff done?
o Were you aware of the work you needed to complete in science class?
o Did you know the purpose of doing that work?
o Did you pay attention in class so that you could complete the work?
o Tell me about a time that you felt good about completing the work in this class.
[INV] Were you asked to think about evidence for statements in this class?
a. What kind of evidence were you given? Statements, diagrams, graphs?
b. During which sorts of activities were you asked to investigate such evidence?
c. Do you prefer to do such investigations to find answers to questions? Why/why not?
Can you give an example?
d. How important is it for you to have control over what you are doing during lab activities?
[Gender] Do you think there’s a difference between males and females in the class, in terms
of how they learn, perceive their environment, their attitudes, or achievement? How so?
[VL] You were involved in a research study where some classes did virtual labs and some did
not. Which group do you think you were in?
a. If you had a choice, which group would you prefer to be in? Why?
Follow-up for students in Experimental Group: Lastly, I will tell you what I have found from
the data I collected from the surveys. It turns out that there doesn’t seem to be a significant
difference in perceptions, attitude, or achievement between students who did the virtual labs and
students who did not. Perhaps you can help inform me why not, since I was hoping there would
be such a difference.
a. Can you tell me what you have observed in your class that could account for this?
b. Please describe the virtual lab arrangement in your class. Around how many were
completed? Do you remember which ones (read off list)? Were there any problems
with the virtual labs or computer equipment?
c. Knowing these results, would you change your answer to 10. a.?
12. Is there anything else I have not asked about lab activities in your classroom that you think I
should know?
Thank you so much for your time. What I have learned from you and this conversation will
help me inform teachers about how students learn in science classrooms. I will send you a
transcript to review shortly and email you your $10 iTunes certificate. Have a good summer!
Appendix C: Semi-structured Interview Questions for Teachers
Introduction to teachers: The purpose of this research is to help me understand how experiences
in the science classroom (namely, virtual laboratories) affect students’ attitudes towards science,
how students perceive their environments, and how students achieve in science. There are no
right or wrong answers; only your opinions count. (VL = Virtual Lab)
Practical Details:
How many virtual labs did you do with your VL classes?_____ Which VL’s did
you choose to do with them (see attached list as a reminder)?
Period of time for implementation of this study: (# of days/weeks/months)
Please describe how often (or the interval) you did VL’s with your VL classes:
(did you do them every day for a week straight or once a week, etc…)
In contrast, what sort of activities did you do with your non-VL classes? (was it
regular lecture, rich discussions, hands-on labs, other computer activities)
Please mention any confounding variables between the VL and non-VL classes:
(did you teach the same topic, were students at the same level, did the time of
day differ for those classes consistently, etc.)
[ENJ] How do you think your students differed in their opinions of how ‘fun’ the
activities were in your science classes? Did students who did the VL’s overall find
science classes to be more enjoyable or students who did not do VL’s? Do you think
that students who did VL’s found them to be more useful/would look forward to
doing them/would try them at home? If given a choice, would students prefer or not
prefer to be in the VL class?
[ACH] How do you think your students differed in their understanding of genetics?
Did the students who did VL’s understand the topic better as a result of doing VL’s
or did it make no difference?
[MTE] Was the equipment used for VL’s in good working order? Ex. Internet
speed, enough computers, etc. Please describe any technical difficulties, if any. In
contrast, were there any technical problems with other materials you may have used
in hands-on labs?
[TSP] How much did you have to help your students along with the VL’s? Did you
find that you provided more assistance in VL classes or non-VL classes?
[INQ] How do you think students differed regarding the level of inquiry? Were
students in VL classes more/less/about the same as curious about experimenting
using VL’s as students in non-VL classes?
[INT] Were the labs (whether VL’s or other activities you did with your non-VL
classes) that you did related to the topics you taught in your science class? Was there
a difference in the integration of these activities amongst the two classes (VL and
non-VL)?
[Gender] Did you notice any differences in gender regarding the VL’s? Did boys or
girls seem to be more engaged/motivated to do them? If so, does the same gender
difference exist with other activities or in non-VL classes? you think there’s a
difference between males and females in the class, in terms of how they learn,
perceive their environment, their attitudes, or achievement? How so?
Lastly, I will tell you what I have found from the data I collected from the surveys. It
turns out that there doesn’t seem to be a significant difference in perceptions,
attitude, or achievement between students who did the virtual labs and students who
did not. The only quantitative, significant difference was found when comparing VL
and non-VL classes against gender so that males showed a significantly higher score
on the attribute of ‘inquiry’ for VL’s versus non-VL’s. Perhaps you can help inform
me why more significant differences were not apparent, since I was hoping there
would be such differences. (Although, since there was no negative effect of doing
VL’s, they are at least as effective as any other educational method and can be
implemented in classrooms with confidence.) Can you tell me what you have
observed in your class that could account for this?
10. Is there anything else I have not asked about the implementation and results of this
study that you think I should know?
Appendix D: List of Virtual Laboratories Available for Teachers
List of Virtual Labs (Just the links):
http://highered.mcgrawhill.com/sites/0073031208/student_view0/virtual_labs.html -
On this page, there are 2 labs relevant to this study (Reproduction and
Heredity & Molecular Genetics), but they are quite advanced and not too
interactive. They do not allow for use of virtual lab materials, only for
analyzing data from a graph. These are excellent sources for advanced students
interested in the particular questions explored in the labs, and they include a
self-checking feature to determine if the data supports the hypothesis.
http://www.biologylabsonline.com/ - This site contains many great virtual labs
but requires an access code (provided below) and registration. I will be mailing
each of you the instructor’s packets for these labs. Access code: USCS-BLUFF-
TREND-POWAN-FIORI-PRIES (please do not distribute as I have procured
this code solely for the purpose of this study which can be accessed up to 1,000
times; otherwise there is usually a fee).
http://www.hhmi.org/biointeractive/vlabs/index.html - There are 2 labs relevant
to this study (Transgenic FlyLab, Bacterial Identification) for which I have
created instructor guides and student worksheets (see the dropbox and/or your
email accounts – please let me know if you need another copy).
http://learn.genetics.utah.edu/ - All the labs on this website are relevant to this
study for which I have created instructor guides and student worksheets (see the
dropbox and/or your email accounts – please let me know if you need another
copy).
http://virtuallaboratory.colorado.edu/Biofundamentals/index.html and
http://www.phschool.com/science/biology_place/labbench/ - Some of these are
great, interactive and realistic labs that require data collection, but they also
require extensive background reading so these are for more advanced students.
http://www2.edc.org/weblabs/WebLabDirectory1.html - This website contains a
list of genetics virtual labs – some are great and really interactive (more
animated than realistic) whereas others are more clicking-through types to cover
content.
http://www.jdenuno.com/TechConnect/OnLineLabs.htm - This is a list of many
virtual labs, which are worthwhile to peruse as you may choose to do these over
the others. Some include virtual labs on Mendelian genetics.
http://www.ucopenaccess.org/courses/APBioLabs/course/index.html - This is a
resource used for online AP biology courses and contains a number of virtual
labs about various topics. I will be creating worksheets to go along with the
“Genetics of Organisms” (Fruit Flies) lab and the “Molecular Biology”
(Bacterial Transformation) lab.
http://virtuallabs.stanford.edu/ - A general list of virtual labs for your own
resources (not too many on genetics).
More to be added as deemed appropriate… (Open to suggestions!)
Appendix E: Instructions to Teachers for Participating in My Study
General Instructions for Participation in Virtual Lab Study
Thank you for participating in this study to evaluate the effectiveness of virtual
laboratories in science classrooms, specifically applied to the topic of genetics. Here
are just a couple of quick instructions for you to follow, in order please, so that I can
ensure validity and reliability of the resulting data. All forms are available via
‘dropbox’ or I will send as an email attachment. Please:
1. Read and sign the ‘Consent Form for Teachers’ – return via email, mail, or fax.
2. Fill out the ‘Teacher Information Sheet’- return via email, mail, fax, or Dropbox.
3. Introduce participation of this study to your students via my video (on the blog)
or read a transcript of the video aloud to them. DO NOT mention the real goal of
this study (so they remain unbiased!) but you can describe it as a ‘study on
learning methods in science classrooms’.
4. Hand out consent forms for students and their parents (I will provide copies, if
you wish) and let them know that they only need to return these forms if they
wish NOT to participate in the study. Keep returned forms in a safe place to be
returned to me with all other materials at the end of the study.
5. Read the ‘Introduction to Virtual Laboratory Activities’ document.
6. Browse the suggested virtual laboratories and choose 4-5 that are appropriate for
your classes – print student worksheets or let me know which documents you
need copied and I'll mail those to you. Reminder: you will only be doing this
with half of your classes.
7. Implement the virtual laboratories any time between February and June 2010; they need
not follow in succession nor follow equal intervals of time between their
administrations. You will need to ensure Internet access for each student
completing the virtual lab activity so be sure to reserve computer labs or laptop
carts ahead of time. Reminder: you will only be doing this with half of your classes.
8. Notify me when you have finished using the virtual laboratories and I will send
you the surveys to administer to all of your classes (even the ones who did not
use virtual labs as they are the control group) – surveys should be completed
during one class period (perhaps you can offer extra credit for compliance!). I
may select some students to interview at this time.
9. Notify me when all students have completed the survey and I will send you an
envelope in which you will return all forms and surveys to me. You may keep
any materials used during this study, but I ask that you do not share them with
others until this study is complete (June 25, 2010). Thank you for all your help;
your name will be entered into a raffle for $100 of coffee!
My Contact Information:
82 Eighth St., Providence, RI 02906
Cell: 917-640-8355 / Fax: 781-982-4201 (Attention: Rachel Oser)
[email protected]
http://oserscienceedstudy.blogspot.com/ -blog for announcements, FAQs, etc.
www.dropbox.com – Storage and sharing of all documents (I emailed you a link)
Appendix F: Example of a Virtual Laboratory Worksheet
Access: http://learn.genetics.utah.edu/content/labs/microarray/
Brief Description: In this activity students learn the procedure and concepts that
underlie the use of a DNA Microarray for the field of genomics. The purpose of each of
the lab materials is explained clearly and the tasks are simple. This lab takes
relatively more time than the other labs from the same website, but it includes an
investigative piece and students get to make a real-life application to the differences
between healthy and cancer cells.
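The judgment students make visually in this lab - deciding from spot colors which genes are turned up or down in cancer cells relative to healthy cells - can also be expressed numerically. The sketch below (Python, with invented intensity values) computes the log2 red/green ratio for each spot, which is the usual way two-channel microarray signals are compared; the gene names and the two-fold cutoff are illustrative assumptions, not values from the lab itself.

```python
import math

# Hypothetical two-channel spot intensities: red = cDNA from cancer cells,
# green = cDNA from healthy cells. All values are invented for illustration.
spots = {
    "gene A": {"red": 1200, "green": 150},   # much brighter in the cancer channel
    "gene B": {"red": 140, "green": 1100},   # much brighter in the healthy channel
    "gene C": {"red": 800, "green": 750},    # similar in both channels
}

for gene, s in spots.items():
    # log2 ratio > 0: more expression in cancer cells; < 0: more in healthy cells.
    log_ratio = math.log2(s["red"] / s["green"])
    if log_ratio > 1:            # illustrative cutoff of a two-fold change
        call = "up in cancer cells (red spot)"
    elif log_ratio < -1:
        call = "down in cancer cells (green spot)"
    else:
        call = "similar in both (yellow spot)"
    print(f"{gene}: log2(red/green) = {log_ratio:+.2f} -> {call}")
```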
Vocabulary:
Organism
Genomics
Genetics
Genes
Gene expression
Gene expression profile
DNA microarray
Cancer
RNA
Vortex
Microcentrifuge
Poly-A tail
Buffer
cDNA
Oligo-dT primers (poly-T tails)
Reverse transcriptase
Nucleotides
Hybridization
Complementary
Standards:
NY: Standard 1, Performance Indicators 1.1, 1.2a, 1.3a, 2.3, 2.4, 3.1, 3. & Standard
4, Performance Indicators 2.1, 2.2
MA: Standards 3.1-3.8
National Standards: THE MOLECULAR BASIS OF HEREDITY [Content
Standard C (grades 9-12)]
In all organisms, the instructions for specifying the characteristics of the organism are
carried in DNA, a large polymer formed from subunits of four kinds (A, G, C, and T).
The chemical and structural properties of DNA explain how the genetic information that
underlies heredity is both encoded in genes (as a string of molecular ''letters") and
replicated (by a templating mechanism). Each DNA molecule in a cell forms a single
chromosome.
Most of the cells in a human contain two copies of each of 22 different chromosomes. In
addition, there is a pair of chromosomes that determines sex: a female contains two X
chromosomes and a male contains one X and one Y chromosome. Transmission of
genetic information to offspring occurs through egg and sperm cells that contain only one
representative from each chromosome pair. An egg and a sperm unite to form a new
individual. The fact that the human body is formed from cells that contain two copies of
each chromosome—and therefore two copies of each gene—explains many features of
human heredity, such as how variations that are hidden in one generation can be
expressed in the next.
Changes in DNA (mutations) occur spontaneously at low rates. Some of these changes make
no difference to the organism, whereas others can change cells and organisms. Only
mutations in germ cells can create the variation that changes an organism's offspring.
http://www.nap.edu/openbook.php?record_id=4962&page=185
Student Worksheets: See attached – you may decide to give the first page as a pre-
lab for HW to save time
Name:_____________________ Period:_____ Date:_____________
Background: Read the introduction in chapters 1 and 2 and answer the questions below:
http://learn.genetics.utah.edu/content/labs/microarray/
1. About how many genes do humans have?____________________________
2. What is genomics?____________________________________________
3. What does it mean for a gene to be “expressed”? If a gene is expressed, what would
be produced?________________________________________________
4. Briefly explain how so many different cell types can form in the body if they all have
the same DNA. _______________________________________________
_________________________________________________________
5. What is a gene expression profile and why is it useful?____________________
7. What are some other names for the DNA microarray (see the pull-down “+”
sign)?_____________________________________________________
8. From where can one get a DNA microarray?___________________________
Problem: What’s the difference between a healthy cell and a cancer cell?
10. Explain the usefulness of looking at cancer cells under the microscope. Will cancer
cells appear to be different?______________________________________
11. In cancer cells, something has gone wrong with the genes that control:_________
12. Why is it important to find out which genes are the culprits in each type of cancer?
_________________________________________________________
Procedure: Read and follow all the prompts given to you in the lab and answer the
questions that follow in order as you perform the specific tasks (read the questions ahead of
time!)
1. List the 7 steps in the experiment in which a DNA microarray is used to compare the
differences in gene expression levels between cancer cells and healthy cells:
2. List all the materials needed for this experiment:________________________
3. What substance will you measure from both healthy and cancer cells to determine
which genes are turned on/off?____________________________________
7. How will you retain only the mRNA? Why do you want to retain that particular type
of RNA? ___________________________________________________
9. Why do you have to convert mRNA back into DNA (called complementary DNA or,
cDNA)? __________________________________________________
12. A single spot on the microarray contains multiple copies of the same/different
(circle one) DNA sequences whereas the DNA is the same/different (circle one)
from one spot to another.
Outcomes: You now have a chip, to which your sample DNA has been added, that
represents every known gene in this organism; how the sample and the spots on the chip
match up will determine the relationship between those matched genes and the particular
cancer.
14. What do the darker colored spots on the green (healthy) image
represent?_________
16. As the prompt describes, imagine you are a researcher studying such genes in skin
cancer cells. On which color spot would you focus, and why?
_________________________________________________________
17. As the prompt asks, what color are the spots that are turned down by gene 4263? __
18. As the prompt asks, what color is gene 6219 on the microarray? _____________
19. What are some of the advantages of using the DNA microarray technique?_______
_________________________________________________________
20. What are some limitations of using the DNA microarray technique?
_________________________________________________________