Effectiveness of Virtual Laboratories in Terms of Achievement, Attitudes, and Learning Environment Among High School Science Students


Science and Mathematics Education Centre

Effectiveness of Virtual Laboratories in Terms of Achievement,
Attitudes, and Learning Environment among
High School Science Students

Rachel Rose Oser

This thesis is presented for the Degree of

Doctor of Philosophy

of

Curtin University

January 2013
Declaration

This thesis contains no material which has been accepted for the award of any other
degree or diploma in any university. To the best of my knowledge and belief, this
thesis contains no material previously published by any person except where due
acknowledgement has been made.

Abstract

As our society becomes increasingly technological, research suggests that students,
too, benefit from technology-rich learning environments (Aldridge & Fraser, 2008;
Borgman, Abelson, Dirks et al., 2008; Tamim, Bernard, Borokhovski et al., 2011).
In an effort both to allow students laboratory experiences that would not otherwise
be possible in high school settings and to augment the integration of technology
within science classrooms, virtual laboratories can be used to simulate real
laboratories and encourage students to engage in scientific inquiry.

This study investigated the effectiveness of such virtual laboratories in terms of
students’ perceptions of the learning environment, attitudes towards science, and
achievement. Classes of students who utilized virtual laboratories were compared
with classes of students who did not. The sample consisted of 322 high-school
students in 21 science classes in the US. Data were obtained by administering the
Laboratory Assessment in Genetics (LAG) containing selected scales from the
Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI)
(Aldridge & Fraser, 2008), the Science Laboratory Environment Inventory (SLEI)
(Fraser, Giddings, & McRobbie, 1992), and the Test of Science-Related Attitudes
(TOSRA) (Fraser, 1981), as well as some achievement items from previously-
validated science examinations. Quantitative data were complemented by
qualitative data from interviews with students and teachers.

Data analysis supported the LAG’s factorial validity, internal consistency reliability,
discriminant validity, and ability to differentiate between the perceptions of students
in different classrooms. All six learning environment scales correlated significantly
and positively with students’ attitudes and some of those scales (Integration,
Material Environment, Teacher Support, Differentiation) also correlated
significantly with students’ achievement. Most learning environment scales were
also found to be independent predictors of attitudes.

No significant differences were found between instructional groups for any criterion
of effectiveness, indicating that the promise of such technological interventions in
the classroom might not be fulfilled. However, the use of virtual laboratories did
not negatively impact on students. Significant interactions were found between
instructional method and sex for three dependent variables (Material Environment,
Teacher Support, and Attitude to Inquiry), with virtual laboratories being more
effective for males than females. The results of this study have the potential to
inform educational practitioners, add to the body of knowledge in the field of
learning environments, and stimulate further investigations into the effectiveness of
virtual laboratories as an instructional method.

Acknowledgements

“Appreciation is a wonderful thing. It makes what is excellent in others belong to us
as well.” – Voltaire

They say that the process to become a Doctor of Philosophy (PhD) is 10%
intelligence and 90% persistence. While persistence might be internal, my
persistence to complete this dissertation was catalyzed by the support, advice, and
encouragement from others. First and foremost, I’d like to thank all the participants
in my study: the students who responded to the questionnaire, some of whom
allowed me to interview them over their summer break, the students’ parents who
offered their consent and many of whom expressed their support for my endeavor,
and, of course, the teachers who volunteered their precious time implementing this
study in their classrooms. Additionally, these teachers cooperated gracefully with
my onerous instructions and numerous requests for feedback. Their insight and
encouragement throughout this process have really made a difference to me – thank
you! The administration in each of these schools also deserves my appreciation for
allowing this study to take place under their auspices. In particular, the
administration of the school in which I was based was helpful and encouraging.

I would also like to take this opportunity to thank all the people who continually
asked about my progress and took an interest in my completion of this process: Dr.
Frommer, Barbie, Nicole, Allison, Aliza and many other friends and colleagues.
Without your pressure, I would not have come this far!

Naturally, but most significantly (may I use that word now?), words cannot express
my appreciation for my supervisor, Professor Barry Fraser. Your support,
dedication, tireless efforts, and meticulous editing have been outstanding! My
appreciation goes to your whole team at the Science and Mathematics Education
Centre, especially Dr. Rekha Koul, for helping me process and understand the data
numerous times!

Last, but certainly not least, my appreciation goes to my own kin. Thank you to my
family in Australia who accommodated me during my visit to Curtin University.

All along the dissertation writing process, I kept envisioning what it would look like
at the end and I would often find myself composing the ‘Acknowledgments’ in my
head. However, now that I have to put words on paper, I am at a loss. All I can
muster is: thank you from the bottom of my heart to the love of my life, Asher, who
pushed me to always pursue further and tolerated my unavailability, and thank you
to the miniature loves of my life, Mordechai, Aryeh, and Mira, who have egged me on
and have had the patience to wait until ‘Mommy has finished working’.

Abbreviations
CES Classroom Environment Scale
CCEI Computerized Classroom Ergonomic Inventory
CLES Constructivist Learning Environment Survey
COLES Constructivist-Orientated Learning Environment Survey
CUCEI College and University Classroom Environment Inventory
DOLES Distance and Open Learning Environment Scale
DELES Distance Education Learning Environments Survey
ICEQ Individualized Classroom Environment Questionnaire
LAG Laboratory Assessment in Genetics
LEI Learning Environment Inventory
MCI My Class Inventory
OLLES Online Learning Environment Survey
QTI Questionnaire on Teacher Interaction
SAI Scientific Attitude Inventory
SLEI Science Laboratory Environment Inventory
TOSRA Test of Science-Related Attitudes
TROFLEI Technology-Rich Outcomes-Focused Learning Environment
Inventory
VL Virtual Laboratory
WEBLEI Web-Based Learning Environment Instrument
WIHIC What Is Happening In this Class? Questionnaire

Table of Contents
DECLARATION .................................................................................................................. II
ABSTRACT ........................................................................................................................ III
ACKNOWLEDGEMENTS ................................................................................................. V
ABBREVIATIONS ........................................................................................................... VII
LIST OF TABLES ............................................................................................................... X
LIST OF FIGURES ........................................................................................................... XI
CHAPTER 1 .......................................................................................................................... 1
INTRODUCTION ................................................................................................................. 1
1.1 INTRODUCTION ...................................................................................................... 1
1.2 RATIONALE ............................................................................................................ 2
1.3 RESEARCH QUESTIONS, DESIGN AND METHOD ..................................................... 5
1.4 CONTEXT ................................................................................................................ 7
1.4.1 The State of Science Education Today on a National Scale .............................. 8
1.4.2 Reforming Science Curricula ............................................................................ 9
1.4.3 Science Laboratories ....................................................................................... 11
1.5 LIMITATIONS ........................................................................................................ 15
1.6 OVERVIEW OF THESIS .......................................................................................... 16
CHAPTER 2 ........................................................................................................................ 18
LITERATURE REVIEW ................................................................................................... 18
2.1 INTRODUCTION .................................................................................................... 18
2.2 THEORETICAL FRAMEWORK: LEARNING ENVIRONMENTS RESEARCH ................ 19
2.2.1 History and Development of Learning Environments Research ..................... 20
2.2.2 Instruments for Assessing the Learning Environment ..................................... 22
2.2.3 Past Applications of Learning Environment Scales ........................................ 43
2.3 STUDENT ATTITUDES ........................................................................................... 60
2.3.1 Definition of Attitude ....................................................................................... 61
2.3.2 Assessment of Student Attitudes ....................................................................... 63
2.3.3 Impact of Educational Interventions on Students’ Attitudes ........................... 65
2.4 GENDER DIFFERENCES IN SCIENCE EDUCATION ................................................. 69
2.5 VIRTUAL LABORATORIES IN SCIENCE EDUCATION ............................................. 74
2.5.1 The Proponents: Rationale for Integrating Educational Technology ............. 74
2.5.2 Virtual Laboratories ........................................................................................ 78
2.5.3 Virtual Learning Environments ....................................................................... 84
2.5.4 Overview of Studies Employing Virtual Laboratories ..................................... 88
2.5.5 The Critics: The No Significant Difference Phenomenon Regarding
Educational Technology ............................................................................................... 93
2.6 SUMMARY ............................................................................................................ 99
CHAPTER 3 ...................................................................................................................... 101
METHODOLOGY ............................................................................................................ 101
3.1 INTRODUCTION .................................................................................................. 101
3.2 RESEARCH QUESTIONS ...................................................................................... 101
3.3 SAMPLE SELECTION AND CHARACTERIZATION ................................................. 102
3.4 INSTRUMENTATION AND RESOURCES USED TO IMPLEMENT THE STUDY.......... 105
3.4.1 Instrumentation: Development of LAG Questionnaire .................................. 105
3.4.2 Other Resources ............................................................................................ 114
3.5 PROCEDURES ...................................................................................................... 115
3.5.1 Treatment Conditions .................................................................... 115
3.5.2 Design and Delivery of Virtual Laboratories ................................................ 118
3.5.3 Timetable ....................................................................................................... 120
3.5.4 Administration of LAG Questionnaire ........................................................... 123
3.5.5 Ethical Issues ................................................................................................. 124
3.6 DATA COLLECTION, ENTRY, AND ANALYSIS..................................................... 124
3.6.1 Collection of Data ......................................................................................... 125
3.6.2 Entry of Data ................................................................................................. 127
3.6.3 Statistical Methods for Analysis of Data ....................................................... 129
3.7 LIMITATIONS ...................................................................................................... 133
3.7.1 Loss of Sample ............................................................................................... 133
3.7.2 Treatment Conditions .................................................................................... 134
3.7.3 Technical Issues............................................................................................. 135
3.7.4 Instrument Administration ............................................................................. 136
3.8 SUMMARY .......................................................................................................... 136
CHAPTER 4 ...................................................................................................................... 139
DATA ANALYSES AND RESULTS .............................................................................. 139
4.1 INTRODUCTION .................................................................................................. 139
4.2 VALIDITY AND RELIABILITY OF LEARNING ENVIRONMENT, ATTITUDE, AND
ACHIEVEMENT SCALES COMPOSING THE LAG ................................................................... 140
4.2.1 Factor Structure of Learning Environment and Attitude Scales ................... 141
4.2.2 Internal Consistency Reliability of Learning Environment and Attitude Scales ...... 143
4.2.3 Discriminant Validity of Learning Environment and Attitude Scales ........... 145
4.2.4 Ability of Learning Environment to Differentiate Between Classrooms ....... 146
4.2.5 Validation of Achievement Section of the LAG.................................................. 146
4.3 ASSOCIATIONS BETWEEN LEARNING ENVIRONMENT, ATTITUDES, AND
ACHIEVEMENT ..................................................................................................................... 148
4.3.1 Associations Between Learning Environment and Attitudes ......................... 149
4.3.2 Associations Between Learning Environment and Achievement ................... 150
4.4 EFFECTIVENESS OF VIRTUAL LABORATORIES AND THEIR DIFFERENTIAL
EFFECTIVENESS FOR DIFFERENT SEXES IN TERMS OF LEARNING ENVIRONMENTS,
ATTITUDES, AND ACHIEVEMENT......................................................................................... 152
4.4.1 Overview of Results for Effectiveness for Virtual Laboratories and Differential
Effectiveness of Virtual Laboratories for Males and Females ................................... 154
4.4.2 Effectiveness of Instruction Using Virtual Laboratories in Terms of Learning
Environment Perceptions, Attitudes, and Achievement .............................................. 156
4.4.3 Sex Differences in Learning Environment Perceptions, Attitudes, and
Achievement ................................................................................................................ 165
4.4.4 Differential Effectiveness of Virtual Laboratories for Males and Females... 169
4.5 SUMMARY .......................................................................................................... 176
CHAPTER 5 ...................................................................................................................... 179
DISCUSSION .................................................................................................................... 179
5.1 INTRODUCTION .................................................................................................. 179
5.2 OVERVIEW OF THESIS ........................................................................................ 179
5.2.1 Research Question 1 ...................................................................................... 181
5.2.2 Research Question 2 ...................................................................................... 183
5.2.3 Research Questions 3 and 4 .......................................................................... 184
5.2.4 Summary of Qualitative Data ........................................................................ 188
5.2.5 Teachers’ Perspectives Regarding the Learning Environment and Student
Outcomes .................................................................................................................... 191
5.3 SIGNIFICANCE AND IMPLICATIONS .................................................................... 192
5.3.1 Implications for Educational Research ......................................................... 193
5.3.2 Implications for Educational Practitioners ................................... 195
5.4 LIMITATIONS AND SUGGESTIONS FOR FURTHER RESEARCH ............................. 198
5.5 CONCLUSION ...................................................................................................... 202
APPENDICES ................................................................................................................... 236
APPENDIX A: LABORATORY ASSESSMENT IN GENETICS (LAG) .................................... 237
APPENDIX B: SEMI-STRUCTURED INTERVIEW QUESTIONS FOR STUDENTS .................... 246
APPENDIX C: SEMI-STRUCTURED INTERVIEW QUESTIONS FOR TEACHERS.................... 248
APPENDIX D: LIST OF VIRTUAL LABORATORIES AVAILABLE FOR TEACHERS ............... 250
APPENDIX E: INSTRUCTIONS TO TEACHERS FOR PARTICIPATING IN MY STUDY ........... 252
APPENDIX F: EXAMPLE OF A VIRTUAL LABORATORY WORKSHEET .............................. 253

List of Tables
TABLE 2.1 OVERVIEW OF SCALES USED IN SOME LEARNING ENVIRONMENT INSTRUMENTS
(CUCEI, MCI, QTI, SLEI, CLES, WIHIC, AND COLES) ............................................ 24
TABLE 2.2 SCALE DESCRIPTION, MOOS’ DIMENSION, AND SAMPLE ITEM FOR EACH
TECHNOLOGY-RICH OUTCOMES-FOCUSED LEARNING ENVIRONMENT INVENTORY
(TROFLEI) SCALE........................................................................................................ 39
TABLE 2.3 SOME STUDIES OF ASSOCIATIONS BETWEEN THE LEARNING ENVIRONMENT
AND STUDENT OUTCOMES ............................................................................................ 46
TABLE 2.4 FRASER'S (1981) TOSRA SCALES AND KLOPFER'S (1971) CLASSIFICATION .... 64
TABLE 3.1 SCALE DESCRIPTION AND SAMPLE ITEM FOR EACH LEARNING ENVIRONMENT
SCALE IN THE LAG ..................................................................................................... 110
TABLE 3.2 SCALE DESCRIPTION, JUSTIFICATION, AND SAMPLE ITEM FOR EACH TOSRA
SCALE USED IN THE LAG ............................................................................................ 112
TABLE 3.3 TITLE, TYPE, DESCRIPTION AND SOURCE FOR EACH VIRTUAL LABORATORY
.................................................................................................................................... 119
TABLE 3.4 IMPLEMENTATION OF CONDITIONS OF THE STUDY BY EACH TEACHER
INCLUDING CLASS COMPOSITION, DURATION OF STUDY, THE ADMINISTRATION OF
THE VIRTUAL LABORATORIES (VL), AND INFORMATION ABOUT COVARIATES......... 121
TABLE 4.1 FACTOR ANALYSIS RESULTS FOR ATTITUDE AND LEARNING ENVIRONMENT
SCALES ........................................................................................................................ 142
TABLE 4.2 SCALE MEAN, STANDARD DEVIATION, INTERNAL CONSISTENCY (CRONBACH
ALPHA RELIABILITY), DISCRIMINANT VALIDITY (MEAN CORRELATION WITH OTHER
SCALES), AND ABILITY TO DIFFERENTIATE BETWEEN CLASSROOMS (ANOVA
RESULTS) FOR LEARNING ENVIRONMENT AND ATTITUDE SCALES ........................... 144
TABLE 4.3 ASSOCIATIONS BETWEEN LEARNING ENVIRONMENT QUESTIONNAIRE SCALES
AND ATTITUDES AND ACHIEVEMENT IN TERMS OF SIMPLE CORRELATIONS (r),
MULTIPLE CORRELATIONS (R) AND STANDARDIZED REGRESSION COEFFICIENTS (β)
.................................................................................................................................... 149
TABLE 4.4 TWO-WAY ANALYSIS OF VARIANCE (ANOVA) FOR INSTRUCTIONAL METHOD
AND SEX FOR EACH SCALE OF THE LAG .................................................................... 155
TABLE 4.5 ITEM MEAN, ITEM STANDARD DEVIATION AND DIFFERENCE BETWEEN
INSTRUCTIONAL METHODS (ANOVA RESULTS AND EFFECT SIZE) FOR EACH
LEARNING ENVIRONMENT AND STUDENT OUTCOME MEASURED BY THE LAG........ 157
TABLE 4.6 ITEM MEAN, ITEM STANDARD DEVIATION AND SEX DIFFERENCE (ANOVA
RESULTS AND EFFECT SIZE) FOR EACH LEARNING ENVIRONMENT SCALE AND
STUDENT OUTCOME MEASURED BY THE LAG........................................................... 166
TABLE 4.7 DIFFERENTIAL EFFECTIVENESS (INSTRUCTIONAL METHOD X SEX INTERACTION)
OF VIRTUAL LABORATORIES FOR MALES AND FEMALES FOR EACH LEARNING
ENVIRONMENT SCALE AND STUDENT OUTCOME MEASURED BY THE LAG .............. 170
TABLE 5.1 SUMMARY OF STUDENT INTERVIEW RESULTS FOR STUDENTS EXPERIENCING
EACH INSTRUCTIONAL METHOD FOR EACH LEARNING ENVIRONMENT AND OUTCOME
VARIABLE (BASED ON SECTION 4.4) .......................................................................... 189
TABLE 5.2 SUMMARY OF STUDENT INTERVIEW RESULTS FOR SEX DIFFERENCES FOR EACH
LEARNING ENVIRONMENT AND OUTCOME VARIABLE ............................................... 191
TABLE 5.3 SUMMARY OF TEACHERS’ OBSERVATIONS FOR EACH LEARNING ENVIRONMENT
AND OUTCOME VARIABLE, AND GENDER .................................................................. 192

List of Figures

FIGURE 3.1 NUMBERS OF STUDENTS IN EXPERIMENTAL AND CONTROL CLASSES FOR EACH
TEACHER ..................................................................................................................... 104
FIGURE 3.2 NUMBERS OF FEMALE AND MALE STUDENTS FOR EACH TEACHER................ 104
FIGURE 3.3 SCREENSHOT FROM A SAMPLE VIRTUAL LABORATORY ................................ 116
FIGURE 4.1 FREQUENCY DISTRIBUTION FOR ACHIEVEMENT ............................................ 147
FIGURE 4.2 PROFILE OF MEANS FOR INSTRUCTIONAL GROUPS AS MEASURED BY LAG .. 158
FIGURE 4.3 PROFILE OF MEANS FOR DIFFERENT SEXES AS MEASURED BY LAG ............. 167
FIGURE 4.4 DIFFERENTIAL EFFECTIVENESS OF VIRTUAL LABORATORIES FOR FEMALES AND
MALES FOR THE LEARNING ENVIRONMENT SCALE OF MATERIAL ENVIRONMENT ... 172
FIGURE 4.5 DIFFERENTIAL EFFECTIVENESS OF VIRTUAL LABORATORIES FOR FEMALES
AND MALES FOR THE LEARNING ENVIRONMENT SCALE OF TEACHER SUPPORT ....... 173
FIGURE 4.6 DIFFERENTIAL EFFECTIVENESS OF VIRTUAL LABORATORIES FOR FEMALES
AND MALES FOR THE ATTITUDE SCALE OF INQUIRY ................................................. 174

Chapter 1

Introduction

“If we knew what it was we were doing, it would not be called research, would it?”
– Albert Einstein

1.1 Introduction

In order to compete globally, students require a strong foundation in science,
technology, engineering, and mathematics (STEM). To this end, the development
and evaluation of educational innovations in science classes have become
increasingly significant. One such educational innovation – virtual laboratories –
was evaluated in my study.

Intended to simulate real experiments, virtual laboratories, available through the
Internet, can utilize less instructional time, reduce reliance on complex, hazardous,
and costly equipment, and allow students to experience high-level investigations that
might not otherwise be possible in a high school classroom setting. In response to
calls to adopt more educational technology in science classrooms, the use of
virtual laboratories also can offer an engaging instructional medium, one to which
many students of the digital age are well-accustomed. However, evidence is
required about whether this instructional tool is indeed effective and whether virtual
laboratories should continue to be developed and utilized in classrooms.

Because students spend approximately 20,000 hours in a classroom setting during
the period extending from pre-school to university (Fraser, 2001), the learning
environment has a strong impact on students, and students’ perceptions of that
environment are an important measure of the effectiveness of any educational
intervention. Therefore, the effectiveness of virtual laboratories was investigated in
this study in terms of students’ perceptions of the learning environment, as well as
the student outcomes of attitudes and achievement.

This chapter introduces the components of this study. The rationale for this study is
explained in Section 1.2. The research questions, design, and method are described
in Section 1.3. The context, which describes the setting in which the study was
implemented and the curriculum on which the study was based, is explored in
Section 1.4. Limitations and boundaries regarding this study are delineated in
Section 1.5. This chapter concludes with an overview of the remaining chapters that
review relevant literature, discuss an appropriate framework, explain the methods of
the study, describe methods for analyzing the data, report the results, and provide
implications for practical applications and future research.

1.2 Rationale

Achievement scores in the sciences for American students have raised alarms about
the abilities, skills, and knowledge base of the nation’s future work force (see
Section 1.4.1). As decried by Thomas Friedman in The World is Flat (2006), the US
has entered an era of ‘outsourcing’ low-skilled jobs to developing countries because
the cost is less. Outsourcing also occurs for high-skilled jobs involving Science,
Technology, Engineering, and/or Mathematics (STEM) for which the American
workforce is ill-equipped; this is referred to as the ‘brain drain’ or “the chronic
decline in homegrown STEM talent” (Dugger, 2010). A 2005 report of the US
Bureau of Labor Statistics predicted that, by 2012, the number of jobs in STEM
occupations would grow by 47%, which is three times the rate for all other
occupations (Russell & Siley, 2005). Fortunately, Friedman argues, educational
systems are dynamic and can be enhanced to better train American youth and
prevent such outsourcing (Friedman, 2006).

In response to this phenomenon, a number of initiatives to improve science
education have been launched. Examples include the Educate to Innovate campaign
that focuses on activities outside the classroom and National Lab Day that matches
scientists willing to volunteer their time with local science classes. Challenges to
design video games that incorporate scientific concepts and skills, online directories
for local science activities (www.connectamillionminds.com), and an emphasis on
science in popular children’s television programming are also some of the
innovative plans offered by various organizations and corporations (Chang, 2009,
November 23).

The National Science Foundation’s Task Force on Cyberlearning also proposed
upgrading the state of Science, Technology, Engineering, and Mathematics (STEM)
education by incorporating interactive technology, with one of the examples offered
being virtual laboratories (Borgman et al., 2008). The integration of technology into
science laboratories has begun, but several researchers note the lack of empirical
evidence concerning its effectiveness in general (Russell, 1999), and the
effectiveness of using virtual laboratories in particular (Harms, 2000; Hofstein &
Lunetta, 2004; Javidi & Sheybani, 2006). Ma and Nickerson (2006) acknowledge
the necessity to further evaluate the educational effectiveness of laboratory
simulations by conducting controlled studies. While there are a number of studies
that have assessed such educational innovations from the field of Information
Technology, there is hardly any evaluative research on virtual laboratories from an
educational perspective. The purpose of this study, then, was to evaluate the
effectiveness of the use of such virtual laboratories in science classes.

Virtual laboratories are essentially simulated experiments conducted using computer
software (often through the Internet), that offer numerous advantages for both
student learning and the logistics of educational experiences, as discussed in Section
2.5.2. The author’s initial motivation to conduct this study was based on a teaching
experience in which under-performing male students seemed to be engaged by this
technology, which appeared to lead to increased understanding and task completion.
However, because these initial observations were purely anecdotal, further evidence
was needed about the effectiveness of such virtual laboratories.

The researcher chose the field of learning environments as the foundation for the
current study. Classroom learning environment research focuses on interactions that
take place within a classroom, between students, and between teachers and students
(Fraser, 2012). Learning environment instruments can be used to assess student
perceptions of what is taking place in the classroom and these assessments can guide
future directions to improve the learning environment. Because associations have
been established between the learning environment and student attitudes towards
science, as well as with achievement in science (Fraser, 2012), enhancing the
learning environment through an educational innovation (such as virtual
laboratories) might also improve students’ attitudes and achievement levels.

Attitudes towards science amongst middle to early high school students have been
found to decline relative to their earlier schooling experiences (Oliver & Venville,
2011). Students who lose interest in the sciences are less likely to further explore
the field in higher education and tend not to pursue such lines of work (Tytler &
Osborne, 2012). If educational researchers can uncover evidence for the
effectiveness of instructional media that engage students in science at this critical
age of development, it might inform current and future practices for improving
attitudes towards science and science-related careers. Therefore, in addition to
assessing students’ perceptions of the learning environment, this study also
examined students’ attitudes towards science, especially because robust and
economical instruments are available to assess such attitudes.

While achievement is traditionally a measure of the effectiveness of educational
innovations, this study focused mainly on how the psychosocial aspects of the
classroom were impacted. However, the effect of virtual laboratories on
achievement was also taken into account in order to check students’ understanding
of the material and to confirm previously-established links between achievement
and such psychosocial aspects of education. If both students’ perceptions of the
classroom environment and their attitudes towards science improved as a result of
an intervention, but their conceptual understanding was unchanged, then the
intervention could not be considered to be truly effective.

Additionally, because learning environment instruments have been honed to detect
differences between subgroups (such as different sexes) within a classroom setting,
they could be applied in this study to examine differences between males and
females in perceptions of the learning environment, attitudes, and achievement.
This is a significant area of research in science education and there is much
controversy over whether such gender differences exist (Scantlebury, 2012).
Therefore, this study also explored gender differences, especially whether virtual
laboratories are a gender-inclusive instructional technique.

Ideally, an evaluation of any intervention should include a comparison group
without the intervention so that data from both groups can be compared. The
current study adopted a quasi-experimental design for this purpose, with data from
students in classes that engaged in virtual laboratories being compared with data
from students in traditional classes. However, quantitative data cannot provide the
whole picture of the effect of virtual laboratories, especially because students cannot
indicate their opinions outside of the specific questions about which they are asked
on an instrument. For this reason, the collection of qualitative data through semi-
structured interviews was an important element in this study. A triangulation of
quantitative and qualitative methods of data collection in learning environment
research has been recommended by Tobin and Fraser (1998). Elaboration of the
research design is described in Section 1.3 and further details are furnished in
Chapter 3.

This is the first study of its kind to evaluate the effectiveness of virtual laboratories
in science education in terms of students’ perceptions of the learning environment
and the student outcomes of attitudes and achievement. Therefore, findings have the
potential to usefully inform future researchers in science education, practitioners
such as administrators and teachers, and policy-makers, and eventually impact on
students.

1.3 Research Questions, Design and Method

Once the purpose of this study was conceived, it was further divided into four
separate aims for exploring various aspects of the educational intervention. Each
aspect of my investigation was guided by a research question, as listed below.

To check whether the instruments used in this study were valid and reliable, the first
research question was constructed:

Research Question 1:

Are scales from the Test Of Science Related Attitudes (TOSRA), Science
Laboratory Environment Inventory (SLEI), and Technology-Rich Outcomes-
Focused Learning Environment Inventory (TROFLEI), as well as
achievement items, valid and reliable when used with a sample of high
school students taking biology in the US?

To uncover associations between the three criteria used to assess the effectiveness of
virtual laboratories, the second research question was written:

Research Question 2:

Are there associations between the perceived classroom learning
environment and student outcomes of attitudes towards and achievement in
science?

To examine the effectiveness of virtual laboratories in terms of the three criteria of
effectiveness, the third research question was formed:

Research Question 3:

Is the use of virtual laboratories in high school science classes effective in
terms of students’:

a. perceptions of their learning environment,
b. attitudes towards science, and
c. academic achievement?

To examine whether using virtual laboratories was differentially effective for
different sexes, the final research question was asked:

Research Question 4:

Is the use of virtual laboratories differentially effective for males and females
in terms of students’:

a. perceptions of their learning environment,
b. attitudes towards science, and
c. academic achievement?

Chapter 3 describes the research design and method in detail; a brief overview
follows. This study used a quasi-experimental design to compare
students in 11 high school classes who engaged in virtual laboratories with students
in 10 high school classes who did not (they continued learning and experimenting in
their normal fashion). Eight different virtual laboratories related to the topic of
genetics were chosen by the researcher for their design and use of inquiry. Teachers
used at least four of these virtual laboratories. The treatment period lasted from two
to twelve weeks.

This study involved a questionnaire called the Laboratory Assessment in Genetics
(LAG) that was administered to a sample of 322 students at the end of the treatment
period. As well, semi-structured interviews were conducted with six self-selected
students from the same sample and three of their teachers. The scales for the LAG
were adopted from previously validated questionnaires that measure students’
perceptions of the learning environment, such as the Science Laboratory
Environment Inventory (SLEI) and the Technology-Rich Outcomes-Focused
Learning Environment Inventory (TROFLEI), in addition to scales measuring
students’ attitudes towards science from the Test Of Science Related Attitudes
(TOSRA), and an achievement scale with items borrowed from standardized biology
examinations. The learning environment and attitude scales were first checked for
validity and reliability, and then associations between the variables — perceived
classroom learning environment and student outcomes of achievement and attitudes
towards science — were explored. Finally, the effectiveness of using virtual
laboratories, as well as the differential effectiveness for males and females, in terms
of perceptions of the learning environment, attitudes towards science, and academic
achievement, were investigated. Effect sizes were also calculated for each of these
analyses to determine the magnitude of any differences.
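To illustrate the analyses just described, the minimal sketch below shows how a
two-way ANOVA (instructional method x sex) and an effect size might be computed in
Python with pandas and statsmodels. It is offered only as an illustration under
stated assumptions, not as the actual analysis scripts used in this study; the data
file and column names (lag_scores.csv, method, sex, attitude) are hypothetical.

    # Illustrative sketch only: a two-way ANOVA (instructional method x sex)
    # with an effect size, mirroring the analyses described above. The file
    # and column names are hypothetical stand-ins for the LAG scales.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Assumed layout: one row per student, with instructional group, sex,
    # and a scale score averaged over that scale's items.
    df = pd.read_csv("lag_scores.csv")

    # Two-way ANOVA; the interaction term tests for differential
    # effectiveness of virtual laboratories for males and females.
    model = ols("attitude ~ C(method) * C(sex)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Effect size (Cohen's d) for the instructional-method difference,
    # expressed in pooled standard deviation units.
    virtual = df.loc[df["method"] == "virtual", "attitude"]
    control = df.loc[df["method"] == "control", "attitude"]
    pooled_sd = ((virtual.std() ** 2 + control.std() ** 2) / 2) ** 0.5
    print("Cohen's d:", (virtual.mean() - control.mean()) / pooled_sd)

An analogous model would be fitted in turn for each learning environment, attitude,
and achievement measure.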

1.4 Context

The field of science education provided the general context for this study. This
section surveys the landscape of science education today regarding recent trends and
future directions on a national scale, with respect to global circumstance (Section
1.4.1). Next, Section 1.4.2 delves into the particulars of the science curriculum, on
which the content of the virtual laboratories used in this study was based. The role
of the science laboratory is also explored because it provided the setting for the
intervention evaluated in this study (Section 1.4.3).

1.4.1 The State of Science Education Today on a National Scale

This study took place in the United States of America and involved public high
school students from four different states along the eastern coast. The sciences are
considered a ‘high need’ area in education because there is a shortage of qualified
teachers and because students have been losing interest in this area (Baird, 2012).

The subject of science is part of a larger area of learning commonly referred to as
STEM, which stands for Science, Technology, Engineering, and Mathematics.
Science seeks to understand the natural world; technology aims to modify the natural
environment through innovation in order to satisfy perceived human wants or needs;
engineering is about developing ways to economically use the materials and forces
of nature for the benefit of mankind; and mathematics refers to the study of patterns
and relationships to provide models for the natural world (National Research
Council (NRC), 1996). Because science education is a subset of STEM education, the terms ‘science
education’ and ‘STEM education’ are used interchangeably throughout this chapter.

With the globalization of the economy as well as other aspects of society and
culture, STEM education must involve global collaboration as it prepares the future
workforce (Friedman, 2006). In the last decade or two, several international
assessments were developed to compare the achievement of students in different
countries. The most notable are the Third (or, renamed ‘Trends in’) International
Mathematics and Science Study (TIMSS) and the Programme for International
Student Assessment (PISA) for reading, mathematics, and science literacy. Data for
TIMSS have been collected from 4th and 8th graders every four years since 1995 and
data for PISA have been collected from 15 year-olds every three years since 2000
(Programme for International Student Assessment (PISA), 2009; Trends in
International Science and Mathematics Study (TIMSS), 2007).

How does the US compare to other countries in terms of student competency in
science? According to the most recent PISA (2009) results, about 18% of American
students scored at or below Level 1, failing to reach proficiency in science. The PISA
report states: “Students whose proficiency in science is limited to Level 1 will find it
difficult to participate fully in society at a time when science and technology play a
large role in daily life” (Organization for Economic Co-operation and Development
(OECD), 2010, p. 24). Similar findings were reported from the TIMSS (Gonzales,
Williams, Jocelyn et al., 2008).

To improve education in general, the US government launched a number of efforts
with the aim of increasing educational opportunities for all learners. Such efforts in
general education included A Nation at Risk (Gardner, 1983), which focused on
students, and President Bush’s No Child Left Behind Act (NCLB, 2001), which
focused on schools’ Adequate Yearly Progress (AYP). Currently, the Obama
administration has been granting more and more waivers to schools that have not
met their AYP, indicating NCLB’s lack of success (Rich, 2012). A new
initiative launched by President Obama is the Race to the Top (Westendarp &
Westendarp, 2009) that pits states against each other to infuse a sense of competition
and urgency regarding the nation’s educational status. As part of reforming general
education, the US Department of Education continually seeks to improve science
education by investing in efforts to implement programs that aim towards higher
standards, such as the examples cited in Section 1.4.2.

1.4.2 Reforming Science Curricula

While the examples offered in Section 1.4.1 refer to the how of engaging students in
science, what science content is being delivered also requires upgrading. As the US
became aware of how poorly its students were achieving in science relative to other
developed countries, it sought to establish clear learning statements as goals for its
students to attain. Historical examples of such science learning goals include
Benchmarks for Science Literacy (American Association for the Advancement of
Science (AAAS), 1989), the National Science Education Standards (National
Research Council (NRC), 1996, 2005), and the Next Generation Science Standards
(NGSS, 2011). In accordance with the new framework’s scientific practices to
promote scientific inquiry, to focus on cross-cutting concepts (concepts fundamental
to different disciplines of science), and to deepen core content, virtual laboratories
(the intervention in the current study) are one medium through which such scientific
inquiry can be practiced and cross-cutting concepts emphasized, without the
distraction of time-consuming hands-on tasks.

The specific topic on which the virtual laboratories in this study were based was
genetics, the study of inheritance. Disciplinary core ideas introduced by the NGSS
are addressed by such virtual laboratories, including ‘LS1: From molecules to
organisms: Structures and processes’, ‘LS3: Heredity: Inheritance and variation of
traits’, and ‘LS4: Biological evolution: Unity and diversity’ (National Research
Council (NRC), 2011, pp. ES-3). Such ideas are considered to be difficult to learn
(Bahar, Johnstone, & Hansell, 1999) because they require multilevel thinking: an
organism is at the macro-level, while cells, chromosomes and DNA are at the micro-
and molecular level, and genotypes are at the symbolic level (Johnstone, 1991). For
instance, to master the topic of genetics at the high school level, students must be
able to discuss the structure and function of key molecules in the cell, explain the
process and purpose of DNA replication, meiosis, gene expression, cellular
regulation, and mutations, predict the impact of environmental factors on these
processes, discuss how they lead to diversity, and identify the role of genetics in
evolution. The teaching of genetics proves to be complex, as well. Controversy
exists over the order in which the sub-topics should be taught and at which point in
the curriculum genetics should be presented (Redfield, 2012) in order to maximize
understanding.

Since Watson and Crick’s (1953) discovery of the structure of DNA, the area of
genetics, one that began in the mid-1800s as classic ‘Mendelian genetics’, took on a
new direction and was renamed ‘molecular genetics’. From this historic event,
entire new fields within molecular genetics were born (Marbach-Ad, Rotbain, &
Stavy, 2008), including genetic engineering, which is the intentional modification of
an organism’s characteristics by manipulating its genetic material. From an
economic perspective, the burgeoning pharmaceutical industry provides the impetus
to raise standards in the learning of genetics because it relies on a skilled workforce,
which will be drawn from today’s students.

Unfortunately, current genetics instruction leaves many students ill-prepared to
understand, discuss, and engage in debates about the benefits and detriments of
technological advances in genetic engineering, such as genetically-modified (GM)
foods, cloning, gene therapy, personalized medicine, and genetic screening and
counseling (Toth, Morrow, & Ludvico, 2009). Enhancing such instruction through
the use of models and visualization might prove to be helpful, especially at the
molecular level at which students have difficulty understanding this topic simply
based on textual presentations (Marbach-Ad, Rotbain, & Stavy, 2008). A number of
researchers note the potential of computer animations/simulations to facilitate the
visualization of abstract concepts and processes at the molecular level (Marbach-Ad,
Rotbain, & Stavy, 2008; Tsui & Treagust, 2004; Wu, Krajcik, & Soloway, 2001).
Virtual laboratories, which are such examples of computer-based simulations, are
capable of reducing the logistic load required in both the classroom and laboratory
when learning molecular genetics, so that students might better focus on its
demanding cognitive aspects.

The National Assessment of Educational Progress (NAEP), colloquially referred to
as the ‘Nation’s Report Card’, assesses whether learning goals are achieved and
whether interventions are beneficial; it showed that science scores improved
between 2009 and 2011, and that the achievement gap for minorities also narrowed
(National Center for Educational Statistics (NCES), 2012a). Another study revealed
that, over the course of a decade (from 2000–2010), more high school students were
enrolled in science and mathematics courses but that achievement had not improved
(Aud, 2012). However, the stagnancy of these scores might even be commended
because it indicates that standards have not been artificially lowered in order to
entice students to engage in more science and mathematics (Campbell, 2012).
These findings suggest that the interventions aimed at improving science education
in the last decade might hold promise. Therefore, one of the aims of this study was
to investigate whether using an intervention (i.e. virtual laboratories) would lead to
increased student science achievement.

1.4.3 Science Laboratories

The laboratory has been a prominent feature of science education since the inception
of teaching science systematically in the 19th century. A laboratory refers to
“experiences in school settings in which students interact with equipment and
materials or secondary sources of data to observe and understand the natural world”
(Hofstein & Kind, 2012, p. 190). However, in the early years of science
experimentation in school, laboratories were simply environments in which to
practice or confirm information learned during lectures or from textbooks. The
evolution of laboratories as being the environment through which exploration and
inquiry occur took decades, and is a process that is still ongoing.

Going back to the 1960s–1970s, the contributions of psychologists to the field of
science education, and specifically experimentation, cannot be overstated.
It was anticipated that science teaching could also help to develop the sort of
thinking processes in youngsters that psychologists espoused. Based on Piaget
(1970) and cognitive psychology, educational researchers developed the learning
cycle to emphasize the process of science: 1) exploration, involving students in
manipulating concrete materials, 2) concept introduction in which the teacher
introduces new concepts, and 3) concept application when the student applies the
learned concept to novel situations. In this way, work with concrete objects, as
afforded by the laboratory part of science classes, was considered to be an essential
component of the development of thought processes, especially as a prerequisite to
the ‘formal operations’ period (Hofstein & Kind, 2012; Karplus & Butts, 1977).

The development of an inquiry-based model for scientific experimentation
continued throughout and beyond this period (Kempa & Ward, 1975; Tamir, 1974).
However, it was argued that an overemphasis was placed on the ‘scientific method’
as a simplified, empiricist approach that included following instructions, getting the
correct answer, and manipulating equipment. Therefore, the 1980s–1990s saw an
increase of science as craftsmanship (i.e. inquiry with a trained scientist/teacher to
become better problem solvers) and a focus on procedural knowledge (i.e. learning
how to do science), in addition to conceptual development, which formed a new
perspective on science education referred to as ‘constructivism’ (Hofstein & Kind,
2012). In the field of psychology, developmental constructivism refers to the idea
that children learn by doing (Piaget, 1963). In line with this theory, the science
laboratory was considered the ideal setting for such construction of knowledge.

Later, constructivists further expanded their ideas by incorporating Vygotsky’s
(1978) socio-cultural view of learning, which dictates that the construction of
concepts originates from socially-mediated activities, especially through language.
Therefore, learning that takes place in a laboratory was seen as a socialization into
scientific culture. This process requires students to engage in metacognition in that
they must internalize their own thought processes as well as those of their peers
(Hofstein & Kind, 2012).

Despite all of these reform efforts over the years, challenges still remain; Hofstein
and Lunetta (2004) pose serious questions about the efficiency and benefits of the
science laboratory. The laboratory in science education has been shown to be
effective in the development of practical, manipulative skills related to handling
equipment, but it has failed to enhance concept-building, critical thinking, and an
understanding of the nature of science; in essence, the laboratory has become a place
for “manipulating equipment and materials, but not ideas” (Hofstein & Kind, 2012,
p. 192).

Some reasons for the lack of evidence regarding the effectiveness of laboratories
include inadequate assessment and research procedures (Lazarowitz & Tamir,
1994), such as insufficient control over laboratory procedures (e.g. laboratory
manuals, teacher behavior, teachers’ assessments of student achievement),
inappropriate samples, and the use of measures that were not sensitive to the
laboratory learning environment (Hofstein & Lunetta, 1982). Since then, a number
of instruments to measure dimensions specific to the science laboratory were
developed, such as the Science Laboratory Environment Inventory (SLEI) discussed
further in Section 2.3.2.

In practice, concepts such as inquiry and constructivism were
difficult to implement. Teachers preferred the safer ‘cookbook’ approach, in which
students perform investigations as if they are following a recipe; teachers
underestimated learners’ capabilities to handle the high cognitive demand required
by true investigations (Hofstein & Kind, 2012). In fact, when Sere (2002)
conducted a comprehensive and long-term study of the use of laboratories in several
EU countries, based on 23 case studies, she found that, although laboratory work
was perceived as an essential component of the experimental sciences, the
objectives stated for practical work in the laboratory were too numerous and
demanding to be implemented by the average science teacher.

Hofstein and Kind (2012) highlight a number of possible solutions to these
challenges about the role of the laboratory in science classrooms. They stress the
importance of incorporating metacognition into all activities; this is also considered
to be a way to develop independent learners (NRC, 1996, 2005, 2011). Four
conditions are necessary in order to foster an environment of inquiry, in which
metacognition can occur: time, opportunity, guidance, and support (Baird & White,
1996). Time can be afforded by reducing the amount of time spent on tasks that can
be handled by technology. Similarly, Hofstein and Lunetta (2004) present a way to
overcome the challenges of a lack of inquiry in science laboratories: investing in the
training and use of ‘inquiry empowering technologies’. Even in the early 1980s,
digital technologies were recognized as important tools for the science laboratory
(further discussion about the history of educational technology is found in Section
2.5). Essentially, such technologies can be used to perform time-consuming tasks
such as gathering and analyzing data. This allows students more time to observe,
reflect and construct conceptual knowledge; conduct, interpret, and report more
accurate and relevant data; and focus on student collaboration, development of a
community of inquirers, and engagement in argumentation (Hofstein & Kind, 2012).
All of these features are outcomes of laboratory investigation steeped in the
concepts of inquiry, constructivism, and social learning.

Hofstein and Kind (2012) note some improvements in students’ conceptual
understanding of science with the integration of information and communication
technology (ICT) in the laboratory, but the level at which ICT is utilized in various
school laboratories varies. They surmise that ICT will be used to achieve a greater
synthesis between laboratory work and computer-based simulations and conclude
that this is an area that requires more research regarding its educational
effectiveness.

Hence, the current study evaluated the educational effectiveness of virtual
laboratories, which are computer-based simulations of real investigations. In line
with the aforementioned advantages for technology integration, the virtual
laboratories were anticipated to save time on menial laboratory tasks and allow
students to focus on the theory behind the investigation, as well as its connection to
the design of the experiment.

1.5 Limitations

The intention of this study was to compare the instructional effectiveness of virtual
laboratories relative to instruction without virtual laboratories. Therefore, virtual
laboratories, in the context of this study, were meant to supplement current methods
of instruction, rather than substitute traditional methods (i.e. hands-on experiments)
with more innovative and technological ones. In other words, the intention of this
study was not to compare virtual laboratories with their hands-on counterparts. The
researcher was not interested in investigating whether virtual laboratories were more
effective than hands-on laboratories for the same experiment because research
(Bredderman, 1982; Johnson, Wardlow, & Franklin, 1997; Ma & Nickerson, 2006)
already has indicated the effectiveness of hands-on experiences with regard to
experimentation. In fact, the small body of past research on physical (hands-on)
versus virtual laboratories is inconclusive regarding which method is more
beneficial for students (de Jong, Linn, & Zacharia, 2013). This point is further
expanded upon in Section 2.5 where the literature regarding the effectiveness of
virtual laboratories, and educational technology in general, is reviewed.

Rather, the researcher simply noted a lack of opportunities for students to engage in
complex experimentation and techniques with which they are expected to become
familiar, according to newer standards of science education (see Section 1.4.1).
Therefore, the hypothesis of my study was that the introduction of virtual
laboratories would help students in this regard more than current instructional
methods that only involve verbal explanations or textbook illustrations. Essentially,
the researcher did not wish to run similar physical experiments with the comparison
group because such experiments are not usually possible in a high school setting.
High school laboratories simply do not have the safety precautions in place for
conducting such experiments; nor do high schools have the resources, such as costly
equipment and long periods of time for conducting these experiments.

Thus, this study was limited to high school classrooms that do not have the
capability to conduct complex experiments; the comparison of virtual experiments
with similar physical ones was beyond the scope of this investigation. As long as
the results of this study do not suggest a negative impact of virtual laboratories on
students’ educational experience (including perceptions of their learning
environment, attitudes, and achievement), then they suggest the effectiveness of
virtual laboratories as an instructional method.

1.6 Overview of Thesis

Background information about this study, its implementation, and its results are
presented in five chapters. Chapter 1 introduced the background (Section 1.1),
rationale and purpose (Section 1.2), research questions and research design (Section
1.3), educational context (Section 1.4), and limitations (Section 1.5) of the study, as
well as an overview of the rest of the thesis (Section 1.6).

Chapter 2 reviews the literature relevant to the current study, and is organized into
several sections and sub-sections. Section 2.2 describes the theoretical framework
for the evaluation of the intervention, namely, learning environments, including its
history and development, instruments used to assess the learning environment, and
the application of learning environment scales to current educational research.
Section 2.3 deals with students’ attitudes towards science, another measure of the
effectiveness of virtual laboratories in my study, by defining the term ‘attitude’,
describing how attitudes are assessed, and reviewing research on the impact of
educational interventions on attitudes. Gender differences in science education are
also considered in Section 2.4. The intervention in this study, virtual laboratories, is
discussed in Section 2.5, including its definition, history, and benefits, and the
possibility that educational technology might not offer any advantages. Finally,
Section 2.6 examines various aspects of achievement, another measure of
effectiveness, in science education.

The methodological aspects of this study are depicted in Chapter 3. Section 3.2
delineates the research questions that guided the methods, while Section 3.3
describes the sample selection, and Section 3.4 discusses the assessment instruments
and other resources. The procedures for the study’s implementation are elucidated
in Section 3.5, and a description of how the data were collected, entered, and
analyzed is presented in Section 3.6. Errors and other general limitations are
pointed out in Section 3.7.

The next chapter, Chapter 4, reports results for validation of the various parts of the
LAG instrument in Section 4.2, for associations between perceptions of the learning
environment (SLEI, TROFLEI) and attitudes (TOSRA) and achievement in Section
4.3, and for the effectiveness of virtual laboratories in Section 4.4, including results
for the differential effectiveness of virtual laboratories for males and females.

The final chapter summarizes the earlier chapters regarding research methods and
results (Section 5.2), explicates the significance of the results and implications for
educational research and practice (Section 5.3), points out the limitations of this
study as well as suggesting directions for further research (Section 5.4), and
provides a conclusion for the study (Section 5.5).

Chapter 2

Literature Review

“If I have seen further it is by standing on the shoulders of giants.” – Isaac Newton

2.1 Introduction

This chapter reviews literature that supports the various aspects of this study. The
aim of this study was to investigate the effectiveness of virtual laboratories in terms
of students’ perceptions of the learning environment, their attitudes towards science,
and their achievement in science. Additionally, it examined the differential
effectiveness of virtual laboratories for different sexes using the same measures.

First, Section 2.2 focuses on the literature that provided the theoretical framework
for the evaluation of the intervention: the field of learning environments provided a
framework for evaluating the effectiveness of virtual laboratories. Included in this
section is a review of the literature regarding the historical background for the
development of the field (Section 2.2.1), the instruments used to assess the learning
environment (Section 2.2.2), and the application of learning environment scales to
current research in classrooms (Section 2.2.3).

Next, Section 2.3 reviews the literature that deals with students’ attitudes towards
science, another measure of the effectiveness of virtual laboratories. The term
‘attitude’ is defined in Section 2.3.1, methods of assessment are presented in Section
2.3.2, and literature concerning the impact of educational interventions on students’
attitudes is reviewed in Section 2.3.3.

Historically, many studies in science education have examined gender differences
when assessing the learning environment of a classroom. Therefore, in my study,
gender differences were considered when evaluating the effectiveness of virtual
laboratories. The literature that discusses whether such gender differences are
perceived by society or whether they are real and innate is examined in Section 2.4.

Finally, literature about the subject of the intervention in this study, virtual
laboratories, is reviewed in Section 2.5. More specifically, this section reviews
literature that describes the history of as well as the rationale for integrating
educational technology into classrooms (Section 2.5.1), that defines virtual
laboratories and portrays their advantages and application (Section 2.5.2), and
literature that provides a critical voice against such interventions (Section 2.5.3).

2.2 Theoretical Framework: Learning Environments Research

This study was couched in an area of educational research that has grown from its
infancy to prominence over the last 40 years. How does one measure the effects of
educational reform? Traditionally, educational research has focused on the learning
outcomes, especially achievement scores, of students experiencing an educational
intervention. However, evidence for the effectiveness of education is broader than a
mean score on achievement tests. This is the focus of the learning environments
framework; ‘learning environments’ is an area of research that involves not only the
learning outcome of achievement, but also a complex web of psychosocial factors
that impact on students, classrooms, and schools. More specifically, it explores
intangible aspects that give the classroom a characteristic tone (Fraser, 2001).

In fact, Fraser (2001, 2012) claims that the students, in contrast to external
observers, are the best evaluators of the classroom setting because they have been
observers in a multitude of classrooms during their entire lives. He states that, by
the time a student graduates from university, s/he will have been experiencing
classrooms for over 20,000 hours! Therefore, the perspective that is taken into
account in the field of learning environments is that of the student. That is, the field
uses students’ perceptions of the classroom environment, assessed by quantitative
surveys (Fraser, Giddings, & McRobbie, 1995), as criteria of effectiveness and
predictors of students’ cognitive and affective outcomes (Walberg & Anderson,
1968). Because these perceptions might in turn impact upon their attitudes and
achievement, the field of learning environments indirectly involves learning
outcomes, even though the real focus is the student’s perception of the classroom
environment.

This section reviews literature concerning various aspects of the field of learning
environments. First, Section 2.2.1 provides the historical background of the
development of the field. Next, instruments used to assess the learning environment
are explored in Section 2.2.2. Finally, Section 2.2.3 reviews how learning
environment scales have been applied in research in classrooms.

2.2.1 History and Development of Learning Environments Research

The field of learning environments has foundations that date back to Lewin’s (1936)
seminal study in a business setting that led to the formula, Behavior = f(Person,
Environment), in which behavior is defined as a function of the person and the
environment; this idea was applied to human behavior in any setting. His work was
followed by Murray (1938) who advocated a needs–press model in which personal
needs are either supported or frustrated by the environmental press. In line with this
model, Murray also coined the terms ‘alpha press’, referring to the perspective of an
objective observer, and ‘beta press’, which is the perspective of the participant of the
environment. Furthermore, Stern, Stein, and Bloom (1956) delineated between the
individual’s perception of the environment (private beta press) and the shared
group’s perception of the environment (consensual beta press), a distinction
important to researchers when deciding upon the perception scores of the individual,
the group, or an external observer. Work in learning environments was furthered by
Stern (1970) who expanded upon the notion of person–environment fit.
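
The relation is compact enough to set as an equation; below is a minimal LaTeX rendering of the formula as stated above, with B, P, and E abbreviating Behavior, Person, and Environment (the abbreviations, not the relation itself, are this writer's shorthand):

```latex
% Lewin's (1936) relation: behavior as a function of person and environment
\begin{equation}
  B = f(P, E)
\end{equation}
```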

Soon, research on environmental influences was extended to educational settings.
The founding studies in America that involved classroom environment assessments
began simultaneously with Walberg’s (1968) evaluation of the Harvard Physics
Project, resulting in the development of the Learning Environment Inventory (LEI),
and Moos’ (1974) study which used social climate scales that were initially applied
in evaluating psychiatric programs but later were adapted for use in classrooms with
the creation of the Classroom Environment Scale (CES). The development of both
of these widely-used instruments was based upon the Getzels and Thelen (1960)
model that learning outcomes are a result of the interaction of personality needs, role
expectations, and classroom climate. This founding work was significant to the
growth of the field of learning environments as reviewed in numerous books (Fisher
& Khine, 2006; Goh & Khine, 2002; Khine & Fisher, 2003) and book chapters
(Fraser, 1998a, 2007, 2012).

Within a decade, the pioneering research on learning environments that began in the
US became international. Wubbels and Levy (1993) in the Netherlands
developed the Questionnaire on Teacher Interaction (QTI) to assess student–teacher
interactions. This work with the QTI was furthered by others in countries such as
Brunei Darussalam (Scott & Fisher, 2004), Singapore (Quek, Wong, & Fraser,
2005), Korea (Lee, Fraser, & Fisher, 2003), and Indonesia (Fraser, Aldridge, &
Soerjaningsih, 2010). Barry Fraser and his colleagues established Australia as a
center of research for learning environments and initially constructed the
Individualized Classroom Environment Questionnaire (ICEQ), which differed from
previous questionnaires, which assessed teacher-centered classrooms, by focusing on
classrooms that were more student-centered (Fraser & Butts, 1982).
involved in the development and cross-validation of numerous other instruments
applied to learning environments around the world as described in this and the next
section.

The field was further established by the creation of specific research groups,
journals, and books devoted to learning environments, in addition to the
accumulation of studies conducted by individual researchers. In the mid-1980s, the
American Educational Research Association formed a Special Interest Group (SIG)
on Learning Environments. The launch of the Learning Environments Research: An
International Journal (Fraser, 1998a) carried the field of learning environments to
the next echelon in its rich history and development spanning the last few decades.
As well, a new book series, Advances in Learning Environments Research (Aldridge
& Fraser, 2008), has emerged to cater for topics in greater depth and breadth
than is allowed in journals.

Research in the burgeoning field of learning environments still continues and new
instruments to assess the student’s perspective are currently being conceived at the
same time that scales from historically-significant questionnaires are still being
adapted to new circumstances. While designing studies that evaluate classroom
environments, researchers must select the appropriate instrument that best fits the
scope of the intended study, while also taking care to choose the appropriate unit of
analysis (e.g. the student, the class, the teacher) for scores from the questionnaire
responses to ensure statistically-accurate results (Dorman, 2012). A review of the
historically-significant instruments to assess learning environments ensues, with a
focus on the relevant questionnaires from which scales were selected and adapted
for my study.
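
To make the unit-of-analysis decision concrete, the sketch below shows how the same questionnaire responses yield different data sets depending on whether the individual student or the class is taken as the unit; it is a minimal illustration in Python, and the class labels and scale scores are hypothetical rather than drawn from any study cited here.

```python
import pandas as pd

# Hypothetical scale scores: one row per student, tagged with a class ID.
scores = pd.DataFrame({
    "class_id": ["A", "A", "A", "B", "B", "B"],
    "teacher_support": [4.0, 3.5, 4.5, 2.5, 3.0, 2.0],
})

# Student as the unit of analysis: every response enters the analysis directly.
student_level = scores["teacher_support"]

# Class as the unit of analysis: aggregate to one mean score per class first.
class_level = scores.groupby("class_id")["teacher_support"].mean()
print(class_level)  # class A mean = 4.0, class B mean = 2.5
```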

2.2.2 Instruments for Assessing the Learning Environment

In his review of classroom environment instruments, Fraser declares: “Few fields of
educational research have such a rich diversity of valid, economical, and widely-
applicable assessment instruments as does the field of learning environments”
(Fraser, 1998a, p. 7). This section reviews this array of learning environment
instruments after first noting some general issues regarding the structure of these
questionnaires. Following this introduction, an overview of the questionnaires used
to assess learning environments is presented in Sections 2.2.2.1–2.2.2.11, and a
focus on the questionnaires from which scales were adapted for this study is found
in Sections 2.2.2.7 and 2.2.2.9.

Debates abound regarding the most appropriate method to evaluate a classroom
environment. Should data be collected quantitatively through the use of
questionnaires that assess students’ perceptions, or should data be qualitative in
nature and involve an external researcher observing the natural climate of the
classroom and/or interviewing students?

There are several advantages in the use of quantitative questionnaires to collect data.
In general, gathering data through the administration of questionnaires provides a
snapshot of the classroom environment (Fraser, 1998a, 1998b). The nature of these
quantitative instruments allows for data collection from several large groups at one
time and for comparisons to be made across these groups and between subgroups
(Fraser, Fisher, & McRobbie, 1996); it is therefore an efficient method for gathering
a large data set in a short amount of time, in contrast to the amount of time required
to collect, record, transcribe, and organize qualitative data. This is particularly
relevant to classrooms where research-based improvements need to be implemented
swiftly before the environment changes. Additionally, questionnaires enable an
examination of multiple aspects of a learning environment to be assessed at a single
time (Fraser, 1998a, 1998b), as opposed to the limited field of view of an external
observer. Fraser (2012) also notes that gleaning perspectives from the participants
in the environment, namely, the students and teachers, can capture information
which an external observer can miss or consider insignificant. Naturally, gathering
data through quantitative measures introduces less bias than a researcher observing
the classroom environment or interviewing students himself or herself (Anderson &
Arsenault, 1998). Finally, in comparison with the effort required to train an external
agent in observation or interviewing techniques, teachers who administer
quantitative surveys do not require specialized training, ensuring greater efficiency
in data collection (Fraser, 1998a, 1998b).

Needless to say, while quantitative data collection through the use of surveys allows
all of the aforementioned benefits in the research process, it also lacks the ability to
grasp the nuances in students’ perceptions of the environment. In particular, the
researcher could be unable to understand the rationale behind students’ perceptions
and lack the information necessary to explain anomalies in the data (Duit &
Confrey, 1996). For this reason, while quantitative data collection via
questionnaires dominated the field of learning environments in the past, the method
of triangulation, in which quantitative and qualitative approaches to data collection
are productively combined, characterizes the field today (Aldridge, Fraser, &
Huang, 1999; Fraser & Tobin, 1991; Mathison, 1988; Tobin & Fraser, 1998).

Some brief explanations are in order regarding the general structure of such
questionnaires. The scales within a questionnaire (e.g. Student Cohesiveness and
Independence) are the dimensions by which the learning environment can be
quantitatively measured. The scales comprise specific items that address the
particularities of that dimension; for example, an item under Student Cohesiveness
might ask respondents to indicate their agreement with the statement “I know other
students in this class”. Most questionnaires contain a Likert or frequency scale
where responses range from ‘strongly disagree’ to ‘strongly agree’ or from ‘almost
never’ to ‘almost always’, respectively.
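
As a concrete illustration of this structure, the sketch below codes one respondent’s answers on a five-point response format as the integers 1 (‘strongly disagree’ or ‘almost never’) through 5 (‘strongly agree’ or ‘almost always’) and averages the items of a single scale into a scale score; apart from the first item, which is quoted in the paragraph above, the item wordings and responses are hypothetical.

```python
# Items of a hypothetical Student Cohesiveness scale and one respondent's
# answers, coded 1 (strongly disagree) .. 5 (strongly agree).
items = [
    "I know other students in this class.",
    "I am friendly to members of this class.",
    "Members of this class are my friends.",
]
responses = [4, 5, 3]

# A scale score is conventionally the mean (or the sum) of its item responses.
scale_score = sum(responses) / len(responses)
print(f"Student Cohesiveness score: {scale_score:.2f}")  # 4.00
```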

According to Moos’ (1974) scheme, there are three general dimensions that
characterize all human environments. Scales within any specific instrument can be
classified under the relationship dimension (i.e. the strength and type of personal
relationships within the environments, and the extent to which people are involved
in the environment and support one another), the personal development dimension
(i.e. the extent to which self-reflection and personal growth occur), or system
maintenance and change dimensions (i.e. the extent to which the environment is
orderly, clear in expectations, maintains control, and is responsive to change)
(Moos, 1974).

Table 2.1 Overview of Scales used in some Learning Environment Instruments (CUCEI,
MCI, QTI, SLEI, CLES, WIHIC, and COLES), with scales classified according to
Moos’ dimensions

College and University Classroom Environment Inventory (CUCEI)
Level: Higher education. Items per scale: 7.
Relationship: Personalisation, Involvement, Student Cohesiveness, Satisfaction.
Personal development: Task Orientation.
System maintenance and change: Innovation, Individualisation.

My Class Inventory (MCI)
Level: Elementary. Items per scale: 6–9.
Relationship: Cohesiveness, Friction, Satisfaction.
Personal development: Difficulty, Competitiveness.

Questionnaire on Teacher Interaction (QTI)
Level: Secondary/Primary. Items per scale: 8–10.
Relationship: Leadership, Helpful/Friendly, Understanding, Student Responsibility
and Freedom, Uncertain, Dissatisfied, Admonishing, Strict.

Science Laboratory Environment Inventory (SLEI)
Level: Upper secondary/Higher education. Items per scale: 7.
Relationship: Student Cohesiveness.
Personal development: Open-Endedness, Integration.
System maintenance and change: Rule Clarity, Material Environment.

Constructivist Learning Environment Survey (CLES)
Level: Secondary. Items per scale: 7.
Relationship: Personal Relevance, Uncertainty.
Personal development: Critical Voice, Shared Control.
System maintenance and change: Student Negotiation.

What Is Happening In this Class? (WIHIC)
Level: Secondary. Items per scale: 8.
Relationship: Student Cohesiveness, Teacher Support, Involvement.
Personal development: Investigation, Task Orientation, Cooperation.
System maintenance and change: Equity.

Constructivist-Oriented Learning Environment Survey (COLES)
Level: Secondary. Items per scale: 11.
Relationship: Student Cohesiveness, Teacher Support, Involvement, Young Adult
Ethos, Personal Relevance.
Personal development: Task Orientation, Cooperation.
System maintenance and change: Equity, Differentiation, Formative Assessment,
Assessment Criteria.

Adapted from Fraser (2012)

Table 2.1 displays important questionnaires used in the learning environments field,
and categorizes the scales of these questionnaires according to Moos’ three
dimensions. More dated questionnaires, as well as less commonly used
questionnaires, are not included in this table. Additionally, the TROFLEI is omitted
because it is described in a separate table (see Table 2.2).

Issues to consider in the design and administration of such questionnaires include
the convenience of the survey in terms of its length, low reading level, and absence
of negative wording, which could confuse respondents and invalidate the results.
Many of the historically-significant questionnaires are quite lengthy, containing
around 100 items and potentially creating fatigue for respondents. Fraser
(1982) reduced the number of items in several instruments while still maintaining
the instruments’ reliability, thereby creating a short form. Consequently, most of the
more contemporary classroom environment questionnaires are relatively short and
have scales containing 6–8 items.

As these instruments were developed, numerous different versions emerged. Murray
(1938) distinguished between beta press (subjective observation of a participant) and
alpha press (objective observation by a detached observer) in advocating the
consideration of teachers’ and students’ perspectives about their own educational
processes. To accommodate further discrepancies in perceptions, many instruments
include a personal form as opposed to a whole-class form (Fraser, Fisher, &
McRobbie, 1996) so that, instead of generalized statements such as “Students learn
from each other in this class”, students are first asked to consider a more relevant
statement based on their personal experiences, such as “I learn from other students
in this class”. The first such variation of this form was tested using the Science
Laboratory Environment Inventory (see Section 2.2.2.7) for which item and factor
analyses confirmed that the personal form had a similar factor structure and
comparable statistical characteristics (e.g. internal consistency, discriminant
validity) to the class form when either the individual student or the class mean was
used as the unit of analysis. This study also revealed that students might have a
more detached and often more positive view of the environment when perceiving it
as a whole class rather than as an individual. According to the study, gender
differences in perceptions of the environment were somewhat larger on the personal
form than on the class form (Fraser, Giddings, & McRobbie, 1995). Therefore, the
major advantage of the personal form is the increased sensitivity of the perceptions
of subgroups (e.g. gender) within the classroom, in contrast to the traditional class
form to which students could respond inconsistently. For instance, when asked
about whether the work in the class is difficult, some students might consider
whether the whole class thinks that the work is difficult, while others perhaps
perceive that certain students think that the work is difficult, and still others reflect
on whether the work is difficult for themselves. In this confusion, it would be
difficult to extract the perspectives of subgroups. This distinction between personal
and class forms accommodates the distinction between ‘private’ beta press and
‘consensual’ beta press (Section 2.3.1).

To broaden understanding of a classroom environment from different perspectives,
some forms are for students and others are for teachers, and yet others are for
administrators; some forms are intended for a classroom setting and some for a
whole-school setting (Fraser & Rentoul, 1982). There are even forms to distinguish
between the ‘actual’ and the ‘preferred’ environment because students’ perceptions
of what actually occurs can differ from their perceptions of what they would have
liked to occur in their classrooms; the wording for these actual and preferred forms
differs somewhat (Fraser, 1998a). Often these actual and preferred forms are
utilized to evaluate programs in terms of bridging the gap between what is actually
occurring and what students would prefer. These distinctions between forms of
questionnaires are considered further when reviewing instruments in Sections
2.2.2.1–2.2.2.11 and learning environment studies in Section 2.2.3.

The following sections review the learning environment instruments that have been
developed in the field (including the instruments from which scales have been
adapted for this study): the Learning Environment Inventory (LEI) and My Class
Inventory (MCI) (Section 2.2.2.1), the Classroom Environment Scales (CES)
(Section 2.2.2.2), the Individualized Classroom Environment Questionnaire (ICEQ)
(Section 2.2.2.3), the College and University Classroom Environment Inventory
(CUCEI) (Section 2.2.2.4), the Questionnaire on Teacher Interaction (QTI) (Section
2.2.2.5), the Constructivist Learning Environment Survey (CLES) (Section 2.2.2.6),
the Science Laboratory Environment Inventory (SLEI) (Section 2.2.2.7), the What Is
Happening In this Class? (WIHIC) survey (Section 2.2.2.8), the Technology-Rich
Outcomes-Focused Learning Environment Inventory (TROFLEI) (Section 2.2.2.9),
the Constructivist-Orientated Learning Environment Survey (COLES) (Section
2.2.2.10), and a few other questionnaires (Section 2.2.2.11). The summaries in these
sections provide a more detailed description of the learning environment
questionnaires outlined in Table 2.1. Each instrument’s synopsis below includes
information about the number of scales and items within each scale, the age level for
which the questionnaire is designed, past studies that have validated the
questionnaire, and the fit or lack thereof with the current study.

2.2.2.1 Learning Environment Inventory (LEI) and My Class Inventory (MCI)

As part of an evaluation of the Harvard Physics Project in the late 1960s, Walberg
formulated the Learning Environment Inventory (LEI), which was widely used in
the United States for secondary classrooms (Walberg & Anderson, 1968). The LEI
includes 15 scales (Cohesiveness, Friction, Favoritism, Cliqueness, Satisfaction,
Apathy, Speed, Difficulty, Competitiveness, Diversity, Formality, Material
Environment, Goal Direction, Disorganization, Democracy) each containing 7 items
that are responded to in four gradations of agreement, with some reverse-scored
items. Because this instrument is geared towards a teacher-centered style of
classroom and is quite lengthy, it is better suited to traditional educational
environments at the secondary level. Furthermore, its length and complexity made
it unsuitable for the students involved in this study.
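
Reverse-scored items, such as some of the LEI’s, are worded so that agreement indicates less of the construct being measured, so their coded responses must be flipped before scale scores are computed; below is a minimal sketch of the usual arithmetic (on a scale coded 1 to k, the reversed value is k + 1 − x), here applied to hypothetical responses on the LEI’s four-point agreement format.

```python
def reverse_score(response: int, points: int = 4) -> int:
    """Flip a response coded 1..points (on a 4-point scale, 4 -> 1, 3 -> 2)."""
    return points + 1 - response

# Hypothetical responses to a reverse-scored item on a 4-point scale.
raw = [1, 4, 2, 3]
print([reverse_score(x) for x in raw])  # [4, 1, 3, 2]
```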

Later, the LEI was adapted for use with younger students (ages 8–12 years): its
item wording was simplified, its scales were trimmed to five (Cohesiveness,
Friction, Satisfaction, Difficulty, Competitiveness) containing 25 items in the
short form, and responses were reduced to a Yes–No format; this instrument
became known as the My Class Inventory (MCI) (Fisher & Fraser, 1981; Fraser,
Anderson, & Walberg, 1982). Swee Chiew Goh and Barry Fraser (1998) expanded
the MCI’s response option to include a three-point frequency scale (Seldom,
Sometimes and Most of the Time) and included a Task Orientation scale, and then
used the revised MCI in research in Singapore among primary mathematics
students. The MCI has also been successfully employed in Brunei Darussalam with
just three scales (Cohesiveness, Difficulty and Competition) to reveal sex
differences in students’ perceptions (Majeed, Fraser, & Aldridge, 2002). In the US,
the MCI has been used in Florida to evaluate a K–5 mathematics program that
integrates children’s literature called Project SMILE (Science and Mathematics
Integrated with Literature Experiences) (Mink & Fraser, 2005), in Texas to evaluate
the use of science kits in primary school (Houston, Fraser, & Ledbetter, 2008), and
in Washington as an accountability tool for elementary-school counselors (Sink &
Spencer, 2005). Because this instrument is geared towards primary school students
and the response options are limited, it was not considered relevant for this study.

2.2.2.2 Classroom Environment Scales (CES)

Moos (1974) developed the Classroom Environment Scales (CES) after evaluating
and researching diverse human environments such as psychiatric hospitals, prisons,
universities, and work settings in the US. The CES includes 9 scales (Involvement,
Affiliation, Teacher Support, Task Orientation, Competition, Order and
Organization, Rule Clarity, Teacher Control, Innovation) each containing 10 items
answered in a True–False response format (Trickett & Moos, 1973). While some of
the CES’s scales were used in the current study, as they have been integrated into
more contemporary questionnaires (i.e. Teacher Support and Task Orientation), the
instrument in its entirety was not appropriate for use in my study because it is geared
towards a teacher-centered setting, is lengthy and complex, and its response
format is limited.

2.2.2.3 Individualized Classroom Environment Questionnaire (ICEQ)

In 1979, the Individualized Classroom Environment Questionnaire (ICEQ) was
created to assess secondary individualized classrooms in Australia that differed from
traditional classrooms in their openness and focus on inquiry-based education
(Rentoul & Fraser, 1979). The final version (Fraser, 1990) includes 10 items for
each of five scales (Personalization, Participation, Independence, Investigation,
Differentiation) with a 5-point frequency response scale ranging from Almost Never
to Very Often. Many items are reverse scored. Because of its reverse scoring and
the fact that its factorial validity was never properly established (McKavanagh &
Stevenson, 1992), the application of this instrument to the current study would have
presented challenges, although the Investigation scale was used in the form in
which it appears in more recent questionnaires.

2.2.2.4 College and University Classroom Environment Inventory (CUCEI)

A similar questionnaire, called the College and University Classroom Environment
Inventory (CUCEI), was developed for small-sized university classrooms. The
CUCEI contains seven items in each of seven scales (Personalization, Involvement,
Student Cohesiveness, Satisfaction, Task Orientation, Innovation, Individualization).
The response style is a 4–point Likert scale of agreement and approximately half of
the items are reverse scored (Fraser & Treagust, 1986). The CUCEI has been used
to evaluate an alternative high school classroom in order to determine the presence
of more student-centered features such as involvement, satisfaction, innovation and
individualization (Fraser & Tobin, 1987) and in computing classrooms in New
Zealand, where its psychometric performance had limitations (Logan, Crump, &
Rennie, 2006). Therefore, the CUCEI was considered of limited use for this study.

2.2.2.5 Questionnaire on Teacher Interaction (QTI)

Another aspect of learning environments is the interpersonal relationship between
teachers and students, which inspired the creation of the Questionnaire on Teacher
Interaction (QTI) in the Netherlands for senior high school students (Wubbels,
Brekelmans, & Hooymayers, 1991), as noted above. This survey assesses eight
aspects of behavior drawing upon a theoretical model that considers proximity
(cooperation–opposition) and influence (dominance–submission) between teachers
and students. Each scale contains 8–10 items and responses are on a five-point
frequency scale. The QTI has been cross-validated in many other countries and
languages including the USA (Wubbels & Levy, 1993), Australia (Fisher,
Henderson, & Fraser, 1995), Brunei Darussalam (Scott & Fisher, 2004), Singapore
(Goh & Fraser, 1996; Quek, Wong, & Fraser, 2005), Korea (Kim, Fisher, & Fraser,
2000; Lee, Fraser, & Fisher, 2003), and Indonesia (Fraser, Aldridge, &
Soerjaningsih, 2010). It has been adapted to relationships between principals and
teachers in the Principal Interaction Questionnaire (PIQ) (Fisher & Cresswell,
1998). These important interactions between teacher and student have led to the
inclusion in other learning environment instruments of scales such as Teacher
Support. While Teacher Support is a relevant dimension to assess in the current
study, a more economical version of this scale was adopted from another, more
current questionnaire (see Section 2.2.2.9).

2.2.2.6 Constructivist Learning Environment Survey (CLES)

A growing trend since the early part of this century has been the constructivist
learning theory, which postulates that learning is a proactive, cognitive process in
which the learner makes sense of the world in relation to prior constructed
knowledge through negotiation and consensus building. To assess the degree to
which constructivist epistemology is reflected in the learning environment,
including the teachers’ epistemological assumptions and the students’ awareness of
the invisible forces that affect their thinking, the Constructivist Learning
Environment Survey (CLES) was developed (Taylor, Fraser, & Fisher, 1997).

Large-scale quantitative and qualitative studies were conducted to validate the
CLES with over 2,000 students in US and Australian classes (Taylor, Fraser, &
Fisher, 1997). Sound validity was also reported from a cross-national study of
junior high-school science classroom learning environments, which involved
administering the English version of the CLES to 1,081 students in Australia, and
administering the Mandarin version of the CLES to 1,879 students in Taiwan. This
study also revealed that Australian classes were perceived as being more
constructivist than Taiwanese classes (Aldridge, Fraser, Taylor et al., 2000). The
CLES was further cross-validated by administering it to 1,864 students in South
Africa. The focus this study was action research for South African teachers to
become more reflective practitioners in their classrooms, with some improvements
in the constructivist orientation of classrooms being noted (Aldridge, Fraser, &
Sebela, 2004).

The validated CLES has been used in the evaluation of educational innovations (see
Section 2.2.3.2 for further detail). For instance, data from the CLES revealed the
success of novel teaching strategies in middle-school mathematics classrooms
(Ogbuehi & Fraser, 2007) and of a new mathematics program called the Class
Banking System (Spinner & Fraser, 2005). Additionally, when a teacher
professional development program based on the Integrated Science Learning
Environment (ISLE) was evaluated using the CLES, the results showed that
changing teachers’ learning environment at the university level enhanced their
students’ middle-school classroom environments (Nix, Fraser, & Ledbetter, 2005).

As well, smaller-scale studies tested the instrument in various countries and
languages. In the US, the CLES was validated numerous times (Beck, Czerniak, &
Lumpe, 2000; Cannon, 1995; Harwell, Gunter, Montgomery et al., 2001). The
CLES has been successfully used in Mandarin in Taiwan (Aldridge, Fraser, &
Fisher, 2000), in Spanish in Miami (Peiro & Fraser, 2009), in Korean in the US
(Cho, Yager, Park et al., 1997) and Korea (Oh & Yager, 2004), and in English in
South Africa (Aldridge, Fraser, & Sebela, 2004). In 2004, a shortened form of the
CLES was shown to be equally valid and reliable as the long form (Johnson &
McClure, 2004); Nix and Fraser (2011) used this short form in the US
with 845 students and reported strong support for its validity.

The final version of the CLES was a revision of the original version (Taylor &
Fraser, 1991) that focused on students as co-constructors of knowledge but ignored
the cultural context of the classroom environment. It contains 7 items per scale
(Personal Relevance, Uncertainty, Critical Voice, Shared Control, Student
Negotiation) with responses on a 5-point frequency scale. Its advantages include an
organizational arrangement of items in blocks for the respondent and minimal use of
negative wording. While constructivism is a desirable dimension featured in virtual
laboratories, none of the CLES scales seemed relevant for assessing their
implementation and so this instrument was of limited use for this study.

2.2.2.7 Science Laboratory Environment Inventory (SLEI)

The Science Laboratory Environment Inventory (SLEI) was designed specifically to
assess the unique role of the laboratory in a high school or university science class,
which is also an important factor in the psychosocial makeup of the learning
environment. In particular, this instrument can be used to address effectiveness of
science laboratory classes and whether the associated costs are justified (Fraser,
Giddings, & McRobbie, 1992). In developing the SLEI, relevant literature was
reviewed to identify dimensions important in the unique environment of a science
laboratory class and this was compared to dimensions in existing instruments. In
addition, students and teachers were interviewed to provide comments to guide
revisions to the survey during the various stages. Furthermore, student data
collected using the SLEI were subjected to item and factor analysis, which resulted
in the final version containing 7 items per scale (Student Cohesiveness, Open-
Endedness, Integration, Rule Clarity, Material Environment) with responses on a 5-
point frequency scale (Newby & Fisher, 1997).

Advantages of this instrument include its economical administration (in that it is
short) and easy hand scoring, its cyclic design, and the availability of the personal
and class versions and the actual and preferred forms, which were all shown to be
equally valid and reliable (Fraser, Giddings, & McRobbie, 1992). One shortcoming
of this instrument is that it contains some reverse items in the original version,
although wording can be easily modified to include only positive statements.

A sample of 5,447 students in 269 classes in the USA, Canada, England, Israel,
Australia, and Nigeria was used to field test and validate the SLEI (Fraser &
McRobbie, 1995). Simultaneous testing revealed consistent scores on internal
consistency reliability and discriminant validity when used with 1,594 students in 92
classes (Fraser, Giddings, & McRobbie, 1995), as well as predictive validity when
used along with attitude scales to predict the effect on student outcomes (Fraser,
Giddings, & McRobbie, 1992). Further validation was accomplished through a
study of 489 senior high-school biology students in Australia by Fisher, Henderson
and Fraser (1997).
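
The internal consistency reliability reported in such validation studies is conventionally indexed by Cronbach’s alpha coefficient; the sketch below computes alpha from a respondents-by-items matrix using the standard formula, alpha = k/(k − 1) × (1 − sum of item variances / variance of total scores). The response data are hypothetical, not drawn from the SLEI studies cited above.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of responses."""
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of six students to a 4-item, 5-point scale.
responses = np.array([[4, 5, 4, 4],
                      [2, 2, 3, 2],
                      [5, 5, 5, 4],
                      [3, 3, 2, 3],
                      [4, 4, 4, 5],
                      [1, 2, 1, 2]])
print(round(cronbach_alpha(responses), 2))  # approximately 0.96
```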

The SLEI was also translated into Korean for use in a study of differences between
the classroom environments of three streams (science-independent, science-oriented
and humanities), consisting of 439 high-school students in total. This version of the
SLEI exhibited sound factorial validity and internal consistency reliability, and was
able to differentiate between the perceptions of students in different classes.
Generally students in the science-independent stream perceived their laboratory
classroom environments more positively than did students in either of the other two
streams (Fraser & Lee, 2009).

To illustrate its application in the evaluation of educational innovations, the SLEI, or
adaptations, was employed in assessing an innovative science course for prospective
elementary teachers (Martin-Dunlop & Fraser, 2007) and the effect of
anthropometric activities on a classroom learning environment (Lightburn & Fraser,
2007). Each of these studies is explored in detail in Section 2.2.3.2.

The SLEI has also been adapted to more specific environments, such as the
Chemistry Laboratory Environment Inventory (CLEI), which was found to be valid
when used in Singapore to uncover associations between the learning environment
and attitudes (Wong & Fraser, 1996) and to assess the differences in chemistry
laboratory environments between streams (gifted versus non-gifted) and sexes
(Quek, Wong, & Fraser, 2005).

At around the same time, adaptations were made to the SLEI for use for courses in
which computing technology is a fundamental tool. The Computer Laboratory
Environment Inventory (CLEI) was developed to assess the learning environment of
a computer laboratory in higher education and was tested with 80 college-level
students (Newby & Fisher, 1997). The survey contains 5 scales (Student
Cohesiveness, Open-Endedness, Integration, Technology Adequacy, Material
Environment) and responses are on a 5-point frequency scale.

As a whole, the SLEI, with its focus on laboratory classroom environments, seemed
to be an appropriate instrument for use in the current study of the effectiveness of an
alternative laboratory. However, most of the scales are geared towards hands-on
experimentation in a whole-class setting involving social aspects of the classroom
(i.e. Student Cohesiveness, Open-Endedness, Rule Clarity) and therefore are
irrelevant to the setting of the study, which focused on the individual student.
Therefore, only the scales of Integration and Material Environment were borrowed
from the SLEI for use in the current study’s instrument because they pertain to
various aspects of virtual laboratories.

The personal form of the SLEI was more appropriate for this study to ensure that
students provided their own perspectives rather than their perspectives of the whole
class. Also, the actual version of the SLEI scales was applicable to the
circumstances because changing the environment in light of student preferences was
not part of my study. For use among high-school science students, the language was
modified to include only positively-worded items to avoid confusion. Also, an item
was added to each SLEI scale to maintain consistency in the number of items in
other scales in this study’s instrument.

2.2.2.8 What Is Happening In this Class? (WIHIC)

The What Is Happening In this Class? (WIHIC) questionnaire is important in this
literature review because it is the most frequently-used classroom environment
instrument around the world today and because it formed the foundations for the
development of the Technology-Rich Outcomes-Focused Learning Environment
Inventory (TROFLEI), which was selected for use in my study.

The original What Is Happening In this Class? (WIHIC) was developed by Fraser,
McRobbie, and Fisher (1996) to combine previous questionnaires and incorporate
contemporary educational concerns such as equity and constructivism. It was found
to be reliable and valid when tested with a sample of 50 high school classes each in
Australia and Taiwan (in Chinese). Interestingly, even though Australian students
viewed their learning environments more favorably, Taiwanese students had more
positive attitudes towards science. Many of the studies employing the WIHIC
investigated associations between perceptions of the learning environment and
attitudes towards learning; for a more comprehensive description of these studies,
see Fraser’s review of Classroom Learning Environments (Fraser, 2012).

Originally consisting of 90 items in nine scales, the WIHIC was field tested with
355 middle school students. Factor analysis and interviews resulted in a revised
form containing 56 items in 7 scales (Student Cohesiveness, Teacher Support,
Involvement, Investigation, Task Orientation, Cooperation, Equity) with a 5-point
frequency scale (Aldridge, Fraser, & Huang, 1999). In addition to its wide use and
validity, the WIHIC’s items are organized in blocks, there are no reverse-scored
items (to minimize confusion), and there is a personal and class form available to
accommodate differences in individualized perceptions of the classroom (Aldridge,
Fraser, & Huang, 1999; Fraser, 1998a). This instrument has been extensively
applied to various subject areas, age levels, and countries, and is available in many
languages, as described below.

In a second round of field testing of the WIHIC with 1,081 students in Australia and
1,879 students in Taiwan using a Chinese version, Aldridge and colleagues reported
strong factorial validity and internal consistency reliability and that each scale was
capable of differentiating significantly between the perceptions of students in
different classrooms (Aldridge, Fraser, & Fisher, 2000). In fact, these sound
psychometric qualities have been replicated in every study using the WIHIC.

A comprehensive validation was conducted by Dorman using a cross-national
sample of 3,980 high-school students from Australia, the UK, and Canada. The use
of multi-sample analyses within structural equation modeling for the three grouping
variables of country, grade level, and student sex supported “the wide international
applicability of the WIHIC as a valid measure of classroom psychosocial
environment” (Dorman, 2003, p. 231). Another such study validated both the actual
and preferred forms of the WIHIC using multi-trait–multi-method modeling, with
the seven scales as traits and the two forms of the instrument as methods; this study
involved 978 secondary-school students from Australia (Dorman, 2008).

The WIHIC has been translated into Mandarin for use in Taiwan (Aldridge, Fraser,
& Fisher, 2000; Aldridge, Fraser, & Huang, 1999), Indonesian for use in Indonesia
(Fraser, Aldridge, & Adolphe, 2010; Wahyudi & Treagust, 2004), Korean for use in
Korea (Kim, Fisher, & Fraser, 2000), Arabic for use in the UAE (Afari, Aldridge,
Fraser et al., in press; MacLeod & Fraser, 2010), and Spanish for use in Miami in
the US (Allen & Fraser, 2007; Helding & Fraser, in press; Robinson & Fraser, in
press). Additionally, countries where the instrument has been validated in English,
besides the aforementioned studies in Australia, include the US (Allen & Fraser,
2007; den Brok, 2006; Helding & Fraser, in press; Martin-Dunlop & Fraser, 2007;
Ogbuehi & Fraser, 2007; Pickett & Fraser, 2009; Robinson & Fraser, in press; Wolf
& Fraser, 2008), Canada (Zandvliet & Fraser, 2004, 2005), Singapore (Chionh &
Fraser, 2009; Khoo & Fraser, 2008), India (Koul & Fisher, 2005), South Africa
(Aldridge, Fraser, & Ntuli, 2009), and Turkey (den Brok, Telli, Cakiroglu et al.,
2010).

Because of its robustness, the WIHIC’s scales have been adapted for use with other
instruments in particular environments in many different areas of research. For
instance, 2,638 grade 8 science students from 50 schools in the Limpopo Province of
South Africa were used as a sample to develop and validate a classroom
environment instrument in the Sepedi language for monitoring the implementation
of outcomes-based classroom environments. The Outcomes-Based Learning
Environment Questionnaire (OBLEQ) contains four scales from the WIHIC, in
addition to three other scales (Aldridge, Laugksch, Seopa et al., 2006).

Greek versions of two scales of the WIHIC, namely, Involvement and Teacher
Support, were incorporated into a new questionnaire entitled How Chemistry Class
is Working (HCCW), which was validated with over 1,600 students in Greece and
Cyprus. A more positive classroom environment was perceived among Cypriot
students than among Greek students (Giallousi, Gialamas, Spyrellis et al., 2010).

The WIHIC’s use (either in its entirety or in adaptations) in the evaluation of
educational innovations is illustrated in the following studies, all of which are
expanded upon in Section 2.2.3.2: inquiry laboratory teaching in middle schools
(Wolf & Fraser, 2008); a new science course for prospective elementary school
teachers (Martin-Dunlop & Fraser, 2007); innovative teaching strategies in middle-
school mathematics classes (Ogbuehi & Fraser, 2007); computer-networked high
school classrooms in Australia and Canada (Zandvliet & Fraser, 2005); the physical
and psychosocial environments in internet classrooms in Canada (Zandvliet &
Buker, 2003); and laptop use in science and mathematics classes in Canada
(Raaflaub & Fraser, 2002).

Owing to its outstanding validity, widespread use, and ease of administration
amongst students, the WIHIC would have been a sound choice for use in the current
study. However, because the more-recent TROFLEI builds on the WIHIC and is
more relevant to the technological aspects of this study, the TROFLEI provided a
better choice for my study.

2.2.2.9 Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI)

Outcomes-focused education has been espoused by educational researchers and
adopted in many countries as an approach to educational reform in which planning,
delivery, and assessment all focus on the student outcomes that result from teaching
rather than on content (Fraser, 2012); this approach is also often referred to as
‘backward planning’ (Wiggins & McTighe, 2005). Appropriate instruments are
necessary in order to evaluate this approach to education. As well, the integration of
technology into education is a contemporary dimension of classroom environments.
This reflects the view that the classroom environment is dynamic rather than static
and that instrumentation to evaluate new dimensions needs to be continually
devised. Rather than using a generalized instrument ‘off the shelf’, it is now
common to validate context-specific instruments when conducting classroom
environment research (Dorman, Aldridge, & Fraser, 2006).

To this end, the Technology-Rich Outcomes-Focused Learning Environment
Inventory (TROFLEI) was designed as an innovative, modern instrument by
Aldridge and Fraser (2003) in Australia to meet these two growing trends of
research. It draws upon the WIHIC and incorporates all of its seven scales (Student
Cohesiveness, Teacher Support, Involvement, Investigation, Task Orientation,
Cooperation, Equity), but includes three additional dimensions (Differentiation,
Computer Usage, Young Adult Ethos) to permit investigation of learning
environments that are outcomes-based and technology-rich. Each of the 10 scales
consists of 8 items that are responded to using a five-point frequency scale (Almost
Never, Seldom, Sometimes, Often and Almost Always).

The TROFLEI has been applied across all learning areas using both the personal and
class forms and actual and preferred forms (Aldridge & Fraser, 2008). A unique
aspect of the TROFLEI is that it employs a side-by-side response format, which
enables students to provide their separate perceptions of actual and preferred
classroom environment in an economical way. To provide contextual cues and to
minimize confusion to students (Aldridge et al., 2000), TROFLEI items that belong
to the same scale are grouped together instead of arranging them randomly or
cyclically. Only positively-worded items are employed to ease students’
understanding of the statements, as indicated in past studies (Barnette, 2000).

The TROFLEI was originally validated in a study of 1,035 students in grades 10 and
11 at Seven Oaks Senior College in Western Australia (Aldridge & Fraser, 2003),
which has an emphasis on outcomes-focused education and the use of Information
Communication Technology (ICT). More extensive validation was carried out using
a larger sample of 2,317 students from 166 grade 11 and 12 classes from Western
Australia and Tasmania. During its first year of operation, the new school (Seven
Oaks Senior College) was subjected to formative and summative evaluation that
included use of the TROFLEI.
The study revealed strong factorial validity and internal consistency reliability for
both the actual and preferred forms of the TROFLEI. As well, the actual form of
each scale was capable of differentiating between the perceptions of students in
different classrooms. Results after four years supported the efficacy of the school’s
educational programs and offered insights regarding differences in the classroom
environment perceptions between males and females and between students enrolled
in university-entrance examinations and in wholly school-assessed subjects
(Aldridge & Fraser, 2008). Furthermore, Aldridge, Dorman and Fraser (2004) used
multi-trait-multi-method modeling with a sub-sample of 1,249 students, of whom
772 were from Western Australia and 477 were from Tasmania, to support the
TROFLEI’s construct validity and sound psychometric properties, including that the
actual and preferred forms share a common structure.

Employing structural equation modeling with a sample of 4,146 grade 8–13
students, Dorman and Fraser (2009) used the TROFLEI to establish associations
between students’ affective outcomes and their classroom environment perceptions.
With the same sample, the authors also applied cluster analysis to the TROFLEI
responses in order to identify five relatively homogeneous groups of classroom
environments: exemplary, safe and conservative, non-technological teacher-
centered, contested technological, and contested non-technological (Dorman,
Aldridge, & Fraser, 2006).

In addition to validation studies in Australia and Tasmania, the TROFLEI, in its
entirety, has been validated amongst secondary science students in a number of
other countries. For secondary science students in both India (Gupta & Koul, 2007)
and New Zealand (Koul, Fisher, & Shaw, 2011), the TROFLEI was shown to be a
valid questionnaire to assess a technology-rich learning environment. Females
perceived a more positive technology-rich learning environment than males,
confirming previous findings regarding females’ positive perceptions of the learning
environment. As well, associations were found for scales of the TROFLEI and three
affective outcomes scales (attitude to subject, attitude to computers, and academic
efficacy) (Koul, Fisher, & Shaw, 2011).

Table 2.2 Scale Description, Moos’ Dimension, and Sample Item for Each Technology-
Rich Outcomes-Focused Learning Environment Inventory (TROFLEI) Scale

Student Cohesiveness (Relationship): The extent to which students know, help, and
are supportive of one another. Sample item: “I am friendly to members of this class.”

Teacher Support (Relationship): The extent to which the teacher helps, befriends,
trusts, and is interested in students. Sample item: “The teacher takes an interest in me.”

Involvement (Relationship): The extent to which students have attentive interest,
participate in discussions, do additional work and enjoy the class. Sample item:
“I explain my ideas to other students.”

Task Orientation (Personal Development): The extent to which it is important to
complete activities planned and stay on the subject matter. Sample item: “I know
how much work I have to do.”

Investigation (Personal Development): The extent to which skills and processes of
enquiry and their use in problem solving and investigation are emphasized. Sample
item: “I carry out investigations to test my ideas.”

Cooperation (Personal Development): The extent to which students cooperate rather
than compete with one another on learning tasks. Sample item: “I share my books
and resources with other students when doing assignments.”

Equity (System Maintenance and Change): The extent to which students are treated
equally by the teacher. Sample item: “I get the same opportunity to answer
questions as other students.”

Differentiation (System Maintenance and Change): The extent to which teachers
cater for students differently on the basis of ability, rate of learning and interests.
Sample item: “I do work that is different from other students’ work.”

Computer Usage (System Maintenance and Change): The extent to which students
use their computers as a tool to communicate with others and to access information.
Sample item: “I use the computer to take part in online discussion with other students.”

Young Adult Ethos (Relationship): The extent to which teachers give students
responsibility and treat them as young adults. Sample item: “I am encouraged to
take control of my own learning.”

(Koul, Fisher, & Shaw, 2011)

Another validation study was conducted cross-culturally with 980 Turkish and 130
American high school science students in grades 9–12. This study revealed sound
psychometric properties of the TROFLEI for use with both populations (Welch,
Cakir, Peterson et al., 2012). The TROFLEI was also validated in Thailand
involving tertiary-level students in electronics laboratories (Promratrak & Malone,
2006).

Table 2.2, adapted from Koul, Fisher, and Shaw (2011), displays for each of the 10
scales of the TROFLEI, a scale description, its categorization under Moos’
dimensions, and a sample item. The scales chosen for use in the current study are
discussed at greater length in Chapter 3.

Considered to be an instrument of choice for technology-integrated environments,
this unique questionnaire has numerous applications. For instance, in their overview
of instrumentation for virtual high schools, Black et al. (2008) consider that the
TROFLEI is robust, especially for adult populations. Individual scales have been
adopted in newly-created instruments for ICT around the world including in Taiwan
(Wu, Chang, & Guo, 2009) and Belgium (Van Petegem, Deneire, & De Maeyer,
2008).

2.2.2.10 Constructivist-Orientated Learning Environment Survey (COLES)

As an outgrowth of the WIHIC and TROFLEI, the Constructivist-Orientated
Learning Environment Survey (COLES) was recently designed to provide feedback
as a basis for reflection in teacher action research. It differs from its predecessor
instruments in that it addresses important aspects related to the assessment of
student learning, a feature lacking in all existing classroom environment
questionnaires. Therefore, Aldridge, Fraser, Bell and Dorman (2012) constructed
two new COLES scales related to assessment: Formative Assessment (the extent to
which students feel that the assessment tasks given to them make a positive
contribution to their learning) and Assessment Criteria (the extent to which
assessment criteria are explicit so that the basis for judgments is clear and public).
As a foundation, the COLES incorporates six of the WIHIC’s seven scales (namely,
Student Cohesiveness, Teacher Support, Involvement, Task Orientation,
Cooperation and Equity), while omitting the WIHIC’s Investigation scale. Like the
TROFLEI, the COLES also includes the scales of Differentiation and Young Adult
Ethos. In addition, the Personal Relevance scale (the extent to which learning
activities are relevant to the student’s everyday out-of-school experiences) was also
borrowed from the CLES for inclusion in the COLES.

Data analysis supported the sound factorial validity and internal consistency
reliability of both actual and preferred versions of the COLES for a sample of 2,043
grade 11 and 12 students in Western Australian schools. In addition, the actual form
of the COLES was capable of differentiating between the perceptions of students in
different classrooms. In order to provide feedback as a basis for reflection in teacher
action, results from the COLES were also complemented by students’ reflective
journals, written feedback, discussion at a forum, and teacher interviews. The
experiences of these teachers concerning the viability of using feedback from the
COLES were considered as part of their action research aimed at improving their
classroom environments (Aldridge et al., 2012).
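
To make the two validation checks just described concrete, the following minimal sketch in Python (with entirely synthetic data and hypothetical column names, not the study's actual data) computes Cronbach's alpha for one scale and a one-way ANOVA testing whether the scale differentiates between classrooms.

import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the item columns of a single scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses (1-5) to an 8-item scale, plus a class identifier.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.integers(1, 6, size=(300, 8)),
                  columns=[f"item{i}" for i in range(1, 9)])
df["class_id"] = rng.integers(0, 20, size=300)

alpha = cronbach_alpha(df.filter(like="item"))
scale_mean = df.filter(like="item").mean(axis=1)

# Ability to differentiate between classrooms: one-way ANOVA with class as factor.
groups = [g.values for _, g in scale_mean.groupby(df["class_id"])]
f_stat, p_value = stats.f_oneway(*groups)
print(f"alpha = {alpha:.2f}, F = {f_stat:.2f}, p = {p_value:.3f}")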

2.2.2.11 Other Questionnaires

The evolution of learning environments reflects the changing values of society
towards education; the following illustrates how some instruments adapted to those
changes. Fraser (2012) reviews a broader spectrum of these alternative
questionnaires. Of particular interest to this study are measures to assess learning
environments involving technological adaptations.

Instruments that have been developed to assess remote learning environments at the
post-secondary level include the Distance and Open Learning Environment Scale
(DOLES) for higher education (Jegede, Fraser, & Fisher, 1995) and the Distance
Education Learning Environments Survey (DELES) (Walker & Fraser, 2005). The
DOLES contains the five core scales of Student Cohesiveness, Teacher Support,
Personal Involvement and Flexibility, Task Orientation and Material Environment,
and Home Environment, as well as the two optional scales of Study Center
Environment and Information Technology Resources. The DELES was constructed
online and includes six scales (Instructor Support, Student Interaction and
Collaboration, Personal Relevance, Authentic Learning, Active Learning and
Student Autonomy).

The Web-Based Learning Environment Instrument (WEBLEI) was developed to
assess students’ perceptions of online learning environments for higher education.
The online mode of education represents a paradigm shift in learning environments
as it involves a separation of time and place between teacher and learner, between
learners, and between learners and learning resources. A study conducted with
university students in Australia, the majority of whom were new to the concept of an
online mode for coursework, validated the questionnaire’s four scales of Access,
Interaction, Response, Results (Chandra & Fisher, 2009).

Another questionnaire, the Online Learning Environment Survey (OLLES), was
designed to capture students' perceptions of their online learning environment and to
assess new information and communication technology (ICT)-rich ways of teaching
and learning. The validation was based on respondents from universities in New
Zealand and Australia for various levels and course subjects. The survey is more
individual-based as there is no real concept of a class. It includes 49 items in 7
scales: Computer Competence, Material Environment, Student Collaboration, Tutor
Support, Active Learning, Information Design and Appeal, and Reflective Thinking
(Clayton, 2007).

In the case of digitalized classrooms, the physical components of the learning
environment grow increasingly important, in addition to the psychosocial factors
which can influence the learning outcomes. Therefore, the Computerized
Classroom Ergonomic Inventory (CCEI), containing scales such as Workspace
Environment, Computer Environment, Visual Environment, Spatial Environment,
and Overall Air Quality (Kroemer & Grandjean, 1997), was used in a number of
studies evaluating technology-rich learning environments (Zandvliet & Fraser,
2005). Maor and Fraser (1996) developed and validated a five-scale classroom
environment instrument in Australia (assessing Investigation, Open-Endedness,
Organization, Material Environment and Satisfaction) based on the LEI, ICEQ and
SLEI. Teh and Fraser (1994) developed and validated a four-scale instrument in
Singapore to assess Gender Equity, Investigation, Innovation and Resource
Adequacy.

While such questionnaires to assess alternative classroom environments were
somewhat relevant to the current study, each was too specific in the environment
that it assessed. Therefore, they were used as a reference to modify the wording of
specific items within scales that were adopted from more generalized questionnaires
as described in Sections 2.2.2.7 and 2.2.2.9. Specifically, the OLLES informed
modifications necessary for the Material Environment scale adopted from the SLEI.
In order to maximize validity and reliability, the author chose to balance the
need to customize instrumentation to the specific aspects of this study
with the robustness of more standardized questionnaires such as the SLEI and
TROFLEI detailed above.

2.2.3 Past Applications of Learning Environment Scales

The learning environment instruments described above have been used to pursue
numerous lines of past research. Specific lines of past research within the field of
learning environments, which are briefly reviewed below, are associations between
student outcomes and the learning environment (Section 2.2.3.1), teachers’ efforts to
improve the classroom environment (Section 2.2.3.2), comparison of actual and
preferred environments (Section 2.2.3.3), cross-national studies (Section 2.2.3.4),
and other lines of research (Section 2.2.3.5). Finally, Section 2.2.3.6 singles out the
line of research that pertains to this study, namely, using learning environment
dimensions as criterion variables in the evaluation of educational innovations.

2.2.3.1 Associations Between Student Outcomes and Environment

Past research has consistently linked the nature of the learning environment with
students' cognitive (i.e. achievement) and affective (e.g. attitudes) learning
outcomes. In fact, a multitude of factors have a multiplicative, diminishing-returns
effect on educational productivity, as theorized in Walberg's (1981) model, which
draws on economic models of agricultural, industrial, and cultural productivity: age,
ability, motivation, quality and quantity of instruction, and the psychosocial
environments of the home, classroom, peer group, and mass media. The effect of
these factors is multiplicative in that any factor at the zero point (e.g. motivation)
will result in zero learning, and therefore it is better to improve a limiting factor
that is low rather than to improve a factor that is already functioning well. While
there is this multitude of factors that
affect educational productivity, the psychosocial learning environment has emerged
as a strong predictor of both achievement and attitudes even when other factors are
held constant (Fraser, 2007, 2012). In other words, students’ perceptions of their
classroom environments, relative to other influential forces such as students’
backgrounds, are more closely associated with learning outcomes.
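
As a purely illustrative rendering of the multiplicative, diminishing-returns idea (with invented factor names and values, not Walberg's actual model parameters), the toy calculation below shows that a zero on any factor yields zero learning, and that raising the weakest factor pays more than raising one that is already high.

import math

# Hypothetical factor scores on a 0-1 scale.
factors = {"ability": 0.8, "motivation": 0.2, "instruction_quality": 0.9}

def learning(f):
    # Learning modeled as the product of all factor scores.
    return math.prod(f.values())

base = learning(factors)                                         # 0.144
raise_low = learning({**factors, "motivation": 0.3})             # 0.216
raise_high = learning({**factors, "instruction_quality": 1.0})   # 0.160
no_motivation = learning({**factors, "motivation": 0.0})         # 0.0
print(base, raise_low, raise_high, no_motivation)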

The association between learning outcomes and students' perceptions of the
psychosocial characteristics of their classrooms is a common and historic area of
interest amongst
learning environment investigations, and such associations have been replicated for
a variety of measures, instruments, and sample populations in different countries and
at different age levels (Fraser, 1994).

For instance, in evaluating computer-assisted instruction, Teh and Fraser (1994)
established associations between classroom environment, achievement and attitudes
among a sample of 671 high-school geography students in Singapore. Using the
QTI, associations between student outcomes and perceived patterns of teacher–
student interaction were reported for samples of 489 senior high-school biology
students in Australia (Fisher, Henderson, & Fraser, 1995) and 1,512 primary-school
mathematics students in Singapore (Goh, Young, & Fraser, 1995). The WIHIC has
been employed in over a dozen different studies in various countries and languages,
and amongst diverse populations, which also showed associations between
classroom learning environment and student outcomes (Fraser, 2012).

A positive learning environment, specifically in science laboratories, has been found
to lead to improved attitudes towards science (Hofstein & Walberg, 1995). Scales
from the SLEI were found to be associated with students’ cognitive and affective
outcomes for a sample of approximately 80 senior high-school chemistry classes in
Australia (Fraser, Giddings, & McRobbie, 1995; McRobbie & Fraser, 1993), 489
senior high-school biology students in Australia (Fisher, Henderson, & Fraser, 1997)
and 1,592 grade 10 chemistry students in Singapore (Wong & Fraser, 1996).

Even though many past learning environment studies have employed techniques
such as multiple regression analysis, this method can overlook the fact that
classroom environment data are typically derived from students grouped in pre-
formed classes, which are inherently hierarchical. Multilevel analysis is therefore
appropriate under such conditions to avoid aggregation bias and imprecision. Two
studies of outcome-environment associations compared the results obtained from
multiple regression analysis with those obtained from an analysis involving the
hierarchical linear model (HLM). The multiple regression analyses were performed
separately at the individual student level and the class mean level. In the HLM
analyses, the environment variables were investigated at the individual level and
also they were aggregated at the class level. In a study involving 1,592 grade 10
students in 56 chemistry classes in Singapore, associations were investigated
between three student attitude measures and a modified version of the SLEI (Wong,
Young, & Fraser, 1997). In Goh, Young and Fraser’s (1995) study with 1,512 grade
5 mathematics students in 39 classes in Singapore, scores on a modified version of
the MCI were related to student achievement and attitude. The two methods
produced results that were consistent in strength and in direction.
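
A minimal sketch of the two analyses being contrasted, using synthetic data and hypothetical variable names: an ordinary least squares regression at the individual student level, and a random-intercept mixed (HLM-style) model that respects the nesting of students within classes.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_classes, n_per_class = 56, 28
class_effect = rng.normal(0, 0.5, n_classes).repeat(n_per_class)
env = rng.normal(0, 1, n_classes * n_per_class)          # environment score
attitude = 0.4 * env + class_effect + rng.normal(0, 1, env.size)
df = pd.DataFrame({"env": env, "attitude": attitude,
                   "class_id": np.arange(n_classes).repeat(n_per_class)})

# Individual-level multiple regression (ignores the class structure).
ols_fit = smf.ols("attitude ~ env", data=df).fit()

# Multilevel analysis: random intercept for each class.
hlm_fit = smf.mixedlm("attitude ~ env", data=df, groups=df["class_id"]).fit()
print(ols_fit.params["env"], hlm_fit.params["env"])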

Using a large sample of high school students in Turkey, a translated version of the
QTI was administered in conjunction with an attitude questionnaire to explore
associations between teacher–student interpersonal behavior and students’ attitudes
to science. The use of multilevel analysis revealed that the influence dimension of
the QTI was related to student enjoyment, while proximity was associated with
attitudes to inquiry (den Brok et al., 2010). In another study involving the
TROFLEI, structural equation modeling with LISREL was applied to investigate
relationships among the classroom environment, antecedent variables (gender, grade
level, and home computer and Internet access), and student affective outcomes
(attitude to the subject, attitude to computer use and academic efficacy) for 4,146
high-school students from Western Australia and Tasmania. Results revealed that improving
classroom environment had the potential to improve student outcomes; antecedents
did not have any significant direct effect on outcomes; and academic efficacy
mediated the effect of several classroom environment dimensions on attitude to
subject and attitude to computer use (Dorman & Fraser, 2009).
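
The model itself was fitted with LISREL; as a hedged stand-in, the sketch below reproduces the mediation logic (academic efficacy mediating the path from classroom environment to attitude) with simple Baron-and-Kenny-style regressions on synthetic data. All variable names are hypothetical.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 400
env = rng.normal(size=n)                      # classroom environment score
efficacy = 0.5 * env + rng.normal(size=n)     # mediator: academic efficacy
attitude = 0.2 * env + 0.4 * efficacy + rng.normal(size=n)
df = pd.DataFrame({"env": env, "efficacy": efficacy, "attitude": attitude})

total = smf.ols("attitude ~ env", data=df).fit().params["env"]
direct = smf.ols("attitude ~ env + efficacy", data=df).fit().params["env"]
a_path = smf.ols("efficacy ~ env", data=df).fit().params["env"]
b_path = smf.ols("attitude ~ env + efficacy", data=df).fit().params["efficacy"]
# If efficacy mediates, the indirect effect (a*b) absorbs part of the total effect.
print(f"total = {total:.2f}, direct = {direct:.2f}, indirect = {a_path * b_path:.2f}")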

Table 2.3 Some Studies of Associations Between the Learning Environment and Student Outcomes

Study | Outcome Measures | Sample

Studies Involving the MCI
Fraser & Fisher (1982b) | Inquiry skills; understanding of nature of science; attitudes | 2,305 Grade 7 science students in 100 classes in Tasmania, Australia
Goh, Young, & Fraser (1995) | Attitudes | 1,512 primary school students in Singapore
Majeed, Fraser, & Aldridge (2002) | Attitudes | 1,565 mathematics students in 81 classes in Brunei Darussalam

Studies Involving the SLEI
Fraser & McRobbie (1995); McRobbie & Fraser (1993) | Attitudes | Approximately 80 senior high school chemistry classes in Australia
Fisher, Henderson, & Fraser (1997) | Attitudes | 489 senior high school biology students in Australia
Wong & Fraser (1996) | Attitudes | 1,592 Grade 10 chemistry students in Singapore
Lightburn & Fraser (2007) | Attitudes | 761 high-school students in the US
Quek, Wong, & Fraser (2005) | Attitudes | 497 secondary school students in Singapore (using an adaptation, the CLEI)

Studies Involving the CLES
Kim, Fisher, & Fraser (1999) | Attitudes | 1,083 Grade 10 and 11 science students in 24 classes in Korea
Aldridge, Fraser, Taylor, & Chen (2000) | Attitudes | 1,081 Grade 8–9 science students in Taiwan and 1,879 Grade 7–9 science students in Australia
Aldridge, Fraser, & Sebela (2004) | Attitudes | 1,843 Grade 4–9 students in 29 mathematics classes in South Africa
Nix, Fraser, & Ledbetter (2005) | Attitudes | 1,079 high school students in 59 classes in Texas, USA

Studies Involving the WIHIC
Aldridge et al. (1999); Aldridge & Fraser (2000) | Enjoyment | 1,081 junior high school students in Australia and 1,879 such students in Taiwan
Kim, Fisher, & Fraser (2000) | Attitudes | 543 Grade 8 students in 12 schools in Korea
Telli, Çakıroğlu, & den Brok (2006) | Attitudes | 1,983 students in 57 classrooms in Turkey
Wolf (2008) | Attitudes | 1,434 middle-school science students in 71 classes in the US
Fraser & Chionh (2009) | Achievement, attitudes, and self-esteem | 2,310 Grade 10 geography and mathematics students in Singapore
Fraser, Aldridge, & Adolphe (2010) | Attitudes | 567 high-school science students in Australia and 594 such students in Indonesia

Studies Involving the TROFLEI
Dorman & Fraser (2009) | Attitudes | 4,146 Grade 8–13 students in Western Australia and Tasmania
Koul, Fisher, & Shaw (2011) | Attitude to subject, attitude to computers, and academic efficacy | 1,027 high-school students in New Zealand

A meta-analysis conducted by Haertel, Walberg and Haertel (1981), involving 734
correlations from 12 studies spanning 823 classes, eight subject areas, 17,805
students and four nations, revealed associations between various dimensions of the
learning environment and student outcomes. Additionally, correlations were
generally higher in samples of older students and in studies employing collectivities
such as classes and schools (in contrast to individual students) as the units of
statistical analysis. In particular, higher achievement on a variety of outcome
measures was found consistently in classes perceived as having greater
Cohesiveness, Satisfaction and Goal Direction and less Disorganization and
Friction. Other meta-analyses also provide further evidence supporting the link
between educational environments and student outcomes (Fraser, Walberg, Welch et
al., 1987).
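
A minimal sketch (with invented numbers, not Haertel et al.'s data) of the standard procedure for pooling correlations across studies in such a meta-analysis: each r is converted to Fisher's z, averaged with weights of n - 3, and converted back.

import numpy as np

rs = np.array([0.30, 0.25, 0.40, 0.15])   # hypothetical per-study correlations
ns = np.array([150, 300, 120, 500])       # hypothetical per-study sample sizes

z = np.arctanh(rs)                        # Fisher r-to-z transformation
weights = ns - 3                          # inverse-variance weights for z
z_pooled = np.average(z, weights=weights)
r_pooled = np.tanh(z_pooled)              # back-transform to a correlation
print(f"pooled r = {r_pooled:.3f}")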

In summary, classroom and school environment dimensions consistently have been
found to be strong predictors of both achievement and attitudes even when a
comprehensive set of other factors are held constant. Consequently, the rationale for
investigating associations between learning environments and learner outcomes in
my study was that an educational innovation (i.e. virtual laboratories) influences the
learning environment which, in turn, is likely to be linked to improved attitudes and
achievement.

Table 2.3 displays details of some of the well-known studies that have established
associations between learning environment scales and various measures of student
outcomes. These studies are organized in the table according to which
specific learning environment instrument was employed in the study. Other
information about each study, such as the sample size, grade level, and location, is
also shown.

2.2.3.2 Action Research: Teachers’ Efforts to Improve Classroom Environments

Because the study of educational environments is ultimately intended to lead to
implementing changes beneficial for students, the evaluation of reform efforts
should include classroom environment dimensions to provide process measures of
effectiveness. However, much of the research specific to the field of learning
environments has remained unfamiliar to teachers. Fraser (1986) describes how
feedback information based on student or teacher perceptions can be employed as a
basis for reflection upon, discussion of, and systematic attempts to improve
classroom and school environments. Therefore, this section reviews this specific
line of research involving teachers’ use of learning environment perceptions in
guiding practical attempts to improve their own classrooms and schools.

Fraser and Fisher’s (1986) case studies of teachers attempting to improve their
classroom environment involved five steps: assessment when students were given
the preferred form of the CES and one week later the actual form; feedback to the
teachers from students’ responses regarding the gap between the preferred and the
actual environment; private reflection and informal discussion that helped the
teacher to consider which dimensions require intervention; intervention for about a
two-month period during which specific strategies to address dimensions of concern
were implemented; and reassessment at which point students again responded to the
actual form of the CES to determine whether the changes they preferred indeed had
occurred. Some changes in the actual environment did occur as a result, and two of
the dimensions on which significant changes were recorded were those that the
teacher had specifically attempted to change.

This practical approach to learning environments research has been used with pre-
service teacher education students in their own university settings and in their
students’ school classrooms (Yarrow, Millwater, & Fraser, 1997), as well as with in-
service teachers (Aldridge, Fraser, & Ntuli, 2009). For instance, in South Africa,
two such studies were conducted using action research in an attempt to improve
teachers’ classroom learning environments. In Aldridge, Fraser and Ntuli’s (2009)
study, 31 in-service teachers undertaking a distance-education program administered
an adapted version of the WIHIC in the IsiZulu language to 1077 primary school
students, which enabled some of the teachers to use feedback from the questionnaire
to improve their classroom environments with varying degrees of success. In
Aldridge, Fraser and Sebela’s (2004) study, a group of 29 mathematics teachers in
South Africa administered the English version of the CLES to their primary-level
students and some of the teachers were able to improve the constructivist orientation
of their classrooms.

In the US, Sinclair and Fraser (2002) used the actual and preferred forms of a
questionnaire based on the WIHIC to guide changes in their classrooms’ learning
environments. Results generally supported the success of teachers’ attempts to
change their classroom environments based on feedback from the students, but they
also indicated that efforts to change the learning environment should involve
different interventions for students of different genders.

Most recently, the COLES was used as a basis for reflection for teachers’ attempts
to bridge the gap between preferred and actual classroom environments over a six-
week period. The COLES was administered as a pre-test and post-test with the aim
being for teachers to use the feedback from the pre-test to reduce the actual–
preferred discrepancies on selected COLES scales by the time of the post-test. In
this particular study, the authors created a novel method, using circular profiles, to
communicate to the teachers feedback information based on students’ responses to
the COLES. Qualitative data were also collected from reflective journals, written
feedback, forum discussions and teacher interviews. Teachers felt that this process
enabled them to reflect on their teaching practices and ultimately to help them to
improve their classroom environments (Aldridge et al., 2012).

2.2.3.3 Comparison of Actual and Preferred Environments

As described in Section 2.3.2, different versions of learning environment
instruments are available to distinguish between students' perceptions of the actual
environment and their preferred environment. Interestingly, a number of studies
showed that students preferred a more positive classroom environment than was
actually present on all dimensions within a survey, and teachers perceived a more
positive actual classroom environment than did their students in the same
classrooms on most of the dimensions on that same survey (Fisher & Fraser, 1983;
Fraser, Giddings, & McRobbie, 1995; Moos, 1974). Subsequently, the question
arises as to whether students would perform better in their preferred environments.
To this end, Fraser and Fisher (1983) used the CES and ICEQ with a sample of 116
class means and found that actual–preferred congruence (or person–environment fit)
could be as important as the actual classroom environment in predicting student
achievement of cognitive and affective learner outcomes. Therefore, outcomes
might be enhanced by attempting to change the actual classroom environment in
order to increase congruence with student preferences.
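
A minimal sketch of the actual-preferred congruence idea, with synthetic data and hypothetical column names: for each class, the discrepancy between mean actual and mean preferred scores on a scale yields a person-environment fit measure that can be used, alongside the actual score itself, as a predictor of outcomes.

import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "class_id": np.arange(30).repeat(25),
    "actual": rng.normal(3.5, 0.6, 750),      # actual environment score (1-5)
    "preferred": rng.normal(4.2, 0.4, 750),   # preferred environment score
})

class_means = df.groupby("class_id")[["actual", "preferred"]].mean()
# Person-environment fit: a smaller |actual - preferred| gap means better congruence.
class_means["congruence"] = -(class_means["preferred"] - class_means["actual"]).abs()
print(class_means.head())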

2.2.3.4 Cross-National Studies

With the recent globalization of the economy, new vistas open for education in the
international arena as well. Approaching educational environments from a cross-
national perspective is advantageous in that variation (such as teaching methods,
student attitudes, new nationalities) is increased within the sample for a study, and in
that standard practices in one country can be called into question in an unbiased
manner in another country (Fraser, 2012).

A landmark cross-national learning environment study (Aldridge, Fraser, & Huang,
1999), a collaboration between six Australian and seven Taiwanese researchers,
involved the administration of the WIHIC to 50 junior high-school science classes
in each of Taiwan (1,879 students) and Australia (1,081 students).
The questionnaire was available in both English and Mandarin, and the translation
was double-checked by external translators. In addition to students’ scores from the
questionnaires, qualitative data, involving interviews with teachers and students and
classroom observations, were also collected. The largest differences in the means
between the two countries were found for the scales of Involvement and Equity,
with Australian students perceiving each scale more positively than students from
Taiwan. The qualitative data provided valuable insights into the perceptions of
students cross-nationally, helped to explain some of the differences in the means
between countries, and highlighted the need for caution in the interpretation of
differences between the questionnaire results from two countries with cultural
differences. A similar study was conducted at the cross-national level involving the
use of the CLES in Taiwan and Australia (Aldridge et al., 2000).

In a separate study, the WIHIC was validated in two languages in Indonesia (in
Bahasa) and in Australia (in English) and some differences were found between
countries, as well as for different sexes. This study also confirmed associations
between the learning environment and several attitude scales (Fraser, Aldridge, &
Adolphe, 2010).

In designing new instruments, it is important to validate them across nations
simultaneously, and many such questionnaires were developed in this way. For
instance, the SLEI was validated across the USA, Canada, Australia, England,
Israel, and Nigeria (Fraser, Giddings, & McRobbie, 1992) and the WIHIC was
validated using students in Australia, England, and Canada (Dorman, 2003). As
well, the TROFLEI was cross-culturally validated in the US and Turkey for high
school (grades 9–12) students. Differences were noted across national borders in
each study, suggesting the role of culture in perceptions of the learning environment.

2.2.3.5 Other Lines of Learning Environments Research

Other lines of research involve the use of triangulation, or the combining of
quantitative and qualitative methods, which permeates the field today (Aldridge, Fraser, &
Huang, 1999; Fraser & Tobin, 1991; Mathison, 1988; Tobin & Fraser, 1998).
Quantitative data collection is accomplished most often through the use of a
questionnaire while qualitative data usually encompasses student and teacher
interviews (and perhaps sometimes interviews of administrators and parents),
classroom observations, and students’ written work. Unique contributions to
learning environments that use a mixed-methods approach within the same study
include the use of qualitative data to complement quantitative results, which clarified
patterns in Taiwanese and Australian classrooms and identified the differences
between them (Aldridge, Fraser, & Huang, 1999), the investigation of higher-level
cognitive learning in US classrooms (Tobin, Kahle, & Fraser, 1990), and a
multilevel exploration of the learning environment to judge whether a certain
teacher was typical of other teachers within her school and of other schools within
the state in Australia (Fraser, 1999). In a mostly qualitative study comparing
exemplary teachers with non-exemplary teachers, data from questionnaires were
also obtained and the merging of these two methods helped to shed light on the
differences between classrooms of such teachers (Fraser & Tobin, 1989). Currently,
many evaluations of educational innovations using a learning environment
framework include the use of at least semi-structured interviews in their design, in
addition to the main method of questionnaires as data collection.

As well, learning environments research has started to play a significant role in
informing school psychologists and counselors about how to guide changes in
classroom environments (Burden & Fraser, 1993) and in evaluating the efficacy of
their own counseling programs in education (using the MCI) (Sink & Spencer,
2005).

Recently, a trend has re-emerged to extend the research on learning environments in
classrooms to its links with other environments such as the home (Marjoribanks,
1991), the home and the parents’ workplace (Moos, 1991), and the home and peer
environments (Fraser & Kahle, 2007). Findings from a three-year study involving
7,000 US science and mathematics students showed that the environments of the
classroom, home, and peer group all accounted for statistically significant amounts
of unique variance in student attitudes, but only the classroom environment
accounted for statistically significant amounts of unique variance in student
achievement scores (Fraser & Kahle, 2007). Numerous studies also considered
whether the ethos of the whole school environment has an impact on the classroom
environment (Aldridge, Fraser, & Laugksch, 2011; Dorman, Fraser, & McRobbie,
1997; Fraser & Rentoul, 1982). A new scale was even developed to assess links
between multiple settings in a unique African milieu: the Socio-Cultural
Environment Scale (Jegede, Fraser, & Okebukola, 1994).

A learning environments framework is also applied to studies of transitions from
primary to high school, where the environment often changes. Most of these studies
identify a deterioration of the classroom environment as students move from the
more personal primary classrooms to high school classrooms (Ferguson & Fraser,
1998; Midgley, Eccles, & Feldlaufer, 1991).

Finally, typologies of classroom environments have also been identified through
learning environments research. Five clusters of learning environment orientations
that emerged from a study using the CES in the US are control, innovation,
affiliation, task completion, and competition (Moos, 1978). Using the QTI in the
Netherlands and US, researchers identified eight distinct interpersonal profiles:
directive; authoritative; tolerant-authoritative; tolerant; uncertain-tolerant; uncertain-
aggressive; repressive; and drudging (Brekelmans, Levy, & Rodriguez, 1993),
although some of these typologies were considered to be unique to certain countries
(Rickards, den Brok, & Fisher, 2005). Six distinct classroom profiles that emerged
from a study employing a Turkish translation of the WIHIC in Turkey were: self-
directed learning; task-orientated cooperative learning; mainstream; task-orientated
individualized; low-effective learning; and high-effective learning (den Brok et al.,
2010). As well, using cluster analysis for results from the TROFLEI on a large
Australian sample of students, five relatively homogeneous groups of classes
became apparent: exemplary; safe and conservative; non-technological teacher
centered; contested technological; and contested non-technological (Dorman,
Aldridge, & Fraser, 2006).
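
As a hedged illustration of how such typologies can be derived, the sketch below applies k-means clustering (a generic stand-in for whatever clustering procedures the cited studies actually used) to synthetic class-mean scores on ten TROFLEI-like scales.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Hypothetical class-mean scores on 10 scales for 200 classes.
class_profiles = rng.normal(3.5, 0.5, size=(200, 10))

X = StandardScaler().fit_transform(class_profiles)      # standardize each scale
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
print(np.bincount(kmeans.labels_))   # number of classes falling in each cluster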

2.2.3.6 Evaluating Educational Innovations using Learning Environment Scales

This section discusses another line of past and current research involving learning
environment scales that is relevant to my study and therefore deserves this separate
section to allow for greater depth. More recently, educational innovations have
changed the dynamic of traditional classrooms and their evaluation has created a
new subgenre of the learning environment framework. Learning environment scales
have been useful in providing criteria of effectiveness for evaluating educational
innovations in the numerous past studies described below. Thus, in my study,
learning environment variables were used both as criteria of instructional
effectiveness and as predictors of student outcomes such as attitudes and
achievement. In this manner, educational innovations influence learning
environments, which in turn influence attitudes and achievement. This constitutes
the specific research approach for my study because the use of virtual laboratories is
considered an educational innovation that requires evaluation.

Studies that have used learning environment scales to evaluate educational
innovations are presented below in a chronological manner. They include historical
studies (Section 2.2.3.6.1), and studies that evaluate inquiry-based learning and
constructivism (Section 2.2.3.6.2), new programs in mathematics (Section
2.2.3.6.3), teacher professional development programs (Section 2.2.3.6.4), and
technology integration (Section 2.2.3.6.5).

2.2.3.6.1 Historical evaluation studies

The focus on using learning environment scales to evaluate educational innovations
has evolved only recently but, since its inception, the development of learning
environment questionnaires has often occurred within the context of a need to
evaluate a particular educational innovation. An evaluation of Harvard Project Physics, a
national curriculum introduced in the late 1960s to utilize new instructional media
that emphasize the philosophical, historical, and humanistic aspects of physics,
resulted in the development of the first learning environment questionnaire, the LEI,
as described in Section 2.2.2.1. In one particular study, which was part of a series of
investigations about the classroom as a social system according to Getzels and
Thelen's (1960) theory, 1,700 US high school students who completed the project
were surveyed (within-class design) with a questionnaire that was based on the
Physics Achievement Test, the Science Process Inventory, the Semantic Differential
for Science Students, the Pupil Activity Inventory, and the Classroom Climate
Questionnaire. The study showed that there were significant and complex relations
between climate measures (18 structural and affective) and learning criteria (9); for
instance, characteristics such as ‘isomorphism’, ‘organization’, and ‘synergism’
predicted learning variables more frequently than ‘coaction’ and ‘syntality’
(Walberg & Anderson, 1968).

Another seminal study was the evaluation of the Australian Science Education
Project (ASEP) that, during 1969 to 1974, produced learning materials for high
school science classes. The sample involved 300 schools and used case studies as
well as questionnaires (Owen, 1979). At that time, because few instruments existed,
half of the LEI scales that were relevant were selected and some new scales were
developed, including a new scale of Individualization. ASEP students perceived
their classroom as being more satisfying, individualized, and having a better
material environment compared to a control group (Fraser, 1979).

The difference between these historical and founding studies and more recent
evaluations of educational innovations lies in the evolving role of learning
environment variables – whether they serve as independent variables or as
dependent variables (i.e. criteria of effectiveness).

2.2.3.6.2 Evaluation of inquiry-based learning and constructivism in science

Inquiry-based learning encourages students to ask questions, share ideas, and engage
in dialogue to investigate information. A key component is whole-group
collaboration, although individuals participate equally and are held accountable.
Many studies have supported the effectiveness of inquiry-based programs (Wolf &
Fraser, 2008). For example, evaluation of a computer-assisted learning course in
which students used a database to explore birds of Antarctica, a study which is
described in the section on Technology Integration below, revealed positive student
perceptions of dimensions such as Investigation and Open-Endedness, which both
are hallmarks of inquiry-based learning (Maor & Fraser, 1996). In another study,
which differed from prior evaluations of inquiry-based learning in that it utilized a
control group, inquiry-based laboratory teaching was evaluated in terms of
perceptions of the class learning environment, students’ attitudes towards science,
and cognitive achievement. The data from 1,434 middle-school physical science
students in the US were collected using the WIHIC to measure the perceptions of
the learning environment, selected items from the TOSRA to measure attitudes
towards science, a 9-item scale to assess achievement based on a standardized state
test, and interviews. The instructional method was differentially effective for males
(higher with inquiry) and females (higher with non-inquiry) (Wolf & Fraser, 2008).
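
A minimal sketch of how such differential effectiveness is tested: a two-way ANOVA whose instruction-by-sex interaction term carries the effect of interest. The data and effect sizes below are invented for illustration.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 400
df = pd.DataFrame({
    "inquiry": rng.integers(0, 2, n),   # 1 = inquiry-based instruction
    "male": rng.integers(0, 2, n),      # 1 = male
})
# Toy effect: inquiry raises attitudes for males but lowers them for females.
df["attitude"] = (df.inquiry * np.where(df.male == 1, 0.5, -0.2)
                  + rng.normal(0, 1, n))

model = smf.ols("attitude ~ inquiry * male", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # the inquiry:male row tests the interaction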

In two separate studies, the CLES was used in Korean high schools to assess novel
constructivist approaches. One study involved longitudinal action research with 136
earth science students and revealed that students’ perceptions became increasingly
positive over time (changes on the Personal Relevance scale were also associated
with improved attitudes towards science) (Oh & Yager, 2004). Another study
involved teachers who attended a professional development program at the
University of Iowa involving the implementation of constructivist approaches (Cho
et al., 1997).

2.2.3.6.3 Evaluation of new programs in mathematics

Although many studies reviewed in Section 2.2.3.2 involved the school subject of
science, many instruments have been adapted for mathematics classes as the two
subjects are often related. For instance, one particular innovation relies on
mathematics media (numbers and measurements) within an innovative science
course that uses anthropometric activities. This innovation was evaluated using four
scales from the SLEI, TOSRA and Fennema-Sherman attitude scales, together with
an achievement test and report card grades. This study was carried out with 761
high school biology students in the US, including a control group for
learning environment perceptions and attitudes (Lightburn & Fraser, 2007).

A number of studies have evaluated educational innovations such as the Class
Banking System (CBS), which uses constructivist approaches. In this study, 119
fifth grade students were split into two control groups and one experimental group to
evaluate the CBS in terms of perceptions of the classroom environment, students’
attitudes towards mathematics, and conceptual development in mathematics.
However, the relatively small sample size decreased the statistical power. Learning
environment data included scales from the actual forms of the ICEQ to assess
Individualization and CLES to assess constructivism. The TOMRA was used to
measure attitudes towards mathematics, concept map tests were used to measure the
conceptual development, and some case studies were conducted (Spinner & Fraser,
2005).

In another attempt to improve the mathematics classroom environment and attitudes
towards reading, writing, and arithmetic, teachers who participated in project
SMILE (Science and Mathematics Integrated with Literary Experiences)
implemented this innovative program in their classrooms. This program was
evaluated by surveying 120 fifth grade students in the US whose teachers completed
in-service training. In addition to qualitative data, scales from the actual and
preferred forms of the MCI were used to measure perceptions of the learning
environment, and scales from the NAEP attitude inventory were used to measure
attitudes towards reading, writing, and arithmetic. The results showed improved
congruence between the actual and preferred environment and improved reading and
attitudes towards mathematics (Mink & Fraser, 2005).

Ogbuehi and Fraser (2007) evaluated innovative teaching strategies in middle-
school mathematics in terms of the classroom environment, students' attitudes
towards mathematics, and students’ conceptual development of mathematics. For
this study, 661 students from inner-city classes in the US were surveyed with
questionnaires containing scales adapted from the CLES and WIHIC to measure
perceptions of the learning environment, and scales from the TOMRA to measure
attitudes towards mathematics. For each dimension, the efficacy of the innovative
teaching model was supported (Ogbuehi & Fraser, 2007).

2.2.3.6.4 Evaluation of teacher professional development programs

A number of innovative programs have been aimed at teachers, who are responsible
for transmitting science content and promoting positive attitudes towards science,
and who have an important role in the learning environment. An evaluation of a
long-term, teacher professional development program in the US, based on the
Integrated Science Learning Environment (ISLE), involved a combination of
methods: constructivist concept-mapping, psychosocial cognition, and Information
Technology (IT). The evaluation of this program was novel in that the researchers
assessed the effectiveness of the teacher-training program using a new form of the
CLES (Comparative Student or CLES-CS) which has the same scales as the original
CLES but includes two separate, side-by-side frequency scales for each item to rate
this class and another class (whose teachers have not been trained through the ISLE
program). For a sample size of 1,079, students whose teachers participated in the
ISLE program perceived higher levels of Personal Relevance and Uncertainty in
their classes compared with other science and non-science classes in the same
school (Nix, Fraser, & Ledbetter, 2005; Nix & Fraser, 2011).

Another evaluation of the effectiveness of a science course for prospective
elementary school teachers, who are usually intimidated by teaching science,
involved these teachers' perceptions of laboratory learning environments and
attitudes towards science. A sample of 525 females at an American urban university
responded to scales from WIHIC, SLEI, and TOSRA. There were large and
statistically significant differences between pre-course and post-course responses for
both attitudes towards science and perceptions of the learning environment (Martin-
Dunlop & Fraser, 2007).

A third such study investigated the success of a two-year mentoring program in
science for beginning elementary-school teachers in terms of participants' classroom
teaching behavior as assessed by their school students’ perceptions of their
classroom learning environments. The sample consisted of seven novice primary
school teachers in the US and their 573 students. A modified version of the WIHIC
was used to assess student perceptions of classroom learning environment as a
pretest and as a posttest. The use of MANOVA and effect sizes supported the
efficacy of the mentoring program in terms of some improvements over time in the
learning environment, as well as in students’ attitudes and achievement (Pickett &
Fraser, 2009).
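
A minimal sketch of the analysis pattern just described (MANOVA across several learning environment scales for a pretest-posttest factor, plus a Cohen's d effect size for one scale), using synthetic data and hypothetical scale names.

import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(6)
n = 200
df = pd.DataFrame({
    "time": np.repeat(["pre", "post"], n // 2),
    "support": rng.normal(3.4, 0.6, n),
    "involvement": rng.normal(3.2, 0.6, n),
})
df.loc[df.time == "post", ["support", "involvement"]] += 0.3   # toy improvement

manova = MANOVA.from_formula("support + involvement ~ time", data=df)
print(manova.mv_test())

# Cohen's d for one scale: mean gain divided by the pooled standard deviation.
pre = df.loc[df.time == "pre", "support"]
post = df.loc[df.time == "post", "support"]
pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
print(f"Cohen's d = {(post.mean() - pre.mean()) / pooled_sd:.2f}")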

2.2.3.6.5 Evaluation of technology integration

Since the advent of the computer and the Internet, there has been much pressure to
incorporate information technology into science classrooms; as well, there is an
increasing interest in evaluating the effects of this technology on students in terms of
learning environments. An evaluation of a micro-PROLOG-based Computer-
Assisted Learning (CAL) involved developing and validating a new instrument,
called the Geography Class Environment Inventory (GCEI). 671 high school
students in Singapore were given the GCEI, which includes four scales (Gender
Equity, Investigation, Innovation, and Resource Adequacy) to measure perceptions
of the learning environment, the Semantic Differential Inventory (SDI) to measure
attitudes towards the subject, and a Geography Aptitude Test (GAT) to measure
achievement. Relative to non-CAL students, CAL students had higher scores for
achievement, attitudes, and perceptions of classroom environment (Teh & Fraser,
1994).

As well, Maor and Fraser (1996) evaluated inquiry-based CAL with 120 high-school
students in Western Australia who interacted with a computerized database
associated with a program entitled The Birds of Antarctica. A new questionnaire
based on the LEI, ICEQ, and SLEI, called the Computerized Classroom
Environment Inventory (CCEI), was developed to include five scales (Investigation,
Open-Endedness, Organization, Material Environment, Satisfaction). Questionnaire
items were re-worded for whole-class observations from ‘I’ statements to ‘students’
statements. The results showed increased student-perceived Investigation and
Open-Endedness, whereas the teachers' perceptions were more positive than the students'.

In an investigation of whether using laptop computers in science and mathematics
classes affects students' perceptions of the learning environment, 1,173 high school
students in Canada were given a new version of the WIHIC (the personal form and
actual and preferred forms), one scale from the Computer Aptitude Survey (CAS) to
measure attitudes towards computers, and one scale from the TOSRA (Enjoyment of
Lessons). While there were positive associations between perceptions of the
learning environment and students’ attitudes towards science and mathematics, there
were statistically significant differences between perceptions of the actual and
preferred environments, differences for males and females, and differences between
science and mathematics (Raaflaub & Fraser, 2002).

Regarding networked use of computers, Zandvliet and Buker (2003) considered the
relationship between technology and instruction as they evaluated Internet
classrooms in terms of the physical and psychosocial environments and student
satisfaction. They argued that technology brings more diversity to the factors that
influence the learning environment; these factors are divided into three major
categories that comprise the person’s learning experience and thus satisfaction: the
ecosphere (physical surroundings for example, lighting and space); the sociosphere
(the person’s net interactions with all other people within that environment (e.g.
autonomy and cohesion); and the technosphere (includes all the man-made objects
available). In one of their studies, 358 high school students in B.C., Canada,
responded to the actual form of the WIHIC, items from the TOSRA, and the
Computer Classroom Environment Checklist (CCEC) for physical factors (for which
the unit of analysis was the classroom and the scales included Workspace
Environment, Computer Environment, Visual Environment, Spatial Environment,
and Air Quality Rating). In another study, the physical and psychosocial learning
environments of computer-networked classrooms were evaluated for their effects on
student satisfaction. Scores from the CCEI, WIHIC (actual and personal forms), and
TOSRA, as well as systematic observation and case studies, comprised the data
collected from 1,404 students in Australian and Canadian high schools. These data
indicated that the psychosocial environment (specifically Independence and Task
Orientation) was significantly associated with satisfaction with learning, although
learning satisfaction was not associated with the physical classroom environment.
However, there were statistically significant associations between the physical and
psychosocial learning environment variables in classes using new informational
technology and, thus, the physical environment indirectly impacted students’
satisfaction with learning (Zandvliet & Fraser, 2005).

Aldridge and Fraser’s (2003, 2008, 2012) longitudinal study, which also involved
the TROFLEI’s development and validation, evaluated a technology-rich
environment that focused on outcomes-based learning. The four-year investigation
involving 1,918 students revealed more positive student perceptions on seven of the
ten TROFLEI scales, but the degree of change in the learning environment varied
for different learning areas.

Adult students undertaking computer application courses in Singapore were also
involved in an evaluation of the program's effectiveness. The WIHIC was adapted
for the sample of 250 working adults and it proved to be valid and reliable.
Generally, students perceived their classroom environment in a positive manner, but
with some variation for students of different genders and ages (Khoo & Fraser,
2008).

In another study, students' perceptions of a blended learning environment, in which
online technology is integrated into class lessons, were investigated. Getsmart, a
teacher-designed website, was blended into science and physics lessons at an
Australian high school. The Web-Based Learning Environment Instrument
(WEBLEI) (Section 2.2.2.11) was found to be valid for this sample of 302 students
in year 10–12 classes, even though the original questionnaire was intended for
university students. The data generated through the WEBLEI, in addition to
qualitative data extracted from written surveys and emails, suggested that students
had positive perceptions of their web-based learning environment (Chandra &
Fisher, 2009).

2.3 Student Attitudes

In this study, the effectiveness of virtual laboratories was evaluated in terms of not
only students’ perceptions of their learning environment (see Section 2.2) but also
students’ attitudes towards science. Below, I consider aspects of the affective
domain of learning and its relationship to the cognitive domain of learning. First,
the term ‘attitude’ is defined in Section 2.3.1. Then methods of assessment are
presented in Section 2.3.2, and this is followed by a review of the literature about
the impact of educational interventions on students’ attitudes (Section 2.3.3).

2.3.1 Definition of Attitude

For decades, the attempt to clarify the term ‘attitude’ has engendered much
controversy because it incorporates a broad range of dimensions that are loosely
defined with vague references. Examples of such dimensions are interest,
engagement, motivation, mindfulness, flow, self-efficacy, identity, perceived ability,
the degree of fun, personal relevance, and the like. This haziness is further clouded
by the inclusion of sub-topics under the all-encompassing ‘science’ umbrella, such
as various careers, formal and informal education, perceptions of scientists and
science media (Aldridge & Fraser, 2008; Olitsky & Milne, 2012; Oliver & Venville,
2011; Tytler & Osborne, 2012). More recently, Koballa and Glynn (2007) define an
attitude as “a general and enduring positive or negative feeling about some person,
object, or issue”, in this case, science (p. 78). This definition maintains the
neutrality of the term ‘attitude’, whereas many of the aforementioned dimensions
refer to only the positive form of the affective domain; for instance, ‘interest’
denotes a positive feeling about the subject.

Because of the lack of clarity concerning the term attitude, Klopfer (1971) began to
distinguish between ‘attitudes towards science’, the subject of this section, and
‘scientific attitudes’, a mindset committed to evaluating evidence, harboring
skepticism, and requiring rational explanations for phenomena. However, ‘attitudes
towards science’ can still encompass attitudes towards scientists, school science,
science learning experiences and activities, as well as the pursuit of science-related
careers (Tytler & Osborne, 2012). Later, Klopfer (1976) further classified the
affective domain, specific to science education, into four categories of attitudes:
towards events in the natural world (awareness and emotional responses to
experiences), towards activities (school science and informal science), towards
science in general (the nature of science as a means of knowing about the world),
and towards inquiry (the adoption of inquiry processes including methodical
assessment of phenomena).

OECD’s (2009) Programme for International Student Assessment (PISA) assesses
students every three years in a variety of subject areas. Their definition for attitudes
towards science is based on the belief that a student’s scientific literacy includes
certain attitudes, beliefs, motivational orientations, sense of self-efficacy, values,
and ultimate actions, which builds upon Klopfer’s (1976) structure for the affective
domain in science education as well as other reviews of attitudinal research
(Gardner, 1975; Osborne, Simon, & Collins, 2003).

Considering other definitions for the affective domain, some educational researchers
refer to ‘engagement’, a positive feeling or a passion, as an indicator for attitudes
(Olitsky & Milne, 2012). Engagement can be further broken down into its various
components, such as behavioral engagement (e.g. on-task actions in a science
classroom or participation in extra-curricular activities), emotional engagement
(interests and values evident from students’ reactions to their environment), and
cognitive engagement (motivation, self-efficacy, and behavior) (Fredricks,
Blumenfeld, & Paris, 2004; McCarty, Hope, & Polman, 2010). A corollary of
‘engagement’ is the concept of ‘flow’, defined as “the feeling generated by total
engagement with an activity” (Tytler & Osborne, 2012, p. 605). According to a
pioneering study by Csikszentmihalyi and Schneider (2001), tests, quizzes, and
concrete tasks, including laboratory work, all produced above-average levels of
‘flow’ while the presentation of lectures and video clips produced little ‘flow’. The
current study involved virtual laboratories, whose use was anticipated to produce
greater ‘flow’.

‘Motivation’ is a key term often used by educational researchers in relation to
students’ attitudes towards a subject. In a case study of 10th grade biology students
in Australia concerning whether the incorporation of Biologica, a digital genetics
activity, would increase their motivation, Tsui and Treagust (2004) identified some
salient features that elicited student’s motivation to learn: instant feedback,
flexibility, and visualization. These features are also prominent in virtual
laboratories, the subject of the current study. In Tsui and Treagust’s study, student
motivation, which increased as a result of exposure to Biologica, was interpreted by
the intrinsic dimensions of curiosity, control, fantasy, and challenge. The
researchers concluded that new complex topics, such as genetics, should be
introduced to students by embedding them into supportive learning conditions
including student motivation and interests, as well as the beliefs of learners and
teachers. Furthermore, the authors asserted the need for students to be engaged in
mindful learning. According to Salomon and Globerson (1987), mindfulness
involves “volitional, meta-cognitively guided employment of non-automatic, usually
effort-demanding processes” (p. 623). Accordingly, the learning benefits of being
motivated and mindful are expected to be long-term because they are related to
higher levels of learning that engage all faculties and produce stronger impressions
in the minds of learners.

This section attempted to define the concept of attitudes towards science by
exploring the various dimensions associated with the affective domain of learning.
A longitudinal study by Oliver and Simpson (1988) showed a strong relationship
between three such affective variables – attitude towards science, motivation to
achieve, and the self-concept that individuals have of their own ability – and
achievement in science. This relationship is further explored in Section 2.2.3.1 on
associations between perceptions of the learning environment and attitudes towards
science.

2.3.2 Assessment of Student Attitudes

Students’ attitudes towards science can be assessed using questionnaires, open-
ended questions, interviews, preference rankings, and the like. The earliest, most
notable instrument was developed by Perrodin (1966), who assessed the attitudes of
over 500 fourth, sixth, and eighth graders in the US using qualitative methods.
Later, Moore and Sutman (1970) created the Scientific Attitude Inventory to assess
emotional and intellectual attitudes toward science among secondary school
students. The development of a number of other similar attitude instruments ensued
over the past few decades, but many of them fail to meet sound psychometric
standards, according to a comprehensive review of 66 instruments for measuring
attitudes (Blalock, Lichtenstein, Owen et al., 2008) and many other critics who
question their conceptual and empirical quality (Gardner, 1975; Munby, 1997;
Schibeci, 1984).

Similarly, Fraser (1978) noted three major limitations of existing instruments used
to assess attitudes toward science: low statistical reliability, a lack of economy of
items, and the combination of different attitude dimensions into a single scale which
creates a mixture of variables. In response, Fraser (1981) developed the Test of
Science-Related Attitudes (TOSRA). This is the instrument that was selected for the
current study because some of its scales were deemed highly suitable for the
investigation of how students’ attitudes towards science changed as a result of using
virtual laboratories.

Because the TOSRA is based on Klopfer's (1976) classification of the affective
domain, its scales correspond to Klopfer's attitudinal categories (see Table 2.4) with
some modifications. In constructing the specific items, Fraser sought the expertise
of science teachers and researchers involved in educational measurement.

Table 2.4 Fraser's (1981) TOSRA Scales and Klopfer's (1971) Classification

TOSRA Scale Name | Klopfer Classification
Social Implications of Science | H.1 Manifestation of favorable attitude towards science and scientists
Normality of Scientists | H.1 Manifestation of favorable attitude towards science and scientists
Attitude to Scientific Inquiry | H.2 Acceptance of scientific enquiry as a way of thought
Adoption of Scientific Attitudes | H.3 Adoption of scientific attitudes
Enjoyment of Science Lessons | H.4 Enjoyment of science learning experiences
Leisure Interest in Science | H.5 Development of interest in science and science-related activities
Career Interest in Science | H.6 Development of interest in pursuing a career in science

The TOSRA is a widely-used questionnaire for assessing attitudes towards science (Aldridge & Fraser, 2003, 2008; Fisher, Henderson, & Fraser, 1995; Fraser,
Giddings, & McRobbie, 1995; Koul, Fisher, & Shaw, 2011; Ogbuehi & Fraser,
2007; Quek, Wong, & Fraser, 2005), and it is intended to be used by teachers or
researchers with grades 7–10 science students. Its attitude scales were originally
validated in Australia with a total of 1,337 students from 11 schools that varied
socioeconomically. The final version contains 10 items in each of the seven scales
(Social Implications of Science, Normality of Scientists, Attitude to Scientific
Inquiry, Adoption of Scientific Attitudes, Enjoyment of Science Lessons, Leisure
Interest in Science, and Career Interest in Science) (Fraser, 1981). Each item is rated on a five-point Likert scale with the responses Strongly Agree, Agree, Undecided, Disagree, and Strongly Disagree. Approximately half of the items in the TOSRA
are negatively worded, thus challenging the respondent to think carefully about each
statement.
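
As a minimal illustrative sketch (with hypothetical item names and responses, not actual TOSRA items), scoring such a scale involves reverse-coding the negatively worded items so that a higher score always reflects a more favorable attitude:

    # Illustrative sketch only: scoring a Likert scale in which some items
    # are negatively worded. Responses are coded 1 (Strongly Disagree) to
    # 5 (Strongly Agree); item names are hypothetical.
    import pandas as pd

    responses = pd.DataFrame({
        "item_1": [5, 4, 2],   # positively worded
        "item_2": [1, 2, 4],   # negatively worded
        "item_3": [4, 5, 3],   # positively worded
    })
    negative_items = ["item_2"]

    scored = responses.copy()
    # Reverse-code negatively worded items: 5 -> 1, 4 -> 2, ..., 1 -> 5.
    scored[negative_items] = 6 - scored[negative_items]

    # Each student's scale score is the sum of the scored items.
    print(scored.sum(axis=1))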

The questionnaire was further validated in a cross-national study of 1,161 students in Australia and Indonesia (Fraser, Aldridge, & Adolphe, 2010), with 1,592 Grade 10 chemistry students in Singapore (Wong & Fraser, 1996), and with 1,110 high school students in Turkey and the US (Welch et al., 2012).

Modifications have been made to adapt the TOSRA scales to mathematics classrooms to form the Test of Mathematics-Related Attitudes (TOMRA), which has
been employed in evaluating educational innovations (Ogbuehi & Fraser, 2007;
Spinner & Fraser, 2005), as well as to college computer courses to form the
Attitudes towards Computers and Computer Courses (ACCC) survey that added the
scales of Lack of Anxiety, Enjoyment, Usefulness of Computers, and Usefulness of
the Course (Newby & Fisher, 1997). Moreover, sometimes the TOSRA is used in a
modified form, generally consisting of one or a few scales rather than all seven
scales, or in a form with a lower reading level for younger students.

For the current study, attitudes were assessed using a modified version of the
Enjoyment of Science Lessons scale as well as the Attitude to Scientific Inquiry
scale. Sample items of the former include “I look forward to this class” and “This
class is among the most interesting at this school”, whereas examples of the latter
scale are “I would prefer to do experiments than to read about them” and “It is better
to create my own hypothesis than to be given a hypothesis to test out”. To avoid
confusion in responses, the items were all worded positively, as recommended by
Barnette (2000). In addition, because each TOSRA scale contains 10 items, items that were highly similar to others in the same scale were removed so that each scale matched the eight-item length of the other scales in my study’s questionnaire.

2.3.3 Impact of Educational Interventions on Students’ Attitudes

Recently, research about students’ science-related attitudes has been on the rise
because of a decrease in student enrolment in the sciences at the secondary and tertiary levels of education, especially in Western countries (Osborne, Simon, & Collins, 2003). In fact, there seems to be an inverse relationship between the economic advancement of a country and its students’ interest in school science.
In general, attitudes towards science tend to decline with age: students in the younger grades report enjoying science lessons, middle-school students begin to lose interest, and high school students enjoy science the least of all age groups. Similarly, gender differences in attitudes are less apparent in the
younger years and emerge during middle school, especially in relation to the
compartmentalization of the sub-topics within science, such as physical science and
chemistry (Oliver & Venville, 2011; Tytler & Osborne, 2012). Nevertheless,
despite the decrease seen in science attitudes, the overall interest in science remains
predominantly positive (Tytler & Osborne, 2012).

The decline in attitudes towards science is particularly troubling because attitudes correlate with achievement. The Trends in International Mathematics and Science Study (TIMSS) showed a consistent relationship between attitudes and achievement over the years, with students with more positive attitudes having higher achievement in science than those with medium or low attitudes (Nasr & Soltani, 2011; Neuendorf, 2002). The 2006 PISA study reported that most students agreed that science is important to learn and that science and technology improve living conditions, but fewer students found science personally relevant, and fewer still expressed an interest in pursuing a science-related career. The study
also showed a correlation between socio-economic status and interest in science-
related careers (Organization for Economic Co-operation and Development
(OECD), 2009).

In searching for an answer to why attitudes to science decline relative to attitudes to other school subjects, a number of causes emerge based on students’ responses.
Students often complain that school science lacks relevance, that the curriculum is riddled with repetition from primary through middle to high school, that there are too few opportunities to discuss the implications of science, and that copying notes from the teacher or textbook is overemphasized as the standard form of writing (Tytler & Osborne, 2012). To generalize, there are actually many factors
that determine students’ interest in school science: gender, the quality of teaching, and pre-adolescent experiences are the most notable determinants, but others include
self-evaluation of science ability, parental expectation and level of guidance,
exposure to career guidance and goals, exposure to inspirational teachers, and
teacher expectation of success (Osborne, Simon, & Collins, 2003; Shibeci, 1984;
Tytler & Osborne, 2012).

Once such determinants are identified, researchers and educators can implement
proactive strategies that address such issues in order to improve students’ interest in
science. For instance, enrichment experiences in school science have been shown to be effective in improving students’ attitudes towards science (Quek, Wong, &
Fraser, 2005; Tytler & Osborne, 2012). Olitsky and Milne (2012) propose the
development of programs that focus on engagement in science, provide
opportunities for students to construct their own meanings in science through direct
experience, and engage students at an emotional level. In a study exploring Olympiad (honors-level) students’ attitudes towards and passion for science, more positive attitudes were observed as a result of this enrichment program, even though school science had originally decreased the students’ interest in science (Oliver & Venville, 2011).

Nasr and Soltani (2011) conducted a longitudinal study to examine the relationship
between attitudes towards science and achievement in science in a grade 10 biology
course in Isfahan, Iran. They found no statistically significant differences between
the sexes. However, using the Simpson–Troost Attitude Questionnaire–revised
(STAQ–R), meaningful positive associations were uncovered between achievement
and the dimension of ‘biology is fun for me’. The other dimensions, which lacked
significant associations with achievement, included Motivating Biology Class, Self-
Directed Efforts, Family Models, and Peer Models.

In the evaluation of a unique image-processing course that integrates STEM subjects with students’ personal worlds and digital culture, Israeli middle-school learners’
motivation to engage in the subject increased as a result of the intervention. In fact,
the increase was greater for girls than for boys. This study used both quantitative
methods, through the use of the Interest in Computers (IiC) questionnaire, and
qualitative methods, through documenting learners’ comments throughout the course, observing their levels of motivation, and through photographs and
videotapes (Barak & Asad, 2012).

In a similar attempt to use computer animation and illustration activities to improve high school achievement in molecular genetics, the authors designed an attitude item
(“Do you find molecular genetics more difficult than other topics in biology?”) to
determine if the type of activity influenced attitude and thus achievement. Indeed,
58% of the control group reported that molecular genetics was very difficult,
indicating a negative attitude, compared with 24–38% of the experimental group.
Accordingly, the experimental group showed greater knowledge in this topic than
the control group, although interview responses revealed that the animation activity
was significantly more effective than the illustration activity (Marbach-Ad, Rotbain,
& Stavy, 2008).

Many studies using a learning environments framework also investigated whether students’ attitudes improve as a result of an intervention. Often, associations are
found between students’ attitudes towards the subject and learning environment
scales (Fraser, 2012). In this way, both attitudes and perceptions of the environment
might be linked to achievement and can better inform educators about how to
improve achievement in science.

Educational innovations that have resulted in improved student attitudes towards STEM subjects include a technology-rich environment (Aldridge & Fraser, 2003;
Koul & Fisher, 2005), the introduction of inquiry laboratories (Wolf & Fraser,
2008), the integration of children’s literature into mathematics (Mink & Fraser,
2005), a unique science course for prospective teachers of elementary students
(Martin-Dunlop & Fraser, 2007), the introduction of an innovative mathematics
program called the Class Banking System (Spinner & Fraser, 2005), the use of
anthropometric activities with biology students (Lightburn & Fraser, 2007),
computer-assisted learning environments (Teh & Fraser, 1994), the use of laptop
computers (Raaflaub & Fraser, 2002), and adult computer application courses (Khoo
& Fraser, 2008).

Many of these studies also revealed positive associations between attitudes, measured by the TOSRA, and learning environment scales from questionnaires such
as the SLEI (Fraser, Giddings, & McRobbie, 1995; Kijkosol, 2005; Martin-Dunlop
& Fraser, 2007), the QTI (Fisher, Henderson, & Fraser, 1995; Kijkosol, 2005; Quek,
Wong, & Fraser, 2005), the WIHIC (Khoo & Fraser, 2008; Martin-Dunlop & Fraser,
2007; Raaflaub & Fraser, 2002; Wolf & Fraser, 2008), and the TROFLEI (Aldridge
& Fraser, 2003, 2008; Koul, Fisher, & Shaw, 2011; Koul & Fisher, 2005).
Adaptations of the TOSRA to mathematics classes, called the TOMRA, also showed
positive associations with learning environment scales from the CLES, WIHIC,
ICEQ, and MCI (Mink & Fraser, 2005; Ogbuehi & Fraser, 2007; Spinner & Fraser,
2005).

My study investigated whether virtual laboratories affect students’ attitudes towards science, and it also explored associations between dimensions of the learning
environment and the student outcomes of attitude and achievement. If such
associations exist, then virtual laboratories might not only directly affect
the learning environment, but also indirectly affect students’ attitudes and
achievement.

2.4 Gender Differences in Science Education

The current study investigated the effectiveness of virtual laboratories (the third
research question) as well as their differential effectiveness for males and females
(the fourth research question). While a full review of literature on gender issues in
science education is beyond the scope of this thesis, a review of the perceptions,
attitudes, and achievement of the different sexes in science education is necessary to
provide a context for this investigation of whether virtual laboratories assist or
hinder gender equity. If virtual laboratories assist in closing the gender gap in
science education, they could be utilized in the classroom with greater confidence
about their many benefits. On the other hand, if virtual laboratories are
differentially beneficial for one sex over another, such differences would have to be
taken into account when implementing their use in the classroom.

For decades, educational research has pointed to differences in attitudes and achievement between boys and girls in the sciences. Even today, a strong perception remains that males are inherently better at mathematics and science (Hill, Corbett, & St. Rose, 2010; Scantlebury, 2012). In fact, the National
Assessment of Educational Progress (NAEP) reported in 2011 that American males
in grade eight scored on average five points higher than females in science
achievement examinations, which is consistent with the same study conducted in
2009 (National Center for Educational Statistics (NCES), 2012a). However, recent
research has pointed to the absence of such a gender gap in the sciences (Koul,
Fisher, & Shaw, 2011; Scantlebury, 2012). Whether the absence of a gender gap
naturally exists or whether it exists as a result of interventions intended to create
equality between the sexes is reviewed below.

In 2009, the Programme for International Student Assessment (PISA) revealed small
gender differences amongst 15 year-old science students regarding attitudes and
achievement, but the results were inconsistent in that they varied with different
countries, types of schools, and socio-economic levels (Organization for Economic
Co-operation and Development (OECD), 2009). The Trends in International Mathematics and Science Study (TIMSS, 2007) reported gender differences in favor of girls at the fourth and
eighth grade levels. As well, female students in grades 4, 8, and 10 scored higher
than males on hands-on science tasks, though males scored higher on the traditional
paper-and-pencil science assessment. In the same study, there was no gender gap in
interactive computer tasks in science (National Center for Educational Statistics
(NCES), 2012b). In an individual study of grade 10 biology students in Isfahan,
Iran, no significant differences between males and females were reported for
attitudes, but females scored higher in achievement (Nasr & Soltani, 2011).

Regarding students’ perceptions of their learning environments, the framework for the current study, gender differences also have been reported. Fraser and Tobin (1991) argue that the personal form of a learning environment questionnaire, in which students respond to statements in the first person (e.g. “I pay attention during this class”), is more sensitive to within-class sub-group differences, such as gender, than the class form, which presents statements with which students agree or disagree or for which they indicate the frequency of occurrence in the classroom. For the
personal, actual form of the Science Laboratory Environment Inventory (SLEI),
females reported greater Student Cohesiveness, Integration, and Material
Environment than males (Fraser, Giddings, & McRobbie, 1992, 1995).

Numerous studies using various learning environment questionnaires have
replicated a pattern in which females scored more highly than males on scales such
as Rule Clarity, Task Orientation, Cooperation, Equity, and Teacher Support.
However, for scales such as Involvement, Investigation, Differentiation, and Young
Adult Ethos, variable results have been reported for differences between the sexes
(Aldridge & Fraser, 2008; Khoo & Fraser, 2008; Kijkosol, 2005; Koul, Fisher, &
Shaw, 2011; Quek, Wong, & Fraser, 2005; Raaflaub & Fraser, 2002; Wolf & Fraser,
2008). In conclusion, females tend to perceive most aspects of their science learning
environment more favorably than their male counterparts. Furthermore, a number of
these same studies showed more positive attitudes towards science for males relative
to females (Khoo & Fraser, 2008; Raaflaub & Fraser, 2002; Wolf & Fraser, 2008).

In response to such inconsistent findings about gender differences in science classes, Scantlebury (2012) points out that differences within genders can be greater than
differences between genders. Differences include race/ethnicity, religion, class,
socio-economic status, and sexual orientation. For instance, socio-economic status
has been shown to have a greater impact on achievement than gender. Therefore,
Scantlebury notes a waning of interest in continued gender studies in science education. Kahle (2004) also points out that it is currently optional to
report achievement scores by gender for many state examinations in the US because
gender differences are no longer considered an issue.

To further understand gender differences that were found in past research, it is necessary to dissect the various aspects of such differences. It seems that, even
within the sciences, gender variation exists. Girls tend to prefer the life sciences
because they are interested in humans and animals (e.g. activities involving
collecting and cataloging seashells). Boys, on the other hand, tend to choose
activities in the physical sciences, perhaps because of exposure to games that
involve physical sciences such as shooting firearms. These gender-specific
preferences have been consistent throughout the literature for over 40 years
(Brotman & Moore, 2008; Farenga & Joyce, 1997; Hanson, 2009).

Similarly, gender differences found for interest in science-related careers have also
been consistent. Females tend to perceive a lack of relevance of the physical sciences to their personal lives and often avoid choosing careers that are heavily
based on such a subject. Instead, they tend to show interest in science careers that
involve nurturing (e.g. nursing). As well, life demands have a larger impact on
women than on men, which ultimately might cause women to neglect or
underachieve in science-related careers. The opposite is generally true for males
who show more interest in demanding science-related careers (Beede, Julian,
Langdon et al., 2011; Oakes, 1990; Scantlebury, 2012).

In general, students’ attitudes towards science decline as they go through the science
‘pipeline’ from preschool to their careers, but this decline is greater for girls than for
boys. Female interest in science typically begins to decrease in the middle school
years (Scantlebury, 2012).

Caleon and Subramaniam (2008) found a positive correlation between intellectual ability and attitudes towards science amongst fifth graders in Singapore. The boys in the study reported more positive attitudes towards science than the girls and were also more likely to achieve higher scores. Because attitudes are linked to
achievement (also see Section 2.2.3.1), and attitudes amongst females tend to
decline with grade level, female achievement in science is also negatively correlated
with grade level (Oakes, 1990; Scantlebury, 2012). This phenomenon might explain
why results from gender studies in science education are so inconsistent; more
insightful results could be obtained if the grade level of students in a study’s sample
is considered.

Why is the decline in interest in science with grade level unequal between the sexes? One contributing factor might be teachers’ preconceived notions about the difference in abilities between males and females. For instance, some research
has revealed that some teachers call on boys to answer more-challenging questions
and encourage them towards science-related careers (Oakes, 1990; Scantlebury,
2012). Huang and Fraser (2009) conducted a study involving 818 Taiwanese male
and female science teachers’ perceptions of the school environment. A critical
finding from this study was that male science teachers reported that science is a
subject more suitable for boys and that they encouraged boys more than girls in this
area, while female teachers viewed science as equally important for boys and girls.

If teachers instill more confidence in males than in females in the sciences, then
males are more likely to excel. In fact, Thompson (2008) claims that gender
differences in science are due to differences in levels of self-confidence in learning
science, rather than intellectual ability, and, because males have more self-
confidence, they tend to outperform females.

Thus, it seems that, naturally, little difference exists between the sexes regarding
their attitudes and achievement in science, but that these gender differences could be
created by teachers or other educational interventions that tip the scale in favor of
male interest and achievement in science. This could also explain why traditional classrooms might have a narrower gender gap than classrooms with an innovative intervention (Wolf & Fraser, 2008).

If such gender differences, whether natural or contrived, exist, how might education
be reformed to encourage more female interest in the sciences in order to reduce or
eliminate the gender gap? This question was addressed in the early 1980s with the
rise of feminism by the introduction of ‘girl-friendly’ curricula that highlight
women’s contributions to science and other female-focused themes (Scantlebury,
2012).

In another study, grade nine Israeli Arab students were exposed to an integrative Science, Technology, Engineering and Mathematics (STEM) intervention involving image processing using computers. In this case, the pretest of interest in learning computers in school showed sex differences, with males scoring higher than females, but
no significant differences were found between the sexes for the posttest (Barak &
Asad, 2012). An intervention such as this can help to decrease the gender gap in
science education.

Therefore, ideally, the evaluation of any intervention in science education today would also evaluate whether the intervention is differentially effective for males and
females. Consequently, this study also included a research question about the
differential effectiveness of virtual laboratories for different sexes.
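
Statistically, such differential effectiveness appears as an instruction-by-sex interaction. As a minimal sketch of how that interaction can be tested (with hypothetical data and variable names, not this study’s actual analysis), a two-way ANOVA can be specified as follows:

    # Illustrative sketch only: testing an instruction-by-sex interaction
    # with a two-way ANOVA. Data and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    data = pd.DataFrame({
        "score":  [72, 68, 75, 80, 65, 70, 78, 82],
        "method": ["virtual"] * 4 + ["control"] * 4,
        "sex":    ["M", "F", "M", "F", "M", "F", "M", "F"],
    })

    # A significant C(method):C(sex) term indicates that the intervention
    # is differentially effective for males and females.
    model = ols("score ~ C(method) * C(sex)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))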

2.5 Virtual Laboratories in Science Education

Section 2.2 reviewed literature concerning the main measure of effectiveness in my study (i.e. perceptions of the learning environment). Section 2.3 surveyed the
literature regarding another measure of effectiveness (i.e. attitudes), and Section 2.4
provided a literature review of gender issues in science education because that was
an additional aspect considered in my study. However, the actual intervention in my
study (i.e. virtual laboratories) still requires elucidation. Therefore, this section
reviews literature about virtual laboratories as well as the larger context of
educational technology.

The literature outlining the advantages of integrating technology into science classrooms is presented in Section 2.5.1. Then Section 2.5.2 reviews literature
detailing the definition, history, and particular benefits of virtual laboratories.
Section 2.5.3 examines the literature describing virtual learning environments and
the context for virtual laboratories, and this is followed by an overview of studies
that evaluated virtual laboratories (Section 2.5.4). To provide a balance, Section
2.5.5 concludes with a review of the literature that is skeptical about the overall
effectiveness of integrating educational technology into classrooms.

2.5.1 The Proponents: Rationale for Integrating Educational Technology

The use of technology for instruction is not a new idea. In reality, what the term ‘technology’ refers to changes with time. In the early part of the 20th century,
‘technology’ might have referred to phonographs and transistor radios, progressed to
sound recordings, television and computers (Russell, 1999), and more recently
included interactive whiteboards (Moss, Jewitt, Levaaic et al., 2007), Personal
Response Systems (Herrmann, 2012), iPads (Nooriafshar, 2011), and other mobile
devices (Milrad & Spikol, 2007). Naturally, these technologies have been adapted
to the educational realm and, alongside, their educational effectiveness was
evaluated. A full review of the integration of technology in education is beyond the
scope of the current study; this section merely examines the general role of
technology in science education.

Many technological advances are quickly revolutionizing the rate of discovery and
youngsters are expected to be familiar with such innovations. As Javidi (1999) notes: “To allow educational tools to fall behind the pace of technological
advance is to sell out a generation of learners” (p. 1). Because students learn better
from processes which are sensory, visual, inductive, and active (Felder & Silverman,
1988), they benefit from lessons that are interspersed with technology-rich activities
that contain digital images and animations, activities that involve the use of
simulations and databases, and research via the internet (Beichner, Bernold,
Burniston et al., 1999; Trindade, Fiolhais, & Almeida, 2002).

Whether or not the evidence supports its use (see Section 2.5.5), technology is the
comfort zone for many students today. This idea is most eloquently summarized by
the terms ‘digital natives’, referring to those born into an era surrounded by
technology and who are thus conferred with the ability to manage it, and ‘digital
immigrants’, referring to those who need to adjust to technological innovation; it is
argued that the thought processes of the former are fundamentally different from
those of the latter (Prensky, 2001). Therefore, the modernization of presentation
modes in education might be of benefit to students and to teachers who could use
more tools to reach the young minds that have been trained by popular entertainment
media to seek constant stimulation.

In studies on integrating Information and Communication Technologies (ICT) in four different countries, results showed that students perceived most aspects of the
learning environment to be positive, thus influencing students’ overall perceptions
of the science classroom, which are linked to improvements in achievement
(Zandvliet & Buker, 2003; Zandvliet & Fraser, 2004). A meta-analysis of 25 studies
that integrated technology into classrooms (mostly computer-based) resulted in a
positive, but small to moderate, effect favoring the use of technology over
traditional instruction without technology, and showed that computer technology
used to support instruction was more effective than technology applications that
provide direct instruction (Tamim et al., 2011). Norton et al. (2007) focused on
robotics in middle school science classes and also indicated that integrating
technology into these classes allowed students to think for themselves, apply logical
thinking, be creative, and be autonomous.

As applied to the natural sciences, specifically regarding the topic of genetics, one study showed that the use of multiple representations, dynamically linked in an interactive multimedia program called BioLogica, enhanced students’ learning of introductory genetics. In this case, the intervention enabled teachers “to increase the
use of visual-graphical representations, thus making genetics more interesting and
easier to learn and understand” (p. 285). The authors underscore the role of the
teacher in encouraging students to engage with such multimedia programs (Tsui &
Treagust, 2004).

As noted in Section 2.3.3, a similar attempt used computer animation and illustration activities to improve high school achievement in molecular genetics; the experimental group reported more positive attitudes towards the topic and showed greater knowledge than the control group, with the animation activity proving significantly more effective than the illustration activity (Marbach-Ad, Rotbain, & Stavy, 2008).

Overall, innovations that alter the dynamic of the traditional classroom, from
collaborative teaching to the incorporation of technology such as online textbooks
and virtual laboratories, to instances of ‘learning without walls’ such as fully online
classes or distance education, initiate a paradigm shift in defining the learning
environment. With such innovations, the teacher’s role as director diminishes and a
new model of teacher as facilitator emerges that allows for more student-focused
learning; the focus is on ‘learning’ and not necessarily on ‘teaching’ (Chang &
Fisher, 2003; Rogers, 2000). Another byproduct of such innovations, especially
concerning online and distance education, is the globalization of communication
within education, which allows trans-cultural exchange (van de Bunt-Kokhuis,
2001).

Zandvliet and Fraser (2004) note a number of challenges that prevent the successful
integration of technology into classrooms. They point out that the use of ICT in
schools is partially attributable to technological, commercial and societal pressures but that, once a school invests in ICT, there is little support to make it educationally
beneficial. To do so, schools need to better integrate ICT with their curriculum and
instruction, which might be augmented by the physical learning environment. The
authors discuss the need for a healthy balance of all spheres of influence: the ecosphere (e.g. equipment, network), the sociosphere (interactions with other people, perceptions, outcomes, learning, attitude), and the technosphere (technical factors that impact instruction, such as the goals of teachers) (Zandvliet & Fraser, 2005).

Another significant factor that affects the usefulness of ICT is the technological
experience of the teacher. The National Center for Educational Statistics (Smerdon,
Cronen, Lanahan et al., 2000) revealed that 99% of teachers in public schools in the US had access to computers or the Internet, and that 84% had at least one computer in the classroom, but only 20% felt well-prepared to integrate technology into their teaching. Similarly, another study showed that teachers used ICT primarily for email to communicate with homes, while students used it mainly for word processing and Internet research; therefore, neither group was engaging in the full range of tasks and advantages that ICT offers (The California Educator, 2003). A common
frustration for teachers using ICT is the amount of time spent on technical issues
rather than instructional ones (i.e. the technosphere is too large) (Zandvliet & Fraser,
2004).

Perhaps, owing to some of these challenges, Jones (2012) argues that the impact of
technology on the teaching and learning of science “has probably not reached the potential we thought it might when we began exploring its introduction 25 years ago” (p. 820). Either studies are simply not producing evidence that technology integration is beneficial (see Section 2.5.5), or the process of integrating
technological programs, training teachers in their use, providing better spaces in
which to use technology, and designing more accurate studies to evaluate their
effectiveness. My study represents one such attempt to add to the body of research
on the effectiveness of integrating technology in science classes.

2.5.2 Virtual Laboratories

The National Science Foundation’s (NSF) Task Force on Cyberlearning proposes upgrading the state of Science, Technology, Engineering, and Mathematics (STEM)
education by incorporating interactive technology (Borgman et al., 2008). It points
to a changing society and how education must also “respond dynamically to prepare
our population for the complex, evolving, global challenges of the 21st century” (p.
5). More specifically, the NSF promotes the growth of a cyberlearning
infrastructure that is networked, customizable, and computationally rich, with one
example being virtual laboratories.

This section examines the literature that specifically addresses aspects of virtual
laboratories such as its definition (Section 2.5.2.1), history (Section 2.5.2.2), and
benefits (Section 2.5.2.3).

2.5.2.1 Definition

The specific attempt to integrate technology into science classrooms that was
assessed in this study concerns virtual laboratories, which are interactive
environments for conducting simulated experiments. In more general terms, a
virtual laboratory is defined as “an electronic workspace for distance collaboration
and experimentation in research or other creative activity, to generate and deliver
results using distributed information and communication technologies”, according to
the International Institute of Theoretical and Applied Physics at the Expert Meeting
on Virtual Laboratories in Iowa, USA in 1999 (Rauwerda, Roos, Hertzberger et al.,
2006, p. 230). Essentially, such modalities make use of networked content to
provide a rich immersive learning environment using visualizations, graphics, and
interactive applications.

The term ‘virtual laboratories’ is often used loosely amongst software developers who wish to entice educators into using their products. Indeed, the concept encompasses five
different categories, according to Harms (2000), only three of which are currently
relevant to this study (Borgman et al., 2008; Nedic, Machotka, & Nafalski, 2003)
and whose boundaries also become somewhat blurred (Ma & Nickerson, 2006):

 Simulations that contain certain elements of laboratory experiments but are mainly used for visualizations and are available online. These are referred to as classical simulations and ‘CyberLabs’ and are further discussed in Section 2.5.3.1.

 Simulations that attempt to represent laboratory experiments as closely as possible by engaging students in inquiry skills, called Virtual Labs, which are the subject of this section.

 Real experiments that are controlled via a network, the settings and output of which are accessible through the Internet. These are known as Remote Labs.

The benefits of Remote Labs are discussed by Alhalabi (1998). Remote Labs were first used most commonly for robotics and later expanded to other areas of engineering. Examples include the University of South Australia’s NetLab (Nedic,
Machotka, & Nafalski, 2003), MIT’s iLabs project that offers microelectronics test equipment and the like (http://icampus.mit.edu/ilabs/), Second Best to Being There
(SBBT) from Oregon State University that provides remote students with complete
access to a control engineering laboratory (Bohus, Aktan, Crowl et al., 1996), and
the Virtual Lab at Carnegie Mellon University
(http://users.ece.cmu.edu/~stancil/virtual-lab/concept.html). For example, the iLabs inverted pendulum experiment at the University of Queensland permitted users to access the experiment beyond laboratory hours and increased the rate at which students successfully balanced the pendulum from 5% to 69.5% (Borgman et al., 2008).
Another category delineated by the NSF Task Force on Cyberlearning is a mixed-
reality environment that combines digital content and real-world spaces that allows
users to see the machinery involved but interpret output electronically (Borgman et
al., 2008). However, further discussion about these types of remote virtual
laboratories is beyond the scope of this review and better explored under an
Information and Communications Technology (ICT) framework. The remainder of
this section explores the second category of virtual laboratories (i.e. simulations that
closely represent laboratory experiments).

2.5.2.2 History

Virtual laboratories have been developed by educational companies and institutions of higher learning through software or websites over the past four decades. They are
utilized at every level of education from primary school through secondary school,
at institutions of higher education, and for job training in medicine, security, and the
military (Felder & Silverman, 1988; Gallagher, Ritter, Champion et al., 2005;
Marchevsky, Relan, & Baillie, 2003; Nedic, Machotka, & Nafalski, 2003; Psotka,
1995; Rogers, 2000; Yasar & Landau, 2003). Recently, virtual laboratories have
even emerged in the scientific workplace as extensions of common meeting places,
fostering collaboration around certain topics of research (Rauwerda et al., 2006).

While the concept of virtual laboratories (as encompassing remote laboratories and simulations) dates back to the 1970s, the development of true virtual laboratories specifically related to the life sciences is of greater relevance to the current study.
One of the first such initiatives in the 1980s was the Genetics Construction Kit
(GCK) that illustrates classical Mendelian genetics by simulating fruit fly variations.
Similarly, simulations of genetic transmission of traits in cats, called CATLAB
(http://www.emescience.com/sci-genetics-catlab.html), in fruit flies, called the
Virtual FlyLab (http://biologylab.awlonline.com/), and in pea plants and dragons,
called BioLogica (http://biologica.concord.org/), were developed in the 1990s and
were widely used in science classrooms. Later, ViBE: Virtual Biology Experiments
(http://www.ece.rutgers.edu/~marsic/books/SE/projects/ViBE/) was created in 2001
to allow students to discover biological processes and practice laboratory skills. All
of these programs served as the inspiration for the Virtual Genetics Lab, developed
in 2007, to test predictions of genetic crosses for various traits in a hypothetical
insect (http://vgl.umb.edu/). It enabled students to “practice the logic of genetic analysis without the distractions of wet labs” but was not intended to “replace a wet lab” (White, Bolker, Koolar et al., 2007, p. 30).
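
To illustrate the kind of reasoning that such programs simulate, the following minimal sketch (a hypothetical example, not code from any of the packages named above) performs a simple monohybrid Mendelian cross:

    # Illustrative sketch only: a monohybrid Mendelian cross of the sort
    # modelled by genetics simulations. Each parent passes one randomly
    # chosen allele to each offspring.
    import random
    from collections import Counter

    def cross(parent1, parent2, n_offspring=1000):
        offspring = []
        for _ in range(n_offspring):
            genotype = "".join(sorted(random.choice(parent1) +
                                      random.choice(parent2)))
            offspring.append(genotype)
        return Counter(offspring)

    # Crossing two heterozygotes (Aa x Aa) yields genotype counts in
    # roughly a 1 AA : 2 Aa : 1 aa ratio.
    print(cross("Aa", "Aa"))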

A myriad of such software emerged in the 21st century for medical students and
university and high school students in the sciences (Yu, Brown, & Billet, 2005), but
the ones most commonly used in the current study include: the Howard Hughes
Medical Institute virtual laboratories (http://www.hhmi.org/biointeractive/vlabs/) for
exploring topics in molecular genetics, cardiology, neurophysiology and the immune system (HHMI, 2003), and the University of Utah’s virtual laboratories
(http://learn.genetics.utah.edu/) that prepare students with basic skills in molecular
genetics experiments and involve investigation of the molecular basis of cancer
(University of Utah, 2004). A full list of the virtual laboratories used in this investigation is provided in Appendix D, and a description of their implementation is included in Chapter 3.

2.5.2.3 Benefits

Because of the recent rise in the biotechnology industry, and the job opportunities
thus afforded, innovations in teaching biotechnology and molecular biology
concepts have become vital (Toth, Morrow, & Ludvico, 2009). In this manner,
virtual experiments enable users to concentrate on conceptual explanations because the virtual program keeps track of the details in the data, freeing users to focus on the ‘big picture’. Moreover, state education standards are becoming increasingly demanding,
as noted in Section 2.2.2, particularly regarding the molecular focus of biology with
which students often have difficulty. To address this concern, the use of virtual
laboratories in the classroom can help to make these molecular concepts more
concrete for students without requiring complex and costly equipment (Marbach-
Ad, Rotbain, & Stavy, 2008; Raineri, 2001), and thus assist in narrowing the gap
between lagging levels of student achievement and the proposed higher standards to
which students are held accountable. Therefore, the use of virtual laboratories can
aid in both the conceptualization and constructivist realm, allowing students to learn
by doing and become more engaged in their studies (Clancy, Titterton, Ryan et al.,
2003; Felder & Silverman, 1988; Gallagher et al., 2005; Marchevsky, Relan, &
Baillie, 2003; Yu, Brown, & Billet, 2005).

As well, traditional laboratories face a number of logistical challenges in that they


are expensive to maintain and thus scarce, require low student-faculty ratios, entail
long durations of time (a resource that is tightly rationed), depend on well-designed
activity sequences, and raise safety and ethical issues when handling toxic
substances or biological specimens. On the other hand, for virtual laboratories, issues
such as time, geographical distance, safety, and expenses are largely irrelevant
(Borgman et al., 2008).

Similarly, Toth et al. (2009) describe their efforts to develop a tool that preserves the
beneficial aspects of hands-on laboratory work while deepening the quality of
inquiry learning in a complex, error-prone environment; according to them, virtual
laboratories “allow the user to conduct the same scientific inquiry afforded by
hands-on investigation but at a reduced expense, with increased safety, and within
the time constraints of a…classroom” (p. 334). They also describe the benefits of
virtual laboratory equipment that automates routine tasks, such as mixing solutions
and forming agarose gels, and allows students to focus on the inquiry aspects of an
experiment rather than the technical tasks. Additionally, many virtual laboratories
contain visual representations or animations that explain the mechanism of the
virtual equipment, an interactive feature unavailable in hands-on laboratories where
only the end state of a reaction occurring inside complex machinery is revealed to
students (Toth, Morrow, & Ludvico, 2009).

Regarding safety, simulations of crises such as pandemics or results of natural disasters can be replicated and studied in a non-dangerous manner. Virtual
experiments also provide opportunities for physically-disabled students to perform
experiments in a risk-free environment by avoiding complex equipment and
materials that pose safety hazards (Cobb, Heaney, Corcoran et al., 2009). Such
virtual simulations might also benefit potential workers in need of training
(Muirhead, 2003).

Other practical advantages of incorporating virtual laboratories into science curricula include reduced teacher preparation and cleanup time, the elimination of the need for complex and costly equipment, materials, and physical laboratory space, and the provision of experiences not otherwise possible in many high school classroom settings (Yu, Brown, & Billet, 2005). A virtual environment for experimentation also enables
better multi-tasking. A number of experiments or equipment can be run at once and,
by its nature, the Internet allows a synthesis of different resources for learning rather
than the single, ‘authoritative’ voice of the textbook or instructor (Annetta, Klesath,
& Meyer, 2009; Dede, 2005).

Furthermore, virtual learning environments offer an emphasis on authentic scientific experiences because students can revise their original predictions for experiments by way of instant feedback from data manipulations, form more accurate mental
models of phenomena, and can even use these virtual simulations as practice to
prepare them conceptually for complex hands-on experiments (Zacharia, 2007). Yu
et al. (2005) constructed a system that draws on the instant feedback feature by
providing an intelligent tutoring agent that offers advice for students to correct their
mistakes while conducting a virtual experiment. Naturally, virtual experiments are
repeatable within and outside of the classroom, a feature that serves to prepare
students prior to beginning a hands-on experiment and allows them to review an
experiment after it has been conducted (Cobb et al., 2009; Reising, 2010).

While some virtual laboratories are designed to incorporate student collaboration (Cobb et al., 2009), others are focused on training individuals in the skills and
concepts of a particular experiment. The types of laboratories intended to enable
student collaboration excel in imitating a true scientific experience of not only
investigation but also the building of community. Indeed, one of the most important
features of web materials necessary to improve learner outcomes is a high degree of
interaction, which can be accomplished asynchronously (eg. emails and bulletins)
and synchronously (eg. Chat rooms) (Chandra & Fisher, 2009). On the other hand,
virtual laboratories that focus on the individual have the advantages of enabling shy
students to find more voice (Dede, 1999) and reducing the peer pressure from both
fellow students and teachers, thus allowing users to feel more comfortable about
making and learning from mistakes (Yu, Brown, & Billet, 2005); these advantages
might be applied to online experimentation that includes collaboration, as well,
because there is a degree of anonymity.

However, disadvantages of utilizing virtual laboratories include the use of
idealized data, lack of collaboration, and the absence of interaction with real
equipment (Hofstein & Lunetta, 2004; Nedic, Machotka, & Nafalski, 2003). Waight
and Abd-El-Khalick (2007) add that true inquiry in virtual experiments can also be
affected because of the perceived authority of technology. Winn et al. (2006) point
out that such technological tools can favor students who have more prior knowledge.
Ultimately, many of these disadvantages can be avoided with the application of
good design principles for the implementation of virtual laboratories (Annetta,
Klesath, & Meyer, 2009; Toth, Morrow, & Ludvico, 2009). To summarize, hands-on laboratory advocates emphasize design skills (Ma & Nickerson, 2006) and the
importance of making and learning from errors (Toth, Morrow, & Ludvico, 2009),
while virtual and remote laboratory advocates focus on the benefits gained in
conceptual understanding (Marbach-Ad, Rotbain, & Stavy, 2008; Marchevsky,
Relan, & Baillie, 2003; Raineri, 2001; Toth, Morrow, & Ludvico, 2009).

2.5.3 Virtual Learning Environments

Researchers and policymakers recommend that a modern learning environment should incorporate media and technology, including virtual experiences (Borgman et al., 2008; Saettler, 2004; Tamim et al., 2011). However, such an environment must be characterized by an understanding of the relationship between tasks and resources; integration; the establishment and maintenance of good study habits; confidence building; and the inclusion of enrichment, annotation, tracking, and feedback (Sirkemaa, 2003).
Naturally, these dimensions of a learning environment differ from those of a traditional one; this is referred to as a Virtual Learning Environment (VLE) (Yu, Brown, & Billet, 2005), or ‘v-learning’ (Annetta, Klesath, & Meyer, 2009).

A number of chapters in Khine and Fisher’s (2003) book about Technology-Rich Learning Environments characterize VLEs and deal with the changing aspects of a learning environment in the virtual world, but the intervention evaluated in this study was not intended to transport students into a separate VLE. Rather, the virtual
laboratories in the current study were meant to supplement the traditional classroom
learning environment, by borrowing elements from a VLE. Such elements include
simulations and the nature of online education, both explored in this section.

2.5.3.1 Simulations

Virtual science learning environments rely on the use of interactive simulations of scientific phenomena that are too small, large, slow, fast, simple or complex to
explore in a typical classroom. These simulations might stand alone or serve as the
media used by various technologies, such as virtual laboratories and Serious
Educational Games (SEGs). Both virtual laboratories and SEGs share the same type
of interface regarding simulations and interactivity, and might even share the
common goal of improving science learning, but the underlying premise is entirely
different. The latter seeks merely to increase students’ interest and engagement by enriching a science learning experience (Thurmond, Holmesa, Annetta et al., 2011),
but a full review of SEGs is beyond the scope of this discussion. The former, the
subject of the current study, is meant to either replace or supplement essential
experiences that could not otherwise be had in science classrooms. However, to
make virtual laboratories attractive, they are often designed similarly to SEGs
because “as the Net Generation (currently the leading population playing online
games) reaches college age, the adaptation of a three-dimensional, game-like
environment into a virtual classroom seems to be the natural evolution in online
learning” (Annetta, Klesath, & Meyer, 2009, p. 27).

Such simulations, based on visualizations and animations, have been heralded as essential to students’ conceptual understanding of complex topics, especially those requiring keen mathematical abilities and sustained logic that increase the cognitive load on students and endanger their ability to master a concept (Marbach-Ad, Rotbain, & Stavy, 2008; Toth, Morrow, & Ludvico, 2009; Tsui &
Treagust, 2004; Yasar & Landau, 2003). In fact, the benefits of simulations for
conceptual understanding are so pervasive that Van Rooy (2011) claimed the primacy of digital technology in instruction: “Much of bioscience can now only be effectively taught via
digital technology since its representational, symbolic forms are in digital formats”
(p. 1). Her study, based on qualitative data using classroom observations and semi-
structured interviews with teachers, pointed to the pedagogical benefit of using
digital technologies for students’ understanding of concepts in molecular genetics.

While a number of studies highlight only their beneficial outcomes, research regarding the effectiveness of simulations for science learning is inconclusive
(Sabah, 2011). Based on past research, the NRC’s Committee on Science Learning
concluded that there is much evidence for the positive impact of simulations on
conceptual understanding, some evidence that simulations motivate interest in
science, and less evidence about whether they support other science learning goals.
They view computer games and simulations as worthy of future investment from
entrepreneurs, and investigation by researchers, as a means to improve science
learning (National Research Council (NRC), 2011).

The current study defined simulations only as a component of virtual laboratories. Therefore, the review of literature concerning their effectiveness is limited in this chapter, but many other studies contain a more in-depth discussion of the benefits of
simulations (Bell & Trundle, 2008; Burkholder, Purser, & Cole, 2008; Dori &
Barak, 2001; Finkelstein, Adams, Keller et al., 2005; Marbach-Ad, Rotbain, &
Stavy, 2008; Winn et al., 2006).

2.5.3.2 Online Education

One of the areas in which virtual laboratories have the potential to be most useful is
online education. This also happens to be the fastest growing area in education
today. In the US, enrolment in full-time virtual schools has increased 40% in the
last three years and, according to the International Association for K–12 Online Learning, nearly two million students take at least one online class in the US alone
(Banchero & Simon, 2011; International Association for K–12 Online Learning
(iNACOL), 2012). Well-known American universities (e.g. Harvard University and
Stanford University) are beginning to invest in a venture that offers free classes
online despite the lack of economic gain (Perez-Pena, 2012). To ensure that their
students are well prepared for the world of online education and the future job
market, some school districts and states require the successful completion of an online course in order to graduate (Brown, 2012).

Whether to save costs, provide opportunities to regain credit for a previously-failed course, or offer enrichment options, K–12 schools are increasingly adopting online
course options. For instance, many schools in Florida, Illinois, and Massachusetts
have avoided the issue of class size by establishing ‘virtual classrooms’, essentially
large computer laboratories with a facilitator, that can accommodate more students
(Banchero & Simon, 2011; Herrera, 2011).

In some cases, schools are entirely online and there are no bricks-and-mortar
buildings. However, this is mostly frowned upon and the most beneficial
arrangement is a learning model that blends traditional instruction with online
activities or vice versa, as one professor of education and editor of The American
Journal of Distance Education stated: “There is no doubt that blended learning can
be as effective and often more effective than a classroom” (Herrera, 2011, Paragraph 20). In order to create a viable and effective arrangement for blended instruction,
Herrera describes three requirements: proper design of the virtual course (or aspect
thereof), the inclusion of direct teacher instruction within physical classrooms, and
an appropriate maturity level among students taking the course.

Naturally, online courses in the experimental sciences, as with distance education, face challenges without a physical laboratory. Some distance education programs
have adapted to these challenges by sending videotapes or home kits or arranging
hands-on experiences at local laboratories, but none of these options has proven to be particularly useful or beneficial (Alhalabi et al., 1998). Virtual laboratories provide a
solution for such courses by allowing students to learn the practical skills required
for inquiry: students can manipulate virtual equipment, gather and analyze data, and
even engage in virtual dissections.

The virtual environment in science can be extremely beneficial to students and institutions in developing countries that do not have access to highly complex
equipment and costly resources in laboratories. In fact, an initiative in India has
been established to enable such students to understand and ‘experience’ certain
experiments within many areas of science at the university level
(http://www.vlab.co.in/). Through this resource, users can access both simulation-
based and remote-triggered virtual laboratories, comprising over 800 different
experiments. The mission is to provide remote access to laboratories in various
disciplines of science and engineering to students and researchers:

Virtual Lab is a complete Learning Management System. All the relevant information including the theory, lab-manual, additional web-resources,
video-lectures, animated demonstrations and self-evaluation are available at
a common place. Virtual Labs can be used in a complementary fashion to
augment the efficacy of theory-based lectures. Small projects can also be
carried out using some of the Virtual Labs. Virtual Labs can be effectively
used to give lab-demonstrations to large classes. (Vlab, 2012, FAQ Section)

2.5.4 Overview of Studies Employing Virtual Laboratories

Although some projects using virtual laboratories have only recently begun in
schools, and started to show positive results, several researchers note the lack of
empirical evidence concerning their effectiveness (Harms, 2000; Hofstein &
Lunetta, 2004; Javidi, 1999; Javidi & Sheybani, 2006). Ma and Nickerson (2006)
acknowledge the necessity to further evaluate, via controlled studies, the educational
effectiveness of laboratory simulations developed by software companies.
Conversely, Chandra and Fisher (2009) urge teachers, albeit untrained in ICT, to
become more proactive in helping to develop educational technology because they
possess valuable knowledge and experience for designing and sequencing such
activities.

The following is an overview of studies involving an evaluation of the educational benefits of virtual laboratories; however, this discussion is limited to virtual laboratories that:

 seek to imitate a real laboratory experiment using inquiry skills and which involve students observing phenomena, formulating hypotheses, setting up controls, following procedures, testing hypotheses, and analyzing results. (Virtual experiences to clarify a concept through simulation/modeling are not included.)

 explore topics that are too complex to be investigated in real laboratories at the high school or university levels because of various constraints on time, safety, etc.

 are evaluated in an educational context (i.e. under the framework of improving science education) rather than improving a product’s design from the perspective of ICT.

 result in positive learning gains; inconclusive or negative results are presented in Section 2.5.5.

While formal evaluative analysis has yet to be completed, anecdotal and preliminary
evidence gathered over four semesters from university students whose biology course
integrated virtual laboratories from iLabs points to a gradual increase in class
performance. More promising was the significant decrease in the number of students
failing the course (Raineri, 2001). In a similar study of 39
college students taking an introductory biology course, using a crossover design to
compare hands-on and virtual laboratory activities, quantitative data showed no
effect of the order of the instructional methods, but revealed the effectiveness of
integrating virtual and hands-on laboratories over hands-on laboratories alone.
Qualitative data indeed pointed to the efficacy of engaging in virtual laboratories
before the hands-on ones (Toth, Morrow, & Ludvico, 2009).

Significant improvement over four years in student participation and satisfaction
was also seen amongst medical students experiencing web-based instruction in a
pathophysiology course. Attendance at laboratory sessions using virtual software
increased to almost 100%, compared to the approximately 30% to 40% attendance
in previous years when students had been required to bring their own microscopes to
study histological slides at their own pace (Marchevsky, Relan, & Baillie, 2003).

In another study that used ‘presence’ (the ability to perceive virtual representations
as real people or objects despite not being able to touch them directly) as a measure
of effectiveness, entomology students reported high levels of such ‘presence’ when
creating and manipulating a virtual ‘bug farm’ as a supplemental activity in their
course. The activity was a multi-user format similar to video games in a three-
dimensional environment. In this case, males experienced a greater sense of
‘presence’ than females (Annetta, Klesath, & Meyer, 2009).

Another evaluation of a virtual laboratory involved 184 high school chemistry
students in the US in a two-year crossover design with students being exposed to
both virtual laboratories and real laboratories about the same topic (stoichiometry).
The measures of effectiveness were laboratory performance, including the ability to
interpret data and comprehend the concepts learned from the investigation, students’
perceptions of the learning environment, and students’ attitudes towards laboratory
investigations and computers. No significant differences emerged in terms of
learning gains for the first trial, which illustrates that substituting virtual for physical
experimentation can be equally effective, but some significant learning gains were
noted for the second trial; therefore virtual laboratories were shown to be as
effective as, if not more effective than, physical laboratories (Pyatt & Sims, 2012).

Furthermore, the authors argued that ‘hands-on’ is a concept about interaction,
interpretation and revelation, more than it is about equipment use. The insight
offered by this study shows that opportunities to explore and manipulate
experimental variables matter more to students than operating physical equipment
(Pyatt & Sims, 2012). Similarly, studies involving using manipulatives for teaching
heat and temperature (Zacharia, Olympiou, & Papaevripidou, 2008) and for
experimentation in electric circuits (Zacharia, 2007) indicated that the use of virtual
equipment, when utilized in conjunction with physical equipment, was superior to
the use of physical equipment alone.

In another study, while quantitative results showed no significant differences in
learning gains, qualitative results revealed that students performing a virtual
laboratory in Second Life (http://secondlife.com/) reported more satisfaction and
asked fewer questions of the staff than when subsequently performing the same
laboratory practical in real life. These results indicate improved understanding
amongst students who performed the virtual laboratory compared with students who
did not perform the virtual investigation as a prerequisite to the activity. However,
the entire implementation of the study took only about three hours, calling into
question the validity of results from an activity based on a single occasion (Cobb et al.,
2009).

Because the topic of dissection in science classes has aroused much controversy
(Orlans, 1988), virtual laboratories that involve dissecting ‘specimens’ online
provide a viable alternative to real dissections. Studies of the value of virtual frog
dissections compared with traditional dissections using real specimens have revealed
mixed results; some suggested that real dissections are more effective (Cross &
Cross, 2004), while others suggested the supremacy of simulated dissections for
improved achievement (Akpan & Strayer, 2010). It should be noted that these
studies used small sample sizes and contained other methodological limitations.

In reality, science classes should blend real and virtual experiments so that students
acquire the skills necessary to perform the required technical tasks; virtual
simulations are useful for transferring knowledge and skills from an idealized
(virtual) environment into physical reality (Yu, Brown, & Billet, 2005). Indeed, a
number of studies suggest the desirability of integrating hands-on laboratories with
virtual ones and the effectiveness of engaging in virtual experiences prior to the real,
hands-on investigation (Akpan & Strayer, 2010; Cobb et al., 2009; Toth, Morrow, &
Ludvico, 2009). As well, Nedic et al. (2003) recommended concentrating on virtual
laboratories in the first year of a four-year engineering program and then slowly
working towards physical laboratories in the remaining years. In general, skill
acquisition through virtual environments is expected to be more successful if it is
scheduled on an interval basis, including the alternation of physical laboratories and
regular lessons, rather than amassed into a short period of intense practice
(Gallagher et al., 2005).

While this section examines the merits and demerits of virtual experiments that
cannot be conducted in real, physical laboratories, it is important to distinguish
between virtual and physical laboratory environments. The laboratory has been a
prominent feature of science education since the inception of teaching science
systematically in the 19th century. A laboratory refers to “experiences in school
settings in which students interact with equipment and materials or secondary
sources of data to observe and understand the natural world” (Hofstein & Kind,
2012, p. 190). However, in the early years of science experimentation in schools,
laboratories were simply environments in which to practice or confirm information
learned from lectures or textbooks. The laboratory’s evolution into a space for
exploration and inquiry took decades, and that process is still ongoing.
Ultimately, science learning environments that are rich in practical experiences, as
compared to those with few laboratory experiences, have been shown to be
beneficial for student attitudes and learning, a benefit that might ultimately
contribute to choosing a career in science (Hofstein & Kind, 2012; Hofstein &
Lunetta, 2004).

A number of studies have compared the effectiveness of virtual and physical
experimentation, as reviewed in a recent paper by de Jong, Linn, and Zacharia
(2013). They describe a physical laboratory as one that imitates reality. The
enthusiasm that results from students practising science in a ‘real’ laboratory,
similar to how ‘real scientists’ practise, helps in forming positive impressions early
on. The hands-on interaction with materials and equipment, and the trouble-
shooting involved, expose students to some of the challenges that real scientists
encounter. Additionally, the tactile experiences in a physical setting might enhance
conceptual development. In comparison, virtual laboratories manipulate reality. As
previously mentioned, a virtual environment allows idealized data, as well as
unobservable data, and avoids technical problems associated with equipment.
Virtual laboratories allow interactions with equipment and materials and so the
definition of ‘hands-on’ takes on a new meaning beyond the tactile realm. In line
with the handful of studies described above, their review also concludes that a
blend of physical and virtual environments is the most effective method for allowing
both physical interaction and conceptual development in science. In fact, the
determining factor in the effectiveness of any method is not the context in which the
experience takes place, but the degree to which inquiry is fostered (de Jong, Linn, &
Zacharia, 2013).

The term ‘inquiry’ was originally described by Kempa and Ward (1975) as involving 1)
planning an experiment, 2) carrying out the experiment, 3) making observations, and 4)
analyzing, applying, and explaining results. More recently, Hofstein and Kind
(2012) stress the importance of incorporating metacognition into all activities so that
students are engaged in planning how to approach a task, monitoring their
comprehension of a task, and evaluating their progress as they execute the task.
Four conditions are necessary in order to foster an environment of inquiry where
metacognition can occur: time, opportunity, guidance, and support (Baird & White,
1996). Regarding the first condition, time can be afforded by reducing the amount
of time spent on tasks that can be handled by technology, as in virtual
experimentation.

There is a plethora of evaluations of virtual innovations from the field of
information technology in which the computer basically served as a virtual
laboratory that simulates natural phenomena. However, because the purpose of
most of those studies was to improve the technology developed in order to expand
its usage, and perhaps increase financial gains, evaluation of such products for
educational benefits could be superficial. Additionally, as illustrated, many of the
studies above evaluating virtual laboratories from an educational standpoint were
based on small sample sizes and did not adhere to strict standards of research.
Consequently, there is a dearth of solid evaluative research on virtual laboratories
from an educational perspective, and especially within a learning environments
framework. Therefore, the aim of my study was to evaluate the effectiveness of
virtual laboratories used in educational settings at the high school level, in terms of
the learning environment, attitudes, and achievement.

2.5.5 The Critics: The No Significant Difference Phenomenon Regarding
Educational Technology

Thomas L. Russell (1999), in his book entitled The No Significant Difference
Phenomenon, points out an interesting trend regarding educational technology that
started in 1928 and continues to the present (http://www.nosignificantdifference.org/).
In his introduction, Russell reveals that he began with the intention to document a
well-known ‘fact’ that technology improves instruction, but his findings surprised
him: only a handful of studies showed any measurable positive effect of technology
on education and they were offset by studies indicating a negative impact. Mostly,
he concluded, studies of the effectiveness of educational technology resulted in no
significant differences. The following is a brief survey of the literature revealing
this trend.

Starting with the advent of new instructional technologies in the early 20th century,
overly hopeful inventors envisioned a future without textbooks. In 1913, Thomas
Edison stated, “Books will soon be obsolete in the schools.... Our school system will
be completely changed in 10 years” (Saettler, 2004, p. 98), referring to the
emergence of the motion picture as a new medium for education. Contrary to this
claim, textbooks remain in frequent classroom use a century later.

One of the first academic evaluations of the application of technology to
education focused on correspondence education involving the use of media such as
loudspeakers (Loder, 1937) and phonographic recordings (Rulon, 1943). The
achievement scores of students who were face-to-face with their instructors were
compared with scores of students who were not; neither study showed significant
differences. Nor were any significant differences found between students learning
via instructional radio and students being taught by traditional methods (Woelfel &
Tyler, 1945). In 1950, a study of 9th-grade biology students compared three
instructional methods: sound films, sound films plus study guides, and a standard
lecture demonstration. Again, no significant differences in achievement scores were
revealed among the three groups (Van der Meer, 1950).

Early in the 1950s, television promised to be an effective medium of instruction in
the classroom, but the data showed otherwise. One of the first such studies
indicated that Instructional Television, or ITV, was as effective as face-to-face
instruction (Kanner, 1954). Subsequently, there were many studies of the
effectiveness of ITV which showed no significant differences (Thornton & Brown,
1968). Televised instruction was even applied to the acquisition of laboratory skills,
but no significant differences were found in students’ achievement compared with
that of students in face-to-face laboratories (Seibert & Honig, 1960).

In the 1950s, Purdue University initiated a special laboratory devoted to the
acquisition of languages utilizing the most advanced technology available at that
time; however, no significant differences were noted in studies that evaluated this
method (Fotos, 1955). Similarly, the promise of benefit to students regarding
educational media such as the kinescope (Parsons, 1957), telephone (Cutler,
McKeachie, & McNeil, 1958), multi-image presentation (Didcoct, 1958), and tape
recorder (Popham, 1961) was not fulfilled as evaluative studies produced no
significant differences.

The 1970s ushered in an era of computer exploration that had instructional
relevance. However, CAI, or Computer-Assisted Instruction, did not reveal much
success in terms of significant differences from traditional methods (Beard, Lorton,
Searle et al., 1973; Goldberg, 1997; Judd, Bunderson, & Bessent, 1970; Lee, 1985).
Neither did other media, such as movies (Atherton, 1971), time compression of
speech (Sticht, 1971), the Spitz Students Response system (Brown, 1972), audio-
conferencing (Holdampf, 1983), the electronic blackboard (Partin & Atkins, 1984),
video simulations (Atherton & Buriak, 1988; Thomas & Hooper, 1991), and
interactive video (Cennamo, 1990), emerge as educationally beneficial. By 1980,
Wilkinson (1980) stated: “The results of several decades of research…can be
summed up as no significant difference” (p. 5). In reviewing educational
technology, Thompson, Simonson, and Hargrave (1996) indicated that, for every
study showing educational benefits of a medium, there was another that suggested
the opposite. Yet again, nearly 20 years
ago, Salomon and Perkins (1996, p. 3) observed that “computers, in and of
themselves, do very little to aid learning. Their presence in the classroom along with
relevant software does not automatically inspire teachers to rethink their teaching or
students to adopt new modes of learning”.

With the advent of the Internet, the quantitative and qualitative growth of
instructional media provided a new focus for educational research. From the
integration of online software into classrooms (Goldberg, 1997; Klass & Crothers,
2000) to classes conducted entirely online (Hiltz & Wellman, 1997; Horn, 1994;
Johnson, 2002; Martin & Rainey, 1993; Mock, 2000), a new field of evaluation was
born, but results were generally consistent with the ‘no significant difference’
trend.

More recently, a myriad of technological innovations have continued to be
integrated into classrooms despite the lack of evidence regarding their effectiveness.
Currently, progressive schools cannot educate their students without the ‘essential’
interactive whiteboard, so it seems. Yet, in 2007, a team at the University of London
evaluated their Schools Whiteboard Expansion (SWE) project only to discover that
using interactive whiteboards did not influence students’ educational experiences at
all (Moss et al., 2007). The US Department of Education commissioned a study of
the effectiveness of reading and mathematics software widely used by primary
schools. The conclusion was that there were no statistically significant differences
between the test scores of students who used the software and those who did not
(Campuzano, 2009). The most recent proposal, while actually mirroring the one
envisioned by Thomas Edison 100 years ago, is to abandon physical textbooks in
favor of their electronic counterparts; however, a study conducted with university
students using digital technologies, such as Amazon Kindle, Sony eReader Touch,
Apple iPad, enTourage eDGe, and CourseSmart, showed no significant differences
in their learning relative to students using traditional textbooks (Weisberg, 2011).
Still, in early 2012, US government officials campaigned for the complete
substitution of digital textbooks for hardcover ones within a five-year time span,
citing South Korea’s plan to have such an initiative in place for its students by 2013
(Hiltzik, 2012).

While a full review of literature on the effectiveness of educational technology in
the last few decades is beyond the scope of this study, the research described above
is perhaps a glimpse into the array of literature pointing to the lack of evidence for
educational efficacy. More relevant to this review are the studies evaluating the
effectiveness of virtual laboratories, however limited in quantity. Some of these
studies, which claim educational benefits, are explored in Section 2.5.4. This
section is devoted to studies whose results indicate ‘no significant differences’ or
significant differences in favor of traditional methods over virtual ones.

To illustrate, no significant differences were found for an undergraduate
oceanography course in which students went on a real field trip to the sea or
engaged in a virtual activity that simulated the field trip (Winn et al., 2006). Virtual
and real equipment were found to be equally effective for middle school students
designing mouse-trap cars in a science class (Klahr, Triona, & Williams, 2007). A
comparison of real and virtual frog dissections in an AP biology class showed that
students dissecting the real frogs scored significantly better on a laboratory
practical than students using a virtual version (Cross & Cross, 2004). College
students taking an online introductory biology course generally perceived face-to-
face laboratories to be more effective than the virtual ones, although they did not
perceive the virtual laboratories to be ineffective (Stuckey-Mickell & Stuckey-
Danner, 2007). Finally, university students in a biotechnology class who performed
virtual laboratories through Second Life, a virtual world in which participants create
an avatar that interacts with other people and institutions, performed equally
successfully as students conducting the same experiment in real life (Cobb et al.,
2009). In summary, virtual laboratories, just like any other technological
intervention, are generally comparable in effectiveness to traditional methods of
learning, even though their developers, the media, and even educators purport them
to be superior.

Russell, in his original article (1992), questioned why empirical research results for
educational technologies are ignored, often to the detriment of the students.
Professional educators and, of course, technologists and product developers, adhere
to the myth that increased technological interaction, often the more appealing,
newsworthy, costly type, improves education. In fact, many new technologies are
claimed to produce statistically significant results. How is that possible?

In his introduction to Russell’s book, Richard E. Clark, who left a commercial
career in media to pursue a PhD in education, offers the following explanations for
the appearance of significant differences. First, many studies involving media are
invalidated as a result of inadequate design methods. Second, journal editors are
often biased towards reporting positive results, especially for evaluations of
educational technology. Naturally, economic interests drive the publication and
dissemination of studies showing positive, significant differences (Russell, 1999).

Clark attempts to explain why studies that evaluate technology produce no
significant differences. He refers to the ‘John Henry Effect’, a term first used by
Saretsky to memorialize an American steel driver who pushed himself so hard to
win against a steam-driven chisel; win he did, but he also died as a result (Saretsky,
1972). As applied to educational research on media, this effect describes a situation
in which the comparison group works harder to improve teaching and learning in
response to the perceived threat of competing with a new medium and its
sensationalized promise of results. In this way, the experimental and comparison
groups both emerge with positive results and no significant differences are
uncovered (Russell, 1999).

Another explanation of the ‘No Significant Differences’ phenomenon is offered by
Chris Dede (1999), a professor in Education and Information Technology at George
Mason University:

However, all these studies [evaluating particular educational technologies]
are limited in that the average performance of a group is compared for one
single mode of delivery versus another. This research does not recognize
that, for each medium utilized, some students are empowered, others
disenfranchised, and the net impact may average out the differences. (p. 23)

Critics, while admitting some potential effectiveness, also point to other downsides
of media: “Well-produced multimedia features can improve students' understanding
of difficult or recondite concepts. But there's a fine line between an enhancement
and a distraction” (Hiltzik, 2012, para. 21). Also, funds spent on multimedia drain
the financial resources available to recruit, hire, and train high-quality teachers, an
important determining factor in students’ attitudes and achievement.

Nearly 30 years ago, Clark stated: “The best current evidence is that media are mere
vehicles that deliver instruction but do not influence achievement any more than the
truck that delivers our groceries causes changes in nutrition...only the content of the
vehicle can influence achievement” (Clark, 1983, p. 445). In order to sway public
perception, a battle between the message and the media ensues and, usually, the
commercialized, sensationalized, and often irrational ideas of the media prevail.
Clark argues that adequate learning results will be produced regardless of the
medium and that we must choose the less expensive media to avoid wasting limited
educational resources.

The implications of this field of research do not dictate the abandonment of
evaluating educational technology. Rather, they suggest that unbiased empirical
research and judicious review of its effectiveness in education are all the more
necessary and must parallel the effort invested in the marketing and sensationalizing
of such innovations.

In fact, a finding of No Significant Differences is as important a finding as statistical
significance. At the very least, such results provide evidence that technology is not
detrimental to instruction and that such technologies can be used with confidence
when they indeed provide solutions that are cost-effective, efficient, and convenient.
For instance, a ‘no significant difference’ result for virtual laboratories is promising
for distance education. Furthermore, in trying to adapt content to instructional
media, the content and its delivery are actually reviewed, and this process in itself is
beneficial to improving instruction (Russell, 1999).

In conclusion, Sections 2.5.1–2.5.4 reviewed the benefits heralded by proponents of
educational technology, while Section 2.5.5 presented the opponents’ view. With
this balance, the reader can better evaluate the evidence presented later in this thesis
(Chapter 4). Ultimately, “Good teaching cannot be replaced by good technology,
but the merger of the two holds the promise for truly effective [online] instruction”
(Annetta, Klesath, & Meyer, 2009, p. 32).

2.6 Summary

Chapter 2 reviewed literature that provides the context for the current study that
sought to evaluate the effectiveness of virtual laboratories in terms of perceptions of
the learning environment, attitudes towards science, and achievement.

First, relevant literature that provides the learning environments framework for the
current study was reviewed. Included in this section was a review of questionnaires
for measuring perceptions of the learning environment from the perspective of the
student. This field of research has grown over the last 40 years beginning with
Lewin’s (1936) and Murray’s (1938) monumental ideas of connecting personality
and environmental influences to behaviour and accounting for personal needs,
environmental presses, and differences perceived by observers and participants.
Moos (1974) characterized human environments in terms of three dimensions
(relationship, personal development, and system maintenance and change), which
have served as the basis for various constructs assessed by the field’s many valid
and economical learning environment questionnaires. Several of these widely-used
questionnaires were selected for this study on the basis of their validity, reliability,
and applicability, namely, the Science Laboratory Environment Inventory (SLEI)
and the Technology-Rich Outcomes-Focused Learning Environment Inventory
(TROFLEI).

Historical developments in this field have led to the establishment of an
international journal and book series focusing on learning environments research.
Current and past lines of research in learning environments focus on associations
between student outcomes and the environment, actual versus preferred
environments, cross-national validations, action research, the combination of
quantitative and qualitative data, and the links between home, class, and school,
among others. More recently, a trend to evaluate the effect of innovations in science
on the learning environment has emerged, and it is under this sub-genre of learning
environments research that the current study falls.

Literature was also reviewed for attitudes towards science, another measure of
effectiveness of virtual laboratories in my study. The literature describing the
development and application of the Test of Science-Related Attitudes (TOSRA) was
explored because two of its scales, Enjoyment of Science Lessons and Attitude to
Scientific Inquiry, both validated in many other studies, were adopted from this
questionnaire for the assessment instrument in the current study.

Because gender is thought to play a role in the perceptions, attitudes, and
achievement of students in science, relevant literature for this factor was reviewed.
More specifically, I considered gender differences in these measures of the
effectiveness of science education and whether the literature shows that particular
interventions either increase or decrease the gender divide.

Next, literature that featured and characterized the intervention in this study was
reviewed, including literature concerning the integration of technology into
classrooms in general, and the practical benefits of such integration. An example of
such educational technology is virtual laboratories, the intervention in my study.
The literature describes virtual laboratories as being interactive, concept-friendly,
skill building, highly instructive, economical, efficient, safe, and viable alternatives
to experiments that would not otherwise be possible in a high-school classroom.
Results of various studies that employed virtual laboratories were presented, but
their methodological approaches were questioned. The lack of research into the
effectiveness of virtual laboratories was noted and therefore used to justify the
significance of their evaluation in this study.

The following chapter outlines the methods of the current study and describes the
approaches used to answer the research questions concerning the validity of the
instrument used, associations between student outcomes and the environment, and
the effectiveness of virtual laboratories, as well as their differential effectiveness for
males and females, in terms of perceptions of the learning environment, attitudes,
and achievement.

Chapter 3

Methodology

“Though this be madness, yet there is method in’t.” – William Shakespeare

3.1 Introduction

This study investigated the effectiveness of virtual laboratories in terms of students’
perceptions of their learning environment, attitudes towards science, and
achievement in US high schools. The research design selected for the study was
quasi-experimental in that two treatment conditions were established to compare the
effectiveness of instruction with and without virtual laboratories. Its principal
method of data collection was the use of a new questionnaire containing elements
from previously-validated questionnaires to assess perceptions of the learning
environment and learner outcomes (i.e. attitudes and achievement). However,
qualitative data, gathered through semi-structured interviews, were added to
complement the quantitative results.

This chapter describes and justifies the methodological aspects of this study in terms
of the research questions guiding the methods (Section 3.2), the sample selection
(Section 3.3), the materials used including assessment instruments and other
resources (Section 3.4), the procedures followed (Section 3.5), data collection, entry,
and analysis (Section 3.6), and limitations of the study (Section 3.7).

3.2 Research Questions

The aim of the study was four-fold: to validate a new questionnaire, to investigate
associations between the learning environment and student outcomes, to determine
the effectiveness of virtual laboratories in general, and to examine the differential
effectiveness of virtual laboratories for males and females. These research aims are
delineated in more detail below; they guided the design, implementation, and data
analysis of this study.

1. Are scales from the Test of Science-Related Attitudes (TOSRA), Science
Laboratory Environment Inventory (SLEI), and Technology-Rich Outcomes-
Focused Learning Environment Inventory (TROFLEI) questionnaires, as
well as achievement items, valid and reliable when used with a sample of
high school students taking biology in the US?

2. Are there associations between the perceived classroom learning
environment and student outcomes of attitudes towards and achievement in
science?

3. Is the use of virtual laboratories in high school science classes effective in
terms of students’:

a. perceptions of their learning environment,
b. attitudes towards science, and
c. academic achievement?

4. Is the use of virtual laboratories differentially effective for males and females
in terms of students’:

a. perceptions of their learning environment,
b. attitudes towards science, and
c. academic achievement?

3.3 Sample Selection and Characterization

To select participants, an electronic request was sent out over various teacher
networks (email lists and listservs from science education organizations). While
over 20 teachers initially expressed interest, six teachers followed through on
implementation of the treatment procedure with their students. Participating
teachers then obtained informed consent from the respective principals at their
schools and from students in their classes.

As part of the questionnaire, students answered some personal questions concerning
sex, minority status, and others, which informed the characterization of the sample.
Thus, participants were biology students in grades 8–10 from six different public
schools throughout the following states in the US: Massachusetts (MA), New York
(NY), Pennsylvania (PA), and Virginia (VA). The total sample size for the study
comprised 322 students in 21 classes, taught by six teachers. The inclusion of
multiple grade levels, as well as different states and different teachers,
allowed for greater representation of the US population and ultimately led to greater
generalizability. The variable of age should not have affected the results appreciably
because age differences were spread across both groups (students who used virtual
laboratories and those who did not). The statistical methods for controlling these
variables are described in Section 3.6.3.

As in all quasi-experimental designs (Campbell & Stanley, 1963), the sample was
divided between two treatment conditions. The two treatment groups were
‘naturally occurring’ in that they were already organized into classes in their
respective schools. Each teacher implemented this study with at least one class that
used virtual laboratories and one class that did not, thus maintaining consistent
instruction from the same teacher between the experimental and control group,
except for the intervention. Therefore, while students were subjected to different
treatment groups, other variables, such as the teachers, the physical classrooms, the
content delivered, and the level of ability of the students, were controlled for in that
they were present in both the experimental and control groups. This was
accomplished through stratified random sampling procedures (Gibson & Chase,
2002) in which the variables were equally spread amongst ‘strata’ or sub-groups.
This design allowed for more accurate results because the effects of confounding
variables were equally distributed throughout the study’s sample.
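
As an illustration of this stratified logic, class-to-condition assignment can be sketched as randomizing intact classes within each teacher ‘stratum’, so that every teacher contributes classes to both conditions. The sketch below is illustrative only; the teacher and class labels are hypothetical, and in the actual study the classes were naturally occurring rather than generated in this way.

    import random

    # Hypothetical strata: each teacher's intact classes (assumes every
    # teacher has at least two classes, one for each condition).
    classes_by_teacher = {
        "Teacher A": ["A1", "A2", "A3"],
        "Teacher B": ["B1", "B2"],
        "Teacher C": ["C1", "C2", "C3", "C4"],
    }

    assignment = {}
    for teacher, classes in classes_by_teacher.items():
        shuffled = random.sample(classes, len(classes))
        split = max(1, len(shuffled) // 2)  # at least one class per condition
        for c in shuffled[:split]:
            assignment[c] = "virtual laboratories (experimental)"
        for c in shuffled[split:]:
            assignment[c] = "no virtual laboratories (control)"

    print(assignment)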

To address the third research question about the effectiveness of virtual laboratories,
students were divided amongst experimental classes that used virtual laboratories
and control classes that did not. The experimental group included 169 students and
the control group totalled 153 students. Students in VL and non-VL classes were
spread fairly equally amongst the teachers as shown in Figure 3.1.

[Figure: bar chart of the numbers of non-VL and VL students (vertical axis, 0–80) for each of Teachers A–F]

Figure 3.1 Numbers of Students in Experimental and Control Classes for Each Teacher

Out of the 322 students, 171 were females and 151 were males. This delineation is
relevant for the fourth research question about the differential effectiveness of
virtual laboratories for males and females. The two sexes were distributed fairly
evenly amongst the participating teachers, as shown in Figure 3.2. As well, males
and females were fairly well distributed amongst experimental and control classes.
The control group had 79 females and 74 males, while the experimental group had
90 females and 76 males.

[Figure: bar chart of the numbers of female and male students (vertical axis, 0–80) for each of Teachers A–F]

Figure 3.2 Numbers of Female and Male Students for Each Teacher

Other background information supplied by student participants included their age,
class type, main language of communication, familiarity with technology, and future
career plans. Although the ages of students ranged from 13 to 18 years, the majority
(60%) of students were aged 14–15. Regarding the main language of
communication, as 94% of students reported using English, the sample was fairly
‘Americanized’. Also, most (81%) students were enrolled in standard-level biology
classes, while 11% were in honors-level biology and 7% were in inclusion classes.
Between 94% and 98% of students reported having a computer and Internet access at
home and around 80% of students reported spending at least two hours a week
occupied with such technology; thus, the sample was drawn from a largely digitally-
literate population, an important factor for this study that utilized such technology.
Nearly all students (92%) expected to enroll in post-secondary institutions. Finally,
as another indication of students’ interest in science, 39% responded that they
intended to pursue a science or technology-related career, while 54% planned to
pursue other careers in the arts and humanities. This background information is
relevant because it provides a context for the current study, as well as helping to
establish the validity of generalizing the results of this study to other student
populations.

3.4 Instrumentation and Resources Used to Implement the Study

The assessment instrument for this study consisted of scales from learning
environment questionnaires and from standardized achievement examinations as
described in Section 3.4.1. Other resources are noted in Section 3.4.2.

3.4.1 Instrumentation: Development of LAG Questionnaire

A new questionnaire, called the Laboratory Assessment in Genetics (LAG), was
developed for the purposes of this study. Most of its scales were adopted from three
previously-validated learning environment and attitude questionnaires, as described
in the sections that follow. Appendix A contains the full version of this instrument.
The LAG consists of items that assess students’ perceptions of the learning
environment (Teacher Support, Task Orientation, Investigation, Differentiation,
Integration, and Material Environment), students’ attitudes (Attitude to Scientific
Inquiry and Enjoyment of Science Lessons), and student achievement. The intended
duration for administration of the LAG was 30–45 minutes. The following sections
describe the nature of the instruments from which each of the above scales was
obtained and how such instruments were developed.

3.4.1.1 Scales to Assess the Learning Environment

Scales to assess the learning environment were obtained from two different
instruments: the Science Laboratory Environment Inventory (SLEI) and the
Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI),
as described below.

The SLEI was developed specifically to assess the unique role of the laboratory in
high school and university science classes. In particular, this instrument was meant to
be useful in addressing concerns about the effectiveness of laboratories and whether
the associated costs are justified. This goal is particularly significant for the current
investigation because virtual laboratories potentially offer a more cost-efficient
alternative to traditional laboratories. In developing the SLEI, relevant literature
was reviewed to identify dimensions important in the unique environment of a
science laboratory class, dimensions in existing instruments were considered,
students and teachers were interviewed to guide revisions of the survey during
various stages, and the instrument was subjected to item and factor analyses. This
resulted in the final version containing seven items in each of five scales (Student
Cohesiveness, Open-Endedness, Integration, Rule Clarity, and Material Environment) with a 5-point
frequency response scale (Fraser, Giddings, & McRobbie, 1992, 1995).

A sample of 5,447 students in 269 classes in the USA, Canada, England, Israel,
Australia, and Nigeria was used to field test and validate the SLEI. Simultaneous
testing revealed sound internal consistency reliability and discriminant
validity when used with 1,594 students in 92 classes (Fraser, Giddings, &
McRobbie, 1995), as well as predictive validity when used along with attitude scales
to predict the effect on student outcomes (Fraser, Giddings, & McRobbie, 1992).
Further validation was accomplished through a study of 489 senior high-school
biology students in Australia by Fisher, Henderson and Fraser (1997).

Advantages of this instrument include its economy (its brevity and easy hand-
scoring), its cyclic design, and the availability of the personal and class versions and
the actual and preferred forms. However, it does contain some reverse-scored items
(Fraser et al., 1992). To illustrate its
application in the evaluation of educational innovations, the SLEI, or adaptations
thereof, has been employed in various studies, including the assessment of an
innovative science course for prospective elementary teachers (Martin-Dunlop &
Fraser, 2007), an inquiry-based, computer-assisted learning class (Maor & Fraser,
1996), and the use of anthropometric activities (Lightburn & Fraser, 2007). More
details about the SLEI are described in Section 2.2.2.7.

For the purposes of this study, modified versions of the Integration and Material
Environment scales were used, as described below and in Table 3.1. Because Fraser
and Tobin (1991) argued that personal forms of scales are likely to be more sensitive
in detecting differences between within-class subgroups, the personal form was
chosen to examine differences between subgroups, such as males and females. One
item, modeled after the original items, was added to each scale to create a uniform
version of eight items for each scale on the LAG. To be consistent with responses
for scales borrowed from other instruments, response alternatives were also
modified to a Likert scale of Strongly Disagree, Disagree, Not Sure, Agree, and
Strongly Agree. As well, reverse-scored items were re-worded for clarity and
consistency throughout the LAG, as recommended by Barnette (2000). The
Integration and Material Environment scales appear as questions 17 through 32 in
the LAG (Appendix A).
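
For illustration only: conventional reverse scoring on a 5-point Likert scale reflects the response code about the scale midpoint, which the LAG sidesteps by re-wording negatively-phrased items. A minimal sketch in Python (the function name and example response are hypothetical, not part of the LAG):

    # Conventional 5-point Likert coding used throughout the LAG.
    LIKERT = {"Strongly Disagree": 1, "Disagree": 2, "Not Sure": 3,
              "Agree": 4, "Strongly Agree": 5}

    def code_response(response, reverse=False):
        """Convert a response to 1-5; reflect about the midpoint if reverse-scored."""
        raw = LIKERT[response]
        return 6 - raw if reverse else raw  # 6 - x swaps 1 with 5, and 2 with 4

    # On a negatively-worded (reverse-scored) item, "Agree" would count as 2, not 4.
    print(code_response("Agree", reverse=True))  # prints 2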

Integration measures the extent to which the laboratory activities are integrated with
non-laboratory and theory classes (see Table 3.1). This was an important aspect in
the current study because virtual laboratories are content-based and they can be
easily integrated with material learned in class; therefore, it was expected that
students would perceive increased integration as a result of using virtual
laboratories. The dimension of integration is key to maximizing the retention of
knowledge that can be solidified by experience, including experience associated
with virtual laboratories. This scale is categorized under Moos’ Personal
Development Dimension.

Material Environment measures the extent to which laboratory equipment and
materials are adequate (see Table 3.1). It is characterized by Moos’ System
Maintenance and System Change Dimension. Because virtual laboratories use
technological materials, it was important to determine whether perceptions of the
use of both technological materials and hands-on materials were favorable or not.
Therefore, there are two aspects assessed by this scale: 1) the perception of virtual
versus real laboratory materials and 2) the inclusion of technological equipment
(through which virtual laboratories are accessed) amongst laboratory materials. It
was expected that students would have less favorable perceptions of hands-on
materials as a result of using virtual laboratories because virtual materials are
designed to function perfectly, in order to minimize disruptions to experimentation.

A relatively-new Technology-Rich Outcomes-Focused Learning Environment
Inventory (TROFLEI), designed by Aldridge and Fraser (2003) in Australia, draws
upon the What Is Happening In this Class? (WIHIC) inventory (Aldridge, Fraser, &
Fisher, 2000) by incorporating the WIHIC’s scales (Student Cohesiveness, Teacher
Support, Involvement, Investigation, Task Orientation, Cooperation, Equity), and
adding three additional dimensions (Differentiation, Computer Usage, Young Adult
Ethos). In the context of my study, the Student Cohesiveness, Cooperation, Equity,
Computer Usage, and Young Adult Ethos scales were omitted because they do not
measure aspects of the learning environment that are salient in this study.

The scales adopted for the LAG were originally found to be reliable and valid for
assessing students’ perceptions of their psychosocial environment when the
TROFLEI was administered to 1,035 students in grades 10 and 11 at Sevenoaks
Senior College in Western Australia (Aldridge & Fraser, 2003). During the first
year of the school’s operation, the TROFLEI was designed as part of the formative
and summative evaluation of this new school. Strong factorial validity and internal
consistency reliability were found for both the actual and preferred forms of the
TROFLEI. As well, the actual form of each scale was capable of differentiating
between the perceptions of students in different classrooms. Results after four years
of the school’s operation supported the efficacy of the school’s educational
programs and revealed differences between the classroom environment perceptions
of males and females and between students enrolled in university-entrance
examinations and in wholly school-assessed subjects (Aldridge & Fraser, 2008).
Since then, the TROFLEI has successfully been modified into two different forms
(Aldridge & Fraser, 2008), applied to studies with different methods (Aldridge,
Dorman, & Fraser, 2004; Dorman, Aldridge, & Fraser, 2006; Dorman & Fraser,
2009), and adapted for use in other countries (Gupta & Koul, 2007; Koul, Fisher, &
Shaw, 2011; Promratrak & Malone, 2006; Welch et al., 2012). More details about
the TROFLEI are described in Section 2.2.2.9.

Four scales from the TROFLEI were chosen for incorporation into the LAG because
of their relevance in assessing important aspects of a technology-rich learning
environment, as described below and in Table 3.1. Each scale contains eight items
and responses are recorded using a Likert scale of Strongly Disagree, Disagree, Not
Sure, Agree, and Strongly Agree, which are scored 1 to 5, respectively. The
wording of some items was modified to fit the conditions of the current study, but
the style and content of these modifications were modeled on the original items.
Scales adapted from the TROFLEI appear as questions 33 through 64 on the LAG
(Appendix A).
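
To make the scoring concrete, a scale score can be computed from a student’s eight coded item responses; taking the mean of the items is a common convention and is assumed here rather than taken from the thesis. The item values below are hypothetical:

    # Hypothetical coded responses (1-5) for one student's eight Teacher Support items.
    teacher_support_items = [4, 5, 3, 4, 4, 5, 4, 3]

    # Mean of the eight items yields a scale score on the same 1-5 range.
    scale_score = sum(teacher_support_items) / len(teacher_support_items)
    print(scale_score)  # prints 4.0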

Teacher Support is a measure of the extent to which the teacher is helpful to the
students and shows interest in them (see Table 3.1). Individual student–teacher
interactions are assessed with this scale. The student reports the frequency with
which the teacher approaches them or shows interest in their problems. Adopted
from the CES and categorized under Moos’ Relationship Dimension, Teacher
Support is also used in the WIHIC and COLES. This scale was considered
appropriate for this study because the use of fairly autonomous virtual laboratories is
likely to impact on the frequency with which the teacher approaches the student and
the extent to which the teacher is needed to support the student. Therefore, it was
anticipated that student perceptions of teacher support would decrease.

Task Orientation is a measure of a student’s internal motivation to complete
assigned tasks and also to stay ‘on task’ (see Table 3.1). Classified under Moos’
Personal Development Dimension, students respond to items related to the perceived
importance of setting goals and seeking to achieve those goals. Students also report
on their attention span and focus in the class setting. The Task Orientation scale
grew from similar scales in the CES and CUCEI, and it is also featured in the
WIHIC and COLES. This scale measures a salient quality of virtual laboratories,
namely, the student’s self-motivation to complete the laboratory in a virtual setting
and remain engaged with the activity despite the lack of ‘hands-on’ experimentation.
Because virtual laboratories are interactive and student-centered, it was anticipated
that students’ perceived motivation to complete work would increase.

Investigation is the extent to which students engage in problem-solving and use
inquiry skills (see Table 3.1). This scale helps to assess the current trend in
education to use more inquiry processes, such as laboratory activities, in the
classroom. Students report about the frequency with which they seek answers
through laboratory work and are asked to report and explain their findings to others.
The Investigation scale was adapted from the ICEQ, under Moos’ Personal
Development Dimension, and is also incorporated into the WIHIC. This degree of
experimentation was a significant dimension to assess in the current study because
virtual laboratories involve elements of a scientific investigation that are likely to
promote increased Investigation.

Table 3.1 Scale Description and Sample Item for each Learning Environment Scale in the LAG

Integration (SLEI)
Scale Description: Extent to which regular science lessons and laboratory activities are related
Sample Item: My laboratory activities and regular science class work are related.

Material Environment (SLEI)
Scale Description: Efficiency and functionality of laboratory materials
Sample Item: The materials I need for both laboratory activities and technology are in good working order.

Teacher Support (TROFLEI)
Scale Description: Extent to which the teacher helps, befriends, trusts, and shows interest in students
Sample Item: The teacher goes out of his/her way to help me.

Task Orientation (TROFLEI)
Scale Description: Extent to which it is important to complete activities planned and to stay on the subject matter
Sample Item: I do as much as I set out to do regarding the activities in this class.

Investigation (TROFLEI)
Scale Description: Emphasis on the skills and processes of inquiry and their use in problem solving and investigation
Sample Item: I am asked to think about the evidence for statements in this class.

Differentiation (TROFLEI)
Scale Description: Extent to which work assigned is individualized for the pace and level of each student
Sample Item: I work at my own speed regarding the activities I do in this class.

Differentiation, a scale originating from the ICEQ, was included in the TROFLEI to
measure the extent to which teachers tailor their instruction and activities for
students according to their abilities, rates of learning, and interests (see Table 3.1).
It is characterized by Moos under the System Maintenance and Change Dimension.
This scale was included in the current study because, as students work
independently on virtual laboratories, which are self-paced, it was anticipated that
student perceptions of differentiation would improve as a result of using virtual
laboratories.

Overall, the TROFLEI was a useful instrument for this study in that it focuses on
student outcomes, a feature sought by the implementation of virtual laboratories, and
is specific to technologically-integrated environments such as the one in the current
study. Most importantly, the validity and reliability of this instrument and its
antecedent, the WIHIC, have been established numerous times. Therefore, the
TROFLEI provides an economical assessment of key aspects of the classroom
learning environment, namely, student interactions with their teacher, the
environment, the class, and other students.

3.4.1.2 Scales to Assess Student Attitudes

A widely-used questionnaire for assessing attitudes towards science is the Test of
Science-Related Attitudes (TOSRA) (Fraser, 1981). The TOSRA was validated on
a total of 1,337 science students, grades 7–10, from 11 schools that varied
socioeconomically in Australia, resulting in a final version containing 10 items in
each of the seven scales (Social Implications of Science, Normality of Scientists,
Attitude to Scientific Inquiry, Adoption of Scientific Attitudes, Enjoyment of
Science Lessons, Leisure Interest in Science, and Career Interest in Science).

The TOSRA has been shown to be valid and useful in many studies in different
countries (Fraser, Aldridge, & Adolphe, 2010; Welch et al., 2012; Wong & Fraser,
1996), and in the evaluation of educational innovations (Lightburn & Fraser, 2007;
Martin-Dunlop & Fraser, 2007; Raaflaub & Fraser, 2002; Wolf & Fraser, 2008;
Zandvliet & Fraser, 2005). More details about the TOSRA are described in Section
2.3.2.

Table 3.2 Scale Description, Justification, and Sample Item for each TOSRA Scale used in the LAG

Inquiry
Scale Description: Extent to which science activities are student-centered and curiosity provoking.
Justification for this Study: Because virtual laboratories are intended to be student-centered and provoke curiosity, attitudes towards scientific inquiry are likely to increase.
Sample Item: I would prefer to find out why something happens by doing an experiment than by being told.

Enjoyment
Scale Description: Extent to which students enjoy science lessons.
Justification for this Study: Because virtual laboratories are interactive and meant to stimulate students using audio and visual effects, enjoyment is likely to increase.
Sample Item: The technology used in activities makes the science lessons more exciting.

For the purposes of this study, modified versions of the following scales were
incorporated into the LAG, as shown in Table 3.2: Attitude to Scientific Inquiry
(herein abbreviated as Inquiry) and Enjoyment of Science Lessons (herein
abbreviated as Enjoyment). Two items were removed from each scale to achieve a
consistent length of eight items for each scale of the LAG. The TOSRA’s response
alternatives were maintained as a five-point Likert scale with response categories
ranging from Strongly Disagree to Strongly Agree. As well, reverse-scored items
were re-worded for clarity and consistency throughout the LAG (Barnette, 2000),
and the wording on some items was adjusted to incorporate technical terminology
necessary for classes using virtual laboratories. The Inquiry and Enjoyment scales
appear as Questions 1 through 16 in the LAG (Appendix A).

3.4.1.3 Scale for Assessing Achievement

The scale for assessing students’ achievement in the Genetics portion of their
biology classes was composed of items borrowed from various state-level
examinations. The researcher selected 10 items from standardized science
examinations that had already been validated, administered, and scored. Specific
questions were chosen from these examinations to correspond with the content of
the virtual laboratories used in this study; this was made possible by the availability
of public, searchable, electronic databases containing these validated test-bank
items. The standardized examinations from which the achievement items were
selected include the New York State Regents Examination for Living Environment
courses, the Massachusetts Comprehensive Assessment System (MCAS) for
Biology courses, and the Virginia Standards of Learning (SOL) in Biology. The

112
selected items measure the extent to which students understand various concepts in
genetics, including Mendelian inheritance, the structure of DNA, mutations, cloning,
and genetic engineering.

For ease of administration and scoring, all achievement items utilized a multiple-
choice answer format with four possible responses from which to choose. Scoring
was based on the number of items correctly answered and ranged from zero (0) for
no correct answers to ten (10) for all correct answers. The score was then divided in
half for meaningful comparison with scores from other sections of the LAG, which
ranged from zero (0) to five (5). The use of a multiple-choice answer format limited
the range of responses from students; however, while an open-response format
would have reduced this limitation, it might also have led to inconsistency and bias
in scoring and/or it could have discouraged students from responding. Achievement
questions appear as items 65 to 74 in the LAG (Appendix A).
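
Concretely, the achievement scoring described above amounts to counting correct responses out of ten and halving the total so that achievement falls on the same 0–5 range as the other LAG sections. A minimal sketch (the answer key and responses are hypothetical, not the actual LAG items):

    # Hypothetical answer key and one student's responses for items 65-74.
    KEY       = ["B", "D", "A", "C", "B", "A", "D", "C", "A", "B"]
    responses = ["B", "D", "A", "C", "C", "A", "D", "B", "A", "B"]

    raw_score = sum(r == k for r, k in zip(responses, KEY))  # 0-10 correct answers
    achievement = raw_score / 2                              # rescaled to 0-5
    print(raw_score, achievement)  # prints 8 4.0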

3.4.1.4 Pilot Study

To ensure that students aged 13–18 years could easily read and comprehend each
item on the LAG instrument used in this study, a pilot study was conducted. An
earlier form of the LAG was administered to 96 students taking biology in grade
nine (ages 13–15 years) during the year prior to the implementation of
this study. This sample was from one school in the state of Massachusetts but its
population was quite diverse and representative of the larger sample used for the
current study.

Students were instructed to highlight words and questions that they did not
understand and comment on the clarity of items. Some students thought that the
original instrument was too lengthy and some did not understand certain terms as
they were intended by the researcher. Based on students’ comments and patterns in
item responses, the researcher modified some of the wording, eliminated the use of
reverse items, and narrowed down the scales to the current eight scales used for the
LAG.

3.4.2 Other Resources

Other resources necessary to conduct my study into the effectiveness of virtual
laboratories included technology. More specifically, every class involved in the
study was required to have access to computers with Internet access. All of the
virtual laboratories were Internet-based, mostly with free access. A few sites
required a sign-in because access to these laboratories had been donated by the
company that created them, and the researcher provided the teachers with this
access code. Particulars about the selected virtual laboratories are described further
in Section 3.5, and the list of virtual laboratories is presented in Appendix D.

Additionally, the researcher provided worksheets, styled after traditional ‘lab
reports’, for each virtual laboratory to ensure completion of the activity and
student accountability, giving teachers a concrete assignment and score to
incorporate into students’ academic profiles. In this manner, students
involved in this study were not diverted from ‘time on task’ dictated by state
requirements of learning. These worksheets are explained in further detail in
Section 3.5, and they appear in Appendix F.

The LAG instrument to assess the effectiveness of virtual laboratories was available
in both soft and hard copies. The soft version was administered via a Google
Document Survey Form and the link was provided to participating teachers with a
teacher-specific code. Responses from the electronic questionnaire were
automatically entered into a Microsoft Excel file, available to the researcher
immediately upon submission. The paper version was printed, copied, and mailed to
the participating teachers who returned the questionnaires via mail at the end of the
semester. Responses from the paper version of the questionnaire were entered by
hand into the same Microsoft Excel file created by the electronic version, and the
hard copies were then stored at the Science and Mathematics Education Centre
facilities on the Curtin University campus in Perth, Western Australia. Data files
were encoded and only accessible through the use of a password by authorized
users. Raw qualitative data, such as recordings and transcripts of interviews, were
also stored securely by the researcher in electronic files locked with a password and
in hard copies locked in a cabinet.

3.5 Procedures

This section describes how the effectiveness of virtual laboratories was evaluated by
explicating the treatment conditions (Section 3.5.1), and the implementation of the
educational intervention, including design and delivery of virtual laboratories
(Section 3.5.2), the timetable for the execution of the study (Section 3.5.3),
administration of the questionnaire (Section 3.5.4), and some ethical issues (Section
3.5.5). The high school science classes involved in this study were divided into two
treatment groups: one group engaged in virtual laboratories; and the other group
continued to learn in the way in which students had been learning all along.
However, both groups covered the same content. At the end of the semester, all the
classes were given the LAG questionnaire to assess students’ perceptions of their
learning environment, their attitudes towards science, and their understanding of the
science content. Results for the two groups were compared for significant
differences. Further details about the procedure and implementation of my study are
provided below.

3.5.1 Treatment Conditions

Because of the quasi-experimental design of the study, the 322 student participants
in 21 different classes studying genetics were divided amongst 10 experimental and
11 control classes. Efforts were made to ensure that the two groups were
comparable overall with respect to the range of academic capabilities, socio-
economic status, gender (Section 3.3) and the physical classroom environment, such
as features of the room and the time of day at which students were taught. This was
accomplished through stratified random sampling procedures (Gibson & Chase,
2002) in which the variables were equally spread amongst ‘strata’ or sub-groups.
Thus, the two treatment groups were ‘naturally occurring’ in that they were already
organized into classes in their respective schools. Each of the six teachers who
volunteered for the implementation of the study taught at least one class with the
intervention and one class without the intervention, thus maintaining consistent
instruction from the same teacher between the experimental and control group,
except for the intervention.

The experimental group learned the topic of genetics supplemented with virtual
laboratories. A virtual laboratory is broadly defined as “an electronic workspace for
distance collaboration and experimentation in research or other creative activity, to
generate and deliver results using distributed information and communication
technologies”, according to the International Institute of Theoretical and Applied
Physics at the Expert Meeting on Virtual Laboratories in Iowa, USA in 1999
(Rauwerda et al., 2006, p. 230).

As applied to the educational setting in this study, students in the experimental
group used computers connected to the Internet to complete virtual experiments that
employed ‘point-and-click’ techniques for manipulating various laboratory materials
(see Figure 3.3). Each of these virtual experiments simulated a real, hands-on
experiment and followed a typical experimental format in which students observe
phenomena, formulate hypotheses, set up controls, follow procedures, test
hypotheses, and analyze results.

Figure 3.3 Screenshot from a Sample Virtual Laboratory (Perpich, 2012)

The instructions provided to teachers are included in Appendix E. A virtual sharing
space (Dropbox) was set up for teachers to access the materials for each virtual
laboratory, such as the information about the virtual laboratory and an associated
worksheet to assess students’ understanding. These materials are also included in
Appendices D and F. As well, I created a blog for the participating teachers to share
experiences and a forum through which to ask questions. However, most teachers
did not utilize the blog and, instead, preferred to correspond via email.

In order to respect the individuality of teachers in meeting the learning requirements
and schedules set by their particular state, district, school, department, and
classroom, the researcher provided a ‘bank’ of at least 10 different virtual
laboratories for use in this study (see Section 3.5.2). Teachers were given the
freedom to choose the type and number of virtual laboratories that they wished to
employ with the experimental classes. On average, teachers administered five
full-period virtual laboratories over eight weeks. Table 3.4 delineates the type and
frequency of delivery of the virtual laboratories, as well as the intervals between
administrations.

Students in the control group continued learning and experimenting in their normal
fashion, without the use of virtual experiments. Instructional methods for these
classes included lectures, textbook readings, hands-on experiments, projects, and/or
other activities normally employed in a science classroom. While teachers were not
provided with specific instructions for teaching students in the control condition,
they were directed to ensure that the same content (i.e. genetics) was taught as in the
experimental classes.

While a purer and more rigorous experimental design would have involved
comparing an experimental group using virtual laboratories with a control group
conducting parallel hands-on experiments for the very same investigation, such a
setup was neither possible nor ideal for this study for a number of reasons. First,
much of the equipment necessary for complicated experiments in molecular genetics
is not available in high school laboratories because of cost and safety issues. As
well, many of these experiments require lengths of time not provided in a typical
biology class, which usually meets for only 4–5 hours weekly. Second, the
rationale for evaluating the effectiveness of virtual laboratories is that such an
innovation provides an opportunity for students to learn about skills, procedures, and
an environment to which they would not otherwise normally be exposed. Virtual
laboratories were suggested for use in situations in which such parallel hands-on
experiments cannot be conducted. Therefore, the intention of my study was to
evaluate the effectiveness of using virtual laboratories as a supplemental method,
rather than as a substitute for traditional ones.

Unfortunately, because of the design and respect for teacher individuality,
differences existed both within and between experimental and control groups in
matters other than the use of virtual laboratories. Administration of the virtual
laboratories within the experimental group varied with respect to frequency, to the
precise format and content within genetics, and to their blend with other traditional
classroom activities such as hands-on laboratories. Naturally, the control classes
also lacked uniformity regarding method of instruction. While most of the
differences between groups, regarding teachers, students, and classroom
environments, were controlled by the design of the study to be equally distributed
amongst both groups (See Section 3.6.3), differences within groups were more
difficult to control and could have affected results, as described in Section 3.7 on
limitations of this study.

3.5.2 Design and Delivery of Virtual Laboratories

This section explains how the researcher selected the virtual laboratories for use in
this study and instructed teachers regarding their delivery. More than 20 different
virtual laboratories, related to the topic of genetics, were chosen by the researcher
for their design and use of inquiry. Table 3.3 shows the title, type, description, and
source for eight of the most commonly used virtual laboratories; the respective
sample worksheets are included in Appendix F.

The virtual laboratories were all web-based and accessible via a URL provided to
participants. Software companies, as delineated in Table 3.3, designed them but the
researcher carefully reviewed and picked appropriate experiments in addition to
providing participating teachers with some suggestions regarding their use in the
classroom (See Appendix E). More specifically, the researcher selected virtual
laboratories featuring equipment and associated skill-acquisition not usually
available in a typical high school laboratory. Most laboratories involved testing a
hypothesis elicited from the student, including the analysis of evidence and other
elements of inquiry as described in Section 2.2.3. However, some virtual
laboratories were linked in a series so that the aims of the first few ‘laboratories’
were to acquire the skills and concepts needed to proceed with a virtual experiment
at a later point. The researcher was careful to avoid virtual laboratories that lack
elements of a true experiment, such as so-called ‘virtual laboratories’ that were
essentially computer games or a simple list of questions for students to research
about a particular topic in science.

Table 3.3 Title, Type, Description and Source for Each Virtual Laboratory

Bacterial Identification
Type: Mostly skill-based but follows an experimental method.
Description: The activity guides students through the process of identifying the
bacterial sources of an infection based on matching a specific DNA sequence; it
includes procedures such as PCR, DNA sequencing, sequence analysis, and entry of
DNA sequences into BLAST (Basic Local Alignment Search Tool), which searches
the public database of DNA sequences to determine the correct bacterial species
from which the DNA sequence originates.
Source: Howard Hughes Medical Institute, http://www.hhmi.org/biointeractive/vlabs/

Create a DNA Fingerprint
Type: Mostly experimental but focuses on a specific procedural technique.
Description: This activity asks students to hypothesize about the culprit of a crime
and then leads them through the process of creating a DNA fingerprint to verify the
suspect they chose.
Source: NOVA, http://www.pbs.org/wgbh/nova/sheppard/analyze.html

DNA Extraction
Type: Skills-based, in order to learn a technique.
Description: In this activity students learn the procedure of extracting DNA from
human cheek cells.
Source: University of Utah, http://learn.genetics.utah.edu/content/labs/extraction/

PCR (Polymerase Chain Reaction)
Type: Skills-based, in order to learn a technique.
Description: Students learn the procedure and concept behind a Polymerase Chain
Reaction (PCR). In the real lab world, this procedure is used in almost every process
using DNA for research, forensics, etc., so it is the beginning step that is part of a
larger procedure.
Source: University of Utah, http://learn.genetics.utah.edu/content/labs/pcr/

Gel Electrophoresis
Type: Skills-based, in order to learn a technique.
Description: In this activity students learn the procedure of gel electrophoresis to
visualize and sort DNA fragments by size. In the real world, this procedure is used
to check that the materials that one works with (be it DNA, RNA, or proteins) are
not lost at key points during a complicated experiment; in forensics, gel
electrophoresis would be used to compare DNA samples.
Source: University of Utah, http://learn.genetics.utah.edu/content/labs/gel/

DNA Microarray
Type: Experimentally-based; it combines three techniques explored in the activities
above.
Description: In this activity students learn the procedure and concepts that underlie
the use of a DNA Microarray for the field of genomics; it includes an investigative
piece and students get to make a real-life application to the differences between
healthy cells and cancer cells.
Source: University of Utah, http://learn.genetics.utah.edu/content/labs/microarray/

Genetics of Organisms
Type: Experimental.
Description: This activity allows students to cross Drosophila to obtain new
generations of fruit flies to observe the number of phenotypes and eventually
determine the genotypes of the original parental generation. Students then compare
their observations against a Punnett square that they construct.
Source: APBioLabs, http://www.ucopenaccess.org/courses/APBioLabs/course/index.html

Transgenic Fly Lab
Type: Experimental but also teaches some significant techniques.
Description: This laboratory first guides students through the process of
constructing transgenic flies that “glow” and then experimenting with those
transgenic flies to understand circadian rhythms through patterns of light emissions.
A number of experiments investigate how light/dark cycles affect patterns of light
emissions (the measure for the presence of a biological clock) and eventually lead
to locating the biological clock in the fly.
Source: Howard Hughes Medical Institute, http://www.hhmi.org/biointeractive/vlabs/

In order to share with participating teachers resources such as worksheets, sources,
and general instructions for virtual laboratories (See Appendices A–F), an online
storage system called Dropbox was used. This system had to be downloaded by
each user, and it displayed when each user accessed and/or modified the
documents. Teachers were instructed to use, within a three-month
period, at least four of the virtual laboratories available in the Dropbox file. How
each teacher applied the instructions in implementing the conditions of the study
with his or her classes is detailed in Table 3.4.

Worksheets were provided for many of the virtual laboratories to guide students
through the activity and to enable them to record data and answer questions related
to the experiment (see Appendix F). These worksheets also allowed teachers to hold
students accountable for their work because they could be given a score, which
could have been incorporated into their semester grade.

3.5.3 Timetable

This section reports the logistical aspects of the application of virtual laboratories,
namely, the duration of implementation of the virtual laboratories, the frequency
with which virtual laboratories were administered, and the time intervals between
each virtual laboratory. The selected virtual laboratories were generally meant to
occupy one class period. If teachers suspected that their students would require more
time to complete the virtual laboratory, teachers were advised to assign students a
pre-laboratory designed by the researcher to prime students’ knowledge about the
topic before beginning the actual laboratory. Some skill-only virtual laboratories
required no more than 20 minutes and could be integrated into another lesson or
completed at home.

Virtual laboratories and their associated worksheets were made available in
February 2010 and teachers were given until the end of the semester, a duration of
four to five months, to integrate them into their classes.

The frequency with which virtual laboratories were utilized, the interval between
their use, and the duration of implementation of the entire study by each teacher are
detailed in Table 3.4.

Table 3.4 Implementation of Conditions of the Study by Each Teacher including Class
Composition, Duration of Study, the Administration of the Virtual Laboratories
(VL), and Information about Covariates

Teacher A: five classes, 127 students, Grade 8, Standard Level (Experimental
Group = 3 classes; Control Group = 2 classes)
Duration of study: 2 weeks
VLs completed (4): DNA Extraction, PCR, Gel Electrophoresis, DNA Fingerprinting
Frequency and intervals: two VLs a week; one day apart in the first week and two
days apart in the second
Control group: “I did a paper lab with one, some other hands-on work with another
and lecture for the other two”
Notes about covariates: “The students were heterogeneous, the same topics were
covered, classes were both in the morning and later in the school day…I tried very
hard to provide the same material to each group”. All students did the hands-on
laboratory for gel electrophoresis.

Teacher M: two classes, 29 students, Grade 10, Standard Level (Experimental
Group = 1 class; Control Group = 1 class)
Duration of study: 2 weeks
VLs completed (4): DNA Extraction, PCR, Gel Electrophoresis, Transgenic Fly Lab
Frequency and intervals: three VLs in one week and one the following week
Control group: “‘paper labs’, where we simulated some of the steps; hands-on lab
for gel electrophoresis; some other computer activities”
Notes about covariates: “time of day [for classes] differed”; “Of the 2 classes, the
class that did the virtual labs had a slightly higher academic ability and fewer
students with [special] ‘ed’ plans (10% vs. 18%)”. Did not do any hands-on
laboratories with experimental classes.

Teacher G: three classes comprising 47 students in Grade 9 Honors & Honors Prep
and 13 students in Grades 10–12 ELL Biology (Experimental Group = 1 class;
Control Group = 2 classes)
Duration of study: 12 weeks
VLs completed (~8–10): Bacterial Identification, DNA Extraction, PCR, Gel
Electrophoresis, DNA Microarray, Peppered Moth Simulation, Mitosis & Meiosis
Labbench, Stem Cells, Cloning, Transgenic Mice
Frequency and intervals: about once a week
Control group: “no hands-on laboratories, a DNA model activity with plastic pieces
& a Punnett square activity with 4 different colored beads [for dihybrid crosses].”
Notes about covariates: used many VLs as demonstrations in the classroom (not
only as individual student investigations); used some VLs that did not contain
investigative experiments but served just to teach concepts and skills.

Teacher R: six classes, 129 students, Standard Level (Experimental Group = half of
all six classes; Control Group = half of all six classes)
Duration of study: 10 weeks
VLs completed (5): DNA Extraction, PCR, Gel Electrophoresis, DNA Microarray,
DNA Fingerprinting
Frequency and intervals: about once a week
Control group: “I am having all classes participate. Half of the class will do the lab,
while the other half complete an alternative, unrelated assignment.”

Teacher D: three classes, 84 students, Grade 10, Standard Level (Experimental
Group = 1 class; Control Group = 2 classes)
Duration of study: 10 weeks
VLs completed: at least 4; no other information available

Teacher O: two classes, 20 students, Grade 9, Standard Level (Experimental
Group = 1 class; Control Group = 1 class)
Duration of study: 6 weeks
VLs completed (5): DNA Extraction, PCR, Gel Electrophoresis, DNA Microarray,
DNA Fingerprinting
Frequency and intervals: about once a week
Control group: “Lectures, animations, paper labs, 1 hands-on lab.”
Notes about covariates: lost data for most of the experimental class (number of
students reflects loss).

While teachers were allowed a certain degree of freedom regarding which virtual
laboratories to implement and the frequency of their implementation, the researcher
suggested interspersing their delivery with the teacher’s normal methods of
instruction throughout the semester. This systematic integration of virtual
experimentation with traditional instruction was recommended by Gallagher et al.
(2005) because it “is more likely to be successful if the training schedule takes place
on an interval basis rather than massed into a short period of extensive practice” (p.
364). After completion of at least four virtual laboratories, or whenever their use
was no longer applicable, teachers were instructed to inform the researcher, at which
point access to the questionnaire was granted, as described in Section 3.5.4.

3.5.4 Administration of LAG Questionnaire

The method of administration of the LAG questionnaire to assess the effectiveness
of virtual laboratories is detailed in this section. While only students in classes
belonging to the experimental group were exposed to virtual laboratories, students in
classes belonging to both the experimental and control groups were given the LAG
questionnaire (See Section 3.4.1). Therefore, at the end of the treatment period, all
322 students completed the questionnaire addressing perceptions of their learning
environment, their attitudes towards science, and their understanding of the science
content. The questionnaire took about 30 minutes to complete. Also, language was
purposely generalized so that the word ‘laboratory’ could include virtual and non-
virtual experiences. The instructions to students in the introduction to the
questionnaire read “Please note: The word ‘laboratory’ in this survey refers to any
experiment you have done in your science class, whether it was ‘hands-on’ or
virtual.”

According to the preferences of participating teachers, the researcher provided both
electronic and paper versions of the questionnaire, which were identical in content.
Electronic access was granted through a link to the Google Document Form used to
create the survey. Students were instructed to click on the responses that applied to
them and, upon completion, to click on the ‘submit’ button to enter their responses
automatically into an electronic database. Paper versions were mailed to teachers
who returned them via mail upon completion.

The last item on the questionnaire asked students to record their email addresses to
enter into a raffle. Email addresses were compiled into the electronic database and a
random number generator was used to select the winner of the raffle prize. More
useful to this study, the researcher used these email addresses to send out a request
asking students to participate in interviews via telephone or Skype because school
was no longer in session. The selection and collection of qualitative data sources are
described further in Section 3.6.1.

3.5.5 Ethical Issues

To ensure fairness of exposure to an innovation that is potentially beneficial, the
treatment conditions were reversed after the data-collection stage so that students in
the comparison group also had the opportunity to use virtual laboratories. However,
no data were collected during this period as it was only meant to guarantee equity of
students’ learning experiences.

All participants and their parents, in addition to those in the school, such as teachers
and principals, were fully informed of the purposes of this study, including the
potential risks and benefits, before collecting data from any students. Each student
received an information sheet describing the study in plain English and was also
informed verbally via a YouTube broadcast. Students were given opportunities to
ask questions and raise concerns, and were reassured that they could withdraw from the study at
any time without prejudice or other negative consequences, such as affecting
students’ school grades. Finally, informed consent was obtained for each class and
school involved in the study.

Another ethical issue concerns confidentiality and protection of participants’
privacy. For this study, all efforts were made to keep the names of the schools,
teachers, and students confidential. Upon collection, data were encoded for the
analysis stage to protect students’ privacy. No names were reported and names of
interviewees were changed. The acknowledgement found in the front matter of this
thesis is devoid of names of participants, for the very same reason of protecting
anonymity.

3.6 Data Collection, Entry, and Analysis

This section explores the various aspects of obtaining and understanding
quantitative and qualitative data. In general, multiple methodological approaches
allow a more holistic assessment of the effects of an intervention. Additional
approaches can further explain idiosyncrasies in quantitative data and assess the
uniqueness of each classroom environment established by the teacher. Therefore, to
enrich the quantitative data, qualitative methods of data collection were
employed in this study, as recommended by a number of researchers in the field of
learning environments who extol the merits of triangulation (Fraser & Tobin, 1991;
Tobin & Fraser, 1998). A study of technology-based materials by Russek and
Weinberg (1993) revealed that more insight was gained from a mixed-method
approach than could be obtained from either type of analysis alone. Moreover, Duit
and Confrey (1996) proposed that interviews allow contextualization of students’
responses and a more complete image of students’ ideas.

After the LAG questionnaire had been administered to both the experimental and
control group, the responses from these two groups were compared for significant
differences. As well, semi-structured interviews were conducted with students who
took the LAG and with their teachers. This section deals with the collection
(Section 3.6.1), coding and entry (Section 3.6.2), and statistical methods of analysis
(Section 3.6.3) of quantitative and qualitative data.

3.6.1 Collection of Data

Quantitative data were collected using scales from the four instruments included in
the Laboratory Assessment in Genetics (LAG), namely, the SLEI, TROFLEI,
TOSRA, and achievement examinations. The LAG was administered to 322
students in 21 classes in six different US schools in the states of Massachusetts, New
York, Pennsylvania, and Virginia.

Questionnaires were either mailed to the teachers requesting paper versions, or
provided as an online link to teachers who requested the electronic versions. In both
cases, the researcher provided teachers with class-specific codes, which
identified the teacher and treatment condition (i.e. experimental or control), without
revealing the names of the schools, teachers, or students. Teachers were instructed
to ensure that students entered these codes onto the front page of the survey.

Regarding the paper versions of the questionnaire, teachers administered them
personally, packaged them by class, and returned them via mail for data entry by the
researcher. Electronic questionnaires were submitted automatically over the Internet
as students completed them. All students of the same teacher completed the same
version of the questionnaire; in other words, there were no situations in which some
students of a particular teacher filled out the paper version and other students of the
same teacher filled out the electronic version. The two different versions were only
provided for teachers’ ease of use, depending on whether the Internet was easily
accessible in their particular school. Teachers B, F, and A utilized the electronic
versions of the questionnaire, while Teachers C, D, and E used the paper versions.

To ensure consistency in the administration of the questionnaires, teachers were
provided with detailed instructions on how to administer the LAG (see sample
directions given to teachers in Appendix E). Teachers were asked to be present
during administration of both the paper and electronic versions of the surveys so that
they could assist students with any questions that they had and to record feedback
from students as they completed the surveys. Therefore, all questionnaires were
administered during class time and were not taken home.

Students responding to the LAG provided information regarding personal details,
including their sex, main language of communication, ethnicity, and age, as well as
class details, including grade level and teacher code, and other practices and
preferences, such as computer usage and future plans (See Appendix B for sample
questionnaire). The last item on the questionnaire asked students to record their
email addresses to enter into a raffle, as an incentive to complete the questionnaire.

The list of email addresses, supplied by students, was stored in the same file as the
quantitative data and provided the pool of potential volunteers for gathering
qualitative data through interviews. Therefore, student interviewees were self-
selected from the same sample of students who completed the LAG questionnaire.
For the purposes of this study, 10 open-ended questions were constructed based on
the LAG questionnaire for semi-structured interviews using standard protocols
(Anderson & Arsenault, 1998; Cohen, Manion, & Morrison, 2007; Drever, 1995;
Erickson, 1998). While the quantitative data were limited to the personal form (i.e.
the use of ‘I’ statements, as described in Section 2.2.2) of the questionnaire, the
collection of qualitative data through semi-structured interviews allowed the
researcher to expand the perspective of the responders to the whole class. For
instance, after the interviewee answered a question about whether the class work
was difficult, the researcher was able to further ask whether the whole class
perceived the work as being difficult, in addition to the interviewee’s personal
perspective. This distinction between personal and whole-class perspectives was
noted earlier when reviewing the concepts of ‘private’ beta press and ‘consensual’
beta press (Section 2.3.1).

Once the researcher had determined that additional insight was needed to explain the
quantitative results, the process of gathering qualitative data began. An
email request was sent out to all student email addresses stored in the database
asking for volunteers to participate in the interview process. When a total of six
students followed through on their initial expression of interest to be interviewed,
telephone or Skype appointments were set up for this purpose. Face-to-face
interviews were not possible because the interviewer and interviewees were not
located in the same geographic area. Informed consent was obtained from students
and their parents. Each interview lasted 20–30 minutes and students seemed eager
to contribute to a better understanding of the quantitative results of this study.
Selected statements from student responses to the interview questions are presented
in Chapter 4.

Additionally, participating teachers were also asked for input, via email, using the
same open-ended questions that had been presented to student interviewees. First,
when teachers filled out a form indicating what actually took place during the
implementation of the study, the information contained in Table 3.4 emerged. All
teachers provided this information, but not all teachers chose to answer the questions
for the semi-structured interviews. Therefore, the comments of the three teachers
who contributed to this effort are embedded throughout Chapter 4.

3.6.2 Entry of Data

Data from both the paper and electronic forms of the questionnaire were organized
using Microsoft Excel 2007. Responses to the electronic version were entered
automatically into an Excel spreadsheet as they became available. Responses to the
paper version of the questionnaire were entered into the same Excel spreadsheet by
the researcher personally to ensure precision and they were checked for accuracy.

The researcher assigned each paper questionnaire a unique identification code for
tracking purposes that aligned with the number of the row in the Excel spreadsheet.
Email addresses shared by the students were stored along with their responses, in
case I needed to contact students for further clarification. For the purposes of
statistical analysis, responses were coded by transforming descriptive data into
numerical values. For instance, personal information regarding students’ career
choices was recorded in the following manner: careers related to the sciences were
given the value ‘1’ while non-scientific careers received a value of ‘2’. The method
of coding was stored in a separate document.

Some patterns of responses indicated that students had not completed the
questionnaire with integrity, such as selecting Strongly Disagree for every item or
leaving all items other than the personal background questions blank. Such data
were discarded; this, together with the absence of many students on the day of
administration, accounts for the number of questionnaires being lower than the
actual number of students participating in the study, as reported by teachers.

Regarding recording and entry of qualitative data, each student interview was
recorded using GarageBand, software installed on a Macintosh notebook computer.
Auditory clarity was enhanced because telephone and Skype calls to interviewees
were conducted from the same computer. Teachers, as they preferred, provided
their interview responses in writing via email.

Recordings of student interviews were transcribed by the researcher and were
reviewed multiple times to ensure accuracy. Each transcription of an interview was
saved as a separate document and stored in a file accessible only to the researcher.
Upon the completion of both student and teacher interviews, names were also
encoded to preserve anonymity. Gender identification amongst students was
maintained by replacing interviewees’ names with fictional names of the same
gender.

3.6.3 Statistical Methods for Analysis of Data

Responses to the LAG, taken by 322 students in 21 US science classes, constituted
the quantitative data for this study. After numerical transformation, quantitative
data were analyzed to address the four research questions using SPSS 17.0
Statistical Package. Sections 3.6.3.1–3.6.3.3 explicate the statistical methods of
analysis for each research question in this study. The method of analysis for
qualitative data is described in Section 3.6.3.4.

3.6.3.1 Research Question 1: Are scales from the Test Of Science Related Attitudes
(TOSRA), Science Laboratory Environment Inventory (SLEI), and
Technology-Rich Outcomes-Focused Learning Environment Inventory
(TROFLEI), as well as achievement items valid and reliable when used with
a sample of high school students taking biology in the US?

Regarding the first research question, the questionnaire administered to a sample of
American biology students had to be checked to ensure that it would be a valid and
reliable instrument with which to gather data for this population. To accomplish
this, the scales from the SLEI, TROFLEI, and TOSRA were subjected to factor
analysis to check the questionnaire’s structure. Principal axis factoring with
varimax rotation (using Kaiser normalization) was employed because of its ability to
organize components of the questionnaire by common dimensions. Correlation
coefficients, or factor loadings, between items from the SLEI, TROFLEI, TOSRA,
and scale total scores were inspected. The criteria for retention of any item were
that its factor loadings must be greater than 0.40 on its own scale and less than 0.40
on all other scales. The application of these criteria led to the removal of some
items prior to subjecting the refined scales to further validation and reliability
analyses.
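
The retention criterion itself is mechanical and can be sketched in code. The following Python fragment is illustrative only (the factor analysis in this study was run in SPSS) and assumes a hypothetical table of rotated factor loadings, with rows for items and columns for scales, plus a mapping from each item to its intended scale:

    import pandas as pd

    def items_to_retain(loadings, own_scale):
        """Keep an item only if it loads > 0.40 on its own scale
        and < 0.40 on every other scale."""
        retained = []
        for item, row in loadings.iterrows():
            own = abs(row[own_scale[item]])
            others = row.drop(own_scale[item]).abs()
            if own > 0.40 and (others < 0.40).all():
                retained.append(item)
        return retained

    loadings = pd.DataFrame({"Integration": [0.62, 0.35],
                             "Inquiry": [0.18, 0.47]},
                            index=["Item1", "Item2"])  # hypothetical loadings
    print(items_to_retain(loadings, {"Item1": "Integration",
                                     "Item2": "Integration"}))  # ['Item1']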

Next, the revised scales of the LAG measuring perceptions of the learning
environment (SLEI, TROFLEI), attitudes (TOSRA), and achievement were checked
for internal consistency reliability to determine the extent to which items in the same
scale measured a common dimension. To accomplish this, Cronbach alpha
coefficients, for two units of analysis (the individual student and the class mean)
were calculated. Scales with a Cronbach alpha coefficient greater than 0.60 were
considered to have satisfactory internal consistency reliability, as suggested by De
Vellis (1991).
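
For reference, the Cronbach alpha coefficient can be computed directly from item responses. The Python sketch below is hypothetical (the study itself used SPSS) and assumes a data frame with one column per item of a scale and one row per student:

    import pandas as pd

    def cronbach_alpha(items):
        """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)"""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

For the class-mean unit of analysis, the same function can be applied after averaging item responses within each class (e.g., items.groupby(class_ids).mean()).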

To ensure that each scale measured a unique aspect of the learning environment or
attitude, an index of discriminant validity (Campbell & Fiske, 1959), namely, the
mean correlation of a scale with all other scales, was determined for two units of
analysis – the student and the class.
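
This index is straightforward to compute from a table of scale scores, as the hypothetical Python sketch below illustrates (each column of scale_scores holds one LAG scale; the study itself used SPSS):

    import pandas as pd

    def discriminant_validity(scale_scores):
        """Mean correlation of each scale with all other scales;
        smaller values indicate more distinct scales."""
        corr = scale_scores.corr()
        # Subtract each scale's self-correlation (1.0) before averaging
        return (corr.sum() - 1.0) / (len(corr) - 1)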

The final method for validating the questionnaire involved confirming the ability of
the learning environment scales of the LAG to differentiate between classrooms.
The perceptions of students in the same class ought to be relatively similar as
compared with the perceptions of students in different classes. An ANOVA, with
class membership as the main effect, was used to check differences in the
perceptions of the students in different classrooms. Results for this test are reported
as an eta² value, which represents the proportion of variance in scale scores
accounted for by class membership.
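
Eta² is simply the between-class sum of squares divided by the total sum of squares. A hypothetical Python equivalent of the SPSS analysis, assuming a data frame with scale_score and class_id columns, is sketched below:

    import pandas as pd

    def eta_squared(df, score="scale_score", group="class_id"):
        grand_mean = df[score].mean()
        ss_total = ((df[score] - grand_mean) ** 2).sum()
        stats = df.groupby(group)[score].agg(["mean", "count"])
        ss_between = (stats["count"] * (stats["mean"] - grand_mean) ** 2).sum()
        # Proportion of variance in scale scores due to class membership
        return ss_between / ss_total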

Because the researcher selected the achievement items, additional validation of this
scale was determined by calculating a frequency distribution of the students’ scores
on this scale to check for a normal distribution, an indication of its ability to produce
the same pattern of scores in a larger population (Herrnstein & Murray, 1996).
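
This check can also be scripted. The hypothetical sketch below tabulates the frequency distribution of the 0–5 achievement scores and, as one possible formal check beyond visual inspection, applies the D'Agostino–Pearson normality test available in SciPy:

    import pandas as pd
    from scipy import stats

    def achievement_distribution(scores):
        freq = scores.value_counts().sort_index()  # frequency distribution
        statistic, p_value = stats.normaltest(scores)
        # A large p-value gives no evidence against normality
        return freq, p_value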

3.6.3.2 Research Question 2: Are there associations between the perceived
classroom learning environment and student outcomes of attitudes towards
and achievement in science?

For the second research aim regarding associations between perceived classroom
learning environment and the student outcomes of achievement in and attitudes
towards science, simple correlation and multiple regression analyses were used with
the individual student as the unit of analysis. Simple correlation (r) was used to
describe the bivariate relationship between each student outcome (attitude or
achievement) with each learning environment scale. Multiple regression analysis
was used to investigate the combined influence of the whole set of learning
environment scales on each student outcome, with the standardised regression
coefficient (β) being used to indicate the contribution of each learning environment
scale to the variance in student attitudes or achievement when other learning
environment scales were mutually controlled. The multiple correlation (R)
represented the multivariate association between student attitudes or achievement
(the criterion variables) and the set of all learning environment scales (the predictor
variables). The strength of associations was measured by the coefficient of multiple
determination (R²).
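
These analyses can be reproduced outside SPSS. The Python sketch below, using the statsmodels library, assumes a hypothetical data frame holding the learning environment scale scores and one outcome column; standardizing all variables first makes the fitted regression coefficients the betas described above:

    import pandas as pd
    import statsmodels.api as sm

    def associations(df, env_scales, outcome):
        # Simple correlations (r) of each environment scale with the outcome
        r = df[env_scales].corrwith(df[outcome])
        # Standardize so the fitted regression coefficients are betas
        cols = env_scales + [outcome]
        z = (df[cols] - df[cols].mean()) / df[cols].std(ddof=1)
        model = sm.OLS(z[outcome], sm.add_constant(z[env_scales])).fit()
        betas = model.params.drop("const")
        return r, betas, model.rsquared  # r, betas, and R²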

3.6.3.3 Research Questions 3 and 4: Is the use of virtual laboratories in high
school science classes effective in terms of students’ perceptions of their
learning environment, attitudes towards science, and academic
achievement? Is the use of virtual laboratories differentially effective for
males and females in terms of students’ perceptions of their learning
environment, attitudes towards science, and academic achievement?

To analyze data from the third and fourth research aims concerning the effectiveness
of using virtual laboratories in terms of academic achievement, attitudes towards
science, and perceptions of the learning environment, data were subjected to a two-
way multivariate analysis of variance (MANOVA) with the learning environment
scales from the SLEI and TROFLEI and student outcomes (attitudes and
achievement) as the dependent variables, and with instructional method and sex as
the independent variables. Because the multivariate test using Wilks’ lambda
criterion yielded statistically significant differences for the set of dependent
variables, the individual, univariate two-way ANOVA was interpreted separately for
each dependent variable (students’ perceptions of their learning environment, their
attitudes, and achievement), with the student as the unit of analysis. This analysis
enabled an exploration of all possible interactions between both independent
variables (instructional method and sex) for all three types of dependent variables
(students’ perceptions of their learning environment, their attitudes, and
achievement).
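
For readers wishing to replicate this step, the statsmodels library provides an equivalent of the SPSS procedure; the scale and column names in the sketch below are hypothetical:

    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    # df holds one row per student: dependent scale scores plus the two
    # independent variables, 'method' (VL vs non-VL) and 'sex'.
    def run_manova(df):
        maov = MANOVA.from_formula(
            "Integration + MaterialEnvironment + Enjoyment + Achievement"
            " ~ method * sex", data=df)
        # mv_test() reports Wilks' lambda for method, sex, and method:sex;
        # significant results justify follow-up univariate two-way ANOVAs.
        return maov.mv_test()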

Differences between instructional methods (with and without virtual laboratories)
and between different sexes were portrayed by the mean score for each learning
environment, attitude, and achievement scale. The mean score of each scale was
calculated by dividing the original scale score by the number of items in each scale
to allow for meaningful comparison of average scores across scales containing
differing number of items. The presence of a significant instruction-by-sex
interaction was interpreted to indicate the differential effectiveness for males and
females.
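
The conversion and the group comparison can be sketched as follows, assuming a hypothetical data frame of per-student scale totals together with method and sex columns; the scale names and item counts shown are illustrative:

    import pandas as pd

    ITEMS_PER_SCALE = {"Integration": 8, "TeacherSupport": 8}  # illustrative

    def group_scale_means(df):
        means = df.copy()
        for scale, n_items in ITEMS_PER_SCALE.items():
            means[scale] = means[scale] / n_items  # average item score
        # Cell means for the instruction-by-sex design
        return means.groupby(["method", "sex"])[list(ITEMS_PER_SCALE)].mean()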

Effect sizes were also reported for each comparison to quantify the magnitude of the
difference between two groups (i.e. either between instructional methods, or
between males and females). According to Vacha-Haase & Thompson (2004),
effect sizes indicate a more important aspect of a between-group difference than its
statistical significance. Because this difference between means is expressed in
standard deviation units, the effect size indicates that the average score in the
experimental group is different from the average score in the control group by a
certain number of standard deviations. In this study, two different types of effect
sizes were utilized: Cohen’s d and eta-squared (η²). Cohen’s d is the difference
between two sample means divided by the pooled standard deviation. Eta squared
is a measure of the strength of association (or effect size) based on the proportion of
variance accounted for by the effect of the independent variable on the dependent
variable.
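
Both effect sizes follow directly from their definitions, as in the minimal Python sketch below:

    import numpy as np

    def cohens_d(group1, group2):
        """Difference between two sample means in pooled-SD units."""
        g1, g2 = np.asarray(group1, float), np.asarray(group2, float)
        n1, n2 = len(g1), len(g2)
        pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) +
                             (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
        return (g1.mean() - g2.mean()) / pooled_sd

    # Eta squared reuses the variance decomposition shown earlier:
    # SS_between / SS_total for the effect of the independent variable.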

3.6.3.4 Analysis of Qualitative Data

Overall, analyses of data from interviews can complement the results of quantitative
analyses and provide a richer understanding by filling in gaps perceived in the
questionnaire data. In this study, qualitative data consisted of student and teacher
responses to semi-structured interview questions.

Therefore, responses from interviews, which were recorded and fully transcribed as
described in Section 3.6.2, constituted the raw qualitative data for further analysis.
These transcripts were then subjected to content analysis (Neuendorf, 2002) in
which content was coded, tallied, ranked, and analyzed for emergent themes. More
specifically, raw data were ‘chunked’ into color-coded categories and reported
statistically through well-accepted procedures, such as frequency counts, averages,
and percentages for recurring themes (Erickson, 2012; Wolcott, 1994). In
particular, responses to questions from the same scales of the LAG were grouped
together; however, the researcher also considered themes that emerged from
interviews that were beyond the dimensions measured by LAG scales. Responses
from interviews were analyzed as they became available and then re-analyzed as a
whole for emerging patterns. Analytic induction (Lindesmith, 1947) was also
undertaken in which the qualitative data were viewed and reviewed with various
lenses. As a result of analytic induction, the researcher modified some questions
during the interview and/or focused on certain questions more than others.
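
Once excerpts have been coded by hand, the tallying and ranking steps of the content analysis are mechanical, as the hypothetical Python sketch below illustrates:

    from collections import Counter

    coded_chunks = [("enjoyment", "student"), ("technical problems", "student"),
                    ("enjoyment", "student"), ("teacher support", "teacher")]

    theme_counts = Counter(theme for theme, speaker in coded_chunks)
    for theme, count in theme_counts.most_common():  # rank recurring themes
        share = 100 * count / len(coded_chunks)
        print(f"{theme}: {count} ({share:.0f}%)")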

Wolcott (1994, 2009) distinguishes between analysis and interpretation, with the former
referring to the description of the results of content analysis and the latter referring
to “efforts at sense-making, a human activity that includes intuition, past experience,
emotion – personal attributes of human researchers that can be argued endlessly but
neither proved nor disproved to the satisfaction of all” (2009, p. 30). Thus, the
description of content analysis, through statements from interviews that added
insight to the results from questionnaires, are embedded throughout the report of the
quantitative results in Chapter 4. Additionally, the emergent themes stemming from
responses to interview questions, as interpreted by the researcher, are summarized in
the discussion included in Chapter 5.

3.7 Limitations

Even when much time and effort are invested in carefully planning and designing a
study, methodological errors are unavoidable. This section discusses deviations
from the original design and how accommodations were incorporated. Section 3.7.1
describes methodological issues related to loss of sample, while Section 3.7.2
explores ambiguity concerning treatment conditions, Section 3.7.3 notes technical
difficulties, and Section 3.7.4 explains issues regarding administration of the LAG
questionnaire.

3.7.1 Loss of Sample

In general, a larger sample allows for both the increased detection of statistically
significant effects and the generalization of these effects to larger populations.
Therefore, if the sample for the current study had been larger, results from
quantitative data possibly might have provided more accurate insights into the
effectiveness of virtual laboratories.

Originally, the study was designed for a larger sample (~800 students), which was
made logistically possible due to placement of the researcher in a large school
environment with 21 equally diverse classes all following the same curriculum.
However, permission to conduct the study was overturned by the superintendent of
the district after implementation had already begun. Therefore, the researcher was
transferred to a new school containing 43 students eligible to participate in place of
the original 640 eligible, and thus potential, student participants.

Additionally, as described in Section 3.5.4 regarding electronic versions of the
questionnaire, the researcher used a form available for free through the Internet to
any Google user. While this initially worked well, on the day when many students
(at least 100) were to complete the LAG, the link was dysfunctional. Google
acknowledged this error and fixed it within two days’ time, which allowed some
students to complete the survey. However, it was too late for most of the students to
complete the questionnaire because school was no longer in session and it was
difficult to track students down via email. Unfortunately, this error also affected the
timetable for collecting qualitative data because the last few days of the semester
were spent trying to sort out the technological issue and copying and mailing paper
versions of the questionnaire instead of contacting students to interview them.

Nevertheless, the current sample size was large enough to determine validity and
reliability of the LAG questionnaire even though a larger sample size could have
better informed the quantitative results.

3.7.2 Treatment Conditions

Another issue confounding the results for the effectiveness of virtual laboratories
was a certain degree of ambiguity about the nature of the treatment conditions.
While the demarcation of teaching methods for the experimental and control groups
was clear to the researcher, it was perhaps less clear to the participating teachers.
The researcher wished to grant the participating teachers as much freedom and
independence as possible in the implementation of the study so as not to interfere
with their standards of teaching and preparation of students for end-of-year
standardized examinations in biology. However, the lack of uniformity both in how
teachers taught classes in the control condition (i.e. without use of virtual
laboratories) and in the experimental condition (i.e. use of virtual laboratories)
proved to confuse students’ perceptions of the definition of a ‘virtual laboratory’.
For instance, if a teacher also used an educational computer game with students in
the experimental group, the students might have thought that such a computer game
was also a ‘virtual laboratory.’

A number of teachers included other Internet-based activities, such as simulations,
games, animations, and Webquests, in their teaching of control classes. In some
instances, hands-on laboratories were conducted with students in experimental
classes who were only supposed to complete virtual laboratories. Two teachers had
their VL classes complete four virtual laboratories within two weeks, which perhaps
caused fatigue and boredom in students because they were overexposed to the same
medium.

At one point, a participating teacher stated that “perhaps the line between virtual and
actual is getting blurry!” This statement indicates the lack of clarity about the
definition of a virtual laboratory amongst participants. While not wanting to burden
participating teachers with theoretical discussions concerning the definition of a
virtual laboratory, perhaps the researcher should have more clearly restricted what
sorts of activities should have been employed or avoided in VL classes and non-VL
classes. This is further discussed in Chapter 5 as part of suggestions for further
research.

Furthermore, while variables such as students’ intelligence, age, and socio-economic
status were spread relatively similarly amongst the two groups, such differences still
could account for some variability in the results. In the future, further statistical
analyses could be conducted to investigate the influence of these differences.

3.7.3 Technical Issues

A key factor that might have affected the outcomes of this study was the availability
of resources. Participants in the experimental condition required computers in good
working order with an uninterrupted Internet connection in order to complete virtual
laboratories. Although participating teachers initially indicated that their schools
provided these resources, as often happens in large school environments where
resources are constrained, access to these materials was not without problems.
Some teachers and students reported that a particular computer or Internet link to a
virtual laboratory was not in good working order and that students would have to be
paired up. In other instances, some students could not
complete the virtual laboratory because of a lapse in the Internet connection.
Perhaps the experience of completing a virtual laboratory in this manner might have
influenced the students’ responses to LAG items measuring the learning
environment, attitudes towards science, and achievement.

3.7.4 Instrument Administration

Finally, the administration of the questionnaire also presented some methodological
issues relevant to this evaluation of the effectiveness of virtual laboratories. First,
as noted in Section 3.7.1, the link to the electronic version of the questionnaire was
unavailable for a few days at a key time in the implementation of the study. Second,
some students did not respond to items consistently or left large sections of the LAG
blank, which is predictable amongst students of this age group.

Most detrimental to the administration of the questionnaire was the lack of clarity
about the terms used to refer to virtual laboratories in various items. The subject of
the questionnaire item was often generalized so that students in both VL and non-
VL classes could respond. However, in trying to avoid introducing bias, the
researcher overcompensated by generalizing terms (such as ‘activity’ instead of
‘virtual laboratory’), which possibly gave rise to confusion amongst students, whose
recorded responses might have differed had a more specific term been used in the
item. Therefore, a degree of clarity could have been lost for the sake of integrity.
Perhaps a different version of the questionnaire should have been administered to
participants in the VL classes and non-VL classes for simplification purposes, as
will be suggested in Section 5.4.

3.8 Summary

This chapter explained the methodological details of my evaluation of the
effectiveness of virtual laboratories in terms of students’ perceptions of their
learning environment, attitudes towards science, and achievement in science.

The study used a quasi-experimental design to compare students in 10 classes that
engaged in virtual laboratories with students in 11 classes that did not. More than 20
different virtual laboratories related to the topic of genetics were chosen by the
researcher for their design and use of inquiry, and related worksheets were provided.
Teachers were instructed to use at least four such virtual activities with the
experimental group while students in the control group continued learning and
experimenting in their normal fashion. The treatment period lasted from two to 12
weeks.

This study combined quantitative and qualitative methods of data collection.
Quantitative methods included the use of a questionnaire called the Laboratory
Assessment in Genetics (LAG) administered to all participants at the end of the
treatment period. The scales were adopted from previously validated questionnaires
that measure students’ perceptions of the learning environment, such as the Science
Laboratory Environment Inventory (SLEI) and the Technology-Rich Outcomes-
Focused Learning Environment Inventory (TROFLEI), in addition to scales
measuring students’ attitudes towards science from the Test Of Science Related
Attitudes (TOSRA) and an achievement scale with items borrowed from
standardized biology examinations. For a sample of 322 US biology students,
learning environment and attitude scales were tested for validity and reliability,
including factor analysis, internal consistency reliability (Cronbach alpha
coefficients), discriminant validity (mean correlation with other scales), and the
ability of the learning environment scales to differentiate between classrooms
(ANOVA).

To investigate associations between perceived classroom learning environment and
the student outcomes of achievement in and attitudes towards science, simple
correlation and multiple regression analyses were conducted.

Finally, concerning the effectiveness of using virtual laboratories, data from the
LAG questionnaire were subjected to a two-way multivariate analysis of variance
(MANOVA) with the learning environment scales and student outcomes (attitudes
and achievement) as the dependent variables, and with instructional method and sex
as the independent variables. Then, when Wilks’ lambda criterion revealed
statistically significant findings for the set of dependent variables as a whole, the
univariate two-way ANOVA was interpreted separately for each dependent variable
(students’ perceptions of their learning environment, their attitudes, and
achievement). To quantify the magnitude of the difference between two groups (i.e.
either between instructional methods, or between males and females), effect sizes
were also calculated. Analyses explored all possible interactions between the two
independent variables (instructional method and sex) for each type of dependent
variable (learning environment, attitudes, and achievement).

After quantitative data analysis, qualitative data were collected from six students
and three teachers who were interviewed to explore underlying themes that lent
further insight into the quantitative data. For the purposes of this study, ten open-
ended questions were constructed based on the LAG questionnaire to use in semi-
structured interviews. Responses from interviews were recorded, fully transcribed,
and subjected to content analysis and analytic induction.

Methodological limitations of this study included sample loss, confusion regarding
the treatment conditions, technical issues, and ambiguity of some of the language in
the LAG questionnaire.

Chapter 4

Data Analyses and Results

“It is a capital mistake to theorize before one has data. Insensibly one begins to
twist facts to suit theories, instead of theories to suit facts.” – Arthur Conan Doyle

4.1 Introduction

This chapter reports and interprets the findings of this study. Each of the research
questions is addressed by analyzing data and then determining whether the
hypothesis for that question is supported.

As described in Chapter 3, the majority of this study was based on quantitative data
collected using the Laboratory Assessment in Genetics (LAG). Qualitative data
stemming from semi-structured interviews were used in an attempt to fill gaps in the
quantitative data, and to provide a more holistic view of the effectiveness of virtual
laboratories.

This chapter first presents results for validation of the instrument used to collect
quantitative data, the LAG. The LAG contains 74 items in nine scales adapted from
several other validated questionnaires: the Science Laboratory Environment
Inventory (SLEI), the Technology-Rich Outcomes-Focused Learning Environment
Inventory (TROFLEI), the Test of Science-Related Attitudes (TOSRA), and
achievement items from state standardized examinations in Biology. More
specifically, two scales (Enjoyment and Inquiry) were adapted from the TOSRA
(Fraser, 1981) to assess students’ attitudes towards science, in general. These scales
originally included some negative items but were modified to be positively worded
in order to increase the readability and clarity of the LAG for students. Some items
were also replaced with new items modeled after the original items contained in the
TOSRA and wording was generalized to include all types of activities in science
lessons.

In order to measure students’ perceptions of their learning environment, two scales
(Integration and Material Environment) were modified from the original SLEI
(Fraser, Giddings, & McRobbie, 1992) by rewording reverse items and adding a few
similarly-worded items to maintain a consistent number of items (eight) per scale.
Items from the Material Environment scale were also reworded to include all
possible ‘materials’ used in science laboratories, namely, computers and internet
service that enable normal functioning of virtual laboratories. Similarly, four scales
adapted from the TROFLEI, those of Teacher Support, Task Orientation,
Investigation, and Differentiation, were also included in the LAG to measure
students’ perceptions of their learning environment. Wording of the Investigation
and Differentiation items was generalized to include hands-on activities as well as
computer laboratory activities. Because the scales used on the LAG were modified
from their original versions, they required validation as part of this study.

In order to assess readability, the LAG was first given to a pilot sample of students
and, based on their comments, the number of items and the item wording were
adjusted. In the main study that took place one year later, the LAG was
administered to 322 students, aged 13–18 years, in 12 US public school classes from
Massachusetts, Pennsylvania, and Virginia.

Qualitative data were obtained from this same sample; students and teachers from
these 12 classes were given the opportunity to be interviewed by the researcher and
their responses were recorded, transcribed and analyzed. These comments
accompany the quantitative data, in an attempt to further explain the results, and
they are interspersed throughout Section 4.4.

Therefore, this chapter reports results for validation of the various parts of the LAG
in Section 4.2, for associations between perceptions of the learning environment
(SLEI, TROFLEI) and attitudes (TOSRA) and achievement in Section 4.3, and for
the effectiveness of virtual laboratories in Section 4.4, including results for the
differential effectiveness of virtual laboratories for males and females.

4.2 Validity and Reliability of Learning Environment, Attitude, and
Achievement Scales Composing the LAG

In order to address the first research question below, the scales composing the LAG
were administered to 322 US students, aged 13–18 years, in 12 classes.

Research Question 1: Are scales from the Test of Science-Related Attitudes
(TOSRA), Science Laboratory Environment Inventory (SLEI), and
Technology-Rich Outcomes-Focused Learning Environment Inventory
(TROFLEI) questionnaires valid and reliable when used with a sample of
high school students taking biology in the US?

This section reports the factor structure (4.2.1), internal consistency reliability
(4.2.2), and discriminant validity (4.2.3) for learning environment scales and attitude
scales. Section 4.2.4 focuses on the ability of the learning environment scales to
differentiate between classrooms. Validation of the achievement section of the
LAG, comprising the last 10 items, is also reported (Section 4.2.5).

4.2.1 Factor Structure of Learning Environment and Attitude Scales

Because items were modified from the original scales from which they were
adapted, the internal structure of the various learning environment and attitude
scales was examined to ensure validity. Principal axis factoring with varimax
rotation (using Kaiser normalization) was employed to inspect the internal structure
of the 64-item survey containing learning environment and attitude scales when used
with the sample in this study. Principal axis factoring analyses inter-relationships
(variability) between all items in the questionnaire and categorizes them by their
common underlying dimensions or factors. Each dimension serves as a construct for
further analysis in this study. The criteria for retention of any item in its scale were
a factor loading greater than 0.40 on its own scale and less than 0.40 on all other
scales. Varimax rotation was applied because of its common use in providing a
scheme for orthogonal rotation; it minimizes the complexity of the components by
making the large loadings larger and the small loadings smaller in order to identify
each variable with a single factor.
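
By way of illustration (independently of the statistical package actually used in
this study, and with hypothetical file and column names), this step can be sketched
in Python using the factor_analyzer package:

    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    items = pd.read_csv("lag_items.csv")   # 64 item-response columns, Q1..Q64

    # Principal axis factoring with an orthogonal (varimax) rotation.
    fa = FactorAnalyzer(n_factors=8, method="principal", rotation="varimax")
    fa.fit(items)
    loadings = pd.DataFrame(fa.loadings_, index=items.columns)

    # Retain an item only if it loads above 0.40 on exactly one factor
    # (its own scale) and below 0.40 on every other factor.
    retained = (loadings.abs() > 0.40).sum(axis=1) == 1
    print(loadings[retained].round(3))
    print("proportion of variance:", fa.get_factor_variance()[1].round(4))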

Table 4.1 provides the factor loadings for these eight attitude and learning
environment scales. Item numbers shown in the table refer to the question numbers
in the questionnaire (Appendix A). Table 4.1 also reports the percentages of
variance and eigenvalues for each scale.

Table 4.1 Factor Analysis Results for Attitude and Learning Environment Scales

Factor (Scale)         Retained Items (factor loading on own scale)
Inquiry                Q1 (.722), Q3 (.413), Q4 (.670), Q5 (.576), Q6 (.658),
                       Q8 (.686)
Enjoyment              Q10 (.712), Q11 (.685), Q12 (.617), Q13 (.520), Q14 (.547),
                       Q15 (.664), Q16 (.651)
Integration            Q18 (.560), Q19 (.494), Q20 (.662), Q22 (.672), Q23 (.489),
                       Q24 (.572)
Material Environment   Q25 (.589), Q26 (.512), Q27 (.401), Q28 (.613), Q29 (.717),
                       Q30 (.426), Q31 (.452)
Teacher Support        Q34 (.721), Q35 (.741), Q36 (.735), Q37 (.656), Q38 (.549),
                       Q39 (.724), Q40 (.717)
Task Orientation       Q41 (.702), Q42 (.695), Q43 (.636), Q44 (.751), Q45 (.670),
                       Q46 (.594), Q47 (.650), Q48 (.721)
Investigation          Q49 (.588), Q50 (.641), Q51 (.719), Q52 (.481), Q54 (.744),
                       Q55 (.646), Q56 (.656)
Differentiation        Q59 (.489), Q60 (.774), Q61 (.436), Q62 (.797), Q63 (.861),
                       Q64 (.783)

% Variance (per factor, in the order listed above):
24.75, 7.48, 5.58, 4.47, 3.67, 3.44, 2.88, 2.78
Eigenvalue (per factor, in the order listed above):
15.84, 4.72, 3.57, 2.86, 2.35, 2.20, 1.84, 1.78

N = 322 students in 12 classes.
Factor loadings less than 0.40 have been omitted from the table.
Items 2, 7, 9, 17, 21, 32, 33, 53, 57, and 58 were removed from this analysis.

Factor analysis resulted in the retention of the original eight learning environment
and attitude scales of the LAG. No more than two items were removed per scale.
Therefore, the items retained supported the factorial validity of the scales modified
from the TOSRA, SLEI, and TROFLEI when used with the sample of 322 students
in this study.

Ten questions were eliminated from the learning environment and attitude scales for
further analysis because they had a factor loading lower than 0.40 on their own scale
and/or greater than 0.40 on any other scale. The following items were removed in
order to improve the internal consistency reliability and factorial validity: Questions
2 and 7 from Inquiry, Question 9 from Enjoyment, Questions 17 and 21 from
Integration, Question 32 from Material Environment, Question 33 from Teacher
Support, Question 53 from Investigation, and Questions 57 and 58 from
Differentiation. For only the scale of Task Orientation, all eight items from the
original version were retained.

Table 4.1 indicates that the optimal factor solution occurred for the set of 54 items.
The percentage of variance for the different scales ranged from 2.78% for
Differentiation to 24.75% for Inquiry, with a total variance of 55.05% for all scales.
The eigenvalues ranged from 1.78 to 15.84. Results from the factor analysis
strongly supported the factorial validity of the scales from the TOSRA, SLEI, and
TROFLEI for this study’s sample of 322 students. These findings replicate other
validation studies (Aldridge & Fraser, 2003; Fraser, 1981; Fraser, Giddings, &
McRobbie, 1992, 1995), as discussed previously in Chapter 2.

4.2.2 Internal Consistency Reliability of Learning Environment and Attitude
Scales

Internal consistency reliability is a measure of the extent to which items in the same
scale measure a common construct. Cronbach’s alpha coefficient was used as the
index of internal consistency for this study. After the removal of invalid items from
the factor analysis, the alpha coefficient was calculated for the revised 54-item
questionnaire measuring learning environment perceptions and attitudes towards
science, for two units of analysis (the individual student and the class mean).
Scales with a Cronbach alpha coefficient greater than 0.60 were considered to have
adequate internal consistency reliability (De Vellis, 1991).
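
For a scale of k items, the alpha coefficient equals k/(k−1) multiplied by
(1 − the sum of the item variances divided by the variance of the scale totals).
A minimal sketch of this computation (with a hypothetical data layout) is:

    import pandas as pd

    def cronbach_alpha(scale: pd.DataFrame) -> float:
        # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
        k = scale.shape[1]
        item_var = scale.var(axis=0, ddof=1).sum()
        total_var = scale.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    # Individual as unit of analysis: pass the raw item columns of one scale.
    # Class mean as unit of analysis: average the items within each class
    # first, e.g. cronbach_alpha(items.groupby(class_ids).mean()).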

Table 4.2 Scale Mean, Standard Deviation, Internal Consistency (Cronbach Alpha
Reliability), Discriminant Validity (Mean Correlation with other Scales), and
Ability to Differentiate between Classrooms (ANOVA Results) for Learning
Environment and Attitude Scales

                        No. of  Unit of                      Alpha        Mean Corr. with  ANOVA
Scale                   Items   Analysis     Mean    SD      Reliability  other Scales     Eta²
Integration             6       Individual   3.76    0.60    0.83         0.40             0.12***
                                Class Mean   3.90    0.22    0.96         0.64
Material Environment    7       Individual   3.76    0.61    0.81         0.36             0.07***
                                Class Mean   3.87    0.20    0.85         0.41
Teacher Support         7       Individual   3.67    0.80    0.91         0.36             0.17***
                                Class Mean   3.91    0.35    0.98         0.58
Task Orientation        8       Individual   3.92    0.71    0.91         0.30             0.07***
                                Class Mean   3.99    0.29    0.97         0.25
Investigation           7       Individual   3.45    0.74    0.90         0.41             0.14***
                                Class Mean   3.64    0.30    0.98         0.63
Differentiation         6       Individual   2.79    0.85    0.86         0.16             0.23***
                                Class Mean   2.85    0.36    0.95         0.20
Inquiry                 6       Individual   3.53    0.74    0.81         0.23
                                Class Mean   3.61    0.25    0.93         0.43
Enjoyment               7       Individual   3.51    0.80    0.90         0.40
                                Class Mean   3.73    0.34    0.96         0.54
Achievement             10      Individual   2.83    1.38    0.76
                                Class Mean   2.96    0.99    0.96

***p<0.001
N = 322 students in 12 classes.

Table 4.2 shows that the Cronbach alpha coefficients for scales measuring students’
perceptions of their learning environment ranged from 0.81 to 0.91 with the
individual as the unit of analysis, and from 0.85 to 0.98 with the class mean as the
unit of analysis. Internal consistency reliability (Cronbach alpha coefficient) for
the two attitude scales adapted from the TOSRA was 0.81 and 0.90 with the
individual as the unit of analysis, and 0.93 and 0.96 with the class as the unit of
analysis. These high reliability estimates
are in agreement with past studies using scales from the TOSRA (Fraser, 1981; Teh
& Fraser, 1994).

These internal consistency reliability results are consistent with other studies using
scales from the SLEI (Fraser, 1998a; Fraser, Giddings, & McRobbie, 1995;
Lightburn & Fraser, 2007; Maor & Fraser, 1996; Martin-Dunlop & Fraser, 2007),
the TROFLEI (Aldridge, Dorman, & Fraser, 2003, 2004; Gupta & Koul, 2007) and
the TOSRA (Aldridge & Fraser, 2003; Fraser, 1981; Fraser, Giddings, & McRobbie,
1995; Koul, Fisher, & Shaw, 2011; Wolf & Fraser, 2008).

In general, reliability estimates in Table 4.2 are higher when the class mean was
used as the unit of analysis, as evidenced in other studies (Zandvliet & Fraser,
2005). Because all scales had Cronbach alpha coefficients greater than 0.60, they
demonstrated satisfactory internal consistency reliability for learning environment
and attitude scales.

4.2.3 Discriminant Validity of Learning Environment and Attitude Scales

The purpose of conducting discriminant validity analysis for the learning
environment and attitude scales was to check whether each scale measured a unique
aspect of the learning environment or attitude towards science. That is, discriminant
validity is a measure of whether scales that ought not to be related to one another are
indeed not related (Campbell & Fiske, 1959). To calculate an index of discriminant
validity, the mean correlation of each scale with all other scales was used. Both the
individual and the class were used as units of analysis as reported in Table 4.2.
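
A sketch of how this index can be computed (assuming a hypothetical file holding
one column of scale scores per student) is:

    import pandas as pd

    scales = pd.read_csv("lag_scale_scores.csv")   # one column per scale score

    corr = scales.corr()
    # Subtract each scale's self-correlation of 1, then average over the
    # remaining scales to obtain the discriminant validity index.
    mean_corr = (corr.sum() - 1) / (corr.shape[0] - 1)
    print(mean_corr.round(2))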

Discriminant validity results, in Table 4.2, show that most scales were reasonably
unique in the dimension that each assessed. For the classroom learning environment
scales, the mean correlation of a scale with the other scales varied from 0.16 to 0.41
with the individual as the unit of analysis and from 0.20 to 0.64 with the class mean
as the unit of analysis. For scales that measured attitudes towards science, the mean
correlations varied from 0.23 to 0.40 with the individual as the unit of analysis and
from 0.43 to 0.54 with the class mean as the unit of analysis. These findings suggest
that raw scores on these scales measure relatively unique aspects of the learning
environment and attitudes, despite some overlap. However, the factor analysis
results reported in Section 4.2.1 attest to the independence of factor scores.
Discriminant validity results are in agreement with findings from past studies using
some of the same scales from the SLEI (Fraser, 1998a; Fraser, Giddings, &
McRobbie, 1992, 1995; Lightburn & Fraser, 2007; Maor & Fraser, 1996; Martin-
Dunlop & Fraser, 2007), TROFLEI (Aldridge, Dorman, & Fraser, 2003, 2004;
Gupta & Koul, 2007), and TOSRA (Fraser, 1981; Teh & Fraser, 1994; Wolf &
Fraser, 2008).

4.2.4 Ability of Learning Environment to Differentiate Between Classrooms

An ANOVA, with class membership as the main effect, was used to determine the
ability of each learning environment scale to differentiate between the perceptions of
the students in different classrooms. Students in the same class should have scores
on learning environment scales that are relatively similar to each other, but which
are different from the scores of students who are in different classes. Table 4.2
reports the ANOVA results, including eta² values to represent the proportion of
variance in scale scores amongst individual students accounted for by class
membership. Eta² scores ranged from 0.07 to 0.23 for scales measuring students’
perceptions of the learning environment as measured by the SLEI and TROFLEI.
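
Eta² here is the between-class sum of squares divided by the total sum of squares;
a sketch for a single scale (with hypothetical column names) using statsmodels is:

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.read_csv("lag_scale_scores.csv")   # includes a class_id column

    # One-way ANOVA with class membership as the main effect.
    model = ols("Integration ~ C(class_id)", data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)

    # Proportion of variance accounted for by class membership.
    eta_sq = table.loc["C(class_id)", "sum_sq"] / table["sum_sq"].sum()
    print(f"eta² = {eta_sq:.2f}, p = {table.loc['C(class_id)', 'PR(>F)']:.4f}")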

Overall, the ANOVAs revealed statistically significant differences (p<0.001)
between student perceptions in different classes for all learning environment scales,
indicating the ability of scales from the SLEI and TROFLEI to differentiate between
different classrooms. These results are consistent with those from other studies
using the same scales from the SLEI (Fraser, 1998a; Fraser, Giddings, & McRobbie,
1992, 1995; Lightburn & Fraser, 2007; Maor & Fraser, 1996; Martin-Dunlop &
Fraser, 2007), and TROFLEI (Aldridge, Dorman, & Fraser, 2003, 2004; Gupta &
Koul, 2007).

4.2.5 Validation of Achievement Section of the LAG

The scale for achievement was developed by the researcher to assess students’ overall
content knowledge of genetics. The scale included 10 items from valid and reliable
standardized examinations in Biology from the following states in which the
majority of students sampled in this study attended school: New York,
Massachusetts, and Virginia (see Appendix A).

To check the achievement scale for internal consistency reliability, an alpha
coefficient was calculated. This analysis resulted in an alpha reliability coefficient
of 0.76 with the individual as unit of analysis and of 0.96 with the class mean as unit
of analysis, as shown in Table 4.2. These results indicate that the 10-item
achievement scale was reliable.

Other methods were also employed to determine the validity of the achievement
scale. According to the ‘Bell Curve’ theory, scores on any measure of achievement
result in normal distributions for large populations (Herrnstein & Murray, 1996).
Therefore, if valid, this scale should show a relatively normal distribution for the
group of students in this study.

[Histogram: Number of Students (0–60) by Achievement Score (0–10)]
Figure 4.1 Frequency Distribution for Achievement (Mean = 5.67, SD = 2.76, N = 322)

The histogram in Figure 4.1 shows the distribution of achievement scores for all 322
students in this study. The pattern illustrated in the histogram is similar to typical
patterns of normal distribution (Herrnstein & Murray, 1996), except that more
students than expected received an achievement score of 10. The divergence from a
normal distribution might be explained by the relatively small sample size in this
study.
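
Such visual inspection could also be supplemented by a formal normality test; for
example (assuming per-student totals in a hypothetical input file), the
D'Agostino–Pearson test in SciPy tests the null hypothesis that scores are
normally distributed:

    import numpy as np
    from scipy import stats

    scores = np.loadtxt("achievement_totals.txt")   # hypothetical 0-10 totals

    print(f"mean = {scores.mean():.2f}, sd = {scores.std(ddof=1):.2f}")

    # D'Agostino-Pearson omnibus test; a small p-value indicates a
    # statistically detectable departure from normality.
    stat, p = stats.normaltest(scores)
    print(f"statistic = {stat:.2f}, p = {p:.3f}")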

As well, statistical data are available online for students who took the biology
Massachusetts Comprehensive Assessment System (MCAS) throughout the state of
Massachusetts. As two items from the achievement scale were borrowed from this
examination, the researcher compared the percentage of students in this study’s
sample that correctly answered the questions with the percentage of students in
Massachusetts that correctly answered these same questions, as another measure of
validity. Results indicate that 68% of students (n= 53,296) taking the biology
MCAS in 2009 correctly answered a genetics-related item (Massachusetts
Comprehensive Assessment System (MCAS), 2009), whereas 70% of participants in
my study correctly answered the same item taken from that examination. For
another genetics item borrowed from the MCAS, 61% of those taking the
examination answered correctly, as did 61% of students in my study. These results show that
student responses in my study were similar to those of a larger population. This
finding coupled with the near normal distribution of achievement scores displayed in
Figure 4.1 supports the validity of the achievement scale.

4.3 Associations Between Learning Environment, Attitudes, and Achievement

Research Question 2: Are there associations between the perceived
classroom learning environment and student outcomes of attitudes towards
and achievement in science?

To answer the second research question, simple correlation and multiple regression
analyses, with the individual as the unit of analysis, were used to investigate the
relationship between student perceptions of the classroom learning environment and
the student outcomes of attitude towards science and achievement in genetics.
Simple correlation (r) was used to consider the bivariate relationship between each
student outcome (attitude or achievement) and each learning environment scale of
the Laboratory Assessment in Genetics (LAG). Multiple regression analysis was
applied to investigate the combined influence of the whole set of learning
environment scales on each student outcome, with the multiple correlation (R)
indicating the multivariate association between an outcome and the set of learning
environment scales. The standardized regression coefficient (β) was used to indicate
the contribution of each learning environment scale to the variance in student
attitude or achievement when other learning environment scales were controlled.
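
Both analyses can be sketched as follows (column names are hypothetical):
standardizing the predictors and the outcome turns ordinary least-squares slopes
into the betas, and the square root of R² gives the multiple correlation R:

    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("lag_scale_scores.csv")
    env = ["Integration", "MaterialEnvironment", "TeacherSupport",
           "TaskOrientation", "Investigation", "Differentiation"]

    # Simple correlations (r) of each environment scale with one outcome.
    print(df[env].corrwith(df["Enjoyment"]).round(2))

    # Multiple regression on standardized variables yields betas and R.
    cols = env + ["Enjoyment"]
    z = (df[cols] - df[cols].mean()) / df[cols].std(ddof=1)
    fit = sm.OLS(z["Enjoyment"], sm.add_constant(z[env])).fit()
    print(fit.params.round(2))                 # standardized betas
    print(f"R = {fit.rsquared ** 0.5:.2f}")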

Analysis to uncover associations involved the responses of 322 American science
students to a refined eight-scale, 54-item version of the attitude and learning
environment questionnaire, together with the additional achievement scale (as
described in Section 3.4.1). For these analyses, the attitude and achievement scales
served as the dependent variables, while the learning environment scales served as
the independent variables.

This section reports the results for associations between the learning environment
and student attitudes (Section 4.3.1) and achievement (Section 4.3.2). Table 4.3
shows simple correlations (r), standardized regression coefficients (β), and multiple
correlations (R) used to determine the extent of these associations.

Table 4.3 Associations between Learning Environment Questionnaire Scales and
Attitudes and Achievement in Terms of Simple Correlations (r), Multiple
Correlations (R) and Standardized Regression Coefficients (β)

                              Inquiry           Enjoyment         Achievement
Learning Environment Scale    r        β        r        β        r        β
Integration                   0.30**   0.10     0.50**   0.11     0.21**   0.18*
Material Environment          0.34**   0.21**   0.54**   0.22**   0.20**   0.14*
Teacher Support               0.51**   0.13*    0.58**   0.29**   0.18**   0.08
Task Orientation              0.25**   0.08     0.48**   0.18**   0.06     0.05
Investigation                 0.37**   0.21**   0.51**   0.14*    0.05     0.02
Differentiation               0.22**   0.10     0.17**   0.00     -0.16**  0.21**
Multiple Correlation (R)      0.45***           0.70***           0.34***
R²                            0.20              0.49              0.12
*p<0.05, **p<0.01, ***p<0.001
N = 322 students in 12 classes

4.3.1 Associations Between Learning Environment and Attitudes

Table 4.3 shows that each learning environment scale correlated significantly
(p<0.01) and positively with each of the student attitudes (Inquiry and Enjoyment),
indicating that positive perceptions of the learning environment are aligned with
improved students’ attitudes towards science. The learning environment scale of
Teacher Support showed the highest correlation with both attitude scales of Inquiry
(0.51) and Enjoyment (0.58) and the scale of Differentiation showed the lowest
correlation with both attitude scales of Inquiry (0.22) and Enjoyment (0.17).

As shown in Table 4.3, the multiple correlation coefficient (R) between the six
learning environment scales and attitude was 0.45 for the Inquiry scale and 0.70 for
the Enjoyment scale. These values were statistically significant (p<0.001),
suggesting that student attitudes toward science were related to student perceptions
of their learning environment. The coefficient of determination (R2), which is a
measure of the proportion of variance in attitudes explained by learning environment
scales, was 0.20 for Inquiry and 0.49 for Enjoyment. This means that
learning environment scales were stronger predictors of Enjoyment than of Inquiry.

In order to further identify which of the six learning environment scales accounted
for variance in student attitudes, when the other five scales were controlled, the
standardized regression coefficients (), shown in Table 4.3, were examined. Three
learning environment scales (Material Environment, Teacher Support, and
Investigation) were statistically significant (p<0.05), positive, independent
predictors of both attitude scales, whereas two scales (Integration and Task
Orientation) were statistically significant, positive, independent predictors of only
the Enjoyment attitude scale. The learning environment scale of Differentiation was
not a statistically significant independent predictor of either attitude scale.

Generally, these analyses reveal that student perceptions of their learning
environment were positively related to student attitudes, therefore suggesting that
improving conditions of the classroom learning environment might enhance
students’ attitudes towards science. These associations replicate the results of past
studies (Fraser, 2012; Lightburn & Fraser, 2007; Martin-Dunlop & Fraser, 2007).

4.3.2 Associations Between Learning Environment and Achievement

The simple correlation analysis reported in Table 4.3 reveals statistically significant
(p<0.01) and positive associations between three learning environment scales
(Integration, Material Environment, and Teacher Support) and achievement, while
the scale of Differentiation had a statistically significant (p<0.01) and negative
correlation with achievement. The learning environment scales of Task Orientation
and Investigation showed no statistically significant correlation with achievement.

As shown in Table 4.3, the multiple correlation between the six learning
environment scales and achievement was 0.34. This value was statistically
significant (p<0.001), suggesting that there is a multivariate relationship between
achievement and student perceptions of their learning environment.

In order to identify which of the six learning environment scales accounted for the
variance in student achievement, when the other five scales were controlled,
regression coefficients were inspected. Standardized regression coefficients (β)
indicated that the learning environment scales of Integration, Material Environment,
and Differentiation uniquely accounted for a significant (p<0.05) amount of variance
in academic achievement. On the other hand, Teacher Support, Task Orientation,
and Investigation scales were not statistically significant independent predictors of
achievement.

The negative simple correlation between Differentiation and achievement suggests
that the more differentiated the classroom environment, the less students achieved.
Past studies report mixed results: Aldridge et al. (2003, 2008) found a positive but
non-significant association, whereas Gupta and Koul (2007) found a negative,
likewise non-significant, association between Differentiation and academic
achievement. Perhaps these students were not familiar with how differentiation was
applied in their classroom settings, and they might have feared that differentiated
assignments would not result in greater achievement due to a perception that
teachers accommodate under-achievers.

In another attempt to explain this finding, the six teachers involved in the study were
consulted regarding the amount and type of actual differentiation in their classrooms
during the implementation of the study. They admitted that not much differentiation
was provided. Therefore, perhaps the questionnaire items asking about
differentiation confused students, producing the mixed results reported in this
section.

However, as noted above, Differentiation did prove to be a statistically significant
(p<0.01), positive independent predictor of student achievement when the other
predictor variables were controlled, as indicated by its standardized regression
coefficient (β). Thus, the bivariate relationship between differentiation and
achievement and the multivariate contribution for differentiation on achievement
present conflicting results. This is known as the ‘Suppressor Effect’, often found
with the addition of predictor variables that increase the value of R² and lower the
error term, resulting in inaccurate statistical significance of a prediction; this effect
is characteristic of low sample power (Thompson & Levine, 1997). Therefore,
results from this study concerning the relationship between Differentiation and
achievement are inconclusive.

Overall, the results of the correlation analyses in Table 4.3 show that most learning
environment scales were positively correlated with the student outcomes of attitude
and achievement, which means that positive perceptions of the learning environment
are linked with improved attitudes towards science and better achievement. Such
links between the learning environment and students’ attitudes and achievement
replicate past studies (Fraser, 2012; Lightburn & Fraser, 2007; Martin-Dunlop &
Fraser, 2007).

4.4 Effectiveness of Virtual Laboratories and Their Differential Effectiveness
for Different Sexes in Terms of Learning Environments, Attitudes, and
Achievement

To answer the third and fourth research questions regarding the effectiveness of
using virtual laboratories and their differential effectiveness for different sexes, data
were gathered from classes that engaged in virtual laboratories (the intervention) and
classes that did not.

Research Question 3: Is the use of virtual laboratories in high school science classes
effective in terms of students’

a) perceptions of their learning environment

b) attitudes towards science, and

c) academic achievement in genetics?

Research Question 4: Is the use of virtual laboratories differentially effective for
males and females in terms of students’

a) perceptions of their learning environment

b) attitudes towards science, and

c) academic achievement in genetics?

Among the six teachers who volunteered for the implementation of this study, each
teacher taught at least one class with the intervention and one class without the
intervention. The total sample for the study comprised 322 American students
from Grades 8–10. Over a treatment period ranging from about 2 to 12 weeks,
students in the experimental group completed between four and eight virtual laboratory
experiments in genetics using computers that employed ‘point-and-click’ techniques
for manipulating various laboratory materials. Each of these virtual experiments
simulated a real, hands-on experiment and followed a typical experimental format
in which students observe phenomena, formulate hypotheses, set up controls,
follow procedures, test hypotheses, and analyze results. Students in the control
group continued learning and experimenting in their normal fashion, without the use
of virtual experiments; instructional methods for these classes included lectures,
textbook reading, hands-on experiments, and/or other activities. Further detail
regarding the sample, data collection, treatment conditions, and procedures followed
to implement this study are described in Sections 3.3 and 3.5.

Upon completion of the treatment period, the Laboratory Assessment in Genetics
(LAG), including learning environment, attitude, and achievement scales, was
administered to both groups to provide the quantitative data for this study.
Qualitative data were also collected from six students and three teachers who were
interviewed in order to explore underlying themes that lent further insight to the
quantitative data (see Section 3.6 for more detail).

Differences in LAG scale scores between instructional methods and sexes were
examined using a two-way multivariate analysis of variance (MANOVA) with the
learning environment scales from the SLEI and TROFLEI and student outcomes
(attitudes and achievement) as the dependent variables, and with instructional
method and sex as the independent variables. Because the multivariate test using
Wilks’ lambda criterion yielded statistically significant differences for the set of
dependent variables, the individual, univariate two-way ANOVA was interpreted
separately for each dependent variable (students’ perceptions of their learning
environment, their attitudes, and achievement), with the student as the unit of
analysis. This analysis enabled an exploration of all possible interactions between
both independent variables (instructional method and sex) and all three dependent
variables (students’ perceptions of their learning environment, their attitudes, and
achievement).
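
A sketch of this two-step procedure with statsmodels (column names for the
factors and dependent variables are hypothetical):

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from statsmodels.multivariate.manova import MANOVA

    df = pd.read_csv("lag_scale_scores.csv")

    dvs = ("Integration + MaterialEnvironment + TeacherSupport + "
           "TaskOrientation + Investigation + Differentiation + "
           "Inquiry + Enjoyment + Achievement")

    # Two-way MANOVA; mv_test() reports Wilks' lambda (among other
    # criteria) for each main effect and the interaction.
    mv = MANOVA.from_formula(f"{dvs} ~ C(method) * C(sex)", data=df)
    print(mv.mv_test())

    # Follow-up univariate two-way ANOVA, one dependent variable at a time.
    model = ols("Inquiry ~ C(method) * C(sex)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))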

To quantify the size of instructional differences and sex differences, effect sizes
were also calculated to describe the proportion of variance in the dependent variable
attributable to the independent variable, while controlling for other independent
variables. The size of an effect is particular to the sample with which the test is
applied and is purported to be an important aspect of an intervention in addition to
statistical significance alone (Vacha-Haase & Thompson, 2004). In this study, two
different types of effect sizes were utilized: Cohen’s d and Eta-squared (2).
Cohen’s d is the difference between two sample means divided by the pooled
standard deviation. Eta squared (2) is a measure of the strength of association (or
effect size) based on the proportion of variance accounted for by the effect of the
independent variable on the dependent variable. The methods of statistical analysis
are also reviewed in Section 3.6.3.
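
Cohen’s d as defined above can be sketched directly (η² is obtained from an
ANOVA table, as in the sketch in Section 4.2.4); group variable names are
hypothetical:

    import numpy as np

    def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
        # Difference between the two group means divided by the
        # pooled standard deviation.
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) \
                     / (na + nb - 2)
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)

    # e.g. cohens_d(vl_scores, non_vl_scores) for any LAG scale.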

First, a general overview of the results is provided (Section 4.4.1) for the
effectiveness of virtual laboratories, as well as for the interactive effect between the
two independent variables of instructional method and sex. Then, Sections 4.4.2
and 4.4.3 detail the results for each independent variable (instructional method and
sex) separately, while Section 4.4.4 reports the interaction effects that involve the
differential effectiveness of virtual laboratories for different sexes.

4.4.1 Overview of Results for Effectiveness of Virtual Laboratories and
Differential Effectiveness of Virtual Laboratories for Males and Females

The results of the two-way ANOVAs for instructional method, student sex, and the
interaction between independent variables (instructional method and sex) are
displayed in Table 4.4 for the six learning environment and three student outcome
scales.

Table 4.4 Two-Way Analysis of Variance (ANOVA) for Instructional Method and
Sex for each Scale of the LAG

                         Instructional        Student            Instructional
                         Method               Sex                Method × Sex
Scale                    F        Eta²        F        Eta²      F        Eta²
Learning Environment
Integration              0.85     0.00        3.83*    0.02      1.31     0.01
Material Environment     0.04     0.00        2.38     0.01      5.13*    0.03
Teacher Support          0.15     0.00        0.22     0.00      4.39*    0.02
Task Orientation         0.10     0.00        1.58     0.01      1.52     0.01
Investigation            0.27     0.00        1.91     0.01      0.85     0.00
Differentiation          1.24     0.01        4.46*    0.03      0.00     0.00
Outcomes
Inquiry (Attitude)       1.09     0.01        3.06     0.02      5.02*    0.03
Enjoyment (Attitude)     0.60     0.00        8.05**   0.04      1.37     0.01
Achievement              0.59     0.00        0.55     0.00      0.04     0.00

Sample Size: Instructional Method: Non-VL = 153, VL = 166;
Sex: Females = 169 and Males = 150
*p<0.05, **p<0.01

The two-way ANOVAs presented in Table 4.4 yielded the following findings:

•  No statistically significant differences existed for instructional method (i.e.
   between student scores in VL classes versus non-VL classes).

•  Regardless of instructional method, statistically significant differences
   (p<0.05) were found between males and females for the learning
   environment scales of Integration and Differentiation and for the attitude
   scale of Enjoyment (p<0.01). The effect sizes for all three of these scales
   were small.

•  A statistically significant (p<0.05) Instructional Method × Sex interaction
   emerged for the two learning environment scales of Material Environment
   and Teacher Support and for the attitude scale of Inquiry. The effect sizes for
   all three of these scales were small.

Detailed results for each independent variable (Instructional Method and Sex) are
discussed in Sections 4.4.2 and 4.4.3, respectively. As well, a more detailed report
of the interactions from the ANOVAs appears in Section 4.4.4.

4.4.2 Effectiveness of Instruction Using Virtual Laboratories in Terms of
Learning Environment Perceptions, Attitudes, and Achievement

This section reports in greater detail results for the third research question
concerning the effectiveness of virtual laboratories as tested on classes that used
these virtual laboratories and classes that did not.

To further clarify the instructional differences presented in Table 4.4 above, more
details are furnished in Table 4.5, including the mean score, standard deviation, and
effect size for the difference in scores between VL and non-VL classes for each
learning environment scale and student outcome (attitudes and achievement). The
mean was obtained by dividing the original scale mean by the number of items in
each scale to allow for meaningful comparison of average scores across scales of
varying lengths. F values from the ANOVA in the first column in Table 4.4 are
repeated in Table 4.5 below. Effect sizes (Cohen’s d values) displayed in Table 4.5
illustrate the number of standard deviations from the mean for any differences found
between classes that had the intervention and classes that did not.

The mean scores represent the average of students’ scores on each scale which
ranged from 1 (Strongly Disagree) to 5 (Strongly Agree). Because achievement
scores were measured from 0 to 10, with each score representing the number of
items each student answered correctly out of 10 items, the final score was divided by
2 to allow for consistent and meaningful comparisons of scores between all scales.

Table 4.5 Item Mean, Item Standard Deviation and Difference Between Instructional
Methods (ANOVA Results and Effect Size) for each Learning Environment
and Student Outcome Measured by the LAG

                         Mean              Standard Deviation    Difference
Scale                    Non-VL   VL       Non-VL   VL           F        Effect Size
Learning Environment
Integration              3.80     3.73     0.60     0.60         0.85     -0.12
Material Environment     3.77     3.75     0.59     0.62         0.04     -0.03
Teacher Support          3.66     3.69     0.73     0.85         0.15     0.04
Task Orientation         3.90     3.93     0.69     0.73         0.10     0.04
Investigation            3.43     3.47     0.73     0.75         0.27     0.05
Differentiation          2.73     2.83     0.83     0.87         1.24     0.12
Outcomes
Inquiry (Attitude)       3.49     3.53     0.74     0.73         1.09     0.05
Enjoyment (Attitude)     3.48     3.53     0.79     0.81         0.60     0.06
Achievement              2.90     2.78     2.80     2.72         0.59     -0.04

Sample Size = 322 (Control Group = 153 and Experimental Group = 169)

Differences in the means between classes using virtual laboratories and classes that
did not use virtual laboratories are illustrated in Figure 4.2. While the mean is
reported on a scale from 1 (Strongly Disagree) to 5 (Strongly Agree), the graph only
shows the scale of 2 (Disagree) to 4 (Agree) in order to magnify the difference
between the means. No mean scores fell below 2 or above 4. The first six scales
measure students’ perceptions of the learning environment, the next two scales
measure students’ attitudes, and the last scale measures achievement.

According to the results shown in Table 4.5 and Figure 4.2, students in the
experimental group, using virtual laboratories, did not perceive their learning
environment too differently from students in the control group who did not engage
in virtual laboratories. Statistically significant differences were not found for any of
the learning environment, attitude, or achievement scales. Furthermore, effect sizes
for using virtual laboratories were small, ranging from 0.03 to 0.12 standard
deviations for the different dependent variables. Although these findings do
not support the effectiveness of virtual laboratories, they also provide no evidence
that using virtual laboratories negatively impacted on students’ perceptions of the
learning environment, attitudes, or achievement.

[Line graph of scale means (2–4) for non-VL and VL classes]

Figure 4.2 Profile of Means for Instructional Groups as Measured by LAG

For most of the learning environment scales in Table 4.5 (namely, Teacher Support,
Task Orientation, Investigation, and Differentiation), as well as for the attitude
scales of Inquiry and Enjoyment, the mean for the experimental group using virtual
laboratories was slightly greater than the mean for the control group for which no
virtual laboratories were used. These patterns are also demonstrated in Figure 4.2.
Conversely, the means for the VL classes for the dimensions of Integration, Material
Environment, and Achievement were slightly lower than the means for the non-VL
classes.

My finding in this study of no significant differences between classes that used the
intervention (i.e. virtual laboratories) and classes that did not is consistent with a
worldwide trend, identified in literature reviewed in Section 2.5.5, in which
technological innovations do not always measure up to their intended expectations.
More specifically, my findings also replicate those from other studies reporting that
virtual laboratories offered neither advantages nor disadvantages over other methods
of instruction (Cobb et al., 2009; Cross & Cross, 2004; Javidi & Sheybani, 2006;
Russell, 1999; Stuckey-Mickell & Stuckey-Danner, 2007), suggesting that virtual
laboratories are useful as a supplementary tool in science classrooms, rather than a
substitute for more traditional methods, such as hands-on laboratories (Nedic,
Machotka, & Nafalski, 2003; Raineri, 2001; Toth, Morrow, & Ludvico, 2009; Yu,
Brown, & Billet, 2005).

Qualitative data were gathered by interviewing participating students and teachers in
order to add insight. What follows first is a description of the qualitative data
pertaining to the scales for which positive differences (albeit small and non-
significant) were noted for the VL classes in comparison to the non-VL classes,
which is then followed by qualitative data used to explain negative (albeit small and
non-significant) differences.

The quantitative difference between the two comparison groups for Teacher Support
was almost negligible. Similarly, replies about Teacher Support during interviews
indicated no differences between classes that used virtual laboratories and the
classes that did not. Teacher A noted, “Assistance [between the two groups] was
about the same. Maybe a little more explanation [was required for VL classes just]
to get started.” As well, Teacher M agreed but added, “I would say that the non-VL
students needed more teacher assistance. The virtual labs that I chose had very clear
directions and stepped students through processes at a good pace for them. The
main questions from the VL group were more to do with navigation of the site,
rather than content.” Students tended to agree that teacher assistance was similar for
the two treatment groups. In response to being asked whether she needed help with
the virtual laboratories, Lara answered “Usually it was just because I put the website
in wrong, but it was never just to get things done.” Therefore, the type of support
needed differed between treatment conditions (the non-VL classes needed more
instructional assistance, while the VL classes needed more technical assistance),
but the amount of teacher support was roughly the same.

Additionally, questions about Teacher Support in both the written questionnaire and
the semi-structured interview caused mixed understanding among students about
whether they referred to the support from the physical teacher or from the virtual
program. For instance, Lara stated “I think it’s not as easy understanding science
when you have one teacher per 20-something students and I think it’s easier when
you have one computer working with you one-on-one; I think it helps a lot more and
you get a lot more out of it.” In this instance, virtual laboratories represent the
teacher and the personalized feedback is equivalent to the support that a teacher
would offer. Perhaps this misunderstanding of the term ‘Teacher’ (i.e. either the
actual teacher or instruction from a computer program) caused the absence of clear
quantitative results; in the future, the lack of clarity in the wording of items on the
Teacher Support scale ought to be considered when the scale is applied to other
methods of instruction.

The highest score for students’ perceptions of their learning environment was for the
scale of Task Orientation, even though the difference between the two groups was
small and non-significant. Regardless of instructional method, students in these
science classes seemed motivated to complete the tasks set. Interest in the aspect of
Task Orientation originally motivated the researcher to initiate this study because
virtual laboratories contain an extrinsic motivational element that lends itself to task
completion, as explained in Section 1.2. However, quantitative and qualitative data
showed no differences between students who used virtual laboratories and students
who did not in terms of Task Orientation.

Responses from students and teachers during qualitative data collection reflected the
high quantitative score for Task Orientation amongst both groups. All four students
in the experimental group and two students in the control group noted that they were
motivated to complete their work. As well, teachers noted that they did not observe
any differences between the classes regarding motivation to complete the activities,
as indicated by the quantitative data. Thus, it can be inferred that motivation to
complete tasks, as measured by Task Orientation, is not an outcome of some
extrinsic factor, such as virtual laboratories; rather, it is intrinsic motivation that
might be a predictor of the degree of task completion for any activity, whether
innovative or traditional. As Teacher M said, “I think that the motivation differs
among students, not between the two classes [VL versus non-VL].” As such,
perhaps the scale of Task Orientation could be further delineated into extrinsic
motivation (the intended measurement in this study) and intrinsic motivation (the
measurement perceived by students and teachers in this study) when applied to
measuring the effectiveness of an innovative intervention.

Comments from student and teacher interviews also reflected the lack of a
significant instructional difference for Investigation. Amongst both treatment
groups, students at this maturity level seem to prefer, or have been conditioned to
prefer, prescribed instructions and clear guidelines, allowing them to feel more
control and preventing them from straying too far from the expected result of the
experiment. As Lara in a VL class confided “I’d rather not have to go back and do
things a million times because I messed up; I’d rather get it right the first time and
learn from it.” Erica in the control group also related: “I prefer the teacher giving us
a set of instructions.” These observations suggest that the implementation of
innovative interventions that aim to increase students’ sense of Investigation might
be more successful with more senior students and/or in non-traditional environments
where students are already encouraged to investigate independently.

According to Table 4.5, the difference of 0.12 standard deviations between the
means of VL classes and non-VL classes was the greatest for the scale of
Differentiation, albeit still not statistically significant. No major differences
between the groups were noted during interviews with students and teachers.
However, students in the VL classes commented that they were allowed to go on to
the next task once they had completed the previous one; this practice is part of the
self-paced nature of virtual laboratories. Teacher A observed, “They [virtual
laboratories] also allowed the more advanced students to move more quickly
through the labs.”

Qualitative data were also obtained for the two attitude scales, for which means
were higher (albeit not significant) for VL classes than non-VL classes. Teachers
and students did not observe any differences regarding the level of inquiry between
instructional methods. However, the researcher noted a theme that emerged from
student interviews based on the Inquiry scale: students preferred hands-on activities
and the opportunity to collaborate with other students, both being features present in
traditional ‘wet-labs’ and absent from virtual laboratories; these features are both
aspects of Inquiry but such inquiry-driven activities might not necessarily have
resulted in mastery of concepts or skills. Hayley gave numerous examples of sordid,
shock-provoking hands-on activities that piqued her sense of Inquiry, such as “you
take the egg and you either put it in vinegar or in syrup…the egg was huge, …it
was disgusting!” However, Hayley was unable to explain the concept learned from
such activities. Lara’s comment also revealed this theme: “…not me [but] a lot of
people enjoy doing the [hands-on] labs like mixing the chemicals and dissecting and
it wouldn’t be as enjoyable for them to just be on the computer clicking on things.
But I actually thought it was better because the computer helped [me] to understand
things and it would say ‘good job, you understand this now’ or it would say ‘no you
didn’t do this right, try again’…”. Therefore, while higher levels of Inquiry were
aligned with hands-on laboratories, according to student interviews, the level of
inquiry did not necessarily result in greater learning, which was a separately
measured dimension.

Regarding Enjoyment, Table 4.5 shows a mean score of 3.53 for VL classes and
3.48 for non-VL classes (effect size of 0.06). As opposed to traditional ‘chalk and
talk’ instruction, investigative laboratory activities, whether hands-on or virtual, are
likely to promote feelings of enjoyment as suggested in Section 2.5, which justifies
the tendency of both groups to score closer to the ‘agree’ side of the scale.

However, in interviews, students’ reports of enjoyment differed slightly between
the two instructional groups. Jasper, in a VL class, responded, “most of the
[virtual] activities we did were fun” and Hayley also in a VL class said, “I looked
forward to that one [biology class] at the end of the day.” Lara further clarified,
“This year, they were a lot more fun than in the past because we did a lot of online
labs”; she also indicated that she enjoyed the “genetics portion of our learning”
more than all other topics in biology, and this was the subject of most of the virtual
laboratories. Furthermore, when given a choice regarding placement into VL or
non-VL classes before beginning the study (which is another measure of Enjoyment
of virtual laboratories), most students responded positively for the condition of
virtual laboratories and would not have changed this preference even after learning
that there were no significant differences between the groups. On the other hand,
Ann in the non-VL class, reported that students never went to the computer room for
science class and that her science class “wasn’t very fun…and some of the labs were
unclear, but some of them were fun but most weren’t.” Erica in the non-VL class
also expressed her preference to be in the VL classes, “The virtual seemed kinda
cool.”

Teachers’ assessments of students’ enjoyment in using virtual laboratories showed a
different perspective, one that did not necessarily offer any advantage for virtual
laboratories with regard to Enjoyment. Teacher A related, “I think the students liked
the VL classes because they added some variety to the usual classroom
environment.” Similarly, Teacher M agreed, “In my classroom, I would use virtual
labs as another tool in addition to hands-on-labs, class work, and lecture. Virtual
labs are great for labs where you might not have the equipment to do the labs, and
they are a way to preview/review other work that you have done in class.”

Conversely, the means for the VL classes for the learning environment dimensions
of Integration and Material Environment were slightly lower than the means for the
non-VL classes. The finding concerning Integration (albeit not significant) might
suggest that the successful implementation of virtual laboratories depends on how
well the particular teacher integrates the intervention with the content of the
curriculum, but it might not necessarily indicate anything about the integrative
nature of virtual laboratories themselves. That is, students’ perceptions for the
dimension of Integration might be more affected by differences amongst teachers,
than by the instructional method. Comments from student interviews did not differ
all that greatly between those who used virtual laboratories and those who did not,
thus supporting the quantitative results. As well, all participating teachers claimed
that they fully integrated the laboratory activities into the topics explored at the time,
irrespective of instructional method.

The difference in the means for Material Environment was slightly negative but
nearly negligible. Qualitative data obtained from interviews also supported this
finding. Responses from students, regarding the equipment used in science
laboratories, were mixed. Students in VL classes reported that computers were
“slow” or that the number of available computers was insufficient for the number of
students in the class, while Teacher G mentioned, “there were not enough working
laptops”. Even if there was ample computer access, Teacher M explained that
“there were times when the websites that we were trying to access were jammed up,
and so they had trouble getting to a lab.” The functionality of equipment in non-VL
classes was also variable. Lara mentioned that wet-lab equipment was inadequate
and that “microscopes definitely were something we had a problem with
because…[they] were pretty old…and it took away from our learning time so that
was a bit of a pain.” Therefore, for schools where the condition of digital equipment
far surpasses the condition of traditional laboratory equipment, a phenomenon more
common in recent years, the use of virtual laboratories might be beneficial. Teacher
M agreed: “The biggest difficulty with hand-on labs in genetics is the expense and
technical expertise to use more sophisticated equipment.”

Students in the VL classes also scored negligibly lower than students in the non-VL
classes in terms of achievement. Therefore, the quantitative data suggest that both
instructional methods were equally effective with regard to content retention and
understanding. These findings replicate results from the other small number of
studies using virtual laboratories (Cobb et al., 2009; Cross & Cross, 2004; Javidi &
Sheybani, 2006; Stuckey-Mickell & Stuckey-Danner, 2007).

The results concerning Achievement require further elucidation. In an effort to
avoid researcher bias and maintain the validity of the questionnaire items measuring
achievement, the researcher limited herself to items originating from standardized
examinations based on national learning standards in the US; in the process,
however, such items might have been less closely relevant to the virtual
laboratories than questions the researcher could have written herself. Therefore,
the achievement items might not have accurately measured understanding of
content.

Qualitative data showed that all four students interviewed from VL classes reported
that they had a good understanding of genetics (the content for the virtual
laboratories), scored highly on their particular class examinations, and were able to
explain these concepts to the interviewer orally. Out of the two students in the non-
VL classes, one reported that she had a good understanding of genetics and the other
did not. Student interview responses from the two groups did not seem to indicate
any advantage in using virtual laboratories with regard to achievement. As well,
Teacher M noted, “I’m not sure it [VLs] made a difference. The larger factors may
be student ability and motivation.”

The theme noted in the discussion of the Inquiry scale resurfaced in interview
responses concerning achievement: the understanding of content did not correlate
with the sense of intrigue from ‘hands-on’ investigations. For instance, Teacher M
observed students “…doing less mental processing of hands-on labs and being more
partner-dependent. In the VL [virtual laboratory], they had to do the thinking on
their own.” In this way, virtual laboratories might have required students to reflect
on the content and engage in higher-level inquiry-based skills, as opposed to the
more hands-on approach of traditional laboratories that were devoid of such higher-
level skills. Virtual laboratories provided an environment free from ‘hands-on’
distractions. This theme is supported by the literature: simulations and virtual
laboratories are likely to increase conceptual understanding (Marbach-Ad, Rotbain,
& Stavy, 2008; Raineri, 2001; Toth, Morrow, & Ludvico, 2009; Tsui & Treagust,
2004) and traditional laboratories focus more on design skills and the scientific
process (Ma & Nickerson, 2006; Toth, Morrow, & Ludvico, 2009).

Therefore, to conclude the findings obtained from qualitative data, it seems there are
two components to laboratories (whether innovative or traditional) that might
necessitate separate measurements in future studies: 1) exploration, which includes
investigation, use of physical tools and techniques (‘hands-on’), and getting dirty,
and 2) understanding what the laboratory is investigating and how it relates to the
content learned in class.

4.4.3 Sex Differences in Learning Environment Perceptions, Attitudes, and
Achievement

Differences between sexes, regardless of instructional method, are reported in
detail in this section. The learning environment scales and student outcomes (of
attitudes and achievement) served as the dependent variables in exploring sex
differences between a group of 171 females and 151 males.

Table 4.6 Item Mean, Item Standard Deviation and Sex Difference (ANOVA Results and
Effect Size) for Each Learning Environment Scale and Student Outcome
Measured by the LAG

                         Mean              Standard Deviation    Difference
Scale                    Female   Male     Female   Male         F        Effect Size
Learning Environment
Integration              3.70     3.83     0.61     0.58         3.83*    0.22
Material Environment     3.70     3.82     0.59     0.62         2.38     0.20
Teacher Support          3.65     3.70     0.81     0.78         0.22     0.06
Task Orientation         3.96     3.86     0.70     0.72         1.58     -0.14
Investigation            3.39     3.51     0.71     0.76         1.91     0.16
Differentiation          2.69     2.89     0.81     0.89         4.46*    0.24
Outcomes
Inquiry (Attitude)       3.46     3.60     0.72     0.75         3.06     0.19
Enjoyment (Attitude)     3.38     3.64     0.83     0.75         8.05**   0.33
Achievement              2.78     2.90     2.68     2.85         0.55     0.04

Sample Size = 322 (Females = 171 and Males = 151)
*p<0.05, **p<0.01

To further understand the differences presented in Table 4.4, more details are
furnished in Table 4.6, including the mean score, standard deviation, and difference
between males and females for each learning environment scale and student
outcome (attitudes and achievement). F values for sex differences from the
ANOVAs in Table 4.4 are repeated in Table 4.6. As for instructional method
differences, effect sizes are displayed in Table 4.6 to illustrate the magnitude of
differences found between female and male scores expressed in standard deviation
units. These mean scores are also displayed graphically in Figure 4.3 to show sex
differences in learning environment, attitudes, and achievement scales.
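
For readers who wish to reproduce this style of analysis, the following is a minimal sketch of one way to obtain the F statistic and the effect size (the mean difference expressed in pooled standard deviation units) for a single scale. It is an illustration only, not the study's actual analysis scripts: the file and column names ('lag_scores.csv', 'sex', 'enjoyment') are hypothetical placeholders.

```python
# Minimal sketch, assuming hypothetical file and column names: a one-way
# ANOVA F and an effect size (mean difference in pooled-SD units) for one scale.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("lag_scores.csv")                    # hypothetical data file
males = df.loc[df["sex"] == "M", "enjoyment"]
females = df.loc[df["sex"] == "F", "enjoyment"]

f_stat, p_value = stats.f_oneway(males, females)

# Effect size: mean difference divided by the pooled standard deviation
n1, n2 = len(males), len(females)
pooled_sd = np.sqrt(((n1 - 1) * males.var(ddof=1) +
                     (n2 - 1) * females.var(ddof=1)) / (n1 + n2 - 2))
effect_size = (males.mean() - females.mean()) / pooled_sd

print(f"F = {f_stat:.2f}, p = {p_value:.3f}, effect size = {effect_size:.2f}")
```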

Table 4.6 reveals statistically significant differences (p<0.05) between males and
females for the learning environment scales of Integration and Differentiation.
Males perceived these aspects of their learning environment to be more positive than
females. These differences were associated with small effect sizes (0.22 and 0.24
standard deviations, respectively). A statistically significant difference (p<0.01)
also emerged between males and females for the attitude scale of Enjoyment, with
males reporting more enjoyment in science than females, and with magnitude that
can be considered small to medium (0.33 standard deviations).

[Figure: profile plot of mean scale scores for females and males across the LAG scales]

Figure 4.3 Profile of Means for Different Sexes as Measured by LAG

The magnitude of differences for those scales for which sex differences were non-
significant ranged from 0.04 to 0.20 standard deviations (all small). Examination of
the means in Table 4.6 also clarifies the direction of these differences. Although
most differences between the sexes were small and non-significant, a pattern still
emerged: males scored higher than females on nearly all scales (i.e. Integration,
Material Environment, Teacher Support, Investigation, Differentiation, Inquiry,
Enjoyment, and Achievement) except for Task Orientation, for which females
scored higher than males.

Integration measures the extent to which regular science classes and laboratories are
related. In this case, males perceived the laboratory activities to be more relevant to
the content learned in class than did females. If males enjoy the laboratories more,
as indicated by the results of this study as well as other studies (see below regarding
Enjoyment), then they might perceive a stronger connection between the
laboratories and their science classes than do females. However, this finding is
inconsistent with other studies of the SLEI, which indicated that females perceived
more Integration than males (Fraser, Giddings, & McRobbie, 1995; Kijkosol, 2005).

Differentiation measures the extent to which work assigned is individualized for the
pace and level of each student. Males in this sample perceived that they completed
tasks at a different pace and level from their female peers, contributing to the
significant difference found in this study. Differentiation may reflect a broader
pattern in male behavior during laboratory activities, as illustrated by the
qualitative data below and in Section 4.4.4.

The attitude scale of Enjoyment measures the extent to which students enjoy science
lessons. According to the results of this study, males enjoyed their science classes
significantly more than females. This phenomenon is well documented in the
literature (Neathery, 1997; Oakes, 1990; Raaflaub & Fraser, 2002; Wolf & Fraser,
2008), suggesting that males typically derive greater enjoyment from science, and
the ensuing laboratory activities, than females.

Qualitative data gathered to support the quantitative results indeed revealed
agreement regarding the greater enjoyment that males experienced during science
activities. Both male interviewees reported enjoying their science classes, as did
three of the four female interviewees. Jasper (male) declared that science "was one
of my favorite subjects!"

Nevertheless, many of the non-significant differences between scores of females and
males were negligible. Recent research suggests that the gap between the sexes in
many aspects of science education, most notably in achievement, has narrowed
(Gupta & Koul, 2007; Neathery, 1997; Oakes, 1990; Osborne, Simon, & Collins,
2003). My study, too, showed no differences between males and females regarding
achievement.

In interviews, neither students nor teachers mentioned major differences
between males and females with regard to achievement. Lara noted that
achievement amongst males and females “was about the same” and Erica agreed
that the split in achievement levels was “50:50”. Interviewees were in agreement
that achievement did not depend on sex but on other factors. Jasper said that “it
depends on if you like the subject or not.” Teacher M observed that boys were more
motivated to undertake investigative activities and that would affect achievement.
Other students mentioned factors such as distractions. Hayley observed that boys
were "just joking around and girls were more quiet" and focused, so girls "got more
answers than boys." She also commented that boys create more distractions but can
also work better amid distractions, whereas the girls "need quiet to concentrate."

A general theme regarding gender emerged from qualitative data gathered by
interviewing students and teachers. Many commented that males prefer to get dirty,
handle equipment, make jokes, and be noisy, and that they do not focus as much as
girls. Ann
reported, “I don’t think that the guys normally pay that much attention.” Lara noted
such differences too: “I guess with females, they have a little more control…they’d
wait patiently…whereas males, they’re a little more hands-on, they’re really excited
to get into things and they just can’t wait…We were looking at a rat that was dead
and the boys went crazy and wanted to touch it,” while this same rat display seemed
to disengage the girls.

Similarly, Teacher M answered, "They seem about equally motivated. In hands-on
labs, boys seem to be more motivated to do the activity, but this could be due to the
fact that they also are less participatory in the lab write-ups and thought questions
about the lab. Girls tend to pick up the slack for the lab analysis, and so they are
usually less excited about hands-on labs because they know they will be doing more
work.” This comment, which relates to a theme noted in Section 4.4.2, distinguishes
between initial interest in laboratory activities and the understanding of content that
results from such activities. According to such observations, although males enjoy
scientific, investigative activities more than females, females might be the ones who
are actually motivated to complete the work (as shown by the reverse pattern for
Task Orientation). This idea is also explained by Osborne, Simon, and Collins
(2003, p. 1072), who note, "…the general finding that girls are always more
motivated to achieve than boys".

4.4.4 Differential Effectiveness of Virtual Laboratories for Males and Females

Whereas Section 4.4.2 focused on instructional differences separately, and Section
4.4.3 focused on sex differences separately, this section focuses simultaneously on
the two independent variables of instructional method and sex. Students’
perceptions of the learning environment, attitudes, and achievement comprised the
dependent variables. Table 4.7 repeats the results from the two-way ANOVAs
(previously reported in Table 4.4) for the interaction between instructional method
and sex. The presence of a statistically significant instruction-by-sex interaction
was used to identify the differential effectiveness of virtual laboratories for males
and females.

Table 4.7 Differential Effectiveness (Instructional Method x Sex Interaction) of Virtual
          Laboratories for Males and Females for Each Learning Environment Scale and
          Student Outcome Measured by the LAG

                                        Mean              Standard Deviation   Method x Sex Interaction
Scale                        Sex        Non-VL   VL       Non-VL   VL          F        Eta²
Learning Environment
  Integration                Female     3.77     3.63     0.62     0.60        1.31     0.01
                             Male       3.83     3.84     0.57     0.59
  Material Environment       Female     3.80     3.63     0.58     0.60        5.13*    0.03
                             Male       3.75     3.89     0.62     0.62
  Teacher Support            Female     3.73     3.58     0.73     0.88        4.40*    0.02
                             Male       3.59     3.81     0.73     0.81
  Task Orientation           Female     4.00     3.93     0.71     0.69        1.52     0.01
                             Male       3.80     3.92     0.67     0.77
  Investigation              Female     3.41     3.38     0.71     0.72        0.85     0.00
                             Male       3.45     3.57     0.75     0.77
  Differentiation            Female     2.63     2.74     0.85     0.77        0.00     0.00
                             Male       2.84     2.94     0.80     0.96
Outcomes
  Inquiry (Attitude)         Female     3.51     3.41     0.68     0.75        5.03*    0.03
                             Male       3.47     3.74     0.80     0.68
  Enjoyment (Attitude)       Female     3.40     3.37     0.83     0.83        1.38     0.01
                             Male       3.55     3.73     0.74     0.75
  Achievement                Female     2.86     2.71     1.36     1.32        0.04     0.00
                             Male       2.95     2.86     1.45     1.41

*p<0.05
N = 79 females in non-VL classes; 74 males in non-VL classes; 92 females in VL classes; 77 males
in VL classes

For each scale, Table 4.7 also displays the mean and standard deviation separately
for four groups, namely, males in the control group (Non-VL), males in the
experimental group (VL), females in the control group (Non-VL), and females in the
experimental group (VL).

Although no statistically significant differences were uncovered by the analysis for
method of instruction alone (Section 4.4.2), significant (p<0.05) interactions
between instructional method and sex emerged for three out of the nine dependent
variables, namely, Material Environment, Teacher Support, and attitude in terms of
Inquiry (see Table 4.7). In other words, virtual laboratories were differentially
effective for different sexes in terms of students’ attitudes to inquiry and their
perceptions of the material environment and how well teachers support them. The
amount of variance accounted for by the statistically significant interactions, as
represented by the eta2 statistic, was 0.02 for Teacher Support and 0.03 for Material
Environment and Inquiry; each of these interaction effects is small in magnitude.
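
As an illustration of how such results can be computed, the sketch below runs a two-way ANOVA with an instruction-by-sex interaction and derives eta² for each effect. The column names ('method', 'sex', 'teacher_support') and the use of the statsmodels package are assumptions for demonstration; the thesis's own analyses were presumably conducted in a standard statistics package.

```python
# Minimal sketch, assuming hypothetical column names: a two-way ANOVA with
# an instruction-by-sex interaction, plus eta-squared for each effect,
# analogous to the statistics reported in Table 4.7.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("lag_scores.csv")                    # hypothetical data file
model = ols("teacher_support ~ C(method) * C(sex)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)

# eta-squared = SS(effect) / SS(total); the interaction row gives the
# proportion of variance accounted for by Instructional Method x Sex
anova_table["eta_sq"] = anova_table["sum_sq"] / anova_table["sum_sq"].sum()
print(anova_table)
```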

The average item means reported in Table 4.7 can be used in the interpretation of
the statistically significant interactions between method of instruction and sex.
Means also have been graphed in Figures 4.4–4.6 for the three significant
interactions.
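
A plot of cell means of the kind shown in Figures 4.4-4.6 can be generated as in the following minimal sketch, again with hypothetical column names.

```python
# Minimal sketch of an interaction plot of cell means, similar in spirit to
# Figures 4.4-4.6. File and column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.factorplots import interaction_plot

df = pd.read_csv("lag_scores.csv")                    # hypothetical data file
# Encode the instructional method numerically so the x-axis orders cleanly
df["method_code"] = (df["method"] == "VL").astype(int)  # 0 = non-VL, 1 = VL

fig = interaction_plot(x=df["method_code"], trace=df["sex"],
                       response=df["material_environment"],
                       xlabel="Instructional method (0 = non-VL, 1 = VL)",
                       ylabel="Material Environment (mean)")
plt.show()
```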

The interpretation of the significant interaction for Material Environment (see
Figure 4.4) is that males perceived a more positive Material Environment in VL
classes than in non-VL classes. However, females perceived a less positive Material
Environment in VL classes than in non-VL classes. Therefore, virtual laboratories
were more effective for males than for females for Material Environment, while
instruction without the use of virtual laboratories was nearly equally effective for
males and females.

[Figure: interaction plot of Material Environment means (range approx. 3.3-3.8) for females and males in non-VL and VL classes]

Figure 4.4 Differential Effectiveness of Virtual Laboratories for Females and Males for
the Learning Environment Scale of Material Environment

This pattern suggests that males in VL classes might have felt that laboratory
equipment and materials, such as the technology required for virtual laboratories,
were adequate, whereas females perceived them as less adequate. Conversely, males
in non-VL classes perceived the functionality of equipment used in traditional
laboratories slightly less favorably than females did. This finding is supported by
results from other studies that
show a significant difference between instructional methods for Material
Environment (Lightburn & Fraser, 2007; Maor & Fraser, 1996) and by the
differential effectiveness reported for an intervention for males and females in terms
of Material Environment (Quek, Wong, & Fraser, 2005).

Qualitative data also confirmed the more positive perceptions of learning media (i.e.
materials) amongst males in the VL classes compared to females. As Teacher A
observed, "Perhaps there was a slightly greater interest on the boys' part [rather than
the girls'], simply because some of the [virtual] labs were much like a video game."
Literature suggests that boys are more engaged with interfaces that mimic video
games (Brotman & Moore, 2008; Farenga & Joyce, 1997; Hanson, 2009) and,
because virtual laboratories share a similar interface, males might be more open to
and perceive greater functionality in equipment that engages them. The virtual
laboratory interface, as in gaming, gives the user more control over the results and,
as Wolf (2006, p. 118) quipped, "males prefer to have a sense of control over the
experience and that such control is a motivating factor for them." On the other hand,
females did not seem to be as affected by the medium for learning.

[Figure: interaction plot of Teacher Support means (range approx. 3.5-3.9) for females and males in non-VL and VL classes]

Figure 4.5 Differential Effectiveness of Virtual Laboratories for Females and Males for
the Learning Environment Scale of Teacher Support

The statistically significant (p<0.05) interaction between instructional method and
sex is shown for Teacher Support in Figure 4.5, which is a graphical representation
of the result in Table 4.7. Males perceived greater Teacher Support in VL classes
than in non-VL classes. This finding also appears in other studies (Khoo & Fraser,
2008; Raaflaub & Fraser, 2002). The opposite was true for females, who perceived
slightly greater Teacher Support in the non-VL classes than in VL classes. Thus,
virtual laboratories were more effective for males than females, with regard to
Teacher Support, while instruction without the use of virtual laboratories was
slightly more effective for females rather than males.

This finding might be a reflection of the fact that males are more willing to explore
innovations than females and will ask for, and therefore receive, more assistance
from their teachers in so doing. In contrast, females might be more comfortable
eliciting and consequently receiving teachers’ assistance in the traditional
environment to which they are more accustomed. Such a pattern for perceptions of
increased Teacher Support by females (in traditional classrooms) replicates past
research (Raaflaub & Fraser, 2002; Wong & Fraser, 1996).

Responses from student interviews did not seem to reveal any sex-related differences
between VL and non-VL classes for the dimension of Teacher Support.
Out of the six students interviewed, three females and two males reported that they
felt a high degree of Teacher Support, regardless of instructional method. Only one
student in the non-VL class admitted that she felt the teacher was unclear in his
instruction. Teachers stated that they did not notice any difference between the
different sexes or between the classes (VL versus non-VL).

As noted earlier (Section 4.4.2), students' interpretations of what constituted
Teacher Support might have been blurred by the particular setting; some students
might have considered the instructions from the virtual program to be 'Teacher
Support'. Therefore, the assessment of student perceptions on the Teacher Support
scale remains inconclusive.

[Figure: interaction plot of Inquiry means (range approx. 3.4-3.9) for females and males in non-VL and VL classes]

Figure 4.6 Differential Effectiveness of Virtual Laboratories for Females and Males for
the Attitude Scale of Inquiry

Figure 4.6 illustrates the interpretation of the interaction between instructional
method and sex (see Table 4.7) in terms of Inquiry: virtual laboratories were
differentially effective for different sexes, with greater effectiveness for males than
for females, while non-VL classes were slightly more effective for females than for
males. In other words, males perceived greater Inquiry with virtual laboratories
compared with traditional laboratory activities. While females also had positive
perceptions for Inquiry, they perceived relatively less Inquiry with virtual
laboratories than with traditional methods. Support for this finding is evident from
Wolf and Fraser’s (2008) study that reported the same pattern of more positive
attitudes for males than for females in an inquiry setting, as compared to slightly
more positive attitudes for females than for males in a non-inquiry setting.

Qualitative data also supported this finding. Students and teachers alike agreed that
males seemed to engage in experiential, inquiry-driven activities and therefore
perceived more Inquiry, but that females were more likely to follow through with
the work required and to gain more understanding from the activities, as demanded
by more traditional environments. The delineation between initial interest in an
activity and the motivation to understand the content of the activity, as well as
follow through with task completion, was a theme previously noted in qualitative
data at the conclusion of Sections 4.4.2 and 4.4.3. In this section, the delineation is
divided along sex lines. Interviewees observed that males tended to engage because
of the initial interest sparked by a novel activity (i.e. virtual laboratories), while
females were more motivated to understand content and complete tasks, regardless
of the activity, and might even be intimidated by such novel activities.

By definition, an inquiry-based experience takes place during the initiation of an
activity, and therefore it refers to the initial interest that drives students to
investigate independently (Edelson, Gordin, & Pea, 1999). Teacher M observed how
boys are “…doing less mental processing” compared to girls, but that boys are more
aroused by inquiry-based laboratories. In comparing sexes, Lara stated, “…whereas
males, they’re a little more hands-on, they’re really excited to get into things and
they just can’t wait.” Qualitative data indicated that males engaged in such inquiry,
which supports the higher Inquiry score for males in Figure 4.6. Females,
interviewees noticed, have a “little more control”, “were more quiet”, and would
“wait patiently” to engage in novel activities, such as virtual laboratories, and would
be almost apprehensive; this explains the lower Inquiry score (Figure 4.6) for
females in the VL-classes. In the non-VL classes, the difference between the sexes
was not as apparent, perhaps because of the lack of a novel activity to stimulate
Inquiry in males and dampen Inquiry amongst females.

The trend for all three significant interactions is that there were greater differences
between males and females in VL classes than in non-VL classes. Males
consistently scored higher in the VL classes than did the females, whereas females
consistently scored higher in the non-VL classes than did the males. This is a
noteworthy pattern in that virtual laboratories seemed to be more beneficial for
males than females with regard to perceptions of the learning environment (on two
scales, Material Environment and Teacher Support) and attitudes (Inquiry), but
females tended to fare better in more traditional learning environments without such
technological interventions, as indicated by numerous studies (Aldridge & Fraser,
2008; Kijkosol, 2005; Koul, Fisher, & Shaw, 2011; Wolf & Fraser, 2008; Wong &
Fraser, 1996).

Furthermore, as displayed in Figures 4.4–4.6, differences between males and
females for the scales showing significant interactions (Material Environment,
Teacher Support, and Inquiry) were less pronounced in non-VL classes than in VL
classes. This makes sense because recent literature (Koul, Fisher, & Shaw, 2011;
Scantlebury, 2012; Wolf & Fraser, 2008) challenges the idea that males prefer and
perform better in science classes; therefore, in the non-VL classes, there was not as
much of a difference between the sexes. However, once the environment is changed
through technological intervention, males might embrace the stimulus (i.e. virtual
laboratories) more than females and thus perceive a more positive learning
environment. Teacher A detected this subtlety: "Perhaps there was a slightly greater
interest on the boys' part, simply because some of the labs were much like a video
game [i.e. a technological innovation]."

4.5 Summary

This chapter reported results related to my study's research questions, including
validation of the instrument used, associations between the learning environment
and student outcomes, the effectiveness of virtual laboratories, and their differential
effectiveness for different sexes.

The Laboratory Assessment in Genetics (LAG), the instrument used for this study,
contains scales from two learning environment questionnaires (the SLEI and
TROFLEI) and an attitude questionnaire (the TOSRA), and some achievement
items. Validation of the LAG was based on 322 US students in 12 grade 8–10
classes.

Principal axis factor analysis with varimax rotation and Kaiser normalization led to
a reduction in the number of items on the LAG from 64 to 54, which increased the
validity and reliability of the six learning environment and two attitude scales. All
remaining items had a factor loading of 0.40 or higher on their own scale and lower
than 0.40 on any other scale; the total variance explained across all scales was 55.05%. Use of
Cronbach’s alpha reliability coefficient confirmed strong reliability for each of the
SLEI, TROFLEI, and TOSRA scales, as well as for the achievement items;
Cronbach alpha coefficients ranged from 0.76–0.91 with the individual as the unit of
analysis and 0.85–0.97 with the class as the unit of analysis. Discriminant validity
analysis supported the unique nature of each learning environment and attitude
scale. ANOVA results also indicated that all the learning environment scales could
differentiate between the perceptions of students in different classrooms. All these
results supported the validity and reliability of these scales for use with this sample
and add to past research that also validated scales from the SLEI (Fraser, Giddings,
& McRobbie, 1995; Lightburn & Fraser, 2007; Martin-Dunlop & Fraser, 2007), the
TROFLEI (Aldridge, Dorman, & Fraser, 2004; 2003; Gupta & Koul, 2007) and the
TOSRA (Aldridge & Fraser, 2003; Fraser, 1981; Fraser, Giddings, & McRobbie,
1995; Koul, Fisher, & Shaw, 2011; Wolf & Fraser, 2008).
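
As an illustration of these validation steps, the following is a minimal sketch of principal axis factoring with varimax rotation (here via the third-party factor_analyzer package) together with a hand-computed Cronbach's alpha for one scale. The package choice, data file, and item column names are assumptions for demonstration, not the thesis's actual analysis scripts.

```python
# Minimal sketch, assuming hypothetical item-level data: principal axis
# factoring with varimax rotation, plus Cronbach's alpha for one scale.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("lag_items.csv")                  # hypothetical item data

# Principal axis factoring, varimax rotation, 8 factors (6 environment + 2 attitude)
fa = FactorAnalyzer(n_factors=8, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)

def cronbach_alpha(scale_items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = scale_items.shape[1]
    item_vars = scale_items.var(axis=0, ddof=1)
    total_var = scale_items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# e.g. alpha for a hypothetical eight-item Integration scale
print(cronbach_alpha(items[[f"integration_{i}" for i in range(1, 9)]]))
```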

Associations between learning environment and the two student outcomes of
attitudes and achievement were also reported using simple and multiple correlation
analyses with the individual as the unit of analysis. All six learning environment
scales showed positive correlations with the two attitude scales (Inquiry and
Enjoyment), whereas multiple regression analysis revealed that Material
Environment, Teacher Support, and Investigation were significant independent
predictors of Inquiry; all scales except for Differentiation were significant
independent predictors of Enjoyment. Integration, Material Environment, and
Teacher Support correlated positively with achievement, and Differentiation showed
a negative correlation with achievement. The multiple correlation of the SLEI
and TROFLEI scales with achievement was statistically significant. Integration,
Material Environment, and Differentiation were also positive independent predictors
of achievement, even though Differentiation resulted in a significant negative
bivariate association with achievement. Overall, these results show strong links
between learning environment and attitude scales, and moderate links with
achievement; this is supported by past research (Fraser, 2012; Lightburn & Fraser,
2007; Martin-Dunlop & Fraser, 2007).
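
The following sketch illustrates both kinds of analysis summarized above: bivariate correlations of each learning environment scale with an attitude scale, and a multiple regression whose standardized coefficients identify independent predictors. All file and column names are hypothetical placeholders.

```python
# Minimal sketch, assuming hypothetical column names: simple correlations
# and a multiple regression with standardized betas, as summarized above.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("lag_scores.csv")                    # hypothetical data file
env_scales = ["integration", "material_environment", "teacher_support",
              "task_orientation", "investigation", "differentiation"]

# Simple (bivariate) correlations with the Enjoyment attitude scale
print(df[env_scales].corrwith(df["enjoyment"]))

# Multiple regression on standardized scores (standardized betas)
cols = env_scales + ["enjoyment"]
z = (df[cols] - df[cols].mean()) / df[cols].std(ddof=1)
model = sm.OLS(z["enjoyment"], sm.add_constant(z[env_scales])).fit()
print(model.summary())
```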

Finally, the effectiveness of virtual laboratories was investigated for LAG scales.
Differences in LAG scale scores between instructional methods and sexes were
examined using a two-way multivariate analysis of variance (MANOVA). Because
the multivariate test using Wilks’ lambda criterion yielded statistically significant
differences for the set of dependent variables, individual univariate two-way
ANOVAs were interpreted separately for each dependent variable (students'
perceptions of their learning environment, their attitudes, and achievement), with the
student as the unit of analysis. Effect sizes were also calculated to quantify the size
of instructional differences and sex differences. This analysis revealed no
significant differences for instructional method, and moderate significant sex
differences, with males reporting more positively for the scales of Integration,
Differentiation, and Enjoyment.
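
A two-way MANOVA of this kind, reporting Wilks' lambda for each effect, can be specified as in the minimal sketch below. The scale column names are hypothetical placeholders, and the thesis's analyses were presumably run in a standard statistics package.

```python
# Minimal sketch, assuming hypothetical column names: a two-way MANOVA over
# the set of LAG scales, with Wilks' lambda reported for each effect.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("lag_scores.csv")                    # hypothetical data file
mv = MANOVA.from_formula(
    "integration + material_environment + teacher_support + task_orientation"
    " + investigation + differentiation + inquiry + enjoyment + achievement"
    " ~ C(method) * C(sex)",
    data=df)
print(mv.mv_test())   # includes Wilks' lambda for method, sex, and interaction
```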

Small and statistically significant interactions were found between instructional
method and sex for three of the eight scales: Material Environment, Teacher
Support, and Inquiry. For each of the three scales showing significant interactions,
males consistently scored higher in the VL classes than did the females whereas, in
the non-VL classes, males and females consistently scored nearly equally. The
relatively more positive classroom perceptions amongst females in non-VL classes
that emerged in this study reflect similar results to those reported in past research
(Aldridge & Fraser, 2008; Kijkosol, 2005; Koul, Fisher, & Shaw, 2011; Wolf &
Fraser, 2008; Wong & Fraser, 1996).

Further interpretation of these results is discussed in the following chapter. The
significance of these results, their implications for educational research and the
classroom, limitations of this study, and suggestions for future research are all
considered in the next chapter as well.

Chapter 5

Discussion
“Intuition becomes increasingly valuable in the new information society precisely
because there is so much data.” – John Naisbitt

5.1 Introduction

The aim of the current study was to evaluate the effectiveness of virtual laboratories,
an educational innovation, in terms of students’ perceptions of the learning
environment, their attitudes towards science and their achievement in science. The
differential effectiveness of such virtual laboratories was also explored for males
versus females.

Previous chapters included the rationale for this study in Chapter 1, the literature
that provided the context for this study in Chapter 2, the research methods used to
implement the study in Chapter 3, and the results for the four research questions that
guided this study in Chapter 4.

This chapter will first summarize the earlier chapters regarding research methods
and results (Section 5.2), explicate the significance of the results and implications
for educational research and practice (Section 5.3), point out the limitations of this
study, suggest directions for further research (Section 5.4), and provide a final
conclusion for the study (Section 5.5).

5.2 Overview of Thesis

This study was first conceptualized based upon the researcher’s anecdotal
observation that the interest of students not normally engaged in science classes was
piqued by the use of virtual laboratories. Therefore, the researcher set out to test this
initial observation methodically to determine if virtual laboratories were indeed
effective in increasing students’ positive perceptions of the classroom, their
attitudes, and levels of achievement. Because this phenomenon seemed to initially
manifest especially for males, the researcher also wished to test differential
effectiveness of virtual laboratories for male and female students.

The rationale for this study is based on a combination of improved standards in
science education, particularly for the topic of genetics, and the lack of improvement
in the resources necessary to enable students to attain those higher standards.
Virtual laboratories represent a possible method to narrow the gap between lack of
resources and higher standards in science education in that they allow students to
experience laboratory environments and experiments that would not otherwise be
possible in a high school classroom but with which students are required to be
familiar.

First, the relevant literature was reviewed concerning learning environments, which
provided both the framework for this study and one of the criteria of effectiveness
for virtual laboratories. The field of learning environments seeks to understand the effects of
the psychosocial aspects of the classroom on learning, from the student’s
perspective. Over the last 40 years, the field of learning environments has become
more important in educational research and, along with its development, numerous
important questionnaires have emerged.

Next, the role of students’ attitudes towards science was explored by defining the
term ‘attitude’, explaining the assessment of attitudes, and reviewing the effect of
various educational interventions on students’ attitudes. Attitudes constituted
another criterion of effectiveness for virtual laboratories. The issue of student sex in
science education was also considered because girls and boys might respond
differently to virtual laboratories in terms of their perceptions of and attitudes
towards their classes. As well, because research reveals a gender gap in science
achievement (Hill, Corbett, & St. Rose, 2010; National Center for Educational
Statistics (NCES), 2012a; Scantlebury, 2012), it was deemed appropriate to
investigate the differential effectiveness of virtual laboratories for different sexes.

Finally, the topic of virtual laboratories was addressed within the context of
educational technology. Virtual laboratories are defined as electronic workspaces
that are based on interactive simulations of scientific experiments. Benefits include
the increased emphasis on conceptual understanding and reduced reliance on
constraints, such as time, safety hazards, geographic distance, and cost. While
technological interventions in the classroom are often predicted to be more useful
than studies have shown (Russell, 1999), they are not generally detrimental to
students’ learning and they are therefore considered to be effective alternatives for
certain educational experiences.

The remainder of this section reviews the research methods and key findings for
each research question (Sections 5.2.1–5.2.3) and also summarizes the qualitative
data gathered from students (Section 5.2.4) and from teachers (Section 5.2.5).

5.2.1 Research Question 1

Research Question 1: Are scales from the Test Of Science Related Attitudes
(TOSRA), Science Laboratory Environment Inventory (SLEI), and
Technology-Rich Outcomes-Focused Learning Environment Inventory
(TROFLEI) questionnaires valid and reliable when used with a sample of
high school students taking biology in the US?

In order to assess the effect of virtual laboratories on three dependent variables
(perceptions of the learning environment, attitudes, and achievement), appropriate
instruments were needed to measure each variable. Scales were adopted and
adapted from various previously-validated questionnaires for inclusion in the
Laboratory Assessment in Genetics (LAG), but the LAG's validity and reliability
were checked with the sample in this study before it was used as the instrument
for this particular investigation.

Scales to measure the learning environment were taken from the Science Laboratory
Environment Inventory (SLEI) (Fraser, et al., 1992) and the Technology-Rich
Outcomes-Focused Learning Environment Inventory (TROFLEI) (Aldridge &
Fraser, 2003), both of which have been validated in numerous countries, in different
content areas, and with various age levels, as described in Section 2.2.2 (Aldridge &
Fraser, 2003; Fraser, 2012; Fraser, Giddings, & McRobbie, 1992). Scales to
measure students’ attitudes were borrowed from the Test Of Science Related
Attitudes (TOSRA), which also has been validated in numerous countries, in
different content areas, and with various age levels, as described in Section 2.3.2
(Fraser, 1981; Fraser, Aldridge, & Adolphe, 2010; Ogbuehi & Fraser, 2007; Welch
et al., 2012; Wong & Fraser, 1996). Items in each of these scales were modified; for
instance, negatively-worded TOSRA items were worded positively, wording in the
learning environment scales was generalized to include their application to virtual
laboratories, and some items in all scales were removed or added to ensure a
consistent number of eight items per scale. Validity and reliability analyses were
also necessary to check these modifications.

To assess the validity and reliability of the scales, the factor structures of the SLEI,
TROFLEI, and TOSRA items were checked using principal axis factoring with varimax rotation
for the sample of 322 students in 12 classes. Next, the internal consistency
reliability for each SLEI, TROFLEI, and TOSRA scale was used to measure the
extent to which items in a given scale assess the same construct. As well, the mean
correlation of a scale with the other learning environment and attitude scales was
used as an index to assess the uniqueness of each scale and ensure discriminant
validity. Furthermore, the ability of each SLEI and TROFLEI scale to distinguish
between different classrooms was assessed using an ANOVA.
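
For illustration, the discriminant validity index described above (the mean correlation of each scale with the other scales) can be computed as in the following minimal sketch, with hypothetical scale column names.

```python
# Minimal sketch of the discriminant validity index: the mean correlation of
# each scale with the other scales (lower values = more distinct scales).
# File and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("lag_scores.csv")                    # hypothetical data file
scales = ["integration", "material_environment", "teacher_support",
          "task_orientation", "investigation", "differentiation",
          "inquiry", "enjoyment"]

corr = df[scales].corr()
# Column sums include the self-correlation of 1, so subtract it and average
mean_with_others = (corr.sum() - 1) / (len(scales) - 1)
print(mean_with_others)
```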

An achievement scale was constructed by the researcher, consisting of 10 items,
each borrowed from previously-validated standardized state examinations in
biology. The items were all related to the topic of genetics. To assess the validity
of this scale, achievement scores were plotted in a histogram to check their overall
normality, and the means of selected items were compared with the means obtained
from much larger populations that had answered the same items.
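
The sketch below illustrates such a check on the achievement score distribution: a histogram plus an optional Shapiro-Wilk test (the study itself relied on visual inspection and comparison with larger populations). The file and column names are hypothetical placeholders.

```python
# Minimal sketch of the distribution check: a histogram of achievement scores
# plus an optional Shapiro-Wilk normality test.
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

df = pd.read_csv("lag_scores.csv")                    # hypothetical data file
df["achievement"].plot.hist(bins=11, edgecolor="black")
plt.xlabel("Achievement score (out of 10)")
plt.show()

w, p = stats.shapiro(df["achievement"])
print(f"Shapiro-Wilk W = {w:.3f}, p = {p:.3f}")
```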

Key findings for the validity and reliability of scales used for the LAG reported in
Section 4.2 are summarized below:

 The optimal factor solution occurred for the set of 54 items in 8 scales from
the SLEI, TROFLEI, and TOSRA, after the removal of 10 items to increase
validity, with a total variance of 55.05% for all scales.
 The 54 remaining items from the SLEI, TROFLEI, and TOSRA showed high
reliability and satisfactory discriminant validity for two units of analysis
(individual and class mean).
 The learning environment scales (SLEI, TROFLEI) were able to differentiate
between the perceptions of students in different classrooms.

 Achievement scores showed a close-to-normal distribution and scores on
selected items were similar to scores for a larger population for the same
items.

As with past research, modified scales from the SLEI (Fraser, Giddings, &
McRobbie, 1992), TROFLEI (Aldridge & Fraser, 2003), and TOSRA (Fraser, 1981)
showed strong validity and reliability. The findings suggest that these scales can be
effectively utilized to assess student perceptions and attitudes in high school
classrooms in the US. The almost normal distribution of achievement scores is in
line with patterns of scores from most standardized examinations (Herrnstein &
Murray, 1996) and scores on selected items were similar to those of a larger
population (Massachusetts Comprehensive Assessment System (MCAS), 2009),
therefore suggesting that such items are appropriate measures of achievement in
genetics for high school students in the US.

5.2.2 Research Question 2

Research Question 2: Are there associations between the perceived
classroom learning environment and the student outcomes of attitudes
towards and achievement in science?

Associations between the learning environment and student outcomes (attitudes and
achievement) were investigated using simple correlation and multiple regression
analyses with a sample of 322 students in 12 classes, using the individual as the
unit of analysis.

Key findings for the associations between the learning environment (as measured by
the SLEI and TROFLEI, a total of six scales) and attitudes (as measured by two
TOSRA scales), which were reported in Section 4.3.1, are summarized below:

 All six learning environment scales correlated significantly and positively
with both attitude scales.
 The multiple correlation of the SLEI and TROFLEI scales with attitude
scales was statistically significant.
 Material Environment, Teacher Support, and Investigation were positive,
independent predictors of the Inquiry attitude scale, and five scales
(Integration, Material Environment, Teacher Support, Task Orientation,
Investigation) were positive, independent predictors of the Enjoyment
attitude scale.

Key findings for the associations between the learning environment (as measured by
six SLEI and TROFLEI scales) and achievement reported in Section 4.3.2 are listed
below:

 Integration, Material Environment, and Teacher Support correlated
significantly and positively with achievement, while Differentiation
correlated significantly and negatively with achievement.
 The multiple correlation of the SLEI and TROFLEI scales with achievement
was statistically significant.
 Integration, Material Environment, and Differentiation were positive,
independent predictors of achievement.

The overall positive associations between learning environment and student
outcomes of attitude and achievement have been replicated many times in past
research (Fraser, 2012; Lightburn & Fraser, 2007; Martin-Dunlop & Fraser, 2007).
The negative correlation between Differentiation and achievement was surprising
and is further discussed in Section 4.3.2; however, this finding warrants further
investigation in future research.

5.2.3 Research Questions 3 and 4

Research Question 3: Is the use of virtual laboratories in high school science
classes effective in terms of students’

a) perceptions of their learning environment,
b) attitudes towards science, and
c) academic achievement in genetics?

Research Question 4: Is the use of virtual laboratories differentially effective
for males and females in terms of students’

d) perceptions of their learning environment
e) attitudes towards science, and
f) academic achievement in genetics?

The intervention investigated in this study involved six teachers each teaching at
least one class that used virtual laboratories and at least one class that did not, over a
period of about 2–10 weeks. Altogether, there were 322 students, who were diverse
in ability and socio-economic status, in 12 US grade 8–10 classes. The virtual
laboratories available for application in the classroom were chosen by the researcher
for their emphasis on inquiry skills as well as complex conceptual understanding of
techniques not otherwise available in a high school classroom.

To explore the differences between modes of instruction, and also between males
and females, as well as to find interactions between instructional method and sex, a
two-way MANOVA was used for the set of learning environment, attitude, and
achievement scales. The multivariate test using Wilks’ lambda criterion yielded
significant differences, and so the univariate ANOVA was interpreted for each scale.

Key findings for the differences between the two instructional methods in terms of
learning environment and student outcomes from Section 4.4.2 are summarized
below:

 No statistically significant differences existed for instructional method (i.e.
between student scores in VL classes versus non-VL classes) for any scale.

 For Teacher Support, Task Orientation, Investigation, Differentiation,
Inquiry, and Enjoyment scales, scores were slightly higher for VL classes
than for non-VL classes, and Integration and Material Environment scores
were slightly lower for VL classes than for non-VL classes, but these
findings were not statistically significant.

 The largest effect sizes for differences between instructional methods
occurred for the scales of Integration (-0.12 standard deviations) and
Differentiation (0.12 standard deviations). Other scales had effect sizes of
less than 0.10 standard deviations.

These findings replicate those from other studies reporting that virtual laboratories
offered neither advantages nor disadvantages over other methods of instruction
(Cobb et al., 2009; Cross & Cross, 2004; Javidi & Sheybani, 2006; Russell, 1999;
Stuckey-Mickell & Stuckey-Danner, 2007), and suggest that virtual laboratories
might be useful as a supplementary tool in science classrooms, rather than as a substitute
for more traditional methods, such as hands-on laboratories (Nedic, Machotka, &
Nafalski, 2003; Raineri, 2001; Toth, Morrow, & Ludvico, 2009; Yu, Brown, &
Billet, 2005). Qualitative data were consistent with the quantitative results; a more
detailed summary of this can be found in Section 5.2.4. However, a subtle pattern
emerged from the qualitative data: higher levels of Inquiry were perceived with
hands-on laboratories than with virtual laboratories, but this level of inquiry did not
necessarily translate into greater understanding, whereas several students who used
virtual laboratories did show such understanding.

Key findings for the differences between males and females, regardless of
instructional method (see Section 4.4.3), were:

 Significant but moderate differences were found between males and females
for the learning environment scales of Integration (0.22 standard deviations)
and Differentiation (0.24 standard deviations) and for the attitude scale of
Enjoyment (0.33 standard deviations).
 All significant differences revealed scores that were higher for males than for
females.
 For the rest of the scales not showing significant differences, males also
scored higher than females, except for the scale of Task Orientation (-0.14
standard deviations).
 Modest effect sizes for other differences between the sexes occurred for the
scales of Material Environment (0.20 standard deviations), Investigation
(0.16 standard deviations), and Inquiry (0.19 standard deviations). Other
scales had effect sizes of less than 0.10 standard deviations.

The finding that males perceived the learning environment more positively than
females can be contrasted with past research (Fraser, Giddings, & McRobbie, 1995;
Kijkosol, 2005) and requires further investigation. However, past research indicates
more positive attitudes for males towards science than for females (Neathery, 1997;
Oakes, 1990; Raaflaub & Fraser, 2002; Wolf & Fraser, 2008), and this is consistent
with my findings. The finding that no significant differences existed for different
sexes regarding achievement is also consistent with recent research suggesting a
narrowing of the gender gap in science achievement (Gupta & Koul, 2007;
Neathery, 1997; Oakes, 1990; Osborne, Simon, & Collins, 2003). Qualitative data
revealed that, although males enjoy scientific, investigative activities more than
females, females might be the ones who are more motivated to complete the work
(as measured by Task Orientation). More details for qualitative data are
summarized in Section 5.2.4.

Key findings for the differential effectiveness of the instructional methods for males
and females in terms of learning environment and student outcomes (Section 4.4.4)
are summarized below:

 A significant Instructional Method x Sex interaction emerged for the two
learning environment scales of Material Environment and Teacher Support
and for the attitude scale of Inquiry, all with small effect sizes (the amount of
variance accounted for being 0.03, 0.02, and 0.03, respectively).

 Virtual laboratories were more effective for males than for females for
Material Environment, Teacher Support, and Inquiry, but instruction without
the use of virtual laboratories was nearly equally effective for males and
females on all scales.

Similar patterns were described by Wolf and Fraser (2008) in that males perceived a
more positive learning environment and attitudes in the class with an inquiry-based
intervention than in the class without the intervention, but that generally the opposite
was true for females. Other studies also reported differential effectiveness of an
intervention for males over females for the dimensions of Material Environment
(Quek, Wong, & Fraser, 2005), Teacher Support (Khoo & Fraser, 2008; Raaflaub &
Fraser, 2002), and Inquiry (Wolf & Fraser, 2008). In general, more positive
perceptions of the learning environment for females in traditional classrooms have
been noted (Fraser, Giddings, & McRobbie, 1995; Kijkosol, 2005; Raaflaub &
Fraser, 2002; Wong & Fraser, 1996).

Qualitative data indicated that males were keen to plunge into experiments that they
perceived to contain high levels of inquiry, whereas females were somewhat
apprehensive. Both students and teachers observed that, in general, males are more
accepting of, and excited by, interventions, especially technological ones, than are
females, which is supported by past research (Brotman & Moore, 2008; Farenga &
Joyce, 1997; Hanson, 2009).

As my investigation of the differential effectiveness of virtual laboratories for males
and females involved a sample of only 322 students, findings should be considered
tentative until they are replicated with larger samples in future research (see Section
5.4).

5.2.4 Summary of Qualitative Data

Qualitative data were gathered through semi-structured interviews with students
who volunteered to be contacted via email over the summer break. Four
students interviewed were from VL classes, consisting of two males and two
females, and two students interviewed were from non-VL classes, both of whom
were females. All interviewees received parental permission to participate.
Interview questions, which were written by the researcher and modeled after the
questionnaire items used in this study, were used to explore student perceptions,
attitudes, and sense of achievement (see interview questions in Appendices B and
C). A summary of the overall responses (Section 4.4) for students who experienced
each of the instructional methods (VL and non-VL) is provided in Table 5.1. To be
sensitive to students, the researcher refrained from asking questions regarding
Differentiation because this scale measures the extent to which class work is
personalized for students with different abilities.

Table 5.1 Summary of Student Interview Results for Students Experiencing each
          Instructional Method for each Learning Environment and Outcome Variable
          (Based on Section 4.4)

Integration
  VL classes: Students cited numerous examples of laboratory activities that connected to
  concepts recently learned in the classroom or served as an introduction to concepts
  learned subsequently.
  Non-VL classes: Mixed responses revealed that most laboratory activities were related to
  concepts learned in class, but some were not. Some students were not able to explain
  how the activity fitted with the topic they learned.

Material Environment
  VL classes: Students cited examples about how some traditional laboratory equipment
  (microscopes) was old and caused problems, which took away time from learning. Many
  students also commented on the state of technological equipment (computers, Internet),
  with mixed responses as to its functionality.
  Non-VL classes: Students reported that equipment was in fine working order, except
  that sometimes the Internet connection was slow.

Teacher Support
  VL classes: Students recounted that the teacher was always helpful whenever students
  had questions but that the teacher did not tell them exactly what to do. Examples
  included evidence of forming personal relationships.
  Non-VL classes: Some reported that the teacher was helpful. Others felt that, while the
  teacher was knowledgeable, knowledge was not transmitted clearly. They wished for the
  teacher to provide more instruction.

Task Orientation
  VL classes: Students reported their desires to finish what they started and finish work
  on time, and described feeling positive as a result.
  Non-VL classes: Students reported their desires to finish what they started and finish
  work on time, and described feeling positive as a result.

Investigation
  VL classes: Students reported that they were given diagrams and graphs to interpret
  evidence for investigations, and that they had control over their experiments.
  Non-VL classes: This dimension was not addressed by the interviewees.

Inquiry (Attitude)
  VL classes: Students stated their preferences to experiment themselves rather than be
  told about a result. However, they preferred to be given a hypothesis to test, rather
  than construct one on their own.
  Non-VL classes: Students stated their preferences to experiment themselves rather than
  be told about a result, as well as the opportunity to find solutions together with other
  students.

Enjoyment (Attitude)
  VL classes: All students described their enjoyment of science classes and looked
  forward to them. As examples, some cited VLs and others cited hands-on laboratories.
  All students reported satisfaction about being placed in the VL class. Some students
  admitted to trying VLs at home.
  Non-VL classes: Students reported that, because the teacher was boring and the
  laboratory activities were not clear, the class was not much 'fun.' Students stated a
  preference to be placed in the VL class, even though they enjoyed the 'hands-on' factor
  of experiments. No students reported trying experiments at home.

Achievement
  VL classes: Students found the content challenging; some admitted needing the teacher's
  help, and not all were able to explain the concepts. Students reported that they
  generally understood the material in genetics and achieved well in this topic. Some
  students pointed to VLs as assisting their understanding because of the instant feedback.
  Non-VL classes: Students found the content challenging and had difficulty explaining
  the concepts. However, they stated that they generally understood the material in
  genetics and achieved well in this topic.

At the end of each interview, the researcher informed the interviewee that the
quantitative data did not show major differences between VL and non-VL classes,
and asked the interviewee for his or her thoughts about why no such differences
appeared. The following is a summary of the key points that the interviewees
mentioned as explanations for the lack of evidence for the effectiveness of virtual
laboratories (see Section 4.4.2).

 While the lessons were stimulating, the tests were difficult.
 Instructional effectiveness depends more on the teacher, rather than the
method.
 The degree of effectiveness depends on the type of laboratory investigation
and whether or not there were differences between the virtual ones and the
real ones.
 Virtual laboratories were not conducted often enough.

Students were also asked to comment on differences between males and females that
they perceived during laboratory activities (see Section 4.4.3). Additionally, the
researcher noted whether statements were made by males or females. These
responses were categorized and summarized by the researcher into the dimensions
listed in Table 5.2.

Mostly, the qualitative data supported the quantitative results but provided some
insight regarding patterns of differences between the sexes, such as the observation
that males tended to initiate experiments and relish handling equipment and
specimens. In contrast, while initially apprehensive, females would follow-through
on the task required and focus on the purpose of the experiment. It follows that
technological interventions, such as virtual laboratories, might perhaps serve to
distract females from the work that they set out to do, while new media engage
males, causing the former to have more positive perceptions in a traditional learning
environment and the latter to have more positive perceptions in an altered learning
environment. However, there were not enough male interviewees to reach a solid
conclusion via the qualitative data and these qualitative results should be verified
with a larger sample in a future study.

Table 5.2 Summary of Student Interview Results for Sex Differences for each Learning
Environment and Outcome Variable
Learning
Environment Scale/ Perceived Sex Differences
Student Outcomes
Integration No differences between the sexes were noted for this scale.
Material Males mentioned the audio and visual effects as a positive feature of VLs
Environment but no other differences were noted between the sexes regarding the
functionality of equipment.
Teacher Support Females preferred to have the teacher more involved in any activity to better
guide them, whereas males tended to go at it alone. Females also described
their personal relationship with teachers whereas males simply stated
whether or not teachers were helpful. This finding tentatively explains the
differences found in the non-VL classes but not in the VL classes.
Task Orientation Females reported that males would dive right into an activity but often leave
unfinished the follow-through work, which the females would complete.
Investigation No differences between the sexes were noted for this scale.
Inquiry (Attitude) Males seemed motivated by activities that allowed them to jump in and test
things out themselves, whereas females preferred a set of prescribed
instructions. This was also noted by female students about their male peers.
Enjoyment Males were reported as being noisy, which can be interpreted as evidence of
(Attitude) their enjoyment of laboratory activities. Otherwise, both sexes seemed to
enjoy VLs and non-VLs.
Achievement Students reported that males and females achieved at equal levels. Scores
posted by the teachers for all to see also revealed this.

5.2.5 Teachers' Perspectives Regarding the Learning Environment and Student Outcomes

While teachers were not the subjects of my study, nor were they the unit of analysis,
their feedback about the implementation of the study adds valuable insight to the
current data. Six teachers, including the researcher, were involved in the evaluation
of the effectiveness of virtual laboratories and were asked to comment on various
logistical aspects of this study and to note their own observations about students'
perceptions of the learning environment, attitudes, and achievement, as well as
gender issues. Four of the six teachers responded (excluding the researcher, to
avoid introducing bias), and their comments were categorized according to the
dimensions in Table 5.3.

When teachers were asked why greater differences between the two groups were not
apparent in the quantitative data, they responded that confounding variables could
include the amount of previous exposure that students have had to other laboratory
experiences. Some schools or teachers allow more hands-on investigations, whereas
other schools or teachers lack the resources to do so and might use texts, lectures,
or videos instead. As well, some teachers used other computer-based activities
while implementing virtual laboratories, which might have confused students when
they provided their perceptions of virtual laboratories. In short, the boundaries
defining virtual laboratories, hands-on laboratories, and other computer-based
activities were somewhat blurred. In future studies, clearer instructions about which
activities should or should not be considered in providing feedback about each
treatment condition might produce different results.

Table 5.3 Summary of Teachers’ Observations for each Learning Environment and
Outcome Variable, and Gender
Learning
Environment Scale/ Teachers’ Observations
Student Outcomes
Integration All teachers mentioned that they tried to align the laboratory activities with
the content of what was being learned in class.
Material Some teachers noted difficulty with accessing the websites for the VLs
Environment because of a slow internet connection or because computers were in short
supply. One teacher mentioned that VLs were advantageous for the topic of
genetics because of the expense and technical expertise needed to use more
sophisticated equipment in hands-on genetics laboratories.
Teacher Support Teachers agreed with the students that assistance for the VL group was
mainly about getting started but, otherwise, the VLs were self-guided. Some
teachers observed that more help was needed in the non-VL group.
Task Orientation No differences were observed between the two classes. One teacher
commented that the ability to complete a task depends more on the student’s
motivation and ability than the instructional method. Another teacher
mentioned that student motivation was a predictor of the effectiveness of VLs
rather than the other way around.
Investigation No differences between the groups were noted for this scale.
Differentiation Teachers observed that students in the VL group were able to advance
through the activities at their own pace and review parts, as necessary.
Inquiry (Attitude) One teacher commented that because students could progress at different
paces with VLs, the more skilled ones were able to progress further and
experience more inquiry. Teachers agreed with students that males tended to
take action right away, with VLs and non-VLs, leading to more inquiry.
Enjoyment All teachers noted that VLs are a valuable addition to the regular classroom
(Attitude) activities because students seemed to enjoy them, but VLs should not replace
other activities. Several teachers perceived the males to be particularly
engaged in VLs, more so than the females, which might be because of their
familiarity with other virtual environments online and with video games.
Achievement One teacher reported that her classes, regardless of the instructional method,
perceived the genetic achievement items as being too easy. Another teacher
observed that males were required to do more mental processing with VLs, as
opposed to non-VLs, in that they simply explored and left the mental
processing to their female partners.

5.3 Significance and Implications

The National Research Council's Committee on Science Learning: Computer Games,
Simulations, and Education calls for partnerships between academic researchers,
developers, entrepreneurs from the gaming industry, education practitioners, and
policy makers to facilitate "rich intellectual collaboration" (National Research
Council (NRC), 2011, p. 3). The results of my study add one more piece to the body
of evidence amassed by these professionals about the effectiveness of virtual
environments in education. The findings herein have important implications both
for the field of educational research (Section 5.3.1) and for practitioners in
education (Section 5.3.2).

5.3.1 Implications for Educational Research

A leading authority in the field of educational research, Fraser (2012) advocates the
incorporation of learning environment scales in evaluating the effectiveness of
educational innovations because traditional measures of effectiveness (such as
achievement) do not provide a complete picture of the educational process. Despite
a number of recent studies (see Section 2.3), the amount of research assessing how
educational innovations transform the classroom learning environment is small
relative to the speed at which such innovations are being incorporated into
classrooms. Thus, this study was the first of its kind to
adopt a learning environment framework in which the classroom environment, in
addition to achievement and attitudes, served as a criterion of effectiveness in
evaluating educational innovations. Its findings contribute to the growth in research
into evaluating educational innovations within the increasingly rich and diverse field
of learning environments.

As the roles of and interactions between teachers, students, and instructional
materials evolve, the development of robust questionnaires that are economical,
valid, and reliable has become necessary for evaluating such changes. This study
provided evidence for the validity and reliability of another questionnaire that
assesses the impact of technological innovations in science classes on the learning
environment and on the student outcomes of attitudes towards science and
achievement in genetics. The Laboratory Assessment in Genetics (LAG) (Appendix A),
which can be administered within one class period and is also available online, is
one more instrument that can be used for this purpose. While the validation of an
instrument to evaluate the use of simulations or virtual laboratories is an
important step in science education research, and while the instrument used in this
study is focused on science learning, this research can also be of interest to a
broader education community because the use of technology is not limited to science
education.

Additionally, the inclusion of qualitative measures in educational research studies
has become increasingly important, particularly for innovations that are
implemented in classrooms around the world and in various contexts, in order to
detect cultural nuances. While this study was conducted in only one country, its
design is adaptable to a range of school cultures globally because virtual
laboratories and the LAG questionnaire are available online, and because
qualitative evidence was also included. The methodology of this study can be
repeated with adjustments, as described in Section 5.4.

The findings of this study also confirmed positive associations between learning
environment dimensions and attitudes as reported in previous studies (Aldridge &
Fraser, 2003; Fraser, 2012; Lightburn & Fraser, 2007; Wolf & Fraser, 2008).
Addressing student attitudes towards science in the early high school years (grades
8–10) is important because studies have pointed to the decline in such attitudes at
this time (Oliver & Venville, 2011; Tytler & Osborne, 2012). Based on the results
of this study, males who engaged in VLs exhibited more positive attitudes
(regarding inquiry) towards the class. Because Material Environment, Teacher
Support, and Investigation were positive, independent predictors of the Inquiry
attitude scale, these findings further highlight the importance of considering the field
of learning environments in future research.

Above all, this study is important because it evaluated the effectiveness of an
educational innovation in terms of students' perceptions of the learning
environment and learning outcomes. The results provided quantitative evidence that
virtual laboratories are no more effective for students than other instructional
media. On the other hand, virtual laboratories were not shown to be ineffective
and, therefore, they offer an efficient, economical, and stimulating approach to
experimentation in science classes limited in resources, equipment, and time.
Further research on the effectiveness of other technological interventions is
needed to ensure that negative impacts on education do not emerge.

Of interest were significant differences between the perceptions and attitudes of
males and females in this study. Males perceived greater levels of Integration,
Differentiation, and Enjoyment than females. These differences build upon the
well-studied topic of gender imbalance in science education (Scantlebury, 2012) and
could provide direction for future research in this area, especially with regard to
technological innovations in classrooms.

A degree of effectiveness for virtual laboratories was indeed suggested by the
results of this study, but only for a subsample: males in VL classes compared with
males in non-VL classes. The positive value of virtual laboratories, however, was
not evident for females in VL classes relative to females in non-VL classes; this
could be an area of investigation in future research.

Finally, the findings from this study as well as those from similar studies (Raineri,
2001; Toth, Morrow, & Ludvico, 2009) suggest the need for expanded development
of virtual laboratories, especially regarding the aspects of inquiry, resources, and
teacher support, as well as further evaluative research regarding their effects among
students in secondary and post-secondary classrooms.

5.3.2 Implications for Educational Practitioners

The outcomes of this study have the potential to inform policy-makers who call for
technological advancements in education, as well as administrators and teachers who
could implement these technological tools in their classrooms.

Innovations that alter the dynamic of the traditional classroom, ranging from
collaborative teaching, to the incorporation of technology such as online textbooks
and virtual laboratories, to 'learning without walls' in the form of fully online
classes or distance education, have been heralded as a solution for increasing
student motivation and for initiating a paradigm shift in defining the learning
environment. However, the results of this study do not fulfill this promise.
Rather, they point to the value of virtual laboratories in providing an equally
beneficial experience for students in alternative educational environments, such as
online or distance education, or for students in schools that lack resources for
hands-on laboratories.

Perhaps the most important implication of this study is that it provides a practical
model for teachers to integrate virtual laboratories into traditional high school
classrooms. The results of this study suggest that virtual laboratories can be
incorporated confidently into science curricula without detrimental effects, in
contrast to fears that virtual learning is disadvantageous to students. Added benefits
include that virtual laboratories are an efficient, safe, and cost-effective alternative
to running physical laboratories, that students are able to learn independently and,
more importantly, that they are exposed to laboratory equipment, procedures, and
skills that they could not otherwise access because of limited funding and
maintenance.

Because the use of virtual laboratories is at least as effective as other
instructional media, teachers can add this innovation to their repertoire of
presentation tools. Although teachers often feel pressure to complete their
curricula in the time allotted, and therefore might be hesitant to try new
technologies, including virtual laboratories might ultimately save time relative to
attempting to conduct a physical experiment with equipment that can also intimidate
the teacher.

Moreover, the use of virtual environments demonstrates to students that technology,
gaming, and virtual activities can be used for learning as well as for leisure.
This might serve as a valuable reference for the developers of such technology, who
can continue to improve and market their products with the knowledge that such
interventions are not detrimental or distracting to students' educational
experiences. Furthermore, the quantitative and qualitative data gathered in this
study could provide direction for refining virtual laboratories, such as increasing
the sense of investigation for all students, building on the personalized feedback
that the system affords, and incorporating female-friendly aspects into the
experience.

Based on the findings in this study, males who engaged in virtual laboratories
exhibited significantly more positive attitudes (Inquiry) toward the class than males
in non-VL classes. Also, because Material Environment, Teacher Support, and
Investigation were positive, independent predictors of the Inquiry attitude scale,
improving these aspects of the learning environment could result in improved
attitudes amongst males in classrooms with such technological interventions, a
valuable observation for educational practitioners to note. Furthermore, by
redesigning virtual laboratories to incorporate the preferences of females, who
appreciate certain aspects of VLs such as personalized and immediate feedback, it
is possible that females' attitudes to inquiry could also improve. To further
engage females, product developers could perhaps merge virtual experimentation with
social media to allow for greater collaboration and interpersonal interaction,
rather than interaction with inanimate objects alone. In general, improving
students' attitudes toward science at this stage might lead to increased overall
interest in science that influences the rest of their science courses throughout
high school and beyond.

Significant differences also emerged between males and females in this study,
regardless of the instructional method. Males perceived greater Integration,
Differentiation, and Enjoyment in science classes. Teachers can utilize these
findings in their own classrooms to ensure a more gender-fair environment by
stressing to females the integration of laboratory work with class work, by
providing females with more opportunities for differentiated learning, and by
incorporating activities that are of greater interest to females.

Given improved perceptions of the learning environment and attitudes among males,
and no less positive perceptions, attitudes, or achievement among males or females,
using virtual laboratories could be an effective method for teaching
laboratory-based content by introducing students to specialized techniques not
otherwise experienced in a high school classroom setting. Virtual laboratories
allow teachers to expose students to scientific inquiry as practised in the real
world without sacrificing numerous class periods attempting the techniques
themselves (if these are even feasible or affordable at the high school level), and
without the associated safety hazards. Ultimately, while it is possible that this
educational innovation could be disregarded as being of limited benefit to students
in today's technological society, further research into the development and
evaluation of virtual laboratories is necessary.

5.4 Limitations and Suggestions for Further Research

Human error affects all experiments, and my study, which involved not only a human
researcher but also human subjects, was no less error-prone. This section revisits
and summarizes the limitations of this study that were described in greater detail
in Section 3.7. The quantitative and qualitative data also revealed other
limitations, not addressed in Section 3.7, which are described in this section.
Additionally, this section offers suggestions for future research on the
effectiveness of virtual laboratories based on each limitation noted for this
study.

The sample for this study consisted of 322 American high school students. While
there was much diversity amongst the students in this sample, the sample size was
relatively small. A larger sample would have increased statistical power and could
have permitted differences to be identified more confidently; it would also have
reduced the influence of individual idiosyncrasies within this group of students.
Similarly, a larger and more diverse sample of interviewees would have been
desirable and would likely have increased insight into the quantitative results.

Part of the reason why the sample size was limited was a loss of opportunities that
would have allowed more students to respond to the questionnaire. As noted in
Section 3.7.1, the link to the online questionnaire was non-operational at the time
when two teachers intended to administer it. Because the school year was over, time
limitations prevented students from responding to the questionnaire when the link
was fixed or when paper versions could have been provided. This error also limited
the researcher’s ability to recruit interviewees because, during the summer break,
students (and teachers) are apt neither to respond to school-related requests nor
to remember the details of what occurred during the school year. In the future, it
would be advisable for the researcher to note the closing date for the school year for
each teacher, in order to ensure that the implementation of the study is completed
well before that date and to allow extra time to fix any errors. Indeed, at the outset,
more time should be allotted to enable increased efforts in finding participants
before the implementation of the study. The suggested timetable for the
implementation of such a study, assuming that the experimental design and
preparation of materials are complete, is 8–10 months of an academic year.

Regarding the sample, the original research proposal included, in addition to the
two sexes, another group for which the differential effectiveness of virtual
laboratories would be investigated: minority students. However, the data collected
and analyzed for this purpose were disregarded because of contradictory results,
which would have decreased the validity of the conclusions based on this research.
Future studies should attempt to investigate the differential effectiveness of
virtual laboratories for minorities with a sample that better represents minority
students in both the experimental and control groups.

Controlling the treatment conditions was also a limiting factor in this study (see
Section 3.7.2). Ideally, all conditions for the experimental and control groups
should have been identical, apart from the use of virtual laboratories. Naturally,
such a setup is impossible in a school setting. Nevertheless, certain conditions
could have been controlled better, such as the uniformity of teaching resources
within the control group and the consistency of the frequency with which VLs were
administered.

According to the students' perceptions, some reasons why quantitative differences
between instructional methods were not apparent include the difficulty of the topic
of genetics, the infrequency with which virtual laboratories were offered, and
confounding variables such as differences between the teachers implementing the
study and between the types of laboratory investigations. While some of these
issues cannot be controlled for, and in the current study the researcher
deliberately allowed teachers some instructional freedom to better integrate the
study with their own curricula, future studies could include more detailed
instructions regarding the timetable for implementing virtual laboratories and the
exact types of virtual and non-virtual laboratories to be used, thereby enabling a
more accurate comparison.

Other aspects of this study also pointed to the importance of the role of the teacher
over the instructional method, as noted in Section 4.4 from students’ responses to
the interview questions. Each teacher taught both VL and non-VL classes, thereby
controlling for differences between teachers. However, the precise manner in which
the VL activities were integrated into the traditional classes depended on the teacher.
Additionally, the degree of enthusiasm and commitment of the teacher to an
alternative teaching method could have influenced student perceptions. Similarly,
in other studies of web-based learning environments, researchers have highlighted
the role of the teacher in affecting students' perceptions and, ultimately, the
educational effectiveness of the environment (Chandra & Fisher, 2009; Eklund,
Kay, & Lynch, 2003). The inclusion of both pretest and posttest administrations of
a questionnaire in future studies that seek to repeat such an evaluation might
alleviate some of the issues concerning differences between teachers and differences
amongst laboratory activities.

Another issue related to the different treatment groups was the ‘John Henry effect’
mentioned in Section 2.5.5. According to the quantitative data (see Table 4.2), the
mean scores measuring students’ perception of the learning environment, attitudes,
and achievement ranged from 2.79 to 3.92 with the student as the unit of analysis.
These results demonstrate that, overall, regardless of instructional method, students
tended to agree with the questionnaire statements, indicating their positive
perceptions of the learning environment, positive attitudes, and above-average
achievement in their science classes. Therefore, the ‘John Henry effect’ might
explain the lack of significant differences for instructional method; the control group
might have worked harder to improve their learning experience because these
students (and their teachers) knew that they were competing against the group using
virtual laboratories, which was assumed to produce better results.

In fact, while most teachers taught at least one class with the use of virtual
laboratories and at least one class without, one teacher divided each of her classes so
that half of the students in each class used virtual laboratories and the other half did
not. In this instance, the potential for the ‘John Henry effect’ was stronger because
the students in the control group saw what the students in the experimental group
were doing, and they might have over-compensated for the expected difference
when responding to the questionnaire.

To account for this issue in future studies, a double-blind design might produce
more accurate results. Participating teachers should not be informed about the exact
purpose of the study, and they should be given more precise instructions for the
control group. For example, the researcher could provide alternative activities for
the control group so that the comparison of students across different teachers would
be uniform. Furthermore, an improved design would involve students answering the
questionnaire both before the implementation of the study and upon completing the
virtual laboratories or the comparison instructional method.

The questionnaire itself might also be improved in a future study to yield more
accurate results. As reported by participating teachers, a number of their students
complained about the length of the questionnaire. Based on the results of this
study, the Differentiation dimension could be removed from the LAG because it did
not produce any significant differences for the instructional method or for the
instructional method × sex interaction, and because its items were poorly
understood by students, as evidenced during the interviews. Also, the terminology
in certain items could be clarified by defining the terms for each scale. For
instance, before presenting the items for Teacher Support, instructions could have
delineated what is or is not included in the reference to 'teacher'.

The researcher chose to borrow and adapt scales from previously-validated and
often-used questionnaires in the field of learning environments but, in retrospect,
the novel research presented in this thesis called for the creation of a new
instrument or, at least, some new scales that could more accurately measure the
defining features of virtual technology. Also, the 10 achievement items could have
been better mapped to reflect how simulations affect students' understanding of
genetics. Future studies could evaluate the validity of newly-created scales
adapted to the implementation of diverse educational technologies such as Content
and Learning Management Systems, social media, and virtual experimentation.

Finally, to validly assess the effectiveness of virtual laboratories, future studies
might aim to compare three groups: classes with no virtual and no physical
experiments; classes with only physical experiments; and classes with only virtual
experiments. A number of studies have already compared physical and virtual
laboratories and many of them conclude that virtual laboratories enhance the
effectiveness of physical laboratories, relative to the effectiveness of physical
laboratories alone (Akpan & Strayer, 2010; Cobb, Heaney, Corcoran et al., 2009; de
Jong, Linn, & Zacharia, 2013; Pyatt & Sims, 2012; Toth, 2009; Yu, Brown, &
Billet, 2005; Zacharia, Olympiou, & Papaevripidou, 2008), but most of these studies
did not involve lower-secondary classrooms (grades 8–10). Because secondary
schools invest in better technological equipment for science experiments, it would
be wise to enrich future research with studies involving such a three-way
comparison.

5.5 Conclusion

This study provided numerous opportunities to learn about the process of
quantitatively and qualitatively comparing students in classes using virtual
laboratories with students in classes that did not, especially regarding
differences between males and females in these two groups. While significant
benefits were not found for students who engaged in virtual laboratories, a number
of findings emerged from this study that inform future research and practice in
science education.

Learning environment and attitude scales adapted from the Science Laboratory
Environment Inventory (SLEI), the Technology-Rich Outcomes-Focused Learning
Environment Inventory (TROFLEI), and the Test of Science-Related Attitudes (TOSRA)
were found to be valid and reliable when used with a sample of US high school
students taking biology. These scales have been employed in the past and can
continue to be adapted to a wide variety of samples and situations.

This study also identified associations between students’ perceptions of the learning
environment and their attitudes and achievement. All six learning environment
scales correlated significantly and positively with both attitude scales, and a number
of those scales were positive, independent predictors of the attitude scales,
indicating that a more positive learning environment could lead to more positive
attitudes. Associations with achievement were significant for three learning
environment scales (Integration, Material Environment, and Teacher Support), and
two of those scales were positive, independent predictors of achievement,
suggesting that greater integration between laboratory work and class lessons and
better equipment might lead to improved achievement.

Finally, comparisons revealed no significant differences between students who used
virtual laboratories and students who did not. On average, scores were above 3.00,
which falls between the Agree and Strongly Agree response choices, showing that
students generally had positive perceptions of the classroom environment, positive
attitudes towards science, and above-average achievement, irrespective of the
instructional method.

Further analysis revealed that virtual laboratories were somewhat more effective
for males than for females, relative to males and females in the control group.
Males who engaged in virtual laboratories, compared with males who did not,
perceived better equipment (Material Environment) and greater support from teachers
(Teacher Support), and experienced more inquiry (Inquiry), whereas females either
perceived negligible differences between the instructional methods for these
aspects or perceived them to be more positive in the traditional environment
without virtual laboratories.

These findings suggest that technological interventions, such as virtual
laboratories, might not offer any direct educational advantages in traditional
school environments, but also that they are not detrimental to students' learning
experiences. Because they are comparable to other instructional methods in their
effectiveness, virtual laboratories might be particularly useful in alternative
school environments (such as online settings or schools without adequate
resources). Further research could be conducted into the effectiveness of virtual
laboratories, with improvements to the methodology of this study and with an
enhanced product designed to take the interests of females into account. On the
other hand, educational researchers might also use these findings to conclude that
no further research should be conducted regarding this intervention and that
resources might be better invested in evaluating other aspects of the learning
environment in science classes.

References

Afari, E., Aldridge, J. M., Fraser, B. J., & Khine, M. S. (in press). Students' perceptions of the learning environment and attitudes in game-based mathematics classrooms. Learning Environments Research.

Akpan, J., & Strayer, J. (2010). Which comes first: The use of computer simulation for frog dissection or conventional dissection as an academic exercise? Journal of Computers in Mathematics and Science Teaching, 29, 113-138.

Aldridge, J. M., Dorman, J. P., & Fraser, B. J. (2004). Use of multitrait-multimethod modelling to validate actual and preferred forms of the Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI). Australian Journal of Educational & Developmental Psychology, 4, 110-125.

Aldridge, J. M., & Fraser, B. J. (2003). Effectiveness of a technology-rich outcomes-focused learning environment. In M. S. Khine & D. Fisher (Eds.), Technology-rich learning environments: A future perspective (pp. 41-69). Singapore: World Scientific Publishing Company.

Aldridge, J. M., & Fraser, B. J. (2008). Outcomes-focused learning environments: Determinants and effects. Rotterdam: Sense Publishers.

Aldridge, J. M., Fraser, B. J., Bell, L., & Dorman, J. (2012). Using a new learning environment questionnaire for reflection in teacher action research. Journal of Science Teacher Education, 1-32.

Aldridge, J. M., Fraser, B. J., & Fisher, D. L. (2000). A cross-cultural study of classroom learning environments in Australia and Taiwan. Learning Environments Research, 3, 101-134.

Aldridge, J. M., Fraser, B. J., & Huang, T. C. I. (1999). Investigating classroom environments in Taiwan and Australia with multiple research methods. Journal of Educational Research, 93, 48-62.

Aldridge, J. M., Fraser, B. J., & Laugksch, R. C. (2011). Relationship between the school-level and classroom-level environment in secondary schools in South Africa. South African Journal of Education, 31, 127-144.

Aldridge, J. M., Fraser, B. J., & Ntuli, S. (2009). Utilising learning environment assessments to improve teaching practices among in-service teachers undertaking a distance education programme. South African Journal of Education, 29, 147-170.

Aldridge, J. M., Fraser, B. J., & Sebela, M. P. (2004). Using teacher action research to promote constructivist learning environments in South Africa. South African Journal of Education, 24, 245-253.

Aldridge, J. M., Fraser, B. J., Taylor, P. C., & Chen, C. C. (2000). Constructivist learning environments in a cross-national study in Taiwan and Australia. International Journal of Science Education, 22, 37-55.

Aldridge, J. M., Laugksch, R. C., Seopa, M. A., & Fraser, B. J. (2006). Development and validation of an instrument to monitor the implementation of outcomes-based learning environments in science classrooms in South Africa. International Journal of Science Education, 28, 45-70.

Alhalabi, B., Hamza, M. K., Hsu, S., & Romance, N. (1998, November). Virtual labs vs. remote labs: Between myth & reality. Paper presented at the Florida Higher Education Consortium 7th Statewide Conference, Deerfield Beach, FL.

Allen, D., & Fraser, B. J. (2007). Parent and student perceptions of classroom learning environment and its association with student outcomes. Learning Environments Research, 10, 67-82.

American Association for the Advancement of Science (AAAS). (1989). Science for all Americans: A Project 2061 report on literacy goals in science, mathematics, and technology. Washington, DC: AAAS.

Anderson, G. J., & Arsenault, N. (1998). Fundamentals of educational research (2nd ed.). Bristol, PA: Routledge.

Annetta, L., Klesath, M., & Meyer, J. (2009). Taking science online: Evaluating presence and immersion through a laboratory experience in a virtual learning environment for entomology students. Journal of College Science Teaching, 39, 27-33.

Atherton, J., & Buriak, P. (1988). Video simulation as a computer applications instructional technique for professionals and students. Journal of Vocational Education Research, 13, 59-71.

Atherton, L. L. (1971). A comparison of movie and multi-image presentation techniques on affective and cognitive learning. Doctoral dissertation, Michigan State University, East Lansing, MI. (32(6-A), 5924)

Aud, S., Hussar, W., Johnson, F., Kena, G., Roth, E., Manning, E., Wang, X., & Zhang, J. (2012). The condition of education. Washington, DC: U.S. Department of Education, National Center for Education Statistics.

Bahar, M., Johnstone, A., & Hansell, M. (1999). Revisiting learning difficulties in biology. Journal of Biological Education, 33, 84-86.

Baird, A. C. (2012). Teacher shortage areas: Nationwide listing 1990–1991 through 2012–2013. Washington, DC: US Department of Education.

Baird, J., & White, R. (1996). Metacognitive strategies in the classroom. In D. F. Treagust, R. Duit & B. J. Fraser (Eds.), Improving teaching and learning in science and mathematics (pp. 190-200). New York: Teachers College Press.

Banchero, S., & Simon, S. (2011, November 12). My teacher is an app. Wall Street Journal.

Barak, M., & Asad, K. (2012). Teaching image-processing concepts in junior high school: Boys' and girls' achievements and attitudes towards technology. Research in Science & Technological Education, 30, 81-105.

Barnette, J. J. (2000). Effects of stem and Likert response option reversals on survey internal consistency: If you feel the need, there is a better alternative to using those negatively worded stems. Educational and Psychological Measurement, 60, 361-370.

Beard, M. H., Lorton, P. V., Searle, B. W., & Atkinson, T. C. (1973). Comparison of student performance and attitude under three lesson-selection strategies in computer-assisted instruction. Stanford, CA: Defense Technical Information Center.

Beck, J., Czerniak, C. M., & Lumpe, A. T. (2000). An exploratory study of teachers' beliefs regarding the implementation of constructivism in their classroom. Journal of Science Teacher Education, 11, 323-343.

Beede, D., Julian, T., Langdon, D., McKittrick, G., Khan, B., & Doms, M. (2011). Women in STEM: A gender gap to innovation. Washington, DC: US Department of Commerce, Economics and Statistics Administration.

Beichner, R., Bernold, L., Burniston, E., Dail, P., Felder, R., Gastineau, J., et al. (1999). Case study of the physics component of an integrated curriculum. American Journal of Physics, 67, S16-S24.

Bell, R. L., & Trundle, K. C. (2008). The use of a computer simulation to promote scientific conceptions of moon phases. Journal of Research in Science Teaching, 45, 346-372.

Black, E. W., Ferdig, R. E., & DiPietro, M. (2008). An overview of evaluative instrumentation for virtual high schools. The American Journal of Distance Education, 22, 24-45.

Blalock, C. L., Lichtenstein, M. J., Owen, S., Pruski, L., Marshall, C., & Toepperwein, M. (2008). In pursuit of validity: A comprehensive review of science attitude instruments. International Journal of Science Education, 30, 961-977.

Bohus, C. A., Aktan, B., Crowl, L. A., & Shor, M. A. (1996). Distance learning applied to control engineering laboratories. IEEE Transactions on Education, 3, 320-326.

Borgman, C. L., Abelson, H., Dirks, L., Johnson, R., Koedinger, K., Linn, M. C., et al. (2008). Fostering learning in the networked world: The cyberlearning opportunity and challenge. Office of Cyberinfrastructure and Directorate for Education and Human Resources of the National Science Foundation. Retrieved from http://www.nsf.gov/publications/pub_summ.jsp.

Bredderman, T. (1982). What research says: Activity science--The evidence shows it matters. Science and Children, 20, 39-41.

Brekelmans, M. Y., Levy, J., & Rodriguez, R. (1993). A typology of teacher communication style. In T. Wubbels & J. Levy (Eds.), Do you know what you look like? (pp. 46-55). London, UK: Falmer Press.

Brotman, J. S., & Moore, F. M. (2008). Girls and science: A review of four themes in the science education literature. Journal of Research in Science Teaching, 45, 971-1002.

Brown, E. (2012, April). Virginia's new high school graduation requirement: One online course. Washington Post.

Brown, J. D. (1972). An evaluation of the Spitz student response system in teaching a course in logical and mathematical concepts. The Journal of Experimental Education, 40, 12-20.

Burden, R., & Fraser, B. J. (1993). Use of classroom environment assessments in school psychology: A British perspective. Psychology in the Schools, 30, 232-240.

Burkholder, P. R., Purser, G. H., & Cole, R. S. (2008). Using molecular dynamics simulation to reinforce student understanding of intermolecular forces. Journal of Chemical Education, 85, 1071.

Caleon, I. S., & Subramaniam, R. (2008). Attitudes towards science of intellectually gifted and mainstream upper primary students in Singapore. Journal of Research in Science Teaching, 45, 940-954.

Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Boston, MA: Houghton Mifflin.

Campbell, H. (2012, May 24). Why science test scores being 'stagnant' is a good thing. Retrieved from http://www.science20.com/science_20/why_science_test_scores_being_stagnant_good_thing-90396

Campuzano, L., Dynarski, M., Agodini, R., & Rall, K. (2009). Effectiveness of reading and mathematics software products: Findings from two student cohorts—Executive summary (NCEE 2009-4042). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.

Cannon, J. R. (1995). Further validation of the Constructivist Learning Environment Survey: Its use in the elementary science methods course. Journal of Elementary Science Education, 7, 47-62.

Cennamo, K. S. (1990). Can interactive video overcome the "couch potato" syndrome? Paper presented at the Convention of the Association for Educational Communications and Technology, Anaheim, CA.

Chandra, V., & Fisher, D. L. (2009). Students' perceptions of a blended web-based learning environment. Learning Environments Research, 12, 31-44.

Chang, K. (2009, November 23). White House pushes science and math education. New York Times.

Chang, V., & Fisher, D. L. (2003). The validation and application of a new learning environment instrument for online learning in higher education. In M. S. Khine & D. L. Fisher (Eds.), Technology-rich learning environments: A future perspective (pp. 1-20). River Edge, NJ: World Scientific Publishing Company.

Chionh, Y. H., & Fraser, B. J. (2009). Classroom environment, achievement, attitudes and self-esteem in geography and mathematics in Singapore. International Research in Geographical and Environmental Education, 18, 29-44.

Cho, J. I., Yager, R. E., Park, D. Y., & Seo, H. A. (1997). Changes in high school teachers' constructivist philosophies. School Science and Mathematics, 97, 400-405.

Clancy, M., Titterton, N., Ryan, C., Slotta, J., & Linn, M. (2003). New roles for students, instructors, and computers in a lab-based introductory programming course. ACM SIGCSE Bulletin, 35, 132-136.

Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53, 445-459.

Clayton, J. F. (2007). Development and validation of an instrument for assessing online learning environments in tertiary education: The Online Learning Environment Survey (OLLES). Curtin University of Technology.

Cobb, S., Heaney, R., Corcoran, O., & Henderson-Begg, S. (2009). The learning gains and student perceptions of a Second Life virtual lab. Health and Bioscience Education, 13, 1-8.

Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education. New York: Routledge.

Cross, T., & Cross, V. (2004). Scalpel or mouse? A statistical comparison of real & virtual frog dissections. The American Biology Teacher, 66, 409-411.

Csikszentmihalyi, M., & Schneider, B. (2001). Becoming adult: How teenagers prepare for the world of work. New York: Basic Books.

Cutler, R. L., McKeachie, W. J., & McNeil, E. B. (1958). Teaching psychology by telephone. The American Psychologist, 13, 551-552.

de Jong, T., Linn, M. C., & Zacharia, Z. C. (2013). Physical and virtual laboratories in science and engineering education. Science, 340, 305-308.

De Vellis, R. F. (1991). Scale development: Theory and application. Newbury Park, CA: Sage Publications.

Dede, C. (1999). The role of emerging technologies for knowledge mobilization, dissemination, and use in education. Washington, DC: US Department of Education.

Dede, C. (2005). Planning for neomillennial learning styles. Educause Quarterly, 28, 7-12.

den Brok, P., Fisher, D., Rickards, T., & Bull, E. (2006). Californian science students' perceptions of their classroom learning environments. Educational Research and Evaluation, 12, 3-25.

den Brok, P., Telli, S., Cakiroglu, J., Taconis, R., & Tekkaya, C. (2010). Learning environment profiles of Turkish secondary biology classrooms. Learning Environments Research, 13, 187-204.

Didcoct, D. H. (1958). Comparison of the cognitive and affective responses of college students to single-image and multi-image audio-visual presentations. Unpublished doctoral dissertation, Cornell University, Ithaca, NY.

Dori, Y. J., & Barak, M. (2001). Virtual and physical molecular modeling: Fostering model perception and spatial understanding. Educational Technology & Society, 4, 61-74.

Dorman, J. P. (2003). Cross-national validation of the What Is Happening In this Class? (WIHIC) questionnaire using confirmatory factor analysis. Learning Environments Research, 6, 231-245.

Dorman, J. P. (2008). Use of multitrait-multimethod modelling to validate actual and preferred forms of the What Is Happening In this Class? (WIHIC) questionnaire. Learning Environments Research, 11, 179-197.

Dorman, J. P. (2012). The impact of student clustering on the results of statistical tests. In B. J. Fraser, K. Tobin & C. McRobbie (Eds.), Second international handbook of science education (pp. 1333-1351). New York: Springer Verlag.

Dorman, J. P., Aldridge, J. M., & Fraser, B. J. (2006). Using students' assessment of classroom environment to develop a typology of secondary school classrooms. International Education Journal, 7, 906-915.

Dorman, J. P., & Fraser, B. J. (2009). Psychosocial environment and affective outcomes in technology-rich classrooms: Testing a causal model. Social Psychology of Education, 12, 77-99.

Dorman, J. P., Fraser, B. J., & McRobbie, C. (1997). Relationship between school-level and classroom-level environments in secondary schools. Journal of Educational Administration, 35, 74-91.

Drever, E. (1995). Using semi-structured interviews in small-scale research. Edinburgh: The Scottish Council for Research in Education.

Dugger, W. E. (2010, December). Evolution of STEM in the United States. Paper presented at the 6th Biennial International Conference on Technology Education Research, Gold Coast, Australia.

Duit, R., & Confrey, J. (1996). Reorganizing the curriculum and teaching to improve learning in science and mathematics. In D. F. Treagust, R. Duit & B. J. Fraser (Eds.), Improving teaching and learning in science and mathematics (pp. 79-93). New York: Teachers College Press.

Edelson, D. C., Gordin, D. N., & Pea, R. D. (1999). Addressing the challenges of inquiry-based learning through technology and curriculum design. Journal of the Learning Sciences, 8, 391-450.

Eklund, J., Kay, M., & Lynch, H. M. (2003). E-learning: Emerging issues and key trends. Retrieved July 26, 2012, from www.flexiblelearning.net.au

Erickson, F. (1998). Qualitative research methods for science education. In B. J. Fraser & K. Tobin (Eds.), International handbook of science education (pp. 1155-1173). Hingham, MA: Kluwer Academic Publishers.

Erickson, F. (2012). Qualitative research methods for science education. In B. J. Fraser, K. Tobin & C. McRobbie (Eds.), Second international handbook of science education (pp. 1451-1469). New York: Springer Verlag.

Farenga, S. J., & Joyce, B. A. (1997). What children bring to the classroom: Learning science from experience. School Science and Mathematics, 97, 248-252.

Felder, R. M., & Silverman, L. K. (1988). Learning and teaching styles in engineering education. Engineering Education, 78, 674-681.

Ferguson, P. D., & Fraser, B. J. (1998). Changes in learning environment during the transition from primary to secondary school. Learning Environments Research, 1, 369-383.

Finkelstein, N., Adams, W., Keller, C., Kohl, P., Perkins, K., Podolefsky, N., et al. (2005). When learning about the real world is better done virtually: A study of substituting computer simulations for laboratory equipment. Physical Review Special Topics-Physics Education Research, 1, 1-8.

Fisher, D. L., & Cresswell, J. (1998). Actual and ideal principal interpersonal behaviour. Learning Environments Research, 1, 231-247.

Fisher, D. L., & Fraser, B. J. (1981). Validity and use of My Class Inventory. Science Education, 65, 145-156.

Fisher, D. L., & Fraser, B. J. (1983). A comparison of actual and preferred classroom environments as perceived by science teachers and students. Journal of Research in Science Teaching, 20, 55-61.

Fisher, D. L., Henderson, D., & Fraser, B. J. (1995). Interpersonal behaviour in senior high school biology classes. Research in Science Education, 25, 125-133.

Fisher, D. L., Henderson, D., & Fraser, B. J. (1997). Laboratory environments & student outcomes in senior high school biology. American Biology Teacher, 59, 214-219.

Fisher, D. L., & Khine, M. S. (Eds.). (2006). Contemporary approaches to research on learning environments: Worldviews. Singapore: World Scientific.

Fotos, J. T. (1955). The Purdue laboratory method in teaching beginning French courses. The Modern Language Journal, 39, 141-143.

Fraser, B. J. (1978). Some attitude scales for ninth grade science. School Science and Mathematics, 78, 379-384.

Fraser, B. J. (1979). Evaluation of a science-based curriculum. In H. J. Walberg (Ed.), Educational environments and effects: Evaluation, policy, and productivity (pp. 218-234). Berkeley, CA: McCutchan.

Fraser, B. J. (1981). TOSRA: Test of Science Related Attitudes. Melbourne: Australian Council for Educational Research.

Fraser, B. J. (1982). Development of short forms of several classroom environment scales. Journal of Educational Measurement, 19, 221-227.

Fraser, B. J. (1986). Classroom environment. London, UK: Croom Helm.

Fraser, B. J. (1990). Individualised Classroom Environment Questionnaire. Melbourne, Australia: Australian Council for Educational Research.

Fraser, B. J. (1994). Research on classroom and school climate. In D. Gabel (Ed.), Handbook of research on science teaching and learning (pp. 493-541). New York: Macmillan.

Fraser, B. J. (1998a). Classroom environment instruments: Development, validity, and applications. Learning Environments Research, 1, 7-33.

Fraser, B. J. (1998b). Science learning environments: Assessment, effects, and determinants. In B. J. Fraser & K. G. Tobin (Eds.), International handbook of science education (pp. 527-564). Dordrecht: Kluwer Academic Publishers.

Fraser, B. J. (1999). 'Grain sizes' in learning environment research: Combining qualitative and quantitative methods. In H. Waxman & H. J. Walberg (Eds.), New directions for teaching practice and research (pp. 285-296). Berkeley, CA: McCutchan.

Fraser, B. J. (2001). Twenty thousand hours. Learning Environments Research, 4, 1-5.

Fraser, B. J. (2007). Classroom learning environments. In S. K. Abell & N. G. Lederman (Eds.), Handbook of research on science education (pp. 103-124). New York: Routledge.

Fraser, B. J. (2012). Classroom learning environments: Retrospect, context and prospect. In B. J. Fraser, K. Tobin & C. McRobbie (Eds.), Second international handbook of science education (pp. 1191-1239). New York: Springer Verlag.

Fraser, B. J., Aldridge, J. M., & Adolphe, F. S. G. (2010). A cross-national study of secondary science classroom environments in Australia and Indonesia. Research in Science Education, 40, 551-571.

Fraser, B. J., Aldridge, J. M., & Soerjaningsih, W. (2010). Instructor-student interpersonal interaction and student outcomes at the university level in Indonesia. The Open Education Journal, 3, 32-44.

Fraser, B. J., Anderson, G. J., & Walberg, H. J. (1982). Assessment of learning environments: Manual for Learning Environment Inventory (LEI) and My Class Inventory (MCI). Perth, Australia: Western Australian Institute of Technology.

Fraser, B. J., & Butts, W. L. (1982). Relationship between perceived levels of classroom individualization and science-related attitudes. Journal of Research in Science Teaching, 19, 143-154.

Fraser, B. J., & Fisher, D. L. (1983). Student achievement as a function of person-environment fit: A regression surface analysis. British Journal of Educational Psychology, 53, 89-99.

Fraser, B. J., & Fisher, D. L. (1986). Using short forms of classroom climate instruments to assess and improve classroom psychosocial environment. Journal of Research in Science Teaching, 5, 387-413.

Fraser, B. J., Fisher, D. L., & McRobbie, C. J. (1996, April). Development, validation, and use of personal and class forms of a new classroom environment instrument. Paper presented at the annual meeting of the American Educational Research Association, New York.

Fraser, B. J., Giddings, G. J., & McRobbie, C. J. (1992). Assessment of the psychosocial environment of university science laboratory classrooms: A cross-national study. Higher Education, 24, 431-451.

Fraser, B. J., Giddings, G. J., & McRobbie, C. J. (1995). Evolution and validation of a personal form of an instrument for assessing science laboratory classroom environments. Journal of Research in Science Teaching, 32, 399-422.

Fraser, B. J., & Kahle, J. B. (2007). Classroom, home and peer environment influences on student outcomes in science and mathematics: An analysis of systemic reform data. International Journal of Science Education, 29, 1891-1909.

Fraser, B. J., & Lee, S. S. U. (2009). Science laboratory classroom environments in Korean high schools. Learning Environments Research, 12, 67-84.

Fraser, B. J., & McRobbie, C. J. (1995). Science laboratory classroom environments at schools and universities: A cross-national study. Educational Research and Evaluation, 1, 289-317.

Fraser, B. J., & Rentoul, A. J. (1982). Relationship between school-level and classroom-level environment. Alberta Journal of Educational Research, 28, 212-225.

Fraser, B. J., & Tobin, K. (1987). Use of classroom and school climate scales in evaluating alternative high schools. Teaching and Teacher Education, 3, 219-231.

Fraser, B. J., & Tobin, K. (1989). Student perceptions of psychosocial environments in classrooms of exemplary science teachers. International Journal of Science Education, 11, 14-34.

Fraser, B. J., & Tobin, K. (1991). Combining qualitative and quantitative methods in classroom environment research. In B. J. Fraser & H. J. Walberg (Eds.), Educational environments: Evaluation, antecedents and consequences (pp. 271-292). Elmsford, NY: Pergamon Press.

Fraser, B. J., & Treagust, D. F. (1986). Validity and use of an instrument for assessing classroom psychosocial environment in higher education. Higher Education, 15, 37-57.

Fraser, B. J., Walberg, H. J., Welch, W. W., & Hattie, J. A. (1987). Syntheses of educational productivity research. International Journal of Educational Research, 11, 145-252.

Fredricks, J. A., Blumenfeld, P. C., & Paris, A. H. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74, 59-109.

Friedman, T. L. (2006). The world is flat: A brief history of the twenty-first century. New York: Farrar Straus & Giroux.

Gallagher, A. G., Ritter, E. M., Champion, H., Higgins, G., Fried, M. P., Moses, G., et al. (2005). Virtual reality simulation for the operating room: Proficiency-based training as a paradigm shift in surgical skills training. Annals of Surgery, 241, 364-372.

Gardner, D. P. (1983). A nation at risk: The imperative for educational reform. Washington, DC: United States Government Printing Office.

Gardner, P. L. (1975). Attitudes to science. Studies in Science Education, 2, 1-41.

Getzels, J. W., & Thelen, H. A. (1960). The classroom group as a unique social system. In N. B. Henry (Ed.), The dynamics of instructional groups: Socio-psychological aspects of teaching and learning (pp. 53-82). Chicago: University of Chicago Press.

Giallousi, M., Gialamas, V., Spyrellis, N., & Pavlaton, E. (2010). Development, validation, and use of a Greek-language questionnaire for assessing learning environments in grade 10 chemistry classes. International Journal of Science and Mathematics Education, 8, 761-782.

Gibson, H. L., & Chase, C. (2002). Longitudinal impact of an inquiry-based science program on middle school students' attitudes toward science. Science Education, 86, 693-705.

Goh, S. C., & Fraser, B. J. (1996). Validation of an elementary school version of the Questionnaire on Teacher Interaction. Psychological Reports, 79, 512-522.

Goh, S. C., & Fraser, B. J. (1998). Teacher interpersonal behaviour, classroom environment and student outcomes in primary mathematics in Singapore. Learning Environments Research, 1, 199-229.

Goh, S. C., & Khine, M. S. (Eds.). (2002). Studies in educational learning environments. Singapore: World Scientific.

Goh, S. C., Young, D. J., & Fraser, B. J. (1995). Psychosocial climate and student outcomes in elementary mathematics classrooms: A multilevel analysis. Journal of Experimental Education, 64, 29-40.

Goldberg, F. (1997). Constructing physics understanding in a computer-supported learning environment. San Diego: The Learning Team.

Gonzales, P., Williams, T., Jocelyn, L., Roey, S., Kastberg, D., & Brenwald, S. (2008). Highlights from TIMSS 2007: Mathematics and science achievement of U.S. fourth- and eighth-grade students in an international context (NCES 2009001). Washington, DC: National Center for Education Statistics (NCES), U.S. Department of Education.

Gupta, A., & Koul, R. B. (2007, December). Psychosocial learning environments of technology rich science classrooms in India. Paper presented at the Annual Conference of the Australian Association for Research in Education, Perth.

Haertel, G. D., Walberg, H. J., & Haertel, E. H. (1981). Socio-psychological environments and learning: A quantitative synthesis. British Educational Research Journal, 7, 27-36.

Hanson, S. (2009). Swimming against the tide: African American girls and science education. Philadelphia, PA: Temple University Press.

Harms, U. (2000, June). Virtual and remote labs in physics education. Paper presented at the Second European Conference on Physics Teaching in Engineering Education, Budapest.

Harwell, S. H., Gunter, S., Montgomery, S., Shelton, C., & West, D. (2001). Technology integration and the classroom learning environment: Research for action. Learning Environments Research, 4, 259-286.

Helding, K. A., & Fraser, B. J. (in press). Effectiveness of NBC (National Board Certified) teachers in terms of learning environment, attitudes and achievement among secondary school students. Learning Environments Research.

Herrera, L. (2011, January 17). In Florida, virtual classrooms with no teachers. New York Times.

Herrmann, A. (2012). uRespond: A classroom response system on the iPad. Master of Science thesis, University of North Carolina, Wilmington.

Herrnstein, R. J., & Murray, C. A. (1996). The bell curve: Intelligence and class structure in American life. New York, NY: Free Press Paperbacks.

HHMI. (2003). Howard Hughes Medical Institute BioInteractive: Virtual labs. Retrieved July 24, 2012, from http://www.hhmi.org/biointeractive/vlabs/

Hill, C., Corbett, C., & St. Rose, A. (2010). Why so few? Women in science,
technology, engineering and mathematics. Washington, DC: AAUW.

Hiltz, S. R., & Wellman, B. (1997). Asynchronous learning networks as a virtual


classroom. Communications of the ACM, 40, 44-49.

Hiltzik, M. (2012, February 4, 2012). Who really benefits from putting high-tech
gadgets in classrooms?, The Los Angeles Times, pp. B-1. Retrieved from
http://www.latimes.com/business/la-fi-hiltzik-20120205,0,639053.column

Hofstein, A., & Kind, P. M. (2012). Learning in and from science laboratories. In B.
J. Fraser, K. Tobin & C. McRobbie (Eds.), Second international handbook of
science education (pp. 189-207). New York: Springer Verlag.

Hofstein, A., & Lunetta, V. N. (1982). The role of the laboratory in science
teaching: Neglected aspects of research. Review of Educational Research,
52, 201-217.

Hofstein, A., & Lunetta, V. N. (2004). The laboratory in science education:


Foundations for the twenty-first century. Science Education, 88, 27-54.

Hofstein, A., & Walberg, H. J. (1995). Instructional strategies. In B. J. Fraser & H.


J. Walberg (Eds.), Improving science education (pp. 1-20). Chicago:
National Society for the Study of Education.

Holdampf, B. A. (1983). Innovative associate degree nursing program – remote


area. Austin: Texas Education Agency, Department of Occupational
Education and Technology.

Horn, D. (1994). Distance education: Is interactivity compromised? Performance+


Instruction, 33, 12-15.

Houston, L. S., Fraser, B. J., & Ledbetter, C. E. (2008). An evaluation of elementary


school science kits in terms of classroom environment and student attitudes.
Journal of Elementary Science Education, 20, 29-47.

Huang, S., & Fraser, B. J. (2009). Science teachers' perceptions of the school
environment: Gender differences. Journal of Research in Science Teaching,
46, 404-420.

International Association for K–12 Online Learning (iNACOL). (2012). Fast facts
about online learning. Retrieved August 29, 2012, from
http://www.inacol.org/press/docs/nacol_fast_facts.pdf

Javidi, G. (1999). Virtual reality and education. University of South Florida, Tampa, FL.

Javidi, G., & Sheybani, E. (2006, October). Virtual engineering lab. Paper presented
at the 36th ASEE/IEEE Frontiers in Education Conference, San Diego, CA.

Jegede, O. J., Fraser, B. J., & Fisher, D. L. (1995). The development and validation
of a distance and open learning environment scale. Educational Technology
Research and Development, 43, 90-93.

Jegede, O. J., Fraser, B. J., & Okebukola, P. A. (1994). Altering socio-cultural beliefs hindering the learning of science. Instructional Science, 22, 137-152.

Johnson, B., & McClure, R. (2004). Validity and reliability of a shortened, revised
version of the Constructivist Learning Environment Survey (CLES).
Learning Environments Research, 7, 65-80.

Johnson, D. M., Wardlow, G. W., & Franklin, T. D. (1997). Hands-on activities versus worksheets in reinforcing physical science principles: Effects on student achievement and attitude. Journal of Agricultural Education, 38, 9-17.

Johnson, M. (2002). Introductory biology "online": Assessing outcomes of two student populations. Journal of College Science Teaching, 31, 312-317.

Johnstone, A. H. (1991). Why is science difficult to learn? Things are seldom what
they seem. Journal of Computer Assisted Learning, 7, 75-83.

Jones, A. (2012). Technology in science education: Context, contestation, and connection. In B. J. Fraser, K. Tobin & C. McRobbie (Eds.), Second international handbook of science education (pp. 811-821). New York: Springer Verlag.

Judd, W., Bunderson, C., & Bessent, E. (1970). An investigation of the effects of
learner control in computer-assisted instruction prerequisite mathematics.
Austin, TX: University of Texas.

Kahle, J. B. (2004). Will girls be left behind? Gender differences and accountability.
Journal of Research in Science Teaching, 41, 961-969.

Kanner, J. H., Runyon, R. P., & Desiderato, O. (1954). Television in army training:
Evaluation of television in army basic training. Washington, DC: George
Washington University.

Karplus, R., & Butts, D. P. (1977). Science teaching and the development of
reasoning. Journal of Research in Science Teaching, 14, 169-175.

Kempa, R. F., & Ward, J. E. (1975). The effect of different modes of task
orientation on observational attainment in practical chemistry. Journal of
Research in Science Teaching, 12, 69-76.

Khine, M. S., & Fisher, D. L. (Eds.). (2003). Technology-rich learning environments: A future perspective. Singapore: World Scientific.

Khoo, H. S., & Fraser, B. J. (2008). Using classroom psychosocial environment in the evaluation of adult computer application courses in Singapore. Technology, Pedagogy and Education, 17, 67-81.

Kijkosol, D. (2005). Teacher-student interactions and laboratory learning environments in biology classes in Thailand. Unpublished doctoral thesis, Curtin University of Technology.

Kim, H. B., Fisher, D. L., & Fraser, B. J. (2000). Classroom environment and
teacher interpersonal behaviour in secondary science classes in Korea.
Evaluation and Research in Education, 14, 3-22.

Klahr, D., Triona, L. M., & Williams, C. (2007). Hands on what? The relative
effectiveness of physical versus virtual materials in an engineering design
project by middle school children. Journal of Research in Science Teaching,
44, 183-203.

Klass, G., & Crothers, L. (2000). An experimental evaluation of Web-based tutorial quizzes. Social Science Computer Review, 18, 508-515.

Klopfer, L. E. (1971). Evaluation of learning in science. In B. S. Bloom, J. T. Hastings & G. F. Madaus (Eds.), Handbook on summative and formative evaluation of student learning (pp. 559-641). New York: McGraw-Hill.

Klopfer, L. E. (1976). A structure for the affective domain in relation to science education. Science Education, 60, 299-312.

Koballa, T. R., & Glynn, S. M. (2007). Attitudinal and motivational constructs in science learning. In S. K. Abell & N. G. Lederman (Eds.), Handbook of research on science education (pp. 75-102). Mahwah, NJ: Lawrence Erlbaum.

Koul, R. B., Fisher, D., & Shaw, T. (2011). An application of the TROFLEI in
secondary-school science classes in New Zealand. Research in Science &
Technological Education, 29, 147-167.

Koul, R. B., & Fisher, D. L. (2005). Cultural background and students’ perceptions
of science classroom learning environment and teacher interpersonal
behaviour in Jammu, India. Learning Environments Research, 8, 195-211.

Kroemer, K., & Grandjean, E. (1997). Fitting the task to the human: A textbook of
occupational ergonomics (5th ed.). London: Taylor and Francis.

Lazarowitz, R., & Tamir, P. (1994). Research on using laboratory instruction in science. In D. L. Gabel (Ed.), Handbook of research on science teaching and learning (pp. 94-128). New York: Macmillan.

Lee, O. M. (1985). The effect of type of feedback on rule learning in computer-based instruction. Doctoral dissertation, Florida State University, Tallahassee, FL. (Dissertation Abstracts International, 46, 955A)

Lee, S. S. U., Fraser, B. J., & Fisher, D. L. (2003). Teacher-student interactions in Korean high school science classrooms. International Journal of Science and Mathematics Education, 1, 67-85.

Lewin, K. (1936). Principles of topological psychology. New York: McGraw-Hill.

Lightburn, M. E., & Fraser, B. J. (2007). Classroom environment and student outcomes among students using anthropometry activities in high-school science. Research in Science & Technological Education, 25, 153-166.

Lindesmith, A. R. (1947). Addiction and opiates. Chicago, IL: Aldine De Gruyter.

Loder, J. E. (1937). A study of aural learning with and without the speaker present.
Lincoln, NE: University of Nebraska.

Logan, K. A., Crump, B. J., & Rennie, L. J. (2006). Measuring the computer
classroom environment: Lessons learned from using a new instrument.
Learning Environments Research, 9, 67-93.

Ma, J., & Nickerson, J. V. (2006). Hands-on, simulated, and remote laboratories: A
comparative literature review. ACM Computing Surveys, 38, 1-24.

MacLeod, C., & Fraser, B. J. (2010). Development, validation and application of a modified Arabic translation of the What Is Happening In this Class? (WIHIC) questionnaire. Learning Environments Research, 13, 105-125.

Majeed, A., Fraser, B. J., & Aldridge, J. M. (2002). Learning environment and its association with student satisfaction among mathematics students in Brunei Darussalam. Learning Environments Research, 5, 203-226.

Maor, D., & Fraser, B. J. (1996). Use of classroom environment perceptions in evaluating inquiry-based computer-assisted learning. International Journal of Science Education, 18, 401-421.

Marbach-Ad, G., Rotbain, Y., & Stavy, R. (2008). Using computer animation and
illustration activities to improve high school students' achievement in
molecular genetics. Journal of Research in Science Teaching, 45, 273-292.

Marchevsky, A. M., Relan, A., & Baillie, S. (2003). Self-instructional “virtual pathology” laboratories using web-based technology enhance medical school teaching of pathology. Human Pathology, 34, 423-429.

Marjoribanks, K. (1991). Families, schools, and students’ educational outcomes. In B. J. Fraser & H. J. Walberg (Eds.), Educational environments: Evaluation, antecedents and consequences (pp. 75-91). London, UK: Pergamon.

Martin, E. D., & Rainey, L. (1993). Student achievement and attitude in a satellite-
delivered high school science course. American Journal of Distance
Education, 7, 54-61.

Martin-Dunlop, C., & Fraser, B. J. (2007). Learning environment and attitudes associated with an innovative science course designed for prospective elementary teachers. International Journal of Science and Mathematics Education, 6, 163-190.

Massachusetts Comprehensive Assessment System (MCAS). (2009). Item by Item Results for Grade HS Biology. Massachusetts Department of Elementary and Secondary Education. Retrieved December 8, 2012, from http://profiles.doe.mass.edu/state_report/mcas.aspx

Mathison, S. (1988). Why triangulate? Educational Researcher, 17, 13-17.

McCarty, G., Hope, J., & Polman, J. L. (2010, March). The youth engagement with
science and technology survey: Informing practice and measuring outcomes.
Paper presented at the Annual Meeting of the National Association for
Research on Science Teaching, Philadelphia, PA.

McKavanagh, C., & Stevenson, J. (1992, December). Measurement of classroom environment variables in vocational education. Paper presented at the Conference of the Australian Association for Research in Education and the New Zealand Association for Research in Education, Deakin University, Geelong.

McRobbie, C., & Fraser, B. J. (1993). Associations between student outcomes and
psychosocial science environment. The Journal of Educational Research, 87,
78-85.

Midgley, C., Eccles, J. S., & Feldlaufer, H. (1991). Classroom environment and the transition to junior high school. In B. J. Fraser & H. J. Walberg (Eds.), Educational environments: Evaluation, antecedents and consequences (pp. 113-139). London, UK: Pergamon.

Milrad, M., & Spikol, D. (2007). Anytime, anywhere learning supported by smart
phones: Experiences and results from the MUSIS project. Journal of
Educational Technology and Society, 10, 62-70.

Mink, D. V., & Fraser, B. J. (2005). Evaluation of a K–5 mathematics program which integrates children’s literature: Classroom environment and attitudes. International Journal of Science and Mathematics Education, 3, 59-85.

Mock, R. (2000). Comparison of online coursework to traditional instruction. Unpublished thesis, Michigan State University, East Lansing, MI.

Moore, R. W., & Sutman, F. X. (1970). The development, field test and validation
of an inventory of scientific attitudes. Journal of Research in Science
Teaching, 34, 327-336.

Moos, R. H. (1974). Social climate scales: An overview. Palo Alto, CA: Consulting
Psychologists Press.

Moos, R. H. (1978). A typology of junior high and high school classrooms. American Educational Research Journal, 15, 53-66.

Moos, R. H. (1991). Connections between school, work, and family settings. In B. J. Fraser & H. J. Walberg (Eds.), Educational environments: Evaluation, antecedents and consequences (pp. 29-53). London, UK: Pergamon.

Moss, G., Jewitt, C., Levačić, R., Armstrong, V., Cardini, A., & Castle, F. (2007).
Interactive whiteboards, pedagogy, and pupil performance: An evaluation of
the schools whiteboard expansion project (London Challenge). London:
Department for Education and Skills/Institute of Education, University of
London.

Muirhead, P. (2003). Technology and maritime education and training: A future perspective. In M. S. Khine & D. L. Fisher (Eds.), Technology-rich learning environments: A future perspective (pp. 235-254). River Edge, NJ: World Scientific Publishing Company.

Munby, H. (1997). Issues in validity of science attitude measurement. Journal of Research in Science Teaching, 20, 141-162.

Murray, H. A. (1938). Explorations in personality. New York: Oxford University Press.

Nasr, A., & Soltani, K. A. (2011). Attitude towards biology and its effects on
students' achievement. International Journal of Biology, 3, 100-104.

National Center for Educational Statistics (NCES). (2012a). The nation's report
card. Science 2011. (NCES 2012–465). Washington, DC: National Center
for Education Statistics, Institute of Education Sciences, U.S. Department of
Education.

National Center for Educational Statistics (NCES). (2012b). The nation’s report
card. Science in action: Hands-on and interactive computer tasks from the
2009 science assessment. (NCES 2012–468). Washington, DC: National
Center for Education Statistics, Institute of Education Sciences, U.S.
Department of Education.

National Research Council (NRC). (1996). National science education standards. Washington, DC: National Academy Press.

National Research Council (NRC). (2005). National science education standards. Washington, DC: National Academy Press.

National Research Council (NRC). (2011). A framework for K–12 science education: Practices, crosscutting concepts, and core ideas. Committee on a Conceptual Framework for New K–12 Science Education Standards. Board on Science Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.

NCLB. (2001). No Child Left Behind Act of 2001. Retrieved from http://www2.ed.gov/legislation/ESEA02.

Neathery, M. F. (1997). Elementary and secondary students' perceptions toward science: Correlations with gender, ethnicity, ability, grade, and science achievement. Electronic Journal of Science Education. Retrieved July 26, 2012, from http://ejse.southwestern.edu/article/view/7573/5340

Nedic, Z., Machotka, J., & Nafalski, A. (2003, November). Remote laboratories
versus virtual and real laboratories. Paper presented at the 33rd ASEE/IEEE
Frontiers in Education Conference, Boulder, CO.

Neuendorf, K. A. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage.

Newby, M., & Fisher, D. L. (1997). An instrument for assessing the learning
environment of a computer laboratory. Journal of Educational Computing
Research, 16, 179-190.

NGSS. (2011). Next Generation Science Standards. Retrieved from
http://www.nextgenscience.org/.

Nix, R. K., Fraser, B. J., & Ledbetter, C. E. (2005). Evaluating an integrated science learning environment using the Constructivist Learning Environment Survey. Learning Environments Research, 8, 109-133.

Nix, R. K., & Fraser, B. J. (2011). Using computer-assisted teaching to promote constructivist practices in teacher education. In B. A. Morris & G. M. Ferguson (Eds.), Computer-assisted teaching: New developments (pp. 93-115). New York: Nova Science Publishers.

Nooriafshar, M. (2011, September). New and emerging applications of tablet computers such as iPad in mathematics and science education. Paper presented at the 11th International Conference of the Mathematics Education into the 21st Century Project (ME21), Dresden, Germany.

Norton, S. J., McRobbie, C. J., & Ginns, I. S. (2007). Problem solving in a middle
school robotics design classroom. Research in Science Education, 37, 261-
277.

Oakes, J. (1990). Opportunities, achievement, and choice: Women and minority students in science and mathematics. Review of Research in Education, 16, 153-222.

Ogbuehi, P. I., & Fraser, B. J. (2007). Learning environment, attitudes and conceptual development associated with innovative strategies in middle school mathematics. Learning Environments Research, 10, 101-114.

Oh, P. S., & Yager, R. (2004). Development of constructivist science classrooms and changes in student attitudes toward science learning. Science Education International, 15, 105-113.

Olitsky, S., & Milne, C. (2012). Understanding engagement in science education. In B. J. Fraser, K. Tobin & C. McRobbie (Eds.), Second international handbook of science education (pp. 19-33). New York: Springer Verlag.

Oliver, J. S., & Simpson, R. D. (1988). Influences of attitude toward science, achievement motivation, and science self concept on achievement in science: A longitudinal study. Science Education, 72, 143-155.

Oliver, M., & Venville, G. (2011). An exploratory case study of Olympiad students'
attitudes towards and passion for science. International Journal of Science
Education, 33, 2295-2322.

Organization for Economic Co-operation and Development (OECD). (2009).
Equally prepared for life? How 15-year-old boys and girls perform in school.
Retrieved June 25, 2012, from
http://www.pisa.oecd.org/pages/0,3417,en_32252351_32235907_1_1_1_1_1
,00.html

Organization for Economic Co-operation and Development (OECD). (2010). PISA 2009 at a glance. Retrieved June 25, 2012, from http://dx.doi.org/10.1787/9789264095298-en

Orlans, F. B. (1988). Should students harm or destroy animal life? The American
Biology Teacher, 50, 6-12.

Osborne, J., Simon, S., & Collins, S. (2003). Attitudes towards science: A review of
the literature and its implications. International Journal of Science
Education, 25, 1049-1079.

Owen, J. M. (1979). The Australian Science Education Project: A study of factors affecting its adoption and implementation in schools. Melbourne, Australia: Monash University.

Parsons, T. S. (1957). A comparison of instruction by kinescope, correspondence study and customary classroom procedures. Journal of Educational Psychology, 48, 27-40.

Partin, G. R., & Atkins, E. L. (1984). Teaching via the electronic blackboard. In L.
Parker & C. Olgren (Eds.), Teleconferencing and electronic communication
III. Madison, WI: University of Wisconsin Extension, Centre for Interactive
Programs.

Peiro, M. M., & Fraser, B. J. (2009). Assessment and investigation of science learning environments in the early childhood grades. In M. Ortiz & C. Rubio (Eds.), Educational evaluation: 21st century issues and challenges (pp. 349-365). New York: Nova Science Publishers.

Perez-Pena, R. (2012, July). Top universities test the online appeal of free. New York Times.

Perpich, J. (2012). Howard Hughes Medical Institute: The Virtual Immunology Lab.
Retrieved May 30, 2012, from
http://www.hhmi.org/biointeractive/vlabs/immunology/index.html

Perrodin, A. F. (1966). Children's attitudes towards science. Science Education, 50, 214-218.

Piaget, J. (1963). Origins of intelligence in children. New York: Norton.

Piaget, J. (1970). Structuralism. New York: Basic Books.

Pickett, L. H., & Fraser, B. J. (2009). Evaluation of a mentoring program for beginning teachers in terms of the learning environment and student outcomes in participants’ school classrooms. In A. Selkirk & M. Tichenor (Eds.), Teacher education: Policy, practice, and research (pp. 1-51). Hauppauge, NY: Nova Science Publishers, Inc.

Popham, W. J. (1961). Tape recorded lectures in the college classroom. Educational Technology Research and Development, 9(2), 109-118.

Prensky, M. (2001). Digital natives, digital immigrants, Part 1. On the Horizon, 9, 1-6.

Programme for International Student Assessment (PISA). (2009). PISA 2009 key
findings. Retrieved July 17, 2012, from
http://www.oecd.org/pages/0,3417,en_32252351_32235731_1_1_1_1_1,00.
html

Promratrak, L., & Malone, J. (2006, July). The development and evaluation of a CAI
package for use in Thai tertiary electronics laboratories. Paper presented at
the 7th International Conference on Information Technology Based Higher
Education and Training, Sydney.

Psotka, J. (1995). Immersive training systems: Virtual reality and education and
training. Instructional Science, 23, 405-431.

Pyatt, K., & Sims, R. (2012). Virtual and physical experimentation in inquiry-based
science labs: Attitudes, performance and access. Journal of Science
Education and Technology, 21, 133-147.

Quek, C. L., Wong, A. F. L., & Fraser, B. J. (2005). Student perceptions of chemistry laboratory learning environments, student-teacher interactions and attitudes in secondary school gifted education classes in Singapore. Research in Science Education, 35, 399-421.

Raaflaub, C. A., & Fraser, B. J. (2002, April). Investigating the learning environment in Canadian mathematics and science classrooms in which laptop computers are used. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Raineri, D. (2001). Virtual laboratories enhance traditional undergraduate biology laboratories. Biochemistry and Molecular Biology Education, 29, 160-162.

Rauwerda, H., Roos, M., Hertzberger, B. O., & Breit, T. M. (2006). The promise of
a virtual lab in drug discovery. Drug Discovery Today, 11, 228-236.

Redfield, R. J. (2012). Perspective — "Why do we have to learn this stuff?"— A
new genetics for 21st century students. PLoS Biology, 10, e1001356.

Reising, M. D. (2010). Bridging biology lectures and labs through higher-order thinking. Master of Education thesis, Bowling Green State University, Bowling Green, OH.

Rentoul, A. J., & Fraser, B. J. (1979). Development of a school-level environment questionnaire. Journal of Educational Administration, 21, 21-39.

Rich, M. (2012, July 6). 'No Child' law whittled down by White House. The New York Times.

Rickards, T., den Brok, P., & Fisher, D. L. (2005). The Australian science teacher:
A typology of teacher-student interpersonal behaviour in Australian science
classes. Learning Environments Research, 8, 267-287.

Robinson, E., & Fraser, B. J. (in press). Kindergarten students’ and parents’
perceptions of science classroom environments: Achievement and attitudes.
Learning Environments Research.

Rogers, D. L. (2000). A paradigm shift: Technology integration for higher education in the new millennium. Association for the Advancement of Computing in Education, 1, 19-27.

Rulon, P. V. (1943). A comparison of phonographic recordings with printed motivation to further study. The Harvard Educational Review, 8, 246-255.

Russek, B. E., & Weinberg, S. L. (1993). Mixed methods in a study of implementation of technology-based materials in the elementary classroom. Evaluation and Program Planning, 16, 131-142.

Russell, A., & Siley, C. (2005). Strengthening the science and mathematics pipeline
for a better America. American Association of State Colleges and
Universities, 2, 1-4.

Russell, T. L. (1992). Television's indelible impact on distance education: What we should have learned from comparative research. Research in Distance Education, 4, 2-4.

Russell, T. L. (1999). The no significant difference phenomenon. Raleigh, NC: North Carolina State University.

Sabah, S. (2011, April). The effect of computer simulation on students' conceptual understanding of electric circuits. Paper presented at the National Association of Research in Science Teaching annual international conference, Orlando, FL.

Saettler, L. P. (2004). The evolution of American educational technology. Charlotte, NC: Information Age Publishing, Inc.

Salomon, G., & Globerson, T. (1987). Skill may not be enough: The role of
mindfulness in learning and transfer. International Journal of Educational
Research, 11, 623-637.

Salomon, G., & Perkins, D. (1996). Learning in wonderland. In S. T. Kerr (Ed.), Technology and the future of schooling (pp. 111-129). Chicago, IL: National Society for the Study of Education.

Saretsky, G. (1972). The OEO PC experiment and the John Henry effect. Phi Delta
Kappan, 53, 579-581.

Scantlebury, K. (2012). Still part of the conversation: Gender issues in science education. In B. J. Fraser, K. Tobin & C. McRobbie (Eds.), Second international handbook of science education (pp. 499-512). New York: Springer Verlag.

Scott, R. H., & Fisher, D. L. (2004). Development, validation and application of a Malay translation of an elementary version of the Questionnaire on Teacher Interaction (QTI). Research in Science Education, 34, 173-194.

Seibert, W. F., & Honig, J. M. (1960). A brief study of televised laboratory instruction. Educational Technology Research and Development, 8, 115-123.

Sere, M. G. (2002). Towards renewed research questions from the outcomes of the
European project labwork in science education. Science Education, 86, 624-
644.

Schibeci, R. A. (1984). Attitudes to science: An update. Studies in Science Education, 11, 25-59.

Sinclair, B. B., & Fraser, B. J. (2002). Changing classroom environments in urban middle schools. Learning Environments Research, 5, 301-328.

Sink, C. A., & Spencer, L. R. (2005). My Class Inventory – Short Form as an accountability tool for elementary school counsellors to measure classroom climate. Professional School Counseling, 9, 37-48.

Sirkemaa, S. (2003). Learning environment in the digital age: Supporting the student. Informing Science, 6, 63-67.

Smerdon, B., Cronen, S., Lanahan, L., Anderson, J., Iannotti, N., & Angeles, J.
(2000). Teachers’ tools for the 21st century: A report on teachers’ use of
technology. Education Statistics Quarterly, 2, 48-53.

Spinner, H., & Fraser, B. J. (2005). Evaluation of an innovative mathematics program in terms of classroom environment, student attitudes, and conceptual development. International Journal of Science and Mathematics Education, 3, 267-293.

Stern, G. G. (1970). People in context: Measuring person-environment congruence in education and industry. New York: Wiley.

Stern, G. G., Stein, M. L., & Bloom, B. S. (1956). Methods in personality assessment. Glencoe, IL: Free Press.

Sticht, T. G. (1971). Failure to increase learning using the time saved by the time
compression of speech. Journal of Educational Psychology, 62, 55.

Stuckey-Mickell, T., & Stuckey-Danner, B. (2007). Virtual labs in the online biology course: Student perceptions of effectiveness and usability. MERLOT Journal of Online Learning and Teaching, 3, 105-111.

Tamim, R. M., Bernard, R. M., Borokhovski, E., Abrami, P. C., & Schmid, R. F.
(2011). What forty years of research says about the impact of technology on
learning: A second-order meta-analysis and validation study. Review of
Educational Research, 81, 4-28.

Tamir, P. (1974). An inquiry oriented laboratory examination. Journal of Educational Measurement, 11, 25-33.

Taylor, P. C., & Fraser, B. J. (1991, April). CLES: An instrument for assessing
constructivist learning environments. Paper presented at the Annual Meeting
of the National Association for Research in Science Teaching, Fontana, WI.

Taylor, P. C., Fraser, B. J., & Fisher, D. L. (1997). Monitoring constructivist classroom learning environments. International Journal of Educational Research, 27, 293-302.

Teh, G., & Fraser, B. J. (1994). An evaluation of computer-assisted learning in terms of achievement, attitudes, and classroom environment. Evaluation and Research in Education, 8, 147-161.

The California Educator. (2003). The power lies in giving students some control.
http://legacy.cta.org/media/publications/educator/

Thomas, R., & Hooper, E. (1991). Simulations: An opportunity we are missing.
Journal of Research on Computing in Education, 23, 497-513.

Thompson, A., Simonson, M. R., & Hargrave, C. P. (1996). Educational technology: A review of the research. Washington, DC: Association for Educational Communications and Technology.

Thompson, F. T., & Levine, D. U. (1997). Examples of easily explainable suppressor variables in multiple regression research. Multiple Linear Regression Viewpoints, 24, 11-13.

Thompson, S., Wernert, N., Underwood, C., & Nicholas, M. (2008). TIMSS 07:
Taking a closer look at mathematics and science in Australia. Melbourne:
Australian Council for Educational Research.

Thornton, J. W., & Brown, J. W. (1968). New media & college teaching:
Instructional television. Washington, DC: National Educational Association:
Department of Audiovisual Instruction.

Thurmond, B., Holmes, S. Y., Annetta, L. A., Folta, E., Sears, M., Cheng, R., et al.
(2011, April). Student perceptions of learning and engagement with
scientific concepts through Serious Educational Game (SEG) development.
Paper presented at the National Association of Research in Science Teaching
annual international conference, Orlando, FL.

Tobin, K., & Fraser, B. J. (1998). Qualitative and quantitative landscapes of classroom learning environments. In B. J. Fraser & K. Tobin (Eds.), International handbook of science education (pp. 623-640). Hingham, MA: Kluwer Academic Publishers.

Tobin, K., Kahle, J. B., & Fraser, B. J., (Eds.). (1990). Windows into science
classes: Problems associated with higher-level cognitive learning. London,
UK: Falmer Press.

Toth, E. (2009). "Virtual inquiry" in the science classroom: What is the role of
technological pedagogical content knowledge? International Journal of
Information and Communication Technology Education, 5, 78-87.

Toth, E., Morrow, B., & Ludvico, L. (2009). Designing blended inquiry learning in
a laboratory context: A study of incorporating hands-on and virtual
laboratories. Innovative Higher Education, 33, 333-344.

Trends in International Mathematics and Science Study (TIMSS). (2007). Improving mathematics and science education. Retrieved June 25, 2012, from http://timss.bc.edu/TIMSS2007/index.html

Trickett, E. J., & Moos, R. H. (1973). Social environment of junior high and high
school classrooms. Journal of Educational Psychology, 65, 93-102.

Trindade, J., Fiolhais, C., & Almeida, L. (2002). Science learning in virtual
environments: A descriptive study. British Journal of Educational
Technology, 33, 471-488.

Tsui, C. Y., & Treagust, D. F. (2004). Motivational aspects of learning genetics with
interactive multimedia. The American Biology Teacher, 66, 277-285.

Tytler, R., & Osborne, J. (2012). Student attitudes and aspirations towards science.
In B. J. Fraser, K. Tobin & C. McRobbie (Eds.), Second international
handbook of science education (pp. 597-625). New York: Springer Verlag.

University of Utah. (2004). Genetic Science Learning Center: Learn.Genetics. Retrieved July 24, 2012, from
http://learn.genetics.utah.edu/

Vacha-Haase, T., & Thompson, B. (2004). How to estimate and interpret various
effect sizes. Journal of Counseling Psychology, 51, 473.

van de Bunt-Kokhuis, S. (2001). On-line learning at universities in developing countries: From leap-frogging to antelope-jumping – Specific needs and solutions. Higher Education in Europe, 26, 241-246.

Van der Meer, A. W. (1950). Relative effectiveness of instruction by films exclusively, films plus study guides, and standard lecture methods (Technical Report No. SDC 269-7-130). Port Washington, NY: U.S. Navy Training Devices Center.

Van Petegem, P., Deneire, A., & De Maeyer, S. (2008). Evaluation and participation
in secondary education: Designing and validating a self-evaluation
instrument for teachers to solicit feedback from pupils. Studies in
Educational Evaluation, 34, 136-144.

Van Rooy, W. S. (2011, April). Transforming and enhancing the learning and
teaching of senior biology via digital technologies. Paper presented at the
National Association of Research in Science Teaching annual international
conference, Orlando, FL.

Vlab. (2012). Virtual labs: An initiative of the Ministry of Human Resource Development (MHRD) under the National Mission on Education through ICT. Retrieved July 24, 2012, from http://www.vlab.co.in/

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.

Wahyudi, & Treagust, D. F. (2004). The status of science classroom learning
environments in Indonesian lower secondary schools. Learning
Environments Research, 7, 43-63.

Waight, N., & Abd-El-Khalick, F. (2007). The impact of technology on the enactment of 'inquiry' in a technology enthusiast's sixth grade science classroom. Journal of Research in Science Teaching, 44, 154-182.

Walberg, H. J. (1981). A psychological theory of educational productivity. In F. Farley & N. J. Gordon (Eds.), Psychology and education: The state of the union (pp. 81-108). Berkeley, CA: McCutchan.

Walberg, H. J., & Anderson, G. J. (1968). Classroom climate and individual learning. Journal of Educational Psychology, 59, 414-419.

Walker, S. L., & Fraser, B. J. (2005). Development and validation of an instrument for assessing distance education learning environments in higher education: The Distance Education Learning Environments Survey (DELES). Learning Environments Research, 6, 267-287.

Watson, J. D., & Crick, F. H. C. (1953). Molecular structure of nucleic acids. Nature, 171, 737-738.

Weisberg, M. (2011). Student attitudes and behaviors towards digital textbooks. Publishing Research Quarterly, 27, 188-196.

Welch, A. G., Cakir, M., Peterson, C. M., & Ray, C. M. (2012). A cross-cultural
validation of the Technology-Rich Outcomes-Focused Learning
Environment Inventory (TROFLEI) in Turkey and the USA. Research in
Science & Technological Education, 30, 49-63.

Westendarp, C., & Westendarp, H. (2009). Race to the Top. Retrieved July 17,
2012, from http://racetotop.com/

White, B., Bolker, E., Koolar, N., Ma, W., Maw, N., & Yu, C. (2007). The virtual
genetics lab: A freely-available open-source genetics simulation. The
American Biology Teacher, 69, 29-32.

Wiggins, G., & McTighe, J. (2005). Understanding by design. Alexandria, VA: Association for Supervision and Curriculum Development.

Wilkinson, G. L. (1980). Media in instruction: 60 years of research. Washington, DC: Association for Educational Communications and Technology.

Winn, W., Stahr, F., Sarason, C., Fruland, R., Oppenheimer, P., & Lee, Y. L. (2006).
Learning oceanography from a computer simulation compared with direct
experience at sea. Journal of Research in Science Teaching, 43, 25-42.

Woelfel, N., & Tyler, I. K. (1945). Radio and the school. Tarrytown, NY: World
Books.

Wolcott, H. F. (1994). Transforming qualitative data: Description, analysis, and interpretation. London: Sage.

Wolcott, H. F. (2009). Writing up qualitative research (3rd ed.). Thousand Oaks, CA: Sage.

Wolf, S. J. (2006). Learning environment and student attitudes and achievement in middle-school science classes using inquiry-based laboratory activities. Doctor of Philosophy thesis, Curtin University of Technology, Perth.

Wolf, S. J., & Fraser, B. J. (2008). Learning environment, attitudes and achievement
among middle school science students using inquiry-based laboratory
activities. Research in Science Education, 38, 321-341.

Wong, A. F. L., Young, D. J., & Fraser, B. J. (1997). A multilevel analysis of learning environments and student attitudes. Educational Psychology, 17, 449-468.

Wong, A. F. L., & Fraser, B. J. (1996). Environment-attitude associations in the chemistry laboratory classroom. Research in Science & Technological Education, 14, 91-102.

Wu, H. K., Krajcik, J. S., & Soloway, E. (2001). Promoting understanding of chemical representations: Students' use of a visualization tool in the classroom. Journal of Research in Science Teaching, 38, 821-842.

Wu, W., Chang, H. P., & Guo, C. J. (2009). The development of an instrument for a
technology-integrated science learning environment. International Journal of
Science and Mathematics Education, 7, 207-233.

Wubbels, T., Brekelmans, M. Y., & Hooymayers, H. (1991). Interpersonal teacher behaviour in the classroom. In B. J. Fraser & H. J. Walberg (Eds.), Educational environments: Evaluation, antecedents and consequences (pp. 141-160). London: Pergamon Press.

Wubbels, T., & Levy, J. (1993). Do you know what you look like? Interpersonal
relationships in education. London: Falmer Press.

Yarrow, A., Millwater, J., & Fraser, B. J. (1997). Improving university and primary
school classroom environments through preservice teachers’ action research.
International Journal of Practical Experiences in Professional Education, 1,
68-93.

Yasar, O., & Landau, R. H. (2003). Elements of computational science and engineering education. SIAM Review, 45, 787-805.

Yu, J., Brown, D., & Billet, E. (2005). Development of a virtual laboratory
experiment for biology. European Journal of Open, Distance and e-learning.
Retrieved July 26, 2012, from
http://www.eurodl.org/?p=archives&year=2005&halfyear=2&article=195

Zacharia, Z. (2007). Comparing and combining real and virtual experimentation: An effort to enhance students' conceptual understanding of electric circuits. Journal of Computer Assisted Learning, 23, 120-132.

Zacharia, Z. C., Olympiou, G., & Papaevripidou, M. (2008). Effects of experimenting with physical and virtual manipulatives on students' conceptual understanding in heat and temperature. Journal of Research in Science Teaching, 45, 1021-1035.

Zandvliet, D. B., & Buker, L. (2003). Learning environments in new contexts: Web-
capable classrooms in Canada. In M. S. Khine & D. L. Fisher (Eds.),
Technology-rich learning environments: A future perspective (pp. 133-156).
Singapore: World Scientific.

Zandvliet, D. B., & Fraser, B. J. (2004). Learning environments in information and communications technology classrooms. Technology, Pedagogy and Education, 13, 97-123.

Zandvliet, D. B., & Fraser, B. J. (2005). Physical and psychosocial environments associated with networked classrooms. Learning Environments Research, 8, 1-17.

Every reasonable effort has been made to acknowledge the owners of copyright
materials. I would be pleased to hear from any copyright owner who has been
omitted or incorrectly acknowledged.

Appendices

These Appendices contain the questionnaire and semi-structured interview questions used in my study, as well as the teacher instructions for participating in my study and the list of virtual laboratories utilized during its implementation. A sample student worksheet for one virtual laboratory is also presented.

APPENDIX A: LABORATORY ASSESSMENT IN GENETICS (LAG)
APPENDIX B: SEMI-STRUCTURED INTERVIEW QUESTIONS FOR STUDENTS
APPENDIX C: SEMI-STRUCTURED INTERVIEW QUESTIONS FOR TEACHERS
APPENDIX D: LIST OF VIRTUAL LABORATORIES AVAILABLE FOR TEACHERS
APPENDIX E: INSTRUCTIONS TO TEACHERS FOR PARTICIPATING IN MY STUDY
APPENDIX F: EXAMPLE OF A VIRTUAL LABORATORY WORKSHEET

Appendix A: Laboratory Assessment in Genetics (LAG)

Study in Science Education by Rachel Oser


Directions:

This survey contains questions about your thoughts on science, your perceptions about
science laboratories, and your understanding of the concepts illustrated through laboratory
activities. Part I refers to background information about yourself and your class (14
Questions), Part II refers to your attitudes toward science and perception of the laboratory
environment (Questions #1-64), and Part III refers to your understanding of the concepts
illustrated through the laboratory activities in your class (Questions #65-74).

When you complete this survey, you will be given the opportunity to provide your email
address which enters you into a raffle to win a $50 gift certificate, to thank you for your
participation.

I. In this part of the questionnaire you will answer simple background questions
about yourself and your class.

II. This part of the questionnaire asks questions about student attitudes towards
science and student perceptions of the learning environment.

This section contains 64 questions in 8 frames. In this part of the questionnaire,
there are no right or wrong answers, only your opinions. Although some
statements in this survey may seem similar to other statements, you are asked
to indicate your opinion about each statement. For example: Suppose you were
given the statement “I like science”. You would need to decide whether you
Strongly Agree, Agree, Not Sure, Disagree, or Strongly Disagree with this
statement and then circle the corresponding number. If you mistakenly circle
the wrong number, please place an “X” over that circle and then circle the
appropriate response.

PLEASE NOTE: The word "laboratory" in this survey refers to any experiment
you have done in your science class whether it was "hands-on" or virtual (on a
computer). Thank you.

III. This section contains 10 questions on your understanding of genetics. Example:
Suppose you were given a statement “Genetics is the study of____________”.
You would need to choose the best answer from the choices given such as “A)
the environment, B) heredity, C) evolution, D) plants”. For instance, if you
selected “B) heredity”, then circle the letter "B".

Part I. Background Information
Personal Details:
1. Gender:  ☐ Female   ☐ Male
2. Is English the main language you use to communicate?  ☐ Yes   ☐ No
3. Ethnicity:  ☐ White   ☐ Hispanic   ☐ Black (non-Hispanic)   ☐ Asian   ☐ Other:_________
4. Age:_____

Class Details:
5. Grade:  ☐ 8th   ☐ 9th   ☐ 10th   ☐ 11th   ☐ 12th   ☐ Other:________
6. Type of class:  ☐ Standard/College Preparatory   ☐ Honors   ☐ Inclusion   ☐ Advanced Placement   ☐ Other:___________
7. Teacher Code:________

Computer Usage:
8. Do you have a computer at home?  ☐ Yes   ☐ No
9. How many hours a week do you spend on the computer?  ☐ 0-2   ☐ 2-5   ☐ 5-10   ☐ 10-15   ☐ More than 15
10. Do you have Internet access at home?  ☐ Yes   ☐ No
11. How many hours a week do you spend on the Internet?  ☐ 0-2   ☐ 2-5   ☐ 5-10   ☐ 10-15   ☐ More than 15

Future Plans:
12. Do you plan on going to college?  ☐ Yes   ☐ No
13. Which type of job would you like when you leave school?
☐ Doctor   ☐ Lawyer   ☐ Politician   ☐ Scientist   ☐ Accountant   ☐ Mechanic   ☐ Programmer
☐ Psychologist   ☐ Actor   ☐ Nurse   ☐ Athlete   ☐ Teacher   ☐ Model   ☐ Banker
☐ Chef   ☐ Fashion Designer   ☐ Journalist   ☐ Businessman   ☐ Designer   ☐ Other:_____

Please go on to Part II. Thank You!

Part II. Student Attitudes towards Science and Student Perceptions of the Learning
Environment
Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

1. I would prefer to find out why something happens by doing an experiment than by being told. 1 2 3 4 5
2. I would prefer to do experiments than to read about them. 1 2 3 4 5
3. It is better to create my own hypothesis than to be given a hypothesis to test out. 1 2 3 4 5
4. I would prefer to do my own experiments than find out information from a teacher. 1 2 3 4 5
5. It is better to try out different ways of setting up an experiment than to be told exactly how to set it up. 1 2 3 4 5
6. It is better to find an answer by doing experiments than to ask the teacher the answer. 1 2 3 4 5
7. I would prefer to guess the results than to be told the expected results before doing an experiment. 1 2 3 4 5
8. It is better to find out scientific facts from experimenting than to be told them. 1 2 3 4 5

Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

9. Science is one of the most interesting school subjects. 1 2 3 4 5
10. The activities we do in science lessons are fun. 1 2 3 4 5
11. I enjoy the audio and visual effects of the activities we do in science lessons. 1 2 3 4 5
12. The technology used in activities makes the science lessons more exciting. 1 2 3 4 5
13. The activities we do in science lessons are useful. 1 2 3 4 5
14. The activities we do in science lessons helped develop my problem-solving skills. 1 2 3 4 5
15. I look forward to the activities we do in science lessons. 1 2 3 4 5
16. I would enjoy school more if there were activities such as the ones we do in science lessons. 1 2 3 4 5

Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

17. The laboratory activities are related to the topics that I am studying in my science class. 1 2 3 4 5
18. My regular science class work is integrated with laboratory activities. 1 2 3 4 5
19. I use the theory from my regular science class sessions during laboratory activities. 1 2 3 4 5
20. The topics covered in regular science class work are quite similar to topics in laboratory activities. 1 2 3 4 5
21. What I do in the laboratory helps me to understand the theory covered in regular science classes. 1 2 3 4 5
22. My laboratory activities and regular science class work are related. 1 2 3 4 5
23. The concepts addressed in the laboratory are those I need to know for my science class. 1 2 3 4 5
24. The skills used in laboratory activities are similar to the skills addressed in my science class. 1 2 3 4 5

Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

25. The materials that I need for laboratory activities and technology are readily available. 1 2 3 4 5
26. The laboratory is an appealing place for me to work in. 1 2 3 4 5
27. I find the audio and visual effects used in the technology in this class to be appealing. 1 2 3 4 5
28. The laboratory and/or technology space has enough room for individual or group work. 1 2 3 4 5
29. The materials that I need for laboratory activities and technology are in good working order. 1 2 3 4 5
30. I find the instructions to use the materials in laboratory activities and technology to be clear and precise. 1 2 3 4 5
31. I do not have to wait to use both laboratory and technology materials. 1 2 3 4 5
32. Help is available for laboratory materials when I need it. 1 2 3 4 5

Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

33. The teacher takes a personal interest in me. 1 2 3 4 5
34. The teacher goes out of his/her way to help me. 1 2 3 4 5
35. The teacher helps me when I have trouble with my work. 1 2 3 4 5
36. The teacher is interested in my problems related to schoolwork. 1 2 3 4 5
37. The teacher moves about the class to talk with me. 1 2 3 4 5
38. The teacher’s questions help me to understand the topic. 1 2 3 4 5
39. The teacher guides me through activities when I am stuck. 1 2 3 4 5
40. The teacher helps me with problems related to schoolwork. 1 2 3 4 5

Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

41. Getting a certain amount of work done is important to me. 1 2 3 4 5
42. I do as much as I set out to do regarding the activities in this class. 1 2 3 4 5
43. I know the purpose of completing the activities in this class. 1 2 3 4 5
44. I am ready to start my work in this class on time. 1 2 3 4 5
45. I know what I am trying to achieve in this class. 1 2 3 4 5
46. I pay attention during this class. 1 2 3 4 5
47. I try to understand the work in this class. 1 2 3 4 5
48. I know how much work I have to do in this class. 1 2 3 4 5

Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

49. I carry out investigations to test my ideas in this class. 1 2 3 4 5
50. I am asked to think about the evidence for statements in this class. 1 2 3 4 5
51. I carry out investigations to answer questions during the activities in this class. 1 2 3 4 5
52. I explain the meaning of statements, diagrams, and graphs during activities in this class. 1 2 3 4 5
53. I carry out investigations to answer questions that puzzle me in this class. 1 2 3 4 5
54. I carry out investigations to answer the teacher’s questions in this class. 1 2 3 4 5
55. I find out answers to questions by doing investigations in this class. 1 2 3 4 5
56. I solve problems by using information obtained from my own investigations in this class. 1 2 3 4 5

Response scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Not Sure, 4 = Agree, 5 = Strongly Agree

57. I work at my own speed regarding the activities I do in this class. 1 2 3 4 5
58. Students who work faster than me in these activities move onto the next task. 1 2 3 4 5
59. I am given a choice of tasks regarding the activities I do in this class. 1 2 3 4 5
60. I am given tasks that are different from other students’ tasks. 1 2 3 4 5
61. I am given work that suits my ability. 1 2 3 4 5
62. I use different materials from those used by other students. 1 2 3 4 5
63. I am assessed in a different manner from other students in this class. 1 2 3 4 5
64. I do work that is different from other students’ work in this class. 1 2 3 4 5

Please go on to Part III. Thank you!

Part III. Understanding of Concepts in Genetics
1. Which of the following features of DNA is most important in determining the
phenotype of an organism?
A) The direction of the helical twist
B) The number of deoxyribose sugars
C) The sequence of nitrogenous bases
D) The strength of the hydrogen bonds

2. Fireflies produce light inside their bodies. The enzyme luciferase is involved in the
reaction that produces the light. Scientists have isolated the luciferase gene.

A scientist inserts the luciferase gene into the DNA of cells from another organism. If
these cells produce light, the scientist knows that which of the following occurred?
A) The luciferase gene mutated inside the cells.
B) The luciferase gene was transcribed and translated.
C) The luciferase gene destroyed the original genes of the cells.
D) The luciferase gene moved from the nucleus to the endoplasmic reticulum.

3. Steps in a reproductive process used to produce a sheep with certain traits are listed
below.

Step 1 — The nucleus was removed from an unfertilized egg taken from sheep A; Step
2 — The nucleus of a body cell taken from sheep B was then inserted into this
unfertilized egg from sheep A; Step 3 — The resulting cell was then implanted into the
uterus of sheep C; Step 4 — Sheep C gave birth to sheep D. Which sheep would be
most genetically similar to sheep D?

A) Sheep A, only
B) Sheep B, only
C) Both sheep A and B
D) Both sheep A and C

4. Bacteria in culture A produce slime capsules around their cell walls. A biologist removed
the DNA from some of the bacteria in culture A and injected it into bacteria in culture B,
which normally do not produce slime capsules. After the injection, bacteria with slime
capsules began to appear in culture B. What conclusion can best be drawn from this
investigation?

A) The bacteria in culture A are mutations.
B) Bacteria reproduce faster when they have slime capsules.
C) The slime capsules of bacteria in culture B contain DNA.
D) DNA is most likely involved in the production of slime capsules.

5. What does structure B represent in the diagram?

A) a ribosome
B) transfer RNA
C) recombinant DNA
D) a male gamete

6. Which process is illustrated in the diagram below?
A) chromatography
B) direct harvesting
C) meiosis
D) genetic engineering

7. After a culture of cells is allowed to multiply and is viewed through a microscope, the
cells are x-rayed with high-energy radiation for less than 1/100th of a second. After the
radiation, many newly reproduced cells appear different. What has probably occurred?
A) mutation
B) speciation
C) contamination
D) bacterial infection

8. In 1910, Thomas Morgan discovered traits linked to sex chromosomes in the fruit fly.
The Punnett square below shows the cross between red-eyed females and white-eyed
males. Fruit flies usually have red eyes. If a female and male offspring from the cross
shown above are allowed to mate, what would the offspring probably look like?

A) 1 red-eyed female and 1 white-eyed female; 2 red males
B) 2 red-eyed females; 2 white-eyed males
C) 2 red-eyed females; 1 red-eyed male and 1 white-eyed male
D) 2 white-eyed females; 1 white-eyed male and 1 red-eyed male

9. The chances of developing cancer, diabetes, or sickle-cell anemia are higher if a family
member also has the disorder because they are —

A) Genetically based
B) Passed through blood contact
C) Highly infectious
D) Related to diet

10. The picture below shows a segment of DNA from a cat. Which of these is most likely the
kitten of this cat?

A) 1
B) 2
C) 3
D) 4

You have finished the questionnaire. Thank you!

Please write your email address here if you wish to be entered into a raffle to win a $50 gift
certificate to be drawn at the end of June:

____________________________________________________________

In Part II of this questionnaire, items 1–16 are based on the Test of Science Related
Attitudes (TOSRA) (Fraser, 1981) as described in Section 2.3.2, items 17–32 are based
on the Science Laboratory Environment Inventory (SLEI) (Fraser, Giddings, &
McRobbie, 1992) described in Section 2.2.2, and items 33–64 are based on the
Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI)
(Aldridge & Fraser, 2008) described in Section 2.2.2. Modification of these items from
their original scales is described in Section 3.4.1. The questionnaire items were used in
this study and included in this thesis with the authors’ permission.

Appendix B: Semi-structured Interview Questions for Students
Introduction to students: Before we get started, do you have your parents’ consent to participate
in this interview and to have this interview recorded? Hi! Thank you for agreeing to participate
in this study on science education. The purpose of this research is to help me understand how
experiences in the science classroom affect students’ attitudes towards science, how students
perceive their environments, and how students achieve in science. There are no right or wrong
answers; only your opinions count, and what you say will not be reported back to your teacher!
The results will inform teachers in general on how to best teach science so that students will be
able to learn better. I will start recording now - please say your name when I pause during my
introduction. This is an audio recording on [date, time, place] between Rachel Oser and
__________. I want to remind you that you may stop this interview at any time. Let’s begin.

 [ENJ] Do you find the activities you did in your science classes to be fun?
o Can you give an example of a memorable activity? Such as games, demonstrations, labs,
puzzles, virtual labs, etc.
o Were there any laboratory activities you liked doing?
o Was it useful?
o Did you look forward to such activities?
o What did that make you think about your science class in general - was it your favorite
subject?
o Would you ever try any activities from class at home?

 [ACH] How well do you understand the topic of genetics?
a. What contributed to that understanding – what sorts of activities, labs, virtual labs, etc.
helped you understand?
b. Do you feel that you understand this topic more than other topics you learned in your
class? Why/why not?

 [MTE] Please tell me about the materials you used for labs and technology.
o Were they useful, available, in good working order?
o What about computers?
o How did you find the audio and visual effects of the activities you did?

 [TSP] How did you find the attitude of your teacher towards helping you?
o Did s/he help you when you had trouble with your work? How so?
o Did your teacher guide you through activities when you were stuck?
o In what way do you think the teacher should be involved?
o In your opinion, what helps you learn more from computer activities – more or less
teacher involvement?

 [INQ] Do you prefer to learn scientific facts by experimenting or by listening to the teacher
tell you about them? Why?
o What do you like/not like about experimenting?
o Do you like to create your own hypothesis (guess the results) or test out a hypothesis
given to you by your teacher?
o What types of experiments did you like doing best? Why?

 [INT] Were the labs that you did related to the topics you studied in your science class?
o How so? Can you give me an example?
o Did what you learned in class help your labs or vice versa?

 [TOR] Did you feel that it was important to complete a certain amount of work in class? In
general, how motivated are you to get stuff done?
o Were you aware of the work you needed to complete in science class?
o Did you know the purpose of doing that work?
o Did you pay attention in class so that you could complete the work?
o Tell me about a time that you felt good about completing the work in this class.

[INV] Were you asked to think about evidence for statements in this class?
a. What kind of evidence were you given? Statements, diagrams, graphs?
b. During which sorts of activities were you asked to investigate such evidence?
c. Do you prefer to do such investigations to find answers to questions? Why/why not?
Can you give an example?
d. How important is it for you to have control over what you are doing during lab activities?

[Gender] Do you think there’s a difference between males and females in the class, in terms
of how they learn, perceive their environment, their attitudes, or achievement? How so?

[VL] You were involved in a research study where some classes did virtual labs and some did
not. Which group do you think you were in?
a. If you had a choice, which group would you prefer to be in? Why?

Follow-up for students in Experimental Group: Lastly, I will tell you what I have found from
the data I collected from the surveys. It turns out that there doesn’t seem to be a significant
difference in perceptions, attitude, or achievement between students who did the virtual labs and
students who did not. Perhaps you can help inform me why not, since I was hoping there would
be such a difference.
a. Can you tell me what you have observed in your class that could account for this?
b. Please describe the virtual lab arrangement in your class. Around how many were
completed? Do you remember which ones (read off list)? Were there any problems
with the virtual labs or computer equipment?
c. Knowing these results, would you change your earlier answer about which group you would prefer to be in?

Is there anything else I have not asked about lab activities in your classroom that you think I
should know?

Thank you so much for your time. What I have learned from you and this conversation will
help me inform teachers about how students learn in science classrooms. I will send you a
transcript to review shortly and email you your $10 iTunes certificate. Have a good summer!

Appendix C: Semi-structured Interview Questions for Teachers
Introduction to teachers: The purpose of this research is to help me understand how experiences
in the science classroom (namely, virtual laboratories) affect students’ attitudes towards science,
how students perceive their environments, and how students achieve in science. There are no
right or wrong answers; only your opinions count. (VL = Virtual Lab)

Practical Details:
• How many virtual labs did you do with your VL classes? _____ Which VL's did you choose to do with them (see attached list as a reminder)?
• Period of time for implementation of this study: (# of days/weeks/months)
• Please describe how often (or at what interval) you did VL's with your VL classes (every day for a week straight, once a week, etc.).
• In contrast, what sort of activities did you do with your non-VL classes (regular lecture, rich discussions, hands-on labs, other computer activities)?
• Please mention any confounding variables between the VL and non-VL classes (did you teach the same topic, were students at the same level, did the time of day differ consistently for those classes, etc.).

• [ENJ] How do you think your students differed in their opinions of how 'fun' the activities were in your science classes? Did students who did the VL's find science classes more enjoyable overall than students who did not? Do you think that students who did VL's found them to be more useful, would look forward to doing them, or would try them at home? If given a choice, would students prefer or not prefer to be in the VL class?

• [ACH] How do you think your students differed in their understanding of genetics? Did the students who did VL's understand the topic better as a result, or did it make no difference?

• [MTE] Was the equipment used for VL's in good working order (e.g., Internet speed, enough computers)? Please describe any technical difficulties. In contrast, were there any technical problems with other materials you may have used in hands-on labs?

• [TSP] How much did you have to help your students along with the VL's? Did you find that you provided more assistance in VL classes or non-VL classes?

• [INQ] How do you think students differed regarding the level of inquiry? Were students in VL classes more, less, or about as curious about experimenting using VL's as students in non-VL classes?

• [INT] Were the labs (whether VL's or other activities you did with your non-VL classes) related to the topics you taught in your science class? Was there a difference in the integration of these activities between the two classes (VL and non-VL)?

• [TOR] How do you think students differed in their motivation to complete activities/labs between the VL and non-VL classes? Did students in the VL classes complete the VL's more, less, or about the same as students in non-VL classes, or more, less, or about the same as other activities?

• [Gender] Did you notice any differences in gender regarding the VL's? Did boys or girls seem to be more engaged/motivated to do them? If so, does the same gender difference exist with other activities or in non-VL classes? Do you think there's a difference between males and females in the class, in terms of how they learn, perceive their environment, their attitudes, or achievement? How so?

Lastly, I will tell you what I have found from the data I collected from the surveys. It turns out that there doesn't seem to be a significant difference in perceptions, attitudes, or achievement between students who did the virtual labs and students who did not. The only significant quantitative difference emerged when comparing VL and non-VL classes by gender: males showed a significantly higher score on the 'inquiry' attitude for VL's versus non-VL's. Perhaps you can help me understand why more significant differences were not apparent, since I was hoping there would be such differences. (Although, since there was no negative effect of doing VL's, they appear to be at least as effective as the methods they replaced and can be implemented in classrooms with confidence.) Can you tell me what you have observed in your class that could account for this?

Is there anything else I have not asked about the implementation and results of this study that you think I should know?

Thank you so much for your time.

Appendix D: List of Virtual Laboratories Available for Teachers
List of Virtual Labs (Just the links):

• http://highered.mcgrawhill.com/sites/0073031208/student_view0/virtual_labs.html – On this page, there are 2 labs relevant to this study (Reproduction and Heredity & Molecular Genetics), but they are quite advanced and not too interactive. They do not allow for use of virtual lab materials, only for analyzing data from a graph. These are excellent sources for advanced students interested in the particular questions explored in the labs, and they include a self-checking feature to determine whether the data support the hypothesis.
• http://www.biologylabsonline.com/ – This site contains many great virtual labs but requires an access code (provided below) and registration. I will be mailing each of you the instructor's packets for these labs. Access code: USCS-BLUFF-TREND-POWAN-FIORI-PRIES (please do not distribute it, as I have procured this code solely for the purpose of this study; it can be used up to 1,000 times, and otherwise there is usually a fee).
• http://www.hhmi.org/biointeractive/vlabs/index.html – There are 2 labs relevant to this study (Transgenic FlyLab, Bacterial Identification) for which I have created instructor guides and student worksheets (see the Dropbox and/or your email accounts – please let me know if you need another copy).
• http://learn.genetics.utah.edu/ – All the labs on this website are relevant to this study; I have created instructor guides and student worksheets for them (see the Dropbox and/or your email accounts – please let me know if you need another copy).
• http://virtuallaboratory.colorado.edu/Biofundamentals/index.html and http://www.phschool.com/science/biology_place/labbench/ – Some of these are great, interactive, and realistic labs that require data collection, but they also require extensive background reading, so these are for more advanced students.
• http://www2.edc.org/weblabs/WebLabDirectory1.html – This website contains a list of genetics virtual labs – some are great and really interactive (more animated than realistic), whereas others are more click-through types to cover content.
• http://www.jdenuno.com/TechConnect/OnLineLabs.htm – This is a list of many virtual labs, which is worthwhile to peruse, as you may choose to do these over the others. Some include virtual labs on Mendelian genetics.
• http://www.ucopenaccess.org/courses/APBioLabs/course/index.html – This is a resource used for online AP biology courses and contains a number of virtual labs on various topics. I will be creating worksheets to go along with the "Genetics of Organisms" (Fruit Flies) lab and the "Molecular Biology" (Bacterial Transformation) lab.
• http://virtuallabs.stanford.edu/ – A general list of virtual labs for your own resources (not too many on genetics).

More to be added as deemed appropriate… (open to suggestions!)

Appendix E: Instructions to Teachers for Participating in My Study
General Instructions for Participation in Virtual Lab Study
Thank you for participating in this study to evaluate the effectiveness of virtual laboratories in science classrooms, specifically applied to the topic of genetics. Here are some quick instructions; please follow them in order so that I can ensure the validity and reliability of the resulting data. All forms are available via Dropbox, or I can send them as email attachments. Please:
1. Read and sign the ‘Consent Form for Teachers’ – return via email, mail, or fax.
2. Fill out the ‘Teacher Information Sheet’- return via email, mail, fax, or Dropbox.
3. Introduce this study to your students via my video (on the blog)
or read a transcript of the video aloud to them. DO NOT mention the real goal of
this study (so they remain unbiased!) but you can describe it as a ‘study on
learning methods in science classrooms’.
4. Hand out consent forms for students and their parents (I will provide copies, if
you wish) and let them know that they only need to return these forms if they
wish NOT to participate in the study. Keep returned forms in a safe place to be
returned to me with all other materials at the end of the study.
5. Read the ‘Introduction to Virtual Laboratory Activities’ document.
6. Browse the suggested virtual laboratories and choose 4-5 that are appropriate for
your classes – print student worksheets or let me know which documents you
need copied and I'll mail those to you. Reminder: you will only be doing this with half
of your classes.
7. Implement the virtual laboratories any time between Feb.–June 2010; they need not run in succession or at equal intervals. You will need to ensure Internet access for each student completing the virtual lab activity, so be sure to reserve computer labs or laptop carts ahead of time. Reminder: you will only be doing this with half of your classes.
8. Notify me when you have finished using the virtual laboratories and I will send
you the surveys to administer to all of your classes (even the ones who did not
use virtual labs as they are the control group) – surveys should be completed
during one class period (perhaps you can offer extra credit for compliance!). I
may select some students to interview at this time.
9. Notify me when all students have completed the survey and I will send you an
envelope in which you will return all forms and surveys to me. You may keep
any materials used during this study, but I ask that you do not share them with
others until this study is complete (June 25, 2010). Thank you for all your help; your name will be entered into a raffle for $100 of coffee!
My Contact Information:
82 Eighth St., Providence, RI 02906
Cell: 917-640-8355 / Fax: 781-982-4201 (Attention: Rachel Oser)
[email protected]
http://oserscienceedstudy.blogspot.com/ – blog for announcements, FAQs, etc.
www.dropbox.com – Storage and sharing of all documents (I emailed you a link)

Appendix F: Example of a Virtual Laboratory Worksheet

Title of Lab: DNA Microarray Virtual Lab – University of Utah

Access: http://learn.genetics.utah.edu/content/labs/microarray/

Brief Description: In this activity, students learn the procedure and concepts that underlie the use of a DNA microarray in the field of genomics. The purpose of each of the lab materials is explained clearly and the tasks are simple. This lab takes relatively more time than the other labs from the same website, but it includes an investigative piece, and students get to make a real-life application to the differences between healthy and cancer cells.

Rating: Advanced ↔ Basic

Comments: This lab is a little more complicated than the other three labs from the same website but is great for applying all of the techniques to a real-life problem. It is important that students have studied protein synthesis and have had an introduction to genomics first; if not, the lab does have a link to educational pages that explain such processes. There are a number of options as to how you can use this lab:
1. as a stand-alone virtual laboratory
2. in conjunction with the other 3 labs from the same website, so that it leads to an investigative piece

Vocabulary:
• Organism
• Genomics
• Genetics
• Genes
• Gene expression
• Gene expression profile
• DNA microarray
• Cancer
• RNA
• Vortex
• Microcentrifuge
• Poly-A tail
• Buffer
• cDNA
• Oligo-dT primers (poly-T tails)
• Reverse transcriptase
• Nucleotides
• Hybridization
• Complementary

Standards:
• NY: Standard 1, Performance Indicators 1.1, 1.2a, 1.3a, 2.3, 2.4, 3.1, 3. & Standard 4, Performance Indicators 2.1, 2.2
• MA: Standards 3.1–3.8
• National Standards: THE MOLECULAR BASIS OF HEREDITY [Content Standard B (grades 9–12)]
• In all organisms, the instructions for specifying the characteristics of the organism are carried in DNA, a large polymer formed from subunits of four kinds (A, G, C, and T). The chemical and structural properties of DNA explain how the genetic information that underlies heredity is both encoded in genes (as a string of molecular "letters") and replicated (by a templating mechanism). Each DNA molecule in a cell forms a single chromosome.
• Most of the cells in a human contain two copies of each of 22 different chromosomes. In
addition, there is a pair of chromosomes that determines sex: a female contains two X
chromosomes and a male contains one X and one Y chromosome. Transmission of
genetic information to offspring occurs through egg and sperm cells that contain only one
representative from each chromosome pair. An egg and a sperm unite to form a new
individual. The fact that the human body is formed from cells that contain two copies of
each chromosome—and therefore two copies of each gene—explains many features of
human heredity, such as how variations that are hidden in one generation can be
expressed in the next.
• Changes in DNA (mutations) occur spontaneously at low rates. Some of these changes make
no difference to the organism, whereas others can change cells and organisms. Only
mutations in germ cells can create the variation that changes an organism's offspring.

http://www.nap.edu/openbook.php?record_id=4962&page=185

Student Worksheets: See attached – you may decide to give the first page as a pre-lab for homework to save time.

Name:_____________________ Period:_____ Date:_____________

DNA Microarray Virtual Laboratory (Student Worksheet) – University of Utah

"DNA microarray analysis is one of the fastest-growing new technologies in the field of genetic research. Scientists are using DNA microarrays to investigate everything from cancer to pest control. Now you can do your own DNA microarray experiment! Here you will use a DNA microarray to investigate the differences between a healthy cell and a cancer cell." (Taken from the website below)

Background: Read the introduction in chapters 1 and 2 and answer the questions below:
http://learn.genetics.utah.edu/content/labs/microarray/

1. About how many genes do humans have?____________________________

2. What is genomics?____________________________________________

3. What does it mean for a gene to be "expressed"? If a gene is expressed, what would be produced?________________________________________________

4. Briefly explain how many different cell types can form in the body if they all have the same DNA. _______________________________________________

5. What is a gene expression profile and why is it useful?____________________

6. What's the advantage of using a DNA microarray?_______________________

7. What are some other names for the DNA microarray (see the pull-down "+" sign)?_____________________________________________________

8. From where can one get a DNA microarray?___________________________

9. What does each spot on the DNA microarray represent?__________________

Problem: What's the difference between a healthy cell and a cancer cell?

10. Explain the usefulness of looking at cancer cells under the microscope. Will cancer cells appear to be different?______________________________________

11. In cancer cells, something has gone wrong with the genes that control:_________

12. Why is it important to find out which genes are the culprits in each type of cancer? _________________________________________________________

Procedure: Read and follow all the prompts given to you in the lab and answer the questions that follow in order as you perform the specific tasks (read the questions ahead of time!).

1. List the 7 steps in the experiment in which a DNA microarray is used to compare the differences in gene expression levels between cancer cells and healthy cells:

2. List all the materials needed for this experiment:________________________

3. What substance will you measure from both healthy and cancer cells to determine which genes are turned on/off?____________________________________

4. From where will you obtain the cancer cells?___________________________

5. What do the vortex and microcentrifuge do?___________________________

6. Where is the RNA found at this point?_______________________________

7. How will you retain only the mRNA? Why do you want to retain that particular type of RNA? ___________________________________________________

8. What is the buffer used for?______________________________________

9. Why do you have to convert mRNA back into DNA (called complementary DNA, or cDNA)? __________________________________________________

10. Which substance converts the mRNA into cDNA?______________________

11. What is hybridization? ________________________________________

12. A single spot on the microarray contains multiple copies of the same/different (circle one) DNA sequences whereas the DNA is the same/different (circle one) from one spot to another.

13. Each spot number on the microarray corresponds to a___________________

Outcomes: You now have a chip, representing every known gene in this organism, to which your sample DNA has been added; how the sample and the spots on the chip match up will determine the relationship between those matched genes and the particular cancer.
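
For teachers who want to demonstrate this logic computationally, below is a minimal sketch (in Python, with hypothetical gene labels and intensity values; an illustration only, not part of the original lab) of how the two dye channels of a spot combine into the merged-image colors interpreted in question 15:

# Minimal sketch: interpreting two-channel DNA microarray spots.
# Healthy-cell cDNA is labeled green and cancer-cell cDNA red, so the
# merged color of each spot indicates which sample expressed that gene.
# All intensity values here are hypothetical illustrations.

def classify_spot(green: float, red: float, threshold: float = 0.2) -> str:
    """Return the merged-image color for one spot, given channel intensities in [0, 1]."""
    if green < threshold and red < threshold:
        return "black (gene off in both cell types)"
    if abs(green - red) < threshold:
        return "yellow (gene expressed in both healthy and cancer cells)"
    if green > red:
        return "green (gene expressed mainly in healthy cells)"
    return "red (gene expressed mainly in cancer cells)"

# Example spots: (hypothetical gene label, green intensity, red intensity)
for gene, g, r in [("gene A", 0.9, 0.1), ("gene B", 0.8, 0.8), ("gene C", 0.05, 0.9)]:
    print(gene, "->", classify_spot(g, r))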

14. What do the darker colored spots on the green (healthy) image represent?_________

15. Interpret the data from the merged image:
a. What does a yellow spot show?_____________________________
b. What does a green spot show?______________________________
c. What does a red spot show?_________________________________

16. As the prompt describes, imagine you are a researcher studying such genes in skin cancer cells; on which color spot would you focus, and why? _________________________________________________________

17. As the prompt asks, what color are the spots that are turned down by gene 4263? __

18. As the prompt asks, what color is gene 6219 on the microarray? _____________

19. What are some of the advantages of using the DNA microarray technique?_______ _________________________________________________________

20. What are some limitations of using the DNA microarray technique? _________________________________________________________