
THE EFFECTS OF THE INTERNET ON STUDENTS' ESSAY SCORES

A Dissertation Presented to

the Faculty in the Curry School of Education

at the University of Virginia

In Partial Fulfillment

of the Requirements for the Degree

Doctor of Philosophy

by

Ngan Kim Doan

May 2008
Curriculum, Instruction and Special Education

Curry School of Education

University of Virginia

Charlottesville, Virginia

APPROVAL OF THE DISSERTATION

This dissertation, The Effects of the Internet on Students' Essay Scores, has
been approved by the Graduate Faculty of the Curry School of Education in
partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Ruth Ferree, Co-Chair

Xitao Fan

Aaron Bloomfield
ABSTRACT

The Internet, and in particular the World Wide Web (WWW), has been

promoted as a useful learning tool for schools, teachers, and students alike.

While many articles exist to promote its pedagogical usages in the K-12

classroom, no empirical study currently exists to document its effects on

students' essay writing. Via an experimental group design, this study examined

how the allowance of 30 minutes of search time on the Web affects upper

elementary students' essay scores in their response to a standards-based writing

prompt.

Essays were obtained from 49 fourth- and fifth-grade students enrolled in

an elementary school in Virginia. Students were placed by random assignment

into three groups with the same writing prompts for all three groups. The three

groups were: 1) the control group--students who received standard

administration testing procedures, in which they received the writing prompt and a

total of 90 minutes to plan and write, 2) students who received the writing prompt,

30 minutes to browse the Internet, and 60 minutes to write, and 3) students who

received three 45-minute lessons on how to use the Internet. Then, on their test

day, these students received the writing prompt, 30 minutes to browse the

Internet, and 60 minutes to write.

Two preservice teachers from the local school of education scored the

essays in four areas: composing, written expression, usage/mechanics, and the

total essay score. Data analyses using ANOVA indicate that there was no

statistical significance when students who used the Internet without instruction
(Group II) were compared to the control group (Group I). Statistically significant

results did appear when Group I was compared to Group III, the group that

received instruction on using the Internet. Group III outperformed the control

group in two areas: the total essay score (p=.053) and usage/mechanics

(p=.028).

Using Cohen's d to calculate effect sizes for Total Essay Score, Group II

obtained an effect size of .406, Group III obtained an effect size of .827, and

the combined group (Groups II and III together) obtained an effect size of .570 when compared

to the control group. Effect sizes in the subcomponents of writing (composing,

written expression, and usage/mechanics) were also observed.

These findings imply that students will produce a better essay when they

use the Internet to search for information during the writing process. Implications

for schools, teachers, students, and society are also discussed.


DEDICATION

I would like to dedicate this dissertation to all Generation 1.5 immigrants and my

son, Ian Sebastian.

Acknowledgements

Nothing in the world can take the place of persistence. Talent will not; . . . Genius will

not; . . . Education will not; . . . Persistence and determination alone are omnipotent.

--Calvin Coolidge

When I first embarked upon this journey, I thought that my intelligence and formal

education would be enough. It wasn't. I could not have done this without the help of

those who encouraged me during this journey. First, I would like to thank Jane

Hansen for taking me under her wing, and for her guidance and encouragement when I

had to start over again. Many thanks to Dr. Ferree, Dr. Fan, and Dr. Bloomfield, my

other committee members who made it possible with their guidance, suggestions,

and comments. Many other people made this possible: Kempa who made it

financially possible for me to live and eat; the teachers who allowed me into their

classrooms, for without their willingness to help, I would not be here; and finally, my

friends who provided the emotional support that carried me through to this point.

This is the end of one journey and the start of the next...

TABLE OF CONTENTS

Dedication ................................................................................. iv

Acknowledgements ...................................................................... v

List of Tables .............................................................................. vi

List of Figures ............................................................................ vii

CHAPTER ONE: INTRODUCTION ................................................ 1-9

    Characteristics of Good Writing ............................................. 2

    A New Generation of Writers ................................................. 3

    The Role of Technology in Good Writing ................................. 4

        Federal Legislation to Push Technology Use ................ 5

        The Role of Technology in Writing Assessment ............ 6

    Statement of the Problem ..................................................... 7

    Significance of the Study ...................................................... 8

CHAPTER TWO: LITERATURE REVIEW ....................................... 11-65

    Overview of Technology as It Relates to Schools,

        Achievement, and Writing ................................................ 11

        Technology As It Relates to the Schools ........................... 12

        Technology As It Relates to the Elementary Schools .......... 15

        Technology As It Relates to Student Achievement ............. 21

        Technology As It Relates to Writing .................................. 28

    Findings of Search for Research on the Internet as It Connects

        to Writing Instruction ....................................................... 37

    Recommendations from the Researchers ............................... 65

CHAPTER THREE: METHOD ......................................................... 68-99

    Setting ................................................................................. 65

        The Community .............................................................. 65

        The Elementary School and the Area It Serves .................. 66

    Study Design ....................................................................... 71

        Participants Description ................................................... 76

            Researcher's Role ...................................................... 75

            Teacher Participants ................................................... 75

            Student Participants ................................................... 76

        Timeline ......................................................................... 76

    Treatment ............................................................................ 77

        Administration of Surveys ................................................ 78

        Internet Instruction .......................................................... 79

        Procedures During the Writing Days ................................. 82

            Group I ...................................................................... 81

            Group II ..................................................................... 82

            Group III .................................................................... 83

    Data Collection .................................................................... 86

        Independent Measures .................................................... 86

            Assigned Group .......................................................... 86

            Internet Self-Perception Scale ...................................... 86

            Behavior Correlates Questionnaire ............................... 86

            Grades in Language Arts ............................................. 88

            Gender ...................................................................... 88

        Dependent Measure ........................................................ 90

    Data Analysis ....................................................................... 92

        Assessment Tool ............................................................. 93

        Calibrating the Readers ................................................... 96

        Inter-rater Agreement ...................................................... 98

        Methodology ................................................................... 99

CHAPTER FOUR: RESULTS .......................................................... 100-125

    Findings for Each Research Question ..................................... 100

        Findings: Question 1 ....................................................... 100

        Findings: Question 2 ....................................................... 104

        Findings: Question 3 ....................................................... 107

        Findings: Question 4 ....................................................... 111

    The Gender Factor ............................................................... 114

    Teasing Out Other Factors .................................................... 114

        Factor 1: Students' Writing Abilities .................................. 116

        Factor 2: Computer Ownership and Internet Access ........... 117

        Factor 3: Self-Efficacy and Internet Usage ........................ 118

    Evaluating the Hypotheses .................................................... 123

CHAPTER FIVE: DISCUSSION AND CONCLUSION ........................ 126-150

    Summary of Research .......................................................... 126

    Summary of Findings ............................................................ 127

    Brief Summary of Scores Obtained From Essays ..................... 128

    Discussion of Findings on Essay Scores ................................. 128

    Discussion of Effect Sizes ..................................................... 133

    Brief Summary of Findings on the Scale and Questionnaire ...... 134

    Discussion of Findings for the Scale and Questionnaire ........... 135

    Findings on Gender .............................................................. 137

    Limitations of the Study ........................................................ 139

    Implications ......................................................................... 140

        Implications for Schools and School Systems .................... 141

        Implications for Teachers and Teacher Education ............... 142

        Implications for Students and Student Learning ................. 146

        Implications for Society .................................................... 148

    Suggestions for Future Investigation ...................................... 150

REFERENCES .............................................................................. 151-169

APPENDICES ............................................................................... 170

    Appendix A .......................................................................... 170

    Appendix B .......................................................................... 172

    Appendix C .......................................................................... 175

LIST OF TABLES

Table 1: The County's Per Pupil Expenditure as Compared to the

Commonwealth of Virginia.................................................... 77

Table 2: Comparison of Ethnic Composition of Students in School,

District, and State.............................................................. 78

Table 3: Virginia SOL Passing Rate for Southside Elementary School..... 78

Table 4: Three Group Design: Study Procedures................................. 82

Table 5: Inter-rater Agreement for the Two Readers............................. 106

Table 6: Group Means for Total Essay Score...................................... 112

Table 7: Group Means for the Writing Subcomponent: Composing.......... 116

Table 8: Group Means for the Writing Subcomponent: Written Expression.. 117

Table 9: Group Means for the Writing Subcomponent: Usage/Mechanics... 118

Table 10: Comparison of Group Means Between Groups II & III ............... 120

Table 11: Comparison of Effect Size against Control Group in the Writing

Subcomponents and Total Essay Score............................... 121

Table 12: Gender comparison in the Writing Subcomponents and

Total Score.................................................................. 122

Table 13: Group Means for Grades in Language Arts............................ 125

Table 14: Comparison of Effect Size for All Groups in Total Essay Score.... 141

LIST OF FIGURES

Figure 1: The Student-Achievement-Technology Connection............ 11

CHAPTER ONE

INTRODUCTION

The No Child Left Behind Act (NCLB) of 2001 requires that states create a

system of accountability for students' learning. The legislation requires that states

must have essential learning standards that students are to meet in the areas of

language arts, math, and science at various grade levels. While states can add

more subjects (e.g., history, health, geography) to the required content areas, they

cannot have fewer. Student progress and learning are measured by annual tests,

and states are mandated to disaggregate the data from students' tests by

gender, race, socioeconomic status, disability status, and native home language.

Each student group, regardless of its demographic characteristics (e.g., race or

family income), must demonstrate adequate yearly progress (AYP) in order for

the states to continue receiving federal funding and to avoid sanctions.

Current policies in most states require that students demonstrate

proficiency in several academic areas in order to receive promotion to the next

grade level or to receive a high school diploma. The Commonwealth of Virginia,

like other states, also requires that students demonstrate proficiency in various

academic subjects, including the area of writing, in order to be promoted.


Students must demonstrate writing proficiency at three grade levels: in the 5th

grade, in the 8th grade, and finally in high school between the freshman and

senior year.

At each grade level, students must demonstrate they can write an

essay and/or a letter. Each state creates its own writing prompt to which

students respond in the essay and/or letter, establishes its own grading rubric,

and determines its own criteria for passing.

Characteristics of Good Writing

Most English teachers will agree that good writing exhibits six

characteristics (Shapiro, 2004). The six characteristics are: 1) development of

ideas and content, 2) organization, 3) voice, including audience, 4) word choice,

5) sentence fluency, and 6) grammatical conventions (Peha, 2003). Some school

districts have formally trained their teachers to teach and score essays based on

the six traits.

In order to achieve the six traits of good writing, writers must learn to

adopt certain behaviors. According to Hansen (2001), good writers exhibit these

behaviors: 1) write often and on a regular basis, 2) gather information for the

content, 3) share their writing with others as well as respond to others' writing, 4)

revisit a piece of writing day to day or periodically, and 5) know when to ask for

assistance.

English teachers routinely teach the six traits of good writing in conjunction

with teaching effective writing behaviors. It is not customary to assess whether a

student exhibits good writing behaviors or not; however, assessing the six traits

of good writing is commonplace. Teachers from Arizona (Scott, 2002) to

Wyoming (Laramie School District, 2006) to Connecticut (Shapiro, 2004) are

assessing students' writings by looking for the six traits.

Writing a good composition requires that students learn more than just the

six traits of good writing. Ketter and Pool (2001) write that we need "some

agreement among test creators about what characterizes good writing" (p. 345).

Ketter and Pool explain, "Because theories about what constitutes good writing

draw from many disciplines . . . these disciplinary perspectives reflect diverse and

sometimes competing positions" (p. 345). The authors believe that most people

will agree that writing exhibits the following qualities: 1) it is an act of

interpretation, 2) it is "historically determined and situationally constrained," 3) it

involves the making and remaking of selves, and 4) it is meaning making that

involves both the writer and the reader (p. 345). The current students, children of

the 21st century, live in different "historically determined and situationally

constrained" times than their parents and teachers did when they were students.

Today, students have writing tools such as the computer and the Internet

available to them that were not available in previous generations.

A New Generation of Writers

As Hansen (2001) acknowledged, good writers need to gather information

for their writing. To gather information in the current cultural-historical

environment of the digital age, students typically turn to the computer, using

computer discs (CD), Internet access to the World Wide Web, bulletin boards,

and emails.

What is the Internet? What is the World Wide Web? An article in the early

years of the Internet (McGreal, 1997) defines the Internet as "a distributed

network in which there is no center. Large numbers of computers connected via

a range of media hold textual, graphical, audio, and other materials that are

available to anyone to access" and defines the World Wide Web as "a

hypermedia environment on the Internet" (p.68). The Colorado State University's

The Writing Studio defines the Internet as "a network of national and international

computers that allows access to an interconnected Web of information"

(Colorado State University, 2006, n.p.). Like the terms Jell-O and gelatin,

the terms "Internet" and "World Wide Web" have become

synonymous with each other and have no distinction. For the purpose of this

study, I will define the Internet as free online websites and webpages (pages

within sites) that allow users to gather information about a topic. This includes

commercial sites from banks and retail stores, sites from non-profit organizations

such as the American Cancer Society and the Red Cross, governmental sites

such as the National Institute of Mental Health and the Department of Defense,

and sites from educational institutions such as school districts and universities.

This definition of the Internet excludes email, weblogs, instant messages, social

network pages (e.g., MySpace), chat rooms, bulletin boards, online

tutorials, online classes, and any databases or websites that charge a fee for their

use.

The Role of Technology in Good Writing

Students today rarely think of going to the library or searching through

volumes of encyclopedias; information is now accessible via phone, cable lines,

and wireless systems 24 hours a day, every day. Students no longer spend

Saturday afternoons in the library looking for information to finish the term paper

or science project; they now search for information after soccer practice or at

3 a.m., according to their convenience. The use of the computer and the Internet

allows students to obtain and retrieve information they need to extend their

learning. In fact, 94 percent of teens with access to the Internet use it to research

school projects (National Education Technology Plan, 2004).

Federal Legislation to Push Technology Use

The federal government's promotion of technology use is not a passing

trend. In 1983, a federal report called A Nation At Risk (National Commission on

Excellence in Education, 1983) stated that American children were falling behind

their international counterparts because of "a rising tide of mediocrity." In order

to secure "America's position in the world," the report recommended that "new

instructional materials should reflect the most current applications of technology

in appropriate curriculum areas, the best scholarship in each discipline, and

research in learning and teaching" (A Nation At Risk, 1983, n.p.). Despite the

passing of more than two decades since the 1983 report, American schools still

lag behind in using the most current applications of technology, prompting new

legislation to again address this issue.

The No Child Left Behind Act (NCLB; P.L. 107-110, 2002) requires that

every student be technologically literate by the end of the 8th grade. To achieve this

goal, a National Education Technology Plan was created (2004). The Plan has

seven suggestions for improving our nation's schools:

1) Strengthen leadership

2) Consider budgeting

3) Improve teacher training

4) Support e-learning and virtual schools

5) Encourage Broadband access

6) Move toward digital content

7) Integrate data systems

With textbook expenditure decreasing 50 percent from 1965 to 1983 (A

Nation At Risk, 1983), and with textbooks' high cost and outdated information not

likely to change, the move toward using information in a digital context (Step 6)

has been a popular one. The government's push for students' use of technology

has morphed from a recommendation in 1983 to a mandate in 2002. Clearly, the

push for technological literacy is not a movement that will disappear overnight.

The Role of Technology in Writing Assessment

At the same time that the federal government is promoting the use of

technology in the K-12 system, higher education institutions are making full use

of technological advances. Exams for entrance into institutions of higher

education (e.g., the Scholastic Aptitude Test (SAT), Graduate Record Examination

(GRE), Law School Admission Test (LSAT)) have been conducted on the

computer for several years now. Many K-12 school districts are beginning to

consider assessing their students by using computer technology. Some K-12

school districts have started pilot programs within individual schools to use

computers to assess students' achievement of the state standards, starting with

the areas of math and science. Various states, including Virginia, plan to

assess students' writing via digital methods in the near future.

Statement of the Problem

There are many books and articles espousing the use of the Internet in the

classroom to promote 1) student interest in writing, 2) lengthier essays with more

details, 3) the gathering of the most current information via digital methods, and

4) more thoughtful, critical writing. While students have shifted towards using the

Internet to gather and retrieve information for their writing and while school

districts are moving towards assessing students' writing using digital media, there

is currently no connection between the students' method of writing, including the

gathering of information for their writing, and the district's assessment of their

writing as measured by state standards. Instead, students are assessed via

methods that are different from the manner in which they write in their daily lives.

Currently, there is little empirical research conducted on the use of the

Internet as it affects students' writing. Though strongly supported by teachers,

parents, and students, the use of the Internet to improve the writing of students

has not been verified through empirical research. In 2001, Castellani and Jeffs

wrote, "Currently, there is little research to support claims of the utility of the

Internet for instruction" (p. 60). Seven years later, the truth of that statement has

not changed significantly. NCLB requires that schools use research-based

evidence to determine the best practices that work. The research reported here

is designed to shed more light on this topic.

This study seeks to determine how the use of the Internet affects students'

essays as measured by scores earned in response to a state-published writing

prompt.

The following questions guided this research study:

1) What are students' perceptions of the Internet as a tool in their

own writing process?

2) Will using the Internet as a research tool help students write a

better essay than without the use of the Internet?

3) Will the use of the Internet affect the scores students receive on

these specific characteristics of writing: composing, written

expression, and usage/mechanics when compared to the control

group?

4) Does Internet training on discerning the differences among

websites make a difference in the quantitative scores of students'

essays?

Significance of the Study

As a pedagogical tool, the use of the Internet has many ramifications. It

can deliver instruction in visual, auditory, and textual manners, appealing to

learners' multiple intelligences (Gardner, 1993). More effective instruction can

reach more students in the classroom and thus, create a better learning

environment.

The use of the Internet also has implications for students with disabilities

and Limited English Proficient (LEP) students. The Individuals with Disabilities

Education Improvement Act of 2004 (U.S. Government, 2006) mandates that

states and their individual districts include students with disabilities in the

assessment and accountability process. NCLB requires that all students,

including students with disabilities, be held to grade-level achievement standards

when taking assessments. Limited English Proficient students are also held to

grade level standards, even if they enter the American school system with little or

no literacy. School districts are allowed to exempt only 1 percent of the student

population from assessment that determines AYP (Briggs, 2005). Therefore,

almost all students in special education, as well as LEP students, will participate

in the assessment process.

Current studies show that, in regard to assessments, students with

disabilities often have problems with writing and memory (Hallahan & Kauffman,

2005) while LEP students have problems related to their limited exposure to

American culture and background. If the use of the Internet affects the scores

obtained when writing an essay, it is possible that its usage may prove to be a

testing accommodation for both groups of students. With the availability of the

Internet, students with disabilities will not have to worry about memorizing names

and dates but can focus instead on the content of their writing. Students of LEP background

can use the Internet to gather information before and while they are writing the

essay, including those they write for the state mandated test. In an authentic

context, writers compose and research information simultaneously. Both special

education and LEP groups are entitled to testing accommodations so they are

not left behind.

A few years ago, the states realized that unrestricted time benefited all

students, not just students with special needs. The states, including the Commonwealth

of Virginia, now allow all students to take untimed tests. The use of the Internet

as a research tool may follow the path of unrestricted time in that it may

eventually be made available for all students.

CHAPTER TWO

LITERATURE REVIEW

The following chapter is organized into two sections. The first is a general

overview of how technology relates to schools, achievement, and writing. The

second section focuses on the Internet as a specific tool as it relates to K-12

students or to writing. The introduction of the Internet into the schools is a recent

phenomenon; therefore, the number of cited published research studies is

reasonably small.

Overview of Technology as It Relates to Schools,

Achievement, and Writing

This section is a general overview of how technology relates to schools,

achievement, and writing. It is organized into three parts: 1) technology as it

relates to schools, 2) technology as it relates to student achievement, and 3)

technology as it relates to writing and the writing process.

A graphical representation of this chapter is below:

[Diagram: Technology, Schools, Student Achievement, Writing]

Figure 1: The Student Achievement-Technology-Writing Connection

Technology as It Relates to Schools

I begin by establishing the current status of the role and prevalence of

computers in the public schools. Few will argue that the advent of computers and

the Internet has changed the way in which the American public searches,

retrieves, and conveys information. Parents throughout America have purchased

computers for their children, viewing the cost as an educational investment. It

seems that this change seeped into the schools and classrooms overnight. In

1998, the student-computer ratio was 12 students to 1 computer. In 2002, the

ratio improved dramatically to 4.8 students per 1 computer (Kleiner & Lewis,

2003). The student-to-computer ratio continues to improve; many schools have

stated that their goal is to have a 1 to 1 ratio.

In 1994, when the National Center for Education Statistics (NCES) first

surveyed schools about Internet access, 35 percent of the nation's public schools

had Internet access. In 2002, about 99 percent of public schools had Internet

access. In addition, from 1996 to 2002, public schools went from slow dial-up

Internet connections to speedy broadband connections in 94 percent of the

nation's schools. Not surprisingly, the availability of the Internet extended beyond

school hours. In 2002, 73 percent of secondary schools and 47 percent of

elementary schools allowed students access to the Internet beyond regular

school hours, with 74 percent of them providing access before the school day

and 96 percent providing availability after school. Surprisingly, 6 percent even

made their computer labs available to students on the weekends.


Even with the increased availability of computers in the schools, students

who do not own or have access to a computer at home are at a disadvantage. To

address the gap between students who own computers and those who do not, a

number of schools now allow students to borrow laptops for home use. In 2002, 8

percent of schools had an average of seven laptops available for loan to students

(Kleiner & Lewis, 2003). Some schools allowed students to borrow for up to a

week, some for up to a month, and some for the entire school year. Of the

schools that did not have laptops available for students to borrow, 7 percent

planned to acquire laptops for students to borrow and use at home (Kleiner &

Lewis, 2003).

Schools are also supporting the use of technology and the Internet in

other ways. Many districts now allocate funding for a full-time technology

specialist to directly support the school sites. In 2002, 38 percent of schools

indicated that they had a full-time technology specialist at the school site and 26

percent of schools indicated that they had access to district personnel to get

technological assistance. An additional 18 percent of schools had a teacher who

had the formal responsibility as technology specialist for the school site (Kleiner

& Lewis, 2003). Since 86 percent of public schools had a website in 2002, the

technology specialist often maintained the website and conducted training for

teachers and students.

Also, schools are currently using computers and the Internet as a means

of communicating with parents and disseminating information to the community.

Notices and announcements of important dates, events, school menus, test



results, field trips, fund raisers, and homework assignments can be posted on the

school's website for all to see. A posting on the webpage is usually more cost

effective and more timely than printing and sending letters home. Schools can

also use the webpage to celebrate the achievements of the schools, whether it

be raising test scores, earning a "Teacher of the Year" award, or a win at a track

meet.

Additionally, the Internet allows schools to share resources and expertise

with other schools and the community. For example, many universities post

classes, speeches, and lectures that are a free resource to the community. Other

universities list experts in various fields that the community can book as guest

speakers for no cost or very little cost.

Some K-12 schools list resources open to the community such as Boy

Scouts, Club Sports, and Boys and Girls Clubs which operate out of their school

but yet are open to all in the community. Sharing resources can benefit both

large districts with budget constraints that have many students to serve, and

small, rural districts that face challenges such as distance, fewer offerings in

programs, and fewer personnel to run programs.

Schools also use computer technology to keep track of data. This allows

school systems to disaggregate data on various student groups and put more

focus on groups that may need more help as measured by attendance, report

cards, school discipline/suspension records, and standardized tests. With such

technology, schools can take the hours it would have required to analyze the

data and refocus them to directly help students. This in turn should increase

student achievement and learning.

Technology as It Relates to the Elementary Schools

Since this study took place in the elementary school setting, it is important

to examine the influence of technology as it relates to that level.

A quasi-experimental study by Page (2002) supports the addition of

technology in the elementary classroom. Page infused technology into five

different schools, with two classrooms at each school (one an experimental and

one a control). The five schools were located in a lower socioeconomic

neighborhood in Louisiana. Two of the classrooms contained third grade

students and three of the classrooms contained fifth grade students. The 211

students in the study were randomly distributed by the principal at the beginning

of the school year. There were a total of 106 students in the control groups and

101 students in the experimental groups.

The five experimental classrooms were given the following equipment:

one teacher computer, at least four student computers, Internet connection, a

laser printer, an inkjet printer, a large television monitor, a projector, a digital

camera, a scanner, a VCR, a laserdisc player, a computer camera, and a classroom set

of calculators. Software programs given to the students included Microsoft Office,

MathBlaster, Kid Pix, Hyperstudio, Grolier Encyclopedia, and a Portfolio

Assessment Toolkit. Each classroom contained more students than computers,

but computer time was shared equally. The classes used the technology as they

deemed appropriate, not for a particular set amount of time. Most used the

technology extensively throughout the day. The five teachers in the control

classrooms taught students in the traditional manner. Page described the control

classroom as one in which "little or no technology access was provided" (p. 397); he did not

provide any other description of the control classroom.

Since each school had an experimental and a control group, Page (2002)

conducted between-group comparisons by analyzing academic achievement

based on the test that the schools normally use. Four of the schools used the

Iowa Test of Basic Skills (ITBS) and one school used the California Achievement

Test (CAT). At the school that used the CAT, there were significant differences in

vocabulary and comprehension scores, p<.001, with the group using technology

outscoring the control group. The CAT math concepts and applications test

scores were also statistically significant, p<.05, again favoring the group that

used technology. For the schools that took the ITBS, the only significant

difference was in the math total scores. Overall, the use of computers in this

study positively affected the students' scores in the areas of reading and math.

A study by Purcell, Ponomarenko, and Brown (2006) showed mixed

results when they infused the use of the Geographic Information Systems (GIS)

into the fifth-grade curriculum at an elementary school in San Antonio, Texas.

The authors described their GIS system as "computer software that captures,

manipulates, analyzes, and displays data on specialized layered maps" (p. 24).

From the rest of the description in the article and with the authors specifically

stating that teachers can get "free education-oriented websites that employ GIS

technology," (p. 24), I gathered that their GIS was similar to Google Earth, a free

public domain website.

For their study, Purcell et al. (2006) created three sites geared for the

ability levels of fifth graders: one site focused on volcanoes; one focused on

earthquakes; and one focused on volcanoes and earthquakes combined. Two

teachers each taught two classes of science: one using GIS and one using the

district's curriculum based on text books. In other words, each teacher taught a

class via the traditional method and a class infusing the technology. Both groups

proceeded through a 5E learning cycle: Engage, Explore, Explain, Elaborate, and

Evaluate. More details of each phase are explained below.

The GIS-based classes learned to navigate through the GIS system to find

information (Explore) about people, climate, agriculture, and geography.

Students had written instructions and questions to guide them through their

learning about volcanoes and earthquakes (Explain and Elaborate). The students

had to look for information on population to answer why people might choose to

live near a volcano and the dangers involved (Evaluate). At the end of the unit,

students took a test to assess their learning.

The traditional textbook based group also proceeded through the 5E cycle

but lessons followed the district's curriculum. Students read through the books

(Explore), searched the Internet for information (Explore and Explain), watched

short videos that accompanied the textbooks (Elaborate), and participated in a

hands-on experiment in which graham crackers and icing simulated the earth's

movement of plate tectonics (Evaluate). The graham crackers activity was a



standard lesson in this school district. At the end of the unit, teachers also

assessed this group of students.

Quantitative data from the pre-test and post-test measures based on the

Texas Essential Knowledge and Skills (TEKS) administered to both groups showed

that both groups "performed similarly overall on both tests" (Purcell,

Ponomarenko, and Brown; 2006, p. 26). The difference between the two groups

showed on the map-based questions, with the GIS group performing better. The

authors concluded that the GIS curriculum was as good as a traditional

curriculum but proved a better learning tool when a map-based component was

being taught.

A different study by Mouza and Bell (2001) also showed mixed results.

Mouza and Bell studied the effects of a web-based science program called

ALPINE with a group of 5th graders. Through using the ALPINE program,

students were to learn about weather concepts and then use that knowledge to gather data,

make decisions, and problem solve for different situations. The researchers

looked at both teachers and students, citing that teachers' attitudes and beliefs

about computer use can influence its success in the classroom.

Mouza and Bell conducted their qualitative study in a suburban

elementary school in New Jersey with 6 fifth-grade teachers and 126 students.

The teachers were veteran teachers but relatively new at integrating technology

into the classroom. The only training the teachers received was a meeting held at

the beginning of the school year to acquaint them with the ALPINE program.

Teachers discussed how they could use the program in their classroom.

Teachers were given a teacher's guide and web resources to use in their

classroom. No additional support was given to the teachers throughout the

project.

The ALPINE program can be found on the web and is organized into

seven sections: Weather Facts, Ski Site Hunting, Activities, Forums, Teacher

Room, Glossary, and Links. The Weather Facts section provides weather

information and data based on seasons, including maps and graphs. Ski Site

Hunting is a role-playing activity where students work in teams, gather weather

data, and decide upon a suitable location for the U.S. ski team to train in. There

are many possibilities for students to choose from. The Activities section allows

students to use real-time weather information in investigations. The Forums allow

teachers and students to share information with each other. Teachers can also

track progress of each team and suggest hints towards problem solving. The

Teacher Room has guides and lesson plans that teachers may use with

students. The Glossary has definitions of weather terms and related items. The

Links section has weather-related Internet resources that students may click on

and browse through to get more information.

To gather data, Mouza and Bell (2001) developed pre-test and post-test

surveys for the teachers as well as pre-test and post-test surveys for the

students. The time period between the pre-test and post-test surveys was

approximately 4 months. Mouza and Bell also interviewed the teachers and the

students to gather their views about computer instruction and technology. They

interviewed the teachers about three areas: "(a) beliefs about the role of

technology in education, (b) reactions toward ALPINE, and (c) professional

development experiences and the use of technology in the classroom" (Mouza &

Bell, 2001, p. 274).

Data from the posttest survey revealed that the teachers had no change in

their views about computers. The teachers felt that computer use was beneficial

to the classroom and that schools should invest in the technology. Before using

ALPINE, teachers thought it would have a positive effect in the classroom. The

teachers maintained their views about ALPINE after its use and expressed

willingness to use it the following year. The one area that differed on the posttest

survey was that after the use of ALPINE, teachers felt more comfortable

integrating technology into the classroom.

The student pretest and posttest surveys were based on four themes: "(a)

competence with computers, (b) beliefs toward using computers in school, (c)

interest in science, and specifically, the study of weather phenomena, and (d)

experience and beliefs about groupwork" (Mouza and Bell, 2001, p. 274). Very

little change was seen in the first category, students' competence with

computers. The fifth graders felt comfortable with computers before using

ALPINE (with only 9% feeling uncomfortable) and remained confident after using

ALPINE (with only 2% feeling uncomfortable). No significant changes were found

for students' attitudes toward using the computer as they were positive to begin

with. Students' attitudes toward science did not change either. Pretest data

showed that 14 percent of students liked science and the numbers remained the

same in the posttest data. The pretest data for the numbers of students who did

not like science remained the same after ALPINE as well. Students' attitudes

toward group work remained the same in the pretest and posttest data.

The numerical data from Mouza and Bell (2001) did not show any

significant changes. Student attitudes toward the use of the Internet remained

positive but students' interest toward science was not affected. Mouza and Bell

concluded that "meaningful educational change will not be achieved merely by

placing computers or computer-based programs in the classroom" (p. 290).

In summary, the three studies involving technology at the elementary level

produced different effects. Page (2002) found technology positively influenced

students' achievement as measured by test scores only in the areas of reading

and math. Purcell et al. (2006) found that the control group and the group that

received the technological treatment performed the same. The technology in the

study from Mouza and Bell (2001) produced mixed results.

Technology as It Relates to Student Achievement

The zeitgeist of the early 21st century is that students need to develop

"textured literacy-the ability to comfortably use and combine print, spoken,

visual, and digital processes in composing a piece of writing" (Yancey, 2004, p.

38). Part of the digital process is using the Internet in the composition of essays.

Many feel the Internet can instill and promote students' desire to learn because it

is the students and not teachers or other adults who have to decide on the

quality, quantity, and applicability of the information that they encounter. It is the

learners who are "required to take responsibility and find their own methods of

gathering, analyzing, synthesizing and evaluating information" (Yumuk, 2002, p.

143).

Computers and the Internet can, potentially, benefit students in practical

ways. First, there are software programs and websites that allow students to

practice skills they still need to master without much adult supervision. The

immediate feedback the computer provides corrects students so that they do not

continue making the same errors. Teachers can use computers to differentiate

instruction. Each student can start at a different place and proceed at his/her own

pace. In large urban school districts that are plagued by large class sizes,

teachers can use computers to provide corrections and feedback to students.

Computers allow students to learn in three different modes: visually, in

the form of images and video segments; auditorily, in the form of speeches and

newsreels; and in print (Moreno & Mayer, 2000). Since computers address many

learning styles, their availability and usage are one way to provide

accommodations to students with various needs, whether remedial, special

education, gifted, or English Language Learners (ELL).

With the amount of information increasing each year, some in the field of

education are advocating that we move from memorizing facts and statistics to

learning to retrieve information (Johnston, 2000). According to Gulli and Signorini

(2005), there were 11.5 billion pages on the Internet in 2005, and the number of

pages is expected to increase exponentially (Huberman & Adamic, 1999).

Students can now retrieve information with the click of a mouse. Instead of

expending energy on memorizing information, students can now use their higher-

order thinking skills and concentrate on using the retrieved information to solve

real-life problems. Jacobson and Spiro (1995) found that learning in a more

"hypertext-like treatment promoted superior knowledge transfer" (p. 301).

Lastly, this learning now extends beyond the four walls of the classroom.

Students can learn about faraway places, chat with people throughout the world,

and see real-time video footage of a special ceremony in an Inuit village in

Alaska or a storm brewing in the Indian Ocean. Teachers, parents, and students

prefer this type of authentic learning as it can "promote a more active role for the

learner and require students to engage more actively in the learning experience"

(Castellani & Jeffs, 2001, p. 61). Expanding learning beyond the classroom

walls is a concept that all stakeholders (ie. students, teachers, parents, society)

can appreciate.

While technology serves many functions in the schools, research results

until now on technology and school achievement have been mixed. A study by

Culbertson, Daugherty, and Merrill (2004) of technology in a middle school

showed no achievement gains. Culbertson et al. took a sample of 201 seventh

graders and 188 eighth graders and randomly placed the students into one of

three groups: a group receiving 12 weeks of technology education, a group

receiving 6 weeks of technology education, and the last group receiving no

technology education. Culbertson et al. administered the TerraNova Performance

Assessment to all students in a pre-test format to obtain scores in the areas of

reading, language arts, math, science, and social studies.



The students in the treatment groups received technology education via

twelve learning stations called modules. A commercial vendor, not specified by

the authors, supplied the modules. The vendor claimed that all modules

addressed reasoning and writing skills. Students moved through the modules in

pairs and spent 12 days at each module and then proceeded to the next rotation.

The students (n = 98) receiving 12 weeks of technology education were scheduled

to complete five modules but because of scheduling issues in the school

(assemblies, fire drills, etc.), these students completed four modules. The

researchers did not specify the number of modules the group (n=107) receiving

just 6 weeks completed.

Post-test scores for 308 students (the scores for 81 students were not

available) revealed that there was no statistical significance in any of the subject

areas. Pre-test and post-test reliability of the TerraNova was p<.001. Culbertson

et al. pointed out that the limitations for this study included the use of a

commercial program from a vendor. They further suggested that:

One could reasonably expect differing results when testing technology

education's impact on achievement when other content delivery methods

(standards-based, traditional, or courses delivered with other commercial

products) were utilized. Further research could identify types of technology

education that are more effective at raising achievement in certain areas.

(p. 18)

A different study, focusing on the use of video streaming, a manner of

retrieving video images through the Internet, specifically using Unitedstreaming™



(Boster, 2002), in elementary and middle schools throughout Virginia found that

students in the treatment groups performed better than students in the control

group. Unitedstreaming is a database of video and audio clips that range from 2

minutes to full length movies. Boster and his colleagues based their study on

students in two grade levels: third and eighth grade.

In the third grade, a total of 913 students from 13 schools were randomly

assigned to participate in an experimental group or the control group. There were

three experimental groups: those who viewed videos in both social studies and

science, those who viewed videos only in social studies, and those who viewed

videos only in science. In the eighth grade, 556 students from eight schools were

randomly assigned to participate in either a science or a social studies group.

Boster (2002) and colleagues used a pretest-posttest design. The

pretest that the third graders took included a 15-item test in social studies and a

15-item test in science before exposure to any video streaming. The students

took the same test after exposure to video streaming. The eighth grade students

took a 24-item test in both science and social studies before their exposure to the

video and then after their exposure to the video. The administration of the

pretests occurred in February 2002 and the posttests occurred in mid-March

2002. For eighth grade science, pretests occurred in late March 2002 and

posttests occurred in mid-May 2002.

Teachers received a one-day training on how to use the video clips in the

classroom. Then teachers were given 30 video clips selected from

Unitedstreaming's library of over 14,000 video clips. Clips were chosen based on

their correlation to Virginia's Standards of Learning. Teachers were instructed to

show all 30 video clips using the Unitedstreaming™ application, showing at least

one video clip a day for the next 30 days. The pretest scores between the

experimental groups and control groups showed no substantial difference.

However, in the posttest, the experimental groups improved substantially as

compared to the control groups. When Boster and colleagues examined within

group differences among the districts, they also found that students exposed to

video streaming performed better than students who learned in standard

classroom conditions without videos.

In Boster's study, students learned from videos presented through

video streaming. While Unitedstreaming™ is a database of short and lengthy

videos that school systems have to subscribe to in order to access the videos,

video clips are easily obtainable from various Internet sources including popular

search engines such as Google and Yahoo! When students realize that learning

can take place without the presence or direction of a teacher via video streaming,

they may deduce that learning continues beyond the school day, beyond high

school graduation. When individual studies are reviewed, it is difficult to

see patterns in how technology relates to student achievement. In

an effort to remedy this difficulty, a meta-analysis of 27 studies (Christmann,

Badgett, & Lucking, 1997) of computer-assisted instruction (CAI) was conducted.

However, Christmann et al. found that CAI produced mixed results in academic

achievement.

In the Christmann et al. meta-analysis, the authors did not define CAI nor

state how computers were used in the instructional delivery, only that computers

were used in the classrooms to supplement traditional instruction. Another term

for computer assisted instruction is computer assisted learning (CAL) which

Guttormsen-Schar and Krueger (2000) define as "different forms of computer-

mediated teaching methods in which the student is paired with a computer as

virtual teacher" (p. 40). I will follow this definition when using the terms CAI or

CAL.

To start, Christmann et al. reviewed over 1,000 studies but only found 27

that fit their four criteria: 1) the studies were conducted in secondary schools, 2)

providing computers was the treatment and results were quantified, 3) the design

was of an experimental, quasi-experimental, or correlational nature, and 4) the

sample sizes included at least a total of 20 participants in both the treatment and

control group. The 27 studies yielded a total of 3,795 participants, with a mean

sample size of 140.6.

Christmann et al. then analyzed the 27 studies, calculating mainly the

"effect sizes to establish statistical meaning" (p. 284). When all 27 studies were

grouped together, CAI yielded a low mean effect size of 0.209.
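
For reference, the effect sizes reported in this meta-analysis are standardized mean differences of the Cohen's d type. The formula below is a general definition, added here for clarity; it is not necessarily Christmann et al.'s exact computational variant, which may differ in the choice of standardizer:

$$ d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} $$

By Cohen's widely cited benchmarks (0.2 small, 0.5 medium, 0.8 large), a mean effect size of 0.209 indicates that, on average, the CAI groups scored roughly one-fifth of a pooled standard deviation higher than their comparison groups, a small advantage.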

When Christmann et al. calculated effect sizes based on content areas,

the effect sizes varied considerably. The individual studies had a large variance

in effect sizes (ES), ranging from an impressive ES of 0.775 in one math study

(n = 46) to a negative ES of -0.073 in another math study (n = 28). Individual



studies in reading also had a wide range in ES, from 0.626 (n = 85) to an ES of

-0.042 (n = 191).

In general, studies that used CAI in science classes improved student

achievement considerably, yielding an effect size of 0.639. Studies in other

subject areas yielded smaller effect sizes: reading, 0.262; music, 0.230; special

education, 0.214; social studies, 0.205; and math, 0.179. Two subject areas even

yielded negative effect sizes: vocational education, -0.080; and English, -0.420.

Based on the results of many studies, the connection between technology
and school achievement can show evidence of positive outcomes. However, one
can also find many studies that point toward neutral or negative outcomes.
Additional research is needed to reach a clearer conclusion about how computers
can be used effectively to support student achievement, given that computers
have become an inherent part of students' lives.

Summarizing how technology has affected student achievement is a

difficult task. There are many studies to indicate that technology has positive

effects on students as measured by test scores, grades, or performance

demands. On the other hand, there are just as many studies to indicate that

technology has neutral or even negative effects on achievement scores, and that the
outcome depends on the specific scores and subject areas that one examines.

Technology as It Relates to Writing

Early studies about technology as it connects to writing showed promising

results. Womble (1984), a high school English teacher in northern Virginia, was
one of the first teacher-researchers to write about the word processing
software program. Womble had 107 students and one computer. She created a
schedule that allowed students to use the computer throughout the day, including
use during lunch and after school. Though there were difficulties, the students
found that the word processor "became a welcome replacement for paper and
pencil" (p. 35). Students liked revising their paper on the computer. One student
said, "I enjoy revising with the computer. I like the neatness" (p. 35). Another
student "paid more attention to developing ideas and cleaned up the misspellings
and punctuation errors" (p. 35). Since the word processor made experimenting
with text easier, students tried new things. Womble also noted in her qualitative
study that students' writings showed not just more in quantity but also showed a
higher quality. Womble did not specify how she measured students' essay length
or essay quality.

Womble also noted other attitude changes in the students. Students with
bad handwriting were more willing to write, noting that the printed product made it
easier for the reader to read one's thoughts. Students were more willing to take
risks with shuffling the order of their ideas and to edit because they could more
easily see their mistakes on the paper. Students liked fixing their errors "without
having to start all over" (p. 35), as one student said.

Additionally, students were more willing to stay with a piece of writing
longer, revisiting the paper more often because making changes with the word
processor was easier than the traditional method of paper and pencil. Because of
the printed page, students were more aware of an audience reading their paper.

Except for technical difficulties, which Womble and the students grew to be
patient with, Womble found the word processor provided enormous benefits for
teaching essays and for students handling the essay.

MacArthur, Graham, Schwartz, and Schafer (1995) found that word
processing, when incorporated into a writing program, prompted students to
produce more text as well as facilitated revising and publishing. To reach this
conclusion, MacArthur et al. conducted their empirical study over a two-year
period in 22 elementary grade classrooms (10 classrooms in the control group).
In the first year, the authors took 65 students with learning disabilities (LD) in a
self-contained class and exposed them to the treatment condition. The control
group consisted of 62 students. Each classroom in the treatment group received
4 to 6 computers or had daily access to a computer lab. The computers had word
processing software programs installed on them. Teachers were trained on how
to use the word processing programs in order to help the students. The
researchers gave teachers a curriculum guide to follow, but teachers were
allowed to choose the specific writing tasks they felt were appropriate for the
students.

At the start of the school year, for the first 4 to 6 weeks, students learned
to use the keyboard three times a week for 10 minutes each session. Students
then used the word processing program for most of their writing. MacArthur et al.
then collected two personal narratives and two informative papers from each
student in the treatment group and the control group. All students responded to
the same writing prompt. There was no time limit for either group, nor were the

groups allowed to use any references in the writing. The pretest writing samples
from both groups were handwritten. The posttest writing
samples from the control group were also completed in handwriting. With the

treatment group, the researchers randomly assigned the students into two

groups, asked half of the group to submit writing samples in their own

handwriting and the other half of the group to submit samples that were word

processed (no spell check allowed).

To start the scoring process, all writing samples were transcribed into

word processed format. Writing samples were then scored holistically on an 8-point
scale for quality. The essay scores of the two halves of the treatment

group were then analyzed for quality, length, and errors. MacArthur et al. did not

find a difference between the two groups so they combined the two halves

together to compare to the control group.

The results indicated that the treatment group scored higher in the areas

of quality of narrative essays, with an effect size of 0.42. The quality of the

informative essays also favored the treatment group, with an effect size of 0.35.

Students in the treatment group also wrote lengthier narrative essays, with an

effect size of 0.33, but did not write lengthier informative essays. When the

researchers conducted within-group comparisons, the posttest scores of the

treatment group as compared to their pretest scores were significantly different in

both the narrative and informative tasks. The control group showed no

differences. When an analysis of the errors was conducted, the treatment group

showed marked improvement in spelling but not with capitalization or

punctuation.

Thus far, two studies showed positive effects of using word processing

programs. The Womble study (1984) showed positive effects of using word

processing with regular education students, while the MacArthur et al. study

showed positive, but small, effects for students with learning disabilities.

In a different study, Swan, van 't Hooft, Kratcoski, and Unger (2005)
focused on mobile computing devices (MCDs) with students in grades 3 through
7. Swan et al. went into two school sites in Ohio and gave students MCDs. They
do not define mobile computing device or specify which types they used in the
study, but the article compares the function of the MCD to a desktop computer.
This was sufficient to gather that the MCDs were small, portable handheld
computers (e.g., PalmPilot or BlackBerry). At the first site, the participants

included 28 sixth graders, 41 fourth graders, and 16 third graders. These

students were required to use the MCDs for note-taking and allowed to use the

devices in all of their classes for other tasks as well. Additionally, the students

were allowed to use the MCDs at home for 6 weeks. During this time, the classes

also spent half a day for 6 weeks in the computer lab on the campus of the local

university. The lab had "access to desktops, wireless laptops, and handheld

computers (1:1), a document scanner, a presentation system, scanners, printers,

digital cameras, teleconferencing equipment, video and audio recorders, VCRs,

video editing equipment, CD and DVD burners, digital microscopes, scientific



probes, wireless writing pads, and a wide variety of software to support teaching

and learning" (p. 101).

At the second site, the participants were 50 seventh graders all enrolled in

a science class with the same teacher. The students at this site were allowed to

use the MCDs in class and at home for more than half the school year. All of the

teachers in the study required the students to use the devices for note-taking.

The science teacher required the students to use a drawing program. Other

students in the study also used the drawing program but of their own accord.

Swan et al. then collected students' usage data by using "Rubberneck, a

hidden software tool that collects usage data from individual devices. Local

transfer of mobile device data to desktop computers sends this data to an off-site

server that is accessible through the Internet" (p. 102). Swan et al. found that

students used the devices often in and out of the classroom, used them most

frequently for writing activities such as note-taking and journaling, wrote more,

and edited their work more often than they did before the availability of the

MCDs. According to Swan et al., "One teacher commented that the use of mobile

devices resulted in noticeable improvements in both the peer editing process and

the quality of student writing" (p. 108).
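The passage above describes usage data being recorded on each device and later transferred to a server for analysis. The sketch below is only a generic illustration of that kind of unobtrusive usage logging; it is not the Rubberneck tool itself, and the event names and file path are invented.

    # A generic illustration of device usage logging (not the actual Rubberneck tool).
    import json
    import time

    LOG_PATH = "usage_log.jsonl"  # hypothetical local log file awaiting upload

    def log_event(app_name, action):
        """Append one timestamped usage event to a local log for later transfer."""
        event = {"time": time.time(), "app": app_name, "action": action}
        with open(LOG_PATH, "a") as f:
            f.write(json.dumps(event) + "\n")

    log_event("notes", "open")
    log_event("notes", "edit")
    log_event("drawing", "open")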

A study involving technology and writing in the primary school was

conducted by van Leeuwen and Gabriel (2007). Van Leeuwen and Gabriel went

into a first grade classroom in Ontario, Canada to examine the effects of word

processing on students' writing. Van Leeuwen and Gabriel visited the classroom

every three weeks throughout the school year and gathered data through

classroom observations, conversations with the teachers, interviews with

students, and students' writing samples. Each classroom visit lasted from 40 to

90 minutes.

During the classroom visits, the researchers observed to see how the

children used the word processing method as well as the pencil and paper

method. The researchers collected writing samples from both methods of

producing text. There were three computers in the classroom. The children took

turns using the computer which amounted to each student being able to use the

computer's word processing program every one and one-half weeks. Students

were encouraged to discover and use new features on the computer. One

student discovered the shift key needed for capitalizing letters and the teacher

encouraged that. Learning to use the computer was taught alongside the writing
process (prewriting, drafting, editing, revising, and publishing).

Van Leeuwen and Gabriel used criteria from Alberta Education, the
Canadian province of Alberta's education ministry, for assessing students' writing
samples in three areas: ideas and order, words and sentences, and conventions
of language. They found no difference in the quality of students' writing; both
methods of producing text yielded similar quality. However, van Leeuwen and Gabriel did find a

difference in length of text. Students who used pencil and paper wrote lengthier

text, with a range of 73 to 305 words, with a mean of 163 words. The number of

words for students who composed their writing using word processing ranged

from 37 to 116, with a mean of 71 words. No probability values were given.



Observational data revealed four things: 1) the teacher was more tolerant

of student talk during computer writing time than during traditional writing time, 2)

students helped each other spell and use computer functions during computer

time, so that the responsibility for learning did not rest solely on the teacher, 3) students

offered suggestions for improving peers' writing more often when they read it on

the computer monitor than on paper, and 4) students' rereading of their own

paper occurred more frequently during computer time than during traditional

writing time.

Then, van Leeuwen and Gabriel chose 4 of the 13 students to represent

the class and interviewed them without pre-formulated questions, looking for

emerging themes. It seems that 3 of the 4 students interviewed preferred using

the computer for writing because it didn't take as much effort with letter formation

and their hands didn't tire as easily.

At the end of the article, Van Leeuwen and Gabriel offer the opinion that

"no one composing tool is able to serve all the needs of beginning writers" (p.

427) and that while word processors are beneficial, they should be one of many

tools used during writing and the writing process.

The last important study to take note of is a meta-analysis conducted by

Goldberg, Russell, and Cook (2003). Goldberg et al. looked at 26 studies

between 1992 and 2002 to compare the effects of writing produced by word

processing vs. pencil-and-paper writing. They had two questions in mind when

they embarked on the study:

• Does word processing impact K-12 student writing? If so, in what ways

(i.e., is quality and/or quantity of student writing impacted)?

• Does the impact of word processing on student writing vary according

to other factors, such as student-level characteristics (grade level,

previous experience with computers, writing abilities)?

Goldberg et al. initially found 99 articles that seemed suitable to their study.

However, to be included in Goldberg et al.'s meta-analysis, the study had to:
1) be quantitative in nature and published between 1992 and 2002, 2) have "quality"

or "quantity" or "revision" of writing as an outcome measure, 3) not focus on

grammar or spell-check or multimedia-enhanced software programs, 4) not

examine writing within the context of test administration, and 5) focus on students

in grades K-12. Two researchers read through the studies and found that only

26 met the criteria established. Of the 26 studies, only 15 could be used to

calculate effect sizes.

The next step the researchers took was to closely examine the 15 studies.

They found that only 60 percent of the studies were published in refereed

journals; the remainder were doctoral dissertations or master's theses. The
number of participants ranged from a low of 8 to a high of 136. Six of the studies
included demographic descriptions of the participants while eight did not (the
remaining study was not accounted for). Across the grade levels, seven of the
studies were conducted in elementary schools, five in middle schools, and three at the

high school level. In only 3 of the 15 studies were students grouped by random

assignment.

The last step that the researchers took was to calculate effect sizes. The

mean effect size of all 15 studies was .501. When Goldberg et al. created a

funnel plot of the effect sizes, they found that positive effect sizes were

distributed across small to medium-size samples. Negative and near-zero effect

sizes were also observed in small to medium-size samples.
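For readers unfamiliar with funnel plots, the sketch below illustrates the general idea: each study's effect size is plotted against its sample size so that the scatter at small samples, and any asymmetry, become visible. The effect sizes and sample sizes here are invented for illustration and are not Goldberg et al.'s data.

    # Illustrative funnel plot; the values below are invented, not Goldberg et al.'s data.
    import matplotlib.pyplot as plt

    effect_sizes = [0.9, 0.6, 0.5, 0.4, 0.2, 0.0, -0.1, 0.7, 0.3]
    sample_sizes = [12, 25, 40, 60, 90, 110, 30, 20, 136]

    plt.scatter(effect_sizes, sample_sizes)
    plt.axvline(sum(effect_sizes) / len(effect_sizes), linestyle="--")  # mean effect size
    plt.xlabel("Effect size")
    plt.ylabel("Sample size")
    plt.title("Funnel plot of effect sizes (illustrative data)")
    plt.show()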

Goldberg et al. were able to find answers to their research questions. To

answer the first question, Goldberg et al. concluded that, overall, research on the

uses of technology shows that student achievement increases when students

use technology. In the area of writing, word processing and MCDs appear to

positively influence students as writers. They concluded that students who use

computer technology during the writing process stay more engaged, more

motivated, and write longer papers of higher quality. Thus, the use of word

processors affected both the quantity and quality of students' writing.

To answer the second question, Goldberg et al. found that students'

characteristics (i.e., grade level, previous experience, writing abilities) did not have
an effect on the quantity of students' writing. Whether these characteristics affected
the quality of students' writing was not addressed.

In the next section, I will focus on the uses of the Internet as a particular

form of technology that may enable students' growth, and specifically, uses of the

Internet to influence students as writers.

The Internet As It Connects to Writing Instruction

Many position papers exist advocating the use of the Internet to aid

writing, but few empirical research studies exist. Much of what has been

published has focused on college students' writing in computer laboratory settings. In

the next section, I explain how I conducted the search for empirical studies, the

findings of the literature search, and then I discuss the found research studies

themselves.

Selection of Articles

At the outset of the literature search, I excluded articles that were of a

theoretical, pedagogical, or anecdotal nature. I also excluded opinion papers,

book reviews, and essays. I established three criteria for the empirical studies

that I reviewed for this literature search: (a) the study must include both writing

and the Internet, (b) if the research did not focus on writing but included the
Internet and was conducted with K-12 students, I included it, and (c) most
importantly, the research had to be scientifically sound, though it could be of a
qualitative or quantitative nature.

To conduct a search of studies that met the above criteria, I used this

combination of search words: computers, technology, Internet, World Wide Web,

writing, students, elementary, and schools, and these databases:

Education Full Text, Ebsco Full Text, JSTOR, and ERIC digests powered by

EBSCOhost. Some searches such as "computers and writing" produced over

2,000 hits while others produced numbers in the hundreds. When I limited the

articles to peer reviewed journals, fewer articles remained. Next, I proceeded to

read through the abstracts in the list of results. Titles such as Thomas' (2005)

"Fun with fundamentals: Games and electronic activities to reinforce grammar in

the college writing classroom" from the journal Teaching English in the Two-Year

College were not pursued; though the article's title included the key words, it was

obvious that the article would not meet the needs of this study. There were many

article titles that included the important search words, such as Evaluation of

Student-Written Essays Available on an Internet Site (Myers, 2002), but reading

the abstract revealed that these were not empirical studies so I did not include

these articles in the literature review. If the results showed that the journal came

from a field such as business, government, politics, engineering, science,

medicine, or nursing, I did not pursue the articles.

This search turned up few articles that included writing, and none of those

actually focused on writing. The topics of that search included students' attitudes

and perceptions of computers, their use of online access to course materials

(Hammonds, 2003), the usefulness of online writing centers (OWC), and how

students use the Internet, chat rooms, and email as academic resources. Since

my study focuses on the Internet and how it affects K-12 students' writing, I

chose to include the college research studies that focus on students' use of the

Internet as an academic resource.

To find more peer-reviewed articles, I browsed through journals that my

committee chair and I thought might produce a few more articles: Language Arts,

English Journal, Research in the Teaching of English, Journal of Educational

Multimedia and Hypermedia, Voices from the Middle, and Reading Research

Quarterly. I browsed the hardcopies and online journals as far back as 1999 with

the rationale that publications before 1999 would not have articles about online

access as it relates to K-12 students' writing. For the journal Internet Research

which started publishing in 1994, I used the search function and used keywords

that were previously stated to search the databases.

Another method I used to search for journals was by using key words in

the title line. When I typed in "technology" as the key word, the results indicated

there were 32 journals with the word "technology" in the title, journals such as:

Technology and Children; Technology and Culture; Technology and Learning;

Technology Daily; Technology in Practice; Technology, Pedagogy, and

Education; Technology Research News; and Technology Teacher. When I typed

in "Journal of Technology" as the keywords, the results page displayed seven

more titles that were not previously displayed.

When I searched for journals with titles that included the word "computer,"

there were approximately 250 titles on the results page. I scanned through all of

the journal titles and selected journals that I thought might have research articles

relevant to my topic. I overlooked journal titles such as: Computer Physics Report

and Computer Methods in Applied Mechanics and Engineering. I chose eight

journal titles to browse through: Computer Science Education, Computer User,

Computer Weekly, Computers and Education, Computers in the Schools, Journal

of Computer Assisted Learning, Journal of Educational Technology, and

Mathematics and Computer Education. My method of browsing involved looking

at the table of contents, reading the articles' titles, reading the abstracts of the

articles that seemed relevant to my study, and then accessing the full text of the

article if it seemed to fit the criteria for this research study.



While examining other authors' citations, I found four writing journals that I

felt might yield more articles: Reading and Writing, Journal of Basic Writing,

Issues in Writing, and Writing Instructor. My method of browsing through these

journals were the same as through the computer journals: reading the table of

contents, the articles' titles, the abstracts of the articles that seemed relevant to

my study, and then accessing the full text of the article if it seemed to fit the

criteria for this research study. I found a few articles through this method.

Lastly, I used Google Scholar and came across a few journals published

only in a digital format such as IEEE Multimedia, from the Institute of Electrical

and Electronics Engineers (they call themselves I-triple E). A search on the

university's system did not show this journal. Google Scholar also produced

results from conference proceedings which I browsed through. The following

section shows the articles I found that met my original criteria.

Findings of Search for Research on the

Internet as It Connects to Writing Instruction

Ultimately, five articles, three dissertations, and a Master's thesis were
found and reviewed. An early, profoundly important study involving the Internet

and writing was conducted by Anstendig, Driver, and Meyer (1999). Anstendig

and Driver, two English professors, and Meyer, an Information Systems

professor, conducted their study over the course of several semesters in an

advanced university writing course called Beowulf to Lear: Text, Image,

Hypertext. The English professors required students to "develop criteria for

evaluating Web sources and to articulate their assessment of sources using Web

pages with text, images and links" (p. 6). In other words, students were to create

their own page to be published for public viewing on the Internet.

To prepare students to create their own document in Hypertext Markup

Language (HTML) to be posted on the World Wide Web, the professors inserted

five lessons on using the Web into the class: 1) introduction to the Web, search

engines, and conducting searches, 2) criteria to use to evaluate sources and

sites, 3) creating web documents using HTML, 4) a work session to draft an

HTML document including time to get feedback from peers and instructors, and

5) presentations to the class. Throughout all of this, students "surfed" the Internet

to view the informational web pages that existed. After separating the reliable

information from the misinformation, students used the information they

encountered to write their own webpage.

Anstendig, Driver, and Meyer found that the students enjoyed the class,

came in early and stayed late, and were no longer fearful of the literature.

Because students had to make decisions about what they would post on their

webpage, students commented that they were learning to think more carefully

about their postings. Students took responsibility for the information presented on

the page when they became aware of their audience: anyone in the world with

access to the Internet might stumble across their web pages. Surprisingly,

students continued to work on improvements and revisions to their web pages,

even after grades were submitted for the class. One student stated that the more

information he found, the more he wanted.



In another study, Stapleton (2005) examined the use of the Web as it

applies to adult second-language learners (L2). In this study, the participants

were 43 college students from four areas of study (law, medicine, dentistry, and

education) in a Japanese university. The average age of the students was 21

years and 4 months. All of the participants had studied English for at least 6

years in middle and high school. From a course students took the first year in the

university, all of the students were familiar with computers, word processing, and

the Internet.

All of the participants were enrolled in a for-credit class called English

Writing taught by the author. The course lasted for 15 weeks and met in the

computer lab. The instructor/author required that the students write several short

essays (amount and length not specified) involving topics of local interest such

as the whaling industry, restrictions on importing rice from other countries, and

low scores on the TOEFL (Test Of English as a Foreign Language) exam. For

the final essay, the instructor asked students to write a persuasive essay of 700

to 1000 words about a controversial topic. The students were free to choose their

topics and could browse both Japanese and English websites.

Stapleton's goal was to examine how the availability of the Web influenced

the students' choice of topic and their essays. Stapleton required that students use the
Internet to search for information and at the same time discouraged their using
the library. After collecting the 43 final essays, Stapleton visited each Web reference

that students gave and "assessed it for its genre via an examination of its domain

name, content, and its self-definition (often as stated on the "About us" or the

"Mission Statement" pages)" (p. 182). Students wrote about 21 different topics

with most of the topics falling under the theme of life and death (e.g., euthanasia,
human cloning, and capital punishment). Keyword searches using popular
search engines were the preferred method, possibly the only method, used to find

sources.

On the final day of class, students were asked to anonymously complete a

17-item questionnaire about their views and perceptions of the Web. From the

questionnaire, Stapleton found that students chose the topic for their essay

based on personal interest and not necessarily based on the availability of

websites. Students preferred to perform key word searches in their native

language but 53 percent of the citations were for English-language websites and

47 percent were for Japanese-language websites. Many of the websites were

bilingual websites created by Japanese webmasters.

When asked about the websites' objectivity, students felt that they

evaluated the websites for this, and stated that some of the websites had an

obvious bias to them. Stapleton's analysis of students' website citations show

that students still included websites that were of questionable sources. Stapleton

commented that if evaluating a website's objectivity had not been highlighted in

the course, students might have been more lackadaisical in their determination of

which websites to use and include in the citations.

When asked if students would have used the library if it had been

available to them, most said they would have used it, but they did not feel

inconvenienced by using just the Internet alone.



A small group of participants used online translators to translate

vocabulary words and small sections of text. These were students with
weaker English language skills. A few students even plagiarized text from

websites.

If the findings of Stapleton's study can be generalized, they tell us that

students need training on using the Internet, specifically how to conduct searches

and how to discern an objective, credible website from one that is not.

A similar study of students whose first language was not
English was conducted by Ware (2004). Ware's case study focused on three L2

Chinese students in a university writing classroom. Ware examined the students'

perceptions of online discussions to see if online discussions can be used

effectively for L2 students in a college writing course. The instructor's goals were

to promote "social interaction as the basis for idea development and peer review"

(Ware, 2004, p. 456), to provide an alternative to in-class discussions, and to

provide students with peer feedback on their writing. The students were enrolled

in Ware's writing class because they did not pass the university's writing

requirement for entering students. The class lasted for 15 weeks for a total of 90

contact hours. Students were required to complete eight 5- to 7-page papers for

evaluation by an instructor other than the teacher-researcher of the study.

Students were also required to participate in chat rooms at least 3 times and to

submit weekly postings in the threaded discussions.

Ware's findings were different for each of the 3 participants. One student,
Alex, who was acutely aware that he had two audiences (peers and teacher),
realized that he had to feign interest in the postings in order to get a good
participation grade. The instructor realized this when Alex answered a question

with: "I don't know, for me, it's more interesting reading other students' personal

experiences, but sometimes it gets too much, and it's 'Whatever, I don't really

care.' " (p. 458). This prompted the instructor to question if the online interaction

that took place was authentic or contrived for the sake of a grade.

The second student preferred the online discussion threads over face-to­

face interaction and thought the "web-based discussions were more suitable for

open and constructive criticism" (p. 459). This student used the online discussion

threads to gather ideas, broaden her perspectives, and interact socially with her

peers. She saw the online postings as "a kind of debate among her peers" (p.

459) and as an exchange of ideas but did not feel that the process benefited her

essay writing.

The third student felt that the audience of the online postings was her

peers. She did not expect to contribute much to her peers but used the
discussion board as a way to compare the quality of her writing

to her peers' writings. She also felt that the writings she posted were subject to

scrutiny and, therefore, she had to post the best writing possible.

In summary, each of the three students perceived and used the online

discussions in different ways based upon their personal histories. Ware's

findings tell us that students use the Internet with different goals in mind.

Another study examining the Internet and students' use was conducted by

Yumuk (2002) in Turkey. Yumuk took 90 students in an academic translation



course in a Turkish university through a program that sought to change "students'

attitudes from a traditional, recitation-based view of learning to a more

autonomous view of learning" (Yumuk, 2002, p. 141). The study lasted one

semester, and its goal was to help students break away from teacher-directed

learning to become independent learners. Data in the form of pre- and post-

surveys and face-to-face interviews were collected from 85 of the students. In

weekly diaries, the teacher/researcher recorded observational data of the

students as they progressed through the four phases of learning in the class:

negotiation of the curriculum goals, learning about Internet searches, actual use

of Internet searches for translation, and reflection and feedback.

Yumuk found that students' desire to learn and to take more responsibility

for their learning increases when "their learning engages their intrinsic motivation

and they derive personal meaning from their own learning" (Yumuk, 2002, p.

151). Students took ownership of their learning and no longer relied on the
teacher to fill gaps in their knowledge. The students themselves

saw the transformation and became better translators because of the experience.

The effects of an instructional strategy delivered online to fifth graders

were studied by Meyers, Middlemiss, Theodorou, Brezynski, and McDougall

(2002). Meyers et al. matched 73 fifth graders with 12 older adult tutors between

the ages of 62 and 80, with the average age being 67 years old. Tutors were

highly educated, with 7 tutors holding master's or doctoral degrees. The

goal of the program was to teach students to use a reading strategy while

reading. The goal of the research study was to determine if 1) students can learn

to use a reading strategy from using the Internet, and 2) if providing adult tutors

can make a difference in students' learning.

Using stratified random sampling, students were assigned to one of three

groups: 1) a group with Web-based instruction on using the strategy and an

assigned tutor, 2) a group with Web-based instruction on using the strategy but

without a tutor, and 3) a control group that did not receive lessons on using the

strategy. Each of the three groups consisted of 20 students. The two groups that

received treatment met three times a week in the computer lab for 10 weeks.

Each session in the computer lab lasted 20 minutes.

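As a concrete illustration of the stratified random assignment described above, the sketch below randomly assigns students to three groups within each stratum (for example, within each classroom). The strata, student identifiers, and group labels are invented for illustration and are not drawn from Meyers et al.

    # Illustrative stratified random assignment to three groups (invented data).
    import random

    students_by_stratum = {
        "classroom_a": ["s01", "s02", "s03", "s04", "s05", "s06"],
        "classroom_b": ["s07", "s08", "s09", "s10", "s11", "s12"],
    }
    groups = ["web_with_tutor", "web_no_tutor", "control"]
    assignment = {}

    for stratum, students in students_by_stratum.items():
        shuffled = students[:]
        random.shuffle(shuffled)
        # Deal students round-robin into the three groups within each stratum.
        for i, student in enumerate(shuffled):
            assignment[student] = groups[i % len(groups)]

    print(assignment)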
To start the program, the students with tutors met their tutor online via

email. Then, both groups of students with Internet access worked through 25

lessons; the lessons taught concepts and skills such as comparison, problem-

solution planning, skills for memory and recall, cause and effect, sequencing, and

description. Only 10 percent of the students completed all 25 lessons. The group

with tutors received messages from their assigned tutors each day they came to

the computer lab. The tutors gave feedback on students' work, encouraged

students, and provided additional directions and instruction when necessary.

Additionally, students in both groups received individual feedback,

encouragement, and curricular suggestions from the "Teacher," who was

personified in the form of a clown named Patrick Plan.

Pretest scores showed that the three groups of students did not differ
significantly. Meyers et al. tested students' recall of the reading

passages in two posttest conditions. In the immediate posttest in the same



location under the same conditions as the pretest (cafeteria), Meyers et al. found

that students were able to recall more of the information in the text. On the

delayed posttest administered 2.5 months after instruction, under the same

condition as the pretest and immediate posttest, Meyers et al. discovered that the

group that received lessons via the Internet and had tutors performed better than

the control group but not better than the students who received lessons via the

Internet but without tutors.

In summary, the five research studies show that students benefited from

the use of the Internet. Younger students needed more guidance than older

students to achieve the learning goals. The majority of the students liked using

the Internet and felt that it was worthy of the time invested.

The most recently published article connecting the Internet and writing

came from Englert, Zhao, Dunsmore, Collings, and Wolbers (2007). Englert et al.

examined the effects of a web-based software program that provided support

and scaffolds to students during the writing process. The program, Technology­

Enhanced Learning Environments on the Web (TELE-web), helped students

write by prompting students through three major processes: 1) generating ideas

and introductory statements, 2) generating supporting evidence and details as well
as fulfilling readers' expectations, and 3) anticipating, predicting, and composing "topically

related chunks of information while they are in the situated circumstance of text

composition" (p. 12).

The participants in the study were students with learning disabilities drawn

from six different special education classes in five urban elementary schools. Of

the 35 total students, 20 students were placed in the experimental condition

using TELE-web and 15 students in the control condition. Of the 20 students in

the experimental condition, 13 students had Individualized Education Plans (IEP)

that identified them as learning disabled (LD) and 7 students had other
disabilities. Of the 15 students in the control group, 7 students were LD and 4

had other disabilities. The mean age of the students in the TELE-web group was

10.64 years and the mean age of the students in the control group was 9.64

years. The mean reading level as measured by the STAR Reading Test for the

TELE-web group was 1.71 grade equivalent and the mean for the control group

was 1.49 grade equivalent. The authors report that the difference in age was not

significant (p > .05), nor was the difference in reading scores statistically
significant (p > .05).

The study proceeded in this manner: Two weeks prior to the first

intervention, the researchers collected a baseline writing sample. The students

were asked to write an informational paper about a farm animal, offering as much

information as possible. All students wrote using pencil and paper.

Then the two groups were asked to write another informational paper, this

time about a pet for people who did not own any pets. The control group, using

pencil and paper, proceeded in this manner: the first day, the teachers presented

the idea of using a concept map (also called a concept web) and modeled this

method by using a pet parrot as an example. Students were instructed to add two

or more details to each category in the concept map.



The next day, students were asked to write their papers with the teachers

issuing instructions to write a paragraph for each of the categories on the

concept map. The teachers modeled an introductory paragraph that would grab

the readers' attention using the parrot as an example. The teachers guided

students in writing the rest of their paper by telling them to write a new paragraph

for each different category on the concept map.

There were posters of the writing process on the walls reminding students

of four steps: a) write an introductory paragraph, use an attention getter, b) write

a body paragraph for each category on the concept map, c) write conclusion

sentences, and d) end stories with a conclusion paragraph or sentence. The

teachers reminded the students to capitalize the necessary parts of the paper.

The students in the TELE-web group received the same informational

writing assignment. This group, however, used the TELE-web software. The

concept map was pre-made by the software and students clicked to add ideas or

details. The second day, students used a template that provided scaffolds during

the writing process. The teachers presented the information in the same manner

as the teachers in the control group but they used the TELE-web software

program to model the writing process for the students.

The TELE-web group differed from the control group in several ways.

First, the TELE-web program provided text reminders of the teachers' oral

instructions. Second, there were boxes to help students write. The first box was a

topic sentence box. The second box was a supporting details box. Students

could choose a function to add more paragraphs, which caused the Topic
Sentence box and the Supporting Details box to reappear. There was a

Concluding Sentence box to help students finish their paper.
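To visualize the kind of scaffolded template described above, the sketch below models an essay as a topic sentence box, a supporting-details box per paragraph, a function for adding paragraphs, and a concluding sentence box. It is only a rough, hypothetical rendering of such a structure, not the actual TELE-web software, and all names and sample text are invented.

    # A rough, hypothetical model of a scaffolded writing template (not the actual TELE-web program).
    class ScaffoldedEssay:
        def __init__(self):
            # Each paragraph holds a topic sentence box and a supporting details box.
            self.paragraphs = [{"topic_sentence": "", "supporting_details": []}]
            self.concluding_sentence = ""

        def add_paragraph(self):
            """Bring up a fresh Topic Sentence box and Supporting Details box."""
            self.paragraphs.append({"topic_sentence": "", "supporting_details": []})

        def as_text(self):
            """Assemble the boxes into plain paragraphs for publishing."""
            parts = []
            for p in self.paragraphs:
                parts.append(" ".join([p["topic_sentence"]] + p["supporting_details"]))
            parts.append(self.concluding_sentence)
            return "\n\n".join(parts)

    essay = ScaffoldedEssay()
    essay.paragraphs[0]["topic_sentence"] = "A dog makes a good pet."
    essay.paragraphs[0]["supporting_details"].append("Dogs are loyal and playful.")
    essay.add_paragraph()
    essay.paragraphs[1]["topic_sentence"] = "Dogs need daily care."
    essay.concluding_sentence = "For these reasons, a dog is a fine first pet."
    print(essay.as_text())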

TELE-web had additional features that students could use freely. First,

there was a spelling checker that checked words against an online dictionary.

Second, students could employ the text-to-speech function and ask the computer

to read the text back to them. Third, students could submit the paper online using

a TELE-web function. TELE-web allows teachers to provide feedback but for the

study, teachers did not submit feedback. Finally, students could publish their final

draft in the TELE-web.

After the papers were submitted, the handwritten papers were typed so

that raters were blind to the experimental condition. A rubric developed by the

first author, Englert, was used to score the papers. A reader was trained to reach

a criterion level of 85 percent reliability based on agreement with other trained

raters. This reader scored all the papers. A second reader read 33 percent of the

papers to control for scoring drift and inaccuracies. There was an average

agreement of 95 percent between the two readers.
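Percent agreement of the sort reported above (a second reader rescoring a subset of papers) can be computed as the share of papers on which two readers assign the same score. The scores below are invented for illustration; they are not Englert et al.'s data.

    # Illustrative inter-rater percent agreement on a rescored subset (invented scores).
    reader_1 = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
    reader_2 = [4, 3, 2, 2, 4, 1, 3, 2, 4, 3]

    agreements = sum(1 for a, b in zip(reader_1, reader_2) if a == b)
    percent_agreement = 100 * agreements / len(reader_1)
    print(f"Percent agreement: {percent_agreement:.0f}%")  # 90% for these invented scores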

The results indicate that the students in the TELE-web condition received

an average score of 3.304 (on a 4-point rating scale) while students in the control
group received an average score of 2.861. The authors considered the corresponding
effect size to be moderately large. Analysis showed there were no significant effects

due to the students' disabilities.

In addition to the published research studies, I found three recently

published dissertations related to the use of the Internet and a Master's thesis.

The first dissertation was conducted by Duran (2003) of West Virginia University,

the second by Dail (2004) of Florida State University, and the third by Kelso

(2005) of New York University. The Master's thesis was a study conducted by

Desjarlais (2007). All four studies contributed a substantial amount of information

to the current body of knowledge. The three dissertations and the Master's thesis

are presented here in chronological order, with the oldest study (2003) presented

before the most recent study (2007).

The first dissertation, by Duran (2003), examined students' attitudes

toward using the Internet for educational purposes. To start, Duran chose the

Attitude Toward Educational Use of the Internet (ATEUI), created by Duggan,

Hess, Morgan, Kim, and Wilson (2001), and adjusted it slightly to create her own

survey. She then asked 25 students enrolled in an English 108 class in a

university in West Virginia to take the survey to determine if certain computer-use

behaviors affected their outlook on the Internet. One hundred percent of the

students owned a computer and had access to the Internet at home. Duran also

sought to determine if students 25 and older had different views of the Internet as

compared to students 24 and younger so she separated the group into two

subgroups: traditional-aged students (24 and under) and non-traditional aged

students (25 and older).

Duran's results indicated that the traditional and non-traditional students

only varied in two areas: using the Internet to consult with classmates and use of

chat rooms. Traditional-aged students (47 percent of the traditional-aged

students) viewed the Internet and chat rooms more favorably than the non-
traditional students (53 percent). While only 1 participant over 25 years old used
chat rooms, 59 percent of those under 25 used chat rooms. Only one other

behavior approached significance: keeping track of useful educational sites on

the Internet. In this area, 85 percent of older students kept track of valuable

educational sites while only 35 percent of traditional-aged students did. There

was no correlation with gender or the student's year in school. The questionnaire

provided room for additional comments and students from the non-traditional-

aged group felt less knowledgeable about browsing the Web and viewed

themselves at a disadvantage as compared to the younger students.

In a different dissertation study, Dail (2004) examined how 10th graders

interact with the Internet. Dail sought to determine how the Internet impacts

students' reading of the text, specifically how students interact and comprehend

the text in a hypertext document. Dail points out that reading in a hypertext

environment is not linear or nonlinear, but rather that it is multilinear. In a

multilinear environment, text can be read in many different sequences, similar to

Choose Your Own Adventure texts.

Dail's study was guided by two research questions, reprinted verbatim:

1) What does the environment of a tenth-grade classroom using

computers regularly in the new millennium look like?

2) What processes do tenth-grade English language arts students use

when reading online hypertext?

The participants in Dail's study were 30 tenth grade students in a world

literature class. To start, the students were taken through a 10- to 15-minute

tutorial session on how to navigate the hypertext document, a document similar

to the one they would use when Dail observed them. Though Dail was very

careful to note the minutes of students' completion of an activity, she did not

specify the length of time it took to complete this study. From the information she

gave, it appears that the time she spent with the students could have spanned
as little as three weeks or as much as three months.

From the tutorial that the students completed, students learned how to

click on a hyperlink in order to obtain more information and to view examples

about a literary term for which they wanted more information. They also learned

to recognize the text as black and hyperlinks as bright blue. Additionally, the

tutorial exposed students to sample questions in the same format they would

encounter them in the research situation.

Dail designed the actual test page so that it was similar to the tutorial.

First, Dail looked at textbooks that were not considered for adoption by the

classroom. After Dail obtained permission to use the materials contained in the

book, she converted the text in the book into a hypertext document, the

webpage. She added public domain poems to the webpage.

In the actual "testing" situation, students were placed on the computers to

read the required hypertext. The reading was done classwide. Students were

then instructed to close the browser window and obtain a set of comprehension

questions from the teacher in printed paper format. The questions only asked

students to recall information from the homepage, not from any extension

materials in any of the hyperlinks.



Dail also collected data related to other aspects of students' interaction

with the Internet in seven ways: 1) observations of students as they

conducted research on the Internet, 2) survey of students' attitudes towards the

Internet, 3) videotapes of students reading online text, 4) records of sites

students visited via "cookies" - a computer feature that records Internet websites

the students visited, 5) notes taken during students' think-aloud sessions as they

browsed the Internet, 6) students' responses to comprehension questions, and

7) interviews with students.

As expected, the results of Dail's survey indicated that students use the

Internet for various reasons. About 76 percent of the students use the Internet for

fun, 96.6 percent use it for school assignments, and 80 percent regularly use it

for emailing friends and family members. A number of students also indicated

they use the Internet for other purposes, including reading stories and posting

readings, chat rooms, travel, weather, and research.

An examination of the cookies feature revealed some interesting findings.

Some students navigated the hypertext links according to the linear order in

which they were sequenced on the page. What was interesting to note was that

the student with the lowest score on the comprehension questions followed the

same navigational path as the student with the highest score. The difference was

that the student with the lowest score spent 17 minutes navigating the hypertext

and only 6 minutes answering the comprehension questions, while the student

with the highest score spent 7 minutes navigating the hypertext and 10 minutes

answering the comprehension questions. The majority of students spent 7 to 10



minutes navigating the site, with most of that time on external links, links that did

not contain the relevant information. This data informs teachers that time spent

reading does not necessarily equate to comprehension of the text.

Dail's study also found the same results as other studies: that the students

search for information using popular search engines such as Yahoo!, Google,

and AskJeeves. The fact that the classroom teacher heavily relies on AskJeeves

is indicative of where students learned this habit. Once students reached an

informational page, they skimmed and scrolled through the text. Some students

reported that they left a site if the information on the page seemed like it would

take too long to read. Dail projects that skimming and glancing might indeed be a

strategy:

It might serve as a quick means of skimming the Web site and determining

that the material is either not of interest or not relevant. In this context,

since the external links were only tangentially related to the content of the

main hypertext document and since their content was not addressed in the

comprehension questions, students likely used this approach to decide

that they were not interested in the content. Their decision regarding the

importance of the content could have also been defined by the task

associated with the hypertext reading. (p. 132)

Another strategy students used was to take notes with pen and paper.

Surprisingly, students did not use "cut and paste" as a method of taking notes.

Other strategies that Dail observed students using included summarizing

readings that they felt were important to fellow students. Using prior knowledge

to enhance their understanding of a text was also observed. This occurred
when students used their prior knowledge to answer a question whose answer
was not on the website. One student re-read information to
enhance her comprehension of the text; another student silently previewed the
reading before actually attempting to read it. This second student tried to visualize
the presented information in his head as well as connect the information to his
own life experiences.

The various strategies that students used yielded varying numbers of correct answers.
An average of 7.6 students received a score of 0 (no correct answers) on literal-
level questions. If the computation included another question that required
students to use prior knowledge, the average number of students receiving a
score of 0 would be 9.25. What is important is learning that students do not have

a strategy for browsing webpages, confirming the importance of teaching

students how to use the Internet.

The third study was conducted by Kelso (2005), a teacher-researcher,

who used qualitative methods to examine middle school students' use of online

discussion boards. Kelso had all 65 of her eighth-grade students participate in

literary discussions online during the reading of various novels during the

academic year. The students were assigned to online discussion groups that

were not necessarily with peers from the same English class but with peers from

different classes. Two of the three class sections contained a number of students

who had special needs and met with a resource teacher in special education. For

the study, Kelso focused on a small group of 7 participants who discussed the

autobiography Red Scarf Girl by Ji-li Jiang. To facilitate students' responses,

Kelso posted questions that students could respond to. For this non-fiction piece,

students were required to submit six postings during a 5-week period. Kelso

reserved computer lab time to ensure that students had time and access to

submit postings to the discussion group. During this time, students could choose

to read postings, post one of their own, or reply to another student's post.

Kelso's results are interesting to educators who are considering using

online discussion posts. Kelso arrived at several conclusions: (1) if students don't

understand the reading, they are likely to regurgitate or agree with what other

students wrote, (2) reading what other students posted helped the weaker

students understand the reading, (3) some students saw the discussion boards

as a place to put forth the right answers while others saw them as a place to

share their ideas, (4) reading peers' postings made the students think more

deeply about the literature, (5) students read the same passages but interpreted

them differently, and (6) the lives students live outside of school affect their focus

in school.

In the analysis of the students' postings, Kelso found several concepts

that teachers thinking of incorporating online discussion boards may want to

consider. The first was the students' development of awareness of an audience.

Posts at the beginning of the school year were aimed at the teacher while posts

at the end of the year recognized that everyone online was part of the audience.

The second change was developing voice. Students generally felt more

comfortable voicing their thoughts or opinions if they were placed in groups with

friends they saw on a daily basis. Some students began to see this medium as a

way to voice their ideas. Generally, students in this age group liked knowing the

faces of those they were responding to. Students generally saw posting their idea

as risky but appreciated that others took the risk of posting. Third, the students

who invested more of themselves in the discussion boards experienced more

growth than those who did not. The students who did not utilize the discussion

boards much viewed the requirement to post as homework, and homework is
homework regardless of the medium in which it is submitted. Finally,

students went from thinking that learning only occurs when the teacher delivers

the information to experiencing that learning also comes from interactions with

others, including peers.

The most recently published study was a Master's thesis by Desjarlais

(2007). Desjarlais' study closely resembles my study in that she examined

whether the use of the Internet affects students' essays. Her purpose was to

determine if the Internet aids students during the writing process when writing

about a topic in which they had a high domain knowledge versus a topic in which

they had low domain knowledge.

Desjarlais asked her participants to complete two essays, one requiring

high domain knowledge and one requiring low domain knowledge. The

participants were 60 undergraduate college students randomly assigned to three

conditions: 1) search the Internet for an hour with notes present and then write

the essay, 2) search the Internet for an hour without notes and then write the

essay, and 3) write the essay without searching the Internet beforehand.

Desjarlais tested five hypotheses (presented here in her own words but

condensed for brevity):

Hypothesis 1: Students would perform better on the essay corresponding to a

high knowledge domain than to a low knowledge domain.

Hypothesis 2: The presence of notes when writing the essay would facilitate

learning from the Internet when domain knowledge was low.

Hypothesis 3: Providing plenty of time (i.e. 1 hour) to search the Internet prior to

writing the essay may compensate for low domain knowledge.

Hypothesis 4: A high level of motivation for using the Internet to search for and

retrieve information would enhance students' learning.

Hypothesis 5: Learners who had notes present when writing the essay would

indicate that this procedure is more similar to the method they used for

completing essays for their university courses in comparison to learners who did

not have notes present.

Of the 60 student participants, 30 came from a field
related to Kinesiology (i.e., health and physical education) and 30 came from a field
related to political science (i.e., history and criminal
justice). Desjarlais equally divided the Kinesiology students into the three
conditions, which amounted to 10 students in each condition. She then equally
divided the 30 political science students into the three conditions, which again
amounted to 10 students in each condition.

Desjarlais then chose two writing topics: one in the political science area

and one in the area of sports. All students wrote in response to both topics. The

political topic was considered a high domain knowledge to political science

students and of low domain knowledge to kinesiology students. The sports topic

was considered a high domain knowledge to kinesiology students and of low

domain knowledge to political science students. For the sports topic, students

were told to "learn about how the athleticism of Ancient Greece and the sports

spectatorship of Ancient Rome are similar/ different from contemporary sport and

physical activity in the 21st century" (pp. 48-49). For the political topic, students

were told to "learn about how the role and powers of the American President are

different from the Canadian Prime Minister" (p. 50). The students were given 20

minutes to write the essay using Microsoft Word. In the first writing session, half

of the students responded to one topic while the other half responded to the

other topic. The students returned to the computer lab to respond to the second

topic an average of 4 days later. The control group (no Internet) completed both

essays in one session.

To score the essays, Desjarlais used a method in which 1 point was given

for an answer that directly addressed the question, half a point for an answer that

was relevant to the question, and 0 points for wrong or irrelevant answers.

Desjarlais hired two readers, a student in kinesiology and a student in political

science. Each student read 30 percent of the essays in their field. The student

readers were unaware of the assigned conditions for each essay. Desjarlais

served as the second reader for every essay. Desjarlais and the reader for the

sports essays achieved 82 percent reliability; Desjarlais and the reader for the

political essays achieved 80 percent reliability. Desjarlais read the remaining

essays herself.

Desjarlais found support for several of her hypotheses. For hypothesis 1,

students performed better when asked to write an essay in their area of study. In

other words, political science students scored better on the political essay and

kinesiology students scored better on the sports essay, p < .05. For hypothesis 2,

Desjarlais found no effects for the use of notes during the writing of the essay,

stating that "notes did not aid essay quality when learners had high or low

domain knowledge in comparison to an absence of notes" (p. 60). For hypothesis

3, Desjarlais found that the use of the Internet made a difference in that the

Internet groups scored better than the control group. For hypothesis 4, Desjarlais

found that motivation did not influence the essay scores, regardless of whether it

was a low domain knowledge essay or a high domain knowledge essay. For

hypothesis 5, both the Internet-with-notes and the Internet-no-notes groups

indicated that using the Internet to write essays was how they typically wrote

essays for their university courses.

The research study conducted by Desjarlais is significant in that she

designed an empirical study based on random assignment. The result most

significant to my study is that students who used the Internet in her study wrote

better essays as measured by the scores they earned than students who did not.

What is problematic about her study is that she participated in the scoring of the

essays. She did not indicate that the 70 percent of the essays she scored for

each essay topic underwent a process that would make the condition of the

writing (Internet or no Internet) blind even to herself. It is possible that she

remembered the condition under which each essay was written and subconsciously

scored the essays written with the use of the Internet higher than the essays

written without it.

The findings from the four dissertations and Dail's Master's thesis are

interesting. Though each dissertation had a different focus from the other

dissertations, the themes that emerged are similar. The first theme that emerged

was that most students like to use the Internet. They like the ease and comfort

that Internet access provides. The second theme that emerged is that the

Internet can be used to facilitate the learning of many subject areas including

reading, writing, and critical thinking skills. The inclusion of technology has made

it easier for teachers to individualize instruction for students. Lastly, despite

students' savvy with the Internet, they still need guidance in using the Internet as

well as in understanding the content area. A teacher who understands both the

Internet and the content area is invaluable in guiding the students.

Overall, the message that emerges from all of the research studies is

similar. The most important message is that the Internet is becoming an integral

part of students' learning, regardless of grade level or subject area. To facilitate

student learning, teachers and instructors have to stay abreast of recent

discoveries and uses of computers and the Internet in order to understand the

benefits they offer and to be able to assist the students.



Recommendations from the Researchers

Integrating classroom learning and technology is not an easy task.

Technology alone does not teach students; the instructor is a big component in

the success of the students' integration and application of technology. The

following is a list of suggestions compiled from the recommendations of the

various researchers cited in the above review.

First, it is crucial that the instructor is aware of students' access to

technology, previous experiences with online resources such as the Internet,

attitudes toward integrating technology, and goals for using each specific

technological tool in situated contexts (Kleiner & Lewis, 2003). Just as reading

teachers need to know their students' reading levels and abilities, teachers who

incorporate technology into their classrooms need to understand the students'

background experiences with technology.

Second, if a teacher plans to incorporate technology such as discussion

boards into the curriculum, the teacher should know that it can take an immense

amount of time in monitoring and reading the postings on the boards (Kelso,

2005). However, the time invested in setting up discussion boards or other

lessons involving technology may produce results that benefit the instructor,

the students, and the knowledge community alike.

Finally, teachers must be willing to give up the perception that they are the

sole authority of information and relinquish control of students' learning to the

students (Kelso, 2005; Yumek, 2002). This new era of learning necessitates that

teachers teach students how to seek and find information rather than memorize

facts.

Teachers who plan to incorporate technology, computers, or the use of the

Internet into their curriculum may want to consider the researchers'

recommendations in order to increase their success. Computers and the Internet

are developing into a fundamental part of learning; understanding the research

that has been conducted is important.

Conclusion

While the research seems to favor integrating technology and online

resources into the classroom, there is still room for discussion. The dearth of

existing data connecting online resources such as the Internet to K-12 students'

writing does not allow for conclusions to be drawn. While the studies conducted

thus far shed light on a few previously unknown facets of integrating the Internet,

further studies need to be conducted to determine the specific effects of online

resources and media such as the Internet, blogs, discussion boards, and chat

rooms and their application to K-12 students' writing.

The study I conducted examined whether students with Internet access

would perform better than students without access in synthesizing information

into the form of an essay. Specifically, I examined whether the use of the World

Wide Web as a tool in the composition of essays influences students' essay

scores. The results of this study will add to the body of research that currently

exists about the Internet, its connections to students, and its effects on students'

writings.

CHAPTER THREE

METHOD

The purpose of this study was to determine the effects of Internet access

and/or training on the scores 4th and 5th grade students receive on essays when

responding to a state-published writing prompt. The writing prompt was obtained

from the Commonwealth of Virginia's Department of Education website. More

discussion about the writing prompt will take place later in the chapter.

Setting
The Community

The setting for this study is a county located in Central Virginia with a

2005 countywide population of about 80,000. A large portion of this

population works for a major health industry employer, which employs

approximately 10,000 people. The county's residents rank in the top 10

percent in education level in the state. At the same time, problems that plague

other school districts also plague this district: African Americans are

overrepresented in special education and underrepresented in gifted programs,

the district has problems recruiting and retaining teachers of color, and it faces

pressure to meet the No Child Left Behind Act's requirement for adequate yearly progress (AYP).

The School District

For the 2006-2007 school year, the county enrolled 12,747 students in its

25 schools. The state's average per pupil expenditure was lower than the

county's, while the state's average teacher-to-student ratio

was higher than the county's. Countywide, a

smaller percentage of students received free or reduced lunch as compared

to the statewide figure. Table 1 is provided here for easy comparison between

the district and the state.

Table 1

The County's Per Pupil Expenditure as Compared to the Commonwealth of Virginia

State District

Per pupil expenditure $8,886 $9,558

Teacher-to-student ratio 1:13 1:12

Free or reduced lunch 31% 18%

Gifted 15.5%

Has a disability 16.5%

Like many other school districts, this school district has problems hiring and

retaining teachers of minority background but is currently making an attempt to

address this issue.

The Elementary School and the Area It Serves

Southside Elementary School is a fully accredited public school located in

the southern end of the county, in a very rural setting. In the spring of 2007, the

school had an enrollment of 180 students, making the teacher-to-student ratio at

Southside Elementary 10:1. Southside Elementary's student body reflects

less ethnic diversity than the state's or county's student population.

The ethnicity percentages in Table 2 below are for Southside Elementary

School (Greatschools.net, 2007); the statistics are presented to illustrate

how Southside Elementary compares to the district's and the state's ethnicity

percentages.

Table 2

Ethnic Composition of Students in School, District, and State

Ethnic Group State District Southside Elem

White, not Hispanic 59% 77% 92%

Black, not Hispanic 27% 13% 5%

Hispanic 8% 5% 2%

Asian/Pacific Islander 5% 4% <1%

Native Indian/ Alaskan <1% <1% <1%

Note. Racial labels used are labels from Greatschools.net

The socioeconomic status of the students at Southside Elementary is

similar to the rest of the district. Approximately 31 percent of the students receive

free or reduced lunch, exactly the same as the state average of 31 percent.

Results of the Spring 2007 4th and 5th grade SOL passing rates for

Southside Elementary School are in Table 3 below.

Table 3

Virginia SOL Passing Rate for Southside Elementary School

                        4th Grade Passing Rate                   5th Grade Passing Rate
Content Area            State Average   Southside Elementary     State Average   Southside Elementary

English: Reading        87%             94%                      87%             82%

Math                    81%             88%                      87%             89%

English: Writing                                                 89%             96%

Science                                                          88%             96%



As the table shows, Southside Elementary School made AYP in the spring 2007

SOL test administration as defined by the No Child Left Behind Act.

Study Design

This study was designed to provide three groups of students with varied

opportunities to write essays. Group I was the control group, provided with time

to write under standard administration. "Standard administration" means following

the same procedures that the school would normally follow when administering

this test. In this case, standard administration meant students received the

writing prompt and then received a total of 90 minutes to read the writing prompt

and write the essay.

Group II was provided with 30 minutes of Internet use to research their

topic, and then they wrote about the topic for 60 minutes.

Group III was provided with Internet training, time to use the Internet, and

then time to write for 60 minutes. Groups II and III were the

experimental groups and the focus of this study. The details of

their involvement will appear in a later section. Overall, the effects of these varied

conditions on students' essay scores were measured quantitatively.

The following hypotheses were tested:

H1: The groups with access to the Internet (Groups II and III) will earn

better scores on the essay than the control group--students without

access (Group I).

H2: Group III, with training on how to browse the Internet, will earn better

scores on the essay than Group II.



In this study, I employed a three-group experimental design. The rationale

for the creation of the third group, the group that received training on using the

Internet, was based on several factors: the findings of several research articles

that elucidated the need for teachers to teach students how to effectively conduct

research on the Internet, conversations with a technology teacher at a middle

school as well as a school librarian at an elementary school, and my own

experience teaching secondary school for 8 years. The particulars of each factor

are explained in more detail in the remainder of the chapter.

The findings of two recent research studies convinced me to design my

study to have three groups. Duran (2003) studied university students' use of the

Internet as they applied it to compositions required for an English class. One

student in Duran's study stated that he needed "more guidance and instruction

before using the Internet as a research tool" (p. 192). Additionally, one instructor

in the study said that students need "to become more information literate, to be

skeptical of types of Websites" (Duran, 2003, p. 192).

The second study, a study by Stapleton (2005), involved Japanese college

students. Stapleton found that those students used popular search engines when

searching for information for their essays. This method often produces thousands

of hits, causing students to waste time browsing through the various sites. Similar

to the younger students, they needed guidance about how to use the Internet

efficiently.

The many recent articles urging teachers to teach students how to use

the Internet also confirmed my decision to have a third group (Anderson, 2006;

Friesen, 2003; Harris, 2003). As Vance in Stone, Hoffman, Madigan, and Vance

(2006) stated, "Growing up with an application, however, does not mean having

an advanced skill set, no more than growing up with the English language means

having advanced compositional skills" (p. 120). According to these articles about

instruction, students need guidance in how to use the Internet.

According to the technology teacher at the Southside Elementary School,

students do not get formal training on how to browse the Internet. Teachers are

not required to instruct students on computer use or Internet usage; the skills

they learn are haphazardly gained from the content area teachers (English,

history, math, science, health) while doing research papers or other projects that

require use of the Internet. The lessons arise when an individual student asks a

question and the teacher answers that question. For some questions or

situations, the teacher and student may explore the answer together. The answer

may or may not be shared with the entire class. Furthermore, when I asked the

technology teacher if I should have two or three groups, explaining the

differences among the groups, he firmly answered: "Three."

Another factor that persuaded me to provide the third group with explicit

instruction on how to use the Internet was my 8 years of teaching high

school. In the classroom, when students went to do Internet searches, they often

just clicked on popular search engines such as Yahoo! or Google and typed in the first

search term that came to mind.

Table 4 below provides an overview of the three-group design and the tasks

involved. The specific details for each task are provided later in the chapter.

Table 4

Three-Group Design: Study Procedures

All three groups completed two measures: the Internet Self Perception Scale and the
Behavior Correlates Questionnaire.

Group I: Standard Administration
--Practice Day: Students are given the writing prompt and then get 90 minutes to plan
  and write an essay.
--Testing Day: Students are given the writing prompt and then get 90 minutes to plan
  and write an essay.

Group II: Browse Internet & Write
--Practice Day: Students are given the writing prompt, then get 30 minutes to browse
  the Internet, then 60 minutes to write, for a total of 90 minutes.
--Testing Day: Students are given the same writing prompt as Group I, get 30 minutes
  to browse the Internet, then 60 minutes to write, for a total of 90 minutes.

Group III: Internet Instruction, Browse Internet, & Write
--Internet instruction (three 45-minute sessions) is provided to discern a good website
  from a bad website.
--Practice Day: Students are given the writing prompt, then get 30 minutes to browse
  the Internet, then 60 minutes to write, for a total of 90 minutes.
--Testing Day: Students are given the same writing prompt as Group I, get 30 minutes
  to browse the Internet, then 60 minutes to write, for a total of 90 minutes.

The research design was based on two pre-writing measures, treatment, and

one post-writing measure--the essay score students received.

Participant Descriptions

Researcher's Role

I was a part-time itinerant teacher for students who were classified as English

Language Learners at Southside Elementary School for 3 of the 4 semesters during

the 2005-2007 school years. I worked at Southside Elementary for 1 hour per week,

working directly with one student in a pull-out model. In the three semesters there, I

had about 2 minutes of contact each week with 1 of the 3 teachers in the study; I

had had no contact with the other two teachers in the study before approaching

them about the study. Additionally, months before the fall of 2007, when this study

took place, I had resigned from my teaching position in the county; I was

no longer an employee of the county, nor did I work for the county in any

capacity during the length of this study.

Teacher Participants

The three teachers in the study self-reported their years of experience. The

teacher of Group I, the Standard Administration group, had 3 years of teaching

experience, all 3 years at Southside Elementary. The teacher of Group II, the group

that used the Internet to write the essay but received no formal Internet instruction,

had 11 years of teaching experience; all 11 years have been at Southside

Elementary. The teacher of Group III, the group that received Internet instruction,

had 12 years of teaching experience, with 10 years at Southside Elementary.



Student Participants

All students in the 4th and 5th grades at Southside Elementary School were

invited to participate in the study. The only requirement was that the student return

the parent consent form. Ultimately, 49 essays were collected from 4th and 5th grade

students; there were 27 boys and 22 girls. Participants in the study mirrored the

grade, gender, and ethnic representation of the school (95 percent Caucasian).

Teachers would not release the number of students receiving special education

services in each classroom; they only stated that "There are equal numbers of

special education students in each group." There were no students receiving direct

or indirect services as an English Language Learner in any of the three groups.

Timeline

This study took place over a period of 12 weeks in the fall of 2007, specifically in

October, November, and December of 2007. The actual time it took for students to

respond to both writing prompts was 2 days, in December. The student contact days

were as follows:

Day 1: Introduced self, explained study, and distributed parent consent forms.

Students returned the consent forms to their teachers and the teachers gave them to

me.

Days 2, 3, 4: Observed Internet Instruction for Group Ill.

Day 5: Administered writing prompt I to all three groups.

Day 6: Administered writing prompt II to all three groups.



Treatment
Gersten, Fuchs, Compton, Coyne, Greenwood, & Innocenti (2005) agree that

one quality indicator of a good study is random assignment. According to the

principles of randomized controlled trials, "the study participants (e.g., students,

teachers/classrooms, or schools) should have been placed to each study condition

through random assignment or a process that was haphazard and functionally

random" (What Works Clearinghouse, 2006, p. 6).

The participants in this study were randomly placed into 1 of 3 groups by the

school's new principal. This is how it happened: Southside

Elementary was facing an unusual year. In both the 4th and 5th grade

levels, there were not enough students to create two classes per grade; two classes

would have resulted in 14 students in each class, too few to justify hiring two full-time

salaried teachers. At the same time, there were too many students for one class, as

one class would have resulted in 28 students, too many for the school's standards of

optimal learning. The principal felt it was better to have what are called combination

classes, a combination of two grade levels--4th and 5th grades--with both grade levels

distributed into three classrooms, in order to produce a teacher-student ratio that

was conducive to learning. Thus, the principal and teachers all agreed that it was

better to have three combination classes, with both 4th and 5th grades in each class.

In the enrollment process during the summer, the principal randomly distributed the

students into the three combination classrooms. The principal did not know any of

the three teachers or any of the students, as she was new to the school. Through

the principal's random distribution, each student received an equal chance of being

placed into any 1 of the 3 groups in the study. That is the process by which students

were randomly assigned to the groups.
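For readers who prefer a concrete picture of this kind of two-stage randomization, the following minimal Python sketch illustrates it. The roster, class names, and fixed random seed are hypothetical, invented only for the example; this is an illustration of the general idea, not the principal's actual procedure.

import random

# Hypothetical roster; real names and class sizes differed.
students = [f"student_{i:02d}" for i in range(1, 50)]   # 49 participants

random.seed(2007)           # fixed seed only so the example is reproducible
random.shuffle(students)    # every student gets an equal chance at any class

# Deal the shuffled roster into three combination classrooms of near-equal size.
classrooms = {"Class A": [], "Class B": [], "Class C": []}
for index, student in enumerate(students):
    room = list(classrooms)[index % 3]
    classrooms[room].append(student)

# Then match each classroom to a study condition at random ("names from a hat").
conditions = ["Group I (standard administration)",
              "Group II (Internet, no instruction)",
              "Group III (Internet instruction)"]
random.shuffle(conditions)
assignment = dict(zip(classrooms, conditions))
print(assignment)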

For the purposes of this study, I kept the students grouped by classes as the

randomization process had already taken place in the beginning of the school year.

To determine which classroom would be placed into the various conditions of the

study, I drew names from a hat. The teacher of Group III, the group that would

receive Internet instruction, was offered the choice of having an outside person

teach the Internet instruction, but when he learned that lesson plans already

existed for it, he decided to conduct the instruction

himself. The Internet instruction was based on a unit from Library Sparks, a journal

for librarians (Larsen, 2005).

Administration of Surveys

Each teacher received a classroom set (20) of two measures: the Behavior

Correlates Questionnaire (Duran, 2003) and the Internet Self Perception Scale

(Hinton, DiStefano, and Daniel, 2003). The teachers administered the surveys

themselves during the language arts block in late October. Each teacher read the

questions aloud and students marked the answers with a pencil. To ensure

anonymity, students were told not to put their

names on the surveys. However, one of the measures asked for age, grade level,

race, and gender. Comparisons on the two measures were made among the groups

and not among individual students.



Internet Instruction

Internet instruction for Group III occurred in mid-November of 2007. The three

45-minute lessons were based on a lesson found in Library Sparks, a librarian's

journal (Larsen, 2005), and were conducted in the students' regular classroom. The

teacher used a laptop and an overhead projector to teach three lessons about using

the Internet. Students browsed the Internet using the school's Macintosh laptops

and chose their own Web browser.

Lesson 1: Domain Names on the Internet. In this lesson, the teacher taught students

about web addresses and the endings of .edu, .com, .gov, etc. Students learned that

a .gov website would be more reliable than a .com website, which typically has the

purpose of promoting or selling a product.

Lesson 2: Using Key Words During Searches. In this lesson, students learned about

using various key words to find information. For example, one can type in "fashion,

clothing, designer clothes", or a combination of search words using AND to find

information (e.g., clothes AND Ralph Lauren). Students also learned to use quotation

marks and the purposes of using them.

Lesson 3: Evaluating a Website for Authenticity. In this lesson, the teacher taught

students that anyone can create a website and showed students how to research a

website's registered owner by going to a specific website. Then the teacher asked

students to examine a website about a farm that grows Velcro as a crop

(http://home.inreach.com/kumbach/velcro.html) and then to decide if this was a real

website or a fake website. Students also looked at a website about a tree octopus

and had to decide if this octopus really existed.



During the three instructional sessions, students were paired off so that an

academically strong student, whom the teacher defined as a good reader, was

partnered with a weak reader. The partners took turns operating the computer

keyboard and typing in the domain addresses they were instructed to go to. Each

laptop had three Web browsers available: Safari, Firefox, and

Dashboard. The teacher used Firefox as his Web browser but issued no verbal

directions to students about the web browser they should use. Students were

allowed to choose their own browsers during their part of the exercise.

Rationale for the Chosen Writing Prompts

The writing prompts that the Virginia DOE issues are very general in wording

and do not require much background knowledge of the student in order to write an

essay. For example, the writing prompt for the 2006 SOL test for 5th graders

obtained from the Virginia DOE website reads: "Imagine that you are suddenly able

to fly whenever you want. Where would you go? What would you do? Write to

explain your new talent and how you would use it." The 2007 SOL writing prompt

requires even less background knowledge and reads: "What is your favorite subject

in school? Tell about that subject and explain your reasons for choosing it."

For the purposes of this study, I deliberately used writing prompts that would

require background knowledge on the part of the writer. I "found" a writing prompt

from the Virginia DOE's 5th grade 2006 SOL test in the WRITING section which

requires students to read a mock essay from a fictitious student and find the errors

by choosing 1 of 4 multiple choice answers. The original prompt read: "In science

class, Kevin is studying about the ocean. His teacher asks each student to write a

report about an ocean animal." Since this was found in the 5th grade SOL test, I felt it

was valid for use with 5th graders. I took this writing prompt and adapted it to read:

"In science class, students are studying the ocean. Your teacher has asked each

student to write a report about an ocean animal. Choose an ocean animal and write

about the animal." This was the writing prompt study participants had to respond to

on the practice day.

The second writing prompt used in this study was also found on the Virginia

DOE website, in the year 2005. Again, the prompt was found in the SOL WRITING

section for 5th graders rather than the "Writing Prompt" section. The original prompt

in the Writing section read: "David's class has been learning about the early 1900s.

For a class project, he talks to his grandfather about what life was like for him living

in the early 1900s. David decides to write about the things he and his grandfather

talked about." With input from the teacher participants in the study, I took this writing

prompt and changed it to read: "David's class has been learning about the early

colonial days. For a class project, he has to find out what life was like for people

living in the 1700s. Pretend that you are David and write about life during the

colonial days." The teachers felt that students in the control group could answer this

question because they had had some academic exposure to life in colonial days. It is

interesting to note that students' essays are not evaluated on the accuracy of the

content but rather on the students' writing abilities. That is, a student who writes that

Abraham Lincoln was the first astronaut to land on the moon can earn high marks

despite the factual inaccuracies.



Procedures During the Writing Days

The two writing days for study participants at Southside Elementary School

took place in the first half of December 2007. The first writing day

and the second writing day were exactly 7 days apart. On both writing days, the

students in all three groups started at the same time and finished at the same time.

On both writing days, the students followed the same procedures. The following

sections will explain how each group proceeded through the process on the writing

days.

Day One

Group I: Standard Administration. At the beginning of the writing session,

each student received a piece of full size paper (8.5 x 11) with the writing prompt

typed on it. The writing prompts were read aloud to all of the students. The writing

prompt on this day read: "In science class, students are studying the ocean. Your

teacher has asked each student to write a report about an ocean animal. Choose an

ocean animal and write about the animal." The students in this group already had

notebook paper and pencils in front of them and proceeded to follow the steps of the

writing process: brainstorming, creating an outline, writing the essay, reading the

essay for mistakes, and fixing any mistakes that they saw. They were allowed to

organize their time as they saw fit. Students proceeded through the writing process

and all finished their essay by the end of 90 minutes. The teacher collected the

essay as each student finished and asked them to read quietly at their desks so that

the other students may finish.



Group II: Internet Use without Instruction. Due to a limited number of

portable laptops, the three teachers agreed that this group would use the computers

in the school's computer lab, situated next to the library. The computers in the

computer lab are arranged against the four walls so that adults can supervise the

students and monitor the computer screens during the students' use of them.

Furthermore, the Children's Internet Protection Act (CIPA), a federal law enacted by

Congress in 2000, requires that schools which receive federal funding support for

Internet access must "include technology protection measures to block or filter

Internet access to pictures that: (a) are obscene, (b) are child pornography, or (c)

are harmful to minors, for computers that are accessed by minors" (Federal

Communications Commission, 2006). Southside Elementary School follows the

CIPA guidelines so students cannot access inappropriate materials on the Internet.

Students in this group proceeded in this manner. First, the students were

given the same handout as the standard administration group, with the writing

prompt typed on it. The teacher read the writing prompt aloud to the students. Then I

verbally told the students: "You now have 30 minutes to search the Internet for

information that will help you write your paper. You will use whatever techniques or

methods that you have learned to search the Internet. At the end of 30 minutes, I will

ask you to turn off your computer and proceed to the writing of your paper. You may

begin now."

Each student in this group sat at a table with an Apple computer with the

standard "desktop"-a scene with icons at the bottom. The Internet browsers that

were available for students to use were Safari, Firefox, and Dashboard. Students

were allowed to choose their own browser and regardless of the browser that

students chose, each browser opened up to the school's website's homepage.

Students had to navigate to a website of their own choosing by going to the address

line and typing in the address of a website. The students were not allowed to take

notes during the 30 minutes they browsed on the Internet. The students who used

the computers in the library/lab stayed in the library to write their essay using pencil

and paper. Students stayed focused during the writing task and did not talk to their

neighbors. At the end of 60 minutes of writing time, the teacher collected the essays.

Group III: Students Who Received Internet Instruction. Students in this

group stayed in their regular classroom and used the school's portable Macintosh

notebook computers, which I will call laptops. The laptops used in the classroom follow the same

student had his/her own laptop. The laptops used in the classroom follow the same

security guidelines as the library's desktop computers. The only difference is that the

teacher has to circulate in order to monitor the students' screens and websites

visited. These Apple laptops also had the standard "desktop"--a scene with icons at

the bottom. The Internet browsers that were available for students to use were the

same as the ones in the computer lab--Safari, Firefox, and Dashboard. Students

were allowed to choose their own browser and regardless of the browser that

students chose, each browser opened up to the school's website's homepage.

Students had to navigate to a website of their own choosing by going to the address

line and typing in the address of a website.

The procedures for this group proceeded in this manner. First, the students

were given the same handout as the other two groups, with the writing prompt typed

on it. The teacher read the writing prompt aloud to the students. I then verbally told

the students: "You now have 30 minutes to search the Internet for information that

will help you write your paper. At the end of 30 minutes, I will ask you to turn off your

computer and proceed to the writing of your paper. You may begin now." The

students were not allowed to take notes during the 30 minutes they were on the

Internet. Once the 30 minutes were up, the students closed the laptops and wrote

their essays at their desk. At the end of 60 minutes of writing time, the teacher

collected the essays. Students who finished writing their essay before the 60

minutes expired were told to read quietly at their desks.

Day Two

The second day of writing occurred exactly 7 days after the first day. On the

second day, the teachers and the students followed the same procedures as the first

day. The writing prompt for the second round was: "David's class has been learning

about the early colonial days. For a class project, he has to find out what life was like

for people living in the 1700s. Pretend that you are David and write about life during

the colonial days."

Again, the group under standard administration had notebook paper in front of

them and proceeded to follow the steps of the writing process: brainstorming,

creating an outline, writing the essay, reading the essay for mistakes, and fixing any

mistakes that they saw. This writing prompt proved difficult for the 4th graders in the

standard administration group as they had had very little exposure to the topic. The

5th graders had learned about Colonial Days in the previous school year. The

teacher encouraged the students to make up a story if they didn't know very much

about the topic.

The two writing prompts were chosen from the Virginia Department of

Education's released-test items for the state's Standards of Learning (SOL) test for

grade 5 students. In the actual school administration of the Virginia SOL direct

writing portion, students get unlimited time to write the essay. Due to time

constraints inherent in the hours of the school day, I chose to allow a total of 90

minutes for students to write the essay. The students in the standard administration

group could choose to spend as much time as they felt necessary to brainstorm and

plan before proceeding to the writing as this is the procedure for the state's standard

administration. The students using the Internet were allowed 30 minutes to search

the Internet and 60 minutes to write. All three groups received a total time of 90

minutes.

Data Collection

The independent measures and dependent measures that I used in data

collection are below.

Independent Measures

Assigned group. Each student was randomly assigned to 1 of 3 groups.

Group I, the Standard Administration Group, served as the control group. Groups II

and Ill were the treatment groups. The treatment that each of the two treatment

groups received was discussed above.

Pre-writing scale: Internet Self-Perception Scale (ISPS). People's attitudes

towards the Internet affect their usage of the tool (Tsai, Lin, & Tsai, 2001). To

account for the possible variance that might be attributable to students' attitudes

towards the Internet, I administered the Internet Self-Perception Scale (Hinton,

DiStefano, and Daniel, 2003). This scale (see Appendix A) consists of 29 items

based on a 5-point Likert scale ranging from "Strongly Disagree" to "Strongly Agree"

and measures four categories: Personal Self Evaluation, Comparison with Others,

Physiological States, and Social Feedback.

The reliability and validity of this scale have been addressed. According to

Hinton et al., reliability scores for the ISPS range from .73 to .85 for the

subcategories. Hinton et al. also built internal validity into the scale through their

questions. For example, one question states: "I like to use the Internet" while another

question states: "I enjoy using the Internet."

Pre-writing questionnaire: Behavior Correlates Questionnaire. Duran (2003)

created the Behavioral Correlates Questionnaire (BCQ; see Appendix B) to

measure behavior and use of the Internet. Duran's 19-item questionnaire asks

students about computer ownership and Internet usage. The BCQ is an adaptation

of the Attitude Toward Educational Use of the Internet (ATEUI), developed by

Duggan, Hess, Morgan, Kim, and Wilson (2001). Duran does not specify how she

made changes, but my comparison of the two measures shows that she changed the

wording slightly and added the question: "Is there anything else you would like to

say about the Internet used for English class? (write in this space here)"

The ATEUI questionnaire was first administered to a sample group of

university students (n = 70) to determine its scalability. After the first test run,

Duggan et al. pared the questionnaire down to 18 items: six negative, six neutral,

and six positive. The 18 items were then tested on a new sample of university

students (n = 69). Duggan et al. determined that the results of this test run made the

questionnaire scalable. Duggan et al. also assessed internal consistency, "which

yielded a coefficient of .60 for social desirability and .87 for the ATEUI" (p. 272). At

the final administration of the ATEUI with 188 university students, internal

consistency was .91. However, before the final administration, Duggan et al. added

5 items at the end to measure social desirability. A reliability test using Cronbach's

alpha showed a coefficient of .55 for social desirability and .89 for the ATEUI.
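For readers unfamiliar with the internal-consistency statistic referenced above, the following minimal Python sketch shows one common way Cronbach's alpha is computed for a respondents-by-items matrix. The item responses are invented for illustration only and are not data from the ATEUI, the BCQ, or this study.

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items matrix of Likert responses."""
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented responses: six respondents answering four items on a 1-5 scale.
responses = [
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
]
print(round(cronbach_alpha(responses), 2))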

The ATEUI asks about computer ownership and Internet usage, something

that Hinton, DiStefano, and Daniel's scale does not address. Since 71 percent of

African-American parents indicated that their children's main access to the Internet

is through school (National School Board Foundation, 2005), the questions on

Duran's questionnaire should help separate confounds such as accessibility to the

Internet at home.

Students' grades in Language Arts. The consent forms that the students and

parents signed asked for permission to obtain students' grades in Language Arts

class for the past semester. These grades served as another independent measure.

Students who receive good grades in Language Arts tend to be stronger

writers. The rationale for using grades as one variable is to account for variance

in students' writing abilities.

Gender. Organizations such as the American Association of University

Women (AAUW) and many researchers advocate disaggregating data by gender.

Some researchers (Gunzelman & Connell, 2006) feel this is necessary because

boys generally do not perform as well as girls in school. An example of this is the

gap in reading scores as measured by data obtained from the National Assessment

of Educational Progress (NAEP) which shows that boys' reading comprehension

scores are consistently lower than girls' scores (Klecker, 2006). On the other hand,

organizations like the AAUW have pointed out that girls may do well in some content

areas but not in math and science, which in turn contributes to an

income gap, since employment in math and science fields yields higher incomes than

professions rooted in service (e.g., teaching, nursing) or the humanities. Bimber (2000)

found the same income gap to be true in his research.

The gender gap in technology shows girls at a disadvantage in the knowledge

and use of computers (Bhargava, Petrova, & McNair, 1999). Stephenson (2006)

writes that the gender gap in technology has not closed. Singh, Allen, Schkler, and

Darlington (2007), in their analysis of the STEM Workforce Data Project, state that

there are fewer women in computer science today than in 1983. Gender differences

in computer usage, specifically in computer programming and self-efficacy show that

boys generally have more positive attitudes about themselves and their abilities

(Varma, 2002). Hackbarth (2002), who specifically studied 4th graders and the

gender gap, found that boys were generally able to list more computer-related terms

in three minutes than girls were able to, though both boys and girls at that grade

level liked computers equally.

The writing ability of boys and girls has been found to differ (Hackett, 1999;

Newkirk, 2000). Some (King & Gurian, 2006) attribute the gap to teachers' lack of

understanding of boys and their learning styles. Jones and Myhill (2007), who

looked at text-level linguistic features, report that while boys may differ from girls in

many areas, their writing, overall, reflects characteristics of good writers.

Disaggregating data by gender will allow us to see if the gender gap still

exists in writing and in attitudes towards technology and computer usage in this

population.

Dependent Measure

Essay scores. The dependent variable in this study is the students' essay

scores as measured by a rubric established by the Virginia Department of

Education. The Commonwealth of Virginia has established Standards of Learning

(SOL) that students must meet at each grade level. SOLs are "expectations for

student learning and achievement" (VDOE, 2006). The "standards represent a broad

consensus of what parents, classroom teachers, school administrators, academics,

and business and community leaders believe schools should teach and students

should learn" (VDOE, 2006a). In the area of writing, it is expected that students learn

to write narratives and expository papers. From the VDOE website, the 4th grade

writing SOLs are:

1) The student will write effective narratives, poems, and explanations.

a) Focus on one aspect of a topic.

b) Develop a plan for writing.

c) Organize writing to convey a central idea.

d) Write several related paragraphs on the same topic.

e) Utilize elements of style, including word choice and sentence variation.

f) Write rhymed, unrhymed, and patterned poetry.



g) Use available technology.

2) The student will edit writing for correct grammar, capitalization, spelling,

punctuation, and sentence structure.

a) Use subject-verb agreement.

b) Include prepositional phrases.

c) Eliminate double negatives.

d) Use noun-pronoun agreement.

e) Use commas in series, dates, and addresses.

f) Incorporate adjectives and adverbs.

g) Use the articles a, an, and the correctly.

h) Use correct spelling for frequently used words, including common

homophones

(VDOE, English Standards of Learning-Grade Four, 2002).

The 5th grade writing SOLs are:

1) The student will write for a variety of purposes: to describe, to inform, to entertain,

and to explain.

a) Choose planning strategies for various writing purposes.

b) Organize information.

c) Demonstrate awareness of intended audience.

d) Use precise and descriptive vocabulary to create tone and voice.

e) Vary sentence structure.

f) Revise writing for clarity.

g) Use available technology to access information.



2) The student will edit writing for correct grammar, capitalization, spelling,

punctuation, and sentence structure.

a) Use plural possessives.

b) Use adjective and adverb comparisons.

c) Identify and use interjections.

d) Use apostrophes in contractions and possessives.

e) Use quotation marks with dialogue.

f) Use commas to indicate interrupters and in the salutation and closing of a

letter.

g) Use a hyphen to divide words at the end of a line.

h) Edit for clausal fragments, run-on sentences, and excessive coordination.

The standards in both the 4th and 5th grades are very similar. When teaching

a combination class, teachers try to teach the content for both grade levels. For the

5th graders, some of the material may be a review. Such review can be

extremely helpful because 5th grade students must pass the SOLs, including the direct

writing portion, in order to be considered on grade level. Those who do not pass

must attend summer school and take the test again there.

For the study, all students in the 4th and 5th grades were asked to write two

essays using pen or pencil and paper. This method of writing by hand is one they

use daily and rules out the variance in typing speed. A study by Hollenbeck, Tindal,

Harniss, and Almond (1999) showed that in a statewide writing test, seventh graders

who took the test on computers did not do better than those who used handwriting. It

is assumed that this finding can be generalized to 4th and 5th graders as well.

Once students' essays were collected, I assigned a numerical code (e.g.,

39287) to each essay, using a specific number in the middle to inform me of the

writer's group condition. The purpose of the numerical code was to establish a blind-

scoring system, to avoid giving the two essay readers any indication of the treatment

that the participants received and thereby to avoid any subconscious biases that

might affect their scoring. This satisfies Gersten et al.'s question

concerning quality indicators of a research study: Are data collectors and/or scorers

blind to study conditions and equally unfamiliar to examinees across study

conditions? (Gersten et al., 2005, p. 151). In this study, the two readers did not

receive any information about the study except that they would be reading essays

from 4 th and 5th graders in one elementary school. The readers did not know that

there were different writing conditions (or groups) in the study.
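As an illustration only, the short Python sketch below shows one way such blinded codes could be generated. The five-digit format and the use of the middle digit as the group key are hypothetical choices made for the example, not the exact scheme used in this study.

import random

def make_blind_code(group, used_codes):
    """Return a unique five-digit code whose middle digit encodes the group (1-3)."""
    while True:
        prefix = random.randint(10, 99)    # two random leading digits
        suffix = random.randint(10, 99)    # two random trailing digits
        code = f"{prefix}{group}{suffix}"
        if code not in used_codes:         # avoid accidental duplicates
            used_codes.add(code)
            return code

used = set()
# Hypothetical essays and their group conditions (1 = standard, 2 = Internet, 3 = instructed).
codes = {essay: make_blind_code(group, used)
         for essay, group in [("essay_A", 1), ("essay_B", 2), ("essay_C", 3)]}
print(codes)   # readers see only the codes, never the group key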

Assessment tool. The essays were scored by two pre-service teachers in the

school of education. The rubric (see Appendix C) they used to score the essays is

the same rubric that the Virginia Department of Education uses to score students'

writing (VDOE, 2005). The scorers followed the same procedures that the VDOE

established, assigning scores to the essays in three domains: 1) composing, 2)

written expression, and 3) usage/mechanics. These domains reflect the six traits of

good writing, as subscribed to by the state department of education in Virginia.

To assess the essays, the two readers used the Commonwealth of Virginia's

scoring rubric. The advantages and disadvantages of using rubrics have been a

topic of discussion for many years. Briefly, the advantages of using rubrics are that

the teacher makes the goals, expectations, and grading process of the assignment clear

to the students (and parents) and allows students to direct their efforts to completing

the requirements of the assignment. Rubrics also allow the teacher to choose

assignments and to design lessons that will teach the students the skills they need in

order to meet the goals and expectations of the assignment. Rubrics also keep

teachers aligned to the grading scale and unbiased in their assessment of the

assignment (Andrade, 2005).

The disadvantages to using rubrics are also numerous. Rubrics are not self-

explanatory; they need to be explained. Students still need models and examples of

what is expected of the assignment. Lastly, rubrics do not replace the good

instruction of a teacher (Andrade, 2005).

Issues of validity and reliability also come into question when using scoring

rubrics. Research studies of rubrics have been inconclusive. This is not a study of

the validity or reliability of rubrics, so I will give only a brief account of two studies that

examined the issues of validity and reliability of specific rubrics. The first study, by

Hafner and Hafner (2003), found that a rubric used by college students in peer- and

self-assessment was both valid and reliable. The rubric examined was constructed

by the class as a class activity for a college class entitled Introduction to Human

Evolutionary Biology at Occidental College in California. The rubric was used to

evaluate an oral presentation created and presented in a collaborative (group)

mode. The instructor and the students used the rubric to score presentations in five

areas: organization and research; persuasiveness and logic of argument;



collaboration; delivery and grammar; and creativity and originality. The students and

the instructor used the rubric consistently across the three years of the study. The

researchers found that this specific rubric had moderate reliability and was gender

neutral.

The second study, by Novak, Herman, and Gearhard (1996) found "mixed"

results. Novak et al. examined a rubric that was specifically used to score narrative

writing samples. The researchers collected their writing samples from students in

grades 2 through 6 in an elementary school in a middle-class suburb in California.

The trainers/researchers used sample papers to train and calibrate the scorers. To

assess the reliability of the rubric, the researchers analyzed three areas:

percentages of agreement, correlations between raters, and generalizability

coefficients. Novak et al. compared this rubric to a rubric created for the Writing

What You Read (WWYR) process in previous years. The WWYR rubric specifically

addressed six subscales--theme, character, setting, plot, communication, and

narrative effectiveness. The last subscale, narrative effectiveness, was holistically

scored by integrating the previous five subscales.

Novak et al. found that the new rubric was not as reliable as the rubric used in

the WWYR assessment process. They concluded that "choice of rubric can have

substantial effect on both the technical quality and the results of a performance

assessment" (n. p.). From the studies, it can be concluded that validity and reliability

can only be established for specific rubrics used for a specific task. For this study, I

chose to be consistent with the Virginia DOE and use the VDOE's rubric.

Calibrating the Essay Readers

According to the VDOE, it is not necessary for the two readers to establish

reliability as the scores that each reader assigns are added together. From the

VDOE (1997): "If Reader A gives the student's paper a 3 and Reader B gives the

student's paper a 2, the student's score in the composing domain would be a 5.

Since a reader may assign a score of 1 to 4, the range of possible scores in any

domain would be from 2 to 8 when the two readers' scores are added together."
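To make this scoring arithmetic concrete, the minimal Python sketch below adds two hypothetical readers' 1-4 ratings in each domain, yielding the 2-8 range described above. As an assumption for illustration only, it also sums the three domain scores into a simple unweighted total; the ratings and that total are invented for the example rather than taken from the VDOE procedure.

# Hypothetical 1-4 ratings from the two readers in the three scored domains.
reader_a = {"composing": 3, "written_expression": 3, "usage_mechanics": 2}
reader_b = {"composing": 2, "written_expression": 3, "usage_mechanics": 3}

# Each domain score is the sum of the two readers' ratings (possible range 2-8).
domain_scores = {domain: reader_a[domain] + reader_b[domain] for domain in reader_a}

# Illustrative unweighted total across the three domains (range 6-24 under this assumption).
total_score = sum(domain_scores.values())

print(domain_scores)   # {'composing': 5, 'written_expression': 6, 'usage_mechanics': 5}
print(total_score)     # 16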

Though the VDOE does not require readers to calibrate or establish inter-

rater reliability, I felt it was important to have the two readers calibrate. The

calibration and scoring process took place in 1 day, in mid-December.

At the start of the day, I conducted a training session for the two scorers using

released samples of essays written by 4th and 5th grade students. There were four

writing samples and each sample had an assigned score of 1, 2, 3, or 4 (with 4

being the best score to receive) so that each reader could learn to recognize a 1-

essay, a 2-essay, a 3-essay, etc. A score of 4 indicates that the "writer

demonstrates consistent, though not necessarily perfect, control of almost all the

domain's features." A score of 3 indicates that the "writer demonstrates reasonable,

but not consistent, control of most of the domain's features indicating some

weakness in the domain." A score of 2 indicates that the "writer demonstrates

enough inconsistent control of several of the domain's features indicating some

weakness in the domain." Finally, a score of 1 indicates that the "writer

demonstrates little or no control of most of the domain's features." The DOE defines

control as "the ability to use a given feature of written language effectively at the

appropriate grade level" (Virginia Department of Education, 2005). Each reader had

the rubric in front of her for continual reference during the calibration process.

The readers went through two rounds of calibration. The first round had four

essays, each essay had an assigned score of 1, 2, 3, or 4. In the second round of

calibration, I presented three essays, with scores of 2, 3, or 4. There was no essay

with a score of 1. Reader 1 scored perfectly in both rounds of scoring, assigning all

seven essays the correct rating. Reader 2 scored correctly on 5 of the 7 essays. In

the final calibration process, I gave the readers two real student essays to score. In

this round, the two readers agreed perfectly with each other on 5 of the 6 writing

subcomponents, with the 5th score as adjacent to each other. The two readers talked

out their differences and both felt they understood the scoring process. According to

Fan, "the two scorers do not have to exactly match each other but the discrepancy

between their scoring should be within a limit that we can tolerate" (X. Fan, personal

communication, January 11, 2007).

After calibration was established, the readers were asked to score real

student essays. For this study, the committee and I decided that it was not

necessary to score the students' responses to writing prompt 1: "In science class,

students are studying the ocean. Your teacher has asked each student to write a

report about an ocean animal. Choose an ocean animal and write about the animal."

The first round of writing was intended as a practice round to familiarize students

with the procedures of the study.

To score the responses for writing prompt 2 ("David's class has been learning

about the early colonial days. For a class project, he has to find out what life was like

for people living in the 1700s. Pretend that you are David and write about life during

the colonial days."), I divided the students' essays, which had been shuffled, into two

piles. Each reader received half of the 49 essays (with one reader starting with one

more essay than the other reader). Each essay had a scoring sheet stapled to the

first page. The scoring sheet was designed so that after the first reader writes in her

score, she folds the paper so that the second reader does not see the scores. The

readers read each essay and assigned a score to the three subcomponents:

composing; written expression; usage and mechanics. After the first five essays

were scored, the two readers were allowed to compare the scores and discuss their

differences again.

Each reader proceeded through her own stack of essays before advancing to the
other reader's stack. The second reader then read the essays that had already been
scored by the first reader and assigned her own scores. The two readers continued
until every essay had been scored by both.
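
Because each of the three domains is rated from 1 to 4 by each of the two readers, a domain score can range from 2 to 8, and the Total Essay Score reported in the next chapter, the sum of the three domain scores, can range from 6 to 24. The short Python sketch below illustrates this arithmetic with invented ratings; it is offered only as an illustration of the scoring scheme, not as any scoring software used in the study.

    # Invented ratings (1-4) from the two readers for one essay, one pair per domain.
    ratings = {
        "composing": (3, 2),
        "written_expression": (3, 3),
        "usage_mechanics": (2, 2),
    }

    # Each domain score is the sum of the two readers' ratings (range 2-8).
    domain_scores = {domain: sum(pair) for domain, pair in ratings.items()}

    # The Total Essay Score is the sum of the three domain scores (range 6-24).
    total_essay_score = sum(domain_scores.values())

    print(domain_scores)      # {'composing': 5, 'written_expression': 6, 'usage_mechanics': 4}
    print(total_essay_score)  # 15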

Inter-rater Agreement

After all essays in the study were scored and the data entered into SPSS, I

performed an analysis to determine the interrater agreement for the two readers.

The table below indicates the correlation coefficient between the two readers in four

areas: composing, written expression, usage and mechanics, and total score.

Table 5

Inter-rater Agreement for the Two Readers

Composing    Written Expression    Usage/Mechanics    Total Score
r = .90      r = .90               r = .94            r = .96
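
The coefficients in Table 5 are correlation coefficients between the two readers' scores. Assuming these are Pearson correlations, a minimal Python sketch of the computation appears below; the ratings shown are invented for illustration and are not the actual score data.

    from scipy.stats import pearsonr

    # Invented composing scores (1-4 scale) from the two readers for eight essays.
    reader_1 = [3, 2, 4, 3, 1, 2, 4, 3]
    reader_2 = [3, 2, 4, 2, 1, 3, 4, 3]

    r, p = pearsonr(reader_1, reader_2)
    print(f"r = {r:.2f}")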



Data Analysis

There are five (5) independent variables: the assigned group (administration

mode) that students are placed in, the Internet Self-Perception Scale, the Attitude

Toward Educational Use of the Internet questionnaire, the grades the students

earned in language arts the previous semester, and gender. There is one dependent

variable: the score obtained on the essay.

Using the statistical software program SPSS (George & Mallery, 2003), I conducted
analyses of variance (ANOVA) to determine whether the hypotheses would be
supported or rejected. Again, the two hypotheses were:

H1: The groups with access to the Internet (Groups II and III) will earn better
scores on the essay than the control group--students without access
(Group I).

H2: Group III (with training on how to browse the Internet) will earn better

scores on the essay than Group II.

To determine the effects of the treatment, I compared the essay scores as

well as the three subcomponents (composing, written expression, usage &

mechanics) of the three groups using the analysis of variance (ANOVA) method. I

also looked at the two measures administered to students, specifically examining

students' attitudes towards using the Internet, computer ownership, and access to

the Internet. Last, I disaggregated the data by gender to determine if the gender gap

still exists. The results will be discussed in the next chapter.
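
The analyses themselves were run in SPSS; purely as an illustration of what a one-way ANOVA computes, the sketch below performs the same kind of test in Python with SciPy. The score lists are invented placeholders for demonstration only and are not the data collected in this study.

    from scipy import stats

    # Hypothetical Total Essay Scores (6-24 scale) for the three groups; illustrative only.
    group_i = [10, 12, 9, 14, 11, 13]     # control
    group_ii = [13, 15, 11, 14, 12, 16]   # Internet, no instruction
    group_iii = [15, 18, 13, 17, 16, 19]  # Internet with instruction

    f_stat, p_value = stats.f_oneway(group_i, group_ii, group_iii)
    print(f"F = {f_stat:.3f}, p = {p_value:.3f}")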



CHAPTER FOUR

RESULTS

In this study I examined whether the use of the Internet as a tool in the writing

process would affect the scores that 4th and 5th graders receive on their essays. The

following questions guided this research study:

1) What are students' perceptions of the Internet as a tool in their own writing

process?

2) Will using the Internet as a research tool help students write a better

essay than without the use of the Internet?

3) Will the use of the Internet affect the scores students receive in these

specific characteristics of their essays: composing, written expression,

usage and mechanics?

4) Does Internet training on discerning the differences among websites make

a difference in the quantitative scores of students' essays?

Additionally, two hypotheses were presented at the beginning of the study:

H1: The groups with access to the Internet (Groups II and III) will earn better scores
on the essay than the control group--students without access to the Internet (Group I).

H2: Group III, with training on how to browse the Internet, will earn better scores on

the essay than students in Group II who use the Internet based on skills they

acquired on their own.

As a reminder, there were three groups in this study:



• Group I--the control group. Students in this group wrote their essays under

standard administration. This group was not allowed to use the Internet during

the planning stage of writing or at any time during their writing process.

• Group II--students in this group were allowed to browse the Internet based on

skills they acquired on their own. This group was allowed to browse the

Internet for 30 minutes during the planning stage of writing.

• Group III--students in this group received three 45-minute sessions on how to

wisely use the Internet. The students were allowed to browse the Internet for

30 minutes during the planning stage of writing.

In the remainder of this chapter, I present the results of various analyses that

address my research questions, and the evaluations of the two hypotheses.

Findings to Research Questions

In the following section, I present the findings to the research questions that

guided this study. I included the specific research questions again for your

convenience.

Question 1: What Are Students' Perceptions of the Internet as a Tool They Use

Within Their Own Writing Process?

In order to answer this question, I examined students' answers to the

Behavior Correlates Questionnaire (BCQ) created by Duran (2003) which asks

students to answer questions about their perspectives on the use of the Internet and

their attitudes towards writing. In the remainder of this section, I discuss students'

answers to the BCQ and relay their computer practices and behaviors.

Responses to Behavior Correlates Questionnaire

When students were asked, "On average, how often do you search the

Internet to help you with writing assignments?" 15.9 percent of the students

answered never, 11.4 percent answered once a semester, 13.6 percent answered

several times a semester, 4.5 percent answered once a month, 2.3 percent

answered several times a month, 27.3 percent answered once a week, 13.6 percent

answered several times a week, and 2.3 percent answered every day. That equates

to 43.2 percent of students who browse the Internet at least once a week or more to

look for help with writing assignments, indicating the Internet's integration into

students' lives.

When students were asked, "How has the Internet affected the length of your

writing assignments?" 9.1 percent answered that they write much longer papers,

18.2 percent answered somewhat longer papers, and 63.6 percent wrote that the

Internet did not affect the length of their papers. A combined 4.6 percent of students

wrote that they write somewhat shorter or much shorter papers. It's interesting to

note that 63.6 percent of the students did not find the use of the Internet to have any

effect on their writing or the length of their papers.

When students were asked, "How has using the Internet affected the quality

of your writing?" 13.6 percent claimed that their writing is much better, 31.8 percent

claimed that their writing is somewhat better, 47.7 percent claimed that the Internet

did not change the quality of their writing at all. A combined 4.6 percent of the

respondents claimed that the Internet made their writing somewhat worse or much

worse. While a combined 45.4 percent stated that the Internet improved the quality

of their writing, it is curious to note that 4.6 percent felt that it made their writing

worse.

When asked, "How has using the Internet for research in English class

changed your attitude toward writing?" 34.1 percent of the students responded that they

enjoy writing much more, 18.2 percent enjoy it somewhat more, 40.9 percent stated

that their attitude toward writing didn't change, 2.3 percent said they enjoy writing

somewhat less now. A combined 52.3 percent enjoy writing more with the Internet,

yet conversely, 2.3 percent of the students stated that they enjoy writing less with

the Internet.

When students were asked, "How has using the Internet for research in

English class affected the ease or difficulty with which you write papers?" 29.5

percent stated that the Internet made it much easier to write papers, 22.5 percent

stated the Internet made it somewhat easier, 38.6 percent stated that the Internet did

not make a difference in the difficulty or ease of writing a paper, and surprisingly, 4.5

percent stated that the Internet made it somewhat more difficult to write a paper. A

combined 52 percent of the students stated that they found that the Internet made

writing research papers easier.

When asked, "If you could get all class information from the Internet,

would you go to class?" 63.6 percent of the students responded that they would still

choose to go to class, even if they were able to obtain all information from the

Internet.

To summarize students' perceptions of the Internet as a tool in their own

writing, one can say that while many students like the use of the Internet, there are

also a large number of students who do not feel that it affects their writing at all. After

all, 63.6 percent of the students feel that the Internet has no effect on the length,

47.7 percent feel that it has no effect on the quality, and 38.6 percent feel that it

does not affect the ease or difficulty of their writing.

I presented students' attitudes about the role of the Internet in their writing
before presenting the essay score data because their attitudes may affect their essay
scores. These responses also give us an idea of the characteristics of the group of students in this

study.

Question 2: Will Using the Internet as a Research Tool Help Students Write a Better

Essay Than Without the Use of the Internet?

In order to answer this question, I compared the control group with Group II,
Group III, and Group W--that is, Groups II and III combined, as these are the
two groups that browsed the Web. If we only look at each group's mean total essay
score (see Table 6), we see that the two groups that used the Internet scored better

than the control group.

Table 6

Group Means for Total Essay Score

Group    N     Mean Essay Score    SD
I        16    11.69               3.825
II       17    13.29               4.497
III      16    15.69*              5.853
W        33    14.45               5.853

Note. The mean essay score for Group III, when compared to Group I, reaches statistical significance.

From Table 6, one can see the means for the Total Essay Score for each group,

including Group W, which is a combination of Groups II and III. Both groups that

used the Web received a higher mean than the control group. Group W, the Web

Group, also earned a higher mean than the control group. A one-way analysis of

variance (ANOVA) among the groups, however, shows that some results were

significant while others were not. The specific results for Groups I and II, Groups I
and III, and Group I and Group W are discussed below.

Comparison Between Groups I & II

Students in Group I earned a mean of 11.69 on their Total Essay Score and

students in Group II earned a mean of 13.29 on the same area. The standard

deviations of the scores for both groups were relatively close to each other. A test

using ANOVA showed that the difference between the two groups was not statistically
significant: df=1, F=1.215, p=.279.

In studies with a small n, it is important to calculate effect sizes (ES).
Since the three groups were comparable in size, I calculated effect size using
Cohen's d (Thompson, 2007), that is, d = (M_E - M_C) / SQRT[(SD_E^2 + SD_C^2) / 2].
In other words, I took the difference between the group means and divided it by the
pooled standard deviation. When calculations for effect size were completed between
Group I and Group II for the Total Essay Score, I obtained a d = .38. This effect size
is in the low range of effect sizes as categorized by Cohen (1988).
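
To make the computation concrete, the short Python sketch below applies this pooled-standard-deviation form of Cohen's d to the Group I and Group II means and standard deviations reported in Table 6; the function name is my own, and the sketch is an illustration rather than the analysis script used in the study. It reproduces the d of approximately .38 reported above.

    from math import sqrt

    def cohens_d(mean_e, mean_c, sd_e, sd_c):
        """Cohen's d using the pooled standard deviation of two groups."""
        pooled_sd = sqrt((sd_e ** 2 + sd_c ** 2) / 2)
        return (mean_e - mean_c) / pooled_sd

    # Values from Table 6: Group II (Internet, no instruction) vs. Group I (control).
    d = cohens_d(mean_e=13.29, mean_c=11.69, sd_e=4.497, sd_c=3.825)
    print(round(d, 2))  # 0.38 -- a small effect by Cohen's (1988) benchmarks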

Comparison Between Groups I & III

Students in Group I earned a mean of 11.69 on their Total Essay Score while

students in Group III earned a mean of 15.69 on their Total Essay Score. An
analysis using ANOVA showed statistical significance, df=1, F=4.064, p=.053.
When calculations for effect size were done for the Total Essay Score, the results
indicated d = .827, which signals a large effect size.

The effect size produced by the treatment is large and noteworthy from a
research perspective. It is interesting to note the wide SD in Group III as compared
to the smaller SD in Group I. An explanation for the wide SD in Group III is not apparent.

Comparison Between Group I and Group W

Group I earned a group mean of 11.69 in Total Essay Score while Group W earned

a mean of 14.45 on the Total Essay Score. A one-way ANOVA indicates there is no

statistical significance between the two conditions: df=1, F=2.947, p=.093. A

calculation of effect size in the Total Essay Score, however, yielded an effect size of

.570 which is a medium effect size.

Comparison Between Groups II & III

For the Total Essay Score, Group II scored a mean of 13.29 and Group III
scored a mean of 15.69. An analysis using ANOVA showed no statistical
significance: df=1, F=1.395, p=.25. Calculations for effect size yielded d = .976, which

is a large effect size.

Thus, at this point, we can say that when we only compare students who

received instruction on how to use the Internet with students who used the Internet

without instruction, there is no statistically significant difference in their Total Essay
Scores. Yet a calculation of effect size yielded a large effect size (.976).

To summarize the findings for Question 2, there was a statistical difference

between the scores of Group I and Group III; Group III, the group that received
Internet instruction, scored higher. This was the only statistically significant result

when the means of the total essay scores were compared between groups. From

this analysis, we can conclude that when students use the Internet while planning

their writing without receiving instruction, their use may or may not make a

difference. When students, however, use the Internet during their planning process,

after receiving instruction, the scores they earn on their essays are higher than, and
significantly different from, those of the control group.

Question 3: Will the Use of the Internet Affect the Scores Students Receive on

These Specific Characteristics of Writing: Composing, Written Expression, and

Usage/ Mechanics When Compared to the Control Group?

The three subcomponents that are scored in the writing portion of the Virginia
SOL--composing, written expression, and usage/mechanics--were used as the criteria
to answer this question. Composing refers to the writer's ability to express one or

several central ideas, elaborate on ideas, support an argument, or organize a

narrative. Written expression refers to the writer's ability to create images in the

reader's mind, choose precise words to create tone and enhance the writer's voice,

and demonstrate varied sentence lengths and structures. Usage and mechanics

refers to the writer's ability to use capitalization and punctuation skills, form

sentences that are structurally sound, and use correct spelling. In the following

section, I will compare Group I--the control group--to Groups II and III to answer
the question. I will not compare Group II to Group III, as that comparison is discussed in

Question 4.

Using a one-way ANOVA for the analysis, the results indicate that use or
non-use of the Internet made a significant difference in only one of the three writing
subcomponents--usage/mechanics. A detailed analysis of each

of the three subcomponents follows.

Area: Composing

The area of composing measures the writer's ability to express one or several

central ideas, elaborate on ideas, support an argument, or organize a narrative. An

examination of each group's mean score in the area of composing shows that Group

I scored a mean of 3.69 (out of a possible 8 points), Group II scored a mean of 4.06,

Group III scored a mean of 4.87, and Group W scored a mean of 4.45 (see table

below).

Table 7

Group Means for the Writing Subcomponent: Composing

Group    N     Mean    SD
I        16    3.69    1.25
II       17    4.06    1.25
III      16    4.87    2.36
W        33    4.45    1.89

An analysis using ANOVA showed no statistical significance among the three

groups, df=2, F=2.059, p=.138. A one-way ANOVA between Group I and Group II

showed no statistical significance, df=1, F=.728, p=.400, and yielded an effect size of
.30. Similarly, a comparison between Group I and Group III was not statistically
significant, df=1, F=3.157, p=.086, but yielded an effect size of .65, which is

considered a medium effect size. A comparison between Group I and Group W

showed no statistical significance, df=1, F=2.165, p=.148 but yielded an effect size

of .51, which is considered a medium effect size. While there were no statistically
significant differences between groups, an analysis of effect sizes yielded effects ranging

from a low of .30 to a high of .65.

Area: Written Expression

This subcomponent refers to the writer's ability to create images in the

reader's mind, choose precise words to create tone and enhance the writer's voice,

and demonstrate varied sentence lengths and structures. An analysis of each

group's mean score in the area of written expression (see Table 8) shows that

Group I scored a mean of 4.13, Group II scored a mean of 4.47, Group III scored a

mean of 5.13, and Group W scored a mean of 4.80.

Table 8

Group Means for the Writing Subcomponent: Written Expression

Group    N     Mean    SD
I        16    4.13    1.45
II       17    4.47    1.66
III      16    5.13    2.55
W        33    4.79    2.13

A one-way analysis using ANOVA showed no significance among the three

groups, df=2, F=1.094, p=.343. An ANOVA between Group I and Group II showed

no statistical significance: df=1, F=.402, p=.531, but yielded an effect size of .22,
which is a small effect size. Similarly, an ANOVA between Group I and Group III also
showed no statistical significance: df=1, F=1.853, p=.184, but yielded an effect size
of .50, which is a medium effect size. A one-way ANOVA between Group I and

Group W showed no statistical significance: df=1, F=1.155, p=.268 and yielded an

effect size of .37 which is considered a small effect size. While there was no

statistical significance among the groups, effect sizes ranged from a low of .22 to a

medium effect size of .50.

Area: Usage and Mechanics

This domain refers to the writer's ability to capitalize and punctuate, create

sentences that are structurally sound, and use correct spelling. An analysis of each

group's mean score in the area of usage and mechanics shows that Group I scored

a mean of 3.88, Group II scored a mean of 4.76, Group III scored a mean of 5.69,

and Group W earned a mean of 5.21.

Table 9

Group Means for the Writing Subcomponent: Usage/Mechanics

Group    N     Mean     SD
I        16    3.88     1.78
II       17    4.76     2.22
III      16    5.69*    2.58
W        33    5.21     2.41

Note. Mean for Group III reaches statistical significance, p=.028; ES=.83.

A one-way ANOVA among the three groups in usage and mechanics showed no
significance: df=2, F=2.672, p=.08. An ANOVA between Group I and Group II

showed no significance: df=1, F=1.595, p=.216 but yielded an effect size of .44

which is considered a low-medium effect size. An ANOVA between Group I and
Group III, however, showed statistical significance: df=1, F=5.38, p=.028, and yielded

an effect size of .83 which is a large effect size. A one-way ANOVA between Group I

and Group W showed statistical significance: df=1, F=3.882, p=.055 and yielded an

effect size of .63. It seems that when students are given instruction on how to use
the Internet during the planning portion of writing, their scores show statistically
significant differences and large effect sizes. This is indeed a curious finding, and more

discussion of this will take place later.

To summarize the findings for Question 3, we see that the analyses that

yielded statistical differences were those of usage and mechanics between Group I

and Group III as well as between Group I and Group W. Effect sizes ranged from a
low of .30 to .65 in the area of composing, from .22 to .50 in the area of written

expression, and from .44 to a high of .83 in the area of usage and mechanics.

Though there was no statistical significance between Group I and Group II in any of

the writing subcomponents, the effect sizes between the control group and Group II

were .30 for composing, .22 for written expression, and .44 for

usage/mechanics. Further discussion of effect sizes will take place in chapter five.

Question 4: Does Internet Instruction on Discerning the Authenticity of Websites

Make a Difference in the Quantitative Scores of Students' Essays?

In this section, I will compare only Groups II and III, omitting Group I--the

control group, which had no access to the Internet during the planning of their

essays. Via ANOVA, I will analyze all three subcomponents of writing (composing,

written expression, usage/mechanics) as well as the total essay scores. Table 10

below shows the mean score in the three subcomponents and the total essay score.

In all four areas, Group III earned higher scores than Group II, yet an analysis using

ANOVA showed no significant differences in the total scores or in any of the

subscores.

Table 10

Comparison of Group Means Between Groups II & III on Writing Scores

Area Scored                        Group II (n=17)     Group III (n=16)
                                   M        SD         M        SD
Composing (8 pts max)              4.06     1.25       4.88     2.36
Written Expression (8 pts max)     4.47     1.66       5.13     2.55
Usage/Mechanics (8 pts max)        4.76     2.22       5.69     2.58
Total Essay Score (24 pts max)     13.29    4.50       15.69    5.85

Area: Composing

In the area of composing, Group II scored a mean of 4.06 (of a possible 8
points) and Group III scored a mean of 4.88. A one-way ANOVA indicates that there
is no significance between Groups II and III in the area of composing: df=1, F=1.566,
p=.22. An analysis of effect size yielded d=.44, which can be categorized as a
low-medium effect size.

Area: Written Expression

In the area of written expression, Group II scored a mean of 4.47 (possible 8

pts) while Group III scored 5.13. A one-way ANOVA indicates no significant
difference: df=1, F=.771, p=.387. An analysis of effect size yielded d=.31, which can

be categorized as a small effect size.

Area: Usage and Mechanics



In the area of usage and mechanics, Group II scored a mean of 4.76

(possible 8 pts) while Group III scored a mean of 5.69. A one-way ANOVA indicates

again that there is no significant difference: df=1, F=1.219, p=.278. An analysis of

effect size yielded d=.39, which is a small effect size that almost reaches a medium
effect size. Although the mean score for Group III is higher, the difference did not
reach statistical significance, which suggests that Internet instruction alone had a
limited effect on usage and mechanics in this comparison.

Area: Total Essay Scores

In analyzing the total essay score (the sum of the scores of the three

subcomponents) for both groups, Group II scored a mean of 13.29 (of possible 24

points) while Group III earned a mean of 15.69. A one-way ANOVA indicates no

significant difference, df=1, F=1.219, p=.278. An analysis of effect size yielded a

d=.42 which almost reaches a medium effect size.

For a summary of the comparisons of effect sizes between the control group

and Groups II, III, and W, see Table 11.

Table 11

Comparison of Effect Size against Control Group in the Writing Subcomponents and Total Essay Score

Area                   Group II    Group III    Group W
Composing              .30         .65          .51
Written Expression     .22         .50          .37
Usage/Mechanics        .44         .83          .63
Total Essay Score      .38         .83          .57



Though there was no statistical significance in any of the writing

subcomponents or in the total essay score, effect sizes ranged from a small effect size
of .22 to a large effect size of .83. If we only look at statistical significance, it would
seem that the results contradict the hypothesis I made at the beginning of the study
that students who receive instruction in techniques to search for information on the
World Wide Web would produce better essay scores. If we look at effect sizes, it
would seem that instruction on using the Internet produces small to low-medium
effect sizes. Discussion of these findings will appear later.

The Gender Factor

As Kleckner (2006) reported, the No Child Left Behind Act does not require

that data be disaggregated by gender. However, because of the gender gap in

writing which favors girls, reported in chapter two, I felt it important to disaggregate

the data by gender.

In this study, there were a total of 49 student essays: 27 from boys (55.1
percent) and 22 from girls. The group means for the Total Essay Score as well as each of the writing

subcomponents are in the table below.

Table 12

Gender Comparison in the Writing Subcomponents and Total Score

                         Male (n=27)         Female (n=22)
Area                     Mean      SD        Mean      SD
Composing                4.37      1.82      4.00      1.63
Written Expression       4.56      1.97      4.59      1.97
Usage/Mechanics          4.44      2.17      5.13      2.42
Total Essay Score        13.37     5.49      13.77     5.40



In the area of composing, the boys' mean of 4.37 was higher than the girls'

mean of 4.0. An analysis using ANOVA showed no statistical significance between

the two groups' scores, df=1, F=.55, p=.462. Effect size obtained was .21 which is a

small effect size.

In the area of written expression, boys scored a mean of 4.56 and girls scored

a mean of 4.59. An analysis using ANOVA showed no statistical significance

between the two groups' scores: df=1, F=.004, p=.954. Calculations for effect size

for written expression yielded d=.015 which is essentially no effect.

In the area of usage and mechanics, boys scored a mean of 4.44 and girls

scored a mean of 5.18. An analysis using ANOVA also showed no statistical

significance in this area: df=1, F=1.26, p=.267. Calculations for effect size yielded

d=.30 which is a small effect size.

Results indicate that boys earned a mean of 13.37 on the Total Essay

Score and girls earned a mean of 13.77. An analysis using ANOVA showed no

statistical significance between the two genders: df=1, F=.066, p= .798. A calculation

of effect size yielded d=.07 indicating no effects.

In summary, girls scored higher than boys in all areas except in the area of

composing. ANOVAs performed on the Total Essay Score and the subcomponents

reveal that there were no statistical differences between girls and boys. Effect sizes

obtained ranged from less than .01 to .30. The results of this study follow Jones and

Myhill (2007) in showing that both boys and girls can produce quality work.

Teasing Out Other Factors

In order to verify that it was the treatment that produced the outcomes described
above, I analyzed three major factors that might have influenced the findings.

The first factor was whether the groups had differing writing abilities to start with.

The second factor that might have influenced the results would be students'

computer ownership and access to the Internet. The third factor is students'
perceptions of their own self-efficacy when using the Internet. In the next three sections, I

analyze the factors that might have influenced the essay scores and influenced the

above results.

Factor 1: Students' Writing Abilities

By the time students reach 4th and 5th grade, their writing abilities can differ

drastically from one another. To assess whether this was the case when the

students entered this project, I computed a mean for each group, using the following

information. In 4th and 5th grades, teachers assign grades in six areas which are

subsumed under the area of writing: 1) planning strategies, 2) organizes and
conveys a central idea, 3) uses tone, voice, and sentence variation, 4) uses correct

grammar, punctuation, capitalization, and spelling, 5) writes for a variety of

purposes, and 6) uses vocabulary effectively.

For each category, teachers assign a numerical grade ranging from 1 to 3.

Each numerical score represents students' current proficiency level in that area. A

score of 1 indicates the student "needs improvement," a score of 2 indicates the

student is "developing" the skill, and a score of 3 indicates the student "meets the

standard" in that skills. Based on this numerical district-wide rating scale, it was

possible for each student to receive a maximum of 3 points in each of the six
categories, for a maximum total of 18 points. Once all scores were added for each
student, I computed the mean score for each group. See Table 13 for group means.

Table 13

Group Means for Grades in Language Arts

Group    Mean
I        16.70
II       16.56
III      15.64

To determine whether the group means differed significantly, I conducted an ANOVA,
which indicated no statistical significance among the groups, df=2, F=.975, p=.390.

This analysis tells us that the students in the three groups began the study with

comparable writing abilities.

Factor 2: Computer Ownership and Internet Access

Owning a computer at home and having Internet access allow a student to browse
the Internet more frequently than a student without them can. From students' self-reported data, I

will present data on students' computer ownership, internet access, and students'

views of the Internet as a resource in the writing process. Since a computer is a

large, visible item in the home, we can reasonably assume that students' self-reported
data about computer ownership are accurate.

Based on student-reported data, 86.6 percent in Group I own a computer,

87.5 percent in Group II own a computer, and 66.6 percent in Group III own a
computer (missing 3 surveys from this group). An analysis using ANOVA showed

that computer ownership among the three groups did not result in any significant

differences in their performance in the study: df=2, F=.960, p=.391.

A household that has a computer may not necessarily have Internet access.

Based on students' self-reported data, 80 percent in Group I, 87.5 percent in Group

II, and 66.6 percent (missing 3 data sets) in Group III have Internet access at home.

A one-way ANOVA was conducted to answer the question, "Does Internet access at

home appear to be responsible for any statistical difference among the scores of the
groups?" The results indicate that no significant difference exists among the three
groups because of Internet access from home: df=2, F=.711, p=.497.

Factor 3: Self-Efficacy and Internet Usage

The Internet Self-Perception Scale (ISP) created by Hinton, DiStefano, &

Daniel (2003) measures students' confidence in their ability to use the Internet.

Many research studies reveal that students' self-efficacy affects their learning

outcomes. To ensure unbiased answers from the students, I asked teachers to

administer the ISP to all students before the groups were placed into treatment

conditions. To examine students' self-efficacy, I chose several key questions to

analyze. The questions reflect students' attitudes towards their own ability to browse

the Web. The questions I chose, as well as the results of the various analyses, are

presented in the next section.

The ISP uses a Likert scale with five categories: Strongly Disagree (SD),
Disagree (D), Undecided (U), Agree (A), and Strongly Agree (SA). To obtain a group
mean, I assigned numerical values to each category: SD=1; D=2; U=3; A=4; SA=5.

Based on this numerical assignment, the higher values reflect students' more

positive perceptions of their ability to use the Internet.
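
As a concrete illustration of this coding, the snippet below maps the Likert labels to the numeric values described above and averages a set of invented responses; the responses are placeholders, not the survey data collected in this study.

    LIKERT_VALUES = {"SD": 1, "D": 2, "U": 3, "A": 4, "SA": 5}

    # Invented responses from one group to a single ISP statement.
    responses = ["A", "SA", "A", "U", "SA", "A", "D", "A"]

    scores = [LIKERT_VALUES[r] for r in responses]
    group_mean = sum(scores) / len(scores)
    print(round(group_mean, 2))  # 3.88 -- higher values reflect more positive self-perceptions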

The first question I analyzed was: "I think that I am good at using the

Internet." Students' responses produced substantially different means among the

three groups. Group I had a mean of 4.33, Group II had a mean of 4.37, and Group

III had a mean of 3.46. It's interesting to note that Group III, the group with the

lowest confidence level, was the group randomly chosen to receive Internet

instruction. A one-way ANOVA yielded statistical differences among the three

groups: df=2, F=5.731, p=.007. The scores of Groups I and II were fairly close to

each other, but the score of Group III was almost a full point lower than those of the
other two groups, indicating a lack of confidence among students in Group III. Their
performance, however, was, in general, equal to that of the other two groups. Thus it could be argued

that their Internet instruction had more of an effect on the essay scores than the

original comparison scores indicated.

The second statement I analyzed was: "I can use the Internet faster than

other kids." Group I yielded a mean of 3.42, Group II yielded a mean of 3.07, and

Group III yielded a mean of 2.85. An ANOVA produced no statistical difference:

df=2, F=.081, p=.457. This indicates that, in general, there was not much difference

in the students' perceptions of their Internet proficiency.

The third statement I analyzed was: "I understand how to use the Internet as

well as other kids do." Group I had a mean of 4.42, Group II had a mean of 4.13,

and Group III had a mean of 3.46. In response to the different wording, students

rated themselves more favorably. A one-way ANOVA of this statement produced



statistical difference: df=2, F=4.85, p=.013. The two groups that were allowed

access to the Internet, Group II and Group III, had means almost one full point apart.
This lower score for Group III, however, did not depress their essay scores; their
essay scores were generally similar to or higher than those of the

students in the other groups. Thus, it could be said that the Internet instruction

benefited them.

The fourth statement I analyzed was: "I like to use the Internet" to which

students in all three groups responded favorably, with a total mean for all three

groups at 4.07. Separately, Group I scored a mean of 4.33, Group II scored a mean

of 4.37, and Group III scored the lowest mean of 3.46. An analysis of the fidelity

check question (paired with the former question), "I enjoy using the Internet" yielded

similar means for all three groups.

A one-way ANOVA of the statement, "I like to use the Internet" yielded no

statistical difference: df=2, F= .282, p=.756. A one-way ANOVA of the statement "I

enjoy using the Internet" also produced no statistical difference: df=2, F=.429,

p=.659. The creators of the ISP indicated that the reliability scores for the questions

range from 73 to 85 percent. I decided to test the two statements for reliability using

the Guttman split-half test of reliability. This test produced a reliability coefficient of
.80 for the two questions, telling us that the two questions correlate well with each
other. The reliability of the two questions then allows us to put more confidence in
the results we obtained from the ANOVA tests, which indicated no significance.
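
Assuming the standard two-part Guttman split-half formula, the coefficient can be computed from the two item variances and the variance of their sum. The sketch below shows that calculation on invented 1-5 Likert responses; it is a minimal illustration of the formula, not a reproduction of the SPSS output or of the study's survey data.

    import numpy as np

    def guttman_split_half(item_a, item_b):
        """Guttman split-half reliability for a two-item split:
        2 * (1 - (var_a + var_b) / var_total), using sample variances."""
        a = np.asarray(item_a, dtype=float)
        b = np.asarray(item_b, dtype=float)
        var_a, var_b = a.var(ddof=1), b.var(ddof=1)
        var_total = (a + b).var(ddof=1)
        return 2 * (1 - (var_a + var_b) / var_total)

    # Invented responses (1-5) to the paired statements; illustrative only.
    like_internet = [4, 5, 3, 4, 5, 2, 4, 3]
    enjoy_internet = [4, 5, 3, 5, 4, 2, 4, 4]
    print(round(guttman_split_half(like_internet, enjoy_internet), 2))  # ~0.89 for these values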

The fifth statement I analyzed was: "I feel good inside when I use the

Internet." Group I scored a mean of 3.92, Group II scored a mean of 3.75, and
121

Group Ill scored the highest mean at 4.08. Again, a one-way ANOVA produced no

statistical difference: df=2, F=.312, p=.734. For internal consistency, the creators of

the ISP added the statement, "Using the Internet makes me feel good." Students'

responses to this question correlated with the statement, "I feel good inside when I

use the Internet." When I analyzed the reliability of the two statements using the

Guttman-Split Half method, the questions yielded a reliability coefficient of .7318. A

one-way ANOVA of these two statements also produced no statistical difference:

df=2, F=.860, p=.433.

Finding information on the Internet is a skill that some students possess more

than others. To determine if this was a factor that might affect the outcomes of

students' essays, I conducted an analysis of students' responses to two statements:

"When I use the Internet, I can figure out how to find information better than other

kids" and "I can figure out how to find information on the Internet better than I could

before." This scale was only administered to students before the treatment so

students' definition of "before" varies. I compared the two statements to each other

because both statements use the word "information" as the anchor. Yet, because of

the wording, students interpreted the statements differently. The statement, "When I

use the Internet, I can figure out how to find information better than other kids"

compares the survey respondent with other students. Consistent with other

statements that compare the survey respondent to other students, this statement

also elicited low scores from the students in all three groups. In response to this

statement, Group I scored a mean of 3.33, Group II scored a mean of 2.88, and
Group III scored a mean of 3.31. A one-way ANOVA indicated no statistical
significance in the scores, df=2, F=1.045, p=.361.

The statement, "I can figure out info better than before" does not compare the

survey respondent with other students. Therefore, students rated themselves better

in this category, with Group I averaging 4.38, Group II averaging 4.29, and Group Ill

averaging 4.15. A one-way ANOVA also indicated no significance for this analysis:

df =2, F =.255, p =.777. The total mean for this statement for all three groups was

4.26, which was a full 1.11 points higher than for the statement that compared the

survey respondent to other students.

In summary, the three groups' self-efficacy about computer use and Internet

searching abilities were similar. For most statements, students in all three groups

rated themselves similarly. However, there were two statements in which the group

means differed enough to produce statistical significance. The first statement that

produced statistical significance was, "I think that I am good at using the Internet"

which yielded p=.007. The mean for Group III was the lowest of the three groups.
Yet it was Group III that produced the highest Total Essay Score and the best score
in all three of the subcomponents of writing. A discussion of the

possible reasons will take place in the final chapter.

The second statement that produced statistical significance was "I understand

how to use the Internet as well as other kids do." The ANOVA produced a p=.013.
Again, Group III had the lowest mean score but produced the highest Total Essay
Score and the highest score in the three subcomponents of writing. At this point, it is
unclear whether students' ratings of their self-efficacy have any relation to their writing ability

or if the Internet instruction gave them confidence in their browsing ability which then

allowed them to search for information with confidence and write with confidence.

In the preceding section, I analyzed three factors that might account for

variance in students' essay scores: students' acquired writing abilities, students'

access to computers and the Internet, and students' self-efficacy. In summary, an

analysis of students' writing abilities as measured by the grades that each teacher

assigned revealed group means very close to each other. An analysis of the group

means of their end-of-fall-semester grades using ANOVA indicated no significant

difference in this area that might account for any variance in their essay scores.

While not every student had a computer or access to the Internet, the group

means were comparable; an ANOVA indicated no statistical significance for this

factor.

The last factor, students' self-efficacy, as reported by students, yielded two

statements that had statistical significance, as discussed above. However, the

groups that rated themselves highly in self-efficacy produced lower scores on the
essay and the writing subcomponents, though only two of those scores were
statistically significantly lower than the scores of the group that rated itself less
efficacious. Thus, these factors

could have led to a narrower range of essay scores among the groups than

predicted. None of the three factors that I analyzed accounts for the variance in

essay scores.

Evaluating the Hypotheses

Before embarking upon the study and collecting data, I made two hypotheses:

H1: The groups with access to the Internet (Groups II and III) will earn better scores

on the essay than the control group--students without access (Group I).

H2: Group III, with training on how to browse the Internet, will earn better scores on

the essay than Group II.

Now that data have been collected and analyzed, I can evaluate the

hypotheses. The data supported hypothesis 1, as stated above, in two situations.

Groups II and III, which had access to the Internet, performed better than the control
group in the Total Essay Score as well as in the three subcomponents of writing.
Statistical differences existed in only two areas: the Total Essay Score and
usage/mechanics, and only when I compared the control group with Group III, which
received Internet instruction. Results indicate that

the use of the Internet alone, without any instruction, does not produce enough of a

difference in students' writing scores to be statistically significant. Consequently,

students should receive instruction on how to use the Internet in order to write a

better essay, with scores that will be significantly better than without the use of the

Internet.

Hypothesis 2, stated above, was not supported. While the means for Group III
were higher in all areas of writing, including the Total Essay Score and the three
subcomponents of writing, an ANOVA showed no statistical significance.

In closing, the purpose of the study was to determine if allowing students to

use the Internet during the planning portion of their writing process would produce

better scores on an essay than without the use of the Internet. While the study only

produced a few statistically significant results, it did produce some medium and

some large effect sizes which are of interest. From the study, we can see that

students who used the Internet without first receiving instruction on searching for

information did not produce scores that were statistically significant when compared

to the control group, but they did produce effect sizes ranging from .22 to .44. In

order to produce scores that were significantly different from those of the control group,
students needed Internet instruction before using the Internet during the writing

process. I will discuss the implications of these results in the next chapter.

CHAPTER FIVE

DISCUSSION AND CONCLUSION

At the beginning of this study, I set out to study whether access to the Internet

would affect students' essay scores. Additionally, I hoped that the findings would add

to the body of knowledge on elementary school students' current understanding and

use of the Internet. In this chapter, I will summarize the research conducted, report

the findings of the study, add my personal interpretation of the findings, and discuss

implications for the K-12 system, specifically for school districts, teachers, and

students. I conclude this chapter by discussing the limitations of the study and

offering suggestions for further investigation of the topic.

Summary of Research

This study set out to examine whether allowing 4th and 5th grade students

access to the Internet during the writing process, specifically during the planning

stage, would affect the essay scores they earned. There were three groups in the

study: the control group and two treatment groups. One treatment was to allow

students access to the Internet during the planning stage and have students browse

the Internet based on their pre-acquired knowledge of browsing the Internet. The

other treatment group received three 45-minute sessions on how to search for

information on the Internet before they were allowed to use the Internet during the

planning stage of writing. The dependent variable for all three groups was the essay

score that students earned. The independent variables included: group assignment,

responses to the Internet Self-Perception Scale and the Behavior Correlates

Questionnaire, students' grades in Language Arts, and gender.



The design of this study involved random assignment of participants into one
of three groups. Since participants were randomized at the beginning of the school year

and placed into classes, the grouping for this study appears as intact classes of

students. To obtain the data (essays), students were asked to write in response to

this writing prompt: David's class has been learning about the early colonial days.

For a class project, he has to find out what life was like for people living in the 1700s.

Pretend that you are David and write about life during the colonial days. All three

groups responded in writing to this prompt using pencil and paper and wrote on the

same day in December of 2007. All students were given 90 minutes total for the

planning, writing, editing, and revising stages of writing. The two groups that used

the Internet used 30 of the 90 minutes to search the Internet, leaving them 60

minutes to write their essays.

A total of 49 student essays and 46 responses to the two measures (3
missing) were collected and analyzed. Two pre-service teachers from the local

school of education used the Virginia DOE's writing rubric to assess the students'

essays. The method of calibrating the two readers, as well as the inter-rater reliability,
was reported in chapter three.

Summary of Findings

In this section, I will report several major findings from the research study.

The first set of findings relates to the scores students obtained in response to the

Colonial Days writing prompt. The second set of findings comes from students'

responses to the Behavior Correlates Questionnaire and the Internet Self Perception

Scale. The third set of findings relates to an analysis conducted on gender. After

each section in which I report the findings, I will add my own personal interpretation

of the findings.

Brief Summary of Scores Obtained From Essays

In this section, I will discuss comparisons that yielded statistical significance

as well as effect sizes. Between-group comparisons yielded statistical significance in
two areas. When compared to the control group, the mean scores for Group II and
Group III, the two groups that were allowed to use the Internet, were higher in all

subcomponents of writing and in the total essay score. Group II, the group that was

allowed to use the Internet based on the students' pre-acquired Internet skills,

however, did not produce a mean that reached statistical significance in any of the
subcomponents or in the total essay score when compared to the control group.
The scores for Group III, when compared to Group II, did not reach significance

either.

Two areas of the essay reached statistical significance, and both occurred
when comparing the control group with Group III, the group that received
Internet instruction. The first significant finding is that Group III scored better
than the control group in the Total Essay Score (p=.053), that is, the sum of the
scores for the areas of composing, written expression, and usage/mechanics. The
second significant finding is that Group III scored better than the control group
in the area of usage/mechanics (p=.028).

An analysis of effect sizes among treatment conditions was also completed.

The effect sizes ranged from small to large. In short, large effect sizes
were obtained in the Total Essay Score when the control group was compared to
Group III (.90) and when the control group was compared to Group W (1.01). A large
effect size was also seen when we compared the control group with Group III in the

area of usage/mechanics (.83).

Medium effect sizes were seen in four comparisons: a) in composing, when we
compared the control group with Group III (.65); b) in composing, when we compared
the control group with Group W (.52); c) in written expression, when we compared
the control group with Group III (.50); and d) in usage/mechanics, when we compared
the control group with Group W (.63).

Discussion of Findings on Essay Scores

The control group scored the lowest when compared to the two groups that
were allowed 30 minutes of Internet browsing time. Group III, the group that received
Internet instruction, performed the best, even when compared to Group II, the group
that was allowed Internet browsing time but received no Internet instruction. The data

reveal that statistical significance derived from only two areas: total essay score and

usage/mechanics. From these findings, we can make several possible conclusions.

First, we can conclude that students who use the Web when writing an essay that
requires domain knowledge will only produce a better essay if they are given formal
instruction on how to search for information on the Web. Without instruction (i.e.,
Group II), the essays that students produce may not be any better than the essays
produced under standard administration, that is, essays written without the use of the
Internet.

It is not surprising that the two groups that browsed the Web for information

scored better than the control group in Total Essay Score. Though both readers and

students are told that the essay does not have to contain factual information, it's

possible that the students still felt the need to have factual information in order to

respond effectively to the writing prompt. It's possible that the factual information the

Internet users obtained gave them the confidence to write a better essay which

produced higher scores in all of the writing subcomponents. After all, research

indicates that students' confidence in their writing ability plays a major role in their

writing performance (Klassen, 2002).

It is interesting to note that the mean score for Group III in usage/mechanics

reached statistical significance when compared to the control group. This

phenomenon cannot be easily explained. A reasonable inference might be that

students in the control group spent more mental energy on expressing the content

and ideas required to answer the question, leaving them little energy to focus on the
grammatical skills that are evaluated in the area of usage/mechanics. A study by

Branch (2004) found that college students gained confidence in their ability to search

the Internet when they were given instruction on how to use key words and search

engines. It's likely that the instruction Group III received gave them greater

confidence. It's possible that the factual information gained from the Web gave the

students some grammar self-efficacy and therefore they used it during the writing of

the essay (Collins & Bissell, 2004).

Another reasonable inference might be that the groups that used the Internet

consciously or subconsciously remembered specific wording or phrasings from the



readings and used them in their writings though they were not allowed to take notes

during the reading. Both groups that were allowed to use the Internet scored higher

in all of the writing subcomponents as well as the Total Essay Score. Only Group III

produced statistical significance in Usage/Mechanics as compared to the control

group. It is possible that students in Group III used words and phrases from the

information they read on the Internet in their essays. It is not apparent why Group II

did not produce statistically significant results in Usage/Mechanics when students in

this group also used the Internet.

Another explanation for the control group's lower performance might be that

the teacher for the control group was in her third year of teaching and therefore
perhaps less adept at teaching grammar skills than the teacher for Group III, who
was entering his 13th year of teaching. It has been well documented that a teacher's

years of experience affect student achievement (Darling-Hammond, 2004).

A second item of discussion is the fact that the group means between Group

II and Group III did not reach statistical significance. The lack of statistical

significance may have derived from several factors. The first factor is related to time

and duration of the Internet instruction. The Internet instruction consisted of three

45-minute sessions on effective ways of searching for information on the Internet. In

each of the 45-minute sessions, the teacher spent about 30 minutes demonstrating

the new skill the students should learn to use, leaving the students with about 15

minutes of practice time. These 15 minutes may not be enough time for students to
learn the requisite skill. Lauw, Muller, and Tredoux (2007), in their study of 11th
grade students in South Africa using a math software program, found that the more

time students spent on learning the software, the more improvement they showed.

We can extrapolate from that study to say that the more time students spend

browsing the Internet, the more they would have improved their search skills.

Another reason why Group III did not perform better than Group II might be
the duration of the instruction. It's possible that three sessions are not enough
for students to understand the concepts or master the skill

they were to learn. Students might benefit more if the three Internet lessons were

increased to five to ten sessions. Students would also benefit more if the
last few lessons consisted of an actual assignment in which they search for
information on a topic (e.g., how books are constructed).

Additionally, during the three Internet instruction sessions, the teacher paired

the students (2 students per computer), thinking that this would be more helpful to

students with lower reading abilities. Because of the pairing, students had to take

turns using the computer, further reducing the amount of practice time. As a result,

students did not have time to practice the skill that the teacher taught them. During

the actual writing days, students each received their own laptop which was not

equivalent to the practice conditions. A remedy for this might be for each student to

receive Internet instruction on their own individual laptop, to practice individually on a

laptop, and then to search on their own laptop on the actual writing days.

In this section, I offered several reasons for the lack of statistical significance

between Group II and Group III, but I did not discuss effect size. An analysis of effect

sizes yielded a range from .31 to .44 which almost reaches a medium effect size. It's

possible that in the current cultural-historical environment of the digital age, students

already possess the general Internet search skills they need in order to retrieve

information to answer an essay question. The small to low-medium effect sizes that

we obtained when comparing students in Group II with students in Group III cause
us to speculate that if students in Group III had been given more Internet instruction

sessions as well as practice conducting searches, their essays might have produced

larger effect sizes.

Discussion of Effect Sizes

Thompson (2007) highly recommends that researchers examine the effect

sizes of the treatment in order to consider the effectiveness of the intervention. This

is particularly important when the data do not produce statistically significant results.

Thompson suggests that we "look at effects in context and to evaluate the precision

and replicability of effects within a literature" (p. 430). He further explains that an

intervention with a small effect size may be important if it is shown that the

intervention produces improvement because small improvements add up

incrementally over time. Table 11 in Chapter 4 summarizes the effect sizes between
the control group and the other groups in all measured areas. Table 14 is a

comparison between the control group and the other groups in the Total Essay

Score.

Table 14

Comparison of Effect Size (Cohen's d) from Group Membership on Total Essay Score

            Group II    Group III    Group W
Control     .406        .827         .570



We can see that when we compare the control group to other group

conditions in the Total Essay Score, the treatment produced effect sizes ranging

from .406 to .827. Thus, the effect size of .406 can be characterized as low-medium
and the effect size of .827 as large.

Thompson states: "In short, we need to ask (a) whether our effects are

noteworthy from a practical point of view, and (b) to whom our effect size results

generalize" (p. 430). This study produced effect sizes ranging from .406 to .827 in

the Total Essay Score which are striking results. From the practitioner's point of

view, the effect sizes obtained are noteworthy and school personnel should consider

allowing students to use the Internet during the writing process when content

knowledge is required in order to produce higher essay scores. This can make a

difference between passing the writing proficiency exam and having to repeat the

exam or attend summer school. It is crucial for students to pass the proficiency exam

so they will not have to repeat the same grade again.

Summary of Findings on the Behavior Correlates Questionnaire and the

Internet Self-Perception Scale

The two measures administered to students were the Behavior Correlates

Questionnaire (BCQ) and the Internet Self-Perception Scale (ISP). Students'

answers to the BCQ gave us information on their uses and perceptions of the

Internet. Students' answers to the ISP gave us their views on their self-efficacy when

using the Internet.



Behavior Correlates Questionnaire (BCQ)

The results from the BCQ indicate that 63.6 percent of the students felt that

the Internet did not affect the length of their papers and that 47.7 percent claimed

that the Internet did not change the quality of their writing. Furthermore, 63.6 percent

of the students responded that they would still choose to go to class, even if they

were able to obtain all information from the Internet.

Before analyzing the results of the BCQ, I would have guessed that students were heavily reliant on the Internet for information and viewed it as a resource for their writing. It surprised me that almost two-thirds of the students did not feel that the Internet helped them to write a longer paper and that almost half of the students felt that the Internet did not affect the quality of their writing. Is it possible that students are aware that the quality of their writing rests on their critical thinking skills and is not influenced by an outside source like the Internet? It is surprising, and reassuring, that students as young as fourth and fifth graders know that the Internet does not teach one how to use correct grammar; grammar has to be learned in a formal manner and applied appropriately.

I was also surprised to see that 63.6 percent of the students would choose to attend class even if they were able to obtain all necessary information from the Internet. This is a credit to the teachers, who have created a supportive learning environment that keeps students engaged. This statistic also indicates that students attend school for more than just the academics; they attend school for social and emotional reasons as well. Ding and Hall (2007) found that students reported disliking school more and more as they got older. Ding and Hall stated it best when they wrote that "students' attitudes and feelings about their learning environment may contribute to how long they stay in school, how much they learn while they are there, and whether they succeed after they leave school" (p. 161). If schools can keep these elementary students engaged, the attrition rate at the secondary level is likely to decrease.

Internet Self-Perception Scale (ISP)

Students' answers to the ISP reveal that the three groups had different levels of self-efficacy about their own Internet use. Group I had a mean of 4.33, Group II had a mean of 4.37, and Group III had a mean of 3.46. A one-way ANOVA yielded a statistically significant difference among the three groups (df = 2, F = 5.731, p = .007). It is interesting to note that Group III, the group with the lowest confidence level, nonetheless scored the best in all measures of writing despite its low self-efficacy for Internet use. This strengthens the argument that it was the intervention this group received that allowed its students to earn higher essay scores.
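For readers who want to reproduce this kind of group comparison, the sketch below runs a one-way ANOVA on three sets of ISP scores. The scores shown are hypothetical placeholders, not the study's data; only the analysis itself mirrors the procedure described above.

    from scipy import stats

    # Hypothetical ISP means per student for each group (placeholders, not the study's data).
    group_i   = [4.1, 4.5, 4.3, 4.4, 4.2, 4.6]
    group_ii  = [4.2, 4.6, 4.4, 4.3, 4.5, 4.2]
    group_iii = [3.2, 3.6, 3.4, 3.7, 3.3, 3.5]

    # One-way ANOVA; the between-groups degrees of freedom are k - 1 = 2.
    f_stat, p_value = stats.f_oneway(group_i, group_ii, group_iii)
    print(f"F = {f_stat:.3f}, p = {p_value:.3f}")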

Students' responses to the ISP yielded statistically significant group differences on several items. The first statement that yielded statistical significance was "I think that I am good at using the Internet" (p = .007). The only other item that yielded statistical significance was "I understand how to use the Internet as well as other kids do" (p = .013). In both instances, it was Group III that scored significantly lower than the other two groups. Yet Group III was the group chosen to receive Internet instruction, and its students performed well on their essays. This finding might be attributable to the confidence that students gained when taught how to use the Internet, a confidence which builds their efficacy in writing.

The ISP includes statements that compare the survey respondent with other students, such as "When I use the Internet, I can figure out how to find information better than other kids." It is interesting to note that when students respond to statements such as these, they usually rate themselves fairly low. This indicates two things: 1) students at this age are aware of their skills as compared to other students, and 2) they are not confident in their computer skills. Survey designers and teachers should take note of the wording of the questions that evoked anxiety in students in order to create better, more accurate surveys and exams.

Findings on Gender

A noteworthy finding emerged from an analysis of gender. There were 27 boys and 22 girls in the study. The group mean for the girls was slightly higher than the boys' mean in the total essay score and in two of the three writing subcomponents: written expression and usage/mechanics. The group mean for the boys in composing was actually higher than the girls' group mean (4.37 vs. 4.0), which was a surprising finding. An ANOVA on gender revealed no statistically significant differences, indicating that boys and girls produced work of comparable quality.

Discussion of Findings on Gender

The gender gap in writing has been the topic of much discussion. The analysis of gender in this study did not show a gender gap; boys were not lagging behind girls in their writing abilities. The results here correspond with a recent study by Graham, Berninger, and Fan (2007), which found that girls may have better attitudes toward writing but that there was no statistical difference in students' writing as a function of gender.

The findings of this study may be unique due to the characteristics of the teacher participants. Two of the three teachers who participated in this study are male, and each has taught at Southside Elementary for at least 10 years. The students, particularly the fifth-grade boys, had the two male teachers in the fourth grade. The boys may have closed the gender gap because of two factors. The first is that the boys already knew the teachers' routines and expectations. When teachers make their classroom routines and expectations clear, students are more engaged and make greater academic progress (Bohn, Roehrig, & Pressley, 2004). The second reason, many speculate, is that male teachers know how to teach to the "minds of boys" and their learning styles (King & Gurian, 2006). Dee (2006), in his study of the dataset from the National Educational Longitudinal Survey (NELS) of 1988, which drew data from 25,000 eighth-grade students, found that a teacher's gender matters and that boys performed better when placed with male teachers.

Additionally, if male teachers handle discipline problems among boys better (Dee, 2006), and if the male teachers in this study minimized discipline problems in their classrooms, then they created a positive learning environment that produced better achievement results for all students. During one of my visits, which occurred after the data collection, I observed all of the fourth and fifth graders pooled into one large group in one of the male teachers' rooms in order to engage in "constructive problem talk" (Robinson & Timperley, 2007). The group's participation in creating a "problem-solving environment" could also be a factor in producing a positive learning environment and higher writing scores (McIntyre, Kyle, & Moore, 2006).

Limitations of the Study

This study had several limitations that may have affected its outcomes. The first limitation was the lower than expected sample size. A total of 49 essays were analyzed for this study, which, when distributed across conditions, amounted to 16 or 17 essays per group. According to the Power Calculator, I needed an N of 25 in each group to achieve power between .60 and .80 for effect sizes (f) between .30 and .40. I obtained effect sizes in the medium to large range, but because of the small N, it is difficult to generalize the results of this study to a larger population.
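An equivalent a priori power calculation can be reproduced with standard statistical software. The sketch below uses the statsmodels power routines for a three-group one-way ANOVA; the alpha level of .05 is an assumption on my part, and the effect sizes simply span the f range reported above.

    from statsmodels.stats.power import FTestAnovaPower

    # A priori power analysis for a three-group one-way ANOVA (alpha of .05 assumed).
    analysis = FTestAnovaPower()
    for f in (0.30, 0.40):  # Cohen's f values spanning the range reported above
        total_n = analysis.solve_power(effect_size=f, nobs=None, alpha=0.05,
                                       power=0.80, k_groups=3)
        print(f"f = {f:.2f}: roughly {total_n / 3:.0f} students per group for power = .80")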

The second limitation derives from the teachers' differing years of classroom experience. The teacher of Group I, the control group, had only 3 years of experience, while the teachers for Groups II and III both had over 10 years of experience. The teachers were randomly assigned to the groups, but the greater experience of the teachers for Groups II and III may have had an unintended and uncontrolled effect on the results. Even if the teachers had been deliberately assigned to groups, the teacher with only 3 years of experience would still have had at least 7 fewer years of experience than the other two teachers, so the results for that teacher's group would always be open to question. Correcting for this would require teachers with an equal number of years of experience; as conducted, this systematic bias is inherent in the study and cannot be factored out.

The third limitation derives from the brevity of the Internet instruction. The students who received instruction on how to use the Internet received only three 45-minute sessions on browsing the Internet. This limitation was due to the constraints that the school calendar and the structure of the school day imposed on the classroom teacher. Because the teachers share the students, it was important that each teacher stay on pace with the other grade-level teachers, which made it difficult for the teacher of Group III to spend more time on the Internet instruction component of the study.

Implications

In this section, I discuss the implications of the study's findings as they apply to K-12 schools, teachers and teacher educators, students and student learning, and finally society. All of the proposed implications are premised on the assumption that Internet access (with prior instruction) during the writing process will produce significant gains for all students.

Implications for Schools and School Systems

Equitable access to technology such as computers and the Internet has been a controversial topic for years. Schools are often viewed as the great equalizer for social injustices, including issues of equity and access. If the Internet becomes integral to academic learning and to the retrieval of information, which in turn leads to academic success and ultimately to upward class mobility, then we have to ask whether it is incumbent upon schools to provide computers to students who cannot afford them, just as schools provide books to every student regardless of ability to pay.

Providing computers to students who cannot afford them seems fair and equitable, yet it will undoubtedly increase the operating costs of state governments and school systems, which are consistently underfunded. In 1990, the national expenditure for education in the United States was $264.2 billion. The National Center for Education Statistics (2007) estimated that the expenditure for the K-12 system would reach $556 billion for the 2005-2006 school year. The taxes that fund the schools cannot rise accordingly (i.e., 40 percent). If we add the cost of a computer for every child, or for each child who does not have one, the cost of operating schools will climb even higher.

The second issue for schools and school systems to consider is their current method of assessment. Schools currently assess students' writing by offering a writing prompt and asking students to respond to that prompt. A study by Weiler (2004) found that students from Generation Y (born between 1980 and 1994) turn to the Internet and electronic sources for academic, personal, and professional information, relying less and less on books. Students in current K-12 classrooms were born after 1994; these students grew up in the digital age, viewing digital tools such as the Internet as a source of information. The current methods of assessment in the schools, formal and informal, differ from students' typical behavior of turning to the Internet when assigned homework or a research paper. It is conceivable that school systems may have to restructure writing assignments and writing assessments to include access to the Internet during the writing process.

The third issue for schools to consider comes from students' views of time. Weiler (2004) found that Generation Y students are concerned with saving time and view the Internet as a time-saving tool. We can generalize this to students in the modern K-12 classroom. To save time, students need to learn how to search the Internet effectively for the information they are seeking. Weiler suggests that students prefer to use the school's library website and an approved list of reliable websites, saving them the time required to evaluate the authenticity of a website. For younger students who are still developing cognitively, it might be more beneficial if schools subscribe to a database of approved websites geared to that developmental stage (e.g., AskforKids.com).

To summarize this section, the changes schools and school systems would have to make include providing computers for students who cannot afford them, changing methods of evaluation for tests and classroom assignments, and possibly subscribing to databases of pre-approved websites.

Implications for Teachers and Teacher Education

The results of this study present several implications for teachers and teacher

education. The first implication requires a paradigm shift for education personnel, a

shift from teacher as the main source and provider of information to the teacher as

one of many resources in obtaining information. To be a resource to students in the

information-seeking process, teachers will need instruction and training on how to

use the Internet themselves. After all, teachers cannot teach something that they,

themselves, do not know how to use.



Secondly, teachers will have to redesign lessons for student learning. Internet-based lessons cannot be teacher-centered or lecture-based; they must be student-centered and activity-based. For example, a current lesson of importance is learning how to read textbooks (Forsten, Grant, & Hallas, 2003; Myers & Savage, 2005), including pictures, captions, graphs, and headings. With the diminished value and utility of textbooks (Daniels & Zemelman, 2004), teachers will have to replace this unit with information-seeking lessons, such as lessons on using the Internet. Teachers will then have to allow time for informational searches on the Web. For students without access to the Internet at home, teachers will have to schedule time during the school day to give these students an equal opportunity for success. Next, teachers need pedagogical methods for teaching students how to use the Internet. Teachers need to learn how to incorporate the World Wide Web and other Internet features into their lessons. This may necessitate learning how to navigate through websites, webpages, and hyperlinks. It may also require that teachers learn to incorporate WebQuests into lesson plans (Van Fossen, 2004) to teach content rather than relying on textbooks. According to Zukas (2000), a "WebQuest is a structured exercise created by the teacher that asks students to solve a problem or find an answer to a question or questions by finding information on the web" (n.p.). Initially, this will consume much of the teachers' time, but the rewards in student learning will come.

Another implication of teachers' use of technology, specifically WebQuests, is that they allow teachers to differentiate instruction for various learners. Schweizer and Kossow (2007) explain that WebQuests benefit teachers of gifted students because WebQuests allow gifted students to delve deeper into a subject, proceed at their own pace, and inquire into a real-life problem, and because they provide students "an authentic way to synthesize information gathered from the Internet" (Schweizer & Kossow, 2007, p. 30). Teachers can also use WebQuests to adapt instruction for students with learning disabilities (Skylar, Higgins, & Boone, 2007). WebQuests motivate students to learn and allow them to proceed at their own pace. Hyperlinks allow students to get to the exact information they need without leafing through pages and pages of printed text. Combined with text-to-speech technology, students with lower reading ability levels can access the information more easily.

The results of this study also indicate that teachers and/or school librarians should use instructional time during the school day to teach students Internet search skills so that they can become effective consumers of the Web. For those who work with younger students, teaching them how to perform searches, create keywords for search terms, and visually navigate a website is crucial. In October of 2006, there were 100 million websites (Walton, 2006), and that growth will surely continue. Dealing with "information abundance" is a genuine prospect. Teachers have to understand, and then teach students about, search engine biases (DiMaggio, Hargittai, Neuman, & Robinson, 2001) so that students know to search for information using several search engines and to look beyond the first page of results after a query. Additionally, since anyone can create a website, teachers will have to teach students how to discern a credible website from a bogus one (Kral, 2007).

Of course, the length of the school day remains the same. In allotting time to teach students how to use the Internet, instruction in something else must be truncated. This is where the paradigm shift from memorizing facts and data to seeking facts and data is critical to the success of teachers and students.

There are implications for teacher education as well. Schools of education will now have to require preservice teachers to show competency in the use of technology. While most schools of education offer a technology course, it is not required at all schools. State licensing boards may have to consider requiring proficiency in technology in order to issue new teaching certificates, and some states may have to consider technology proficiency in order to reissue or renew teaching licenses. Since March of 1998, the state of Virginia has mandated that institutions of higher education "incorporate technology standards in their approved program requirements and assess students' demonstrated proficiency of the standards" (VDOE, 1998, n.p.) and that teachers and other school personnel meet the Technology Standards for Instructional Personnel. In order for veteran teachers to renew their teaching licenses, they also must prove that they have invested hours in learning about technology.

New teachers entering the profession are likely to know more about technology, and to incorporate its use more readily, than veteran teachers. The integration of the Internet into the classroom will therefore require that veteran teachers learn about technology and the Internet as well. This learning can come from in-service training provided at the school or district level, or from classes offered in the community, at community colleges, or at universities.

Implications for Students and Student Learning

There has been a shift from memorizing facts and figures (Johnston, 2000) to learning by understanding, doing, and experiencing. Weiler (2004) recommends that college instructors "instruct as much as possible by raising questions, encouraging discussion, and using hands-on activities than by lecturing" (pp. 51-52). The Web allows us to retrieve information 24 hours a day by typing in a few keywords and clicking the mouse. Information about health, demographics, current events, famous quotations, and numerous other subjects can be retrieved at 3 a.m. A student who has not memorized the properties of an element from the periodic table can turn to the Internet while sitting in his pajamas, eliminating the need to ask his parents for a ride to the library. Learning will no longer be restricted to school operating hours; students can access information and "talk" with peers and teachers about homework assignments beyond the school day.

Of course, information found on the Internet can only help a writer respond to a writing prompt that calls mainly for factual information. The Internet would not help students with writing assignments that require critical thinking skills or the formulation of an opinion. For example, if a writing prompt asks a student to explain the proverb "Laziness in youth spells regret in old age" and to agree or disagree with it, the student would have to use critical thinking skills and formulate an opinion about the proverb. Another example would be a question that asks for a stance on a controversial topic. While the writer can research background information, the writer still has to consider the best approach to take in order to complete the essay.

Trupe (1997) suggests that we move away from the traditional essay as a means of evaluating writing and toward digital portfolios in which students can demonstrate their ability to write with purpose, write for an audience, collaborate with others, and relay information. Anyone can access a digital portfolio over the Internet from any location in the world simply by entering a password. Some schools allow open access to students' digital portfolios, so that anyone in the community can view their contents.

Archer (2007) writes that digital portfolios will change the classroom in many ways. First, digital portfolios can serve as a means of proving that students meet the requirements measured on standardized tests; more than half of the districts in the state of Rhode Island are using digital portfolios. Second, digital portfolios give students more ways to show what they know. The example that comes to mind to illustrate this point is one student inserting a written poem into her portfolio to demonstrate her understanding of metaphors while another student inserts an audio clip of the metaphors he found in a rap song.

Fahey, Lawrence, and Paratore (2007) write that digital portfolios can make learning public to the school and the classroom community. This will keep parents informed of their children's learning and demystify the learning that goes on in classrooms. The authors also write that displaying students' work will help them "acquire an understanding that literacy is a social act and good writers and good readers improve their comprehension and composition in collaboration with others" (p. 463). If members of the larger community can view students' learning through the artifacts placed in a digital portfolio, then they are more likely to understand the schools and to approve the tax increases needed to fund them.

Digital portfolios can also enable collaboration in ways that were not possible before. Students from other classrooms and even other schools can give feedback on the published work. The last step in the writing process is publishing students' writing, and displaying student work on the Internet for the global community, or in an electronic portfolio, is the ultimate form of publishing.

Finally, with the integration of the Internet into students' lives, it will be natural for students to shift from viewing the teacher as the source of information to viewing the teacher as an information guide. The teacher, as guide, will help students navigate unfamiliar territory in order to reach their destination, give advice when needed, point out items of interest, and explain the meaning or significance of an item when appropriate.

Social Implications

In the early 1900s, Western society moved from a farming culture to an industrial society. The global community is now going through another revolutionary change, from a manufacturing economy to an information economy. In the current digital era, few will argue that access to information is anything but crucial for success. The results of this study indicate that when students are given access to the Internet during the planning stage of writing, they have the information they need to earn higher essay scores than they would without that access. Therefore, it is important that every student obtain access to the Internet, at school and at home. It is not uncommon for teachers to assign essays for homework, which leaves students without computer and Internet access at a disadvantage.

On a larger scale in the academic arena, there will be a change in the way we define literacy. We currently define literacy as the ability to read, write, and compute. As new facts and data are added to our digital world, knowledge as we know it will have to merge with technology, and literacy will be redefined as digital literacy or information literacy (Dickinson, 2006). According to Dickinson (2006), "Information literacy is a process, incorporating location and access, information problem solving and decision making, and information utilization. It is composed of skills and attitudes" (p. 25). As a society, we will have to take steps to ensure that the children of our future have not just the facts but also the skills and attitudes necessary to solve complex problems and to utilize information effectively. As a society, we must also ensure that everyone has access to information so that, in Dickinson's words, "there should be no information inequities" (p. 27).

DiMaggio, Hargittai, Neuman, and Robinson (2001) state that the Internet can be used for social change, and current events indicate this to be true. Politicians are asking ordinary people throughout the country to submit questions on social network sites such as YouTube so that the videos can be streamed into conventions and debates, and they are answering those questions as a way of reaching out to the masses (Monahan, 2007). One benefit is that this practice has promoted civic engagement among younger voters. Social change is resulting from other videos shown on YouTube as well. Videos of the inhumane treatment of cattle in a slaughterhouse prompted the Los Angeles Unified School District to stop buying beef from that slaughterhouse (Wire, 2008). Video of a sheriff's deputy intentionally dumping a quadriplegic man out of his wheelchair prompted outrage from the community and prosecution of the deputy (Poltilove & Morelli, 2008), and it will likely lead to reforms in law enforcement procedures throughout the country.

In the classroom, the presence of the Internet will likely reduce students' use of traditional texts and textbooks to search for information. One specific example is the use of newspapers and magazines for current events. The Internet provides more up-to-date information, and when currency is important to a project, newspapers and magazines will seem irrelevant. Information is a form of social capital (Johnson, 2007), and some researchers feel that access to the Internet allows users to participate in society and gain cultural capital (Drentea & Moren-Cross, 2005). Students in rural communities without library buildings will have access to the same information on the Web as students in urban areas, narrowing the digital divide between rural and urban areas and, therefore, narrowing the divide in social capital.

Suggestions for Future Investigation

The Internet has slowly permeated the K-12 classroom, yet few studies have been published about its effects on students in content-specific areas such as reading and writing. Studies of college students' use of the Internet are emerging as I present this dissertation. This researcher hopes that more researchers will investigate the effects of the Web on K-12 participants at various grade levels, in various content areas, and with various groups of students (gifted, ESOL, special education). Future studies should also try to recruit a larger N in order to achieve greater statistical power and more stable effect size estimates.

In this study, I did not have a qualitative component delving into students' perspectives on the Internet or on writing. A suggestion for future researchers would be to add a qualitative component in the form of interviews or focus groups. It would also be interesting to see whether students' views of the Internet, as measured by the ISP, changed after their use of the Internet during the planning stage of the essay. These data can only be gathered from Groups II and III, which were allowed to use the Internet. A post-measure of students' attitudes using the ISP would tell us whether their perspectives had changed.

If anyone attempts to replicate this study, I hope they will recruit classroom teachers with comparable years of experience in order to control for this variable. Secondly, if researchers gather participants from different schools, they should consider using standardized test scores rather than teacher-issued grades to measure students' writing abilities before the treatment. Each state uses the same standardized test throughout its schools, which would help normalize this variable.

Closing Thoughts

The U.S. Department of Education states that it wants school systems and administrators to use empirical data to make decisions. The findings of this study add to the existing data on school practices but should not be generalized until further studies have been conducted. The findings from this study are preliminary and indicate that schools should further investigate the effectiveness of providing students with formal instruction on using the Internet, as well as allowing the use of the Internet in more writing assignments and projects. This "change" will align with students' current thoughts and practices, allowing them to use the cultural tools in their possession.

REFERENCES

Andersen, D. (2006). Does Wikipedia hurt scholarship?: It promotes sloppy "first hit"

student searches. American Teacher, 91 (2), 4.

Andrade, H. G. (2005). Teaching with rubrics: The good, the bad, and the ugly.

College Teaching, 53(1), 27-30.

Anstendig, L., Driver, M., & Meyer, J. (1999). Web research and hypermedia:

Tools for engaged learning. Journal of Excellence in College Teaching, 9(2),

69-91.

Archer, J. (2007). Digital portfolios: An alternative approach to assessing

progress. Education Week, 26(30), 38.

Artelt, C. (2005). Cross-cultural approaches to measuring motivation. Educational

Assessment, 10, 231-255.

Beghetto, R. A. (2006). Factors associated with middle and secondary students'

perceived science competence. Journal of Research in Science Teaching,

44(6), 800-814.

Bhargava, A., Petrova, A. K., & McNair, S. (1999). Computers, gender

bias, and young children. Information Technology in Childhood

Education, 1999, 263-74.

Bohn, C. M., Roehrig, A. D., & Pressley, M. (2004). The first days of school in the

classrooms of two more effective and four less effective primary-grades

teachers. The Elementary School Journal, 104(4), 269-87.

Boster, F. J., Meyer, G. S., Roberto, A. J., & Inge, C. I. (2002). A report on the

effect of the Unitedstreaming application on educational performance.



Retrieved October 14, 2006 from http://www.iste.org/Template.cfm?

Section= Home&CONTENTID=3069&TEMPLATE=/ContentManagement/Cont

entDisplay.cfm

Branch, J. L. (2004). Nontraditional undergraduates at home, work, and school:

An examination of information-seeking behaviors and the impact of

information literacy instruction. Research Strategies, 19(1), 3-15.

Briggs, K. (2005). Alternate Achievement Standards for Students with the Most

Significant Cognitive Disabilities Non-Regulatory Guidance. Retrieved

December 5, 2005 from http://www.ed.gov/policy/elsec/guid/altguidance.pdf

Castellani, J., & Jeffs, T. (2001). Emerging reading and writing strategies using

technology. Teaching Exceptional Children, 33(5), 60-69.

Christmann, E., Badgett, J., & Lucking, R. (1997). Microcomputer-based

computer-assisted instruction within differing subject areas: A statistical

deduction. Journal of Educational Computing Research, 16, 281-296.

Collins, S. J., & Bissell, K. L. (2004). Confidence and competence among

community college students: Self-efficacy and performance in grammar.

Community College Journal of Research and Practice, 28(8), 663-675.

Colorado State University, The Writing Center. (2006). What is the Internet?

Retrieved July 27, 2006 from http://writing.colostate.edu/guides/teaching/net-research/pop2b.cfm

Compeau, D. R. & Higgins, C. A. (1995). Computer self-efficacy: Development

of a measure and test. MIS Quarterly, 19(2), 189-211.

County of Albemarle, Office of Geographic Data Services (2005). County of



Albemarle Information Sheet. Retrieved June 25, 2006 from

http://www.albemarle.org/upload/images/forms_center/departments/communit

y_development/forms/Albemarle_lnformation_Sheet_2005.pdf

Crawford, L., Helwig, R., & Tindal, G. (2004). Writing performance assessments:

How important is extended time? Journal of Learning Disabilities, 37(2),

132-142.

Culbertson, C., Daugherty, M., & Merrill, C. (2004). Effects of modular

technology education on junior high students' achievement scores.

Journal of Technology Education, 16(1), 7-20.

Cunningham, P. (2005). If they don't read much, how they ever gonna get good?

The Reading Teacher, 59(1), 88-90.

Dail, J. S. (2004). Reading in an online hypertext environment: A case study of

tenth grade English students (Doctoral dissertation, Florida State University,

2004). Dissertation Abstracts International, 65(7), 2005 (UMI No. 313737).

Daniels, H., & Zemelman, S. (2004). Out with textbooks, in with learning. Educational

Leadership, 61(4), 36-40.

Darling-Hammond, L. (2004). Inequality and the right to learn: Access to

qualified teachers in California's public schools. Teachers College Record,

106(10), 1936-1966.

Dee, T. S. (2006). The why chromosome. Education Next, 6(4), 68-75.

Dickinson, G. K. (2006). The spirit of inquiry in information literacy. Teacher

Librarian, 34(2), 23-27.



DiMaggio, P., Hargittai, E., Neuman, W. R., & Robinson, J. P. (2001). Social

implications of the Internet. Annual Review of Sociology, 27, 307-336.

Ding, C., & Hall, A. (2007). Gender, ethnicity, and grade differences in

perceptions of school experiences among adolescents. Studies in

Educational Evaluation, 33(2), 159-74.

Drentea, P., & Moren-Cross, J. L. (2005) Social capital and social support on the

web: the case of an internet mother site. Sociology of Health & Illness, 27(7),

920-943.

Duggan, A., Hess, B., Morgan, D., Kim, S., & Wilson, K. (2001). Measuring students'

attitudes toward educational use of the Internet. Journal of Educational

Computing Research, 25(3), 267-281.

Duran, D. G. (2003). Measurement of attitude towards educational use of the

Internet in an English composition course with a comparison of traditional

aged and non-traditional aged students (Doctoral dissertation, West Virginia

University, 2003). Dissertation Abstracts International, 64(6), 2003 (UMI No.

3094581).

Englert, C. S., Zhao, Y., Dunsmore, K., Collins, N. Y., & Wolbers, K. (2007).

Scaffolding the writing of students with disabilities through procedural

facilitation: Using an Internet-based technology to improve performance.

Learning Disability Quarterly, 30(1), 9-29.

Fahey, K., Lawrence, J., and Paratore, J. (2007). Using electronic portfolios to

make learning public. Journal of Adolescent and Adult Literacy, 50(6), 460-

471.

Federal Communications Commission (2006). Children's Internet Protection Act.

Retrieved October 17, 2006 from http://www.fcc.gov/cgb/

consumerfacts/cipa.html

Federal Register (2003). Title I-Improving the Academic Achievement of the

Disadvantaged; Final Rule. December 2003, 68(236). Retrieved December

5, 2005, from http://www.ed.gov/legis1ation/FedRegister/finrule/2003-

4/120903a.pdf

Forsten, C., Grant, J., & Hallas, B. (2003). Reading to learn: Are textbooks too

tough? Principal, 83(2), 31-33.

Friesen, J. (2003). Giving students 21st century skills: A practical guide to

contemporary literacy. MultiMedia Schools, 10(3), 1-5.

Gardner, Howard. (1993). Multiple intelligences: The theory in practice. New York:

Basic.

Gersten, R., Fuchs, L. S., Compton, D., Coyne, M., Greenwood, C., & Innocenti, M.

S. (2005). Quality indicators for group experimental and quasi-experimental

research in special education. Exceptional Children, 71, 149-164.

Graham, S., Berninger, V., & Fan, W. (2007). The structural relationship between

writing attitude and writing achievement in first and third grade students.

Contemporary Educational Psychology, 32 (3), 516-36.

Graham, S., & MacArthur, C. A. (1988). Improving learning disabled students'

skills at revising essays produced on a word processor: Self-instructional



strategy. Journal of Special Education, 22, 133-152.

Gulli, A., & Signorini, A. (2005). The indexable web is more than 11.5 billion

pages (poster session). Chiba, Japan: International Conference on World

Wide Web, May 2005.

Gunzelmann, B., & Connell, D. (2006). The new gender gap: Social,

psychological, neuro-biological, and educational perspectives. Educational

Horizons, 84(2), 94-101.

Guttormsen-Schar, S., & Krueger, H. (2000). Using new learning technologies

with multimedia. IEEE Multimedia (from Institute of Electrical and Electronics

Engineers), 7(3), 40-51.

Hafner, J.C., & Hafner, P. M. (2003). Quantitative analysis of the rubric as an

Assessment tool: an empirical study of student peer-group rating.

International Journal of Science Education, 25(12), 1509-1528.

Hammonds, S. (2003). Impact of Internet-based teaching on student

achievement. British Journal of Educational Technology, 34(1), 95-98.

Hansen, J. (2001). When writers read. Portsmouth, NH: Heinemann.

Hackbarth, S. (2002). Changes in 4th graders' computer literacy as a function of

access, gender, and email networks. Tech Trends, 46(6), 46-55.

Hackett, G. (1999, October 8). Boys close reading gap but still trail in writing:

Performance of 11-year olds. Times Educational Supplement, 2.

Hackett, G., & Betz, N. E. (1989). An exploration of the mathematics self-efficacy/

mathematics performance correspondence. Journal for Research in

Mathematics Education, 20(3), 261-273.



Harris, J. (2003). Seek strategically, find answers appropriately. Learning and

Leading with Technology, 30(5), 50-54.

Hollenbeck, K., Tindal, G., Harniss, M., & Almond, P. (1999). The effect of using

computers as an accommodation in a statewide writing test. Retrieved

July 15, 2006 from http://brt.uoregon.edu/files/3_CmptrAccm.pdf

Horrigan, J. (2006). Data Memo: Rural Broadband Internet Use. Retrieved

January 28, 2008 from http://www.pewinternet.org/pdfs/PIP_

Rural_Broadband.pdf

Hinson, DiStefano, & Daniel (2003). The Internet self-perception scale: Measuring

elementary students' levels of self-efficacy regarding Internet use. Journal

of Educational Computing Research, 29(2), 209-228.

Huberman, B. A., & Adamic, L. A. (1999). Growth dynamics of the World-Wide

Web. Nature, 401, 131.

Jacobson, M. J., & Spiro, R. J. (1995). Hypertext learning environments,

cognitive flexibility, and the transfer of complex knowledge. Journal of

Educational Computing Research, 12, 301-333.

Johnson, C. A. (2007). Social capital and the search for information: Examining

The role of social capital in information seeking behavior in Mongolia.

Journal of the American Society for Information Science and Technology,

58(6), 883-894.

Johnston, C. (2000, November 10). Thanks for the pointless memory. The

London Times Educational Supplement, 33.

Jones, S., & Myhill, D., (2007). Discourses of difference? Examining gender

differences in linguistic characteristics of writing. Canadian Journal of

Education, 30(2), 456-482.

Kadjer, S., & Bull, G. (2004). A space for writing without writing: Blogs in the

Language Arts Classroom. Learning & Leading with Technology, 31(6), 32-

35.

Kelso, E. B. (2005). Middle school students engaging in literary discussion

online. (Doctoral dissertation, New York University, 2005). Dissertation

Abstracts International, (UMI No. 3166529).

Ketter, J., & Pool, J. (2001). Exploring the impact of a high-stakes direct writing

assessment in two high school classrooms. Research in the Teaching of

English, 35, 345-393.

King, K., & Gurian, M. (2006). Teaching to the minds of boys. Educational

Leadership, 64(1), 56-61.

Klassen, R. (2002). Writing in early adolescence: A review of the role of

self-efficacy beliefs. Educational Psychological Review, 14(2), 173-203.

Kleiner, A., & Lewis, L. (2003). Internet Access in U.S. Public Schools and

Classrooms: 1994-2002. Washington, D. C.: National Center for Education

Statistics.

Kostelecky, K. L., & Hoskinson, M. J. (2005). A "novel" approach to motivating

students. Education, 125, 438-442.

Kral, J. (2007). The necessity of website evaluation. School Library Media

Activities Monthly, 23(7), 12-15.

Laramie School District (2007). Six Traits of Good Writing. Retrieved January 23,



2007 from http://www.laramie1.k12.wy.us/instruction/langarts/traits.htm

Larsen, K. (2005). How to analyze online resources: Library lessons. Library Sparks,

3(4), 17-23.

Lauw, J., Muller, J., & Tredoux, C. (2007). Time-on-task, technology and

mathematics achievement. Evaluation and Program Planning, 31(1), 41-50.

MacArthur, C. A., Graham, S., & Schwartz, S. S. (1993). Integrating strategy

instruction into a process approach to writing instruction. School

Psychology Review, 22, 671-681.

MacArthur, C. A., Graham, S., Schwartz, S. S., & Schafer, W. D. (1995).

Evaluation of a writing instruction model that integrated a process

approach, strategy instruction, and word processing. Learning Disability

Quarterly, 18(4), 278-291.

Mastropieri, M. A., Scruggs, T. E., & Graetz, J. E. (2003). Reading compre-

hension instruction for secondary students: Challenges for struggling

students and teachers. Learning Disabilities Quarterly, 26(2), 103-116.

McGreal, R. (1997). The Internet: A learning environment. New Directions for

Teaching and Learning, 71, 67-78.

McIntyre, E., Kyle, D. W., & Moore, G. H. (2006). A primary-grade teacher's

guidance toward small-group dialogue. Reading Research Quarterly, 41(1),

36-66.

McMackin, M. C., & Witherell, N. L. (2005). Different routes to the same

destination: Drawing conclusions with tiered graphic organizers. The Reading

Teacher, 59(3), 242-252.



Meyers, B. J. F., Middlemiss, W., Theodorou, E., Brezinski, K. L., & McDougall,

J. (2002). Effects of structure strategy instruction delivered to fifth-

grade children using the Internet with and without the aid of older adult tutors.

Journal of Educational Psychology, 94(3), 486-519.

Monahan, J. (2007). YouTube as a Social Change Agent? Retrieved February

16, 2008 from http://www.seattleu.edu/home/news events/

magazine/details.asp?elltemlD=MAG _1118

Moreno, R., & Mayer, R. E. (2000). A learner-centered approach to multimedia

explanations: Deriving instructional design principles from cognitive theory.

Interactive Multimedia Electronic Journal of Computer-Enhanced

Learning, 2(2), Retrieved November 1, 2006 from

http://www.imej.wfu.edu/articles/2000/2/index.asp

Muldrow, E. (1986). Electronic media: On writing and word processors in a ninth

grade classroom. The English Journal, 75(5), 84-86.

Murray, B. (2000). Sizing the Internet: A Cyveillance White Paper. Retrieved

February 11, 2006 from http://www.cyveillance.com/web/downloads/

Sizing_the_lnternet.pdf#search='Sizing%20the%201nternet'

Myers, M. P., & Savage, T. (2005). Enhancing student comprehension of social

studies material. The Social Studies, 96(1), 18-23.

National Center for Education Statistics (2007). Mini-digest of Education

Statistics-2006. Retrieved February 16, 2008 from

http://nces.ed.gov/pubs2007/2007067.pdf

National Commission on Excellence in Education (1983, April). A nation at risk: The

imperative for educational reform. Retrieved July 26, 2006 from

http://www.ed.gov/pubs/NatAtRisk/index. html

National Education Technology Plan (2004). Toward a new golden age in

American education: How the Internet, the law and today's students are

revolutionizing expectations. U.S. Department of Education.

National School Board Foundation (2005). Safe & Smart: Research and

Guidelines for Children's Use of the Internet. Retrieved December 4, 2005,

from http://www.nsbf.org/safe-smart/full-report.htm

Newkirk, T. Misreading masculinity: Speculations on the great gender gap in

writing. Language Arts, 77(4), 294-300.

No Child Left Behind Act (2002). P.L. 107-110.

Novak, J. R., Herman, J. L., & Gearhart, M. (1996). Establishing validity for

performance-based assessments: An illustration for collections of student

writing. Journal of Educational Research, 89, 220-233.

Page, M.S. (2002). Technology-enriched classrooms: Effects on students of low

socioeconomic status. Journal of Research on Technology in Education, 34,

389-409.

Pavia, C. M. (2004). Issues of attitude and access: A case study of basic writers

in a computer classroom. Journal of Basic Writing, 23(2), 4-22.

Peha, S. (2003). What is good writing? Retrieved January 23, 2007 from

http://www.ttms.org/writing_quality/writing_quality.htm

Poltilove, J. & Morelli, K. (2008). Warrant Issued For Deputy In Wheelchair Case.

The Tampa Tribune, Retrieved February 17, 2008 from

http://www2.tbo.com/content/ 2008/feb/15/man-thrown-wheelchair-giving­

sworn-statement/

Purcell, A. D., Ponomarenko, A. L., & Brown, S. C. (2006). A Fifth-Grader's

Guide to the World. Science and Children, 43(8), 24-27.

Pyke, N. (1998, June 12). England beats France when it comes to spelling and

writing in primary schools. The London Times Educational Supplement, p. 1.

Reed, R. (2003). Streaming technology improves student achievement.

T.H.E. Journal: Technological Horizons in Education, 30(7), 14-20.

Robinson, V. M. J., & Timperley, H. S. (2007). The leadership of the

improvement of teaching and learning: Lessons from initiatives with positive

outcomes for students. Australian Journal of Education, 51(3), 247-62.

Ross, C. E., & Brah, B. A. (2000). The roles of self-esteem and the sense of

personal control in the academic achievement process. Sociology of

Education, 73(4), 270-284.

Rowen, D. (2005). The write motivation: Using the Internet to engage students in

writing across the curriculum. Learning & Leading with Technology, 32(5), 22-

23, 43.

Schweizer, H., & Kossow, B. (2007). WebQuests: Tools for differentiation. Gifted

Child Today, 30(1), 29-35.

Scott, K. (2002). Writing improvement for all. Journal of School Improvement, 3(1),

Retrieved January 22, 2007 from http://www.ncacasi.org/jsi/2002v3i1/writing

Sennett, F. (2004). Use class Web site to host historic guest lecturers.

Curriculum Review, 43(8), 6-7.

Shapiro, L. (2004). A writing program that scores with the 6-trait model. The New

England Reading Association Journal, 40(2), 35-40.

Silver-Pacuilla, H. & Fleischman, S. (2006). Technology to help struggling

students. Educational Leadership, 63(5), 84-85.

Singh, K., Allen, K. R., Schkler, R., & Darlington, L. (2007). Women in computer­

related majors: A critical synthesis of research and theory from 1994 to 2005.

Skylar, A. A., Higgins, K. & Boone, R. (2007). Strategies for adapting WebQuests

for students with learning disabilities. Intervention in School and Clinic, 43(1),

20-8

Stapleton, P. (2005). Using the Web as a research source: Implications for L2

academic writing. The Modern Language Journal, 89, 177-190.

Stephenson, C. (2006). Has the gender gap closed? No. Learning and Leading

with Technology, 33(8), 6-7.

Stone, J. A., Hoffman, M. E., Madigan, E. M., & Vance, D. R. (2006). Technology

skills of incoming freshman: Are first-year students prepared? Journal of

Computing Sciences in Colleges, 21(6), 117-121.

Strassman, B. K., & D'Amore, M. (2002). The write technology. Teaching

Exceptional Children, 34(6), 28-31.

Strickland, J. & Nazzal, A. (2005). Using WebQuests to teach content:

Comparing instructional strategies. Contemporary Issues in Technology and

Teacher Education, 5(2),

Stringer, Morton, and Bonikowski (1999). Learning disabled students: Using



process writing to build autonomy and self esteem. Journal of Instructional

Psychology, 26(3), 196-200.

Swan, K., van 't Hoof, M., Kratcotski, A., & Unger, D. (2005). Uses and effects

of mobile computing devices in K-8 classrooms. Journal of Research on

Technology in Education, 38(1), 99-112.

Thomas, K. M. (2005). Fun with fundamentals: Games and electronic activities

to reinforce grammar in the college writing classroom. Teaching English

in the Two-Year College, 33(1), 62-69.

Thompson, B. (2007). Effect sizes, confidence intervals, and confidence intervals

for effect sizes. Psychology in the Schools, 44(5), 423-32.

Trupe, A. L. (1997). Academic literacy in a wired world: What should a literate

student text look like? Writing Instructor, 16(3), 113-25.

Tsai, C. C., Lin, S. S-J., & Tsai, M-J. (2001). Developing an Internet attitude scale

for high school students. Computers & Education, 37, 41-51.

U. S. Department of Education (2006). Building the legacy: IDEA 2004.

Retrieved January 3, 2007 from http://idea.ed.gov/

U.S. Department of Education (2005a). Press release: Secretary Paige issues

new policy for calculating participation rates under No Child Left Behind.

Retrieved December 5, 2005, from

http://www.ed.gov/news/pressreleases/2004/03/03292004.html

van Leeuwen, C. A., & Gabriel, M. A. (2007). Beginning to write with word

processing: Integrating writing process and technology in a primary

classroom. The Reading Teacher, 60(5), 420-429.



Varma, R. (2002). Women in information technology: A case study of

undergraduate students in a minority-serving institution. Bulletin of

Science, Technology, and Society, 22, 274-282.

Virginia Department of Education (1998). Technology Standards for Instructional

Personnel. Retrieved February 16, 2008 from

http://www.doe. virginia.govNDOE/Compliance/ TeacherED/tech .html

Virginia Department of Education (2005). Virginia Standards of Learning

Assessments. Blueprint Grade 5 Writing Test. Retrieved January 22, 2008

from http://www.doe.virginia.govNDOE/Assessment/

EnglishBlueprint05/BlueprintsG5writing.pdf p. 4.

Virginia Department of Education (2006a). Standards of Learning currently in

effect for Virginia public schools. Retrieved on January 3, 2006 from

http://www.pen.k12.va.usNDOE/Superintendent/Sols/home.shtml

VDOE (2006b). English Standards of Learning-Grade Four. Retrieved on January 9,

2008 from http://www.doe.virginia.govNDOE/Superintendent/

Sols/2002/English4.pdf

VDOE (2006c). Virginia Standards of Learning Assessments, End of Course English writing prompt no. 280. Retrieved June 25, 2006 from http://www.pen.k12.va.usNDOE/Assessment/Release2005/EOC_WP_280_RIB.pdf

VDOE (1997). Standards of Learning Assessment Program Blueprints for Grade Eight English: Writing. Retrieved January 3, 2006 from http://www.doe.virginia.govNDOE/Assessment/soltests/writing8.html

Walton, M. (2006). Web reaches new milestone: 100 million sites. Retrieved

February 16, 2008 from http://www.cnn.com/2006fTECH/internet

11/01/1OOmillionwebsites/index.html

Ware, P. D. (2004). Confidence and competition online: ESL student

perspectives on web-based discussions in the classroom. Computers and

Composition, 21, 451-468.

Weiler, A. (2004). Information-seeking behavior in Generation Y students:

Motivation, critical thinking, and learning theory. Journal of Academic

Librarianship, 31(1), 46-53.

What Works Clearinghouse. (2006). Evidence Standards for Reviewing Studies.

Retrieved October 10, 2006 from http://www.whatworks.ed.gov/

reviewprocess/study_standards_final.pdf

Wire, S. D. (2008). U.S. probes Chino slaughterhouse, supplier to school lunch

program. Retrieved February 16, 2008 from http://www.latimes.com/news

local/ la-me-humane31jan31,0,6750146.story

Womble, G. (1984). Do word processors work in the English classroom?

The Education Digest, 50, 40-42.

Yancey, K. B. (2004). Using multiple technologies to teach writing. Educational

Leadership, 62(2), 38-40.

Yumuk, A. (2002). Letting go of control to the learners: The role of the Internet in

promoting a more autonomous view of learning in an academic translation

course. Educational Research, 44(2), 141-156.

Zukas, A. (2000). Active learning, world history, and the Internet: Creating knowledge in the classroom. International Journal of Social Education, 15(1), 62-79.

Appendix A

Internet Self-Perception Scale
(Hinson, DiStefano, & Daniel, 2003)

Listed below are statements about Internet use. Please read each statement
carefully. Then circle the letters that show how much you agree or disagree with the
statement. Use the following scale for your answers:

SD = Strongly Disagree   D = Disagree   U = Undecided   A = Agree   SA = Strongly Agree

1. I think that I am good at using the Internet. SD D U A SA
2. I can tell that my teacher likes my Internet projects. SD D U A SA
3. My teacher thinks that my ability to use the Internet is fine. SD D U A SA
4. I can use the Internet faster than other kids. SD D U A SA
5. I like to use the Internet. SD D U A SA
6. When I use the Internet, I can figure out how to find information better than other kids. SD D U A SA
7. My classmates think that I am good at using the Internet. SD D U A SA
8. I feel good inside when I use the Internet. SD D U A SA
9. My classmates think that I use the Internet pretty well. SD D U A SA
10. When I use the Internet, I don't have to try as hard as I used to. SD D U A SA
11. I seem to know more about using the Internet than other kids. SD D U A SA
12. I am getting better at using the Internet. SD D U A SA
13. I understand how to use the Internet as well as other kids do. SD D U A SA
14. When I use the Internet, I need less help than I used to. SD D U A SA
15. My teacher thinks that I am a good Internet user. SD D U A SA
16. Using the Internet is easier for me than it used to be. SD D U A SA
17. I am better at using the Internet than other kids in my class. SD D U A SA
18. I feel calm when I am using the Internet. SD D U A SA
19. I use the Internet more than other kids. SD D U A SA
20. I understand how to use the Internet better than I could before. SD D U A SA
21. I can figure out how to find information on the Internet better than I could before. SD D U A SA
22. I feel comfortable when I am using the Internet. SD D U A SA
23. I think using the Internet is relaxing. SD D U A SA
24. I am better now at using the Internet than I was before. SD D U A SA
25. Using the Internet makes me feel good. SD D U A SA
26. Other kids think that I am a good Internet user. SD D U A SA
27. People in my family think I can use the Internet pretty well. SD D U A SA
28. I enjoy using the Internet. SD D U A SA
29. People in my family like to see me use the Internet. SD D U A SA

Thank you for your time!
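
A note on tabulation: the survey form itself does not include a scoring key. One common convention for 5-point Likert instruments is to code the responses numerically (SD = 1 through SA = 5) and sum across the 29 items; the minimal sketch below illustrates that convention only. The numeric coding and the function name total_score are illustrative assumptions and are not taken from Hinson, DiStefano, and Daniel (2003).

# Illustrative scoring sketch for a 5-point Likert instrument such as the
# Internet Self-Perception Scale. ASSUMPTION: responses are coded
# SD=1, D=2, U=3, A=4, SA=5 and summed into a total score; this coding is a
# common convention, not a scoring key taken from the instrument itself.

LIKERT_CODES = {"SD": 1, "D": 2, "U": 3, "A": 4, "SA": 5}

def total_score(responses):
    """Return the summed score for a list of 29 circled responses."""
    if len(responses) != 29:
        raise ValueError("Expected 29 item responses, got %d" % len(responses))
    try:
        return sum(LIKERT_CODES[r] for r in responses)
    except KeyError as bad:
        raise ValueError("Unrecognized response: %s" % bad)

# Example: a student who circled 'A' (Agree) on every item
print(total_score(["A"] * 29))  # -> 116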



Appendix B

Behavior Correlates Questionnaire


Author: D. Duran (2003)

For each question, check the box or boxes that best represent your Internet-related behavior(s). Questions 17-19 ask for demographic information. This is a survey of people's behavior toward the Internet for educational purposes. Your responses to all questions will be held in strict confidence.

1. Do you have a personal computer at home? ___ yes ___ no

2. Are you connected to the Internet at home? ___ yes ___ no

3. Do you have educational sites bookmarked as a favorite site? ___ yes ___ no

4. If you could get all class information from the Internet, would you go to class? ___ yes ___ no

5. Do you and your friends discuss/share class-related information found on the Internet? ___ yes ___ no

6. Given a choice, would you take a class that required Internet use? ___ yes ___ no

7. On average, how often do you browse the Internet?
_ Never
_ Once a month
_ Once a week
_ Several times a week
_ Once a day
_ Several times a day

8. On average, how often do you search the Internet to help you with writing assignments?
_ Never
_ Once a semester
_ Several times a semester
_ Once a month
_ Several times a month
_ Once a week
_ Several times a week
_ Every day

9. How did you first learn to use the Internet? (Please check all that apply)
_ Class/School
_ Magazine/Book
_ Presentation
_ Friends
_ Parents
_ Other family members
_ Public library
_ Self-taught
_ Other (please specify) _________________

10. Which of the following features for using the Internet do you know how to use? (Check all that apply)
_ E-mail
_ World Wide Web
_ Newsgroups/forums
_ File Transfer Protocol
_ Mailing lists
_ Address book
_ Social network sites (i.e., Myspace/Facebook)
_ Chat rooms
_ Instant messages
_ Forwarding mail
_ Downloading
_ Uploading
_ Online games
_ Other (please specify) _________________

11. How has the Internet affected the length of your writing assignments?
_ I write much longer papers.
_ I write somewhat longer papers.
_ Using the Internet has had no effect on the length of my papers.
_ I write somewhat shorter papers.
_ I write much shorter papers.

12. How has using the Internet affected the quality of your writing?
_ My writing is much better.
_ My writing is somewhat better.
_ The quality of my writing has not changed.
_ My writing is somewhat worse.
_ My writing is much worse.

Can you give specific ways in which using the Internet has affected your writing?

13. How has using the Internet for research in English class changed your attitude toward writing?
_ I enjoy writing much more.
_ I enjoy writing somewhat more.
_ My attitude toward writing hasn't changed.
_ I enjoy writing somewhat less.
_ I enjoy writing much less.

14. How has using the Internet for research in English class affected the ease or difficulty with which you write papers?
_ It is much easier to write papers using the Internet for research.
_ It is somewhat easier to write papers using the Internet for research.
_ Using the Internet for research has not made a difference in how hard or easy it is for me to write papers.
_ It is somewhat more difficult to write papers using the Internet for research.
_ It is much more difficult to write papers using the Internet for research.

15. On average, how often do you visit the school library to find research material?
_ Never
_ Once a semester
_ Several times a semester
_ Once a month
_ Once a week
_ Daily

16. For which of the following purposes do you use the Internet to help you write English essays? (Check all that apply)
_ Teacher said I should
_ Friend said I should
_ Homework assignments
_ Search for research material for assigned papers
_ Search school's online databases
_ Look at class page or school's website
_ Look at other students' papers already written
_ Other (please specify) ________________

17. Gender: ___ Male ___ Female

18. Age: ____ Race: ____________

19. Grade: ___ 5th

20. Is there anything else you would like to say about using the Internet for English class? (Write in the space below.)

Appendix C

Grade 5 Scoring Rubrics for Writing Test
(Virginia Department of Education, 2005)

Composing Rubric

Score 4: The writer demonstrates consistent, though not necessarily perfect, control of the composing domain's features. The piece is generally unified in that all of the parts contribute to the creation of a dominant impression or idea. The sharply focused central idea is fully, but not exhaustively, elaborated with key examples, illustrations, reasons, events, or details. In all successful responses, layers of elaboration are present. Surface signals, like transitions, logically connect their respective statements into the whole of the paper. In all types of writing, a strong organizational plan is apparent. Any minor organizational lapses that occur do not significantly detract from the presentation. The writing provides evidence of unity by exhibiting a consistent point of view (e.g., not switching from "I" to "you"), a lack of digressions, appropriate transitions both within paragraphs and across the entire piece, the presence of careful logic, and a strong lead and closure.

Score 3: The writer demonstrates reasonable, but not consistent, control of the composing domain's features; the writer may control some features more than others. The clearly focused central idea is purposefully elaborated with key examples, illustrations, reasons, events, or details. Occasionally, some thinness or unevenness in elaboration may occur. In all types of writing, an organizational plan is apparent. Any minor organizational lapses that occur do not significantly detract from the piece. Although there may be occasional lapses in coherence or cohesiveness, unity is evidenced by the fact that few, if any, digressions or shifts in point of view occur. Transitions are, on the whole, appropriate. The opening and closing show some skill, but not the sophistication of a 4 performance.

Score 2: The writer demonstrates inconsistent control of several features, indicating significant weakness in the composing domain. At this score point, ideas often compete, or no one idea emerges as central. Even if a single idea dominates, the paper may lack focus because of little or no elaboration. The paper may be a list of general, underdeveloped statements or the skeleton of a narrative. In the case of persuasive writing, it may consist of a few unelaborated reasons accompanied by inappropriate attempts (begging, pleading, negotiating) to persuade. Typically, the writer extends ideas with a few brief details and moves on, though chunks of irrelevant material may appear as well. Often, no more than a hint of organization is apparent. Even though an opening and closing may be present, the lack of a logically elaborated central idea prevents unity from emerging.

Score 1: The writer demonstrates little or no control of most of the composing domain's features. The focus on a central idea is lacking, or the piece is so sparse that the presence of a clear focus is insufficient for it to earn a higher score. Typically, the writing jumps from point to point, without a unifying central idea. No overall organizational strategy is apparent. The writing seems haphazard, and sentences can be rearranged without substantially changing the meaning. Bare statement is the norm, but even in responses that are several pages long, no purposeful elaboration is present.

Written Expression Rubric

Score 4: The writer demonstrates consistent, though not necessarily perfect, control of the written expression domain's features. The result is a purposefully crafted message that the reader remembers, primarily because its precise information and vocabulary resonate as images in the reader's mind. Highly specific word choice and information also create a purposeful tone in the writing and enhance the writer's voice. If metaphors, similes, personification, or other examples of figurative language are present, they are appropriate to the purpose of the piece. The writer repeats or varies sentence construction for effect and appropriately subordinates ideas and embeds modifiers on a regular basis, resulting in a rhythmic flow throughout the piece.

Score 3: The writer demonstrates reasonable, but not consistent, control of the written expression domain's features. On the whole, specific word choice and information cause the message to be clear; occasionally, a few examples of vivid or purposeful figurative language may be present. Along with instances of successful control, some general statements or vague words may be present, flattening the tone and voice of the piece somewhat. Overall, the writing is characterized by a smooth rhythm created by the effective use of normal word order and competent variation in sentence length and complexity. An occasional awkward construction or the lack of structural complexity is not distracting.

Score 2: The writer demonstrates inconsistent control of several features, indicating significant weakness in the written expression domain. Some specificity of word choice might exist, but generally the piece is written in imprecise, bland language. As a result, the writer's voice rarely emerges. The selection of information may be uneven and/or consist of an attempt to tell everything that the writer knows about a topic. A relative lack of sentence variety may make reading monotonous, and occasional awkward constructions may be distracting enough to make the writer's meaning unclear. While a few brief rhythmic clusters of sentences may occur, an overall sense of rhythmic flow is not present.

Score 1: The writer demonstrates little or no control of most of the written expression domain's features. Both word choice and information are general, vague, and/or repetitive. A lack of sentence variety makes the presentation monotonous. The existence of several extremely awkward constructions reduces the paper's stylistic effect. The writer's lack of control of vocabulary and information prevents both tone and voice from emerging.

Usage/Mechanics Rubric

Score 4: The writer demonstrates consistent, though not necessarily perfect, control of the domain's features of usage/mechanics. The writing demonstrates a thorough understanding of usage and mechanics as specified in the Virginia K-11 SOL. The author applies the rules of capitalization, punctuation, usage, and sentence formation and the structural principles of spelling. A few errors in usage and mechanics may be present. However, the writer's control of the domain's many features is too strong for these mistakes to detract from the performance.

Score 3: The writer demonstrates reasonable, but not consistent, control of most of the domain's features of usage/mechanics. The writing demonstrates a basic understanding of usage and mechanics as specified in the Virginia K-11 SOL. For the most part, the author appropriately applies both the rules of capitalization, punctuation, usage, and sentence formation and the structural principles of spelling expected of high school students. Most of the errors contained in the piece are not elementary ones.

Score 2: The writer demonstrates inconsistent control of several features, indicating significant weakness in the domain of usage/mechanics. Evidence of the author's knowledge of features of this domain appears alongside frequent errors. In terms of both usage and mechanics, the writer inconsistently applies the rules of capitalization, punctuation, usage, spelling, and sentence formation as specified in the Virginia K-11 SOL. Often, these papers exhibit a lack of control of tense consistency, meaningful punctuation, and the principles of spelling, thus making it difficult for the reader to follow the writer's thought. The density of errors that emerges across features outweighs the feature control present in the paper.

Score 1: The writer demonstrates little or no control of most of the domain's features of usage/mechanics. Frequent and severe errors distract the reader and make the writing very hard to understand. Even when meaning is not significantly affected, the density and variety of errors overwhelm the performance and keep it from meeting minimum standards of competence.
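
A note on tabulation: the rubrics above define score points of 1-4 for each domain but do not specify how domain scores are combined across raters or domains. The minimal sketch below illustrates one simple possibility, assuming two independent raters per essay, per-domain averaging of the two ratings, and an unweighted sum of the domain averages; these choices, along with the function and variable names, are illustrative assumptions rather than the scoring procedure prescribed by the rubrics or used in this study.

# Illustrative tabulation sketch for three analytic rubric domains.
# ASSUMPTIONS (for illustration only): each essay receives a 1-4 score in each
# domain from each of two raters; the raters' scores are averaged per domain;
# the domain averages are summed into an unweighted total.

DOMAINS = ("composing", "written_expression", "usage_mechanics")

def domain_averages(rater1, rater2):
    """Average two raters' 1-4 scores for each domain (dicts keyed by domain)."""
    for scores in (rater1, rater2):
        for domain in DOMAINS:
            if not 1 <= scores[domain] <= 4:
                raise ValueError("Rubric scores must fall between 1 and 4")
    return {d: (rater1[d] + rater2[d]) / 2.0 for d in DOMAINS}

def total_score(rater1, rater2):
    """Sum the per-domain averages into an unweighted total (range 3-12)."""
    return sum(domain_averages(rater1, rater2).values())

# Example: one rater assigns 3/3/2, the other 4/3/3
r1 = {"composing": 3, "written_expression": 3, "usage_mechanics": 2}
r2 = {"composing": 4, "written_expression": 3, "usage_mechanics": 3}
print(total_score(r1, r2))  # -> 9.0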
