The Effect of Automated Corrective Feedback On L2 Writing in POS Categories
2022 3rd International Conference on Language, Art and Cultural Exchange (ICLACE 2022)
ABSTRACT
Automated corrective feedback is a Computer-Assisted Language Learning (CALL) process used in L2 English writing assessment that is ubiquitous in current L2 practice and research (e.g., Chen, 2016; Chukharev & Saricaoglu, 2016; Gao et al., 2020). This research examined the effect of the automated corrective feedback of Pigainet, a Computer-Assisted Language Learning instrument, on English writing revision. Data were collected from 591 drafts by 31 participants who submitted their drafts to Pigainet, and the drafts were coded as error frequency ratios (EFR) according to POS category. The findings suggest that Pigainet helped participants revise their writing errors of articles, verbs, prepositions, and nouns.
This study focuses on POS categories to investigate the effect of automated corrective feedback by answering the following research questions: What is the EFR of each type in POS categories? And what is the EFR change of each type in POS categories?
3. RESEARCH METHOD

Fifty freshmen at Jiamusi University took part in this study voluntarily. A pre-test and a post-test were designed to compare the EFR change and to investigate the improvement of accuracy in terms of POS types. Both the pre-test and the post-test were taken in the classroom within 30 minutes. Furthermore, ten writing tasks were assigned over 10 weeks, one per week, so that data could be collected to explore the characteristics of POS types in L2 learners' writing.
From these participants, thirty-one students' 591 drafts were selected from Pigai.org. In order to make the error coding categories reliable, a triangulation method was used: this study combined two automated writing evaluation tools, Pigainet and Grammarly, with a human rater to code and analyze the sampled draft errors, so that the data coding could be more comprehensive and multidimensional.
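The paper does not specify how flags from the three sources were reconciled. The sketch below shows one plausible rule, accepting an error only when at least two of the three sources flag the same location; the flag tuple format and the 2-of-3 threshold are assumptions for illustration, not the study's actual procedure.

```python
from collections import Counter

def triangulate(pigainet, grammarly, human, min_votes=2):
    """Each argument is a set of (draft_id, sentence_id, pos_tag) error flags.

    An error is kept only when at least `min_votes` of the three
    sources flagged the same location and POS category.
    """
    votes = Counter()
    for source in (pigainet, grammarly, human):
        for flag in source:
            votes[flag] += 1
    return {flag for flag, n in votes.items() if n >= min_votes}

# Toy usage: the article error flagged by two sources survives;
# the single-source verb and preposition flags are discarded.
p = {("d1", 3, "art"), ("d1", 5, "verb")}
g = {("d1", 3, "art")}
h = {("d1", 3, "art"), ("d1", 7, "pre")}
print(triangulate(p, g, h))  # {('d1', 3, 'art')}
```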
4. DATA ANALYSIS AND RESULTS

We calculated the total number of errors identified by the triangulation method on each draft, standardized the raw number of errors, and then obtained descriptive statistics on the EFR of each type across all papers, from the first draft to the final draft and including the pre- and post-tests, as well as the mean EFR change from pre-test to post-test.
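The paper does not spell out the EFR formula. The sketch below assumes that the EFR of a POS type is that type's raw error count divided by the draft's word count, since raw counts must be normalized for essay length; this reading is an assumption, and the numbers are invented.

```python
# Sketch of the standardization step, assuming
# EFR(type) = raw error count of that type / draft word count.

def efr(error_counts, word_count):
    """error_counts: {pos_tag: raw error count} for a single draft."""
    return {pos: count / word_count for pos, count in error_counts.items()}

# Toy usage: 5 article errors in a 150-word draft gives EFR ~= .0333.
print(efr({"art": 5, "verb": 2}, 150))
```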
RQ1: What is the EFR of each type in POS categories?

As Figure 1 shows, in the whole pie, the EFR of the article (mean = .0342, SD = .011) is 30%, the EFR of the verb (mean = .015, SD = .0067) is 22%, the EFR of the preposition (mean = .0127, SD = .012) is 13%, the EFR of the noun (mean = .0096, SD = .0032) is 10%, the EFR of the pronoun (mean = .0062, SD = .0055) is 9%, the EFR of the conjunction (mean = .009, SD = .019) is 8%, the EFR of the adjective (mean = .002, SD = .0021) is 4%, and the EFR of punctuation (mean = .0014, SD = .0008) is 2%, as is the adverb's portion.

Note: adj = adjective, adv = adverb, art = article, con = conjunction, noun = noun, pre = preposition, pro = pronoun, pun = punctuation, v = verb.
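The per-type means and standard deviations reported for Figure 1 can be tabulated directly from per-draft EFR values. The sketch below shows one way to do so, assuming a long-format table; the column names (draft, pos, efr) and the values are hypothetical.

```python
import pandas as pd

# Hypothetical long-format table of per-draft EFR values; the
# numbers are invented for illustration, not the study data.
records = [
    {"draft": "d1", "pos": "art", "efr": 0.036},
    {"draft": "d1", "pos": "verb", "efr": 0.014},
    {"draft": "d2", "pos": "art", "efr": 0.032},
    {"draft": "d2", "pos": "verb", "efr": 0.016},
]
df = pd.DataFrame(records)

# Mean and SD of EFR per POS type, i.e. the kind of numbers reported
# for Figure 1 (e.g., article: mean = .0342, SD = .011 in the paper).
summary = df.groupby("pos")["efr"].agg(["mean", "std"])
print(summary)
```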
RQ2: What is the EFR change of each type in POS categories?

The EFR change of each type is not normally distributed. Therefore, the Wilcoxon signed-rank test for paired samples was conducted to check the EFR change of each error type between the pre-test and the post-test. As Figure 2 shows, the EFR lines of the error types go down, and the results of the Wilcoxon test show significant differences between pre-test and post-test in article EFR change (z = 4.103, p[2-tailed] < .05), verb EFR change (z = 4.077, p[2-tailed] < .05), preposition EFR change (z = 3.163, p[2-tailed] = .002), and noun EFR change (z = 2.175, p[2-tailed] = .03). This indicates an improvement in the accuracy of articles, verbs, prepositions, and nouns between the pre-test and the post-test.
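A minimal sketch of this analysis pipeline follows: test the pre/post differences for normality, then fall back to the Wilcoxon signed-rank test when normality is rejected, as in the paper. The EFR values below are invented placeholders, and note that SciPy reports the signed-rank statistic W and a p-value rather than the z scores the paper reports.

```python
from scipy import stats

# Placeholder pre/post EFR values for one POS type (invented data).
pre  = [0.040, 0.035, 0.031, 0.038, 0.029, 0.044, 0.036, 0.033]
post = [0.022, 0.019, 0.025, 0.021, 0.018, 0.027, 0.020, 0.024]

diffs = [a - b for a, b in zip(pre, post)]
_, p_norm = stats.shapiro(diffs)        # normality check on differences
if p_norm < 0.05:
    # Non-normal differences: Wilcoxon signed-rank test for paired data.
    stat, p_value = stats.wilcoxon(pre, post)
else:
    # Normal differences: paired t-test would apply instead.
    stat, p_value = stats.ttest_rel(pre, post)
print(p_norm, stat, p_value)
```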
REFERENCES

[2] Chukharev-Hudilainen, E., & Saricaoglu, A. (2016). Causal Discourse Analyzer: Improving Automated Feedback on Academic ESL Writing. Computer Assisted Language Learning, 29(3), 494–516. https://doi.org/10.1080/09588221.2014.991795
[3] Gao, J., Li, X., Gu, P., & Liu, Z. (2020). An
Evaluation of China’s Automated Scoring System
Bingo English. International Journal of English
Linguistics, 10(6), 30.
[4] Grimes, D., & Warschauer, M. (2010). Utility in a
Fallible Tool: A Multi-site Case Study of Automated
Writing Evaluation. Journal of Technology,
Learning, and Assessment, 8(6), 4–44.
[5] Ionin, T. (2003). Article Semantics in Second Language Acquisition (Doctoral dissertation, Massachusetts Institute of Technology). MIT Libraries: http://dspace.mit.edu/handle/1721.1/7963; http://dspace.mit.edu/handle/1721.1/7582
[6] Li, Z., Feng, H. H., & Saricaoglu, A. (2017). The
Short-term and Long-term Effects of AWE
Feedback on ESL Students’ Development of
Grammatical Accuracy. CALICO Journal, 34(3), 355–375.
[7] McNamara, D. S., Crossley, S. A., Roscoe, R. D.,
Allen, L. K., & Dai, J. (2015). A Hierarchical
Classification Approach to Automated Essay
Scoring. Assessing Writing, 23, 35-59.
[8] Rich, C. S. (2012). The Impact of Online Automated
Writing Evaluation: A Case Study from Dalian.
Chinese Journal of Applied Linguistics, 35(1), 63–
79. https://doi.org/10.1515/cjal-2012-0006
[9] Wang, M.-J., & Goodman, D. (2012). Automated Writing Evaluation: Students' Perceptions and Emotional Involvement. English Teaching & Learning, 36(3), 1–3.