
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/370471550

Using artificial intelligence to foster students' writing feedback literacy, engagement, and outcome: a case of Wordtune application

Article in Interactive Learning Environments · May 2023
DOI: 10.1080/10494820.2023.2208170

3 authors, including: Hanieh Shafiee Rad (Shahrekord University) and Rasoul Alipour (Shiraz University)

All content following this page was uploaded by Hanieh Shafiee Rad on 21 August 2023.


Interactive Learning Environments

ISSN: (Print) (Online) Journal homepage: https://www.tandfonline.com/loi/nile20

Using artificial intelligence to foster students’ writing feedback literacy, engagement, and outcome: a case of Wordtune application

Hanieh Shafiee Rad, Rasoul Alipour & Aliakbar Jafarpour

To cite this article: Hanieh Shafiee Rad, Rasoul Alipour & Aliakbar Jafarpour (2023):
Using artificial intelligence to foster students’ writing feedback literacy, engagement,
and outcome: a case of Wordtune application, Interactive Learning Environments, DOI:
10.1080/10494820.2023.2208170

To link to this article: https://doi.org/10.1080/10494820.2023.2208170

Published online: 03 May 2023.


Full Terms & Conditions of access and use can be found at https://www.tandfonline.com/action/journalInformation?journalCode=nile20
INTERACTIVE LEARNING ENVIRONMENTS
https://doi.org/10.1080/10494820.2023.2208170

Using artificial intelligence to foster students’ writing feedback literacy, engagement, and outcome: a case of Wordtune application

Hanieh Shafiee Rad^a, Rasoul Alipour^b and Aliakbar Jafarpour^c

^a Department of English, Shahrekord University, Shahrekord, Iran; ^b Department of Clinical Psychology, Faculty of Psychology and Educational Sciences, Shiraz University, Shiraz, Iran; ^c English Department, Shahrekord University, Shahrekord, Iran

ABSTRACT
Technology and artificial intelligence (AI) advancements are increasingly impacting second language (L2) learning, particularly in writing skills development. Motivated students seeking writing feedback can benefit from AI implementation, with writing feedback literacy being crucial for effective use. Engaged students with specific writing goals benefit from ongoing practice and learning to attain high-quality outcomes, whether from formal or informal settings such as AI application. AI’s integration into L2 education underscores the need to comprehend its role in improving L2 writing engagement, writing feedback literacy, and writing outcome. A mixed-method study was conducted, which involved 46 students at the upper-intermediate level. The student cohort was divided into two groups for the purpose of the study. One group was designated the control group, while the other was the experimental group, each comprising 23 students. By using Wordtune, an AI-based application, the experimental group was able to significantly improve their writing outcomes, engagement, and feedback literacy when compared to the control group. Furthermore, the Wordtune application received positive feedback from students interviewed about its ability to improve their writing outcomes, engagement, and feedback literacy. These findings will contribute to our understanding of AI in L2 learning and inform instructional considerations for Wordtune’s application.

ARTICLE HISTORY
Received 30 March 2023
Accepted 22 April 2023

KEYWORDS
Artificial intelligence; writing feedback literacy; writing engagement; Wordtune App; writing outcome

Introduction
Writing is a multifaceted process that involves using language to convey information, ideas, feelings or
experiences through written words (Zhou & Hiver, 2022). L2 educators aim to develop writing literacy in learners (i.e. those who have received formal education in a non-native language and are acquiring proficiency in areas like reading, writing, listening, and speaking); this literacy involves a strong knowledge of grammar, vocabulary, punctuation, and sentence structure, and the ability to organize writing
appropriately for context, tone, and audience. Writing engagement is crucial in facilitating this devel-
opment in the L2 learning process, where active participation in writing tasks leads to better linguistic
outcomes, such as improved proficiency, fluency, accuracy, and complexity of written expression (Han
& Xu, 2021). Effective implementation of writing feedback can enhance writing feedback literacy and
the writer’s ability to positively respond to constructive criticism (Molloy et al., 2020).

CONTACT Hanieh Shafiee Rad haniyeshafi[email protected]


© 2023 Informa UK Limited, trading as Taylor & Francis Group

Research on writing feedback draws on theories and practices from various fields, such as L2
writing, second language acquisition, higher education, and educational psychology (Dong et al.,
2023; Yu et al., 2022; Zheng & Yu, 2019). Recently, there has been a shift in research focus within
the field of L2 writing. In the past, there was a primary emphasis on whether or not feedback,
such as error correction, should be given. However, this emphasis has evolved to focus on determin-
ing the most effective strategies for supplying feedback and identifying an ideal feedback strategy
that will enhance learners’ writing skills. This change in research focus has been noted by scholars
(e.g. Banister, 2020; Hyland & Hyland, 2019; Lee, 2017). As feedback occurs within specific interper-
sonal, educational, and socio-cultural contexts and its effectiveness is influenced by both contextual
and individual factors (Dong et al., 2023), writing researchers and practitioners have more recently
begun to pay closer attention to how L2 learners assess and utilize feedback and self-regulate
their cognitive and affective reactions to feedback, which refers to feedback literacy in writing
(Sun & Wang, 2022; Yu et al., 2022).
The changing dynamics of education have led to the evolution of educational feedback and tasks,
with technology playing a significant role in facilitating personalized feedback for students (Jurs &
Špehte, 2021). This feedback can be facilitated through intelligent tutoring systems based on the stu-
dent’s proficiency level and interests (Ai, 2017; Kulik & Fletcher, 2016). In contrast to traditional
teacher-centered feedback models, a learner-oriented feedback model emphasizes the accountabil-
ity and function of both machine and student writers in the feedback process (Li & Han, 2022).
However, current qualitative research on L2 writing feedback literacy has been limited to theoretical
and case study approaches that restrict transferability across different learning contexts (Han & Xu,
2020; Yu & Liu, 2021). The use of technology has resulted in new ways of learning, including tech-
nology-enhanced feedback that evolves to meet students’ needs in terms of frequency, format,
and timeliness (Jurs & Špehte, 2021).
One of the technological tools utilized for enhancing educational feedback is AI. It refers to the development of computer systems that can perform educational tasks that would otherwise require human intelligence. These tasks include grading examinations, generating personalized learning resources, and proposing assignment tasks based on real-time data analysis, as expounded by Chiu et al. (2023). The use of AI makes it possible to acquire information, solve problems, and
provide solutions. This can range from simple rule-based decisions to complex voice or picture rec-
ognition systems (Wang et al., 2023). Teachers can utilize AI to give quick feedback to students in
different situations, like language acquisition, by processing large amounts of educational data
with minimal disruption from time or space constraints (Chiu et al., 2022). AI-assisted writing feed-
back can provide learners with immediate, accurate, personalized, and contextualized feedback,
which may enhance their writing skills and performance. By examining the effects of using AI in
the feedback process for L2 students, the study may improve understanding of the effectiveness
of AI in L2 contexts and aid the creation of AI-assisted educational applications that promote
writing feedback literacy. The results may also help researchers and practitioners identify best prac-
tices for integrating AI into the writing feedback process, which may transform the way feedback is
given and received in the classroom setting (Li & Han, 2022).
As AI and advanced technologies are increasingly used in education worldwide, it is imperative that
researchers and practitioners be informed about their current and potential uses in the writing feed-
back process in L2 contexts (Huang et al., 2023a). In an effort to examine the efficacy of AI on L2 stu-
dents’ writing feedback literacy, engagement, and outcome, this study used the Wordtune App.

Reviewing the literature


Artificial intelligence and L2 learning
Although artificial intelligence is difficult to define broadly, for the purposes of this study it is defined as the ability of a computer or computer-controlled robot to learn, reason, and express itself
in a manner comparable to that of a human (Berendt et al., 2020; Williamson & Eynon, 2020). As a
general rule, AI for language learning makes use of technologies such as speech synthesis, natural
language processing, and voice recognition (Engwall & Lopes, 2022; Huang et al., 2023a; Wang
et al., 2022). The use of AI in language learning and precision education has demonstrated high
potential (Lin & Mubarok, 2021; Moussalli & Cardoso, 2020). However, research on AI-supported
language learning has so far remained nascent (Chen et al., 2021).
Behaviorism, cognitive and social constructivism, and complexity theory are the three theoretical
bases of AI. The basic theoretical framework for AI is behaviorism, which emphasizes the production
of meticulously prepared content sequences that result in the learner’s appropriate performance
(Skinner, 1953). By delivering preprogrammed instruction that introduces new concepts, providing immediate feedback to the learner regarding incorrect responses, and maximizing positive reinforcement, the learner-as-recipient paradigm of AI views learning as reinforcement of knowledge acquisition (Greena et al., 1996; Schommer, 1990). Second, a cognitive and social constructivism approach
to learning holds that learning occurs when people, information, and technology are placed
together in a socially situated environment (Bandura, 1986; Vygotsky, 1978). The AI system and
the learner should have active, reciprocal interactions in order to optimize learner-centered, indivi-
dualized learning. Specifically, learners collaborate with AI systems to accomplish better or more
effective learning by communicating with them. In a third framework, complexity theory takes an
innovative approach to education, focusing on the importance of synergistic collaborations
between various system components in order to maximize the effectiveness of education (Mason,
AI must be conceived of and used as part of a complex system that includes students, teachers, and other human beings (Riedl, 2019).
A variety of AI applications, including voice- and text-driven ones, have been included in L2 learn-
ing (Bibauw et al., 2019). Automatic oral communication and speech recognition are capabilities of
voice-driven AI systems like Amazon’s Alexa, social robots, and AI chatbots, whereas text-driven AI
programs may detect writing errors automatically and give immediate feedback (Bibauw et al., 2019).
A small number of researchers have also begun to test multimodal AI solutions by integrating AI with
other kinds of technology. For instance, in pilot research involving 10 university students, Divekar
et al. (2022) integrated AI with extended reality (XR) to produce an intelligent and immersive
language learning environment for students of Chinese as a foreign language. In their investigation,
the AI agents conversed with students in multimodal ways while hearing, seeing, and comprehending them in a virtual environment driven by XR, greatly enhancing the students’ L2 vocabulary, speaking, and listening abilities.
Language acquisition and affective language learning are the two main purposes of employing AI in
L2 learning. AI-supported language learning has been found to be helpful for L2 speaking, reading, and
listening in terms of language acquisition. For example, Dizon (2020) used a quasi-experimental study involving a total of 28 EFL university students to examine the effect of Amazon’s Alexa on encouraging learners’ EFL progress. The findings indicated that the experimental group improved more in L2 speaking competency than the control group. Regarding the affective advantages, it has been repeatedly demonstrated that AI apps have a positive impact on learners’ attitudes because they offer a less intimidating setting and a realistic communication experience that makes students feel inspired and less anxious. In a flipped classroom setting, Huang et al.
(2023b) were among the first researchers to confirm the effectiveness of AI personalized video rec-
ommendations for improving learning outcomes and engagement. The study involved two groups,
one as a control group and the other as an experimental group, with only the experimental group
receiving AI-based personalized video recommendations. The results of the study showed a significant
improvement in the learning performance and engagement of students with moderate motivation
levels. Moreover, the impact of AI on learning outcomes and perceptions in education has been
confirmed by Zheng et al. (2021). The study found that AI had a high impact on learning achievement
but only a small impact on learning perception. The results showed that factors such as sample size,
learning domains, and hardware significantly influenced the effectiveness of AI.

Overall, human-AI interactions in AI-supported language learning assist in reducing emotional


barriers, giving quick feedback for timely self-correction, and offering plenty of opportunities for
repeated practice, ultimately improving learners’ language acquisition and comprehension (Mous-
salli & Cardoso, 2020; Tai & Chen, 2020; Underwood, 2017). However, many of these findings came from brief experiments (e.g. 10–15 minutes over two days; Engwall & Lopes, 2022) with small samples (e.g. 11 participants; Moussalli & Cardoso, 2020), rendering them susceptible
to novelty effects and difficult to generalize (Van den Berghe et al., 2019). Although AI applications
have been considered to have significant advantages for enhancing L2 learning, much earlier
research rarely looked at learners’ actual learning results (Zhang & Zou, 2022). Furthermore, little is known about how L2 learners engage with AI-supported language acquisition, how their interaction with AI relates to their enjoyment of language learning, or how well they actually learn through writing feedback literacy.

L2 writing feedback literacy, engagement, and outcome


Based on Sutton’s (2012) initial conceptualization, student feedback literacy is defined as the ability
to interpret information and use it to improve tasks or learning processes. According to Sutton
(2012), who adopted the academic literacy perspective, feedback literacy encompasses not only teachable abilities for using feedback but also how students position themselves in the feedback setting. He developed this idea using the three dimensions of knowing, being, and acting. Students
comprehend the significance of feedback and are aware of what and how they may learn from it.
This is known as the knowing aspect. Additionally, feedback-literate students should have strong
educational identities that are driven and self-assured so that they may develop their own distinct
learning styles. Students should also respond to feedback by interpreting and applying it. Carless
and Boud (2018) provided a process-oriented approach that included appreciating feedback,
forming judgments, controlling emotions, and acting. Student feedback literacy was a particular
focus of their study. They contend that in order to be feedback literate, students must both recognize
the value of feedback for learning and actively participate in the feedback process. Additionally, it
necessitates a highly developed capacity for assessing the quality of feedback and selecting the
appropriate level of outside support. Carless and Boud (2018) emphasize students’ regulation of
their emotions and behaviors, particularly in the face of critical feedback, in a way that echoes
Sutton’s (2012) description of self-assured learners. Additionally, feedback literacy calls for the use
of feedback to enhance current task performance as well as inform performance on upcoming com-
parable tasks.
Molloy et al. (2020) created a student feedback literacy framework. This framework includes four
elements: appreciating feedback processes; making judgments; managing affect; and taking action.
Students who appreciate feedback are aware of the importance of feedback and are actively parti-
cipating in the feedback process (Winstone & Carless, 2020). Evaluative judgment development is
connected to the process of making judgments. Students need to get a thorough understanding
of what makes good quality work as well as the skills necessary to form reliable judgments about
their own or other people’s work (Ma et al., 2021). The third component of student feedback literacy
is managing affect. It means that in order to participate in activities involving feedback, students
must acquire positive attitudes (such as learner engagement) (Han & Xu, 2020). A critical component
of feedback literacy is taking action. Learners must interact with feedback and take action on it in
order to fully realize its potential. Students should be equipped with a variety of techniques for uti-
lizing feedback and creating identities as proactive learners (Ma et al., 2021).
There is a paucity of research on how to improve writing feedback literacy for L2 students. There is
evidence that teacher feedback improves students’ literacy concerning peer feedback; nevertheless,
there are individual differences according to learner factors such as language beliefs, motivation, and
proficiency (Han & Xu, 2020). In the case of written corrective feedback, student writing feedback
literacy is likewise contextual and subject to change depending on local, textual, and instructional
settings, as well as learner factors like beliefs and motivation (Han & Xu, 2020). According to Yu and
Liu (2021), student writing feedback literacy depends on four important factors in the academic
writing context: technical facilitators (e.g. educational technologies, such as AI, that connect feedback providers and receivers), knowledge bases (e.g. multiple forms of knowledge needed by an aca-
demic writer who is feedback literate), learner agency (e.g. learner proactivity), and social scaffolding
that is participatory, supplied by teachers and peers, and reflects dialogue about feedback (e.g.
instructor-student dialogue around feedback). During the COVID-19 pandemic, Ma et al. (2021)
used learning-oriented online assessment to encourage students’ writing feedback literacy in an
L2 setting. Through learning-focused online testing, they presented a methodology for fostering
student writing feedback literacy. To comprehend how various instructional techniques promote the development of student writing feedback literacy, further empirical study is necessary (Winstone & Carless, 2020). It is unknown, for example, how L2 student writers may interpret an online
application of AI regarding their writing feedback literacy or whether individual variations are
evident in its creation.

This study
The rationale for examining writing engagement, writing outcome, and writing feedback literacy
through AI intervention in this study is threefold. First, writing is a critical skill that is essential
for communication and academic success in today’s information-driven society. However, many
students struggle with writing and may not receive adequate support or feedback to improve
their skills (Han & Xu, 2020). AI tools can assist in providing timely, accurate, and personalized
feedback to students, helping them develop their writing skills more effectively than traditional
feedback methods (Moussalli & Cardoso, 2020). Second, by incorporating AI interventions into
the writing process, teachers can help students learn how to write more effectively through
personalized feedback, increase their overall writing engagement through employing technol-
ogy, and improve the quality of their writing outcomes by receiving this personalized and
immediate feedback that helps them to learn from their mistakes, understand their strengths
and weaknesses, and develop better writing habits overall. Third, examining writing feedback
literacy through AI can help us understand key factors that contribute to feedback comprehen-
sion, utilization, and improvement in writing skills. The insights generated from analyzing feed-
back data and identifying best practices can help inform the design and development of AI-
powered writing feedback applications, which can then improve user outcomes by providing
personalized, actionable feedback to writers.
As AI can cover more than 50% of individual proficiency areas, including writing, literacy, and problem-solving, the trend of increasing AI use in feedback education and practice is expected to expand (Miao et al., 2022). To ensure that the result helps teachers to offer more
effective feedback and students understand feedback better, applying AI to feedback literacy has
to be more specialized within the educational setting (Chiu et al., 2023). The purpose of this
study is to describe how AI is now used to provide feedback in the writing classroom and to
suggest potential directions for further research on technology-driven feedback practices. In other
words, this study used artificial intelligence to foster students’ feedback literacy and engagement
in L2 writing through the Wordtune App. The following three research questions were developed:

(1) Does an AI-enhanced classroom have any significant effects on L2 learners’ writing outcome?
(2) Does an AI-enhanced classroom have any significant effects on L2 learners’ writing feedback literacy and engagement?
(3) What perceptions did L2 learners have towards employing an AI-enhanced classroom in writing instruction?

Method
In light of the wealth of resources and opportunities provided by Web 2.0 technology, this pilot
project was created in response to complaints that L2 writing instructors continue to employ out-
dated methods of teaching L2 writing (Elola & Oskoz, 2017). The advantages of AI-enhanced
writing compared to mobile-supported class-based writing instruction on learners’ feedback literacy
and engagement were examined using the comparative case study method (Creswell & Creswell,
2017). Using a convenience sampling of two intact classes, the comparison looked at participant
behaviors and actions during learning activities (Stake, 1995).

Design
According to the theoretical construct of writing development, quantitative and qualitative method-
ologies were specifically used to compare the phenomenon of using AI-based technology to analyze
two writing instructional modes (i.e. control and experimental groups). The researchers used the writing instructional mode as the independent variable, and the writing outputs and learner views of the two modes as the dependent variables, in an effort to explain the data from the context-based environment.

Participants
For the comparative pilot experiment, two distinct groups of Persian students from two intact modules (n = 23 in each class; M = 24, SD = .67) with the same course title (i.e. English Composition) at an Iranian English language institute participated. Both classes were at the upper-intermediate level on the Oxford Placement Test (OPT), comparable to the C1 benchmark on the Common European Framework of Reference for Languages: Learning, Teaching, and Assessment scale. According to the results of the pretest, there was no prior difference in writing proficiency between the two classes.
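A pretest equivalence check of this kind is typically an independent-samples comparison of the two classes' scores. As a minimal sketch of how such a check might look (the score lists below are hypothetical, not the study's data, and `welch_t` is an illustrative helper rather than anything reported in the paper):

```python
# Hedged sketch: comparing two intact classes' pretest writing scores.
# The score lists are hypothetical; the paper does not report raw data here.
from statistics import mean, stdev

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

control = [62, 58, 65, 60, 59, 63, 61, 64]       # hypothetical pretest scores
experimental = [61, 60, 64, 59, 62, 63, 58, 65]  # hypothetical pretest scores

t = welch_t(control, experimental)
print(f"Welch's t = {t:.3f}")  # values near 0 suggest comparable groups
```

In practice the statistic would be compared against a t distribution to obtain a p value; the sketch only shows the group-comparison step.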

Materials
The Academic Essay (How to Prepare, Develop, Write, and Edit) by Soles (2005) was the only textbook
used in both the control and experimental groups. A traditional face-to-face teaching method was
used in the control group. Additionally, participants were required to complete textbook exercises as part of their homework assignments. However, in addition to the textbook and face-to-face lectures, the experimental group had to use the Wordtune App, selected because it was widely available online, to write and receive feedback (Zhao, 2022). Identical material was given to both groups during the experiment (see Figure 1 for more details).

Wordtune app experiment


The Wordtune app was used by the experimental group only. Wordtune was developed by AI21 Labs, a company established in 2018 by leading experts in AI with the stated objective of radically changing the way individuals read and write (Zhao, 2022). The company specialized in creating sophisticated AI tools and language models that understood the context and meaning of written text, which distinguished Wordtune as an AI-based writing companion offering assistance beyond simple grammar and spelling correction and helping writers transcribe their ideas accurately into written form. As a digital writing assistant powered by AI, Wordtune offered several options for a text by changing the sentence structure or replacing words with synonyms while maintaining the original meaning. Using machine learning techniques (natural language processing), this technology enabled the computer to comprehend and produce natural language learned from enormous datasets of written content. With the use of AI, Wordtune provided alternatives for rewriting one’s original sentences, as opposed to using information from other web sources, by identifying patterns in a vast dataset.

Figure 1. Elaborated content based on the sourcebook for both groups.
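Wordtune's actual models are proprietary and far more capable; purely as a toy illustration of one rewrite strategy mentioned above (swapping words for synonyms while keeping the sentence structure), the following sketch uses a small hand-made synonym table. The table and the `rewrite_options` function are illustrative inventions, not part of Wordtune:

```python
# Toy illustration of synonym-substitution rewriting: each alternative
# sentence swaps exactly one word for a listed synonym. A real system
# like Wordtune uses large learned language models instead of a table.

SYNONYMS = {
    "important": ["crucial", "essential"],
    "improve": ["enhance", "strengthen"],
    "quickly": ["rapidly", "swiftly"],
}

def rewrite_options(sentence: str) -> list[str]:
    """Return alternative sentences produced by single-word synonym swaps."""
    words = sentence.split()
    options = []
    for i, word in enumerate(words):
        key = word.lower().strip(".,")
        for syn in SYNONYMS.get(key, []):
            new_words = words.copy()
            # keep any trailing punctuation from the original token
            trailing = word[len(key):] if word.lower().startswith(key) else ""
            new_words[i] = syn + trailing
            options.append(" ".join(new_words))
    return options

print(rewrite_options("Feedback is important to improve writing."))
```

The toy version only substitutes single words; the fluency-preserving, structure-changing rewrites described in the text require a generative language model.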
This free online platform was used to quickly assess learners’ writing skills and provide feedback in accordance with the Common European Framework of Reference for Languages: Learning,
Teaching, and Assessment (CEFR). The corrective feedback evaluated the learner’s performance in
terms of content, communicative achievement, organization, and language. Content was assessed
for how well a learner had completed a task; communicative achievement was assessed for
whether the writing was appropriate for the task; organization was assessed for whether the task was logically organized; and language was assessed for the accuracy of the language’s meaning,
morphology, and syntax.
Wordtune provided writing students with options for independent study, but it also had potential as a teaching tool. In this research, the teacher set up online writing exercises and encouraged students to use Wordtune while writing. She also invited students to consider what they
had learned through practicing the rewrites, including new words, synonyms, clauses, and
formal sentence structures that they would not have been able to utilize in their writing otherwise
(see Figure 2).
The teacher ensured that students used the Wordtune application as an out-of-classroom home-
work strategy by setting clear expectations and guidelines for its usage. This included specifying how
often the app should be used, how to incorporate it into written assignments, and how students’
engagement with the app would be assessed. Additionally, the teacher regularly checked in with
students on their progress with the app and provided feedback on their written work that demon-
strated the use of Wordtune. This helped hold students accountable for their use of the app and
ensured that they incorporated its features into their writing. Finally, keeping open communication
with the students about their experiences and challenges with Wordtune provided useful insights for
adapting the strategy to better meet their needs.

Procedure
This study was carried out across several phases. We initially selected two intact classes at random to serve as the control and experimental groups. The writing exam was given as a pretest after making sure that the groups were comparable in terms of age, native language, and proficiency level. Prior to the intervention, both groups’ participants completed the feedback literacy and writing engagement questionnaires as a pretest. To eliminate any potential teacher-related bias, the same teacher was chosen
to serve as the instructor for both the control and experimental groups. It took around 11 weeks to
complete the experiment.
Both groups’ participants were required to attend face-to-face sessions twice a week for
90 minutes each. The control group’s participants experienced a conventional lecture-based teach-
ing strategy. In this group, the instructor was in charge of the lesson, and there was little student-teacher interaction. The approach entailed rigorously controlling input through adherence to the required lecture-based course materials. The activities were given to the students as homework, and they received either instructor or classmate feedback. The experimental group, however,
used AI-enhanced applications to practice their writing abilities. The instructor was required to set
up the Wordtune App and prepare the students’ smartphones or tablets before class. The instructor
then discussed the app’s features and capabilities, including how to complete tasks, receive feed-
back, and use the upgrade system. After that, the teacher had to assign homework to students
and ask them to complete it using the Wordtune App.
The instructor used the same method of evaluation and test instrument to administer a posttest
to both the experimental and control groups following the intervention. The students also had to complete a posttest questionnaire on feedback literacy and writing engagement. The
researchers included a qualitative phase utilizing semi-structured interviews to provide a more in-
depth study. Eight students, including four from each group, were chosen at random and inter-
viewed for an in-depth interpretative analysis.
In order to avoid the Hawthorne effect, the researchers did not disclose the objective or hypothesis of the research to the respondents. In addition, they did not interfere with the participants’ routine activities, or at least reduced interference to minimize the chances of bias. Furthermore, they kept observation minimal and natural rather than using an artificial setup or laboratory environment. Finally, data were collected from participants without personally identifying them to encourage candid and honest responses.
INTERACTIVE LEARNING ENVIRONMENTS 9

Figure 2. Screenshots from the Wordtune Environment.

Instruments
Writing task
In the pre- and post-tests, the EFL students’ writing outcome was evaluated using two 60-minute
writing tasks based on their previous knowledge. The pre- and post-test timed writing assignments
included the following topics:
Pretest Topic: Do I need to study English to find a new job or not? Why?
Posttest Topic: Students Who Work Vs. Unemployed Students: Who Takes the Best of This Life?

The students' written expositions were scored using Hyland's (2003) analytical scoring rubric, which
covered format and substance (40 marks), organization and coherence (20 marks), and sentence
structure and vocabulary (40 marks), for a total score ranging from 0 to 100. Interrater and intrarater
reliability indices for the pretest and posttest essays were .93, .98, .94, and .96, respectively. This
rubric was chosen because it had been shown to be relevant and trustworthy in Hyland's (2003) EFL
research context. The raters were two experienced applied linguistics professors who were
unacquainted with the participants.
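For illustration, an interrater reliability index of this kind can be computed as the Pearson correlation between the two raters' essay totals. The sketch below is our illustration, not the authors' procedure, and the score lists are hypothetical:

```python
# Illustrative sketch: inter-rater reliability as the Pearson correlation
# between two raters' essay totals. The scores below are hypothetical.
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rater1 = [62, 71, 55, 80, 68, 74]  # hypothetical 0-100 essay totals
rater2 = [60, 73, 57, 78, 70, 72]
print(round(pearson_r(rater1, rater2), 2))
```

Values close to 1.0, like the indices reported above, indicate that the two raters ranked the essays very similarly.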

Writing feedback literacy scale

Students' feedback literacy was measured with the L2 student writing feedback literacy scale
(L2-SWFLS), developed and validated by Yu et al. (2022). The scale comprises 28 items (e.g. I appreciate
the role of feedback in continually improving work, rated from 1 = strongly disagree to 5 = strongly
agree) covering five dimensions: appreciating feedback, recognizing diverse feedback sources, forming
judgments, controlling affect, and taking action. High correlations among the five dimensions and the
five components in the factorial structure of student writing feedback literacy supported the construct's
multifaceted and interconnected nature (Yu et al., 2022). To ensure that the items were understandable,
they were translated into Persian, the participants' native language. We enlisted two official translators
to safeguard the questionnaire's content validity, and forward and backward translation was applied to
some items to confirm that the translation accurately captured the respondents' opinions. Lastly, the
translated questionnaire was piloted on 15 EFL learners before distribution and showed acceptable
reliability (α = .93).

Writing engagement scale

In this study, we utilized an event-level, scenario-based measure of L2 writing engagement. With a
particular past timepoint in mind (i.e. their most recent writing class session), participants responded
to three scenarios designed to elicit information on their behavioral, emotional, and cognitive
involvement. We adapted these scenarios from Martin's (2009) Motivation and Engagement Scale. For
each scenario, participants rated their level of participation in that class session on a scale of 0 to 100,
and the ratings were averaged across the domains to form a composite measure of L2 writing
engagement. In the current sample, the adapted scale showed high reliability (α = .92).
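As a minimal sketch of the scoring logic described above (our illustration; the domain ratings below are invented, not participant data):

```python
# Composite L2 writing engagement as the mean of three 0-100 domain
# ratings (behavioral, emotional, cognitive). Ratings here are invented.

def engagement_composite(behavioral: float, emotional: float, cognitive: float) -> float:
    """Average the three domain ratings into one 0-100 engagement score."""
    for rating in (behavioral, emotional, cognitive):
        if not 0 <= rating <= 100:
            raise ValueError("each rating must fall on the 0-100 scale")
    return (behavioral + emotional + cognitive) / 3

print(engagement_composite(70, 55, 64))  # 63.0
```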

Semi-structured interview
Four students from each of the control and experimental groups were randomly chosen by the
researchers to participate in the qualitative phase. All of the students’ names were placed in a
pot, and four names were then drawn at random from the pot for each group in order to assure
random sampling. The goal of the qualitative phase of the research is to help elaborate and
thoroughly explain the findings of the statistical analysis (Creswell & Creswell, 2017). The author who
conducted the interviews was not the students' instructor; this author elicited the students' opinions
and sought to understand the reasoning behind them (see Appendix 1 for more details).
The interviewer also asked several detailed questions about the benefits and drawbacks of this
style of teaching and learning. These comprised open-ended and yes/no questions (See Appendix
1). To ensure deeper comprehension by the learners, the interviews were conducted in Persian.
The researcher audio-recorded and translated the interviews, each of which lasted about 20–25
minutes. To help participants accurately recall their experiences and emotions, the interviews were
conducted 24 hours after the last course session at the language institute. To ensure objectivity, the
data were independently coded by two experts; inter-rater reliability estimates indicated acceptable
consistency (r = .90), and uncertain points were discussed until agreement was reached. Cohen's
Kappa (.91), a chance-corrected measure of inter-rater agreement, further confirmed dependability.
An external auditor, a professor with experience in mixed-methods research, reviewed the final data.
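Cohen's Kappa corrects raw percentage agreement for the agreement two coders would reach by chance. A small sketch of the computation (the theme codes below are hypothetical, not the study's transcripts):

```python
# Cohen's kappa for two coders' categorical theme assignments.
# The codes below are hypothetical, not the study's interview data.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected agreement if both coders assigned codes independently
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["feedback", "enjoyment", "feedback", "access", "access", "feedback"]
b = ["feedback", "enjoyment", "feedback", "access", "feedback", "feedback"]
print(round(cohens_kappa(a, b), 2))
```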

Data analysis
SPSS V. 22 was used to further analyze the quantitative data gathered from the timed writing test,
feedback literacy, and writing engagement scales. Using descriptive and inferential statistics, the
data were analyzed. Descriptive analysis, including mean, standard deviation, skewness, and kurtosis,
is commonly used to summarize and interpret the characteristics of a dataset. It is a type of statistical
analysis that provides information about the central tendency and dispersion of a dataset. On the
other hand, inferential statistics, including one-way ANCOVA, is a set of mathematical tools and tech-
niques that are used in data analysis to draw conclusions about the population based on a sample.

The logic behind inferential statistics is that it is not always practical or feasible to study an entire
population, so we often only have a sample of the population. Although we may observe and
analyze the behavior of the sample, what we truly seek is information or insights about the whole
population.
The qualitative data were examined using grounded theory (Urquhart, 2022). The audio
recordings of the interviews were transcribed verbatim into a Word document. Member checking
was carried out to ensure the reliability of the transcriptions (Dörnyei, 2014). After collecting the
participants' feedback, the transcripts underwent a three-stage thematic analysis to answer the
research question. The researchers initially read the transcriptions several times to identify the key themes
that captured the participants’ perspectives on their time in the writing class (i.e. the theme assign-
ment stage). After that, content analysis revealed the major categories and themes. The themes were
then integrated, altered, or eliminated, and themes with comparable material were merged into a
single category (i.e. the categorization stage). The researcher then reviewed the transcripts while
underlining the crucial details of the students’ anecdotes. In the margins of the transcripts,
several labels and notes were also utilized. Last but not least, a label corresponding to each cate-
gory’s underlying content was provided (i.e. the labeling stage). Interrater reliability (Cohen’s
Kappa = .93) was used to establish reliability.

Results
Quantitative phase
First, the statistical power of the study was calculated from several factors: the sample size, the
effect sizes for writing feedback literacy, writing engagement, and writing outcome, the significance
level (alpha), and the desired power (usually 80% or 90%). Assuming effect sizes of d = 0.5, 0.6, and
0.45 for writing feedback literacy, engagement, and outcome respectively, an alpha level of 0.05, and
a desired power of 80%, the power analysis indicated that a sample size of 47 participants would be
needed. This means that if there truly is a moderate effect on writing feedback literacy, engagement,
and outcome, there is an 80% chance the study will detect it as statistically significant, and only a
20% chance of a Type II error (failing to detect a true effect).
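An a-priori calculation of this kind can be approximated as follows. This is a rough, stdlib-only sketch using the normal approximation to the two-sample t-test; the figures it produces are ours and need not match the 47 reported above, which depends on the exact test and software used:

```python
# Approximate per-group sample size for a two-group comparison at a given
# Cohen's d, alpha, and power (normal approximation; illustrative only).
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Smallest per-group n detecting effect size d with the given power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_power = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

for d in (0.5, 0.6, 0.45):  # the effect sizes named in the text
    print(d, n_per_group(d))
```

Note how the required sample size grows as the assumed effect shrinks, which is why the smallest effect size usually drives the planning.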
Descriptive statistics and statistical tests were employed in order to respond to the first and
second research questions and assess the impact of the AI application intervention on the writing
engagement, feedback literacy, and outcome of L2 students’ writing skills. The mean scores for
the experimental group significantly increased from the pretest to the posttest, as shown in Table 1.
The skewness and kurtosis values were well within the ±1.5 range, indicating that the normality
assumptions were met (Tabachnick & Fidell, 2013). As the Kolmogorov–Smirnov tests (i.e. .15, .12,
.14, .13, .11, and .16) likewise revealed no violations of normality, analysis of covariance (ANCOVA)
was utilized to address the study's central question. One-way ANCOVA may be preferred over an
independent t-test because it allows for controlling a continuous covariate. In other words, it takes into consideration

Table 1. Writing Test, Feedback Literacy, and Engagement Scores of Two Classes Before and After Intervention
Pre-Intervention Post-Intervention d
N M SD Skew Kurt M SD Skew Kurt
Writing Outcome Experimental 23 38.3 .66 .45 −1.24 78.3 .69 −.24 −.67 .79
Control 23 38.8 .77 −.32 .88 41.9 .63 .81 −.47 .02
Writing Feedback Literacy Experimental 23 3.3 .75 .86 .73 4.16 .17 .78 .74 .69
Control 23 3.2 .91 1.04 .89 3.67 .53 .68 .91 .05
Writing Engagement Experimental 23 35.4 1.05 .46 .73 67.9 .76 .79 .55 .63
Control 23 36.5 .53 −.35 −.52 36.7 .45 1.02 1.02 .07

the effect that a continuous variable has on the dependent variable, while also comparing differ-
ences between groups. This is important because the covariate may affect the results and,
without controlling for it, could lead to misleading or erroneous conclusions (Tabachnick & Fidell,
2013). For the ANCOVA, the covariate reliability assumption was verified for the three variables
(.95, .93, .94).
There was no significant interaction between the covariate and the group factor, indicating that
the homogeneity-of-regression-slopes assumption was not violated (F (1, 46) = 1.40, p = .304;
F (1, 46) = 1.60, p = .224; F (1, 46) = 1.35, p = .124). A one-way ANCOVA was then used to investigate
the impact of the AI-enhanced intervention on the students' posttest writing outcome, writing
feedback literacy, and writing engagement while controlling for any pretest (covariate) differences
(Table 2).
The ANCOVA writing test results demonstrated a significant impact of the intervention on the
writing outcome of the experimental group, which outperformed the control group, F (1, 45) =
39.82, p = .000, partial η2 = .157. Similar to this, ANCOVA’s findings on the writing feedback literacy
scores revealed a significant impact, F (1, 45) = 9.23, p = .011, and partial η2 = .141. The writing
engagement scores were shown to have a significant impact, F (1, 45) = 37.11, p = .000, partial η2
= .161, demonstrating that the experimental group outperformed the control group. Overall, the
findings seem to support the positive impact of the AI-enhanced writing classroom on the develop-
ment of L2 writing abilities, feedback literacy, and writing engagement.
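The logic of the one-way ANCOVA reported above can be sketched from scratch: fit a full model (group + pretest covariate) and a reduced model (covariate only), F-test the group effect from the drop in error sum of squares, and derive partial eta squared. This is our illustration with invented scores, not the study's data or SPSS output:

```python
# From-scratch one-way ANCOVA sketch (two groups, one covariate).
# Data below are invented and do not reproduce the study's scores.

def ols_sse(X, y):
    """Sum of squared errors from an OLS fit via the normal equations."""
    k, n = len(X[0]), len(y)
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    c = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]
    for p in range(k):                      # Gaussian elimination
        for q in range(p + 1, k):
            f = A[q][p] / A[p][p]
            for r in range(k):
                A[q][r] -= f * A[p][r]
            c[q] -= f * c[p]
    b = [0.0] * k                           # back substitution
    for p in reversed(range(k)):
        b[p] = (c[p] - sum(A[p][q] * b[q] for q in range(p + 1, k))) / A[p][p]
    return sum((y[i] - sum(b[q] * X[i][q] for q in range(k))) ** 2 for i in range(n))

# group: 0 = control, 1 = experimental; pretest is the covariate
pretest  = [38, 40, 37, 39, 41, 38, 40, 39]
group    = [0, 0, 0, 0, 1, 1, 1, 1]
posttest = [42, 43, 40, 41, 77, 75, 79, 78]

full    = [[1, g, x] for g, x in zip(group, pretest)]   # y ~ group + pretest
reduced = [[1, x] for x in pretest]                     # y ~ pretest
sse_full, sse_red = ols_sse(full, posttest), ols_sse(reduced, posttest)
df_error = len(posttest) - 3
F = (sse_red - sse_full) / (sse_full / df_error)        # 1 df for the group effect
partial_eta_sq = (sse_red - sse_full) / sse_red         # SS_effect / (SS_effect + SS_error)
print(round(F, 1), round(partial_eta_sq, 3))
```

With a large group difference and a near-constant covariate, as here, the group F is very large and partial eta squared approaches 1; real classroom data yield far smaller values.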

Qualitative phase
Information from the follow-up interviews was used to substantiate and explain the quantitative
findings and answer the third research question. The interviews’ audio recordings were first trans-
lated into a Word document. As advised by Dörnyei (2007), member verification was done to
make sure the transcriptions were accurate. The transcripts of the participants’ interviews were
then made available to them so they could check the data for accuracy and make any required
adjustments. The transcript was then carefully analyzed by the researchers to find themes that
would make good study questions. In order to address the research topic, the transcripts underwent
a three-stage theme analysis after taking participant input into account. The authors read through
the transcripts several times during the topic assignment stage to find the major themes that
best reflected the participants’ experiences in the pronunciation learning class. The next step was
to conduct a content analysis to identify the main categories and subjects. At the categorization
stage, topics that shared information or had overlapping topics were combined to form a single cat-
egory. During the labeling stage, the researcher underlined crucial information from the students’
narratives in the transcripts’ margins and applied a label corresponding to each category’s under-
lying material. Codes and snippets were categorized in order to create the major categories for
the learner interviews. A second expert in the qualitative area read the data and independently
coded it to ensure objectivity. Consistency was deemed to be good by inter-rater reliability esti-
mation with r = .88. Uncertain points were debated until consensus was obtained. With Cohen’s

Table 2. ANCOVAs for the Students’ Post-Intervention Writing Test, Feedback Literacy, and Engagement Scores.
Dimensions Group Sources Sum of Squares df Mean Square F Sig. Partial Eta Square
Writing Outcome Experimental Group 38.65 1 45.55 39.82 .000 .157
Control Total 8431.00 45
a. R2 = .88 (Adjusted R2 = .87)
Writing Feedback Literacy Experimental Group 54.46 1 3.46 9.23 .011 .141
Control Total 9312.00 45
a. R2 = .79 (Adjusted R2 = .78)
Writing Engagement Experimental Group 88.13 1 47.34 37.11 .000 .161
Control Total 9571.00 45
a. R2 = .89 (Adjusted R2 = .88)

Kappa = .94, the inter-rater reliability supported the dependability of the data. The final data were
then reviewed by an outside expert, a professor with experience in mixed-methods research.
The students' retrospective opinions of the face-to-face instruction were divided into one positive
and three negative themes.

1. Your peers can compare and share notes with you. It seems that the interviewees shared a
general consensus that collaboration and sharing information among peers is important. They
stated that being able to compare and share notes with colleagues is a valuable tool for learning,
growth, and improving work performance. The interview data suggests that the team environ-
ment allows an opportunity for individuals to learn new perspectives and ideas from others.
As a result, they can generate better solutions to work problems, enhance their own skills, and
contribute more effectively to the overall success of their team. For example:

The short answer to the question “Should I share notes?” is yes. Working with classmates is well
worth the time and effort it takes to attend a face-to-face class.

2. Limited accessibility to feedback. Interviewees reported that without regular access to feed-
back, individuals may struggle to identify areas for improvement and could potentially plateau
in their progress towards achieving their goals. Limited accessibility to feedback can also lead
to a lack of communication and understanding between team members and stakeholders,
making it difficult to collaborate effectively. Implementing technology solutions that allow indi-
viduals to anonymously provide feedback can also create a safe space for honest and constructive
criticism. For example:

It’s more challenging to ask questions and make comments in real time due to the pressure of being
in a group setting.

3. Time constraints for receiving feedback. The majority of students emphasized the importance
of receiving feedback promptly, while teachers find themselves struggling to deliver timely feed-
back due to workload and other factors. In addition, several students pointed out that if they did
not receive feedback within a reasonable amount of time, they often lost motivation and focus on
the task at hand. In some cases, they felt that their instructor did not value their contributions as
they had not received any feedback. For example:

I didn’t like traditional face-to-face learning at all. Face-to-face classes require students to be avail-
able at specific times and days, which often conflict with work or family schedules.

4. Lower writing enjoyment. Some participants expressed feeling disinterested or bored when
writing, which negatively impacted their motivation to complete written tasks. Some participants
suggested that writing prompts or new feedback writing techniques could potentially increase
their engagement and enjoyment of writing. Others reported feeling anxious or stressed about
their writing abilities, leading to feelings of self-doubt and decreased enjoyment. For example:

Our writing enjoyment was limited in the classroom since writing is often done alone and requires an
isolated focus, which can rarely exist within spacious classrooms filled with other students.

Approximately 95% of the participants in the experimental group endorsed this approach to learning
L2 writing skills. It is interesting to note that students' reflections focused both on the ways
AI-enhanced affordances increased their awareness of themselves and others and on the perceived
advantages of collaborative contexts for raising feedback literacy and writing engagement. Students
who used AI-enhanced learning explicitly mentioned the following instructional actions in their
reflection journals:

1. Personalized learning and feedback. Students reported that personalized learning provided
more opportunities for student empowerment and ownership of their learning. This helped to
engage them, improve their motivation and increase their academic performance. Teachers
must have adequate training and support to create personalized learning environments, utilize
effective assessment tools and provide meaningful feedback. For example:

I now have a better understanding of my activities. In other words, I now have a better sense of who I
am. I previously knew that feedback would help me expand my open area, but after doing a self-evalu-
ation, I have concluded that self-disclosure is equally crucial for AI-enhanced learning.

2. Receiving spontaneous feedback. The majority of the participants embraced spontaneous
feedback, acknowledging that it gives them honest, direct, and timely insights into their performance. Some
participants shared that receiving spontaneous feedback can sometimes feel overwhelming or
stressful, especially when the feedback is critical or surprising. A few participants mentioned
that receiving feedback only a few times a year was not enough for growth and development,
and they would prefer more frequent spontaneous feedback to help them make improvements
faster. For example:

Whenever I accept feedback from the app, I’ve learned to be more open and responsive. I am aware
that feedback is provided so I may become a better learner. Getting feedback also enables me to recog-
nize hidden weaknesses in myself and what I can do to become a better version of myself.

3. An iterative and cyclical process of reflection and feedback. In the interviews, participants
emphasized the importance of an iterative and cyclical process of reflection and feedback in their
work. They mentioned that this approach allows for constant improvement and growth in both
personal and professional spheres. They also stressed the importance of being proactive in seeking
out feedback, suggesting that regularly asking for feedback from colleagues,
superiors, or even clients/customers can help identify blind spots and areas for improvement. For
example:

Although it was difficult for me to look critically at myself and my behavior in my first diary and to
consider the feedback offered by the AI application, doing the reflection exercise a second time was much
simpler. Also, I think that this time I have more clearly outlined my strategy for subsequent exchanges,
making it simpler to put into practice.

Discussion
This study makes several suggestions for boosting L2 student writing feedback literacy, writing
engagement, and writing outcome in the particular setting of employing Wordtune, an AI-enhanced
tool. The use of AI (e.g. the Wordtune App) could improve student writing feedback engagement by
encouraging a dialogic process between learner and machine to foster a sense of individual learning
pace and make students’ voices heard (Wongvorachan et al., 2022). This would improve writing feed-
back literacy from an outside perspective (i.e. from the machine’s side). In an AI environment, the
incorporation of semi-automatic feedback software, such as Wordtune, may provide learning analyti-
cal data that instructors can use to create relatable feedback that boosts student engagement by
taking appropriateness and context into account (Ma et al., 2021; Winstone & Carless, 2020). Also,
using predictors that are flexible and founded on learning theory, such as formative assignments,
could help to learn analytics and improve feedback literacy in the AI context (Yu & Liu, 2021). More-
over, through AI-based peer feedback, students may improve their evaluative judgment of their
writing skills. Overall, the utilization of AI-enhanced learning, personalized feedback, and creative
feedback strategies may be more well-received by students, boosting their writing feedback

engagement and consequently their writing skills. One significant advantage of using AI-enhanced
writing programs is that the software can provide instant feedback to students regarding their
writing performance. In contrast, non-AI-enhanced writing classes are time-consuming and lack
the option of providing immediate feedback. Students do not have to wait for their teacher’s
return of an assignment in physical form or for feedback given through email, which may take
days or weeks. Instead, they receive immediate feedback that they can use to improve their
writing skills in real-time. Moreover, this instant feedback is based on objective criteria that the AI
program uses to measure their writing. Therefore, students receive precise feedback that they can
trust and be sure to follow to achieve better outcomes.
AI has proven to be a powerful tool when it comes to writing feedback literacy and writing
engagement in this study. AI capabilities are broad-reaching, from natural language processing
and text generation to intelligent search algorithms and automated writing evaluation (Chen
et al., 2021). In terms of providing feedback on student writing, AI can be incredibly effective in
areas such as grammar correction, spelling correction, style advice, tone identification, and more.
AI tools allow for the automated detection and correction of errors without the need for human
input or intervention. This reduces the time authors spend manually checking their topics for mis-
takes while allowing them to focus more on perfecting their ideas. AI can also provide personalized
feedback tailored to an individual’s needs and level of writing expertise. When considering AI-
enhanced writing in L2 instruction versus non-AI-enhanced writing classes, there are noteworthy
differences in writing feedback literacy, engagement, and outcome. While non-AI-enhanced
writing settings provide feedback from teachers and peers that may not always be reliable,
accurate, or readily accessible, AI-enhanced feedback is more precise, objective, and accessible. This
encourages learners by providing consistent feedback and reducing confusion from multiple
sources. AI can also increase students’ engagement levels as it provides more interactivity, enhan-
cing their confidence and encouraging them to participate fully. Additionally, AI-supported
language classes encourage critical thinking and individual feedback, promoting higher productivity
and performance. Personalization of feedback according to specific needs analysis can make learners
feel validated and recognized, resulting in improved outcomes. This result is supported by other
researchers in the language learning context (e.g. Bibauw et al., 2019; Divekar et al., 2022; Riedl,
2019). For instance, Bibauw et al. (2019) demonstrated the effectiveness of human-machine
interaction in improving language learners' motivation and outcomes. Furthermore,
AI has shown remarkable efficacy in providing feedback to students learning Chinese as a
foreign language, according to Divekar et al. (2022). Riedl (2019) suggests that AI systems should be
designed to enhance human capabilities instead of replacing them, which can significantly improve
human learning. When it comes to feedback literacy in writing, AI tools must be developed and uti-
lized to empower learners and enable them to give and receive high-quality feedback, potentially
similar to the Wordtune used in this study.
AI can also be used to enhance writing engagement by providing writers with automated story
generators that help them generate ideas while they’re stuck in a creative lull. It can even suggest
titles based on the topics being discussed; this is particularly useful for those who find themselves
struggling with finding appropriate titles or names for characters within fiction pieces. AI-based bots
can also engage in conversation with writers so they get a better sense of the overall story they are
creating, as well as create audio versions of stories that can then be shared or listened to at any time.
In face-to-face writing classes, students have direct interaction with their instructors, who can
provide timely and personalized feedback. This feedback is often accompanied by explanations
and discussions to help learners understand what they need to improve. This face-to-face interaction
may allow for questions and clarifications, but is not sufficient to enhance learners’ writing feedback
literacy, engagement, and outcome. On the other hand, AI-enhanced writing tools provide instant
writing feedback on various aspects of writing, such as grammar, syntax, and vocabulary. These tech-
nologies use algorithms to analyze texts and provide suggestions for improvement. While they may
lack the personal touch of an instructor, they offer great convenience by providing writing feedback

almost immediately. However, learners must possess a certain level of writing feedback literacy to
interpret and act on the feedback provided by AI-enhanced tools. To best support L2 academic
writing development, writing feedback literacy, and writing engagement, AI-enhanced tools need
to be used in conjunction with face-to-face instruction.
For instance, teachers can integrate AI-enhanced tools into classroom activities such as peer
review, where students can use these tools to check each other’s work. Teachers can also use AI-
enhanced tools to generate initial feedback on student writing, which can then be discussed and
elaborated on in class to help learners understand the areas in need of improvement. The integration
of AI-enhanced tools should prioritize learners’ writing feedback literacy and engagement rather
than replacing traditional teaching methods. Learners need to develop a deeper understanding of
the rationale behind feedback, which can only be achieved through conversations and interactions
with experienced instructors. Combining AI-enhanced tools with face-to-face instruction in a peda-
gogically sound way can help learners develop writing skills while also becoming feedback-literate
individuals. Other studies have similarly confirmed the effectiveness of AI-based applications in
fostering language learners' engagement (e.g. Engwall & Lopes, 2022; Huang et al., 2023a). For example,
Engwall and Lopes (2022) have confirmed the effectiveness of robot-assisted language learning
and machine learning in adult L2 learners. They also highlighted the role of their interaction in
enhancing the learning process and increasing engagement. Similarly, according to Huang et al.
(2023a), AI is effectively utilized to help students acquire writing skills through techniques such as
natural language processing, automated speech recognition, and learner profiling. These techniques
are widely used to create automated writing assessments, bespoke learning, and intelligent tutoring
systems that can help increase student engagement.
In addition, AI provides numerous advantages over non-AI-enhanced methods when it comes to
providing writing feedback that promotes literacy and enables writers to have more enjoyable
writing experiences. It allows authors to spend less time worrying about errors during proofreading
sessions while also freeing up space for idea expansion, testing out strategy hypotheses quickly and
efficiently via simulations, and having access to various datasets (e.g. using knowledge graphs),
which allows the author more room for creativity when constructing stories or articles that appeal
to their target audience. This is in line with other researchers’ reports (e.g. Chiu et al., 2022; Chiu
et al., 2023). For example, Chiu et al. (2022) found that AI has the potential to improve students'
learning outcomes in art, including achievement, acceptance of technology, attitude toward learn-
ing, motivation, self-efficacy, satisfaction, and performance. Furthermore, in a subsequent study
(Chiu et al., 2023), the researchers demonstrated that combining teacher support with student
expertise in self-regulated learning and digital literacy increased intrinsic motivation and the
ability to learn through AI.
AI can also be useful for providing individualized instruction tailored to each student’s level and
unique needs. For instance, some AI programs could be used as virtual tutors, able to provide tai-
lored practice exercises based on the individual student’s progress or mistakes they make while
working individually. Additionally, automatic essay scoring capabilities can offer feedback quickly
and consistently, which could assist students who are writing essays during class time or at home
in studying for tests more efficiently and effectively. Moreover, the effectiveness of AI as an indivi-
dualized learning process has been reported, which supports this study’s result (Moussalli &
Cardoso, 2020; Underwood, 2017). For example, according to Moussalli and Cardoso (2020), the
use of AI assistants can help learners focus on their learning goals and improve their results. This
approach has also been supported by Underwood (2017), who reports that AI language assistants
have been used to encourage EFL students to conceptualize and create ideas for classroom activities.
The study further reveals that the use of AI assistants has led to increased student engagement as
they found it enjoyable to communicate with these assistants. Therefore, the use of AI assistants
is an innovative approach to optimize the learning experience and drive positive outcomes.
AI’s predictive features can also help novice writers generate ideas faster. As AI advances, even
more sophisticated capabilities like summarizing long, complex documents, identifying sentiment

analysis, and building personal writing styles are becoming available. Crucially, AI isn’t designed to
replace human creativity; instead, it liberates writers from the mundane tasks associated with the
written word so that they can enjoy an even richer writing experience. Through automation and
other advanced features, AI can help writers become more productive and efficient while challenging them with exciting new concepts to explore. In this way, AI has the potential to make it easier for anyone who enjoys writing to succeed in their chosen medium, while also helping experienced professionals keep up with the cutting edge of their field (Ai, 2017; Jurs &
Špehte, 2021; Kulik & Fletcher, 2016).

Conclusion
This study elucidates ideas that might improve second-language learners’ writing feedback literacy,
writing engagement, and writing outcomes and contextualizes them within the perspective of AI.
The suggested concepts might benefit the field on both a broad and a focused level. This study
might, on the surface, be used as a guide to enhance feedback efficacy and its perceived value
for the efficient use of AI to enhance writing feedback literacy, thereby reducing the tension
between instructors’ feedback practices and students’ expectations through personalized feedback.
Instructors should adopt dialogic teaching, where student-teacher interaction is less emphasized to
establish tolerance and acceptance with students as active members of the process instead of boost-
ing themselves to a higher position with feedback as a unilateral process, to best promote and
sustain student writing feedback literacy. In sum, Wordtune is well regarded for supporting students
in their understanding of such information during the writing process and benefits from new data
displayed in AI.
There are a couple of limitations to our study, despite the important insights regarding writing
feedback literacy, writing engagement, and writing outcome in AI that have been provided. The
designs and strategies presented in this paper were built on the fundamental principles of a single AI application, the Wordtune app. Yet each AI application's qualities vary with the goals and topic of a course, so for each particular application of AI, researchers should develop ways to improve feedback literacy that are as context-relevant as possible. This paper also treats upper-intermediate L2 learners as the testing context for our strategy; when the curriculum changes, the application of AI at other educational levels may differ. Furthermore, because this is a novel issue, this study relies on a limited literature review rather than a comprehensive literature database.
Future studies might generate theoretical and empirical evidence of the usefulness of feedback from various AI apps for encouraging feedback literacy in L2 learners as such technology gains broader social acceptance, particularly among the younger generation. To avoid the pitfalls of assimilationist practice through AI apps, researchers might also consider contextualizing the framework
more deeply from an intercultural perspective, particularly in an international setting where peda-
gogical practice and student approaches to education vary from one geographical location to
another. Moreover, teachers may employ AI apps to create a framework for different L2 skills that
would help students build feedback literacy. One example of this would be the use of automatically
graded programming assignments in testing classes.
In addition, AI can be a valuable asset for enhancing students’ writing skills by improving the
quality of their feedback. The Wordtune application can offer real-time feedback, increasing
student engagement levels and encouraging them to take control of their learning by identifying
areas for improvement. Such personalized learning experiences are essential in promoting better
performance by developing key habits and techniques. Using AI also allows for more objective
and consistent evaluations of work, thus avoiding biases or inconsistencies arising from traditional
human-based grading systems. By using technology to support feedback, teachers can save time
and direct their energies towards other important teaching aspects, such as delivering engaging
lessons or addressing individual student needs. However, successful implementation requires suit-
able resources, infrastructure, and training for both students and instructors, a vital consideration
INTERACTIVE LEARNING ENVIRONMENTS 19

for adopting any technological innovation. To promote engagement and achievement, the Word-
tune application should be integrated into the curriculum, particularly in language and writing
courses. This will encourage students to use the tool regularly and apply the feedback provided
to their writing assignments, which is crucial to developing writing skills. The use of Wordtune
can be enhanced by encouraging peer review and collaboration. Students can work in groups
and use the tool to provide feedback on each other’s writing assignments, further improving
their writing skills and fostering a sense of community.

Note
1. https://www.wordtune.com/

Disclosure statement
No potential conflict of interest was reported by the author(s).

ORCID
Hanieh Shafiee Rad https://orcid.org/0000-0002-8244-0637
Rasoul Alipour https://orcid.org/0000-0003-2024-1103

References
Ai, H. (2017). Providing graduated corrective feedback in an intelligent computer-assisted language learning environment. ReCALL, 29(3), 313–334. https://doi.org/10.1017/S095834401700012X
Bandura, A. (1986). The explanatory and predictive scope of self-efficacy theory. Journal of Social and Clinical Psychology, 4(3), 359–373. https://doi.org/10.1521/jscp.1986.4.3.359
Banister, C. (2020). Exploring peer feedback processes and peer feedback meta-dialogues with learners of academic and business English. Language Teaching Research, 27, 746–764. https://doi.org/10.1177/1362168820952222
Berendt, B., Littlejohn, A., & Blakemore, M. (2020). AI in education: Learner choice and fundamental rights. Learning, Media and Technology, 45(3), 312–324. https://doi.org/10.1080/17439884.2020.1786399
Bibauw, S., François, T., & Desmet, P. (2019). Discussing with a computer to practice a foreign language: Research synthesis and conceptual framework of dialogue-based CALL. Computer Assisted Language Learning, 32(8), 827–877. https://doi.org/10.1080/09588221.2018.15355
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. https://doi.org/10.1080/02602938.2018.1463354
Chen, X., Zou, D., Xie, H., & Cheng, G. (2021). Twenty years of personalized language learning. Educational Technology & Society, 24(1), 205–222.
Chiu, M. C., Hwang, G. J., Hsia, L. H., & Shyu, F. M. (2022). Artificial intelligence-supported art education: A deep learning-based system for promoting university students' artwork appreciation and painting outcomes. Interactive Learning Environments, 1–19. https://doi.org/10.1080/10494820.2022.2100426
Chiu, T. K., Moorhouse, B. L., Chai, C. S., & Ismailov, M. (2023). Teacher support and student motivation to learn with artificial intelligence (AI) based chatbot. Interactive Learning Environments, 1–17. https://doi.org/10.1080/10494820.2023.2172044
Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Sage Publications.
Divekar, R. R., Drozdal, J., Chabot, S., Zhou, Y., Su, H., Chen, Y., … Braasch, J. (2022). Foreign language acquisition via artificial intelligence and extended reality: Design and evaluation. Computer Assisted Language Learning, 35(9), 1–29. https://doi.org/10.1080/09588221.2021.1879162
Dizon, G. (2020). Evaluating intelligent personal assistants for L2 listening and speaking development. Language Learning & Technology, 24(1), 16–26. https://doi.org/10.125/44705
Dong, Z., Gao, Y., & Schunn, C. D. (2023). Assessing students' peer feedback literacy in writing: Scale development and validation. Assessment & Evaluation in Higher Education, 1–16. https://doi.org/10.1080/02602938.2023.2175781
Dörnyei, Z. (2007). Research methods in applied linguistics: Quantitative, qualitative and mixed methodologies. Oxford University Press.
Dörnyei, Z. (2014). Researching complex dynamic systems: ‘Retrodictive qualitative modelling’ in the language classroom. Language Teaching, 47(1), 80–91. https://doi.org/10.1017/S0261444811000516
Elola, I., & Oskoz, A. (2017). Writing with 21st century social tools in the L2 classroom: New literacies, genres, and writing practices. Journal of Second Language Writing, 36, 52–60. https://doi.org/10.1016/j.jslw.2017.04.002
Engwall, O., & Lopes, J. (2022). Interaction and collaboration in robot-assisted language learning for adults. Computer Assisted Language Learning, 35(5–6), 1273–1309. https://doi.org/10.1080/09588221.2020.1799821
Greeno, J. G., Collins, A., & Resnick, L. B. (1996). Cognition and learning. In D. C. Berliner & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 15–46). Macmillan.
Han, Y., & Xu, Y. (2020). The development of student feedback literacy: The influences of teacher feedback on peer feedback. Assessment & Evaluation in Higher Education, 45(5), 680–696. https://doi.org/10.1080/02602938.2019.1689545
Han, Y., & Xu, Y. (2021). Student feedback literacy and engagement with feedback: A case study of Chinese undergraduate students. Teaching in Higher Education, 26(2), 181–196. https://doi.org/10.1080/13562517.2019.1648410
Huang, A. Y., Lu, O. H., & Yang, S. J. (2023b). Effects of artificial intelligence–enabled personalized recommendations on learners' learning engagement, motivation, and outcomes in a flipped classroom. Computers & Education, 194, 104684. https://doi.org/10.1016/j.compedu.2022.104684
Huang, X., Zou, D., Cheng, G., Chen, X., & Xie, H. (2023a). Trends, research issues and applications of artificial intelligence in language education. Educational Technology & Society, 26(1), 112–131. https://doi.org/10.30191/ETS.202301_26(1).0009
Hyland, K. (2003). Second language writing. Cambridge University Press.
Hyland, K., & Hyland, F. (2019). Contexts and issues in feedback on L2 writing. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing: Contexts and issues (pp. 1–22). Cambridge University Press. https://doi.org/10.1017/9781108635547.003
Jurs, P., & Špehte, E. (2021). The role of feedback in the distance learning process. Journal of Teacher Education for Sustainability, 23(2), 91–105. https://doi.org/10.2478/jtes-2021-0019
Kulik, J. A., & Fletcher, J. D. (2016). Effectiveness of intelligent tutoring systems: A meta-analytic review. Review of Educational Research, 86(1), 42–78. https://doi.org/10.3102/0034654315581420
Lee, I. (2017). Working hard or working smart: Comprehensive versus focused written corrective feedback in L2 academic contexts. In I. Lee (Ed.), Teaching writing for academic purposes to multilingual students (pp. 182–194). Routledge. https://doi.org/10.4324/9781315269665-11
Li, F., & Han, Y. (2022). Student feedback literacy in L2 disciplinary writing: Insights from international graduate students at a UK university. Assessment & Evaluation in Higher Education, 47(2), 198–212. https://doi.org/10.1080/02602938.2021.1908957
Lin, C. J., & Mubarok, H. (2021). Learning analytics for investigating the mind map-guided AI chatbot approach in an EFL flipped speaking classroom. Educational Technology & Society, 24(4), 16–35.
Ma, J., Wang, C., & Teng, M. F. (2021). Using learning-oriented online assessment to foster students' feedback literacy in L2 writing during COVID-19 pandemic: A case of misalignment between micro- and macro-contexts. The Asia-Pacific Education Researcher, 30, 597–609. https://doi.org/10.1007/s40299-021-00600-x
Martin, A. J. (2009). Motivation and engagement across the academic life span: A developmental construct validity study of elementary school, high school, and university/college students. Educational and Psychological Measurement, 69(5), 794–824. https://doi.org/10.1177/0013164409332214
Mason, M. (2008). Complexity theory and the philosophy of education. Educational Philosophy and Theory, 40(1), 4–18. https://doi.org/10.1111/j.1469-5812.2007.00412.x
Miao, Z., Zhong, H., Wang, Y., Zhang, H., Tan, H., & Fierro, R. (2022). Low-complexity leader-following formation control of mobile robots using only FOV-constrained visual feedback. IEEE Transactions on Industrial Informatics, 18(7), 4665–4673. https://doi.org/10.1109/TII.2021.3113341
Molloy, E., Boud, D., & Henderson, M. (2020). Developing a learning-centered framework for feedback literacy. Assessment & Evaluation in Higher Education, 45(4), 527–540. https://doi.org/10.1080/02602938.2019.1667955
Moussalli, S., & Cardoso, W. (2020). Intelligent personal assistants: Can they understand and be understood by accented L2 learners? Computer Assisted Language Learning, 33(8), 865–890. https://doi.org/10.1080/09588221.2019.1595664
Riedl, M. O. (2019). Human-centered artificial intelligence and machine learning. Human Behavior and Emerging Technologies, 1(1), 33–36. https://doi.org/10.48550/arXiv.1901.11184
Schommer, M. (1990). Effects of beliefs about the nature of knowledge on comprehension. Journal of Educational Psychology, 82(3), 498–504. https://doi.org/10.1037/0022-0663.82.3.498
Skinner, B. F. (1953). Some contributions of an experimental analysis of behavior to psychology as a whole. American Psychologist, 8(2), 69–78. https://doi.org/10.1037/h0054118
Soles, D. (2005). The academic essay: How to plan, draft, write and edit. Studymate Limited.
Stake, R. E. (1995). The art of case study research. Sage.
Sun, H., & Wang, M. (2022). Effects of teacher intervention and type of peer feedback on student writing revision. Language Teaching Research. Advance online publication. https://doi.org/10.1177/13621688221080507
Sutton, P. (2012). Conceptualizing feedback literacy: Knowing, being, and acting. Innovations in Education and Teaching International, 49(1), 31–40. https://doi.org/10.1080/14703297.2012.647781
Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Pearson.
Tai, T. Y., & Chen, H. H. J. (2020). The impact of Google assistant on adolescent EFL learners' willingness to communicate. Interactive Learning Environments, 1–18. https://doi.org/10.1080/10494820.2020.1841801
Underwood, J. (2017). Exploring AI language assistants with primary EFL students. In K. Borthwick, L. Bradley, & S. Thouësny (Eds.), CALL in a climate of change: Adapting to turbulent global conditions – short papers from EUROCALL 2017 (pp. 317–321). Research-publishing.net. https://doi.org/10.14705/rpnet.2017.eurocall2017.733
Urquhart, C. (2022). Grounded theory for qualitative research: A practical guide. Sage.
Van den Berghe, R., Verhagen, J., Oudgenoeg-Paz, O., Van der Ven, S., & Leseman, P. (2019). Social robots for language learning: A review. Review of Educational Research, 89(2), 259–295. https://doi.org/10.3102/0034654318821286
Vygotsky, L. S. (1978). Mind in society. Harvard University Press.
Wang, X., Liu, Q., Pang, H., Tan, S. C., Lei, J., Wallace, M. P., & Li, L. (2023). What matters in AI-supported learning: A study of human-AI interactions in language learning using cluster analysis and epistemic network analysis. Computers & Education, 194, 104703. https://doi.org/10.1016/j.compedu.2022.104703
Wang, X., Pang, H., Wallace, M. P., Wang, Q., & Chen, W. (2022). Learners' perceived AI presences in AI-supported language learning: A study of AI as a humanized agent from community of inquiry. Computer Assisted Language Learning, 1–27. https://doi.org/10.1080/09588221.2022.2056203
Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45(3), 223–235. https://doi.org/10.1080/17439884.2020.1798995
Winstone, N., & Carless, D. (2020). Designing effective feedback processes in higher education: A learning-focused approach. Routledge.
Wongvorachan, T., Lai, K. W., Bulut, O., Tsai, Y. S., & Chen, G. (2022). Artificial intelligence: Transforming the future of feedback in education. Journal of Applied Testing Technology, 1–23.
Yu, S., Di Zhang, E., & Liu, C. (2022). Assessing L2 student writing feedback literacy: A scale development and validation study. Assessing Writing, 53, 100643. https://doi.org/10.1016/j.asw.2022.100643
Yu, S., & Liu, C. (2021). Improving student feedback literacy in academic writing: An evidence-based framework. Assessing Writing, 48, 100525. https://doi.org/10.1016/j.asw.2021.100525
Zhang, R., & Zou, D. (2022). Types, features, and effectiveness of technologies in collaborative writing for second language learning. Computer Assisted Language Learning, 35(9), 1–31. https://doi.org/10.1080/09588221.2021.1880441
Zhao, X. (2022). Leveraging artificial intelligence (AI) technology for English writing: Introducing Wordtune as a digital writing assistant for EFL writers. RELC Journal. Advance online publication. https://doi.org/10.1177/00336882221094089
Zheng, L., Niu, J., Zhong, L., & Gyasi, J. F. (2021). The effectiveness of artificial intelligence on learning achievement and learning perception: A meta-analysis. Interactive Learning Environments, 1–15. Advance online publication. https://doi.org/10.1080/10494820.2021.2015693
Zheng, Y., & Yu, S. (2019). What has been assessed in writing and how? Empirical evidence from Assessing Writing (2000–2018). Assessing Writing, 42, 100421. https://doi.org/10.1016/j.asw.2019.100421
Zhou, S. A., & Hiver, P. (2022). The effect of self-regulated writing strategies on students' L2 writing engagement and disengagement behaviors. System, 106, 102768. https://doi.org/10.1016/j.system.2022.102768

Appendix 1

Semi-structured Interview Questions


Some potential interview questions:

1. Can you describe your experience of the writing classroom?
2. Do you find your writing feedback methods helpful?
3. Has your classroom methodology had any effect on your engagement with the writing process?
4. Has your classroom methodology improved the quality of your writing?
5. Has your classroom methodology helped you to better understand specific writing issues or concepts?
6. Have you experienced any difficulties or limitations when using your classroom methodology for writing feedback?
7. Would you recommend your classroom methodology to other students for writing?
8. Did you receive any training or support on how to learn writing effectively?
9. How do you feel about the writing course process?
