Innovations and Applications of Technology in Language Education (2024)
Published titles in this series include:
Computational Intelligence in Industry 4.0 and 5.0 Applications: Challenges and Future Prospects
By Joseph Bamidele Awotunde, Kamalakanta Muduli, and Biswajit Brahma
ISBN: 978-1-032-53922-5
Deep Learning for Smart Healthcare: Trends, Challenges and Applications
By K. Murugeswari, B. Sundaravadivazhagan, S. Poonkuntran, and Thendral Puyalnithi
ISBN: 978-1-032-45581-5
Edge Computational Intelligence for AI-Enabled IoT Systems
By Shrikaant Kulkarni, Jaiprakash Narain Dwivedi, Dinda Pramanta, and Yuichiro Tanaka
ISBN: 978-1-032-20766-7
Explainable AI and Cybersecurity
By Mohammad Tabrez Quasim, Abdullah Alharthi, Ali Alqazzaz, Mohammed Mujib Alshahrani, Ali Falh Alshahrani, and Mohammad Ayoub Khan
ISBN: 978-1-032-42221-3
Machine Learning in Applied Sciences
By M. A. Jabbar, Shankru Guggari, Kingsley Okoye, and Houneida Sakly
ISBN: 978-1-032-25172-1
Social Media and Crowdsourcing
By Sujoy Chatterjee, Thipendra P Singh, Sunghoon Lim, and Anirban Mukhopadhyay
ISBN: 978-1-032-38687-4
AI and IoT Technology and Applications for Smart Healthcare
By Alex Khang
ISBN: 978-1-032-68490-1
https://www.routledge.com/Advances-in-Computational-Collective-Intelligence/book-series/ACCICRC
Innovations and Applications of Technology in Language Education
Edited by Hung Phu Bui, Raghvendra Kumar, and Nilayam Kumar Kamila
First edition published 2025
by CRC Press
2385 NW Executive Center Drive, Suite 320, Boca Raton FL 33431
and by CRC Press
4 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
CRC Press is an imprint of Taylor & Francis Group, LLC
© 2025 selection and editorial matter, Hung Phu Bui, Raghvendra Kumar, and Nilayam Kumar
Kamila; individual chapters, the contributors
Reasonable efforts have been made to publish reliable data and information, but the author and
publisher cannot assume responsibility for the validity of all materials or the consequences of
their use. The authors and publishers have attempted to trace the copyright holders of all material
reproduced in this publication and apologize to copyright holders if permission to publish in this
form has not been obtained. If any copyright material has not been acknowledged please write and
let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known
or hereafter invented, including photocopying, microfilming, and recording, or in any information
storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive,
Danvers, MA 01923, 978‑750‑8400. For works that are not available on CCC please contact
[email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are
used only for identification and explanation without intent to infringe.
Library of Congress Cataloging‑in‑Publication Data
Names: Bui, Hung Phu, editor. | Kumar, Raghvendra, 1987- editor. |
Kamila, Nilayam Kumar, editor.
Title: Innovations and applications of technology in language education /
edited by Hung Phu Bui, Raghvendra Kumar, and Nilayam Kamila.
Description: Boca Raton FL : CRC Press, 2024. | Series: Computational
collective intelligence | Includes index. | Identifiers: LCCN 2023053008 (print) |
LCCN 2023053009 (ebook) | ISBN 9781032560731 (hardback) |
ISBN 9781032754222 (paperback) | ISBN 9781003473916 (ebook)
Subjects: LCSH: English language—Study and teaching—Technological innovations. |
English language—Computer-assisted instruction. | LCGFT: Essays.
Classification: LCC PE1128.A2 I555 2024 (print) | LCC PE1128.A2 (ebook) |
DDC 420.78/566—dc23/eng/20240304
LC record available at https://lccn.loc.gov/2023053008
LC ebook record available at https://lccn.loc.gov/2023053009
ISBN: 978‑1‑032‑56073‑1 (hbk)
ISBN: 978‑1‑032‑75422‑2 (pbk)
ISBN: 978‑1‑003‑47391‑6 (ebk)
DOI: 10.1201/9781003473916
Typeset in Garamond
by codeMantra
Contents
Editors
Preface
List of Contributors
Hung Phu Bui holds a PhD in language education. He is now a lecturer and researcher
at the School of Foreign Languages, University of Economics Ho Chi Minh City
(UEH University). Hung also serves as an editor for several international journals. His
research interests have stretched across different aspects of second/foreign language
(L2) education and technology in language education. His recently published works
have mainly concentrated on applications of cognitive linguistics in L2 acquisition,
sociocultural theory in L2 acquisition, L2 students’ interaction, L2 classroom assess‑
ment, teaching English for specific purposes, and computer‑assisted language teaching
and learning. A keynote and plenary speaker at many national and international conferences, Hung has shared his knowledge and research interests with students, colleagues, and novice researchers across Asia.
Nilayam Kumar Kamila is a lead software engineer at Capital One in Wilmington, USA. He holds a PhD in Applied Technology and received multiple master's degrees and certificates from institutions in India and the United States. He is especially interested in educational technology and has expertise in application development and innovative system design. His publications mainly focus on applications of artificial intelligence in management, machine learning, and wireless sensor networks. He has published many journal articles and book chapters with large international publishers, such as Elsevier and Springer, and has attended several international conferences around the world. For further information, please contact [email protected].
Preface
In Chapter 12, Herri and his colleagues report a study on the interplay between
digital technology use and willingness to communicate.
All 12 chapters provide insights into and perspectives on different aspects of language education, and all recommend the use of technology to improve language education and professional learning. The book addresses topics of current interest and gives directions for the future of technology in language education; it is especially valuable to undergraduate and graduate students and to novice researchers, and it therefore deserves much attention from the academic world.
Professional Learning for CALL Teachers: A Research-Based Approach
Loc Tan Nguyen
DOI: 10.1201/9781003473916-23
One of the responses to this query has been the use of teacher professional learning
(TPL) activities. In this chapter, the term TPL is used instead of teacher profes‑
sional development (TPD) for two main reasons. First, TPD, also referred to as
in-service teacher education (Bayer, 2017; Borg, 2011), differs from TPL in that
the former usually involves activities intended for developing teachers’ expertise,
skills, and specialised knowledge, whereas the latter aims for changes in classroom
instruction and improved student learning (Timperley, 2011). Second, TPD is
“almost contrary to our mission of ensuring high levels of learning for all students
and staff” (Murray & Zoul, 2015, p. 8), while TPL “recognises teachers as agents
of their growth and emphasises that learning is an experience driven largely by the
learner” (Calvert, 2016, p. 4).
Following Garone et al. (2022) and Timperley (2011), TPL is used in this chap‑
ter as an inclusive term to embrace both formal and informal activities for CALL
teachers to develop their skills and expertise, which in turn improve teaching qual‑
ity and students’ learning outcomes (SLOs). As Richards (2008) holds, the goal
of teacher learning is not to translate what they have learnt into classroom practice but to engage in "constructing new knowledge and theory through participating in specific social contexts and engaging in particular types of activities and processes"
(p. 164). However, a current consensus in the international literature is that TPL
is a job-embedded learning opportunity (Darling-Hammond et al., 2017; Muijs et
al., 2014) for teachers to transform the professional skills and specialised knowledge
they have received into practice for improved teaching quality and the benefit of
student learning (Barnes & Verwey, 2008; Tsotetsi & Mahlomaholo, 2015). It is
a complex process (Avalos, 2011; Collins & Clarke, 2008) since “there are various
dynamics at work in social behaviour and these interact and combine in different
ways, such that even the simplest decisions can have multiple causal pathways"
(Opfer & Pedder, 2011, p. 378). Regardless of how differently TPL is conceptual‑
ised, it has widely been accepted that the ultimate goal of TPL is to improve SLOs
(Darling-Hammond et al., 2017; Diefes-Dux, 2014; Philipsen et al., 2019).
The past few decades have witnessed worldwide reforms in education for the
sake of enhanced SLOs. Research has shown TPL to be one of the most crucial ele‑
ments of such educational reforms (Alton-Lee, 2011; Diefes-Dux, 2014; Timperley,
2011). According to Mana (2011), educational reforms will be deficient or even fail
unless there is adequate, effective TPL. In essence, TPL activities are effective when
they produce measurable gains in teachers’ professional skills and knowledge, lead‑
ing to changes in their instructional practices that have a positive impact on SLOs
(Darling-Hammond et al., 2009; Diefes-Dux, 2014; Opfer & Pedder, 2011).
Good teaching methods positively impact what students learn and how they
learn (Darling-Hammond et al., 2017); thus, those who want to become good teach‑
ers correspondingly make more efforts to improve professional skills and knowledge
in order to enhance their classroom practices. In this regard, TPL is “indispensable
to bringing about sustainable school improvement, for the ultimate improvement
of student learning” (Steyn, 2011, p. 212). TPL is of great necessity for teachers to
develop a wide range of capacities to respond to the needs of particular groups of
teachers who have received training can pass on their newly acquired professional skills and knowledge to others. Entailing the notion of training-the-trainer (Baker, 2016;
Ono & Ferreira, 2010), TPL traditionally promoted through the top-down approach
is acknowledged to be cost-effective as it can involve a large number of teacher par‑
ticipants within a limited timeframe (Pancucci, 2007; Penuel et al., 2007).
However, traditional TPL is typically implemented independently of teachers’
classroom contexts and this compromises its impact on instructional practices and
SLOs (Armour & Makopoulou, 2012; Murray & Zoul, 2015). Given that TPL activi‑
ties of this type are isolated from teachers' classroom realities, teachers are likely to reject
new teaching strategies and/or content as inappropriate and/or irrelevant to their teach‑
ing circumstances (Nguyen & Newton, 2021; Timperley et al., 2008). Additionally,
through the top-down approach, dilution and/or misinterpretations of the professional
skills and new knowledge may emerge when such a TPL activity is transferred to the
next level (Baker, 2016; Dichaba & Mokhele, 2012). As Bantwini (2009) argues, since
the expert-driven top-down approach aims to train teachers to competently follow pre‑
scribed rigid patterns with little inclusion of teachers’ existing knowledge and their own
classroom situations, teachers become passive learners in such TPL activities. Accordingly,
an alternative approach has been put forward to minimise these potential drawbacks
and optimise teachers’ learning as part of their daily work (Muijs et al., 2014).
based on teachers’ classroom practice and curriculum enactment and on how teach‑
ers use curriculum materials and make informed pedagogical choices to optimally
assist student learning (Hill, 2009; Penuel et al., 2007). According to Cohen and
Hill (2008), this TPL model is most likely to bring about teachers’ gains in terms
of professional skills, specialised knowledge, and instructional practice, leading to
a positive impact on SLOs.
One of the most ubiquitous activities of the alternative approach to TPL is
lesson study (Hill, 2009). This TPL activity involves teachers as a whole group
planning, examining, and revising a lesson in focus through exchanged teaching
sessions accompanied by peer observations and followed by group discussions for
lesson improvement (Elliott, 2019). In this model, the intended lesson is imple‑
mented each time to a different group of learners and collaborative self-study of
instructional practice is used to improve teaching and learning quality. Through the
process of planning, discussing, teaching, and lesson revising, teachers are able to
systematically examine the way they tailor their classroom instruction so as to teach
more effectively (Fernandez & Chokshi, 2002). Research shows that this practice of
collaborative learning facilitates teachers' development of expertise and teach‑
ing skills (Bui, 2019; Nguyen & Newton, 2021).
able and willing to make changes in their pedagogical decisions leading to achieve‑
ments in SLOs (Timperley, 2011; Van Driel & Berry, 2012).
Second, research has shown many TPL programmes to be very ineffective due
in part to practitioners not being involved in designing such programmes. As Bayer
(2017) and Myende (2014) have pinpointed, teachers are usually not properly consulted about their needs before an intended TPL activity is designed. For this reason, many
teachers may find the activity irrelevant to their own teaching situations (Darling-
Hammond et al., 2017; Timperley et al., 2008). Within the field of adult education
in Greece, for instance, “all development programmes have been inconsistent with
actual needs of the teaching staff […] programme contents have been randomly
selected, rather than being grounded on systematic investigation of practitioners’
needs, limiting thus, their impact on teaching practice” (Papastamatis et al., 2009,
p. 85). Therefore, understanding teachers’ needs is an important step to inform
effective design of TPL programmes (Avalos, 2011; Nguyen & Newton, 2021;
Tsotetsi & Mahlomaholo, 2013; Wayne et al., 2008).
Third, TPL has also been criticised for focusing only on specific teaching skills
without establishing from evidence whether such skills positively influence
SLOs (Papastamatis et al., 2009; Timperley et al., 2008; Wayne et al., 2008). TPL
addressing content knowledge alone does not yield much impact either (Day &
Townsend, 2009; Vazir & Meher, 2010) because teachers find it challenging to
translate what they have achieved from TPL activities into practice (Tsotetsi &
Mahlomaholo, 2015). Thus, only when professional skills and knowledge and the
connection between these and SLOs become central to a TPL activity can its effec‑
tiveness be maximised (Timperley et al., 2008; Van Ha & Murray, 2021; Wayne
et al., 2008). Also, the nexus between teachers’ existing experience and new knowl‑
edge should be facilitated (Muijs et al., 2014; Opfer & Pedder, 2011), engaging
them in active learning and sharing of instructional practice (Borko et al., 2010;
Darling-Hammond et al., 2009; Nguyen & Hung, 2021).
Fourth, TPL leaders often design TPL programmes in an attempt to first change teachers' beliefs before changing their instructional practice (Guskey,
2002). From the perspective of change sequencing, Borg (2015) and Yoon et al.
(2007) argued that enhanced professional knowledge leads to changes in classroom
practice, which subsequently have an impact on SLOs. Then, only when teachers
see the positive effects of their changed classroom practice on student learning do
they perceive their teaching as successful (Bayer, 2017; Frost, 2012; Tschannen-
Moran & Hoy, 2007). It follows that changes in classroom practice that posi‑
tively impact SLOs precede changes in teachers’ beliefs. To this end, researchers
have recently supported the call for a shift away from changing teacher beliefs as an
ultimate goal in TPL design toward enhanced instructional practices that optimally
foster SLOs (Loewenberg Ball & Forzani, 2009; McDonald et al., 2013).
Fifth, research has further revealed that many TPL programmes did not work
well given the lack of involvement and support of administrators and school leaders
(Darling-Hammond et al., 2009; Tsotetsi & Mahlomaholo, 2013). For example,
Acknowledgements
This publication was funded by University of Economics Ho Chi Minh City
(UEH), Vietnam.
References
Alton-Lee, A. (2011). (Using) evidence for educational improvement. Cambridge Journal of
Education, 41(3), 303–329. https://doi.org/10.1080/0305764X.2011.607150
Antoniou, P., & Kyriakides, L. (2011). The impact of a dynamic approach to professional
development on teacher instruction and student learning: Results from an experimen‑
tal study. School Effectiveness and School Improvement, 22(3), 291–311. https://doi.org/10.1080/09243453.2011.577078
Armour, K. M., & Makopoulou, K. (2012). Great expectations: Teacher learning in a
national professional development programme. Teaching and Teacher Education, 28(3),
336–346. https://doi.org/10.1016/j.tate.2011.10.006
Avalos, B. (2011). Teacher professional development in Teaching and Teacher Education over
ten years. Teaching and Teacher Education, 27(1), 10–20. https://doi.org/10.1016/j.tate.2010.08.007
Baker, L. (2016). Re-conceptualizing EFL professional development: Enhancing communi‑
cative language pedagogy for Thai teachers. TEFLIN Journal, 27(1), 23–45. https://doi.org/10.15639/teflinjournal.v27i1/23-45
Bantwini, B. D. (2009). District professional development models as a way to intro‑
duce primary-school teachers to natural science curriculum reforms in one dis‑
trict in South Africa. Journal of Education for Teaching, 35(2), 169–182. https://doi.org/10.1080/02607470902771094
Barnes, H., & Verwey, H. (2008). Teacher Education Review. University of Pretoria.
Bayer, B. L. (2017). Assessing the factors impacting professional learning for teachers in Seventh-
day Adventist schools: A comparison of millennials and non-millennials. (Unpublished
PhD thesis), Andrews University. Berrien Springs, MI.
Biputh, B., & McKenna, S. (2010). Tensions in the quality assurance processes in
post-apartheid South African schools. Compare, 40(3), 279–291. https://doi.org/10.1080/03057920902955892
Blank, R. K., & De Las Alas, N. (2009). Effects of Teacher Professional Development on Gains in
Student Achievement: How Meta-Analysis Provides Scientific Evidence Useful to Education
Leaders. Council of Chief State School Officers.
Blank, R. K., De Las Alas, N., & Smith, C. (2007). Analysis of the Quality of Professional
Development Programs for Mathematics and Science Teachers: Findings from a Cross-State
Study. Council of Chief State School Officers.
Borg, S. (2011). The impact of in-service teacher education on language teachers’ beliefs.
System, 39(3), 370–380. https://doi.org/10.1016/j.system.2011.07.009
Borg, S. (2015). The benefits of attending ELT conferences. ELT Journal, 69(1), 35–46.
https://doi.org/10.1093/elt/ccu045
Borko, H., Jacobs, J., & Koellner, K. (2010). Contemporary approaches to teacher profes‑
sional development. In P. L. Peterson, E. Baker, & B. McGaw (Eds.), Third International
Encyclopedia of Education (pp. 548–556). Elsevier.
Bui, T. (2019). The implementation of task-based language teaching in EFL primary school
classrooms: A case study in Vietnam. (Unpublished PhD thesis), Victoria University of
Wellington, New Zealand.
Calvert, L. (2016). Moving from Compliance to Agency: What Teachers Need to Make Professional
Learning Work. Learning Forward and NCTAF.
Capps, D. K., Crawford, B. A., & Constas, M. A. (2012). A review of empirical literature on
inquiry professional development: Alignment with best practices and a critique of the
findings. Journal of Science Teacher Education, 23(3), 291–318. https://doi.org/10.1007/s10972-012-9275-2
Carlson, S., & Gadio, C. T. (2002). Teacher professional development in the use of technol‑
ogy. In W. D. Haddad and A. Draxler (Eds.), Technologies for Education: Potentials,
Parameters, and Prospects (pp. 118–132). UNESCO and the Academy for Educational
Development.
Chan, M. C. E., Clarke, D. J., Roche, A., & Clarke, D. M. (2019). How do teachers learn?
Different mechanisms of teacher in-class learning. In G. Hine, S. Blackley, & A. Cooke
(Eds.), Proceedings of the 42nd Annual Conference of the Mathematics Education Research Group of Australasia (pp. 164–171). Perth: MERGA.
Cohen, D. K., & Hill, H. C. (2008). Learning Policy: When State Education Reform Works.
Yale University Press.
Collins, S., & Clarke, A. (2008). Activity frames and complexity thinking: Honoring
both public and personal agendas in an emergent curriculum. Teaching and Teacher
Education, 24(4), 1003–1014. https://doi.org/10.1016/j.tate.2007.11.002
Cuhadar, C. (2018). Investigation of pre-service teachers’ levels of readiness to technology
integration in education. Contemporary Educational Technology, 9(1), 61–75. https://
doi.org/10.30935/cedtech/6211
Darling-Hammond, L., Hyler, M., & Gardner, M. (2017). Effective Teacher Professional
Development. Learning Policy Institute. https://doi.org/10.54300/122.311
Darling-Hammond, L., & Richardson, N. (2009). Research review/teacher learning: What
matters? Educational Leadership, 66(5), 46–53.
Darling-Hammond, L., Wei, R. C., Andree, A., Richardson, N., & Orphanos, S. (2009).
Professional learning in the learning profession. Technical Report. National Staff
Development Council.
Day, C., & Townsend, A. (2009). Practitioner action research: Building and sustaining suc‑
cess through networked learning communities. In E. N. Susan & S. Bridget (Eds.), The
SAGE Handbook of Educational Action Research (pp. 178–189). SAGE Publications.
Dede, C., Jass Ketelhut, D., Whitehouse, P., Breit, L., & McCloskey, E. M. (2009). A research
agenda for online teacher professional development. Journal of Teacher Education,
60(1), 8–19. https://doi.org/10.1177/00224871083275
Dele-Ajayi, O., Fasae, O. D., & Okoli, A. (2021). Teachers’ concerns about integrating
information and communication technologies in the classrooms. PLoS One, 16(5),
e0249703. https://doi.org/10.1371/journal.pone.0249703
DeMonte, J. (2013). High-Quality Professional Development for Teachers: Supporting Teacher
Training to Improve Student Learning. Center for American Progress.
Desimone, L. M. (2009). Improving impact studies of teachers’ professional development:
Toward better conceptualizations and measures. Educational Researcher, 38(3), 181–
199. https://doi.org/10.3102/0013189X08331140
Desimone, L. M., Porter, A. C., Garet, M. S., Yoon, K. S., & Birman, B. F. (2002). Effects
of professional development on teachers’ instruction: Results from a three-year longi‑
tudinal study. Educational Evaluation and Policy Analysis, 24(2), 81–112. https://doi.org/10.3102/01623737024002081
Dichaba, M. M., & Mokhele, M. L. (2012). Does the cascade model work for teacher train‑
ing? Analysis of teachers’ experiences. International Journal of Educational Sciences, 4(3),
249–254. https://doi.org/10.1080/09751122.2012.11890049
Diefes-Dux, H. A. (2014). In-service teacher professional development in engineering educa‑
tion. In Ş. Purzer, J. Strobel, & M. E. Cardella (Eds.), Engineering in Pre-college Settings:
Synthesizing Research, Policy, and Practices (pp. 233–257). Purdue University Press.
Elliott, J. (2019). What is lesson study? European Journal of Education, 54(2), 175–188.
https://doi.org/10.1111/ejed.12339
Ene, E., & Serban, V. (2023). Barriers and opportunities in CALL PD in Romania: Judging
the effectiveness of teacher-led PD. In D. Tafazoli & M. Picard (Eds.), Handbook of
CALL Teacher Education and Professional Development (pp. 227–244). Springer. https://
doi.org/10.1007/978-981-99-0514-0_14
Fernandez, C., & Chokshi, S. (2002). A practical guide to translating lesson study for a US setting.
Phi Delta Kappan, 84(2), 128–134. https://doi.org/10.1177/003172170208400208
Fishman, B. J., Marx, R. W., Best, S., & Tal, R. T. (2003). Linking teacher and student
learning to improve professional development in systemic reform. Teaching and Teacher
Education, 19(6), 643–658. https://doi.org/10.1016/S0742-051X(03)00059-3
Frost, D. (2012). From professional development to system change: Teacher leadership and
innovation. Professional Development in Education, 38(2), 205–227. https://doi.org/10.1080/19415257.2012.657861
Garet, M. S., Porter, A. C., Desimone, L., Birman, B. F., & Yoon, K. S. (2001). What
makes professional development effective? Results from a national sample of
teachers. American Educational Research Journal, 38(4), 915–945. https://doi.org/10.3102/000283120380049
Garone, A., Bruggeman, B., Philipsen, B., Pynoo, B., Tondeur, J., & Struyven, K. (2022).
Evaluating professional development for blended learning in higher education: A syn‑
thesis of qualitative evidence. Education and Information Technologies, 27, 7599–7628.
https://doi.org/10.1007/s10639-022-10928-6
Glazer, E. M., & Hannafin, M. J. (2006). The collaborative apprenticeship model: Situated
professional development within school settings. Teaching and Teacher Education,
22(2), 179–193. https://doi.org/10.1016/j.tate.2005.09.004
Guskey, T. R. (2002). Professional development and teacher change. Teachers and Teaching,
8(3), 381–391. https://doi.org/10.1080/135406002100000512
Hill, H. C. (2009). Fixing teacher professional development. Phi Delta Kappan, 90(7), 470–
477. https://doi.org/10.1177/003172170909000705
Hubbard, P. (2023). Contextualizing and adapting teacher education and profes‑
sional development. In D. Tafazoli & M. Picard (Eds.), Handbook of CALL
Teacher Education and Professional Development (pp. 3–14). Springer. https://doi.org/10.1007/978-981-99-0514-0_1
Kohnke, L., & Foung, D. (2023). Exploring microlearning for teacher professional develop‑
ment: Voices from Hong Kong. In D. Tafazoli & M. Picard (Eds.), Handbook of CALL
Teacher Education and Professional Development (pp. 279–292). Springer. https://doi.org/10.1007/978-981-99-0514-0_17
Kubanyiova, M., & Feryok, A. (2015). Language teacher cognition in applied linguistics
research: Revisiting the territory, redrawing the boundaries, reclaiming the relevance.
The Modern Language Journal, 99(3), 435–449. https://doi.org/10.1111/modl.12239
Kutame, A. P. (2010). Evaluating the link between learner assessment and teacher devel‑
opment: Implementation of integrated quality management system in South Africa.
Caribbean Educational Research Journal, 2(1), 96–103.
Lawless, K. A., & Pellegrino, J. W. (2007). Professional development in integrating tech‑
nology into teaching and learning: Knowns, unknowns, and ways to pursue better
questions and answers. Review of Educational Research, 77(4), 575–614. https://doi.org/10.3102/0034654307309921
Lian, A., Lay, N., & Lian, A. (2023). Secondary pre-service English teachers’ response to
CALL innovation in Cambodia. In D. Tafazoli & M. Picard (Eds.), Handbook of CALL
Teacher Education and Professional Development (pp. 49–63). Singapore: Springer.
https://doi.org/10.1007/978-981-99-0514-0_4
Loeb, S., Miller, L. C., & Strunk, K. O. (2009). The state role in teacher professional devel‑
opment and education throughout teachers’ careers. Education, 4(2), 212–228. https://
doi.org/10.1162/edfp.2009.4.2.212
Loewenberg Ball, D., & Forzani, F. M. (2009). The work of teaching and the challenge
for teacher education. Journal of Teacher Education, 60(5), 497–511. https://doi.org/10.1177/0022487109348479
Mana, M. (2011). Arabic-teacher training and professional development: A view from
STARTALK. Al-'Arabiyya, 44/45, 87–101. https://www.jstor.org/stable/43208725
McDonald, M., Kazemi, E., & Kavanagh, S. S. (2013). Core practices and pedagogies of
teacher education: A call for a common language and collective activity. Journal of
Teacher Education, 64(5), 378–386. https://doi.org/10.1177/00224871134938
Mitchell, C., & Sackney, L. (2011). Profound Improvement: Building Capacity for a Learning
Community (2nd ed.). Routledge.
Muijs, D., Kyriakides, L., van der Werf, G., Creemers, B., Timperley, H., & Earl, L. (2014).
State of the art-teacher effectiveness and professional learning. School Effectiveness and
School Improvement, 25(2), 231–256. https://doi.org/10.1080/09243453.2014.885451
Murray, T. C., & Zoul, J. (2015). Leading Professional Learning: Tools to Connect and Empower
Teachers. Corwin Press.
Myende, P. E. (2014). Improving academic performance in a rural school through the use of an
asset-based approach as a management strategy. (Unpublished PhD thesis), University of
the Free State, Bloemfontein.
Nguyen, H. T. M. (2008). Mentoring beginning EFL teachers at tertiary level in Vietnam.
The Asian EFL Journal, 10(1), 111–132.
Nguyen, L. T., & Hung, B. P. (2021). Communicative pronunciation teaching: Insights
from the Vietnamese tertiary EFL classroom. System, 101, 102573. https://doi.org/10.1016/j.system.2021.102573
Nguyen, L. T., & Newton, J. (2021). Enhancing EFL teachers’ pronunciation pedagogy
through professional learning: A Vietnamese case study. RELC Journal, 52(1), 77–93.
https://doi.org/10.1177/0033688220952476
Ono, Y., & Ferreira, J. (2010). A case study of continuing teacher professional development
through lesson study in South Africa. South African Journal of Education, 30(1), 59–74.
https://doi.org/10.4314/saje.v30i1.52602
Opfer, V. D., & Pedder, D. (2011). Conceptualizing teacher professional learning. Review of
Educational Research, 81(3), 376–407. https://doi.org/10.3102/00346543114136
Pancucci, S. (2007). Train the trainer: The bricks in the learning community scaffold of
professional development. International Journal of Humanities and Social Sciences, 2(1),
14–21. https://doi.org/10.5281/zenodo.1076078
Papastamatis, A., Panitsidou, E. A., Giavrimis, P., & Papanis, E. (2009). Facilitating teachers’
and educators’ effective professional development. Review of European Studies, 1(2), 83.
https://doi.org/10.5539/res.v1n2p83
Park, S., & Khoshnevisan, B. (2019). Literacy meets augmented reality (AR): The use of AR
in literacy. In W. B. James, & C. Cobanoglu (Eds.), Proceedings of the Global Conference
on Education and Research (GLOCER) Conference (Vol. 3, pp. 93–99). ANAHEI
Publishing, LLC.
Penuel, W. R., Fishman, B. J., Yamaguchi, R., & Gallagher, L. P. (2007). What makes
professional development effective? Strategies that foster curriculum implemen‑
tation. American Educational Research Journal, 44(4), 921–958. https://doi.org/10.3102/00028312073082
Philipsen, B., Tondeur, J., Pareja Roblin, N., Vanslambrouck, S., & Zhu, C. (2019).
Improving teacher professional development for online and blended learning: A sys‑
tematic meta-aggregative review. Educational Technology Research and Development,
67(5), 1145–1174. https://doi.org/10.1007/s11423-019-09645-8
Philipsen, B., Tondeur, J., Scherer, R., Pynoo, B., & Zhu, C. (2021). Measuring institutional
support for online and blended learning professional development: Validating an instru‑
ment that examines teachers’ perceptions. International Journal of Research & Method in
Education, 45(2), 164–179. https://doi.org/10.1080/1743727X.2021.1926973
Richards, J. C. (2008). Second language teacher education today. RELC Journal, 39(2), 158–
177. https://doi.org/10.1177/0033688208092182
Robinson, V. M., Lloyd, C. A., & Rowe, K. J. (2008). The impact of leadership on student out‑
comes: An analysis of the differential effects of leadership types. Educational Administration
Quarterly, 44(5), 635–674. https://doi.org/10.1177/0013161X08321509
Rogers, M. P., Abell, S., Lannin, J., Wang, C.‑Y., Musikul, K., Barker, D., & Dingman, S.
(2007). Effective professional development in science and mathematics education:
Teachers’ and facilitators’ views. International Journal of Science and Mathematics
Education, 5(3), 507–532. https://doi.org/10.1007/s10763-006-9053-8
Schneider, A. K., & Ene, E. (2023). An English literature professor applies CALL PD in
her classroom and outreach programs: Reflections and implications. In D. Tafazoli &
M. Picard (Eds.), Handbook of CALL Teacher Education and Professional Development
(pp. 367–386). Springer. https://doi.org/10.1007/978-981-99-0514-0_22
Schwille, J., Dembélé, M., & Schubert, J. (2007). Global Perspectives on Teacher Learning:
Improving Policy and Practice. IIEP Publications.
Sikwibele, A. L., & Mungoo, J. K. (2009). Distance learning and teacher education in
Botswana: Opportunities and challenges. The International Review of Research in Open
and Distributed Learning, 10(4), 1–16. https://doi.org/10.19173/irrodl.v10i4.706
Stack, S., Beswick, K., Brown, N., Bound, H., & Kenny, J. (2011). Putting partnership at the
centre of teachers’ professional learning in rural and regional contexts: Evidence from
case study projects in Tasmania. Australian Journal of Teacher Education, 36(12), 1–20.
https://doi.org/10.14221/ajte.2011v36n12.7
Steyn, T. (2011). Implementing continuing professional teacher development: Policy and
practice. Acta Academica, 43(1), 211–233. https://hdl.handle.net/11660/2799
Timperley, H. (2011). A background paper to inform the development of a national professional
development framework for teachers and school leaders. Australian Institute for Teaching
and School Leadership (AITSL).
Timperley, H., & Alton‑Lee, A. (2008). Reframing teacher professional learning: An alterna‑
tive policy approach to strengthening valued outcomes for diverse learners. Review of
Research in Education, 32(1), 328–369. https://doi.org/10.3102/0091732X0730896
Timperley, H., Wilson, A., Barrar, H., & Fung, I. (2008). Teacher Professional Learning and
Development: Best Evidence Synthesis Iteration (BES). Ministry of Education, Wellington, New Zealand.
Torsani, S. (2023). Teacher education in mobile assisted language learning for adult migrants:
A study of provincial centres for adult education in Italy. In D. Tafazoli & M. Picard
(Eds.), Handbook of CALL Teacher Education and Professional Development (pp. 179–
192). Springer. https://doi.org/10.1007/978-981-99-0514-0_11
Tschannen‑Moran, M., & Hoy, A. W. (2007). The differential antecedents of self‑efficacy
beliefs of novice and experienced teachers. Teaching and Teacher Education, 23(6), 944–
956. https://doi.org/10.1016/j.tate.2006.05.003
Designing and Implementing WhatsApp Communication Tasks for the English as a Foreign Language Classroom: A Case Study
Mokh. Arif Bakhtiyar, Toni Dobinson, and Julian Chen
2.1 Introduction
Research on computer‑mediated communication (CMC) has burgeoned in line
with technological advancements. In the educational context, synchronous and
asynchronous communication, which are features of CMC (Romiszowski &
Mason, 2013), have unbolted opportunities for online pedagogy, constituting any‑
where and anytime learning (Ahern, 2008; Hung et al., 2022). Specifically, scholars
and practitioners have shown interest in exploring mobile‑assisted language learn‑
ing (MALL), wherein mobile tools are utilized for the particular advantages they
offer in the language learning process (Bui et al., 2023; Kukulska‑Hulme, 2013).
MALL offers students the potential to create content through peer collaboration
and interaction (Morgana, 2021), and the results of such collaborative work can
be shared with teachers, friends, and others online (Morgana, 2019). One of the
(Adams & Newton, 2009; Ellis, 2020). To tackle these challenges, TBLT needs to
be adjusted to local needs (Kessler et al., 2021; Thomas, 2015), resulting in a modi‑
fied or weaker version of TBLT (Long, 2015; Serafini, 2021) or “structure‑trapping
tasks” (Skehan, 1998, pp. 122–123), referred to as task‑supported language teach‑
ing (TSLT) (Ellis, 2003).
According to Ellis (2019b, pp. 457–458), TSLT is “synthetic and product‑ori‑
ented, drawing on a structural syllabus and an accuracy‑oriented methodology,” as
opposed to TBLT, which is “analytic and process‑oriented, drawing on a task‑based
syllabus and a fluency‑oriented methodology.” The syllabus used in TSLT is distinct
from that used in TBLT in that the former is ready‑made and structurally oriented,
and the school or government prescribes the content. By contrast, the TBLT syllabus is not prescribed; it is designed specifically for task‑based instruction and contains
a list of tasks to be performed (Ellis, 2017b). In TSLT, a task is used to practice the
pre‑taught linguistic forms while achieving an interactive outcome. Meanwhile, in
TBLT, a task serves as a context for language to be used naturally to achieve a communica‑
tive outcome (Ellis, 2019b).
The tasks used in this study fall under TSLT for two reasons. First, the tasks
were developed based on a needs analysis (NA), in which one of the components
of the analysis was a prescribed syllabus. Second, the pre‑ and post‑task stages pri‑
oritized the development of lexical knowledge. This second decision was based on the first author's teaching experience and the NA. As an English
language educator for over a decade, teaching across educational levels from pri‑
mary school to university, the first author has observed a high demand for English
vocabulary knowledge among Indonesian students. The NA (see Table 2.1) clarified
that teachers identified their students’ needs for vocabulary development. Students
supported this notion by revealing the need for a richer English lexicon.
can utilize text chat combined with emojis to make the story more multimodal,
visually engaging, and meaningful. By facilitating multimodal communication,
WhatsApp enables learners to engage with the target language through commu‑
nication that is more attuned to real life and one that operates not solely in one
medium (e.g., text) but in a combination of different modes, which can enhance
the learning experience.
2.3.1 Participants
The current case study comprises three stages: NA, pilot study, and main study.
In this chapter, we focus only on the NA and main study components. This case
study was conducted in two public senior high schools in two towns in East Java,
Indonesia. Before the study, the first author contacted each school headmaster to
seek permission and endorsement for the study (Singh & Wassenaar, 2016). Upon
approval, the headmasters assigned an English language teacher at each school to
recruit student participants. The teachers informed their students in Grade 11 about
the recruitment to identify those willing to participate voluntarily (Etikan et al.,
2016). English language teachers from both schools joined the WhatsApp groups
created for this project, as they were eager to learn how to implement the tasks
through WhatsApp. However, their role was purely that of observers. Eighteen students
from school A and 24 from school B participated in this study. The participants were
male and female students aged between 16 and 17 years. Their English proficiency
ranged from beginner to intermediate based on the levels of the Common European
Framework of Reference for Languages (CEFR). Two consent forms were used for
each student: one for the student and the other for their parents (David et al., 2001).
2.3.2 Context
As Javanese Indonesians, the students speak the local language in their daily com‑
munication. Bahasa Indonesia, the national language, is used as the classroom
teaching and learning medium. English is learned and taught as a foreign language.
Historically, under Kurikulum 2013, the national curriculum, English has been offered both as a compulsory subject and as an optional elective. At a minimum, the students learned the language once a week, in 2 × 45‑minute sessions in normal situations and 2 × 30‑minute sessions
during the COVID‑19 crisis. Those interested in augmenting their English language
learning could join the designated optional English language class containing stu‑
dents from different classrooms but at the same grade level. The time allocation for
this elective English class was the same as that for the compulsory English class. As
mentioned above, the prescribed syllabus and textbooks are the basis for English
language teaching and learning. When data were collected, both schools used the
Kurikulum 2013 syllabus and textbooks, particularly the 2017 revision.
Data were collected through WhatsApp chatlogs, interviews, and learning journals. Voice chats were transcribed manually because the exported chatlogs did not include audio files. The chatlogs were retrieved by "joining the conversation" (Kohne
et al., 2023, p. 182), where the first author participated in the WhatsApp groups cre‑
ated for the study. The interviews were semi‑structured and conducted one‑on‑one.
Students completed the learning journals immediately after performing each task.
The journals were sent to the first author via WhatsApp. At the time of data col‑
lection, COVID‑19 restrictions were being gradually relaxed. The teaching and learn‑
ing process was conducted in a hybrid mode, where students took turns to attend
school. Those who did not attend school learned from home online. Therefore, the
first author was able to conduct face‑to‑face interviews. Data were analyzed using
content analysis to capture the development and implementation of WhatsApp
communication tasks in this specific context (Chen, 2012).
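The chapter reports manual transcription and qualitative content analysis rather than any automated processing, but readers preparing similar chatlog data may find a small script helpful for organizing exported conversations before coding them. The following is a minimal Python sketch, assuming WhatsApp's plain-text "export chat" output (whose timestamp format varies by locale and app version); the regex pattern, the file name school_a_group.txt, and the "<Media omitted>" placeholder handling are illustrative assumptions, not part of this study's method.

import re
from collections import Counter

# A typical line in a WhatsApp "export chat" file (locale-dependent):
# "12/03/22, 21:08 - Dominic: We chose beef ribs and soup"
LINE_RE = re.compile(
    r"^(?P<date>\d{1,2}/\d{1,2}/\d{2,4}), (?P<time>\d{1,2}:\d{2}) - "
    r"(?P<sender>[^:]+): (?P<text>.*)$"
)

def parse_chatlog(path):
    """Parse an exported chatlog into message records, merging wrapped lines."""
    messages = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            match = LINE_RE.match(line)
            if match:
                messages.append(match.groupdict())
            elif messages:
                # A line without a timestamp continues the previous message.
                messages[-1]["text"] += "\n" + line
    return messages

if __name__ == "__main__":
    msgs = parse_chatlog("school_a_group.txt")  # hypothetical export file
    # Voice chats appear only as "<Media omitted>" placeholders in the export,
    # which is why voice data in a study like this must be transcribed manually.
    media_count = sum("<Media omitted>" in m["text"] for m in msgs)
    per_sender = Counter(m["sender"] for m in msgs)
    print(f"{len(msgs)} messages, {media_count} media placeholders")
    print(per_sender.most_common())

Records organized this way can then be coded for content analysis (e.g., tagging turns that negotiate meaning) in a spreadsheet or qualitative analysis tool.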
In the implementation phase, the tasks were undertaken based on the head‑
masters’ directions, outside school hours when most students were at home. The
schedule of the WhatsApp task class was decided together by the first author and
the students before implementation. Students in school A opted to have the online
class on Friday evenings at 8 p.m., while those in school B chose Saturday morn‑
ings at 9 a.m. (both East Java, Indonesia time). The first author’s role was as a task
manager (Ellis, 2019a). He sent the task instructions and materials to the class
WhatsApp group created for the project, set a time limit, and monitored learners’
performance by observing the WhatsApp groups in real time as students per‑
formed each task. When required, he also acted as a communicator to ensure that
learners understood the task instructions and as an instructor to correct learners’
errors or provide feedback (Ellis, 2019a).
Table 2.1 (excerpt). Learner-preferred discussion topics:
1. Food
2. Online shopping
3. Online learning
4. Music
5. Video games
6. Movies
7. Tourism spots
8. Trending topics on YouTube or Twitter
Document analysis involved the Kurikulum 2013 English syllabus and text‑
books prescribed by the government. The teachers supported the analysis; they
revealed in the interviews that students needed to understand and master basic
competencies such as asking questions and expressing opinions related to sugges‑
tions, perspectives, and thoughts on a given topic. These basic competencies are
explicitly stated in the syllabus and textbooks.
We identified six target tasks based on the interviews and documents (NA): ordering
food online, providing suggestions for a friend’s problem, visiting tourism spots,
understanding a song, describing an accident, and asking for and giving directions.
From the target tasks, we developed six pedagogic tasks that were used in our study.
Before implementing the tasks in the main study, a pilot study was conducted as a
trial. Owing to space constraints in this chapter, we present one task as an example
in Table 2.2, and an analysis of the task follows.
The "ordering food online" task aimed to give students the experience of communicating collaboratively in English to select meals from a restaurant menu before ordering food online. The task was developed based on a basic competency
that underlines the capability of asking for and providing information concerning
opinions and thoughts on a subject. As the students were required to post their food
preferences toward the end of the task, they worked jointly to decide which dishes to select by expressing their opinions. The budget limit also encouraged students to
think about not only the menu but also the number of dishes to order. In addition
to the basic competency mentioned above, the task was designed based on students’
preferences to discuss topics related to food and online shopping (see Table 2.1).
To infuse real‑life relevance and authenticity (Buendgens‑Kosten, 2014), we
selected a real restaurant located in Jakarta, Indonesia, for the task. We understood
that a restaurant geographically proximal to students would be ideal. However, the
scarcity of such restaurants with menus in English forced us to choose this specific
restaurant. Incidentally, the topic of ordering food online aligned with the situation
in which many restaurants remained closed because of the COVID‑19 crisis. The
language used in the task also underscored the current circumstances; for instance,
“As we are now in a pandemic situation, maintaining social or physical distance is
essential. Online shopping, including buying meals, could be an alternative to reduce
physical contact” (pre‑task) and “As you cannot visit the restaurant due to COVID‑19
restrictions, you need to order the food online through this link” (main task).
The pre‑task served to introduce the (target) words that might be useful for students in performing the main task (Ellis & Shintani, 2014; Ellis et al., 2020; Willis, 1996). Instead of sending the words manually through WhatsApp, the words were introduced through digital flashcards (see Table 2.2).
Table 2.2 The "ordering food online" task

Pre-task:
As we are now in a pandemic situation, maintaining social or physical distance is essential. Online shopping, including buying meals, could be an alternative to reduce physical contact. Have you tried ordering food online? What do you like about it and what don't you like about it? What are the advantages and disadvantages of ordering food online in your opinion?
Before performing today's task, knowing some English vocabulary related to the ingredients of food available in a restaurant would be helpful for you. Open this link https://chegg-prep.app.link/yCGBuZHprib and play the flashcard game.

Main task:
Here is the main task. One of you has just won a voucher to eat dinner at a restaurant in Jakarta. The voucher is worth IDR 500.000 and is for four people maximum. As you cannot visit the restaurant due to COVID-19 restrictions, you need to order the food online through this link https://www.kaum.com/jakarta/food-menu/#small-plates.
Open the link and decide together using voice chat to select from the menus: meals, soup and stew, or desserts, that you want. Don't forget that you have a budget limit. Next, use text chat to list the selected dishes and provide reasons why you chose them and then upload your reasons as a group to the class WhatsApp group. You have 30 minutes to carry out this task.

Post-task:
Before this session ends, let's play a game. You will receive eight pictures of ingredients for meals successively. Guess each of the ingredients in the pictures by typing out the relevant English name in the WhatsApp group. After guessing the pictures, you will receive the correct English words of the pictures. Check the words together. Whoever can guess the most number of words correctly is the winner.
The use of digital flashcards reflected what
the teachers suggested regarding the combination of WhatsApp and other apps
(see Table 2.1). Each flashcard presented an English word on one side and its corre‑
sponding image on the other. Figure 2.1 captures the content of one such flashcard.
The flashcard set comprised eight words (vinegar, jicama, pomelo, bean sprout, beef ribs, cashew,
shallot, and bean curd) that were retrieved from the food ingredients used in the
restaurant. Through these lexical items, the students were expected to learn about
some of the ingredients used in the restaurant when they opened the link provided
in the main task.
In the main task, students were required to work together to choose food items
under a certain budget. The students needed to utilize voice chat to communicate
with each other while deciding on the dishes and text chat to list the final order items
before posting them to the class chat. The use of these features echoed the students’
preferences, as shown in Table 2.1. In addition, it has been suggested that using voice
chat may be useful for speaking (Andujar & Salaberri‑Ramiro, 2021), listening, and
pronunciation practice. Meanwhile, text chat was used because it made the selected food items easier for other learners to read and comprehend. Regarding time allocation, we limited the time to 30 minutes to
encourage students to interact more efficiently (Ellis & Shintani, 2014).
The main task satisfies the criteria for a task proposed by Ellis (2009). First, it
focuses on meaning. To accomplish the task, students must communicate collab‑
oratively to decide which food items should be ordered. Second, there is a gap that
must be addressed. Each individual would have their own preferences; however,
they had to reach a consensus on which dishes to select. Third, learners must use their own
linguistic or non‑linguistic resources. Accordingly, the students used English as
the target language. They could also engage in translanguaging and use their local
or national language to avoid communication breakdown (Dobinson et al., 2023).
Further, students could use emojis or stickers to support their interactions. Fourth,
there needs to be an outcome. The (main) task required students to post details of
their selected food items to the class chat as the outcome of the task. Additionally,
the task fulfilled the collaboration and interaction criteria mentioned by González‐
Lloret (2020). Evidently, to complete the task, students had to collaborate and inter‑
act with each other to determine which dishes to order.
This section describes how the “ordering food online” task was performed from
the pre‑task to post‑task stages and highlights noteworthy points from students’
exchanges. To illustrate this, we present screenshots of students' interactions,
supported by related narratives from students’ interviews or learning journals. The
first author sent task instructions to the WhatsApp class group at 9 p.m. It is worth
noting that the screenshot was taken while the first author was in Western Australia; given the one-hour time difference between Western Australia and East Java, it was 8 p.m. in East Java.
The tasks we developed were to be performed in groups and dyads. To accom‑
plish this, we created three WhatsApp groups before task implementation. The first
group was the WhatsApp class group, a centralized hub including the first author;
two and three English language teachers from schools A and B, respectively; and
students. The WhatsApp class group was a platform for the first author to send the
task instructions or for students to post the task outcomes. In some tasks, the group
was used by the students to upload their warm‑up questions, as in the “ordering
food online” task. Other WhatsApp groups were created for tasks to be performed
in groups and dyads. The former group comprised the first author, English language
teachers, and four students, while the latter consisted of the same members but with only two students. Both WhatsApp groups served as spaces for the
students to perform their main tasks. In some tasks, the students completed the
pre‑task stage in the four‑student WhatsApp group.
Based on our observations, all the students responded to the pre‑task warm‑up
questions. However, it was interesting to note the time required by the students to
respond to the questions. Figure 2.2 offers a snapshot of the students’ responses to
the task instructions. As can be seen, the students commenced responding at 9:08
p.m., indicating that they needed approximately eight minutes before posting their
answers. In another school, we noted that students started to respond after 12 min‑
utes. We found that 8 and 12 minutes were rather long durations for composing such sentences, prompting further investigation into this matter.
Vincent provided his viewpoint on this issue during the interview, stating:
This was possibly because [their] mother language is Bahasa Indonesia,
so they were not used to it [English]. It also took time to compose sen‑
tences that [were perfect] … and it might have been influenced by our
culture [feeling shy about submitting responses first before others] …
so yea, the feeling and a will to be perfect that drove [them] to compose
sentences as good and neat as possible.
Vincent offered three reasons why the students took a bit longer to respond to the
task instructions. First, the students were not used to communicating in English,
including writing or texting; therefore, they might have needed to adjust to the situ‑
ation and express themselves accordingly. In addition, the “ordering food online”
task was the first task in the project, creating the need to adapt to different learning
circumstances among students. Second, the students (although perhaps not all of
them) sought to construct sentences as accurately as possible. Momok supported
this, as he conveyed in his learning journal that “the difficulty in converting vocab‑
ulary into sentences that were good to read” was a major challenge in the task. They
may have consulted an online dictionary or grammar book to do so. The desire to
produce perfect sentences might be because the students realized that their teachers
were in the WhatsApp group; therefore, they preferred to be seen as perfect in front
of their teachers and other students. Indeed, the presence of teachers might influ‑
ence students’ behaviors. However, the teachers’ presence in this study mirrored the
real implementation of WhatsApp online classrooms, where students perform tasks
under teachers’ observation and monitoring. Finally, the students possibly felt shy
about being the first to post their responses to the WhatsApp group. This cultural
factor was evidenced when a student (Dominic) posted his response, and the other
students followed immediately (see Figure 2.2).
In the main task, the students interacted using voice chat when they selected
the food items and text chat when they listed the final meals and posted their
details on the class WhatsApp group. Excerpt 2.1 contains a transcription of stu‑
dent exchanges in voice chat format. Based on this excerpt, each student in the
group initially expressed their food preferences. However, negotiations occurred at
9:34 p.m., when Grainybrute pointed out to Dominic that his
proposed food choices were too expensive. This negotiation was part of the task; all
group members had their own preferences, yet they had to agree on which dishes to
eventually select by considering their budgets.
WhatsApp features, such as voice chat, allowed the interaction or negotiation
to occur naturally and easily. For example, Grainybrute deleted a message at 9:30
p.m., which was probably because he mispronounced some words or was unsure
about the message. By deleting the message, Grainybrute could remove and replace
it with an appropriate one, thus avoiding misunderstanding for others. In addi‑
tion, although it does not appear in the excerpt, the students frequently utilized
the “quote and reply” feature (an example of the use of this feature can be seen in
Figure 2.2 when Momok quoted and replied to the task instructions sent by the
first author). Through this feature, students could select the specific message they
wanted to reply to. Hence, it prevented confusion among other students due to mul‑
tiple or simultaneous postings in text or voice‑based interactions. Overall, the use
of voice chat, or to an extent, text chat, and other features such as message deletion
and quote and reply, demonstrates that WhatsApp can facilitate the negotiation of
meaning and hence enhance task‑based or task‑supported interactions.
Figure 2.3 illustrates how students performed the post‑task. After the first author
posted the image of bean curd, the students immediately typed the corresponding
English word for the item shown in the image. Indeed, there was a possibility that
a student copied another student’s answer. However, the purpose of this activity
was to strengthen students’ word memory. It was not designed merely to determine
who could answer the most quickly. As such, by viewing the image and typing or
copying the word, we expected that the game would aid in word retention. In this
regard, some students expressed their views in their learning journals. For example,
Moon revealed that she could remember the words jicama, bean curd, beef ribs,
vinegar, and cashew because “…we competed in typing our answers to the quiz
quickly, so [we could] memorize [the words] automatically.” In line with Moon,
Peenat stated that she could remember the words vinegar, jicama, beef ribs, pomelo,
and cashew because “when the images were sent, we had to answer together. This
challenged me to quickly recall the English words for the food ingredients.” Moon
highlighted the game’s positive impact on her memory, while Peenat suggested a
similar effect of the digital flashcards in the pre‑task stage.
In addition to its potential to strengthen word memory, the game also engaged
students. Figure 2.4 portrays the final interactions in the post‑task phase. Having
posted all the images and received all student responses, the first author inquired
about the winner. The students seemed motivated to answer the question, as indi‑
cated by their use of emojis. They confirmed this in their interviews and learn‑
ing journals. For instance, Nakko expressed in her learning journal, “… Thank you
for preparing such a great game. This was challenging because it made me rush and
tremble while typing. It was [also] because other students were so fast and respon‑
sive.” Likewise, Azazel expressed his viewpoint during the interview, “…in the third
activity [post‑task], for example, we played a guessing game, which created a com‑
petition between our friends to answer quickly. There was passion [from the game].”
Students’ reflections on the “ordering food online” task were varied. Vincent
commented in his learning journal, “Today’s task was so pleasant. We were asked to
provide opinions on a topic that happens in everyday life.” Another student, Bunny,
highlighted the utilization of voice chat or note, “The task was so exciting because
we rarely communicated in English using voice note, so this was a new experience
for me.” Whereas Vincent linked his engagement to the authenticity of the task and
topic, Bunny associated the voice chat feature with a novel learning experience. This
implies that an authentic task developed based on daily topics and performed using
the multimodal features of WhatsApp, including voice chat, can be meaningful,
and language learning can thus be more memorable (Muntaha et al., 2023).
The task also has the potential for vocabulary development (Chou, 2020);
some students revealed that the WhatsApp task helped them retain or recall lexi‑
cal knowledge owing to the fast typing of words and responses to images provided
in the pre‑ and post‑task stages. To this end, designing a task under the TSLT
framework, where the focus on linguistic forms was emphasized in both pre‑ and
post‑tasks, resulted in positive views from the participants in this study. Although
this project did not examine students’ vocabulary learning (e.g., using pre‑ and
post‑tests), the positive perceptions indicated that the tasks are worth considering in
an online language classroom, particularly for those targeting lexical development.
In addition, the perceptions support the NA results, indicating that students require
more lexical development.
Despite positive evaluations from the participants, designing WhatsApp com‑
munication tasks is not without challenges. Preparation is required to turn a pre‑
scribed textbook‑driven language teaching and learning format following the
present, practice, and produce (PPP) framework into a meaningful and authentic
task design supported by technology (Lai et al., 2012). Teachers who are accustomed
to following the national curriculum should familiarize themselves with the TSLT
framework in order to design tasks that develop learners’ agency in their learning
and produce unrehearsed language outputs (Chen, 2012). Teachers also need to
update their knowledge about cutting‑edge technology, especially the features of
WhatsApp, so that they can explore the features that can be integrated into tasks
designed using WhatsApp and learn troubleshooting strategies if a technical prob‑
lem occurs when learners perform tasks on the platform (Chong & Reinders, 2020).
Conducting an NA to inform the development of relevant pedagogic tasks is also
essential (Baralt & Morcillo Gómez, 2017); however, it requires time, which teach‑
ers need to anticipate. Although syllabi and textbooks are commonly available in
TSLT (Ellis, 2017a, 2019b), NA will result in an understanding of students’ needs
(González‑Lloret, 2016; González‑Lloret & Ortega, 2014); for instance, in our case, the identified needs included vocabulary development and preferred topics, since not all learning topics in the textbooks suit students' wants or interests. Additionally,
students’ preferences for using specific WhatsApp features (e.g., text and voice chat
or audio and video calls) may differ across schools and regions. As such, conducting
an NA will reveal the specific features of the app that students want to use to per‑
form learning tasks. In relation to the challenges in task design, we suggest that the
government or teacher associations organize teacher training (Hasnain & Halder, 2023) by inviting national or international experts in technology‑mediated TBLT or TSLT to conduct workshops in person or online.
The implementation of WhatsApp communication tasks also presents challenges
that have important implications for teachers. First, internet connection was an issue
for a few students in this study. For instance, Viona expressed in the learning journal,
“[I] experienced issues with the internet signal when downloading voice notes from
partners.” Thus, teachers should initially inform students that they must choose a place
where the internet connectivity is adequate to perform the tasks (Baralt & Morcillo
Gómez, 2017). Teachers should also have a backup plan in case some students still
experience this issue, for example, recording the session. Second, some students found
that their partners’ voices in the voice chats were somewhat unclear, particularly in
the middle of the voice note. This issue was not caused by a technical problem; rather, the students did not speak clearly. Therefore, before students commence
the tasks, teachers should emphasize that they need to record their voices as clearly
as possible and use text chats to clarify when the voice is unclear. Third, as shown in
the findings section, the students took a long time to post a response. Regarding this
issue, teachers should inform students before task implementation that they need to
be responsive in performing the tasks. Finally, students may not all be online when
the WhatsApp classes begin. This can interfere with class dynamics as learners can‑
not accomplish interactive tasks without a partner (Lai et al., 2012). To address this
issue, teachers should coordinate with students to agree on schedules a few days prior
to task implementation. If students are absent, teachers should consider an alternative
strategy, possibly rearranging the pairs or rescheduling online meetings if many stu‑
dents cannot join. With respect to the challenges of task implementation, we suggest
that teachers co‑teach with colleagues or seek professional learning support by joining locally or internationally established teacher groups.
2.5 Conclusion
This chapter describes the development, implementation, and evaluation of
WhatsApp communication tasks for Indonesian EFL secondary school students.
We present the development of the tasks from the stages of NA to performing peda‑
gogic tasks, followed by an example of our task. We also illustrate how the task was
implemented in EFL classes using WhatsApp communication tasks outside school
hours, as additional language support. While the evaluation exhibits positive stu‑
dent perceptions of the task, this study has a limitation: owing to the limited number of participants included in the NA, the findings may not be generalizable. However, the sample is representative of the targeted context group, which shares the characteristics of other EFL student groups in secondary schools in Indonesia.
This study contributes to the literature in two ways. First, it contributes to the
body of knowledge on technology‑mediated TBLT and TSLT, particularly regard‑
ing the use of MIM platforms (Smith & González‑Lloret, 2020). Second, although
the intensity of the COVID‑19 pandemic has diminished, meaning that language
pedagogy has returned to the face‑to‑face mode, this study may inspire teachers to
implement the task in their traditional classrooms with some adaptations. For exam‑
ple, the digital flashcards used in the pre‑task stage of the “ordering food online”
task can be replaced by traditional flashcards. With regard to the restaurant menu
provided in the main task through the website link, teachers can print out one page
from the menu and distribute this to students. Finally, in the post‑task phase, teach‑
ers can reuse the traditional flashcards to play word games with learners.
We conclude that designing and implementing WhatsApp communication
tasks in online pedagogy can be challenging but rewarding. For example, perform‑
ing NA requires time, but the students’ positive perceptions of the development of
lexical items and task topics indicate that it is a worthwhile endeavor. These tasks
can also be an alternative for EFL teachers to provide their students with novel or
distinct learning experiences while adhering to prescribed syllabi and textbooks.
References
Adams, R., & Newton, J. (2009). TBLT in Asia: Constraints and opportunities. Asian Journal
of English Language Teaching, 19, 1–17. https://doi.org/10.26686/wgtn.12720848.v1
Ahern, T. C. (2008). CMC for language acquisition. In F. Zhang & B. Barber (Eds.),
Handbook of research on computer‑enhanced language acquisition and learning (pp. 295–
306). IGI Global. https://doi.org/10.4018/978‑1‑59904‑895‑6.ch017
Alamer, A., & Al Khateeb, A. (2023). Effects of using the WhatsApp application on language
learners motivation: A controlled investigation using structural equation modelling.
Computer Assisted Language Learning, 36(1–2), 149–175. https://doi.org/10.1080/09
588221.2021.1903042
Andujar, A. (2016). Benefits of mobile instant messaging to develop ESL writing. System, 62,
63–76. https://doi.org/10.1016/j.system.2016.07.004
Andujar, A., & Salaberri‑Ramiro, M. S. (2021). Exploring chat‑based communication in the
EFL class: Computer and mobile environments. Computer Assisted Language Learning,
34(4), 434–461. https://doi.org/10.1080/09588221.2019.1614632
Andújar‑Vaca, A., & Cruz‑Martínez, M.‑S. (2017). Mobile instant messaging: WhatsApp
and its potential to develop oral skills. Comunicar: Revista Científica de Comunicacíon y
Educacíon, 25(50), 43–52. https://doi.org/10.3916/C50‑2017‑04
Bakhtiyar, M. A. (2017). Promoting blended learning in vocabulary teaching through
WhatsApp. Nidhomul Haq: Jurnal Manajemen Pendidikan Islam, 2(2), 106–112.
https://doi.org/10.31538/ndh.v2i2.146
Baralt, M., & Morcillo Gómez, J. (2017). Task‑based language teaching online: A guide for
teachers. Language Learning & Technology, 21(3), 28–43. https://doi.org/10125/44630
Buendgens‑Kosten, J. (2014). Authenticity. ELT Journal, 68(4), 457–459. https://doi.
org/10.1093/elt/ccu034
Bui, H. P., Bui, H. P. H., & Dinh, P. D. (2023). Vietnamese students’ use of smartphone apps in
English learning. LEARN Journal: Language Education and Acquisition Research Network,
1(6), 28–46. https://so04.tcithaijo.org/index.php/LEARN/article/view/263430
Canals, L., & Mor, Y. (2023). Towards a signature pedagogy for technology‑enhanced
task‑based language teaching: Defining its design principles. ReCALL, 35(1), 4–18.
https://doi.org/10.1017/S0958344022000118
Chen, J. (2023). Second Life as a virtual playground for language education: A practical guide for
teaching and research. Routledge. https://doi.org/10.4324/9781003152958
Chen, J. C. (2012). Designing a computer‑mediated, task‑based syllabus: A case study in a
Taiwanese EFL tertiary class. Asian EFL Journal, 14(3), 62–97.
Chong, S. W., & Reinders, H. (2020). Technology‑mediated task‑based language teaching: A
qualitative research synthesis. Language Learning and Technology, 24(3), 70–86. https://
doi.org/10125/44739
Chou, M.‑H. (2020). Task‑supported language teaching to enhance young EFL adolescent
learners’ comprehension and production of English phrasal verbs in Taiwan. Education
3‑13, 48(4), 455–470. https://doi.org/10.1080/03004279.2019.1617328
Church, K., & De Oliveira, R. (2013). What’s up with WhatsApp? Comparing mobile
instant messaging behaviors with traditional SMS. Proceedings of the 15th International
Conference on Human‑Computer Interaction with Mobile Devices and Services.
David, M., Edwards, R., & Alldred, P. (2001). Children and school‑based research: ‘Informed
consent’ or ‘educated consent’? British Educational Research Journal, 27(3), 347–365.
https://doi.org/10.1080/01411920120048340
Dobinson, T., Dryden, S., Dovchin, S., Gong, Q., & Mercieca, P. (2023). Translanguaging
and “English only” at Universities. TESOL Quarterly, 58(1), 307–333. https://doi.
org/10.1002/tesq.3232
Ellis, R. (2000). Task‑based research and language pedagogy. Language Teaching Research,
4(3), 193–220. https://doi.org/10.1177/136216880000400302
Ellis, R. (2003). Task‑based language learning and teaching. Oxford University Press.
Ellis, R. (2009). Task‑based language teaching: Sorting out the misunderstandings.
International Journal of Applied Linguistics, 19(3), 221–246. https://doi.org/
10.1111/j.1473‑4192.2009.00231.x
Ellis, R. (2017a). Position paper: Moving task‑based language teaching forward. Language
Teaching, 50(4), 507–526. https://doi.org/10.1017/S0261444817000179
Ellis, R. (2017b). Task‑based language teaching. In S. Loewen & M. Sato (Eds.), The Routledge
handbook of instructed second language acquisition. Routledge.
Ellis, R. (2019a). Introducing task‑based language teaching. TEFLIN Publication.
Ellis, R. (2019b). Towards a modular language curriculum for using tasks. Language Teaching
Research, 23(4), 454–475. https://doi.org/10.1177/1362168818765315
Ellis, R. (2020). Teacher‑preparation for task‑based language teaching. In C. Lambert &
R. Oliver (Eds.), Using tasks in second language teaching: Practice in diverse contexts
(pp. 99–120). Multilingual Matters. https://doi.org/10.21832/LAMBER9448
Ellis, R., & Shintani, N. (2014). Exploring language pedagogy through second language acquisi‑
tion research. Routledge.
Ellis, R., Skehan, P., Li, S., Shintani, N., & Lambert, C. (2020). Task‑based language teaching:
Theory and practice. Cambridge University Press. https://doi.org/10.1017/9781108643689
Etikan, I., Musa, S. A., & Alkassim, R. S. (2016). Comparison of convenience sampling and
purposive sampling. American Journal of Theoretical and Applied Statistics, 5(1), 1–4.
https://doi.org/10.11648/j.ajtas.20160501.11
García‑Gómez, A. (2022). Learning through WhatsApp: Students’ beliefs, L2 pragmatic
development and interpersonal relationships. Computer Assisted Language Learning,
35(5–6), 1310–1328. https://doi.org/10.1080/09588221.2020.1799822
Gilabert, R., & Malicka, A. (2021). From needs analysis to task selection, design, and
sequencing. In M. J. Ahmadian & M. H. Long (Eds.), The Cambridge handbook of
task‑based language teaching (pp. 226–249). Cambridge University Press. https://doi.
org/10.1017/9781108868327
González‑Lloret, M. (2016). A practical guide to integrating technology into task‑based language
teaching. Georgetown University Press.
González‐Lloret, M. (2017). Technology for task‐based language teaching. In C. A. Chapelle
& S. Sauro (Eds.), The handbook of technology and second language teaching and learning
(pp. 234–247). John Wiley & Sons, Inc.
González‐Lloret, M. (2020). Collaborative tasks for online language teaching. Foreign
Language Annals, 53(2), 260–269.
González‑Lloret, M., & Ortega, L. (2014). Technology‑mediated TBLT: Researching technology
and tasks. John Benjamins Publishing Company. https://doi.org/10.1075/tblt.6
González‑Lloret, M., & Ziegler, N. (2021). Technology‑mediated task‑based language
teaching. In M. J. Ahmadian & M. H. Long (Eds.), The Cambridge handbook of
task‑based language teaching (pp. 326–345). Cambridge University Press. https://doi.
org/10.1017/9781108868327
Hasnain, S., & Halder, S. (2023). Exploring the impediments for successful implementation
of the task‑based language teaching approach: A review of studies on teachers’ percep‑
tions. The Language Learning Journal, 51(2), 208–222. https://doi.org/10.1080/0957
1736.2021.1989015
Hung, B. P., & Nguyen, L. T. (2022). Scaffolding language learning in the online classroom.
In R. Sharma & D. Sharma (Eds.), New trends and applications in Internet of
Things (IoT) and big data analytics (pp. 109–122). Springer. https://doi.org/
10.1007/978‑3‑030‑99329‑0_8
Hung, B. P., Pham, D. T. A., & Purohit, P. (2022). Computer mediated communication in
second language education. In R. Sharma & D. Sharma (Eds.), New trends and applica‑
tions in Internet of Things (IoT) and big data analytics (pp. 45–60). Springer. https://doi.
org/10.1007/978‑3‑030‑99329‑0_4
Kessler, M., Solheim, I., & Zhao, M. (2021). Can task‐based language teaching be “authen‑
tic” in foreign language contexts? Exploring the case of China. TESOL Journal, 12(1),
e00534. https://doi.org/10.1002/tesj.534
Kohne, J., Elhai, J. D., & Montag, C. (2023). A practical guide to WhatsApp data in social
science research. In C. Montag & H. Baumeister (Eds.), Digital phenotyping and mobile
sensing: New developments in psychoinformatics (2nd ed.). Springer.
Kukulska‑Hulme, A. (2013). Mobile‑assisted language learning. In C. A. Chapelle (Ed.), The
encyclopedia of applied linguistics (pp. 3701–3709). Wiley.
Lai, C., & Li, G. (2011). Technology and task‑based language teaching: A critical
review. CALICO Journal, 28(2), 498–521. https://www.jstor.org/stable/10.2307/
calicojournal.28.2.498
Lai, C., Zhao, Y., & Wang, J. (2012). Task‐based language teaching in online ab initio for‑
eign language classrooms. The Modern Language Journal, 95, 81–103. https://doi.
org/10.1111/j.1540‑4781.2011.01271.x
Lambert, C. (2019). Referent similarity and nominal syntax in task‑based language teaching.
Springer.
Long, M. (2015). Second language acquisition and task‑based language teaching. John Wiley
& Sons.
Long, M. H. (2016). In defense of tasks and TBLT: Nonissues and real issues. Annual Review
of Applied Linguistics, 36, 5–33. https://doi.org/10.1017/S0267190515000057
Morgana, V. (2019). A review of MALL: From categories to implementation. The case
of Apple’s iPad. The EUROCALL Review, 27(2), 1–12. https://doi.org/10.4995/
eurocall.2019.11024
Morgana, V. (2021). Mobile assisted language learning across different educational settings:
An Introduction. In V. Morgana & A. Kukulska‑Hulme (Eds.), Mobile assisted language
learning across educational contexts (pp. 1–9). Routledge.
Muntaha, M., Chen, J., & Dobinson, T. (2023). Exploring students’ experiences of using mul‑
timodal CMC tasks for English communication: A case with Instagram. Educational
Technology & Society, 26(3), 69–83. https://doi.org/10.30191/ETS.202307_26(3).0006
Oliver, R. (2020). Developing authentic tasks for the workplace using needs analysis: A case
study of Australian aboriginal students. In C. Lambert & R. Oliver (Eds.), Using tasks
in second language teaching: Practice in diverse contexts (pp. 146–161). Multilingual
Matters.
Rashtchi, M., & Yazdani, P. (2020). Intentional vocabulary learning via WhatsApp: Does
the type of input matter? English Language Teaching Educational Journal, 3(2),
118–132.
3.1 Introduction
Since the advent of artificial intelligence (AI) and digital tools in language education, numerous applications have been created to enable self‑study and to integrate technology in ways that enhance learning outcomes. ELSA (English Language Speech Assistant) Speak was designed to improve the pronunciation of non‑native speakers of English. It is an English‑speaking application that uses AI‑based tools to recognize speech and to evaluate and comment on language performance regarding pronunciation and fluency.
ELSA Speak has been found to improve students' pronunciation skills (Kholis, 2021; Wicaksana, 2022), but how students evaluate the use of the application remains a question to explore. Such an evaluation would benefit students themselves in deciding whether to continue with ELSA Speak, the application designers, and teachers who want to improve the tool and integrate the software into their lessons. To fill these gaps, the current study was implemented to answer the following research questions:

1. How do EFL university students use ELSA Speak?
2. How do EFL university students evaluate the use of ELSA Speak?
Table 3.1 Educational App Evaluation Rubrics

Relevance. 4 = The app connects strongly with the learning purpose and is appropriate for students; 3 = the app connects with the learning purpose and is mostly appropriate for students; 2 = there is limited connection to the learning purpose and the app may not be appropriate for students' learning; 1 = the app does not connect to the learning purpose and is not appropriate for students.

Customization. 4 = The app provides a high level of flexibility to change content and settings to meet students' learning needs; 3 = the app provides some degree of flexibility to alter content and settings to meet students' learning needs; 2 = the app provides limited flexibility to change content and settings to meet students' learning needs; 1 = the app offers no flexibility to meet students' learning needs.

Feedback. 4 = The app gives students specific feedback; 3 = the app gives students feedback; 2 = the app gives students limited feedback; 1 = the app does not give students feedback.

Thinking skills. 4 = The app promotes higher order thinking skills such as creating, evaluating, and analyzing; 3 = the app promotes higher order thinking skills including evaluating, analyzing, and applying; 2 = the app promotes mostly lower order thinking skills like understanding and remembering; 1 = the app is limited to the use of lower order thinking skills, including understanding and remembering.

Engagement. 4 = Students are highly motivated to learn with the app; 3 = students use the app as instructed by the teacher; 2 = students perceive the app as "more schoolwork" and may work off‑task sometimes; 1 = students avoid using the app.

Sharing. 4 = The app saves a summary of students' specific performance that can be exported to the teacher or others; 3 = the app saves a summary of students' specific performance but exporting is limited; 2 = the app saves limited performance data of students; 1 = no summary of students' performance is saved.

Source: Adapted from Vincent (2012).
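Because the rubric doubles as a simple scoring scheme (four levels across six criteria), a teacher evaluating an app can record it in a few lines of code. The following is a minimal sketch in Python; the example ratings are hypothetical and are not scores reported in this chapter:

# Scores follow the 4-to-1 scale of Table 3.1 (hypothetical example ratings).
rubric_scores = {
    "Relevance": 4,        # connects strongly with the learning purpose
    "Customization": 3,    # some flexibility to alter content and settings
    "Feedback": 4,         # specific feedback
    "Thinking skills": 2,  # mostly lower order thinking skills
    "Engagement": 4,       # students highly motivated
    "Sharing": 3,          # performance summary saved, limited export
}
total = sum(rubric_scores.values())
print(f"Total: {total} / {4 * len(rubric_scores)}")  # e.g., Total: 20 / 24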
In earlier evaluations, students' pronunciation was reported to show positive development over the course of one academic year, together with increased interest in the use of the application.
Focusing on ELSA Speak's supporting features and on how students study pronunciation autonomously, Taqy (2021) conducted an in‑depth case study with three students of English who had already used the app for at least one month. The interview results indicated that ELSA Speak supported pronunciation learning, including vowel and consonant sounds, diphthongs, word stress, and intonation. Furthermore, students' learning autonomy was realized through their decisions about learning topics and learning time and through correcting their own mistakes. Other perceived benefits included evaluating learning performance, determining needs in pronunciation study, learning independently, assessing learning progress, taking responsibility for their own learning, and practicing.
The outcome of learning English with the support of ELSA Speak was also reflected in the study by Nguyen (2020), who found that the students held positive beliefs about the application and reported that their anxiety about speaking English was alleviated when learning with ELSA Speak, which in turn improved their speaking skills. However, ELSA focuses only on pronunciation, which the students perceived as leading to an imbalance between the suprasegmental and segmental aspects of their speech.
In general, ELSA Speak has been evaluated for its benefits in improving learners' pronunciation and alleviating learners' anxiety when learning English. Further study is needed on how this digital application serves learners' self‑study, focusing on usage habits, activities, and learners' evaluation of the application.
The interview protocol drew on Lee and Cherner (2015) and Nguyen (2019). The interview questions were designed to investigate the students' habits and opinions regarding ELSA Speak. The protocol had three parts, which concentrated on the students' background, the common ways and times of using ELSA Speak, and the students' evaluation of ELSA Speak use.
To analyze the questionnaire data, SPSS (Statistical Package for the Social Sciences) was used to calculate the internal consistency of the questionnaire and descriptive statistics. The interviews with 15 students were conducted in Vietnamese and lasted about 25 minutes each. The interviewees were coded S1 to Sn (student 1 to student n) to conceal their identities. The interview transcripts were first translated into English and then coded into themes, including time and frequency of using ELSA Speak, purposes of using the application, its most valuable features, reported improvement in language use after using the application, overall evaluation of ELSA Speak, and recommendations for its use. For each coded theme, the frequency of answers was counted, and typical responses were cited.
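Although the study used SPSS, the reliability and descriptive computations it reports are straightforward to reproduce in open tools. The following is a minimal sketch in Python with pandas, assuming a hypothetical file elsa_questionnaire.csv whose columns q1 to q30 hold the 185 students' Likert‑scale responses; the file and column names are ours, not the authors':

import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

df = pd.read_csv("elsa_questionnaire.csv")   # hypothetical data file
items = df[[f"q{i}" for i in range(1, 31)]]  # hypothetical item columns q1..q30

print(f"Cronbach's alpha: {cronbach_alpha(items):.3f}")     # the chapter reports 0.878
print(items.agg(["count", "min", "max", "mean", "std"]).T)  # per-item descriptives

# Cluster means as reported in the chapter: items 1-10 (use) and 11-30 (evaluation).
print(items.iloc[:, :10].mean(axis=1).mean())  # reported as 3.6492
print(items.iloc[:, 10:].mean(axis=1).mean())  # reported as 3.9003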
3.4 Findings
3.4.1 EFL Students’ Use of ELSA Speak
Data were collected from the questionnaire completed by 185 EFL students and from interviews with 15 students, and were analyzed to see how the students used ELSA Speak in their learning. First, the reliability statistics of the questionnaire were run, yielding a Cronbach's alpha of 0.878 and suggesting high reliability of the questionnaire used in this study. The mean scores of the two clusters, the use of ELSA Speak and the evaluation of the application, were then tabulated. The mean scores reached 3.6492 for the cluster of ELSA Speak use and 3.9003 for its evaluation, which overall indicates the students' positive perception and appreciation of the application and the similarity among their answers. The following section presents the findings answering the first research question on how EFL students reported using the application (Table 3.2).
As can be seen from Table 3.2, item 4 achieves the highest mean score (M = 4.1135, SD = 0.73948), while item 6 has the lowest (M = 3.1784, SD = 1.19581). This indicates that the majority considered the application mainly a self‑learning tool. However, the students did not strongly agree that the application was their main self‑study resource, as item 2 achieves a mean score of only 3.4054 (SD = 0.96288), and item 1 shows only neutrality about using ELSA as a general tool for learning English. Slightly below the highest item, item 8 (M = 4.0270, SD = 0.76919) indicates the perceived benefit of ELSA Speak for accent‑related pronunciation. Apart from pronunciation, the various English‑speaking practice activities the application provides are another reason the students chose it, as illustrated by the mean score of item 3 (M = 3.9459, SD = 0.87059). Similarly, item 9 (M = 3.9676, SD = 0.78645) shows satisfaction with the free version of ELSA, which contrasts with the opinions of the interviewed students, who recommended the paid version of ELSA Speak. Additionally, item 10 (M = 3.4486, SD = 1.04195) further confirms the divergence in users' preferences between the two versions of ELSA Speak.
The whole cluster of ELSA Speak use records a standard deviation of 0.52005 around the mean score of 3.6492, indicating that about 68% of the students' answers fall between 3.12915 and 4.16925 and that the overall tendency of the answers is positive, from "neutral" to "agree." The highest standard deviations were seen in items 10, 7, and 6, at 1.04195, 1.06054, and 1.19581, respectively, showing remarkable differences in the participants' perspectives on choosing the paid version of the application, integrating ELSA Speak into their schoolwork, and using the application collaboratively.
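To make the arithmetic behind this interval explicit: it is the cluster mean plus or minus one standard deviation, which covers roughly 68% of responses under an approximately normal distribution:

3.6492 - 0.52005 = 3.12915 and 3.6492 + 0.52005 = 4.16925, giving the interval [3.12915, 4.16925].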
Besides the questionnaire data, extracts from group interviews with the students at the research site were used for analysis. In the first group interview, two students stated that they had started to use ELSA Speak in year one of university, one in year two, and one since high school. The average time each spent using the app was 15 minutes per session, and two students claimed that they used ELSA Speak daily because of the set schedule as well as the performance streak. Student 1 said, "To be honest, I think the attendance streak of the application has motivated me even more than the knowledge benefits that it brings me that makes me want to come back to use the application."
In addition, the interviewed students reported using ELSA Speak for different purposes. For example, student 4 said, "I used ELSA because the interaction between AI and users made it clear for me to learn. I especially appreciated the clear feedback on pronunciation." Students 1 and 2 showed significant interest in the application's correction of mouth shape and accent adaptation. They said, "ELSA Speak helped me practice pronunciation with detailed feedback which corrected my mouth shape, and it also provided illustrations that I could see and try to adapt to." Student 3 shared a different view on the benefit of the application: its accent‑based learning content, meaning that users could work toward the native‑like accent of the English‑speaking region they aimed at through studying with ELSA Speak.
The improvements most frequently claimed by the students in group one were in pronunciation, intonation, and stress, owing to specific practice and revision of sounds not available in their mother tongue. Below is an example:
I got the chance to practice pronouncing words in the most comprehen‑
sive way. Thanks to the feedback system and AI technology of ELSA
Speak, I have meliorated the words that I usually mispronounce due to
my habits related to my native language. Moreover, my intonation and
stress when speaking were noticed very carefully. (Student 1, interview)
In group interview two, two students stated that they started to use the applica‑
tion in high school, and one began to use it in year three at university, whereas
two students said they had used ELSA Speak from year two. The average session length was about 15 minutes. One student reported using ELSA
Speak for up to one hour by repeating the same exercise until she mastered it. She
explained that in this way of learning, she could ensure that she had fully attained
the required knowledge of the session, which helped her memorize the lesson lon‑
ger. It was also a good way to utilize the limited content of the accessible version of
ELSA Speak, “I tend to stick with one lesson for a good amount of time until I feel
like I have perfected my knowledge so that I can move on to the next lesson and
utilize the content that I was provided from ELSA Speak.”
The interviewed students in group two reported using ELSA Speak for the same
purpose: practicing English pronunciation. Students 6 and 7 explained that ELSA
application provided them with detailed and comprehensive feedback on the users’
pronouncing performance.
The improvements reported by the students in interview group two included improved pronunciation of intricate sounds (student 6); pronouncing and reading words correctly in accordance with the International Phonetic Alphabet (IPA) standard (student 8); producing clearer ending sounds and having better intonation (student 9); and reforming and adapting the learned sounds and words (student 7). In contrast, student 5 reported no significant progress overall, although his intonation improved after using the application.
In group interview three, three students said that they had started learning with ELSA Speak in their first year of university, one student from year two of university, and two other students from high school. The average session length was again reported to be about 15 minutes. In addition, the interviewed students in group
three reported using ELSA Speak for different purposes. For instance, students 12
and 14 preferred using the application for self‑study thanks to its detailed feedback
and clear directions and instructions to correct their pronunciation. Meanwhile,
students 11 and 15 found that ELSA Speak was interesting and the material of the
application was suitable, especially for new learners. Last but not least, students 10
and 13 both stated that they valued the vocabulary diversity of ELSA Speak and
objective lessons that can help them revise and practice to better their pronunciation.
Student 10 answered, “I like how we can just randomly search the words I want to
learn, and ELSA application will provide me with a detailed lesson about that single
word. There can be the IPA form, the illustration and the AI feedback for it.”
The reported improvement in language use after using ELSA Speak was similar among the students: development of pronunciation skills, together with an extension of their English lexical resource.
Table 3.3 University EFL Students' Evaluation of ELSA Speak Use (N = 185)

11. The contents and materials of ELSA Speak are appropriate and relevant for my English‑speaking learning. Min = 3.00, Max = 5.00, M = 3.84, SD = 0.64
12. The contents and materials of ELSA Speak are presented clearly. Min = 1.00, Max = 5.00, M = 4.00, SD = 0.72
13. ELSA Speak has various educational activities that have media integration. Min = 1.00, Max = 5.00, M = 3.84, SD = 0.79
14. The level of the learning materials of ELSA Speak is lower than my English proficiency level. Min = 1.00, Max = 5.00, M = 3.33, SD = 1.00
16. Using ELSA application improves my English‑speaking skill very quickly. Min = 2.00, Max = 5.00, M = 3.82, SD = 0.79
17. I can customize and set the lessons and the speaking exercises in ELSA Speak. Min = 2.00, Max = 5.00, M = 3.65, SD = 0.84
18. ELSA Speak provides detailed and specific feedback on my English‑speaking skill when I finish the exercises. Min = 2.00, Max = 5.00, M = 3.92, SD = 0.82
19. I can see my good and bad points of my English‑speaking skill thanks to feedback from ELSA application. Min = 1.00, Max = 5.00, M = 3.97, SD = 0.79
20. I can apply the knowledge that I have learned from ELSA Speak to English speaking in real life. Min = 2.00, Max = 5.00, M = 3.92, SD = 0.74
22. ELSA application is easy to use. Min = 2.00, Max = 5.00, M = 4.25, SD = 0.73
24. The themes of ELSA Speak are friendly and aesthetic. Min = 2.00, Max = 5.00, M = 3.97, SD = 0.68
25. The contents and materials of ELSA Speak are interesting and related to the real life. Min = 2.00, Max = 5.00, M = 3.97, SD = 0.71
26. I am motivated to learn more about English when using ELSA Speak for my English‑speaking skills. Min = 1.00, Max = 5.00, M = 3.83, SD = 0.77
27. I find it easier to speak English thanks to the vocabulary and sentence structures given in the lessons by ELSA Speak. Min = 2.00, Max = 5.00, M = 3.87, SD = 0.74
29. I can share my performance and development summary of my English‑speaking skill in the application with my teachers or my friends. Min = 1.00, Max = 5.00, M = 3.74, SD = 0.92
30. Thanks to ELSA application specific feedback, my English pronunciation has been improved. Min = 2.00, Max = 5.00, M = 4.17, SD = 0.76
As Table 3.3 shows, the maximum score of almost every item reaches the value of 5. The highest mean score, 4.2595, belongs to item 22, on the easy usability of the application. In contrast, the lowest mean score belongs to item 14 (M = 3.3351, SD = 1.00331), indicating moderate agreement that the level of the learning materials, although appropriate, is somewhat low relative to the users' proficiency.
The clarity and diversity of the application's content were appreciated by the majority of the respondents, as shown in item 12 (M = 4.0054, SD = 0.72605) and item 13 (M = 3.8486, SD = 0.79318). Item 15 (M = 3.9730, SD = 0.67108) and item 16 (M = 3.8270, SD = 0.79559) indicate the users' perceived improvement and the efficiency of learning with the application. The vocabulary and sentence structures provided alongside the content are also believed to have facilitated the learning process with ELSA Speak, as presented in item 27 (M = 3.8757, SD = 0.74506). Moreover, the themes, materials, and authenticity of ELSA are highly appreciated in item 24 (M = 3.9730, SD = 0.68709) and item 25 (M = 3.9730, SD = 0.71; see Table 3.3).
The feedback system of ELSA Speak, surveyed in item 18 (M = 3.9297, SD = 0.82780), item 19 (M = 3.9784, SD = 0.79370), and item 30 (M = 4.17, SD = 0.76; see Table 3.3), indicates the perceived helpfulness of the app's feedback during learning. Additionally, the development tracking system is evaluated as effective in assisting the users' learning process (item 28, M = 3.9514, SD = 0.73928). The value of ELSA Speak for communicative competence in real life is regarded very positively, as in item 20 (M = 3.9297, SD = 0.74486) and item 21 (M = 3.9081, SD = 0.77823). Furthermore, most students agreed that their motivation to speak and learn English was enhanced through ELSA Speak (item 26, M = 3.8378, SD = 0.77015).
The interviews were also analyzed to obtain an in‑depth evaluation of the application's use by the EFL students. In the interviews, students 2 and 3 said that flexible customization based on the user's level and the diversity of up‑to‑date conversations and lesson topics were the two most significant advantages of ELSA. Student 1 agreed that the application could remarkably improve the user's speaking skills in terms of pronunciation, intonation, and stress. Additionally, student 3 showed interest in the feedback system and the easy‑to‑use, eye‑catching features alongside ELSA Speak's rich repository of topics. Since the application focuses only on improving pronunciation, student 2 did not find it suitable for learning English comprehensively. Student 4 added, "Although ELSA's AI feedback system is helpful for my speaking foundation, I cannot develop in the long period since I cannot practice real‑life situations."
All the interviewed students in group one agreed that ELSA Speak was worth trying, especially for low‑level learners who want to build up and improve their foundation in pronunciation. As a self‑learning resource, ELSA Speak stood as a trustworthy application among the interviewed students, and all agreed that it is suitable for beginners and a great app for strengthening the pronunciation foundation. However, four out of five students were skeptical about growth in the long run, since the free version of the ELSA application does not offer adequate long‑term practice.
The most useful features of ELSA Speak highlighted by the students in interview group two included a clear arrangement of the lessons and study time (students 8 and 6) and thorough pronunciation correction (students 6, 8, and 9). For students 13, 14, 10, and 15, the worthiest aspect of ELSA Speak was the way the application's AI technology assessed the users' level and designed a suitable learning path for them. This continues throughout the use of ELSA Speak, as the application reflects the users' performance and arranges the lessons systematically. A student said, "My experience with ELSA application was quite pleasant, I did not have to do much. Actually, all I had to do was to follow the instructions and let the AI technology of the application do all the work for me. All the lessons plan or development route or self‑assessment, I only had to follow the given assignments." In addition, students 11 and 12 were fond of the diverse and up‑to‑date contents with a convenient dictionary alongside. One of them recalled, "I remember watching this advertisement on Facebook of ELSA application, it was using the theme of Halloween which made me interested and intrigued my curiosity to use this application."
As a self‑learning resource, all the interviewed students agreed that the ELSA application is suitable for beginners and is a great app for strengthening the pronunciation foundation. However, students 10, 11, 14, and 15 shared concerns about development in the long run, since the free version of the ELSA application offers only limited lessons and knowledge: "Although the content of ELSA Speak is appropriate and very attractive to study with, I could not advance to a higher level because the free version does not offer so."
Besides, according to students 10 and 11, although the application can help new users to self‑study at home, users need prior knowledge of phonetic transcription to fully benefit from the application: "I remember that time when I first used ELSA application the IPA form of it was like a code for me to decipher. I had to listen to the word from the audio and blind‑try to say it. I think it would be more effective if the IPA can be assistant in this case."
Furthermore, student 14 did not find the application as effective, since users can only use ELSA Speak alone while English speaking requires more interactive practice. In contrast, student 12 found the self‑learning aspect of the application efficient even when users use it by themselves, since the application's AI technology provides comprehensive feedback.
Overall, the students in group three considered the ELSA application to be a
prominent tool to practice and reinforce pronunciation skills. The effectiveness of
the application is validated by the majority, and the contents of the lessons are of
high quality. However, the free version of ELSA Speak offers limited access and
does not provide advanced learning for users (students 12, 13, 14, and 15). Students
10 and 13 also added that the application can only improve pronunciation and
moderately enhance the users’ vocabulary; thus, the users can not develop their
English‑speaking skills thoroughly and comprehensively.
In group three, the views on recommending the ELSA application varied. Student 10 would not recommend that students in Vietnam rely fully on the application, because she thought users must have self‑discipline and a dedicated will to learn if they would like to make the most of it. In comparison, students 11, 12, and 13 found that the application would be a great assistant for low‑level learners to develop their foundation in English. Lastly, students 14 and 15 would not recommend the application, since the content of the free version is quite limited and users need some prior knowledge of English, such as knowing the IPA, to take full advantage of ELSA Speak.
3.5 Discussion
This study attempted to answer two research questions: how EFL university students use ELSA Speak and how they evaluate its use. Data were collected through a questionnaire and interviews from 185 EFL students who had used the application. In general, the findings show that the majority of the students perceived the use of the ELSA application positively. In terms of usage habits, the students mainly considered the application a self‑learning tool at home that could help them improve their pronunciation and English‑speaking skills. This result is similar to the findings of Haryadi and Aprianoto (2020), who found that the majority of students showed high appreciation for the application and preferred using it as a self‑tutor at home. However, ELSA Speak was not considered the main self‑learning resource in the current study, likely because the application focuses mainly on pronunciation, so English‑speaking self‑learning in general cannot be supported by the application alone.
The study also found that the reported reasons for using the ELSA application were diverse. The first and most common reasons among the participants were pronunciation practice and accent‑based improvement. Another reason was that the students could see their English‑speaking mistakes through the application's feedback, which integrates well‑developed AI technology; thus, they were able to reinforce their learning and receive appropriate exercises for better improvement and for developing other English skills. The secondary benefits that ELSA Speak offers, especially the vocabulary and syntax knowledge gained through the variety of in‑app lessons and exercises, were also reported to make the students want to try it.
In terms of learning attitude and motivation, the current study resembles the research by Triwardani and Azi (2022), in which the students' intention to learn and use English in daily life and in school tasks increased significantly during and after using ELSA Speak. Most of the students in the current study found themselves more confident thanks to using the ELSA application. Through the use of ELSA Speak, they reported gaining confidence in using English during regular school time and tended to build a more curious mindset in learning English. Furthermore, in accordance with the research of Anggraini (2022), similar results were found: the majority of the current research participants confirmed that the application stimulated them to practice English in the classroom.
A newly discovered aspect of the ELSA application in the current study is that its "cliffhanger" was reported to be the streak point system and the set learning schedule. The students tended to come back to learn with the application not only because they wanted to practice English but also to stay engaged with the ranking system and the competition with their friends on the linked platforms.
Unlike other applications, ELSA Speak provides lessons and exercises that focus exclusively and intensively on pronunciation. This can be seen in the detailed feedback system, which analyzes the smallest fractions of the users' pronunciation. With these features, the students in the current study generally evaluated the ELSA application as helpful for practicing and polishing their foundation in English pronunciation.
However, one problem that the participants in the current study recognized was that they could not extend their development beyond this aspect. Moreover, the level of the lessons in ELSA Speak is intermediate, so the students could neither broaden their vocabulary range nor enhance their competence in real‑life conversations, since the interaction in the application is quite rigid and the content is rather repetitive. Hence, this finding reiterates what was reported by Nguyen (2020) about the influence of ELSA Speak on students in Vietnam and their perceptions of the application.
One drawback of ELSA Speak recognized in Nguyen's (2020) research, the imbalance between the suprasegmental and segmental aspects of speech, differs from the current study's findings. In the interviews, the students reported that in recent years the application has added new features that allow users to review their intonation and stress in detail. Furthermore, users can now learn from visual models that illustrate the pronunciation of words.
In the current study, the majority of ELSA Speak users were reported to be secondary school students and first‑year university students, for whom the paid version of the ELSA application was not affordable, even though that version is considered more diverse in content and exercises, which makes the free version seem too limited. This is also why the students in the current study did not consider ELSA Speak a main self‑practicing resource and showed concern about development in the long run. Nevertheless, ELSA Speak was regarded as an intensive self‑learning tool for pronunciation with friendly features and modern technology, and the ELSA application was rated as well developed. More specifically, first, the English‑speaking educational aspect of the application can be clearly seen and experienced. The AI technology and comprehensive feedback of the application were seen as its strongest points, validated by almost all the students in this study. Hence, the students were able to
see their mistakes and weaknesses pointed out directly and in detail, with ways to improve. Besides, the interesting content and the immediate benefits the application brought made the students want to use it more. They especially appreciated how ELSA Speak was able to trigger their competitiveness through the attendance streak and the online ranking system.
Additionally, ELSA Speak was evaluated as an easy‑to‑use app that students could get used to and self‑practice with without any help from a third party. The students pointed out that they could customize the content within the application, and the app also provided them with suitable recommendations for lessons and schedules. Furthermore, they could share their performance with ease through the application's connection to Facebook and other social media platforms.
However, in terms of English‑speaking skill in general, ELSA Speak can only provide pronunciation and foundation reinforcement. Furthermore, since the app focuses only on pronunciation, its use helps students merely to understand and remember while practicing, without offering opportunities to exercise higher‑order thinking skills such as analyzing, evaluating, or creating.
Based on the seven evaluation criteria of Nguyen (2019), the current study found ELSA to be a good English learning application. First, the content was seen to suit the users' wants and proficiency: when first using ELSA Speak, users take a proficiency test, and the application automatically arranges the lesson route to suit them. However, while the level of the lessons was reported to be appropriate, the free version could not meet the students' expectations for advancement.
Second, as evaluated by the participants in the current study, ELSA Speak is easy to use with help, guides, and tutorials: it provides a well‑detailed tutorial, the in‑app interaction is simple, and the application has an attractive, user‑friendly interface with vibrant themes and friendly multimedia. The themes of ELSA, which change with social events, are considered friendly and attractive, and the related multimedia were reported by this study's participants to be diverse and convenient, with links to Facebook, Google, and other platforms.
In terms of relevant, high‑quality content, the content of ELSA Speak was, according to the interviewees, up to date, with various topics and model conversations with responses for better progression and revision. Moreover, feedback and progress recognition were further strengths of the application. However, the free version was seen to provide limited resources, noticeably constraining development. Although ELSA Speak is highly interactive, with one or multiple learning modes for a more engaging study environment, the free version, which the majority of the students used, provided only some modes for practicing pronunciation and listening. Even so, the available modes, as found in the current study, were quite useful and engaging, since they provide users with adequate and customizable study and performance records.
3.6 Conclusion
In the attempt to examine EFL students’ use of ELSA Speak and their evaluation of the app’s impact, the current study found that the application was reported to be mainly appropriate for low‑level learners who wanted to reinforce their English pronunciation and that the majority of students chose the ELSA application for self‑practice. The application was seen to benefit the students not only in pronunciation but also in other aspects such as vocabulary, listening skills, and syntax knowledge. The average time for using ELSA Speak was reported to be about 15 minutes per session, and the frequency of use depended on factors including mood, the in‑app schedule, and the ranking system. The students in this study who preferred the free version of the ELSA application significantly outnumbered those who preferred the paid version.
Second, in terms of the evaluation of the app’s use, the application’s features were found to be friendly and easy to access, with simple yet eye‑catching and up‑to‑date interaction and appearance. Furthermore, the content and materials of ELSA Speak were highly appreciated since they are diverse and up to date, with various topics and relevant vocabulary. However, the skills the students can practice and develop are constrained to pronunciation alone, so their English‑speaking skills cannot be improved holistically. The effectiveness of the application is confirmed by the majority of the students in the current study, who reported that their pronunciation improved noticeably. This is mainly attributed to the AI‑powered feedback system, which can point out the steps for the students to follow in detail and specifically. Several updates were also recognized, namely a visual illustration of the pronunciation model and a testing system to track users’ performance. Moreover, the students’ motivation in learning and practicing English‑speaking skills was perceived to be enhanced during and after using ELSA Speak.
From the study’s findings, several implications are drawn for teachers and students in utilizing ELSA Speak. First, teachers should be aware of the prominent advantages of the application, namely foundation fortification and motivation amplification in English pronunciation learning. Teachers can therefore introduce and integrate ELSA Speak into lessons and practice exercises to help students who lack fundamental English‑speaking skills or to correct wrong pronunciation habits accumulated in the past. Not only can this application play a role in shaping and rectifying the students’ foundation in English for their future development, but it also provides a refreshing and interesting way of learning English, thus enhancing students’ motivation in learning and practicing English.
As for students, the ELSA application was found to be used mainly as a self‑learning tool due to its mobility. Hence, students can take full advantage of the application at home or anywhere to practice and perfect their pronunciation and broaden their vocabulary. Although the findings show that the free version of ELSA
Speak is limited in terms of content and level, it is useful for students to build up and strengthen their pronunciation at the foundational level.
To use ELSA Speak well, students are recommended to practice persistently because repetition is crucial to making progress. An interviewee in the current study reported spending an hour practicing a single exercise until she grasped the skill and could perform at her best, at which point she moved on to another exercise. By doing this, she was able to make the most of the limited content of the free version and fully develop her command of a particular topic. Since the application sets schedules for its users, students can follow them and practice daily to maintain productivity. Furthermore, it was found that students can not only learn pronunciation through the designed lessons but can also practice vocabulary topically and exclusively.
This study’s findings shed light on specific usage patterns of ELSA Speak in learning English pronunciation by EFL students. In particular, it documented the time frame, frequency, and features of the application that EFL students made use of to self‑study pronunciation. Notably, users’ moods, the in‑app schedule, and the ranking system were found to be factors affecting the use of the application. Furthermore, the study’s findings revealed the features that made the ELSA application appreciated by the EFL students: easy access, an attractive appearance, diverse topics, and relevant vocabulary. In particular, the AI‑powered feedback system, which details the steps for students to follow to make progress with their pronunciation, was rated highly by this study’s participants.
The current study’s results were rather limited in quantitative terms and were thus mainly presented descriptively. Future research could follow a group of participants with the same level of English over a period of time to observe the progress and outcomes of actual ELSA Speak use. Furthermore, experimental studies should be conducted to measure the impact of ELSA Speak on users’ learning of English pronunciation.
References
Aeni, N., Nur Fitri, S., Hasriani, H., & Asriati, A. (2021). Inserting ELSA application in
hybrid learning to enhance the students’ motivation in speaking. Celebes Journal of
Language Studies, 1(2), 271–277. https://doi.org/10.51629/cjls.v1i2.703
Anggraini, A. (2022). Improving students’ pronunciation skills using ELSA speak application.
Journal of English Language and Pedagogy, 5(1), 135–141. https://doi.org/10.33503/
journey.v5i1.1840
Bui, H. P., Bui, H. P. H., & Dinh, P. D. (2023). Vietnamese students’ use of smartphone apps in
English learning. LEARN Journal: Language Education and Acquisition Research Network,
1(6), 28–46. https://so04.tci-thaijo.org/index.php/LEARN/article/view/263430
Haryadi, S., & Aprianoto, A. (2020). Integrating “English pronunciation” app into pronunci‑
ation teaching: How it affects students’ participation and learning. Journal of Languages
and Language Teaching, 8(2), 202–212. https://doi.org/10.33394/jollt.v8i2.2551
Hung, B. P., & Nguyen, L. T. (2022). Scaffolding language learning in the online classroom. In
R. Sharma & D. Sharma (Eds.), New trends and applications in Internet of Things (IoT) and
big data analytics (pp. 109–122). Springer. https://doi.org/10.1007/978‑3‑030‑99329‑0_8
Huong, L. P. H. (2021). Textbook mediation in EFL university students’ learning. Language
Related Research, 12(3), 255–276. https://doi.org/10.29252/LRR.12.3.9
Huong, L. P. H., & Hung, B. P. (2021). Mediation of digital tools in English learning.
LEARN Journal, 14(2), 512–528. https://so04.tci‑thaijo.org/index.php/LEARN/index
Kholis, A. (2021). ELSA speak app: Automatic speech recognition (ASR) for supplement‑
ing English pronunciation skills. Pedagogy: Journal of English Language Teaching, 9(1),
1–14. https://doi.org/10.32332/joelt.v9i1.2723
Kukulska‑Hulme, A., & Shield, L. (2008). An overview of mobile assisted language learning:
From content delivery to supported collaboration and interaction. ReCALL, 20(3),
271–289. https://doi.org/10.1017/S0958344008000335
Lantz‑Andersson, A., Linderoth, J., & Säljö, R. (2009). What’s the problem? Meaning, mak‑
ing and learning to do mathematical word problems in the context of digital tools.
Instructional Science, 37(4), 325–343. https://doi.org/10.1007/s11251‑008‑9050‑0
Lee, C.‑Y., & Cherner, T. S. (2015). A comprehensive evaluation rubric for assessing instruc‑
tional apps. Journal of Information Technology Education: Research, 14, 21–53.
Lee, J. (2015). The mediating role of self‑regulation between digital literacy and learning out‑
comes in the digital textbook for secondary school. Educational Technology International,
16(1), 58–83.
Makhlouf, M. K. (2021). Effect of artificial intelligence‑based application on Saudi
preparatory‑year students’ EFL speaking skills at Albaha University. International Journal
of English Language Education, 9(2), 36. https://doi.org/10.5296/ijele.v9i2.18782
Mercer, N., Hennessy, S., & Warwick, P. (2019). Dialogue, thinking together and digi‑
tal technology in the classroom: Some educational implications of a continuing line
of inquiry. International Journal of Educational Research, 97, 187–199. https://doi.
org/10.1016/j.ijer.2017.08.007
Nguyen, N. H. G. (2019). Các tiêu chí chọn lựa một app học tiếng Anh tốt [Criteria to
choose a mobile application to learn English well]. ICT in Education. https://ictedupro.
com/2019/05/27/cac‑tieu‑chi‑lua‑chon‑mot‑mobile‑app‑hoc‑tieng‑anh‑tot/
Nguyen, T. M. T. (2020). Evaluating the effects of technology use on English speaking skills
of students. Industry and Trade Magazine. Available at https://tapchicongthuong.vn/
bai‑viet/danh‑gia‑hieu‑qua‑su‑dung‑cong‑nghe‑di‑dong‑den‑ky‑nang‑noi‑tieng‑anh‑
cua‑sinh‑vien‑76327.htm
Taqy, M. R. (2021). The use of ELSA speak application as the media to learn pronunciation auton‑
omously. Graduating paper for the degree of Sarjana Pendidikan. University of Salatiga.
Triwardani, H. R., & Azi, R. N. (2022). The effectiveness of Elsa Speak application to improve
pronunciation ability. Jurnal Fakultas Keguruan & Ilmu Pendidikan Kuningan, 3(1),
28–33.
Vincent, T. (2012). Educational app evaluation rubric. Available at https://learninginhand.
com/blog/ways‑to‑evaluate‑educational‑apps.html
Wicaksana, S. N. (2022). The use of ELSA Speak in learning English pronunciation skill.
Doctoral dissertation. Universitas Muhammadiyah Yogyakarta.
Chapter 4
Critical Appraisal of
Artificial Intelligence-
Mediated Communication
in Language Education
Dara Tafazoli
4.1 Introduction
Artificial intelligence (hereafter, AI) has been progressively utilized in education in
recent years, leading to a surge in research and applications of AI in education (hence‑
forth, AIED) (Luckin et al., 2016). Research on AIED is interdisciplinary, involv‑
ing AI, pedagogy, psychology, and other related disciplines (Luckin et al., 2016;
Steenbergen-Hu & Cooper, 2014). The objective of AIED research is to enhance the
fields of AI, cognitive science, and education by incorporating computer-supported
education (Conati et al., 2002). A number of AIED applications are being applied to
adaptive learning and evaluation in order to enhance educational effectiveness and
efficiency, modify teaching approaches in real time, and gain a deeper insight into
how students acquire knowledge (Beal et al., 2010; Chassignol et al., 2018; Shute &
Psotka, 1996; VanLehn et al., 2007), in various fields like language education.
In this chapter, I concentrate specifically on the integration of AI in language
education. AI-based tools applied in language education are part of computer-
assisted language learning (CALL) and intelligent CALL (ICALL) (Tafazoli
et al., 2019). Based on the scope of the book (i.e., artificial intelligence-mediated
communication or CMC), I chose to shift the attention away from the device
adapt to users’ needs. This involves observing user behavior by gathering, retain‑
ing, and scrutinizing data from their past task responses. Additionally, user model‑
ing seeks to predict future behavior by tracking personal memory curves. Expert
modeling, in addition to user modeling, is a crucial element of ITSs. Statistical and
predictive analysis are both involved in user and expert modeling using big data
(Fryer & Carpenter, 2006).
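To make the user‑modeling idea above concrete, the following is a minimal sketch, in Python, of how a system might track a personal memory curve and predict future behavior from past task responses. It assumes a simple exponential forgetting curve; the class name, parameter values, and scheduling rule are illustrative assumptions, not taken from any system cited here.

```python
import math
import time

class MemoryCurveModel:
    """Toy user model: estimates recall probability per item with an
    exponential forgetting curve, p = exp(-elapsed / stability)."""

    def __init__(self, initial_stability_hours=24.0, growth=1.6):
        self.stability = {}     # item -> stability of the memory trace (hours)
        self.last_review = {}   # item -> timestamp of the last practice
        self.initial = initial_stability_hours
        self.growth = growth    # how much each success stretches the curve

    def predicted_recall(self, item, now=None):
        """Probability the user still remembers `item` right now."""
        now = now or time.time()
        if item not in self.last_review:
            return 0.0  # never practiced, so assume it is unknown
        elapsed_h = (now - self.last_review[item]) / 3600.0
        return math.exp(-elapsed_h / self.stability[item])

    def record_response(self, item, correct, now=None):
        """Update the model from one task response (the 'gathering and
        scrutinizing past responses' step described above)."""
        now = now or time.time()
        s = self.stability.get(item, self.initial)
        # Success stretches the forgetting curve; failure resets it.
        self.stability[item] = s * self.growth if correct else self.initial
        self.last_review[item] = now

    def next_item(self, items, now=None):
        """Recommend the item the user is most likely to have forgotten."""
        return min(items, key=lambda i: self.predicted_recall(i, now))
```

A production ITS would fit such parameters to logged response data rather than fixing them by hand, but the loop is the same: observe a response, update the model, and schedule the next task.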
ICALL tools have various applications in language education, including but
not limited to machine translation (MT), AI-powered virtual language exchange
platforms and language learning communities, text-to-speech technology, intel‑
ligent content recommendations, chatbots, automated grading systems, adaptive
learning games, pronunciation analysis tools, automatic speech recognition (ASR),
sentiment analysis tools, speech synthesis technology, ITS, extended reality (XR),
intelligent writing assistants, AI-powered interactive textbooks, language learning
analytics, cognitive learning, gamified learning platforms, predictive analytics, and
AI-powered language learning apps. In the following, I reflect pedagogically on some of the abovementioned ICALL tools based on the literature and provide some recommendations for language teachers.
AI-powered ASR technology can improve students’ oral proficiency and fluency
in a foreign language (e.g., Chen, 2022; Dai & Wu, 2023; Evers & Chen, 2021,
2022; Mroz, 2018; Song, 2020; van Doremalen et al., 2016). Many ASR programs are available for free (e.g., Google Speech Recognition, Windows Speech Recognition, the Siri assistant, iFlyRec, and AT&T Watson) (Evers & Chen, 2022).
These programs have access to a large speech database, which improves their ability
to decode speech. Additionally, the free availability of these programs makes them
easy to use for classroom or self-study purposes, particularly for constant corrective
feedback and self-monitoring. Neuroscience studies have shown that when foreign
language learners speak, they tend to monitor their speech production from the
perspective of their first language (L1) rather than the target language due to their
L1 filtering the sounds in their external monitoring system (Meekings & Scott,
2021). This can make it difficult for them to self-monitor their speech in the target
language, and while language teachers can provide feedback, it may not be timely
or sufficient for individual learners. To address this issue, ICALL tools such as ASR
are becoming more widely used in foreign language learning (Bashori et al., 2021;
Evers & Chen, 2021; Mroz, 2018).
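As a concrete illustration of how such programs can support self-monitoring, here is a minimal sketch using the open-source Python SpeechRecognition package, which wraps the free Google Speech Recognition service mentioned above. The word-by-word comparison against a target sentence is an illustrative assumption about how corrective feedback might be surfaced, not a feature of the tools in the studies cited.

```python
# pip install SpeechRecognition
import speech_recognition as sr

def transcribe(wav_path: str) -> str:
    """Send a learner's recording to the free Google Speech Recognition API."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    try:
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        return ""  # the speech was unintelligible to the recognizer

def pronunciation_feedback(wav_path: str, target_sentence: str) -> list[str]:
    """Flag target words the recognizer did not hear -- a rough proxy for
    words the learner may be mispronouncing. A real tool would align
    phonemes rather than compare bags of words."""
    heard = set(transcribe(wav_path).lower().split())
    return [w for w in target_sentence.lower().split() if w not in heard]

# Example: compare a recording against the sentence the learner read aloud.
# missed = pronunciation_feedback("attempt.wav", "the weather is lovely today")
# print("Words to practice:", missed)
```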
Despite their advantages, ASR dictation programs have limitations when it
comes to pronunciation instruction. While these resources offer extensive exercises
and prompt responses, they do not encompass features related to phonetic descrip‑
tions, like clarifying the utilization of the vocal apparatus for specific sounds or the
variations between target sounds and the user’s native language (Liakin et al., 2014).
Learners require more assistance to understand pronunciation, which could be why
earlier research observed advancements in speaking abilities but not in listening capa‑
bilities (Hung et al., 2022; Liakin et al., 2014). Moreover, despite advancements in
ASR technology, recognition accuracy is still lower than that of human evaluation,
especially when noisy environments are present (Evers & Chen, 2022). The Google
Speech Recognition system has an accuracy rate of 93% for free non-native speech
(McCrocklin, 2019), while other systems like Windows Speech Recognition and Siri
are less accurate, with rates of 74% and 69%, respectively (Daniels & Iwago, 2017;
McCrocklin, 2019). Transcription inaccuracies can cause frustration and demotiva‑
tion among students, as reported by participants in various studies (Liakin et al.,
2017; McCrocklin, 2019; Mroz, 2018). Efforts may be undertaken in the near future
to address this challenge by helping learners exchange viewpoints about their pro‑
nunciation, which may be more accurate than software feedback (Evers & Chen,
2022). Also, it should be noted that most ASR programs were not designed to cater
to language learning needs and do not offer any support to modify pronunciation or
rectify mistakes. Therefore, some scholars suggest that combining ASR software with
scaffolding activities could enhance its effectiveness in language teaching (Evers &
Chen, 2022; Mroz, 2018).
Machine translation (henceforth, MT), like Google Translate, can be used to
translate text and speech between different languages. MT has gained popularity
in the realm of foreign language education as well as in daily use over the past few
years because of its convenience, multilingual capabilities, affordability, and imme‑
diacy (Alhaisoni & Alhaysony, 2017; Briggs, 2018). A novel translation system,
which is released by Google as a neural MT, uses statistical methods to identify the
most probable match in the target language from a vast amount of data when trans‑
lating source texts. Consequently, it has achieved notable improvements in accuracy
and comprehensibility compared to its predecessor, a phrase-based statistical MT
(SMT) system (Briggs, 2018).
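For readers curious about what a neural MT pipeline looks like in code, the sketch below uses one of the freely available MarianMT checkpoints from the Hugging Face transformers library as a stand-in for Google's proprietary system; the model name and the wrapper function are illustrative assumptions.

```python
# pip install transformers sentencepiece torch
from transformers import MarianMTModel, MarianTokenizer

# One of the freely published Helsinki-NLP neural MT checkpoints.
MODEL_NAME = "Helsinki-NLP/opus-mt-en-fr"

tokenizer = MarianTokenizer.from_pretrained(MODEL_NAME)
model = MarianMTModel.from_pretrained(MODEL_NAME)

def translate(sentences: list[str]) -> list[str]:
    """Translate English sentences into French with a neural MT model."""
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    # The decoder searches for the most probable target-language sequence,
    # which is the 'most probable match' behavior described above.
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

print(translate(["Machine translation has become a daily writing aid."]))
```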
Recent studies have emphasized the benefits of utilizing MT in the field of
foreign language education, especially in language writing (Garcia & Pena, 2011;
O’Neill, 2016). MT enables students to write more fluently, communicate more
effectively, and concentrate more on content in a second/foreign language with fewer
errors (Garcia & Pena, 2011; Shadiev et al., 2019). Furthermore, MT helps learn‑
ers to minimize errors in vocabulary, grammar, syntax, and spelling (Fredholm,
2019; Lee, 2020; Tsai, 2019), thereby producing higher-quality writing (O’Neill,
2016). Although MT’s assistance is limited to the sentence level in L2 writing, linguistic errors can severely impact the overall quality of L2 writing, so MT helps with L2 writing by reducing lexicogrammatical errors (Lee, 2020; Tsai, 2019). Additionally, beyond the linguistic domain, several studies
have reported a range of advantages of using MT in the affective and metacognitive
domains of foreign language learning (Shadiev et al., 2019).
Despite the potential advantages of utilizing MT, language educators frequently
view it as an insufficient or potentially detrimental resource when used in teaching
foreign languages for various reasons, including ethical concerns or students’ exces‑
sive reliance on MT (Lee, 2020; Nguyen et al., 2023). Nonetheless, a considerable
number of students already utilize MT for various educational purposes and regard
it as a valuable tool for language learning (Alhaisoni & Alhaysony, 2017; Briggs,
2018). While MT can improve writing outcomes by reducing errors, it may not lead
to language learning without proper pedagogical design (Lee, 2023). Pedagogical
designs should not only focus on effectively using MT in tasks but also on cultivat‑
ing language learning over a longer period. Teachers’ concerns are understandable,
but they should also be aware of this gap and consider MT a language learning
aid. In order to make informed decisions with regard to pedagogy, it is essential for
teachers to have a thorough understanding of the benefits and limitations of cur‑
rent MT technologies, as well as their potential as a tool for language education.
Teachers should also consider the impact of highly accurate MT on language learn‑
ing, including demotivation and potential academic dishonesty (Murtisari et al.,
2019). They need to provide students with clear guidelines on ethical use and pre‑
pare for the future of language learning classrooms (Nguyen et al., 2023).
ITSs employ AI and machine learning technologies to engage with learners and
carry out educational tasks. These systems collect information regarding student
reactions; create a model of each student’s understanding, awareness, motivation, or
sentiment; and deliver customized guidance. ITSs feature interfaces for students to
interact with throughout the learning activity, allowing for more detailed student
modeling, step-level hints, and feedback (Mousavinasab et al., 2021). ITSs utilize
AI methodologies to support education by following the principles of cognitive psy‑
chology and student learning models (Anderson et al., 1995; Shute & Psotka, 1996;
Xu et al., 2019). Researchers in education have devoted their efforts to developing
teaching methods that enhance teaching outcomes (Graesser et al., 2005; Kolodner,
2002; Luckin et al., 2016). For instance, Graesser et al. (2005) investigated how
pedagogical strategies that embraced constructivist approaches could be integrated
into ITS instruction and probed the relationship between emotions and the learning process, revealing that learning outcomes were inversely related to boredom and directly related to a state of flow. These systems are interactive,
capturing and analyzing learner performance, selecting corresponding tasks, and
presenting appropriate information to the learner (Shute & Zapata-Rivera, 2008).
This information provides tailored feedback and creates adaptive instructional
input during tutoring sessions (Anderson et al., 1995; Atkinson, 1968; Shute &
Psotka, 1996; Xu et al., 2019).
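One widely used way to “create a model of each student’s understanding” from response data is Bayesian knowledge tracing, in the spirit of the Bayesian student modeling of Conati et al. (2002). The sketch below shows the standard update step with illustrative parameter values; it is not the internal model of any particular ITS discussed here.

```python
def bkt_update(p_know: float, correct: bool,
               p_slip: float = 0.1, p_guess: float = 0.2,
               p_learn: float = 0.15) -> float:
    """One Bayesian-knowledge-tracing step: revise the probability that
    the student has mastered a skill after observing one response."""
    if correct:
        evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        posterior = p_know * (1 - p_slip) / evidence
    else:
        evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
        posterior = p_know * p_slip / evidence
    # The student may also learn something from the attempt itself.
    return posterior + (1 - posterior) * p_learn

# Example: trace mastery of one grammar skill across a response sequence.
p = 0.3  # prior probability of mastery
for outcome in [True, False, True, True]:
    p = bkt_update(p, outcome)
    print(f"P(mastered) = {p:.2f}")
```

The tutor can then branch on this estimate, for example offering a step-level hint whenever the mastery probability falls below a threshold.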
ITSs designed for language education generally consist of various components,
such as modeling, forecasting, feedback provision, adaptable lessons and activities,
and scaffolding (Hung & Nguyen, 2022; McNamara et al., 2007). The system
ensures real-time monitoring of individual students’ progress and provides neces‑
sary assistance as needed (Graesser et al., 2011a). Other computer programs that
adapt to learners do not utilize complex learning principles or track cognitive and
emotional states as ITSs do (Graesser et al., 2011b). Scholars have also applied various
types of ITSs in their studies (Khella & Abu-Naser, 2018; Mayo et al., 2000;
Michaud et al., 2000). A tool was developed by Michaud et al. (2000) to enhance
the literacy skills of deaf high school and college students who communicate in
American Sign Language (ASL) as their primary language. The system evaluates
the student’s written text for mistakes in grammar and provides a tutorial dialogue
to suggest the necessary corrections. The system adapts to the user’s knowledge level
and learning style and uses both English and the user’s native language for tutorial
instruction. The study showed how ITS successfully created a flexible, multi-modal,
and multilingual system that improved the literacy skills of deaf students who use
ASL. Mayo et al. (2000) introduced a newly developed ITS that instructs students
on the mechanics of English capitalization and punctuation. The system requires students to interactively apply capitalization and punctuation to brief passages of text that are initially presented in lowercase. It defines the domain with a set of constraints describing the appropriate punctuation and capitalization formats, and it responds when students violate these constraints. During classroom testing of the ITS with students aged 10–11, the results indicate that the students successfully learned the 25 rules included in the system. In another
paper, Khella and Abu-Naser (2018) outlined the design of a digital ITS aimed
at helping students overcome difficulties in learning French. The system aims to
provide a compelling introduction to learning French by presenting the purpose of
learning the language and generating related problems for students to solve. It also
adjusts to each student’s personal progress in real time. The system offers
explicit assistance and can be flexibly adjusted to the needs of each learner. Based
on the mentioned features, ITS, as an exemplar of ICALL, has been found to be
almost as effective as human tutors (VanLehn, 2011). These intelligent tools can augment the work of teachers and students (Spector, 2014).
AI-powered chatbots are designed to interact with users and process natural
language inputs. Over half a century ago, ELIZA became the pioneering chatbot
(Weizenbaum, 1966). Currently, chatbots have gained immense popularity as a
highly effective medium for providing information and addressing frequently asked
questions (Smutny & Schreiberova, 2020). Chatbots have been employed in edu‑
cational environments for various purposes in recent times, including sustaining
learners’ motivation in scientific studies, supporting first-year students with their
college experiences, and aiding educators in managing substantial classroom activi‑
ties (Carayannopoulos, 2018; Schmulian & Coetzee, 2019).
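Classic chatbots in the ELIZA tradition rely on keyword matching and sentence transformation rather than genuine language understanding. The toy sketch below illustrates that mechanism; the rules and responses are invented for illustration and are not ELIZA’s actual script.

```python
import random
import re

# A few toy rules in the spirit of ELIZA's keyword-and-transformation script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI want to (.+)", re.I),
     ["What would it mean to you to {0}?"]),
    (re.compile(r"\b(hello|hi)\b", re.I),
     ["Hello. What would you like to talk about?"]),
]
FALLBACKS = ["Please tell me more.", "I see. Go on."]

def reply(user_input: str) -> str:
    """Return the first rule whose pattern fires, filling in captured text."""
    for pattern, responses in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I feel nervous about speaking English"))
# -> e.g. "Why do you feel nervous about speaking English?"
```

This simplicity also anticipates the limitations discussed below: any input that matches no rule falls through to a generic fallback, which is one source of the irrelevant responses learners report.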
The potential of chatbots in language teaching has attracted the attention of
researchers (Chiu et al., 2023; Fryer et al., 2019; Jia et al., 2012; Xu et al., 2021).
Chatbot-supported language learning involves using chatbots to interact with stu‑
dents in the target language for daily practice (Fryer et al., 2019), answering ques‑
tions (Xu et al., 2021), and conducting assessments (Jia et al., 2012). Chatbots can be
a valuable tool in language practice for students. They can help reduce shyness and
make the learning experience more comfortable for all involved (Fryer & Carpenter,
2006). Additionally, chatbots can help bridge the gap between learners and instruc‑
tors in online learning environments, which can reduce the transactional distance
and improve the overall experience (Huang et al., 2022). Visual chatbot develop‑
ment platforms, such as Dialogflow and BotStar, allow teachers to create customized
chatbots without prior programming experience. These platforms provide a design
dashboard that enables teachers to script students’ learning experiences and meet
their learning objectives (Huang et al., 2022). To learn a new language effectively, it is
essential to practice speaking and immerse oneself in language contexts, but many
students lack motivation and confidence. Researchers have suggested that chatbot-
supported activities can create a more engaging and authentic language environment
and improve language learning outcomes (Lu et al., 2006). Language educational
chatbots generally possess three main characteristics. Firstly, they are available 24/7
to support students (Garcia Brustenga et al., 2018), allowing them to practice lan‑
guage skills at any time that suits them (Haristiani, 2019). Secondly, chatbots can
provide students with a broader range of language information than their peers, who
may be at a similar proficiency level, including additional expressions, vocabulary,
and questions (Fryer et al., 2019). Thirdly, chatbots can function as tireless assistants
and relieve teachers of repetitive tasks such as answering common questions and
providing continuous language practice (Fryer et al., 2019; Kim, 2018). Chatbots are
always available to help students practice speaking and learn a new language.
Although chatbots have proven to be advantageous in language education by
decreasing students’ anxiety (Ayedoun et al., 2019) and enhancing their participa‑
tion in language learning (Ruan et al., 2019), the temporary nature of learners’
engagement and performance improvement may be due to the novelty effect associ‑
ated with chatbots (Ayedoun et al., 2019; Fryer et al., 2019). The novelty effect refers
to the initial excitement of a new technology that wears off as students become more
accustomed to it. Additionally, concerns have been raised about chatbots’ limited
capabilities. While AI has advanced significantly, designing intelligent dialogue in
chatbots remains a challenge for developers (Brandtzaeg & Følstad, 2018). Even
small mistakes in student input can lead to irrelevant responses from the chatbot,
which may not be able to understand multiple sentences at once as humans can
(Kim et al., 2019). This can restrict students’ interaction to a pre-set knowledge base
(Grudin & Jacques, 2019) and may result in chatbots providing unrelated answers
(Haristiani, 2019; Lu et al., 2006).
Another challenge deals with cognitive load limitations (Huang et al., 2022).
Cognitive load refers to the additional attention or mental effort that is necessary to complete a task during the learning process (Sweller, 1988). The
amount of cognitive burden that students must carry depends on the instructional
design of activities supported by chatbots. If the cognitive load is too high, it can
interfere with learning outcomes, particularly for low-proficient students (Kim,
2016). Therefore, the use of chatbots must be carefully designed to avoid imposing
an excessive c ognitive load (Fryer et al., 2019). Teachers should take a leadership role
in determining the best way to use chatbots to achieve learning outcomes and miti‑
gate their limitations (Huang et al., 2022). It is imperative to complete a task that
involves learning, as it is a crucial part of the learning process. For example, chatbots
may be more appropriate for advanced learners, and restricted chatbots can be used
to correct spelling errors or check factual knowledge for beginners. Teachers have the
ability to establish guidelines for interactions with chatbots, which can assist learn‑
ers in comprehending the capabilities and limitations of these conversational agents.
To address the novelty effect, students can be prepared through a workshop before
the first lesson, and multimedia principles and human-like gestures can be employed
to enhance students’ cognitive processing. Quick buttons can also be used to enhance
interactivity and engagement between chatbot and students. These measures can
help make the chatbot experience more enjoyable and effective for language learners.
Taking the present level of technological progress into account is equally important when implementing chatbots in language learning.
Extended reality (XR), including virtual reality (VR), augmented reality (AR),
and mixed or merged reality (MR), can be used to create immersive language learn‑
ing experiences, allowing students to practice real-world language skills in a simu‑
lated environment. Over the last ten years, XR has gained significant popularity. As
XR aims to provide realistic simulations, authenticity, a strong sense of presence, and
exposure, it has been identified as an essential tool for language learning by research‑
ers in language education (Meccawy, 2023). Several studies have been conducted
worldwide to explore the potential benefits of XR in language education (see, Bonner
& Reinders, 2018; Godwin-Jones, 2016; Lan, 2020; Peterson & Jabbari, 2022).
CALL researchers have proposed that XR provides language learners with a distinct and innovative learning environment owing to the exclusive, CMC-based learning environments it creates (Peixoto et al., 2021). The advantages of XR include enhancing learners’ interest, motivation, engagement, and spatial memory and knowledge (Lege & Bonner, 2020; Xie et al., 2021); providing access to otherwise inaccessible environments, distance learning, and empathy training (Bonner & Reinders, 2018; Lege & Bonner, 2020); reducing distractions (Bonner & Reinders, 2018); linking classroom concepts to the real world (Reinders & Wattana, 2014); facilitating interactions (Bonner & Reinders, 2018); providing a culturally rich and dynamic context (Godwin-Jones, 2016; Yeh & Kessler, 2015); and encouraging learners to participate in the construction of their learning environment (Bonner & Reinders, 2018). These are just some
of the benefits that XR offers in language education, as suggested by scholars.
Although VR has been shown to have positive effects on language learning, lan‑
guage educators have mixed opinions on its use. Some of the main barriers to the inte‑
gration of XR in language education are the high cost of VR tools and the need for
advanced digital literacy skills (Parmaxi et al., 2017). Lack of VR-specific pedagogy,
cognitive demands, and potential immersion-breaking also pose challenges (Lege &
Bonner, 2020). To effectively integrate VR into language education, teachers need to
introduce it to the classroom before implementing it, as explained by Southgate et al.
(2018). Gender should also be considered as an influential variable in VR integra‑
tion (Southgate et al., 2019). In addition, empirical studies on the merits of XR in
teacher education and the design and development of such tools are scarce (Meccawy,
2023). Therefore, it is not yet possible to judge the affordances of XR from teachers’ and material developers’ perspectives. Furthermore, there is a clear lack of XR-specific pedagogy that spells out ‘why’ and ‘how’ language education stakeholders should constructively and compellingly integrate the technology. In other words, implementing XR without sufficient and efficient teacher training is of little use.
2023; Kessler, 2007, 2010; Levy, 1997; Lord & Lomicka, 2011). The general goal of
CALL teacher education is to provide language teachers, both present and future,
with the necessary technical and pedagogical knowledge and skills to effectively
incorporate technology into their classes (Hubbard, 2008; Tafazoli, 2021).
Numerous research studies have revealed a positive attitude among language instructors regarding the integration of CALL and other contemporary technologies in their classrooms. However, despite this positive outlook, many instructors are hesitant to utilize these technologies extensively. External factors such as a lack of equipment, technical support, an inflexible curriculum, and time constraints can contribute to this reluctance.
Additionally, there are internal factors such as a lack of CALL literacy, limited expe‑
rience with technology as a learner, lack of motivation, difficulty integrating tech‑
nology with existing teaching practices and learning styles, fear of being outside of
their comfort zone, and the fear of losing control over the classroom and students’
respect that can also influence this reluctance (see, Tafazoli & Farshadnia, 2022).
Park and Son (2009) discovered in their study that while teachers acknowledged
that CALL makes language learning more engaging, they did not believe that they
needed to be experts in using computers. Abdelhalim (2016) found that even when
teachers integrated technology into their teaching, they mainly used basic appli‑
cations such as email or web browsing. Therefore, CALL teacher trainers should
consider these factors when developing their training programs. Although it may be
premature to identify specific ICALL teacher skills or propose ICALL teacher train‑
ing models, such models will likely emerge in the near future. It will be essential to
approach this task realistically and practically. Language teachers do not need to
have programming skills or expertise in AI to use chatbots or incorporate ICALL
practice into their classes.
Several scholars have developed detailed inventories and intricate diagrams of essen‑
tial abilities for teachers (see, Mishra & Koehler, 2006), which can impose impractical
demands on teachers of foreign languages. These materials overlook the fact that such
teachers are primarily language educators and professionals. To effectively overcome
the aforementioned barriers to successful CALL implementation, adequate and ongo‑
ing professional training may be the best solution. Teachers must believe that technol‑
ogy can assist them in achieving educational objectives more efficiently and effectively
without disrupting other aspects of classroom management. They must also possess
sufficient CALL skills and have unrestricted access to technology.
4.4 Conclusion
Incorporating AI into language education has given rise to the concept of ICALL,
which offers a new level of quality in language teaching and learning. AI-based
tools can provide a sophisticated educational environment that is more personalized
and flexible for learners and teachers. These tools can assist learners in acquiring
the knowledge and skills that modern society demands. Some individuals hold a pessimistic perspective toward the incorporation of AI, fearing that it may gain complete dominance and become an oppressive mentor that dictates the content, timing, and manner in which students acquire knowledge, using information gathered without their approval. In contrast, others have a positive view,
envisioning learners who control their personal AI tools, which aid them (and their
teachers) in better understanding their progress and organizing learning activities
(Fryer & Carpenter, 2006).
The importance of CALL teacher education and professional development should be considered in this situation. Language teachers are required to pick up new skills to integrate ICALL tools into their teaching effectively and to avoid unnecessary workloads and repetitive tasks. The use of tools such as writing assistants and
correction systems can support learners. However, it remains to be seen how well-
informed language teachers are about ICALL advancements and how frequently
they incorporate AI tools into their teaching. Research needs to answer various
questions, such as what the preferred AI tools among language teachers are, how
they perceive ICALL, and what motivates them to use it. Additionally, identify‑
ing the key skills required for AI-enhanced teaching environments and developing
appropriate teacher training programs are crucial.
References
Abdelhalim, S. (2016). An interpretive inquiry into the integration of the information and
communication technology tools in TEFL at Egyptian universities. Journal of Research
in Curriculum, Instruction and Educational Technology, 2(4), 145–173.
Alhaisoni, E., & Alhaysony, M. (2017). An investigation of Saudi EFL university students’
attitudes towards the use of Google Translate. International Journal of English Language
Education, 5(1), 72–82. https://doi.org/10.5296/ijele.v5i1.10696
Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive
tutors: Lessons learned. Journal of the Learning Sciences, 4(2), 167–207. https://doi.
org/10.1207/s15327809jls0402_2
Atkinson, R. C. (1968). Computerized instruction and the learning process. American
Psychologist, 23(4), 225–239. https://psycnet.apa.org/doi/10.1037/h0020791
Ayedoun, E., Hayashi, Y., & Seta, K. (2019). Adding communicative and affective strategies
to an embodied conversational agent to enhance second language learners’ willing‑
ness to communicate. International Journal of Artificial Intelligence in Education, 29(1),
29–57. https://doi.org/10.1007/s40593-018-0171-6
Bancheri, S. (2006). A language teacher’s perspective on effective courseware. In P. D. Randall
& A. H. Margaret (Eds.), Changing language education through CALL (pp. 31–47).
Routledge.
Bashori, M., van Hout, R., Strik, H., & Cucchiarini, C. (2021). Effects of ASR-based web‑
sites on EFL learners’ vocabulary, speaking anxiety, and language enjoyment. System,
99, 102496. https://doi.org/10.1016/j.system.2021.102496
Beal, C. R., Arroyo, I. M., Cohen, P. R., & Woolf, B. P. (2010). Evaluation of animal watch:
An intelligent tutoring system for arithmetic and fractions. Journal of Interactive Online
Learning, 9(1), 64–67. https://www.ncolr.org/jiol/issues/pdf/9.1.4.pdf
Bonner, E., & Reinders, H. (2018). Augmented and virtual reality in the language classroom:
Practical ideas. Teaching English with Technology, 18(3), 33–53. https://files.eric.ed.gov/
fulltext/EJ1186392.pdf
Brandtzaeg, P. B., & Følstad, A. (2018). Chatbots: Changing user needs and motivations.
Interactions, 25(5), 38–43. https://doi.org/10.1145/3236669
Briggs, N. (2018). Neural machine translation tools in the language learning classroom:
Students’ use, perceptions, and analyses. JALT CALL Journal, 14(1), 2–24. https://doi.
org/10.29140/jaltcall.v14n1.221
Bui, H. P., Bui, H. P. H., & Dinh, P. D. (2023). Vietnamese students’ use of smartphone apps in
English learning. LEARN Journal: Language Education and Acquisition Research Network,
1(6), 28–46. https://so04.tci-thaijo.org/index.php/LEARN/article/view/263430
Campbell-Howes, K. (2019). Why is AI a good thing for language teachers and learners?
OxfordTEFL: Teacher Training. https://oxfordtefl.com/blog/why-is-ai-a-good-thing-
for-language-teachers-and-learners/.
Carayannopoulos, S. (2018). Using chatbots to aid transition. The International Journal
of Information and Learning Technology, 35(2), 118–129. https://doi.org/10.1108/
IJILT-10-2017-0097
Carr, C. T. (2020). CMC is dead, long live CMC!: Situating computer-mediated communica‑
tion scholarship beyond the digital age. Journal of Computer-Mediated Communication,
25(1), 9–22. https://doi.org/10.1093/jcmc/zmz018
Chassignol, M., Khoroshavin, A., Klimova, A., & Bilyatdinova, A. (2018). Artificial
Intelligence trends in education: A narrative overview. Procedia Computer Science, 136,
16–24. https://doi.org/10.1016/j.procs.2018.08.233
Chen, K. T. C. (2022). Speech-to-text recognition in university English as a foreign lan‑
guage learning. Education and Information Technologies, 27, 9857–9875. https://doi.
org/10.1007/s10639-022-11016-5
Chiu, T. K. F., Moorhouse, B. L., Chai, C. S., & Ismailov, M. (2023). Teacher support and
student motivation to learn with Artificial Intelligence (AI) based chatbot. Interactive
Learning Environments. https://doi.org/10.1080/10494820.2023.2172044
Conati, C., Gertner, A., & Vanlehn, K. (2002). Using Bayesian networks to manage uncer‑
tainty in student modeling. User Modeling and User-Adapted Interaction, 12, 371–417.
https://doi.org/10.1023/A:1021258506583
Dai, Y. J., & Wu, Z. W. (2023). Mobile-assisted pronunciation learning with feedback from
peers and/or automatic speech recognition: A mixed-methods study. Computer Assisted
Language Learning, 36(5–6), 861–884. https://doi.org/10.1080/09588221.2021.195
2272
Daniels, P., & Iwago, K. (2017). The suitability of cloud-based speech recognition engines
for language learning. JALT CALL Journal, 13(3), 229–239. https://doi.org/10.29140/
jaltcall.v13n3.220
Evers, K., & Chen, S. (2021). Effects of automatic speech recognition software on pronuncia‑
tion for adults with different learning styles. Journal of Educational Computing Research,
59, 669–685. https://doi.org/10.1177/0735633120972011
Evers, K., & Chen, S. (2022). Effects of an automatic speech recognition system with peer
feedback on pronunciation instruction for adults. Computer Assisted Language Learning,
35(8), 1869–1889. https://doi.org/10.1080/09588221.2020.1839504
Fredholm, K. (2019). Effects of Google Translate on lexical diversity: Vocabulary develop‑
ment among learners of Spanish as a foreign language. Revista Nebrija de Lingüística
Aplicada a la Enseñanza de Las Lenguas, 13(26), 98–117. https://doi.org/10.26378/
rnlael1326300
Fryer, L., & Carpenter, R. (2006). Bots as language learning tools. Language Learning &
Technology, 10(3), 8–14.
Fryer, L. K., Nakao, K., & Thompson, A. (2019). Chatbot learning partners: Connecting
learning experiences, interest and competence. Computers in Human Behavior, 93,
279–289. https://doi.org/10.1016/j.chb.2018.12.023
Garcia, I., & Pena, M. (2011). Machine translation-assisted language learning: Writing for
beginners. Computer Assisted Language Learning, 24(5), 471–487. https://doi.org/10.1
080/09588221.2011.582687
Garcia Brustenga, G., Fuertes-Alpiste, M., & Molas-Castells, N. (2018). Briefing paper:
Chatbots in education. Universitat Oberta de Catalunya.
Godwin-Jones, R. (2016). Augmented reality and language learning: From annotated vocab‑
ulary to place-based mobile games. Language Learning & Technology, 20(3), 9–19.
https://llt.msu.edu/issues/october2016/emerging.pdf
Graesser, A. C., Chipman, P., Haynes, B. C., & Olney, A. (2005). Autotutor: An intelligent
tutoring system with mixed-initiative dialogue. IEEE Transactions on Education, 48(4),
612–618. https://doi.org/10.1109/TE.2005.856149
Graesser, A. C., Conley, M., & Olney, A. (2011a). Intelligent tutoring systems. In K. R. Harris,
S. Graham, & T. Urdan (Eds.), APA educational psychology handbook: Vol. 3. Applications
to learning and teaching (pp. 451–473). American Psychological Association.
Graesser, A. C., Mcnamara, D. S., & Kulikowich, J. M. (2011b). Coh-metrix: Providing
multilevel analyses of text characteristics. Educational Researcher, 40(5), 223–234.
https://doi.org/10.3102/0013189X11413260
Grudin, J., & Jacques, R. (2019). Chatbots, humbots, and the quest for artificial general
intelligence. Proceedings of the 2019 CHI Conference on Human Factors in Computing
Systems (pp. 1–11). ACM. https://doi.org/10.1145/3290605.3300439
Haristiani, N. (2019). Artificial intelligence (AI) chatbot as language learning medium:
An inquiry. Journal of Physics: Conference Series, 1387, 012020. https://doi.org/
10.1088/1742-6596/1387/1/012020
Huang, W., Hew, K. F., & Fryer, L. K. (2022). Chatbots for language learning-Are they
really useful? A systematic review of chatbot-supported language learning. Journal of
Computer Assisted Learning, 38(1), 237–257. https://doi.org/10.1111/jcal.12610
Huang, X., Zou, D., Cheng, G., Chen, X., & Xie, H. (2023). Trends, research issues and
applications of artificial intelligence in language education. Educational Technology &
Society, 26(1), 112–131. https://doi.org/10.30191/ETS.202301_26(1).0009
Hubbard, P. (2008). CALL and the future of language teacher education. CALICO Journal,
25(2), 175–188. https://doi.org/10.1558/cj.v25i2.175-188
Hubbard, P. (2018). Technology and professional development. In J. Liontas (Ed.), The
TESOL encyclopedia of English language teaching (pp. 1–6). Hoboken, NJ: Wiley
Blackwell. https://doi.org/10.1002/9781118784235.eelt0426
Hubbard, P. (2023). Contextualizing and adapting teacher education and professional devel‑
opment. In D. Tafazoli & M. Picard (Eds.), Handbook of CALL teacher education and
professional development: Voices from under-represented contexts. Springer. https://doi.
org/10.1007/978-981-99-0514-0_1
Hubbard, P., & Levy, M. (Eds.) (2006). Teacher education and CALL. John Benjamins.
Hung, B. P., & Nguyen, L. T. (2022). Scaffolding language learning in the online classroom. In
R. Sharma & D. Sharma (Eds.), New trends and applications in Internet of
Things (IoT) and big data analytics (pp. 109–122). Springer. https://doi.
org/10.1007/978-3-030-99329-0_8
Hung, B. P., Pham, D. T. A., & Purohit, P. (2022). Computer mediated communication in
second language education. In R. Sharma & D. Sharma (Eds.), New trends and applica‑
tions in Internet of Things (IoT) and big data analytics (pp. 45–60). Springer. https://doi.
org/10.1007/978-3-030-99329-0_4
Jia, J., Chen, Y., Ding, Z., & Ruan, M. (2012). Effects of a vocabulary acquisition and assessment
system on students’ performance in a blended learning class for English subject. Computers
& Education, 58(1), 63–76. https://doi.org/10.1016/j.compedu.2011.08.002
Kannan, J., & Munday, P. (2018). New trends in second language learning and teaching through
the lens of ICT, networked learning, and artificial intelligence. Círculo de Lingüística
Aplicada a la Comunicación, 76, 13–30. http://dx.doi.org/10.5209/CLAC.62495.
Kessler, G. (2007). Formal and informal CALL preparation and teacher attitude toward
technology. Computer Assisted Language Learning, 20(2), 173–188. https://doi.
org/10.1080/09588220701331394
Kessler, G. (2010). When they talk about CALL: Discourse in a required CALL class.
CALICO Journal, 27(2), 376–392. https://doi.org/10.11139/cj.27.2.376-392
Khella, R. A., & Abu-Naser, S. S. (2018). An intelligent tutoring system for teaching French.
International Journal of Academic Multidisciplinary Research, 2(2), 9–13. https://ijeais.
org/wp-content/uploads/2018/02/IJAMR180202.pdf
Kim, N.-Y. (2016). Effects of voice chat on EFL learners’ speaking ability according to profi‑
ciency levels. Multimedia-Assisted Language Learning, 19(4), 63–88.
Kim, N.-Y. (2018). A study on chatbots for developing Korean college students’ English
listening and reading skills. Journal of Digital Convergence, 16(8), 19–26. https://doi.
org/10.14400/JDC.2018.16.8.019
Kim, N.-Y., Cha, Y., & Kim, H.-S. (2019). Future English learning: Chatbots and artificial
intelligence. Multimedia-Assisted Language Learning, 22(3), 32–53.
Kolodner, J. (2002). Facilitating the learning of design practices: Lessons learned from an
inquiry into science education. Journal of Industrial Teacher Education, 39(3), 9–40.
https://scholar.lib.vt.edu/ejournals/JITE/v39n3/kolodner.html
Lam, Y., & Lawrence, G. (2002). Teacher-student role redefinition during a computer-
based second language project: Are computers catalysts for empowering change?
Computer Assisted Language Learning, 15(3), 295–315. https://doi.org/10.1076/
call.15.3.295.8185
Lan, Y. J. (2020). Immersion, interaction and experience-oriented learning: Bringing virtual
reality into FL learning. Language Learning & Technology, 24(1), 1–15. https://hdl.
handle.net/10125/44704
Lee, S.-M. (2020). The impact of using machine translation on EFL students’ writing.
Computer Assisted Language Learning, 33(3), 157–175. https://doi.org/10.1080/0958
8221.2018.1553186
Lee, S.-M. (2023). The effectiveness of machine translation in foreign language education:
A systematic review and meta-analysis. Computer Assisted Language Learning, 36(1–2),
103–125. https://doi.org/10.1080/09588221.2021.1901745
Lege, R., & Bonner, E. (2020). Virtual reality in education: The promise, progress, and
challenge. The JALT CALL Journal, 16(3), 167–180. https://doi.org/10.29140/jaltcall.
v16n3.388
Levy, M. (1997). A rationale for teacher education and CALL: The holistic view and its impli‑
cations. Computers and the Humanities, 30, 293–302.
Liakin, D., Cardoso, W., & Liakina, N. (2014). Learning L2 pronunciation with a mobile
speech recognizer: French/y/. CALICO Journal, 32(1), 1–25. https://doi.org/10.1558/
cj.v32i1.25962
Liakin, D., Cardoso, W., & Liakina, N. (2017). Mobilizing instruction in a second-language
context: Learners’ perceptions of two speech technologies. Languages, 2(3), 11–32.
https://doi.org/10.3390/languages2030011
Lord, G., & Lomicka, L. (2011). Calling on educators: Paving the way for the future of technol‑
ogy and CALL. In N. Arnold & L. Ducate (Eds.), Present and future promises of CALL:
From theory and research to new directions in language teaching (pp. 441–469). CALICO.
Lu, C. H., Chiou, G. F., Day, M. Y., Ong, C. S., & Hsu, W. L. (2006). Using instant mes‑
saging to provide an intelligent learning environment. In M. Ikeda, K. D. Ashley, & T.
W. Chan (Eds.), Intelligent tutoring systems. ITS 2006. Lecture notes in computer science
(vol. 4053, pp. 575–583). Springer. https://doi.org/10.1007/11774303_57
Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed. An
argument for AI in education. Pearson.
Mayo, M., Mitrovic, A., & McKenzie, J. (2000). CAPIT: An intelligent tutoring system for cap‑
italisation and punctuation. Proceedings International Workshop ON Advanced Learning
Technologies. IWALT 2000. Advanced Learning Technology: Design and Development
Issues (pp. 151–154). IEEE. https://doi.org/10.1109/IWALT.2000.890594
McCrocklin, S. (2019). Learners’ feedback regarding ASR-based dictation practice for pronun‑
ciation learning. CALICO Journal, 36(2), 119–137. https://doi.org/10.1558/cj.34738
McNamara, D. S., O’Reilly, T., Rowe, M., Boonthum, C., & Levinstein, I. B. (2007).
iSTART: A web-based tutor that teaches self-explanation and metacognitive reading
strategies. In D. S. McNamara (Ed.), Reading comprehension strategies: Theories, inter‑
ventions, and technologies (pp. 397–421). Routledge.
Meccawy, M. (2023). Teachers’ prospective attitudes towards the adoption of extended reality
technologies in the classroom: Interests and concerns. Smart Learning Environment, 10,
36. https://doi.org/10.1186/s40561-023-00256-8.
Meekings, S., & Scott, S. K. (2021). Error in the superior temporal gyrus? A systematic
review and activation likelihood estimation meta-analysis of speech production stud‑
ies. Journal of Cognitive Neuroscience, 33(3), 422–444. https://doi.org/10.1162/
jocn_a_01661
Michaud, L. N., McCoy, K. F., & Pennington, C. A. (2000). An intelligent tutoring system for
deaf learners of written English. Proceedings of the Fourth International ACM Conference
on Assistive Technologies (pp. 92–100). ACM. https://doi.org/10.1145/354324.354348
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A
framework for integrating technology in teachers’ knowledge. Teachers College Record,
108(6), 1017–1054.
Mousavinasab, E., Zarifsanaiey, N., Niakan Kalhori, S. R., Rakhshan, M., Keikha, L., &
Ghazi Saeedi, M. (2021). Intelligent tutoring systems: A systematic review of character‑
istics, applications, and evaluation methods. Interactive Learning Environments, 29(1),
142–163. https://doi.org/10.1080/10494820.2018.1558257
Mroz, A. P. (2018). Noticing gaps in intelligibility through Automatic Speech Recognition
(ASR): Impact on accuracy and proficiency. Paper presented at 2018 Computer‑Assisted
Language Instruction Consortium (CALICO) Conference, Urbana, IL.
Murtisari, E., Widiningrum, R., Branata, J., & Susanto, R. (2019). Google Translate in lan‑
guage learning: Indonesian EFL students’ attitudes. The Journal of AsiaTEFL, 16(3),
978–986. https://doi.org/10.18823/asiatefl.2019.16.3.14.978
Nguyen, A., Ngo, H. N., Hong, Y., Dang, B., & Nguyen, B‑P. T. (2023). Ethical prin‑
ciples for artificial intelligence in education. Education and Information Technologies,
28, 4221–4241. https://doi.org/10.1007/s10639‑022‑11316‑w
O’Neill, E. (2016). Measuring the impact of online translation on FL writing scores. IALLT
Journal of Language Learning Technologies, 46(2), 1–39. https://doi.org/10.17161/iallt.
v46i2.8560
Park, C. N., & Son, J.‑B. (2009). Implementing computer‑assisted language learning in
the EFL classroom: Teachers’ perceptions and perspectives. International Journal of
Pedagogies and Learning, 5(2), 80–101. https://doi.org/10.5172/ijpl.5.2.80
Parmaxi, A., Stylianou, K., & Zaphiris, P. (2017). Leveraging virtual trips in Google expe‑
ditions to elevate students’ social exploration. Proceedings of the IFIP Conference on
Human‑Computer Interaction (pp. 368–371). Springer. https://doi.org/10.1007/978‑
3‑319‑68059‑0_32
Peixoto, B., Pinto, R., Melo, M., Cabral, L., & Bessa, M. (2021). Extended virtual reality
for foreign language education: A PRISMA systematic review. IEEE Access, 9, 48952–
48962. https://doi.org/10.1109/ACCESS.2021.3068858
Peterson, M., & Jabbari, N. (2022). Digital games and foreign language learning: Context
and future development. In M. Peterson & N. Jabbari (Eds.), Digital games in lan‑
guage learning: Case studies and applications (pp. 1–13). Routledge. https://doi.
org/10.4324/9781003240075‑1
Reinders, H., & Wattana, S. (2014). Can I say something? The effects of digital game play
on willingness to communicate. Language Learning & Technology, 18(2), 101–123.
https://llt.msu.edu/issues/june2014/reinderswattana.pdf
Rilling, S., Dahlman, A., Dodson, S., Boyles, C., & Pazvant, O. (2005). Connecting CALL
theory and practice in pre‑service teacher education and beyond: Processes and Products.
CALICO Journal, 22(2), 213–235. https://doi.org/10.1111/j.1944‑9720.2006.
tb02276.x
Ruan, S., Willis, A., Xu, Q., Davis, G. M., Jiang, L., Brunskill, E., & Landay, J. A. (2019).
Bookbuddy: Turning digital materials into interactive foreign language lessons through
a voice chatbot. Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale
(pp. 1–4). ACM. https://doi.org/10.1145/3330430.3333643
Schmulian, A., & Coetzee, S. A. (2019). The development of messenger bots for teaching
and learning and accounting students’ experience of the use thereof. British Journal of
Educational Technology, 50(5), 2751–2777. https://doi.org/10.1111/bjet.12723
Schulze, M. (2008). AI in CALL: Artificially intelligent or almost imminent. CALICO
Journal, 25(3), 510–527. https://doi.org/10.1558/cj.v25i3.510-527
Shadiev, R., Sun, A., & Huang, Y.‑M. (2019). A study of the facilitation of cross‑cultural
understanding and intercultural sensitivity using speech‑enabled language translation
technology. British Journal of Educational Technology, 50(3), 1415–1433. https://doi.
org/10.1111/bjet.12648
Shute, V. J., & Psotka, J. (1996). Intelligent tutoring systems: Past, present, and future. In
D. Jonassen (Ed.), Handbook of research for educational communications and technology
(pp. 570–600). Macmillan.
Shute, V. J., & Zapata‑Rivera, D. (2008). Using an evidence-based approach to assess
mental models. In D. Ifenthaler, P. Pirnay-Dummer & J. M. Spector (Eds.),
Understanding models for learning and instruction (pp. 23-41). Springer. https://doi.
org/10.1007/978-0-387-76898-4_2
Smutny, P., & Schreiberova, P. (2020). Chatbots for learning: A review of educational chat‑
bots for the Facebook messenger. Computers & Education, 151, 103862. https://doi.
org/10.1016/j.compedu.2020.103862
VanLehn, K., Graesser, A. C., Jackson, G. T., Jordan, P., Olney, A., & Rose, C. P. (2007).
When are tutorial dialogues more effective than reading? Cognitive Science, 31(1),
3–62. https://doi.org/10.1080/03640210709336984
Weizenbaum, J. (1966). ELIZA‑A computer program for the study of natural language com‑
munication between man and machine. Communications of the ACM, 9(1), 36–45.
https://doi.org/10.1145/365153.365168
Xie, Y., Chen, Y., & Ryder, L. H. (2021). Effects of using mobile‑based virtual reality on
Chinese L2 students’ oral proficiency. Computer Assisted Language Learning, 34(3),
225–245. https://doi.org/10.1080/09588221.2019.1604551
Xu, Y., Wang, D., Collins, P., Lee, H., & Warschauer, M. (2021). Same benefits, different
communication patterns: Comparing children’s reading with a conversational agent
vs. a human partner. Computers & Education, 161, 104059. https://doi.org/10.1016/j.
compedu.2020.104059
Xu, Z., Wijekumar, K., Ramirez, G., Hu, X., & Irey, R. (2019). The effectiveness of intel‑
ligent tutoring systems on K‑12 students’ reading comprehension: A meta‑analysis.
British Journal of Educational Technology, 50(6), 3119–3137. https://doi.org/10.1111/
bjet.12758
Yeh, E., & Kessler, G. (2015). Enhancing linguistic and intercultural competencies through
the use of social network sites and Google Earth. In J. Keengwe (Ed.), Promoting global
literacy skills through technology‑infused teaching and learning (pp. 1–22). IGI Global.
https://doi.org/10.4018/978‑1‑4666‑6347‑3.ch001
Chapter 5
Digital Story Telling
5.1 Introduction
The rapid advancements in information technologies have transformed various fields, including education. These advancements and transformations have also made technology an integral part of language education, paving the way for educators to explore innovative ways, such as online learning platforms and gamification, to incorporate technology into traditional classroom settings (Altasan, 2016; Kavaklı Ulutaş & Abuşka, 2022, 2023; Kazazoğlu & Bilir, 2021). Therefore, it is apparent that learner‑centered innovative approaches necessitate customizable learning materials together with novel pedagogies.
Robin and Pierson (2005) agreed that the initial version of the storytelling elements prepared by the Center for Digital Storytelling (CDS) be reviewed, and they expanded the existing content to encompass ten essential components. When developing a meaningful narrative, it is crucial to establish a clear purpose, as all subsequent plans can be based upon it.
The storytelling process is highly instructive since it entails multi‑layered construction phases: acquiring information, organizing thoughts and ideas, forming an action plan, overcoming difficulties that emerge along the way, and eventually arriving at a result. As new literacies revolve around technological developments and usage, digital stories are pedagogically efficient tools that help students enhance their digital, communicative, and language competencies within classroom settings. Multiple reasons make digital stories user‑friendly (Bull & Kajder, 2004). For example, they can be created with a variety of contemporary multimedia elements (visuals, sound, animations, etc.), allowing users to express their unique experiences and points of view.
The process of creating digital narratives adheres to the ADDIE
(analysis‑design‑development‑implementation‑evaluation) model, a widely rec‑
ognized and utilized framework that has been in existence since the 1970s. For
analysis, a topic that is both engaging and informative is carefully selected. The
intended audience for the digital story is identified, and objectives are established.
Subsequently, a scenario outline is created, and it is imperative to conduct a com‑
prehensive literature review during this stage to ensure that the selected topic
aligns with previous research and to identify any gaps that require addressing.
During the design stage of content production, a comprehensive storyboard is
crafted to outline the subject matter. The appropriate multimedia tools are then
selected or prepared, and a final script is meticulously created. Personal narration
is utilized to record the script onto the designated platform during the develop‑
ment phase. Lastly, titles, texts, and credits are added to troubleshoot any potential
issues that may arise in the process, ensuring that the story reaches its final form.
The process continues with the implementation phase. The digital story is created
with the completion of all these components. The final phase is evaluation, and in
this stage, feedback is provided for story drafts before they are being published.
Following this, it is determined whether the messages that the digital stories want
to convey reach the target audience or whether there are still some points that need
to be developed. Herein, Cennamo and Kalk (2018) state that this phase includes
formative and summative processes to determine the overall quality of the learning
materials, which also confirms the idea that DST can also be used as an alternative
form of assessment.
Additionally, as several recently prominent teaching methods and approaches that support learner engagement share a common "learner‑centered" background (Barrett, 2006), students accustomed to passive learning have become acquainted with process‑oriented methods that promote active participation, thanks to the nature of DST‑oriented practices. Therefore, various DST tools have been introduced to contribute to these developmental processes.
5.3 Methodology
5.3.1 Research Design
The current study deployed a quantitatively driven mixed‑methods research design: the quantitative part employed a one‑group pre‑test–post‑test design, while the qualitative part drew on the participants' reflective reports from structured interviews. The data were the products of a funded national project. Using a DST tool named Powtoon, the development of the English language teacher candidates' writing skills was pursued, together with their perceptions of and reflections on the process.
5.3.2 Participants
The participants were selected through purposive sampling (Tongco, 2007) from volunteer freshmen at the Department of Foreign Languages Education of a state university in Türkiye in the fall semester of the 2022–2023 academic year. There were initially ten participants; however, two of them completed only the pre‑test, not the post‑test. Accordingly, they were excluded from the study and from further statistical analyses to protect validity and reliability. Eight volunteer participants, aged between 18 and 20, who had taken the compulsory "Writing Skills I" course and were enrolled in the "Writing Skills II" course, were thus recruited. This course sequence matters because, at these departments of Faculties of Education, skills‑based courses (e.g., oral communication skills, listening and pronunciation, advanced reading, and writing skills) are completed before students move on to subject‑specific field courses (e.g., linguistics, language acquisition, teaching language skills, etc.). Freshman students were therefore considered suitable participants, supporting the generalizability of the results to similar research contexts. In addition, the participants shared a common background of language knowledge: they had been admitted to the department on the basis of a standardized high‑stakes university entrance examination, none had prior experience abroad (which might otherwise derail the reliability of the study), and their self‑perceived proficiency levels were similar (from B1+ to B2).
5.3.3 Instruments
Powtoon was used as the DST tool. The project funded a premium account solely for the researcher, who used it to introduce the steps of digital story creation together with the essentials of the tool. The participants used the free version, since premium features required payment and were not needed during the process. Powtoon was chosen because the researcher had a good command of it and because its relatively friendly interface provides handy visuals. Since the researcher was going to organize introductory sessions on how to use this DST tool, it was convenient to rely on a tool already in her use; otherwise, preparation would have taken much more time. In addition, the free version was available to all participants without charge, which sufficed for the purposes of the current study.
Apart from that, a simple pre‑test and post‑test design was utilized for the quantitative part, based on the participants' midterm and final scores from the "Writing Skills II" course during the semester. It was thus possible to detect whether there was a statistically significant increase attributable to DST‑oriented practices. Before the DST process began, the researcher also asked the participants demographic questions under three main headings: (a) digital competence, (b) digital use, and (c) digital transformation. They self‑reported their digital skills via Google Forms, for flexibility, on an instrument with a Cronbach's alpha reliability of .85. These items also helped identify whether any participant had markedly greater prior knowledge or skills for creating digital stories, which might derail the reliability and validity of the current study. To investigate their prior digital competencies, the participants rated a list of items (N = 15) composed of can‑do statements on a 5‑point Likert‑type scale from strongly disagree (1) to strongly agree (5), covering digital competence, digital use, and digital transformation, so as to detect their digital repertoires and skills at the very beginning through self‑report. This form was used only at the outset; later, participants were expected to write narratives in response to a structured open‑ended sentence pattern, elaborated below.
For the qualitative part, a structured open‑ended sentence pattern was given
to the participants: “At the end of this session, I….…” which was provided with an
example (Ex: At the end of this session, I like to learn how to draw some simple story‑
boards to illustrate my ideas for a digital story) to ensure their understanding before
they filled in their narratives. In this way, it was aimed to observe their reflections
on the overall process in their own words. Before the participants were introduced to the data collection process and instruments, their informed consent, together with Ethical Committee approval, was obtained to ensure that the results would be reported solely for research purposes and that their names would be kept anonymous.
5.3.4 Procedure
The study was conducted via Microsoft Teams (MT). On MT, a professional learning community (PLC) channel was created, listing the group of participants together with the researcher. On this channel, the researcher made all announcements regarding introductory meetings on the project, DST training sessions, updates, data collection processes, assignments, and reflection for evaluation. Through online sessions scheduled in advance at the convenience of both the researcher and the participants, the procedure was implemented consistently for research purposes.
Initially, the first introductory meeting was held face‑to‑face to let participants
meet the researcher, understand the research process, and ask questions if they
happened to have any further inquiries. This meeting was also helpful for making
pre‑service English language teachers see what the project was about, what the proj‑
ect demanded, and whether they would like to participate or not. Taking this as the
starting point, the sampling of the participants became easier as well. For the sampling process, the course instructor was also invited to meet and discuss the potential of the project, which helped him run the course more smoothly. Since midterm and final exam scores would be used, permission for them to be part of the research process was obtained from the course instructor. With this confirmation establishing collaboration among the parties, the researcher, together with her academic advisor, planned to identify voluntary participants as a preliminary step.
Afterward, the participants of the study were met online to discuss the timeline
together with the project requirements and given information on how to conduct
the overall process. In doing so, they were given a list of items in the form of can‑do
statements regarding their digital competence, digital use, and digital transfor‑
mation to detect participants’ digital repertoires and skills in the very beginning
through self‑assessment, conducted online through Google Forms. The rationale was that if any participant had markedly greater knowledge and skills than the others, s/he would need to be withdrawn from the process, as such an imbalance might derail the overall results. As it turned out, participants indicated that they somewhat lacked such digital competencies. To eliminate the shortcomings that might arise from this lack of knowledge and skills, and to facilitate adaptation to the research project, the digital story production process was thoroughly explained to the participants beforehand.
Following that, the DST tool that would be in use for weeks was introduced
to the participants in an online meeting organized on MT. As mentioned previ‑
ously, as a DST tool, Powtoon was used by purchasing a premium account to pro‑
duce sample digital stories and guide the participants by the researcher. Until the
midterm week of the “Writing Skills II” course, the course syllabus tracked by the
course instructor was followed according to the usual flow. Introductory meetings
were implemented during this period during which the participants were given pre‑
determined tasks weekly and instructed to generate their digital stories between the
midterm and final weeks of the course. Since the research timeline was set as seven
weeks, the participants were given until midterm week to gain a good command of digital skills so that they could create their own stories by the final week. During this time,
they were given sessions on how to use DST for writing skills development, how
to apply Powtoon as a DST tool to create unique stories, and how to create stories
effectively in line with the subject areas previously defined under the triangulation
of the course instructor, academic advisor, and researcher.
Subsequently, the stories were shared on YouTube, and feedback sessions were
held on the MT channel, organized by the researcher, on the stories developed by the participants. The aim was thereby to support and monitor the development of the participants' digital skills as well as their writing. The requested digital
stories were prepared under four headings with the consensus of the three parties
on (a) how to paraphrase (sentence/paragraph substitution), (b) content and lan‑
guage integrated learning (CLIL)‑based article analysis, (c) how to use citation/give
references, (d) how to create a cause‑effect essay. Henceforth, participants delved
into the various steps involved in producing digital stories and explored how they
could engage with digital content and computer‑mediated multimodal strategies
(Figures 5.1 and 5.2).
Herein it should be noted that participants were not forced to produce their digital content within a previously defined time limit or space. As the nature of the DST process entailed flexibility, participants were given extra time to create their own stories; however, to conduct the research project efficiently, there was a timeline to follow, which had already been shared with the participants in the very first session and was also available as a separate file in the folders of the MT channel.
After presenting and publishing their digital content, the participants were given time to study for final exams, since these were part of the process. Before that, the structured open‑ended sentence pattern was administered again online through Google Forms to ease the process of obtaining information about their personal experiences and reflections on the overall DST process. The pattern given to the participants was: "At the end of this session, I….…", and an example was given in a separate sentence to point the participants in the appropriate direction. It should be noted that all the materials in use were in English, since the participants' language development in writing was part of the study, although their native language was Turkish; this choice was maintained for research purposes. After the completion of the term, the midterm and final exam scores of the participants were gathered from the course instructor and documented. The differences between these exams were analyzed using statistical software, and participants' reflections were analyzed thematically via narrative analysis.
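The chapter reports only that the midterm–final differences were analyzed with statistical software; a minimal sketch of how such a one‑group pre‑test–post‑test comparison is commonly run (here in Python with SciPy rather than SPSS, and on hypothetical scores, since the participants' actual exam data are not reproduced) might look as follows. With n = 8, a non‑parametric Wilcoxon signed‑rank test is a common companion to the paired t‑test.

import numpy as np
from scipy import stats

# Hypothetical midterm (pre) and final (post) writing scores for eight
# participants; the study's real exam data are not reproduced here.
pre  = np.array([62, 58, 70, 65, 55, 68, 60, 63], dtype=float)
post = np.array([74, 66, 78, 72, 64, 75, 70, 71], dtype=float)

t, p_t = stats.ttest_rel(post, pre)        # paired-samples t-test
w, p_w = stats.wilcoxon(post, pre)         # non-parametric alternative for small n
gains = post - pre
d = gains.mean() / gains.std(ddof=1)       # Cohen's d for paired gains

print(f"paired t = {t:.2f}, p = {p_t:.4f}")
print(f"Wilcoxon W = {w:.1f}, p = {p_w:.4f}")
print(f"Cohen's d (gains) = {d:.2f}")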
Contrary to the researcher's expectation that participants, having been born into the new age of technology, would be advanced in digital skills, participants indicated that they somewhat lacked digital competencies. To eliminate the shortcomings that might arise from this lack of knowledge and skills, and to facilitate adaptation to the research project, the digital story production process was thoroughly explained to the participants beforehand. The lowest scores were noted for
Item 12 (I know advanced techniques, such as inter‑cut, slow and fast motion, and
subtitles to transmit messages according to the theme), Item 8 (I can create digital stories),
Item 7 (I have problem‑solving skills), and Item 10 (I can apply video editing techniques
to set the mood), with mean scores of 1.88, 2.00, 2.25, and 2.63, respectively. Similarly, Item 15 (I can ask dramatic questions to hold the interest of the audience), with a mean score of 2.66; Item 9 (I can apply different angles and shoots in a digital story), with a mean score of 2.86; and Item 13 (I can develop storylines according to the theme with creativity) and Item 14 (I can use storyboards for drawing and script writing), both with mean scores of 2.88, ranked toward the lower end, indicating that participants were not fully competent in the steps of composing a digital story.
Nevertheless, the highest mean scores were observed for Item 6 (I know when voiceovers are recorded in a quiet place, better quality is engaged), Item 4 (I can use digital tools like a camera, microphone, and tripod), and Item 5 (I can record a voice‑over), with mean scores of 5.00, 4.75, and 4.25, respectively. In sum, participants lacked advanced media techniques (e.g., inter‑cut, slow/fast motion, etc.), whereas they were aware that recording in a quiet place contributes to better quality in digital media content. Since the participants would self‑report on the overall process through qualitative reflections, this form was not re‑administered to detect day‑by‑day differences.
steps to help their writing skills development, and their future tendency to use DST in their language classrooms to teach a foreign/second language, were listed as the main themes.
As a reminder, the first introductory meeting was held on MT on May 20, 2022, and the participants' reflections were gathered through the structured open‑ended sentence pattern: "At the end of this session, I….…". The responses gathered from the participants were diverse; to ensure anonymity and confidentiality, the participants' names are not listed, and the responses are given in running order without any correction of the participants' writing.
Looking at the responses comprehensively, it is fair to state that although the participants at first had concerns about creating their own digital stories, they were positive and hopeful about the overall process. The fact that digital
stories were visually rich had a positive effect on the participants’ opinions about
DST. In addition, participants would mostly like to adopt DST for further use and
stated that they would continue to improve themselves. In this regard, participants
might also help to disseminate the use of DST in language education as a part of
their professional lives:
… I can use technology in my future classroom and easily grab students’ attention.
… I think it would be good for young learners as the bright colors and effects of the
digital stories will keep their attention and help them to focus.
… Digital storytelling is an interesting technique. With the right animation choices,
I am sure my students will be more interested in digital storytelling than tradi‑
tional methods.
… I think that digital storytelling attracts students’ attention.
… As a prospective teacher, I plan to use digital storytelling as its animations are
interesting for younger students, and I can also use it for my students to summarize
what I’m teaching.
… Digital storytelling makes the subject more understandable and interesting.
Participants also stated that the use of DST provided them with an advantage as future English language teachers in terms of grammatical accuracy and lexical knowledge, so that they could develop a sense of self‑confidence in preparing more authentic materials. At the same time, some responses pointed to the time and effort the technique demands:
… It takes time and effort. The making process is not that much fun.
… Creating digital stories is somewhat time consuming.
… I’ ll use it often, yet it takes a while to create, so if I don’t have time for my lecture
or presentation, I can’t use digital storytelling.
In addition, according to one participant, Powtoon itself had some limitations during the DST execution process. The pricing policy of Powtoon posed a barrier for users, especially due to exchange rate differences. Although the website offered a free account, the possibilities and material availability in that account were limited compared to the premium features. This limitation was reported to be demotivating and to restrict users' creativity and freedom. Since the participant had positive thoughts about the use of DST itself, however, s/he stated a preference for alternative platforms over Powtoon:
… I will use digital storytelling in the future but probably not this website.
to convey the story’s message to the audience” (Xu et al., 2011, p. 181). In doing so,
DST provides students with an environment in which they can learn the content,
practice language skills, communicate and collaborate to produce, and critically
think and analyze the content together with the overall learning process, thanks to
the quality standards enabled by technology. In this vein, the question addressed in this study is whether DST contributes to the writing skills of pre‑service English language teachers.
Beyond question, there are multidisciplinary studies examining the use of
DST for the advancement of writing skills (Meletiadou, 2022), in gifted education
(Yaman & Taşdelen, 2022), in teacher candidates’ development of creative writing
skills (Duman & Göçen Kabaran, 2015), and the employment of DST in EFL set‑
tings (Castillo‑Cuesta et al., 2021). Results have revealed that the use of DST can
create an authentic learning environment for learners and facilitate the learning
process by providing long‑term motivation and self‑efficacy. Thanks to the oppor‑
tunity of adding personal narratives to digital stories, learners can develop their
communicative and critical thinking skills by internalizing the process.
In the context of this study, eight English language teacher candidates, recruited voluntarily from the Department of Foreign Languages Education of a state university in Türkiye, were observed longitudinally along a task‑based continuum and provided with constructive feedback on subject areas defined in advance in collaboration with the course lecturer, academic advisor, and researcher, as they composed their unique digital stories via Powtoon. Analysis of the quantitative data through SPSS and of the qualitative data through narrative analysis shows that the implementation of DST had a favorable impact on the pre‑service English language teachers' writing skills development: as their level of engagement with digital content increased, their perceived improvement in DST and in writing also grew. Confirming this, a statistically significant increase was found between pre‑ and post‑test results. The participants also reflected that Powtoon, despite some limitations stemming from its premium‑only capabilities, can be utilized to improve the writing abilities of pre‑service English language teachers and to create digital stories.
When the pre‑ and post‑test results are compared, it can be stated that the process was instructive and that the participants' writing skills advanced over time, confirming the results of a previous study conducted with primary school students (Çıralı Sarıca & Koçak Usluel, 2016). Their study also showed that DST had a significant influence on students' writing skills development, as well as on visual memory and the longevity of the information learned. Similar results were obtained in the present study, supporting the generalizability of the claim that DST can improve students' writing skills. Likewise, as Alemi et al. (2022) stated, DST offered a novel experience for students as users, so elaborated guidance should be provided throughout the process. In this light, the explanatory sessions on DST held at the outset of the study, together with the sample digital stories prepared in advance by the researchers, can be credited with helping the students ultimately produce superior digital stories of their own. Looking at the test results again, it is noteworthy that participants demonstrated enhanced self‑efficacy in dramatic questioning, making necessary arrangements, and developing storylines by the end of the process. Additionally, the use of storyboards was reported to facilitate the creation of well‑defined plotlines within their narratives, as confirmed by previous studies in the literature (Yamaç & Ulusoy, 2016). Since difficulties may arise along the way, story creators are advised to strengthen their problem‑solving skills in case such difficulties are encountered.
As a result of the qualitative part, participants' reflections indicated that the use of this technique could increase students' interest in the lesson and that DST was suitable for presenting the subject matter in a more meaningful and summarized way. The notion of utilizing digital stories by effectively using DST technology
throughout their professional career journey with their future students also boosted
the enthusiasm of the participants in creating unique digital stories. Similar results
were obtained in the literature and confirmed that the presence of an audience
could boost users’ motivation to develop texts with greater diligence and eagerness,
resulting in high writing performance (Meletiadou, 2022).
Beyond question, by attending explanatory sessions and receiving helpful sam‑
ples from the researchers, participants could develop confidence in their digital
competencies and problem‑solving skills. The statistically proven increase in their
writing skills laced with positive reflective thoughts on digital story creation as their
final products demonstrated the benefits of utilizing this method to enhance writ‑
ing skills. Nonetheless, it is crucial to note that DST, however multifaceted, can be time‑consuming. Thus, an appropriate platform should be selected, and the overall process planned from the beginning, to ensure that participants' enthusiasm is not dampened. By doing so, users could enhance their cognitive skills and retain
the acquired knowledge in long‑term memory. In this vein, students could deepen
their comprehension of the cyclical nature of the writing process by evaluating and
revising their texts from many aspects during the preparation, drafting, editing, sto‑
ryboarding, and production processes of their digital stories. Through the incorpora‑
tion of their voices and the use of several multimedia tools in the stories, participants
felt more engaged, found the opportunity to personalize their products, and devel‑
oped their new literacy skills. In addition, since the stories were created in the target
language (English), participants could develop their language skills to create mean‑
ingful narratives in the target language as well as their writing skills.
In summary, digital stories can be advantageous for language teaching and learn‑
ing due to their ability to provide a visually captivating and enjoyable experience.
This can serve as an intrinsic motivator for students, leading to increased language
competence. During the creation process, narrators can enhance grammatical com‑
petence by crafting meaningful narratives. This technique has been confirmed to be
effective by the participants who have experienced its benefits firsthand. During the
creation process of digital stories, narrators can test and improve their problem‑solving
and management skills by overcoming technical problems (i.e., voice clarity, record‑
ing) together with the challenges of planning (i.e., creating storyboard and narration).
However, the current study examined the contributions of DST to EFL student–teachers' writing skills development with a relatively small number of participants, which may be noted as a limitation, albeit one grounded in research‑based evidence. Further research can therefore employ experimental designs with larger numbers of participants to detect possible contributions along with potential risks. Undoubtedly, DST will continue
to captivate the attention of educators and researchers who aim to elevate its efficacy
in promoting students’ learning since it is evident that DST offers a contemporary
pedagogical approach for enhancing users’ digital literacy and writing skills.
Acknowledgment
This study is supported by the Scientific and Technological Research Council of
Türkiye (TUBITAK) under the 2209‑A Research Project Support Program for
Undergraduate Students within the call of 2021/1.
References
Alemi, M., Givi, S., & Rezanejad, A. (2022). The role of digital storytelling in EFL stu‑
dents' writing skill and motivation. Language Teaching Research Quarterly, 32, 16–35.
https://doi.org/10.32038/ltrq.2022.32.02
Altasan, A. (2016). Current language teaching approaches. GRIN Verlag.
Balaman, S. (2018). Digital storytelling: A multimodal narrative writing genre. Journal of
Language and Linguistic Studies, 14(3), 202–212.
Barrett, H. (2006). Researching and evaluating digital storytelling as a deep learning tool.
Proceedings of the Society for Information Technology & Teacher Education International
Conference (pp. 647–654). Association for the Advancement of Computing in
Education (AACE).
Bull, G., & Kajder, S. (2004). Digital storytelling in the language arts classroom. Learning &
Leading with Technology, 32(4), 46–49.
Byram, M., & Feng, A. (2006). Living and studying abroad: Research and practice. Multilingual
Matters.
Cameron, L. (2001). Teaching languages to young learners. Cambridge University Press.
Castillo‑Cuesta, L. M., Quinonez‑Beltran, A., Cabrera‑Solano, P., Ochoa‑Cueva, C., &
Gonzalez‑Torres, P. (2021). Using digital storytelling as a strategy for enhancing EFL
writing skills. International Journal of Emerging Technologies in Learning (iJET), 16(13),
142–156. https://doi.org/10.3991/ijet.v16i13.22187.
Cennamo, K., & Kalk, D. (2018). Real world instructional design: An interactive approach to
designing learning experiences (2nd ed.). Routledge.
Çıralı Sarıca, H., & Koçak Usluel, Y. (2016). The effect of digital storytelling on visual memory
and writing skills. Computers & Education, 94, 298–309. https://doi.org/10.1016/j.
compedu.2015.11.016.
Chapter 6
Revitalizing English
Language Learning:
An Exploration of
ChatGPT’s Impact on
Student Engagement
and Motivation
Nghi Tin Tran and Thang Tat Nguyen
6.1 Introduction
Language education has witnessed a hopeful innovation in chatbots, notably ChatGPT driven by the cutting‑edge GPT‑4 model, an iteration of the Generative Pre‑trained Transformer (GPT) series. Unlike traditional methods, these chatbots stimulate engagement and provide real‑time feedback to learners. Nevertheless, there is meager empirical evidence attesting to the efficacy of ChatGPT in elevating student enthusiasm and engagement in EFL (English as a foreign language) contexts.
Artificial intelligence (AI) has quickly changed the landscape of language educa‑
tion. AI has the potential to shift educational patterns significantly, particularly in how languages are acquired. With the help of AI‑assisted instruments, students receive individualized responses and practical learning experiences, giving them more occasions to practice beyond the four walls of the classroom. This especially benefits
EFL students in learning English, as they frequently require supplementary assis‑
tance (Hung et al., 2022).
The significance of the English language across sectors like education, com‑
merce, and technology is undeniable. However, challenges in engagement and
motivation often hinder learners (Dörnyei, 2001). AI tools, such as ChatGPT, pres‑
ent potential solutions to these challenges, with initial studies suggesting enhanced
student engagement and motivation (Baker, 2016; Grassini, 2023).
As ChatGPT is an innovative application of AI, research into its applicability may shed light on how it should be used and modified. To fill this gap, this study aims to examine the impact of ChatGPT on EFL students' engagement and motivation.
This study may contribute to the literature on the use of technology in language
education (Baker, 2016). It particularly offers empirical insights into ChatGPT’s
potential, informing its future development and implementation strategies. The
findings may give implications for the immediate context and beyond, underscor‑
ing the importance of integrating technology into pedagogical designs (Eiland &
Todd, 2019; Gellerstedt et al., 2018).
literature. However, determining just how ChatGPT fits into this paradigm is
still debatable. Consequently, this study aims to determine the potential impact of
ChatGPT on the EFL landscape, utilizing prior research on technology‑enhanced
language instruction. The ultimate objective is to ascertain how ChatGPT can fos‑
ter greater student motivation and engagement levels.
6.3 Methodology
6.3.1 Research Design
A quasi‑experimental design was employed, utilizing both pre‑tests and post‑tests to
assess the impact of ChatGPT on student engagement and motivation. The experimental group received ChatGPT alongside traditional instruction, while the control group adhered to the conventional curriculum.
6.3.2 Participants
The study encompassed 50 university EFL students, categorized into two distinct
groups: the experimental group (n = 25) and the control group (n = 25). The demo‑
graphic details of the participants are summarized in Table 6.1.
6.3.3 Materials
To measure student engagement and motivation, identical pre‑tests and
post‑tests were administered (see Appendix A). Experimental group participants engaged with ChatGPT online to practice English alongside their traditional instruction activities. The ChatGPT platform offered interactive language prac‑
tice and feedback opportunities. Eight students were given ChatGPT accounts
to access, and researchers monitored these accounts. Following the interven‑
tion, both experimental and control groups completed a feedback questionnaire,
a ssessing the usability and effectiveness of ChatGPT as a supplementary tool
(see Appendix B).
Table 6.2 Pre-Test and Post-Test Scores for Engagement and Motivation
Group | Metric | Pre-Test/100 max (Mean ± SD) | Post-Test/100 max (Mean ± SD) | Min | Max
6.3.4 Procedure
At the commencement of the study, both groups underwent a pre‑test. Over a
ten‑week period, the experimental group engaged with ChatGPT for a minimum
of 30 minutes weekly, while both groups received the same classroom instruction.
A post‑test and a feedback questionnaire were administered upon study completion.
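Since the excerpt does not reproduce the full test statistics, the following sketch illustrates one common way such a two‑group pre‑test–post‑test design is analyzed: an independent‑samples t‑test on gain scores. The numbers below are simulated placeholders, not the study's data, and the gain‑score approach is an assumption about analysis style rather than the authors' reported method.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated placeholder scores out of 100 for two groups of 25 students each;
# these are not the study's data.
exp_pre   = rng.normal(60, 8, 25)
exp_post  = rng.normal(70, 8, 25)
ctrl_pre  = rng.normal(60, 8, 25)
ctrl_post = rng.normal(63, 8, 25)

# Comparing pre-to-post gains between groups guards against chance baseline
# differences, a common concern in quasi-experiments without random assignment.
exp_gain, ctrl_gain = exp_post - exp_pre, ctrl_post - ctrl_pre
t, p = stats.ttest_ind(exp_gain, ctrl_gain)

pooled_sd = np.sqrt((exp_gain.var(ddof=1) + ctrl_gain.var(ddof=1)) / 2)
d = (exp_gain.mean() - ctrl_gain.mean()) / pooled_sd  # Cohen's d

print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")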
6.4 Results
The following sections present a detailed analysis of the results, comparing the
experimental group that utilized ChatGPT with a control group that followed a
conventional curriculum.
Group | Reading | Writing | Speaking | Listening | Min | Max
Experimental | 6.8 ± 1.3 | 6.5 ± 1.2 | 7.1 ± 1.1 | 7.0 ± 1.2 | 4.9 | 9.7
Control | 6.5 ± 1.4 | 6.2 ± 1.3 | 6.8 ± 1.2 | 6.9 ± 1.3 | 4.7 | 9.4
The control group exhibited a slightly lower mean overall motivation score of
7.0 (SD = 1.2), indicating a moderately motivated cohort. Their primary motivation
factor was “Academic requirements,” implying that external academic demands
significantly influenced their motivation. Moreover, their frequency of setting
personal goals for English learning was characterized by a “Sometimes” practice
(M = Sometimes).
The range of motivation scores, represented by the Minimum (Min) and
Maximum (Max) values, provides a deeper understanding of the variability within
each group. The experimental group’s motivation scores spanned from a minimum
of 5.9 to a maximum of 9.8, reflecting diverse motivation levels among participants.
Similarly, the control group’s motivation scores ranged from a minimum of 5.5 to a
maximum of 9.5, indicating varying degrees of motivation within this group.
The higher motivation scores in the experimental group suggest that the integra‑
tion of ChatGPT might have reignited their intrinsic interest in learning English,
making the learning process more enjoyable and encouraging the frequent setting
of personal learning goals.
Participants’ self‑assessed language proficiency across different language skills,
namely reading, writing, speaking, and listening, offers valuable insights into their
perceptions of their own language abilities. Table 6.5 provides a comprehensive
overview of the language proficiency self‑assessment scores for both the experimen‑
tal and control groups, derived from participants’ responses to the language profi‑
ciency self‑assessment questions outlined in Appendix A.
For the experimental group, the mean self‑assessed reading proficiency score was
6.8 (SD = 1.3), suggesting a high level of self‑perceived reading ability. Similarly, they
rated their writing skills at a mean score of 6.5 (SD = 1.2), indicating a proficient
Experimental | 8.2 ± 0.6 | 12% | Yes | Login issues, slow response times | 6.9 | 8.4
6.5 Discussion
The investigation into the effectiveness of ChatGPT in enhancing English lan‑
guage education through a quasi‑experimental design covering engagement, motivation,
language proficiency, and user feedback has unveiled substantial insights that con‑
tribute significantly to the ongoing discourse on the integration of AI in education.
This section provides a comprehensive exploration of the key findings, establishes
pertinent connections with previous research, and conducts a thorough analysis of
the implications of the results.
Engagement and motivation, acknowledged as pivotal components of success‑
ful language learning (Dörnyei, 1998), emerged as key themes in this study. The
experimental group, actively utilizing ChatGPT, demonstrated a marked increase
in engagement compared to the control group. The mean engagement score of 4.2
(SD = 0.7) highlighted a noteworthy level of perceived engagement, suggesting the
platform’s efficacy in generating immersive learning experiences. The heightened
engagement was attributed to the real‑time feedback and conversational flow, iden‑
tified as engaging features. This finding aligns with Nikolic et al. (2023), who also
perceived language proficiency in EFL courses. The findings strongly suggest that
the experimental group, utilizing ChatGPT, consistently outperformed the con‑
trol group across multiple metrics. This aligns with prior research highlighting the
benefits of technology‑enhanced learning environments (Alzubaidi et al., 2016;
Arnau‑González et al., 2023). ChatGPT’s interactive nature and its provision of
real‑time feedback likely contributed to a more engaging learning environment,
encouraging students to actively participate and practice language skills (Ahmadi,
2018; Nikolic et al., 2023).
However, while the findings are encouraging, a cautious approach is nec‑
essary. A key limitation of this study is its confinement to a single university
setting with a small sample size, potentially affecting the generalizability of the
results. Additionally, relying on self‑assessment for evaluating language profi‑
ciency introduces the potential for bias, as students may not accurately gauge
their skills.
Looking forward, several intriguing avenues for future research beckon. The long‑term effects of integrating ChatGPT into EFL courses, its differential impact across age groups and cultural contexts, and its comparative effectiveness and user satisfaction relative to other AI‑driven language learning tools all warrant rigorous investigation.
As the educational technology landscape continues to evolve, tools like
ChatGPT offer a glimpse into the potential future of EFL learning. While initial
results are promising, dedicated research efforts are pivotal in harnessing the full
potential of AI‑driven tools, ensuring they serve as effective catalysts in the lan‑
guage learning journey.
6.6 Conclusion
In the rapidly advancing landscape of educational technology, the integration of
AI tools like ChatGPT into EFL courses has emerged as a promising avenue for
enhancing the learning experience. The findings underscore the significant poten‑
tial of ChatGPT in fostering increased student engagement and motivation. A
notable majority of participants not only found the chatbot easy to navigate but
also recognized its tangible benefits in bolstering their English language proficiency.
These positive outcomes resonate with the broader academic discourse, as high‑
lighted in the overview of existing research. Integrating technology, especially
AI‑driven tools, into educational settings has consistently fostered a more interactive
and engaging learning environment. As evidenced by prior studies, such environments
can significantly enhance motivation, a critical component of language acquisition.
References
Ahmadi, D. M. R. (2018). The Use of Technology in English Language Learning: A Literature
Review. International Journal of Research in English Education, 3(2), 115–125. https://
doi.org/10.29252/IJREE.3.2.115
Aisami, R. S. (2015). Learning Styles and Visual Literacy for Learning and Performance.
Procedia – Social and Behavioral Sciences, 176, 538–545. https://doi.org/10.1016/j.
sbspro.2015.01.508
Alzubaidi, E., Aldridge, J. M., & Khine, M. S. (2016). Learning English as a Second
Language at the University Level in Jordan: Motivation, Self‑Regulation and Learning
Environment Perceptions. Learning Environments Research, 19(1), 133–152. https://
doi.org/10.1007/s10984‑014‑9169‑7
Arnau‑González, P., Arevalillo‑Herráez, M., Luise, R. A. D., & Arnau, D. (2023). A
Methodological Approach to Enable Natural Language Interaction in an Intelligent
Tutoring System. Computer Speech and Language, 81. https://doi.org/10.1016/j.
csl.2023.101516
Baker, R. S. (2016). Stupid Tutoring Systems, Intelligent Humans. International Journal
of Artificial Intelligence in Education, 26(2), 600–614. https://doi.org/10.1007/
s40593‑016‑0105‑0
Chowdhry, S., Sieler, K., & Alwis, L. (2014). A Study of the Impact of Technology‑Enhanced
Learning on Student Academic Performance. Journal of Perspectives in Applied Academic
Practice, 2(3). https://doi.org/10.14297/JPAAP.V2I3.111
Coniam, D. (2014). The Linguistic Accuracy of Chatbots: Usability from an ESL Perspective.
Text & Talk, 34(5), 545–567. https://doi.org/10.1515/text‑2014‑0018
Dolores, R. V. (2006). A Study of Intonation Awareness and Learning in Non‑Native Speakers
of English. Language Awareness, 15(3), 141–159. https://doi.org/10.2167/la404.0
Dörnyei, Z. (1998). Motivation in Second and Foreign Language Learning. Language
Teaching, 31(3), 117–135. https://doi.org/10.1017/S026144480001315X
Dörnyei, Z. (2001). Motivational strategies in the language classroom. Cambridge University
Press.
Eiland, L. S., & Todd, T. J. (2019). Considerations When Incorporating Technology
into Classroom and Experiential Teaching. Journal of Pediatric Pharmacology and
Therapeutics, 24(4), 270–275. https://doi.org/10.5863/1551‑6776‑24.4.270
Fryer, L. K., Nakao, K., & Thompson, A. (2019). Chatbot Learning Partners: Connecting
Learning Experiences, Interest and Competence. Computers in Human Behavior, 93,
279–289. https://doi.org/10.1016/j.chb.2018.12.023
Gellerstedt, M., Babaheidari, S. M., & Svensson, L. (2018). A First Step towards a Model for
Teachers’ Adoption of ICT Pedagogy in Schools. Heliyon, 4(9), e00786. https://doi.
org/10.1016/j.heliyon.2018.e00786
Godwin‑Jones, R. (2023). Smart devices and informal language learning. In D. Toffoli,
G. Sockett, & M. Kusyk (Eds.), Language learning and leisure (pp. 69–88). De Gruyter
Mouton. https://doi.org/10.1515/9783110752441‑004
Golonka, E. M., Bowles, A. R., Frank, V. M., Richardson, D. L., & Freynik, S. (2013).
Technologies for Foreign Language Learning: A Review of Technology Types and Their
Effectiveness. Computer Assisted Language Learning, 27(1), 70–105. https://doi.org/10
.1080/09588221.2012.700315
Grassini, S. (2023). Shaping the Future of Education: Exploring the Potential and
Consequences of AI and ChatGPT in Educational Settings. Education Sciences, 13(7),
692. https://doi.org/10.3390/educsci13070692
Hamari, J., Koivisto, J., & Sarsa, H. (2014). Does gamification work? – A literature review
of empirical studies on gamification. Proceedings of the Annual Hawaii International
Conference on System Sciences, 3025–3034, Waikoloa, HI, USA, January 6–9, 2014.
https://doi.org/10.1109/HICSS.2014.377
Hung, B. P. (2021). Mediation of Digital Tools in English Learning. LEARN Journal:
Language Education and Acquisition Research Network, 14(2), 512–528.
Hung, B. P., Pham, A. T. D., & Purohit, P. (2022). Computer mediated communication in sec‑
ond language education. In R. Sharma, D. Sharma (Eds.), New trends and applications
in Internet of things (IoT) and big data analytics, Intelligent Systems Reference Library
(vol. 221, pp. 45–60). Springer. https://doi.org/10.1007/978‑3‑030‑99329‑0_4
Hwang, G. J., & Chang, C. Y. (2023). A Review of Opportunities and Challenges of Chatbots
in Education. Interactive Learning Environments, 31(7), 4099–4112. https://doi.org/1
0.1080/10494820.2021.1952615
Hwang, W.‑Y., Nurtantyana, R., Purba, S. W. D., Hariyanti, U., Indrihapsari, Y., & Surjono,
H. D. (2023). AI and Recognition Technologies to Facilitate English as Foreign
Language Writing for Supporting Personalization and Contextualization in Authentic
Contexts. Journal of Educational Computing Research, 61(5), 1008–1035. https://doi.
org/10.1177/07356331221137253
Kuhail, M. A., Alturki, N., Alramlawi, S., & Alhejori, K. (2023). Interacting with Educational
Chatbots: A Systematic Review. Education and Information Technologies, 28(1), 973–
1018. https://doi.org/10.1007/s10639‑022‑11177‑3
Lee, J. H., Yang, H., Shin, D., & Kim, H. (2020). Chatbots. ELT Journal, 74(3), 338–344.
https://doi.org/10.1093/ELT/CCAA035
Luo, Z. (2023). The Effectiveness of Gamified Tools for Foreign Language Learning (FLL):
A Systematic Review. Behavioral Sciences, 13(4), 331–331. https://doi.org/10.3390/
BS13040331
Mageira, K., Pittou, D., Papasalouros, A., Kotis, K., Zangogianni, P., & Daradoumis, A.
(2022). Educational AI Chatbots for Content and Language Integrated Learning.
Applied Sciences, 12(7), 3239. https://www.mdpi.com/2076‑3417/12/7/3239
Nikolic, S., Daniel, S., Haque, R., Belkina, M., Hassan, G. M., Grundy, S., Lyden, S.,
Neal, P., & Sandison, C. (2023). ChatGPT versus Engineering Education Assessment:
A Multidisciplinary and Multi‑Institutional Benchmarking and Analysis of This
Generative Artificial Intelligence Tool to Investigate Assessment Integrity. European
Journal of Engineering Education, 48(4), 559–614. https://doi.org/10.1080/0304379
7.2023.2213169
Okonkwo, C. W., & Ade‑Ibijola, A. (2021). Chatbots Applications in Education: A
Systematic Review. Computers and Education: Artificial Intelligence, 2, 100033. https://
doi.org/10.1016/j.caeai.2021.100033
Selvaraj, A., Radhin, V., Ka, N., Benson, N., & Mathew, A. J. (2021). Effect of Pandemic Based
Online Education on Teaching and Learning System. International Journal of Educational
Development, 85, 102444. https://doi.org/10.1016/J.IJEDUDEV.2021.102444
Setyosari, P., Slamet, T. I., Ulfa, S., & Oktaviani, H. I. (2019). Technology‑Supported Learning
Environment to Improve Higher‑Order Thinking Experience of Social Science Teachers
TPCK for the 21st Century Learning. International Conference on Learning Innovation,
Universitas Negeri Malang, Indonesia.
Wang, Z., & Han, F. (2021). Developing English Language Learners’ Oral Production with
a Digital Game‑Based Mobile Application. PLoS One, 16(1), e0232671. https://doi.
org/10.1371/journal.pone.0232671
PART II
TECHNOLOGY-ASSISTED LANGUAGE TESTING AND ASSESSMENT
Chapter 7
Technology‑Based
Language Testing:
Principles and
Future Directions
Hung Phu Bui and Truong Cong Bang
7.1 Introduction
Technological developments have revolutionized second language education and greatly influenced second language testing (Bui, 2023). They have provided an alternative to the traditional in‑person testing system: technology‑assisted language tests can be administered to test‑takers who do not have to be physically present at the registered test site (Suvorov & Hegelheimer, 2014). Also, artificial
intelligence can provide immediate constructive feedback for language learning and
development. However, many researchers (e.g., Ockey & Neiriz, 2021; Sadeghi,
2022) have raised several concerns about the current limits of technology‑assisted
language testing.
Although technology‑assisted language testing has attracted the attention of
applied linguists and practitioners, its challenges are worth examining and discuss‑
ing (Javed et al., 2019). Driven by a desire to provide practitioners with a compre‑
hensive landscape of technology‑assisted language testing, we critically review the
principles for the development and practices of online language tests. Then, we out‑
line the mechanisms of automated scoring of writing and speaking before summa‑
rizing existing problems and discussing the future directions for technology‑assisted language testing.
7.2 History
The introduction of technology‑assisted language testing dates back to the 1980s
when applied linguists began to make use of item response theory (Dunkel,
1999). The first known test in the field might have been the computer adaptive
test (CAT) developed by Larson and Madsen (1985). Since then, a large body of
research has investigated different aspects of using technology in language test‑
ing and assessment. To date, technology‑assisted language testing and assessment
is widely integrated into education, with such international proficiency tests as
TOEFL iBT and Aptis. The emergence of artificial intelligence has eased language test design and administration (Van Moere & Downey, 2016). Using auto‑
mated scoring technology, artificial intelligence can evaluate spoken and written
texts produced by second‑language speakers (Alderson, 2000; Shermis, 2014).
Also, it is easy for test developers to convert a text into a test of a multiple choice,
gap‑filling, or true or false format (Kane, 2012). Despite the development and
popularity of technology‑assisted language testing, researchers and practitioners
have raised concerns about the reliability and validity of online tests. Identifying
the possible strengths and weaknesses of technology in language assessment, we argue that human examiners have an advantage over technology precisely because they are human: they are able to make sense of emerging interpersonal dynamics and contextualize assessment flexibly. Conversely, technology has an advantage over human examiners precisely because it is not human: the testing system is objectivity‑oriented (Javed et al., 2019; Ockey & Neiriz, 2021; Sadeghi, 2022).
7.3 Attributes
Technology‑assisted language testing refers to the use of technology to assist in
assessing language competences. Early developments in the field mainly explored
attributes of computer‑assisted performance tests, used primarily to gauge or evaluate
linguistic competences, compared them with paper‑and‑pencil tests, and attempted to
make online tests friendly to test‑takers (Winke & Isbell, 2017).
Technological advancements have driven the field beyond its original borders; it now includes
artificial intelligence in language testing. Accordingly, the terms technology‑assisted
language testing and technology‑assisted language assessment are used interchangeably,
generally conceptualized as the use of technology to assess language abilities for
multiple purposes, e.g., to diagnose second‑language learners’ problems and under‑
stand and support the language learning process (Laghos & Zaphiris, 2009).
7.4 Principles
An extensive literature review shows many proposed frameworks for online assess‑
ment. To guide a transition to online assessment in educational settings, Bearman
et al. (2014) introduced the Assessment Design Decisions Framework, with six main
components categorized into three main stages of assessment. In the planning
phase, the assessment purpose and context are worth considering. During assess‑
ment, it might be necessary to consider what tasks should be provided and how
to assign them to learners or examinees. Interactions between the faculty and stu‑
dents are required, in which feedback is given where relevant. Learner outcomes
should probably be measured through confirmative assessment. It may be essential
for administrators and educators to consider if the learners achieve the outcomes as
expected (Bui & Nguyen, 2022). The assessment results, therefore, should be used
to modify the system. Jaam et al. (2021) used this theoretical three‑phase model to
explore the online assessment conducted at a national university in Qatar during
the COVID‑19 pandemic. The researchers concluded that the use of this frame‑
work contributed partly to the success of the study.
Models of automated scoring of writing can be classified into two main groups. The first
two models mainly rely on target prompts, but the scoring systems of the last two
models, construed as cross‑prompt scoring models, use non‑target prompts. Each
pair has a holistic and trait scoring system (Table 7.1).
Unlike the prompt‑specific model, the generic model is predetermined and
applied to all submitted answers. The scoring system mainly depends on the lan‑
guage features of the submitted text, such as the range of vocabulary and grammar
accuracy. In other words, while the generic scoring model relies on the surface fea‑
tures of the text, the prompt‑specific model can precisely explore the content and
the development and organization of ideas.
According to Van Moere and Downey (2016), the construction of a scoring
model should consist of a few phases. The process initially determines the variables
in the construct to be measured. For instance, the range of grammatical structures may
include such variables as the frequency of a specific structure and the complexity
of structures used in the text. Then, texts written by the target population are anal‑
ysed and statistics are generated. Scoring models are constructed to predict writing
scores based on the predetermined variables and their weights. To assess essays,
the scoring system needs to include a surface‑feature and grammar checker, latent
semantic analysis, word class categorization, and N‑grams. The surface‑feature and
grammar checker should be able to assess such features as length, punctuation,
sentence structure diversity, and text structure. Unlike the surface‑feature and
grammar checker, latent semantic analysis, a natural language processing technique,
evaluates meaning rather than form (Foltz et al., 2013).
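To make this construction concrete, the following minimal sketch trains a toy scoring model of the kind described above. It is an illustration under loud assumptions: the surface features, the two training essays, and the human scores are all invented, scikit‑learn is assumed to be available, and an ordinary least‑squares regression stands in for the proprietary models used by operational systems.

```python
# A minimal sketch of the scoring-model construction described by Van Moere
# and Downey (2016): predetermined text variables (features) are extracted,
# weights are learned from human-scored training essays, and the fitted model
# predicts scores for new submissions. Features and data are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

def extract_features(text: str) -> list[float]:
    """Crude surface features: length, sentence count, lexical diversity."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    n_words = len(words)
    return [
        n_words,                     # essay length in words
        len(sentences),              # sentence count
        len(set(w.lower() for w in words)) / max(n_words, 1),  # type-token ratio
    ]

# Hypothetical training data: essays already scored by human raters.
train_texts = [
    "The city builds new parks. People enjoy them. Parks help health.",
    "Technology changes education because learners can access diverse resources anywhere.",
]
human_scores = [2.0, 4.0]

X = np.array([extract_features(t) for t in train_texts])
model = LinearRegression().fit(X, human_scores)   # learns a weight per variable

new_essay = "Online tests give quick feedback, so students can revise their work."
print(model.predict(np.array([extract_features(new_essay)])))
```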
The word class categorizer is able to identify the part of speech of individual
words of an essay to determine the diversity, complexity, and accuracy of the vocab‑
ulary and grammar used in that essay (Enright & Quinlan, 2010). The construction
of N‑grams is based on corpus analysis. Unigrams (e.g., “good”), bigrams (e.g.,
“good house”), and trigrams (e.g., “a good house”) are then developed. Higher‑frequency
N‑grams are treated as indicators of lower quality because they reflect more common,
less sophisticated language. Low frequency alone, however, does not signal high quality:
although the expression “a laptop expensive” is infrequent, the word class categorizer
identifies it as incorrect (Attali & Burstein, 2006).
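The N‑gram check can be illustrated in a few lines of Python. This sketch is purely illustrative: the tiny reference corpus stands in for the large corpora real systems analyze, and the lookup only reports raw counts.

```python
# Illustrative N-gram check: N-grams from a learner phrase are looked up in a
# frequency table built from a (toy) reference corpus. Sequences unattested in
# well-formed text, such as "a laptop expensive", receive no corpus support.
from collections import Counter

reference_corpus = ("she bought an expensive laptop . it is a good house . "
                    "they built a good house nearby .").split()

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

corpus_counts = {n: Counter(ngrams(reference_corpus, n)) for n in (1, 2, 3)}

def ngram_support(phrase: str) -> dict:
    tokens = phrase.lower().split()
    return {n: [corpus_counts[n][g] for g in ngrams(tokens, n)] for n in (1, 2, 3)}

print(ngram_support("a good house"))        # trigram attested in the corpus
print(ngram_support("a laptop expensive"))  # unigrams occur, but the trigram does not
```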
The evaluation of speaking must be based on what the examinee says and how
they say it. Therefore, test developers must build three main models: acoustic, lan‑
guage, and scoring. The accuracy of automated scoring depends mainly on these
models and test design. The acoustic model must be able to recognize test‑takers’
speech. Speech recognition is based on probabilities of phonetic features in relation
to orthographic representations (Hinton et al., 2012). The quality of devices (e.g.,
microphones), unwanted noise from the surroundings, and the examinee’s accent
may affect speaking scores. The current literature proposes that the sample data used
to train the acoustic model should be from the target population. Demographic
features should be considered when the test provider develops the rating criteria.
As with rating essays, rating spoken language requires building language models, includ‑
ing frequencies of lexical items and accuracy measures (Balogh et al., 2012). Several scholars
(e.g., Bernstein et al., 2010; Xi et al., 2008) have proposed that the scoring model
development should be based heavily on speech recognition and fluency. Therefore,
speech variables may be the best predictors of spoken language proficiency.
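As a concrete illustration of such speech variables, the sketch below computes two simple fluency measures from a recognizer's hypothetical output (a transcript with word timestamps, both assumed here); real scoring models weight many more variables.

```python
# A hedged sketch of the "speech variables as predictors" idea: given assumed
# recognizer output, compute fluency measures of the kind scoring models weight.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds
    end: float

# Hypothetical recognizer output for a short spoken response.
words = [Word("I", 0.2, 0.4), Word("usually", 0.5, 1.0), Word("read", 1.1, 1.5),
         Word("books", 2.6, 3.0), Word("at", 3.1, 3.2), Word("night", 3.3, 3.8)]

duration = words[-1].end - words[0].start
speech_rate = len(words) / duration * 60            # words per minute
pauses = [b.start - a.end for a, b in zip(words, words[1:])]
long_pauses = sum(1 for p in pauses if p > 0.5)     # hesitation count

print(f"speech rate: {speech_rate:.0f} wpm, long pauses: {long_pauses}")
```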
7.5 Challenges in Technology‑Assisted
Language Assessment
When approaching the domain of technology‑enhanced language assessment, we
are confronted with several critical challenges that require careful consideration.
These obstacles encompass various areas, including validity and reliability, equity
and accessibility, security and integrity, adaptability and universality, and ethical
considerations. Each area presents its unique challenges, necessitating strategic
approaches and solutions.
Such metrics are crucial for assessing the effectiveness and efficiency of the language
assessment system. Researchers and developers can obtain a deeper comprehen‑
sion of the performance and precision of the evaluation tools by delving into these
nuances.
To provide additional context, it is essential to note that Oh and Song’s (2021)
work accords with a field‑wide consensus. Mezzadri and Sisti (2019) also concur
that a comprehensive and systematic validation and reliability testing approach is
essential for technology‑enhanced language assessment.
accountable use of technology in language testing moving forward. This will serve
as a crucial framework for responsibly incorporating technology into broader assess‑
ment practices.
7.7 Conclusion
Consolidating the investigation of technology‑assisted language assessment reveals
that this field offers a range of opportunities interwoven with obstacles. Principal
considerations include validity and dependability, equity and accessibility, security
and the prevention of dishonest practices, adaptability and universality, all under‑
pinned by ethical considerations.
The discussed advancements and innovations have significant implications for
the future of language assessment. Using AI, ML, and other sophisticated technolo‑
gies has substantial potential for reforming language proficiency assessment.
When considering the path forward, it is imperative to uphold a steadfast
commitment to the ethical, equitable, and inclusive implementation of technol‑
ogy‑assisted language assessment. By effectively addressing obstacles and capitaliz‑
ing on favourable conditions, it is possible to lay the foundation for a future period
in which language assessment attains enhanced accuracy, accessibility, and congru‑
ence with the diverse needs of learners across the globe.
This chapter has provided a comprehensive overview of the landscape of tech‑
nology‑assisted language testing, exploring the current challenges confronting the
field and imagining future developments that promise more effective, personalized,
and inclusive language assessment practices. We are in a position to shape a future
in which language testing accurately reflects the dynamic and diverse nature of lan‑
guage acquisition and proficiency due to our careful consideration of these central
issues and emergent technologies.
Acknowledgement
This publication was funded by the University of Economics Ho Chi Minh City
(UEH University) and University of Economics and Law, Ho Chi Minh City,
Vietnam (VNU).
References
Alderson, J. C. (2000). Technology in testing: The present and the future. System, 28(4),
593–603. https://doi.org/10.1016/S0346‑251X(00)00040‑3
Amigud, A., Arnedo‑Moreno, J., Daradoumis, T., & Guerrero‑Roldan, A. E. (2016). A behav‑
ioral biometrics based and machine learning aided framework for academic integrity
in e‑assessment. International Conference on Intelligent Networking and Collaborative
Systems (INCoS), Ostrava, Czech Republic, September 7–9,
2016 (pp. 255–262). https://doi.org/10.1109/INCoS.2016.16
Atoum, Y., Chen, L., Liu, A. X., Hsu, S. D. H., & Liu, X. (2017). Automated online
exam proctoring. IEEE Transactions on Multimedia, 19(7), 1609–1624. https://doi.
org/10.1109/TMM.2017.2656064
Attali, Y., & Burstein, J. (2006). Automated essay scoring with e‑rater® V.2. Journal of
Technology, Learning, and Assessment, 4(3), 1–31. https://www.jtla.org
Balash, D. G., Kim, D., Shaibekova, D., Fainchtein, R. A., Sherr, M., & Aviv, A. J. (2021,
August 9). Examining the examiners: students’ privacy and security perceptions of
online proctoring services. 17th Symposium on Usable Privacy and Security, Virtual
Conference, USA.
Balogh, J., Bernstein, J., Cheng, J., Van Moere, A., & Suzuki, M. (2012). Validation of
automated scoring of oral reading. Educational and Psychological Measurement, 72(3),
435–452. https://doi.org/10.1177/0013164411412590
Bearman, M., Dawson, P., Boud, D., Hall, M., Bennett, S., & Molloy, E. (2014). Guide to the
Assessment Design Decisions Framework. https://www.assessmentdecisions.org/guide/
Bernstein, J., Van Moere, A., & Cheng, J. (2010). Validating automated speaking tests.
Language Testing, 27(3), 355–377. https://doi.org/10.1177/0265532210364404
Bhole, C., Dave, J., Surve, T., & Thakkar, K. (2020, April 8). English proficiency adaptive test
series. The 3rd International Conference on Advances in Science & Technology (ICAST)
2020, Mumbai, India. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3566058
Bui, H. P. (2023). L2 teachers’ strategies and students’ engagement in virtual classrooms: A
multidimensional perspective. In D. K. Sharma, S. L. Peng, R. Sharma, & G. Jeon
(Eds.), Lecture notes in networks and systems (p. 617). Singapore: Springer. https://doi.
org/10.1007/978‑981‑19‑9512‑5_18
Bui, H. P., & Nguyen, T. T. T. (2022). Classroom assessment and learning motivation:
Insights from secondary school EFL classrooms. IRAL: International Review of Applied
Linguistics in Language Teaching, 60(3). https://doi.org/10.1515/iral‑2022‑0020
Cassell, M. (2023). Language technology applications: Current developments and future
implications. Journal of Linguistics and Communication Studies, 2(2), 83–89. https://
doi.org/10.56397/JLCS.2023.06.11
Chen, X., Zou, D., Xie, H., & Cheng, G. (2021). Twenty years of personalized language
learning. Educational Technology & Society, 24(1), 205–222.
Cheng, S. C., Cheng, Y. P., & Huang, Y. M. (2021). To implement computerized adaptive
testing by automatically adjusting item difficulty index on adaptive English learning
platform. Journal of Internet Technology, 22(7), 1599–1607. https://doi.org/10.53106/
160792642021122207013
Draaijer, S., Jefferies, A., & Somers, G. (2018). Online proctoring for remote examination:
A state of play in higher education in the EU. Technology Enhanced Assessment: 20th
International Conference, TEA, Barcelona, Spain, October 5–6, 2017 (pp. 96–108).
Springer.
Dunkel, P. (1999). Research and development of a computer‑adaptive test of listening com‑
prehension in the less commonly‑taught language Hausa. In M. Chalhoub‑Deville
(Ed.), Development and research in computer adaptive language testing (pp. 91–121).
Cambridge: Cambridge University Press.
Enright, M. K., & Quinlan, T. (2010). Complementing human judgment of essays written
by English language learners with e‑rater scoring. Language Testing, 27(3), 317–334.
https://doi.org/10.1177/0265532210363144
Fairbairn, J., & Spiby, R. (2019). Towards a framework of inclusion: developing accessibil‑
ity in tests at the British Council. European Journal of Special Needs Education, 34(2),
236–255. https://doi.org/10.1080/08856257.2019.1581404
Foltz, P. W., Streeter, L. A., Lochbaum, K. E., & Landauer, T. K. (2013). Implementation
and applications of the Intelligent Essay Assessor. In M. D. Shermis & J. Burstein
(Eds.), Handbook of automated essay evaluation: Current applications and new directions
(pp. 68–88). London, UK: Routledge.
Ghizlane, M., Hicham, B., & Reda, F. H. (2019, December 12–13). A new model of auto‑
matic and continuous online exam monitoring. International Conference on Systems of
Collaboration Big Data, Internet of Things & Security, Morocco.
Gu, L., Davis, L., Tao, J., & Zechner, K. (2021). Using spoken language technology for
generating feedback to prepare for the TOEFL iBT® test: A user perception study.
Assessment in Education: Principles, Policy & Practice, 28(1), 58–76. https://doi.org/10.
1080/0969594X.2020.1735995
Hinton, G., Deng, L., Yu, D., Dahl, G., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke,
V., Nguyen, P., Sainath, T., & Kingsbury, B. (2012). Deep neural networks for acous‑
tic modeling in speech recognition. IEEE Signal Processing Magazine, 29(6), 82–97.
https://doi.org/10.1109/MSP.2012.2205597
Horák, T., & Gandini, E. (2019). Improving feedback through computer‑based language
proficiency assessment. In N. Becerra, R. Biasini, H. Magedera‑Hofhansl, & A. Reimão
(Eds.), Innovative language teaching and learning at university: a look at new trends
(pp. 95–103). Research‑publishing.net. https://doi.org/10.14705/rpnet.2019.32.906
Jaam, M., Nazar, Z., Rainkie, D. C., Hassan, D. A., Hussain, F. N., & Kassab, S. E. (2021).
Using Assessment Design Decision Framework in understanding the impact of rapid
transition to remote education on student assessment in health‑related colleges: A
qualitative study. PLoS One, 16(7), e0254444. https://doi.org/10.1371/journal.
pone.0254444
Javed, M., Tahir, A., & Qadeer, A. (2019). The changing roles of students in the blended ELT
environment in Pakistan. Journal of Linguistics and Literature, 2(2), 17–25. https://
journals.au.edu.pk/ojserevna/index.php/erevna/article/view/52
Kane, M. (2012). Validating score interpretations and uses. Language Testing, 29(1), 3–17.
https://doi.org/10.1177/0265532211417210
Park, H., & Kim, T. (2022). User authentication method via speaker recognition and speech
synthesis detection. Security and Communication Networks, 2022, 1–10. https://doi.
org/10.1155/2022/5755785
Resdiana, W., & Yulientinah, D. S. (2023). Designing English language testing using a
web‑based monitoring platform. Journal of English Education, Linguistics, and Literature,
9(2), 41–48. https://doi.org/10.32682/jeell.v9i2.2838
Ridley, R., He, L., Dai, X., Huang, S., & Chen, J. (2021, February). Automated cross‑prompt
scoring of essay traits. Proceedings of the AAAI Conference on Artificial Intelligence, 35
(pp. 13745–13753).
Riekki, M., & Kuure, L. (2018). Discourses in place: Technology and language experts
negotiating solutions for a language learning application. In P. Taalas, J. Jalkanen, L.
Bradley, & S. Thouësny (Eds.), Future‑proof CALL: language learning as exploration and
encounters‑ short papers from EUROCALL 2018 (pp. 266–271). Research‑publishing.
net. https://doi.org/10.14705/rpnet.2018.26.848
Sadeghi, K. (2022). Technology in language assessment. In K. Sadeghi (Ed.), Technology‑assisted
language assessment in diverse contexts (pp. 1–13). London, UK: Routledge. https://doi.
org/10.4324/9781003221463
Settles, B., LaFlair, G. T., & Hagiwara, M. (2020). Machine learning‑driven language assess‑
ment. Transactions of the Association for Computational Linguistics, 8, 247–263. https://
doi.org/10.1162/tacl_a_00310
Shermis, M. D. (2014). State‑of‑the‑art automated essay scoring: Competition, results, and
future directions from a United States demonstration. Assessing Writing, 20, 53–76.
https://doi.org/10.1016/j.asw.2013.04.001
Stapleton, P., & Blanchard, J. (2021, March 13–20). Remote proctoring: expanding reli‑
ability and trust. The 52nd ACM Technical Symposium on Computer Science Education,
Virtual Event USA.
Suvorov, R., & Hegelheimer, V. (2014). Computer‑assisted language testing. In A. J.
Kunnan (Ed.) The companion to language assessment (pp. 1–20). Wiley. https://doi.
org/10.1002/9781118411360.wbcla083
Urosevic, A. (2019). Student authentication framework for online exams outside of school.
Unpublished thesis, Laurea University of Applied Sciences.
Uto, M. (2021). A review of deep‑neural automated essay scoring models. Behaviormetrika,
48, 459–484. https://doi.org/10.1007/s41237‑021‑00142‑y
Van Moere, A., & Downey, R. (2016). Technology and artificial intelligence in language
assessment. In D. Tsagari & J. Banerjee (Eds.), Handbook of second language assessment.
Boston, MA: De Gruyter Mouton. https://doi.org/10.1515/9781614513827‑023
Winke, P., & Isbell, D. (2017). Computer‑assisted language assessment. In S. Thorne &
S. May, (Eds.) Language, education and technology. Encyclopedia of language and educa‑
tion. Singapore: Springer. https://doi.org/10.1007/978‑3‑319‑02237‑6_25
Xi, X., Higgins, D., Zechner, K., & Williamson, D. M. (2008). Automated scoring of spon‑
taneous speech using SpeechRater V1.0. ELT Research Report Series, 2. Princeton, NJ:
ETS.
Chapter 8
Technology‑Assisted
Task‑Based Second
Language Assessment
for Learning
Lien Thi Xuan Cao, Huy Van Nguyen,
and Hung Phu Bui
8.1 Introduction
Assessment is widely documented as a cornerstone of education; it serves as a pivotal
tool to gauge students’ understanding, growth, and competencies (Eisner, 1993;
Newton, 2007). The landscape of classroom assessment has shifted significantly
in recent decades. Traditionally, classroom assessment comprised both summative
and formative assessments, serving the dual purpose of evaluating learning out‑
comes and guiding instructional decisions (Bui, 2023; Dixson & Worrell, 2016).
However, with time, formative assessment has emerged as the cornerstone of class‑
room assessment, gaining recognition as the most critical component. This shift
results from recognizing the powerful impact of formative assessment on student
learning. Formative assessment, in its essence, has evolved into assessment for learn‑
ing, a dynamic approach that not only assesses but actively supports and enhances
the learning process (Bui & Nguyen, 2022). Bui (2023) emphasizes “formative and
summative evaluation tells where a student is standing on the way to his destination
of learning, how much he is ahead or behind his classmates, to what extent the
behavioral changes occurred in him are acceptable, how far he can apply his pres‑
ent acquired knowledge to his future life or learning situations, at what point he
is facing any difficulty and why and so on” (p. 777). Consequently, this transition
has paved the way for the emergence of learning‑oriented assessment, where the
primary focus is on fostering continuous improvement and growth in students’
knowledge and skills, creating a more student‑centered and outcomes‑driven edu‑
cational environment.
In recent years, the educational landscape has witnessed a revolutionary shift
toward task‑based learning, an innovative approach that encourages active engage‑
ment, critical thinking, and practical application of knowledge (Ellis, 2003; Willis
& Willis, 2007). Amidst this transformation, technology has emerged as a pow‑
erful enabler, offering new avenues for assessing students dynamically and effi‑
ciently (Ramadhan et al., 2021). According to Lai and Li (2011), Ziegler (2016),
and Mulyadi et al. (2021), the integration of learning technologies into task‑based
language learning has been regarded as a practical instructional approach that
can offer a variety of benefits to language students. As a trajectory driven by the
convergence of task‑based pedagogy and technological advancements, the emer‑
gence of technology‑based task‑based assessment has become an inevitable trend in
language education. Early studies found that integrating educational technologies
and task‑based language teaching can make students less anxious and more moti‑
vated to apply their English skills in practical communication situations (Eslami
& Kung, 2016).
The purpose of this book chapter is to offer a comprehensive journey through
the theoretical underpinnings, practical considerations, and prospects of technol‑
ogy‑assisted task‑based assessment, catering to both scholars and practitioners in
the field of ELT. The chapter delves into the pedagogical implications of this assess‑
ment approach. To achieve this, the chapter is structured systematically, beginning
with a discussion on the fundamental definitions of tasks and task‑based language
assessment (TBLA). It then transitions to the realm of technology integration into
task‑based instruction and the emergence of online task‑based assessment. The ben‑
efits and challenges associated with this approach are critically examined, provid‑
ing a nuanced perspective on its strengths and potential limitations. Subsequently,
the chapter delves into the procedure of conducting technology‑assisted task‑based
assessments, detailing the steps involved in implementing this approach effectively.
There is also the inclusion of some practical suggestions for online task‑based assess‑
ment activities and recommendations for technological tools. Lastly, the chapter
looks ahead, exploring the future trends in online task‑based assessment, which
encompass innovations driven by artificial intelligence (AI) to enhance adaptive
and personalized assessment and multimodal assessments in the virtual learning
environment.
The heart of this approach lies in students’ communicative and meaningful engage‑
ment with these tasks and activities, fostering a holistic language‑learning experi‑
ence that promotes both linguistic and communicative competence.
Willis (1996) presents a comprehensive task‑based framework that guides lan‑
guage instruction through a structured sequence. The framework comprises three
main phases: pre‑task, task cycle, and language focus. Successive endeavors
(e.g., Ellis, 2003; Willis & Willis, 2007) modified the guidelines and gave differ‑
ent names to the three stages, namely pre‑task, during‑task, and post‑task.
However, the nature of each stage’s purpose and activities remained similar.
Basically, at the beginning of task‑based instruction, teachers design the tasks that
fit the lesson objectives and language aspects that students need to practice. In order
for students to perform tasks successfully, clear instructions and expectations are of
great significance; therefore, teachers need to clarify what students have to do and
how they can do the tasks. During the tasks, based on the nature of task require‑
ments, teachers can arrange independent work, pair work, or group work and let
students perform the tasks. At this stage, students might also be asked to report
what they have done in the tasks and showcase their work in various ways. After
students finish the tasks, teachers need to give feedback and encourage reflections so
that students can know their strengths, weaknesses, and considerations for further
improvement. It is also necessary for teachers to analyze and discuss with students
some important language features such as new words or grammar structures and
raise their awareness of the form and function of those features so that students can internalize them. The pro‑
cedure of task‑based instruction is summarized in Figure 8.1.
tools, like Google Docs or Wiki, can facilitate group tasks and collaborative projects
so that students can work together in real time, even if they are not in the same
physical location. Likewise, video conferencing tools like Zoom, Skype, or
Microsoft Teams enable live communication and collaboration among students,
even when they are geographically distant. This is particularly valuable for oral
communication tasks.
Furthermore, TATBA is advantageous in that technology offers more feed‑
back options (Ellis, 2003; González‐Lloret, 2017). Online feedback tools like
computer‑mediated communication software can facilitate communication
between teachers and students. Teachers can provide written or audio feedback on
assignments, and students can ask questions or seek clarification through digital
channels. Technology can automate certain aspects of feedback delivery, such as
grammar and vocabulary checks, pronunciation analysis, and language proficiency
level assessments. For instance, speech recognition software can evaluate pronun‑
ciation, fluency, and intonation for oral tasks. These tools can provide immediate
feedback on specific language features without the need to depend on teacher or
peer feedback, enabling students to become more independent in their task perfor‑
mance and improvement.
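As a toy illustration of such automated written feedback, the sketch below applies two hand‑written rules to a learner sentence. It is a stand‑in for the full engines the literature describes (grammar checkers, pronunciation analyzers), not an implementation of any particular tool.

```python
# A toy stand-in for automated feedback tools: two hand-written rules flag
# common learner errors and return immediate written feedback.
import re

RULES = [
    (re.compile(r"\b(he|she|it) (go|do|have)\b", re.I),
     "Subject-verb agreement: third-person singular verbs need -s/-es."),
    (re.compile(r"\ba ([aeiou]\w*)", re.I),
     "Article choice: use 'an' before a vowel sound."),
]

def feedback(text: str) -> list[str]:
    notes = []
    for pattern, message in RULES:
        for match in pattern.finditer(text):
            notes.append(f"'{match.group(0)}': {message}")
    return notes or ["No issues detected by these rules."]

for note in feedback("She go to school and buys a apple every day."):
    print(note)
```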
However, the implementation of TATBA poses some considerable challenges
that should not be underestimated. The most prevailing issue that can arise during
TATBA is technical problems and insufficient digital literacy among teachers and
students alike. Students may encounter technical problems such as poor Internet
connectivity, software glitches, or hardware issues that can disrupt the assessment
process (Arslanyilmaz, 2012; Xue, 2022). Moreover, not all students have equal
access to technology and the Internet. This digital divide can create disparities in
students’ ability to participate in and benefit from TATBA. In addition, the risks
of cheating and dishonest behaviors, which can threaten academic integrity, are
also a considerable challenge in online environments (Baer & McIntyre, 2022;
Chen, 2014). Online assessments of any kind can be susceptible to cheating and
plagiarism, and TATBA is no exception. Students may be tempted to seek unau‑
thorized assistance or copy content from the Internet, compromising the reliability
of the assessment.
From the perspective of Baralt and Gómez (2017), the online context intro‑
duces distinct dynamics to TBLT compared to face‑to‑face settings, influenced by
five key factors that teachers should take into careful consideration to ensure the
effectiveness of TATBA. Firstly, tasks that are effective in traditional settings might
not be equally captivating in a virtual environment. Secondly, online students often
grapple with distractions stemming from technical and social aspects, potentially
leading to misalignments between student and teacher expectations. Thirdly, main‑
taining communicative language teaching can prove intricate within e‑learning
spaces. Fourthly, video‑based interactions could induce self‑consciousness among
students. Lastly, educators must invest additional cognitive effort to adapt conven‑
tional language teaching tools for online pedagogy.
For example, teachers can ask students to engage in virtual customer service role‑plays,
interacting with AI chatbots in the target language. These scenarios can be executed
through chat platforms like ChatGPT, Google Bard AI, or Bing AI or immersive aug‑
mented reality environments using Google ARCore or VR using affordable Google
Cardboard. Students can practice resolving issues and answering questions, receiving
automated feedback to improve their language proficiency and communication abilities.
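A chatbot role‑play of this kind can be scripted in a few lines. The sketch below assumes the official `openai` Python package, an API key in the environment, and a placeholder model name; any comparable chat API could be substituted.

```python
# A minimal sketch of an AI customer-service role-play with brief language
# feedback after each student turn. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

chat = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": (
            "Role-play as a customer returning a phone. After each student "
            "turn, add one short note on their English in brackets.")},
        {"role": "user", "content": "Good morning! How can I helps you today?"},
    ],
)
print(chat.choices[0].message.content)
```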
Collaborative digital content creation: This activity aims to foster students’ col‑
laboration and language use in multimedia projects. Students can collaborate on
projects, such as creating a website, writing a blog, producing a podcast, or devel‑
oping a video, entirely in the target language. To complete this task, students have
to research, plan, and present their projects using the target language integratively
and meaningfully. The use of Web 2.0 tools such as wikis (Slimwiki, Mediawiki),
blogs (Weebly, Wix, WordPress, Blogger), Spotify, or social media platforms like
YouTube and TikTok can offer students more opportunities to not only practice
using language for real‑time purposes, but also help them to develop their digital
competence and creativity.
Slides or using more modern presentation software like Prezi or Canva. The assess‑
ment of this activity should focus on the content quality, students’ language use,
and the effectiveness of multimedia elements like visuals and audio, encouraging
students to communicate effectively in a multimedia‑rich digital environment.
The activities suggested above are just a few examples because technological
innovations continuously evolve and teachers’ creativity is endless.
Depending on their teaching contexts and students’ level, teachers should thought‑
fully consider different ways to implement TATBA to make their assessment more
diverse and engaging.
images, and interactive simulations. This shift enables the evaluation of skills that
extend beyond traditional text‑based tasks, providing students with a wider variety
of opportunities to showcase their language proficiency and creativity in practical
and meaningful ways. These aforementioned trends collectively reflect the dynamic
nature of TATBA as it continues to adapt and innovate in response to the evolving
demands of language education in the digital age.
Acknowledgment
This publication was funded by the University of Foreign Languages and
International Studies and the University of Economics Ho Chi Minh City (UEH
University), Vietnam.
References
Al Kandari, A. M., & Al Qattan, M. M. (2020). E‑task‑based learning approach to enhanc‑
ing 21st‑century learning outcomes. International Journal of Instruction, 13(1), 551–
566. https://doi.org/10.29333/iji.2020.13136a
Ali, J. K. M., Shamsan, M. A. A., Hezam, T. A., & Mohammed, A. A. (2023). Impact of
ChatGPT on learning motivation: Teachers and students’ voices. Journal of English
Studies in Arabia Felix, 2(1), 41–49. https://doi.org/10.56540/jesaf.v2i1.51
Anwar, K., & Arifani, Y. (2016). Task‑based language teaching: Development of CALL.
International Education Studies, 9(6), 168. https://doi.org/10.5539/ies.v9n6p168
Arslanyilmaz, A. (2012). An online task‑based language learning environment: Is it better
for advanced‑or intermediate‑level second language learners? Turkish Online Journal of
Educational Technology‑TOJET, 11(1), 20–35. https://www.learntechlib.org/p/55809
Baer, B. J., & McIntyre, T. (2022). Bringing task‑based instruction online: Challenges of
remote language assessment. In S. V. Nuss, & C. L. Martin (Eds.), Student‑centered
approaches to Russian language teaching (pp. 121–132). Routledge.
Baidoo‑Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intel‑
ligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching
and learning. Journal of AI, 7(1), 52–62. http://dx.doi.org/10.2139/ssrn.4337484
Baralt, M., & Gómez, J. M. (2017). Task‑based language teaching online: A guide for teachers.
Language Learning & Technology, 21(3), 28–43. http://hdl.handle.net/10125/44630
Brindley, G. (2013). TBLA. In C. Chapelle (Ed.), The encyclopedia of applied linguistics
(pp. 1–6). Wiley Blackwell. https://doi.org/10.1002/9781405198431
Bui, H. P. (2023). Vietnamese university EFL teachers’ and students’ beliefs and teachers’
practices regarding classroom assessment. Language Testing in Asia, 13, 10. https://doi.
org/10.1186/s40468‑023‑00220‑w
Bui, H. P., & Nguyen, T. T. T. (2022). Classroom assessment and learning motivation:
Insights from secondary school EFL classrooms. IRAL: International Review of Applied
Linguistics in Language Teaching, 60(3). https://doi.org/10.1515/iral‑2022‑0020
Chen, T., & Lin, C. (2018). Enhancing L2 English learning through mobile‑assisted TBLT:
EFL learners’ perspectives. The Journal of Asia TEFL, 15(2), 453–461. https://doi.
org/10.18823/asiatefl.2018.15.2.13.453
Chen, Y. L. (2014). A study on student self‑efficacy and technology acceptance model within
an online task‑based learning environment. Journal of Computer, 9(1), 34–43. https://
doi.org/10.4304/jcp.9.1.34‑43
Chong, S. W., & Reinders, H. (2020). Technology‑mediated task‑based language teaching: A
qualitative research synthesis. Language Learning & Technology, 24(3), 70–86. https://
www.lltjournal.org/item/10125‑44739/
Dixson, D. D., & Worrell, F. C. (2016). Formative and summative assessment in the class‑
room. Theory into Practice, 55(2), 153–159. https://doi.org/10.1080/00405841.2016.
1148989
Eisner, E. W. (1993). Reshaping assessment in education: Some criteria in search of prac‑
tice. Journal of Curriculum Studies, 25(3), 219–233. https://doi.org/10.1080/
0022027930250302
Ellis, R. (2003). Task‑based language learning and teaching. Oxford: Oxford University Press.
Eslami, Z. R., & Kung, W. T. (2016). Focus‑on‑form and EFL learners’ language develop‑
ment in synchronous computer‑mediated communication: Task‑based interactions.
The Language Learning Journal, 44(4), 401–417. https://doi.org/10.1080/09571736.
2016.1227219
Estaire, S., & Zanon, J. (1994). Planning classwork: A task based approach. Oxford: Heinemann.
González‐Lloret, M. (2017). Technology for task‐based language teaching. In C. A. Chapelle,
& S. Sauro (Eds.), The handbook of technology and second language teaching and learn‑
ing (pp. 234–247). Wiley‑Blackwell. https://doi.org/10.1002/9781118914069.ch16
González‐Lloret, M., & Ortega, L. (2014). Towards technology‐mediated TBLT: An intro‑
duction. In M. González‐ Lloret, & L. Ortega (Eds.), Technology‐mediated TBLT:
researching technology and tasks (pp. 1–22). John Benjamins Publishing Company.
Lai, C., & Li, G. (2011). Technology and task‑based language teaching: A critical review.
CALICO Journal, 28(2), 498–521. https://doi.org/10.11139/cj.28.2.498‑521
Long, M. (2015). Second language acquisition and task‑based language teaching. Wiley
Blackwell.
Mulyadi, D., Wijayatiningsih, T. D., Singh, C. K. S., & Prastikawati, E. F. (2021). Effects of
technology‑enhanced task‑based language teaching on students’ listening comprehen‑
sion and speaking performance. International Journal of Instruction, 14(3), 717–736.
https://doi.org/10.29333/iji.2021.14342a
Newton, P. E. (2007). Clarifying the purposes of educational assessment. Assessment in
Education, 14(2), 149–170. https://doi.org/10.1080/09695940701478321
Nguyen, H. T. T. (2023). LMS‑based integrated online assessment implementation at the
university to foster learning motivation and academic performance. Interactive Learning
Environments, 1–14. https://doi.org/10.1080/10494820.2023.2187422
Nielson, K. B. (2014). Evaluation of an online, task‑based Chinese course. In
M. González‑Lloret, L. Ortega (Eds.), Technology‑mediated TBLT (pp. 295–322). John
Benjamins.
Noroozi, M., & Taheri, S. (2021). The distinguishing characteristic of task‑based language
assessment. Journal of Language Teaching and Research, 12(5), 688–695. https://doi.
org/10.17507/jltr.1205.07
Noroozi, M., & Taheri, S. (2022). Task‑based language assessment: A compatible
approach to assess the efficacy of task‑based language teaching vs. present, prac‑
tice, produce. Cogent Education, 9(1), 2105775. https://doi.org/10.1080/23311
86X.2022.2105775
Norris, J. M. (2016). Current uses for task‑based language assessment. Annual Review of
Applied Linguistics, 36, 230–244. https://doi.org/10.1017/S0267190516000027
Norris, J. M., & East, M. (2021). Task‑based language assessment. In M. J. Ahmadian &
M. H. Long (Eds.), The Cambridge handbook of task‑based language teaching (pp. 507–528).
Cambridge University Press.
Nunan, D. (2004). Task‑based language teaching. Cambridge University Press.
Pellerin, M. (2014). Language tasks using touch screen and mobile technologies:
Reconceptualizing task‑based CALL for young language learners. Canadian Journal
of Learning and Technology / La revue canadienne de l’apprentissage et de la technologie,
40(1), 1–23. Canadian Network for Innovation in Education.
Perveen, A. (2021). Use of word clouds for task based assessment in asynchronous e‑language
learning. Mextesol Journal, 45(2), 1–11. https://www.mextesol.net/journal/index.
php?page=journal&id_article=23533
Ramadhan, S., Sukma, E., & Indriyani, V. (2021). Design of task‑based digital language
teaching materials with environmental education contents for middle school students.
Journal of Physics: Conference Series, 1811(1), 012060. https://doi.org/10.1088/1742‑
6596/1811/1/012060
Shehadeh, A. (2005). Task‑based language learning and teaching: Theories and applications.
In C. Edwards, & J. Willis (Eds.), Teachers exploring tasks in English language teaching
(pp. 13–30). Palgrave Macmillan.
Willis, D., & Willis, J. (2007). Doing task‑based teaching. Oxford University Press.
Willis, J. (1996). A framework for task‑based learning. Longman.
Xue, S. (2022). A conceptual model for integrating affordances of mobile technologies into
task‑based language teaching. Interactive Learning Environments, 30(6), 1131–1144.
https://doi.org/10.1080/10494820.2019.1711132
Ziegler, N. (2016). Taking technology to task: Technology‑mediated TBLT, performance,
and production. Annual Review of Applied Linguistics, 36, 136–163. https://doi.
org/10.1017/S0267190516000039
Chapter 9
Use of AI in Language
Test Development and
Administration
9.1 Introduction
Language testing has long been of great interest in language education and applied
linguistics (Bui, 2023). Language testing used to depend solely on humans, who
created, delivered, and evaluated tests, as well as gave feedback and interpreted
outcomes. The emergence of artificial intelligence (AI) in language education has
resulted in a renewed paradigm in language testing in the past decades (Bui &
Nguyen, 2022). Therefore, language test developers and test takers have new opportuni‑
ties and are faced with emerging challenges.
To date, the promising role of AI in second language assessment has become
more prominent. AI has provided powerful tools, such as machine learning (ML)
and natural language processing (NLP), to support learners by identifying their
errors, providing feedback, and assessing their language competencies (Woo &
Choi, 2021). On this basis, AI tools can support teachers in creating
individualized learning experiences and offer adaptive and personalized instruction
to learners. Moreover, AI can empower teachers to monitor and tutor learners more
effectively and efficiently as well as to design more authentic and interactive assess‑
ment tasks. However, the growing use of AI in language testing has raised various
ethical, social, and pedagogical concerns (Pedró et al., 2019).
Driven by the authors’ desire to assist practitioners in understanding the
aforementioned interest, this book chapter aims to provide practical guidance on
employing AI to develop language tests and ensure that they can be conducted
effectively and ethically. The chapter initially introduces the basic concepts of AI
and arguments for and against the use of AI in many aspects of language testing,
such as test development, administration, scoring, and feedback.
According to Chapelle and Voss (2008), Higgins et al. (2011), and Leacock and
Chodorow (2003), NLP is also designed to analyze the test‑taker’s responses and
provide feedback or scores based on various linguistic features, such as grammar,
vocabulary, fluency, pronunciation, or discourse. Moreover, as noted by Roever
and McNamara (2006), NLP can help improve the validity and fairness of lan‑
guage tests by ensuring that the content and difficulty are suitable for a specific
purpose and a particular social group.
Developed to process and create visual data, such as videos or pictures, computer
vision (CV) uses various techniques and algorithms to understand and manipulate visual
information, such as recognition, segmentation, reconstruction, synthesis, and
enhancement. This technology is useful for language testing because it enables the
system to process and interpret visual stimuli often used in language tests, such
as pictures or graphs. For instance, it can help recognize the objects or actions
depicted in an image and to generate a corresponding description or question
(Settles et al., 2020a). As proposed by Bernardi et al. (2016), CV can help evaluate
the quality and relevance of the visual materials used in language tests, such as the
clarity, contrast, and complexity of the images, and create new types of language
tests that involve visual tasks, such as image captioning, image summarization, or
image translation.
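As a brief illustration of machine image description, the sketch below assumes the Hugging Face `transformers` package and a public pretrained captioning checkpoint; the image file name is a placeholder, and an item writer would still review the output.

```python
# A sketch of CV-assisted item writing: a pretrained captioning model generates
# a description of a test picture, which could seed a prompt or answer key.
from transformers import pipeline

captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
result = captioner("test_picture.jpg")   # local path or URL to the test image
print(result[0]["generated_text"])       # e.g., a one-sentence description
```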
To assist the conversion of spoken language into other formats, such as texts,
scientists have also developed speech recognition (SR). It has many applications and benefits for oral
language testing, as it can filter out noise, recognize different speakers, adapt to
various accents, and so on. It is especially beneficial for language testing when it
supports the computer system in processing spoken input and output. For example,
it can transcribe a learner’s oral response and provide feedback on their pronuncia‑
tion or fluency (Zechner et al., 2009; Settles et al., 2020a), and it can support the design
and development of oral language tests that are more authentic, interactive, and adaptive to the learners’
needs and abilities. For instance, SR can enable the creation of simulated dialogues,
speech‑based games, or personalized tasks that can assess the learners’ communica‑
tive competence in various contexts and situations (Chapelle & Douglas, 2006; Xi &
Zechner, 2008).
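A minimal transcription step of this kind might look as follows, assuming the open‑source `openai-whisper` package and a placeholder recording; an operational test would add scoring and feedback on top of the transcript.

```python
# A minimal sketch of SR in an oral test pipeline: transcribe a test-taker's
# recorded response so downstream scoring or feedback can operate on text.
import whisper

model = whisper.load_model("base")         # small pretrained speech model
result = model.transcribe("response.wav")  # placeholder recording file
print(result["text"])                      # transcript handed to the scorer
```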
In brief, the potential of AI technologies in language testing is vast and mul‑
tifaceted. The various sub‑domains of AI, including ML, NLP, CV, and SR, each
play a crucial role in enhancing distinct aspects of language testing. ML algorithms
enable the system to adapt to different learners and contexts, providing personal‑
ized feedback and guidance. NLP allows the system to understand and generate
natural language, thereby evaluating a learner’s response’s grammatical accuracy,
lexical diversity, and semantic coherence. CV aids in processing and interpreting
visual stimuli used in language tests, while SR is instrumental in handling spoken
input and output, transcribing a learner’s oral response, and providing feedback on
pronunciation or fluency. AI technologies will promisingly revolutionize language
testing by making it more efficient, accurate, and personalized.
9.3.1 Efficiency
AI technology can strongly support language testing practices as it can enhance the
efficiency and quality of developing and scoring language tests. Unlike traditional
methods that rely on human evaluators to manually rate each test, AI can automate
the process of language assessment and provide immediate feedback to learners. For
example, as Deane et al. (2013) suggest, automated essay scoring systems can ana‑
lyze written responses in seconds and provide scores based on linguistic features and
content, allowing learners to assess themselves and monitor their learning process
efficiently and improve their language skills accordingly. Furthermore, AI can also
assist test developers in creating test items automatically, reducing the workload and
increasing the diversity of test content (Settles et al., 2020b). Therefore, AI technol‑
ogy can promote language testing in terms of efficiency and quality.
9.3.2 Objectivity
Unlike human evaluators who may be swayed by factors like fatigue or bias, AI
systems may offer objectivity, leading to more reliable and equitable assessment out‑
comes (Zechner et al., 2009). This objectivity is crucial in upholding the integrity
of language tests and ensuring that all learners are evaluated based on the same cri‑
teria. Furthermore, AI can provide learners with constructive diagnostic feedback,
which can help them identify their strengths and weaknesses and improve their
learning outcomes.
AI technology can contribute significantly to maintaining the objectivity and
integrity of language tests. First, it applies consistent and objective evaluation crite‑
ria, which reduces the possibility of human error or bias that may compromise the
fairness and reliability of language assessment (Zechner et al., 2009). This objectiv‑
ity is essential for maintaining the credibility of language tests and ensuring that
all learners are assessed according to the same set of criteria. Second, it provides
more comprehensive and diagnostic feedback to learners to assist in monitoring
their progress and address their areas of improvement, resulting in better learning
outcomes and more effective language development.
9.3.3 Scalability
AI can potentially transform language testing in various ways, especially in contexts
where large‑scale testing is needed. The Duolingo English Test, for example, uses AI to
estimate item difficulty and linguistic skills based on machine‑learned scale models
(Settles et al., 2020b). This allows the test to accommodate a high number of test
takers simultaneously, reducing the cost and time of language testing. Moreover,
AI can also facilitate language testing in different modes and modalities, such as
online or offline, spoken or written, or multimodal (Woo & Choi, 2021). This can
enhance the accessibility and diversity of language testing for different learners, as well
as test reliability and validity.
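A rough flavor of machine‑learned item calibration can be given in a few lines. The sketch below uses invented response data and a simple Rasch‑style log‑odds estimate; operational systems such as the one described by Settles et al. (2020b) fit far richer models.

```python
# A hedged sketch of item difficulty estimation: under a simple Rasch-style
# view, an item's difficulty is roughly the negative log-odds of answering it
# correctly. Response data are invented for illustration.
import math

# rows = test takers, columns = items (1 = correct, 0 = incorrect)
responses = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]

n_takers = len(responses)
for item in range(4):
    p = sum(row[item] for row in responses) / n_takers            # proportion correct
    p = min(max(p, 1 / (2 * n_takers)), 1 - 1 / (2 * n_takers))   # avoid logit(0) or logit(1)
    difficulty = -math.log(p / (1 - p))                           # higher = harder
    print(f"item {item + 1}: p = {p:.2f}, difficulty = {difficulty:+.2f}")
```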
9.4.1 Fairness
Linguists and language educators have recently raised concerns about the fairness
of AI‑assisted language testing. AI systems can potentially introduce unfairness
or discrimination in language assessment if they are not designed and developed
with careful attention to the diversity and complexity of language use (Burt,
2020). For example, an AI‑supported language testing system that evaluates
speaking skills might favor particular dialects or accents over others, creating
an unequal opportunity for test takers who do not speak with those dialects or
accents.
To address this challenge, it is essential to apply rigorous standards and best
practices in AI language test development and design, such as using diverse and rep‑
resentative datasets for training and testing AI systems, conducting regular audits
and evaluations of AI systems’ performance and impact, and involving relevant
stakeholders and experts in the decision‑making process. Another way to improve
the fairness of AI systems is to ensure transparency and accountability in test
design, development, and use (Burt, 2020). Transparency means that the AI system
should provide clear and understandable explanations of its decisions and actions,
while accountability means that it should be subject to oversight and regulation by
human experts and stakeholders (Burt, 2020).
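One of the audit practices mentioned above, comparing automated scores across speaker groups, can be sketched as follows; the data and the 0.5‑point flagging threshold are invented for illustration, and real audits would draw on operational score records.

```python
# A small sketch of a fairness audit: compare automated scores across speaker
# groups to flag possible accent- or dialect-related gaps.
from statistics import mean

scores_by_accent = {
    "accent_A": [6.5, 7.0, 6.0, 7.5],
    "accent_B": [5.0, 5.5, 6.0, 5.5],
}

overall = mean(s for scores in scores_by_accent.values() for s in scores)
for group, scores in scores_by_accent.items():
    gap = mean(scores) - overall
    flag = "  <- review for potential unfairness" if abs(gap) > 0.5 else ""
    print(f"{group}: mean {mean(scores):.2f} (gap {gap:+.2f}){flag}")
```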
9.4.2 Validity
Another critique centers on test validity. Stakeholders in language test‑
ing are concerned about the degree to which a test measures what it is designed to
measure and supports appropriate interpretations of examinees’ responses (Messick, 1989). Validity
is essential in the context of language testing supported by AI, where the use of AI
systems poses new challenges and opportunities for assessing language proficiency.
For instance, an AI system might not accurately assess a test‑taker’s competence
to use language appropriately in different situations or perform complex linguistic
tasks that require higher‑order thinking skills. For example, an AI system might fail
to recognize the pragmatic functions of language, such as politeness, sarcasm, or
humor, or it might not be able to evaluate the coherence and cohesion of an exam‑
inee’s written or spoken discourse.
Concerning validity, it is essential to adopt a validity framework that takes into
account the role and impact of AI in language testing, drawing on the latest theo‑
retical and empirical advances in the field. In addition, researchers must develop
methods for rapidly creating language proficiency assessments that are valid,
dependable, and secure (Settles et al., 2020b). These methods include using NLP
techniques to automatically generate test items from authentic texts, ML models to
automatically score test responses, and data analytics to monitor test quality and
security (Settles et al., 2020b).
9.4.3 Reliability
Test reliability refers to the level of consistency and accuracy of test results across
different occasions, raters, and forms (Bachman & Palmer, 2010). Accordingly, a
reliable test should produce similar outcomes when administered under the same
or equivalent conditions. However, achieving reliability in AI‑supported language
testing can pose various challenges. For example, an AI system designed to assess the
test‑taker’s language proficiency might produce inconsistent results if it is affected
by subtle differences in the test‑taker’s input, such as pronunciation, grammar, or
vocabulary (Woo & Choi, 2021). Regarding reliability, it is essential to conduct
thorough testing of AI systems under different scenarios and contexts and to keep
track of their performance over time.
Researchers (e.g., Woo & Choi, 2021) argue for the necessity of reliability test‑
ing, emphasizing interdisciplinary collaboration to enable rigorous and targeted
testing and aid in enacting and enforcing industry standards. Test reliability involves
conducting various types of analyses to evaluate the performance and behavior of
the AI system under different conditions and scenarios, such as stability, robustness,
generalizability, scalability, and usability (Woo & Choi, 2021).
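A very simple instance of such reliability testing is to score the same responses on two occasions, or with two versions of a system, and examine their agreement, as in the sketch below with invented scores.

```python
# A hedged illustration of a test-retest reliability check on automated
# scores; invented data. statistics.correlation requires Python 3.10+.
from statistics import correlation

run_1 = [5.5, 6.0, 7.5, 4.0, 6.5]  # scores from the first scoring run
run_2 = [5.0, 6.5, 7.0, 4.5, 6.0]  # scores from a second run on the same responses

print(f"test-retest correlation: {correlation(run_1, run_2):.2f}")
```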
reduced the workload and stress of human raters and increased their satisfaction
and motivation. Similarly, Seo et al. (2021) found that using a testing system to
provide feedback on writing tasks enhanced the efficiency and quality of feedback,
and improved learners’ writing performance and self‑regulation.
The practice can provide educators with valuable insights into each learner’s
progress and areas of difficulty. By analyzing the data collected from language tests,
AI can generate reports and dashboards that show learners’ strengths and weak‑
nesses, learning patterns, and preferences. These insights can help educators design
differentiated instruction and personalized learning experiences that cater to learn‑
ers’ diverse needs and goals. For instance, using an AI‑based system to monitor
and visualize learners’ reading comprehension levels enabled educators to adjust
their reading instruction accordingly, improving learners’ reading outcomes and
motivation.
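The reporting idea described above can be illustrated with a small pandas aggregation over invented test results; a real dashboard would draw on operational score records and far more dimensions.

```python
# An illustrative sketch of learner analytics: aggregate test scores per skill
# so an educator's dashboard can surface strengths and weaknesses.
import pandas as pd

results = pd.DataFrame({
    "learner": ["An", "An", "Binh", "Binh", "Chi", "Chi"],
    "skill":   ["reading", "listening", "reading", "listening", "reading", "listening"],
    "score":   [0.82, 0.55, 0.64, 0.71, 0.90, 0.60],
})

dashboard = results.pivot_table(index="learner", columns="skill", values="score")
dashboard["weakest_skill"] = dashboard.idxmin(axis=1)  # each learner's area of difficulty
print(dashboard)
```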
However, educators may also encounter challenges from the emerging language
testing system. Some educators may perceive AI as a threat rather than a support to
their teaching roles, especially if they lack the understanding and confidence about
the role of AI in education, possibly resulting in resistance or reluctance to use AI
tools or a loss of professional identity or autonomy. As Seo et al. (2021) argued, it
is essential to clarify that AI is not intended to replace human instruction but to
enhance it by providing complementary functions and services.
Another challenge for educators is to acquire the necessary skills and knowledge
to effectively use AI tools in their teaching practices. This requires professional
development programs that focus on the use of AI in education, and that provide
educators with opportunities to learn about the principles, features, benefits, and
limitations of AI tools. Moreover, educators need ongoing support and resources
to help them integrate AI tools into their teaching practices smoothly and success‑
fully. For example, it is suggested that providing educators with access to online
communities, where they can share their experiences and challenges with using AI
tools, can foster their confidence and competence in using AI in language testing.
Therefore, ensuring that educators are adequately trained and supported in
using AI tools is crucial. Professional development programs focusing on using AI
in education can equip educators with the necessary skills and knowledge to effec‑
tively integrate AI into their teaching practices. Furthermore, ongoing support and
resources can help educators navigate the challenges and maximize the benefits of
using AI in language testing (Seo et al., 2021).
The integration of AI in language testing can have a profound impact on test
takers, offering both potential benefits and challenges. From the test takers’ per‑
spective, using AI in language tests can enhance their perceptions of fairness and
consistency. This is because AI systems are programmed to evaluate responses
objectively based on predefined and consistent criteria, eliminating the risk of
human bias or error. Furthermore, AI systems can provide immediate and adaptive
feedback, potentially improving the test takers’ learning outcomes and motivation.
Moreover, AI systems can increase the availability and accessibility of language tests.
This means that test takers can take the test at their convenience without waiting
for a human examiner to be available. As a result, the integration of AI may make
language tests more accessible and convenient for test takers (Zhang et al., 2023).
Despite these potential benefits, using AI in language testing might also incite
mistrust among some test takers. This could stem from concerns about the reli‑
ability of AI systems. For instance, test takers might question whether an AI
system can accurately assess complex aspects of language use, such as pragmatics
or cultural nuances. Some test takers may feel uncomfortable interacting with
an AI system instead of a human examiner. The lack of human interaction could
make the testing experience feel impersonal or intimidating for some individuals
(Zhang et al., 2023).
Given these potential challenges, ensuring that AI systems used in language
testing are reliable and user‑friendly is crucial. Test developers should invest in rig‑
orous testing and validation processes to ensure that AI systems can accurately
assess language skills. Furthermore, user interfaces should be designed to be intui‑
tive and easy to navigate, helping to alleviate any discomfort or anxiety test takers
might feel about interacting with an AI system. By doing so, test developers can
optimize the benefits of integrating AI into language testing while minimizing the
potential drawbacks (Chapelle & Voss, 2022).
While integrating AI in language testing offers promising benefits such as
enhanced fairness, consistency, and availability, it also presents potential challenges
related to reliability and user comfort. Therefore, careful consideration must ensure
these systems are reliable, user‑friendly, and ultimately beneficial for test takers.
AI presents a significant opportunity for policy makers, particularly in the
realm of education. It serves as a powerful tool that can shape and influence educa‑
tional policies in unprecedented ways. The systems can collect, analyze, and inter‑
pret vast amounts of data, providing valuable insights to inform policy decisions.
For instance, it can track student performance across various parameters, identify
patterns and trends, and predict future outcomes. These insights can help policy
makers understand current policies’ effectiveness, identify improvement areas, and
make informed decisions about future policies (Tyler et al., 2023).
However, the incorporation of AI in education also brings with it potential risks
that policy makers need to be cognizant of. With AI systems collecting and analyz‑
ing large amounts of data, there are valid concerns about how this data is stored,
used, and protected. Policy makers need to ensure that robust data protection mea‑
sures are in place to safeguard the privacy of students and educators. AI systems may also exhibit biased behavior if the training data are biased or unrepresentative, which could lead to unfair outcomes in language testing. Therefore, policy makers need
to ensure that AI systems are trained on diverse and representative datasets to avoid
unwanted potential bias (Tyler et al., 2023).
While there are challenges associated with using AI in language testing, its
potential benefits are immense. Its ability to provide valuable insights can greatly
aid policy makers in making informed decisions. However, these benefits must be
balanced against the potential risks associated with privacy and discrimination. As
research continues and technology advances, we can look forward to seeing more
advanced AI applications in language testing. These advancements will undoubt‑
edly present new opportunities and challenges for policy makers, test takers, and
educators alike.
9.6 Conclusion
In summation, the incorporation of AI into language testing holds the potential to
reshape the landscape of language testing fundamentally. It offers many advantages,
including enhanced efficiency, objectivity, and scalability, and has demonstrated its
effectiveness in various contexts. Nevertheless, it also introduces challenges pertain‑
ing to fairness, validity, and reliability that require careful consideration. Despite
these hurdles, the potential of AI in language testing is vast. As technological
advancements continue to unfold and research progresses, we anticipate witnessing
increasingly innovative applications of AI within this domain. This underscores the
imperative for sustained research efforts to explore these applications and address
the concomitant challenges. The future of language testing is poised for a transfor‑
mative shift, with AI development playing a central role in steering this evolution.
Acknowledgment
This publication was funded by Ho Chi Minh City Open University (HCMCOU)
and the University of Economics Ho Chi Minh City (UEH University), Vietnam.
References
Bachman, L. F., & Palmer, A. S. (2010). Language assessment in practice. Oxford University
Press.
Bernardi, R., Cakici, R., Elliott, D., Erdem, A., Erdem, E., Ikizler‑Cinbis, N., Keller, F.,
Muscat, A., & Plank, B. (2016). Automatic description generation from images: A
survey of models, datasets, and evaluation measures. Journal of Artificial Intelligence
Research, 55, 409–442. https://doi.org/10.1613/jair.4900
Brown, H. D., & Abeywickrama, P. (2019). Language assessment: Principles and classroom
practices. Pearson Education.
Bui, H. P. (2023). Vietnamese university EFL teachers’ and students’ beliefs and teachers’
practices regarding classroom assessment. Language Testing in Asia, 13, 10. https://doi.
org/10.1186/s40468‑023‑00220‑w
Bui, H. P., & Nguyen, T. T. T. (2022). Classroom assessment and learning motivation:
Insights from secondary school EFL classrooms. IRAL: International Review of Applied
Linguistics in Language Teaching, (0). https://doi.org/10.1515/iral‑2022‑0020
Sumita, E., Kikui, G., Yamamoto, H., & Shirai, S. (2005). Measuring non‑native speakers’
proficiency of English by using a test with automatically‑generated fill‑in‑the‑blank
questions. In Proceedings of the Second Workshop on Building Educational
Applications Using NLP (pp. 61–68). Association for Computational Linguistics.
Tyler, C., Akerlof, K. L., Allegra, A., Arnold, Z., Canino, H., Doornenbal, M. A., Goldstein,
J. A., Budtz Pedersen, D., & Sutherland, W. J. (2023). AI tools as science policy advis‑
ers? The potential and the pitfalls. Nature, 622(7981), 27–30. https://doi.org/10.1038/
d41586‑023‑02999‑3
Wainer, H., Dorans, N. J., Flaugher, R., Green, B. F., & Mislevy, R. J. (2000).
Computerized adaptive testing: A primer (2nd ed.). Routledge. https://doi.
org/10.4324/9781410605931
Woo, J. H., & Choi, H. (2021). Systematic review for AI‑based language learning tools.
Journal of Digital Contents Society, 22(11), 1783–1792. https://doi.org/10.9728/
dcs.2021.22.11.1783
Xi, X., & Zechner, K. (2008). Towards automatic scoring of a test of spoken language with
heterogeneous task types. In J. Tetreault, J. Burstein & R. D. Felice (Eds.), Innovative
Use of NLP for Building Educational Applications: BEA Workshop 2008, Oregon,
USA: Proceedings (pp. 98–106). https://aclanthology.org/volumes/W08‑09/
Zechner, K., Higgins, D., Xi, X., & Williamson, D. M. (2009). Automatic scoring of
non‑native spontaneous speech in tests of spoken English. Speech Communication,
51(10), 883–895. https://doi.org/10.1016/j.specom.2009.04.009
Zhang, D., Hoang, T., Pan, S., Hu, Y., Xing, Z., Staples, M., Xu, X., Lu, Q., & Quigley,
A. (2023). Test‑takers have a say: Understanding the implications of the use of AI in
language tests. https://doi.org/10.48550/arXiv.2307.09885
Zou, B., Reinders, H., Thomas, M., & Barr, D. (2023). Editorial: Using artificial intelligence
technology for language learning. Frontiers in Psychology, 14, 1287667. https://doi.
org/10.3389/fpsyg.2023.1287667
III
USING TECHNOLOGY FOR
SPECIFIC PURPOSES IN
LANGUAGE EDUCATION
Chapter 10
Using Asynchronous
Computer‑Mediated
Communication to
Increase Students’
Out‑of‑Class Interaction
Vu Hong Lan
10.1 Introduction
To encourage 21st‑century communications, many media and the tools that sup‑
port them have been invented and have firmly embedded themselves in our inter‑
connected world. Computer‑mediated communication (CMC) systems have proven
vital for initiating, developing, and maintaining interpersonal interactions in vari‑
ous ways (Grunander, 2016; Croes et al., 2019). Although the digital revolution has
accelerated the development of multimodal and multisensory interfaces, they play
an essential part in the responsive structuring of communication in practically every
relational scenario (Keary, 2018; Sauer et al., 2022). It is, therefore, understand‑
able that research focused on communication and technology has accelerated enor‑
mously. They offer novel ways to learn about and communicate effectively beyond
our physical reach. The role of computers in CMC nowadays has changed due to
the advent of many modes of communication and their mediated communication
channels. Nevertheless, it is important to remain cautious regarding how CMC
DOI: 10.1201/9781003473916-13
utilizes these new tools, especially the multisensory integration of audio and visual
in affective information and communication (Tran, 2017; Massey et al., 2023).
Efforts to better understand student participation from several viewpoints have resulted in several definitions of CMC interaction, definitions that now accommodate multimodal or multisensory channels, including auditory, visual, and gestural (body language) cues. The key to increasing students' engagement is promoting collaboration
and interaction during learning (Umbach & Wawrzynski, 2005). Given that CMC
is influenced by societal and personal factors, stimulating and sustaining student
engagement has consistently been identified as a highly desirable but generally elu‑
sive goal for CMC research (Kahu, 2013; Bedenlier et al., 2020; Hutson & Herrell,
2021). Although it is widely agreed that good communication is essential to the
growing interdependence, virtualization, and multidisciplinary nature of coopera‑
tion, it still needs to be determined how to match engagement activities with com‑
munication behaviors. Yet, the lack of nonverbal cues and affective information, such as moods and emotions, can lead students to become self‑focused, unyielding to influence, disobedient, aggressive, and negatively affective in interpersonal communication.
Previous literature has confirmed that emotions play an important role as arousal
or stimuli in developing motivation for students’ engagement (Bradley et al., 2001;
Kim et al., 2014; Sands & Isaacowitz, 2017; Grondin et al., 2019; Li et al., 2020;
Nahum‑Shani et al., 2022). Nonetheless, more must be learned about how effectively CMC transmits affective information, or emotions to be precise, in terms of displaying asynchronous nonverbal cues and increasing engagement through gesture interaction. By nature, humans are capable of visualizing an image from a sound in their cognitive thought, and with generative integration, picturing that image no longer depends on imagination alone (Suguitan & Dacaymat, 2019; Hanawalt & Hofsess, 2020; Jackson & Nilan, 2022). However, most asynchronous
forms of communication are text‑based, such as email and text messaging, which
have limited the student’s awareness of their surroundings and usage of nonverbal
communication in CMC. Therefore, alternative means are required as there are
no emotions or gestures to let one express their sentiments (Harn, 2017; Wang &
Haapio, 2021; Ciancarini et al., 2021; Keynan et al., 2022). By employing multisensory communication, recipients can concentrate on the hidden message being conveyed, which helps motivate them and helps them comprehend context (Harn, 2017; Treem et al., 2020).
A review of prior literature establishes several types of engagement and prominent goals, in addition to the motivation and effectiveness of CMC media, revealing a relative lack of focus on satisfaction with the CMC experience.
While there are many CMC theories, this chapter focuses on evaluating the moti‑
vation, effectiveness, and satisfaction based on the CMC competency developed
by Spitzberg (2006). Additionally, a microanalysis of interaction using Waikato
Environment for Knowledge Analysis (WEKA) image classification and color lay‑
out is also implemented to contribute to the literature on the impacts of CMC on
students’ interaction.
10.3 This Study
10.3.1 Preparation for the Experiment
The current study proposed a two‑stage experiment involving qualitative and quan‑
titative approaches to analyze the indicators of computer‑mediated communica‑
tion that affect students’ engagement. Recruited participants were graduate EFL
(English as a foreign language) students with backgrounds in interaction design,
creative development, and graphic design. The participants all share the hobby of
listening to music on platforms such as Spotify, Apple Music, and YouTube. Thirty
participants (n = 30) were recruited (see Table 10.1 and Figure 10.1).
As part of the current study, a questionnaire was developed based on mature
items measuring motivation, effectiveness, and satisfaction from the CMC com‑
petency (CMCC) model proposed by Spitzberg (2006) (see Tables 10.2 and 10.3).
Responses used a five‑point scale on which 4 = "Mostly true of me" and 5 = "Very true of me." Source: Spitzberg, B. H. (2006). Preliminary development of a model and measure of computer‑mediated communication (CMC) competence. Journal of Computer‑Mediated Communication, 11(2), 629–666. https://doi.org/10.1111/j.1083‑6101.2006.00030.x
The music mood models used for audio analysis (Table 10.5) were:
Happy: mood_happy‑musicnn‑msd‑2.pb
Aggressive: mood_aggressive‑musicnn‑msd‑2.pb
Relaxing: mood_relaxed‑musicnn‑msd‑2.pb
Acoustic: mood_acoustic‑musicnn‑msd‑2.pb
Dance: danceability‑musicnn‑msd‑2.pb
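These .pb files are pretrained MusiCNN mood classifiers distributed with Essentia's TensorFlow models (Alonso‑Jiménez et al., 2020). As a minimal, purely illustrative sketch (not the chapter's actual pipeline; the filename song.mp3 is a placeholder, and the positive‑class column index is an assumption based on the published model metadata), such mood scores could be computed in Python as follows:

import numpy as np
from essentia.standard import MonoLoader, TensorflowPredictMusiCNN

# MusiCNN models expect 16 kHz mono audio.
audio = MonoLoader(filename='song.mp3', sampleRate=16000)()

models = {
    'happy': 'mood_happy-musicnn-msd-2.pb',
    'aggressive': 'mood_aggressive-musicnn-msd-2.pb',
    'relaxing': 'mood_relaxed-musicnn-msd-2.pb',
    'acoustic': 'mood_acoustic-musicnn-msd-2.pb',
    'dance': 'danceability-musicnn-msd-2.pb',
}

scores = {}
for label, graph in models.items():
    predictions = TensorflowPredictMusiCNN(graphFilename=graph)(audio)
    # One prediction per audio patch, with two columns (class vs. complement);
    # averaging the assumed positive-class column gives a single track score.
    scores[label] = float(np.mean(predictions[:, 0]))

print(scores)  # e.g. {'happy': 0.17, 'dance': 0.67, ...}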
In the first stage of the experiment, the recruited participants were invited to
determine the preferred music type of their choice before grouping them based on
their music preference. Before generating their first audio visualization, participants
were asked to cooperate and finalize three pieces of music. Once the music pieces
were added to the software, the results were generated (see Figure 10.2), which
revealed the results of the first five pairs of participants. Participants could then dis‑
cuss whether they preferred this form of visualization and expressed their emotional
experience before viewing the music mood score (see Table 10.6).
Figure 10.2 illustrates the first five pairs of the participants’ results for the
audio‑to‑visual process in the first stage of the experiment. Fifteen illustrations were
presented, and each pair had to select three pieces of music to generate the illustra‑
tion. The illustrations resembled the form of audio sound waves with different color
mixtures in order to match the type of music.
Table 10.6 Audio Data Corresponding to the Generated Images and User Pairs (each mood lists the scores for the 1st, 2nd, and 3rd selected songs, followed by their average)

1st Pair (P1 & P2, M & F): happy −0.166, −0.655, −0.436 (average −0.419); dance −0.667, −0.983, −0.406 (average −0.685); aggressive −0.036, −0.313, −0.948 (average −0.432); relaxing −0.964, −0.318, −0.303 (average −0.528); acoustic −0.519, −0.003, −0.027 (average −0.183)

2nd Pair (P3 & P4, M & M): happy −0.649, −0.166, −0.042 (average −0.286); dance −0.708, −0.667, −0.192 (average −0.522); aggressive −0.03, −0.036, −0.039 (average −0.035); relaxing −0.596, −0.964, −0.959 (average −0.84); acoustic −0.543, −0.519, −0.645 (average −0.569)

3rd Pair (P5 & P6, F & F): happy −0.436, −0.649, −0.735 (average −0.607); dance −0.406, −0.708, −0.653 (average −0.59); aggressive −0.947, −0.03, −0.037 (average −0.338); relaxing −0.303, −0.596, −0.634 (average −0.511); acoustic −0.027, −0.543, −0.431 (average −0.334)

4th Pair (P7 & P8, M & M): happy −0.042, −0.735, −0.649 (average −0.475); dance −0.191, −0.653, −0.708 (average −0.517); aggressive −0.038, −0.037, −0.03 (average −0.035); relaxing −0.958, −0.633, −0.596 (average −0.729); acoustic −0.645, −0.431, −0.543 (average −0.539)

5th Pair (P9 & P10, M & F): happy −0.166, −0.655, −0.436 (average −0.419); dance −0.667, −0.983, −0.406 (average −0.685); aggressive −0.037, −0.313, −0.947 (average −0.432); relaxing −0.964, −0.318, −0.303 (average −0.528); acoustic −0.519, −0.003, −0.027 (average −0.183)
Table 10.6 shows the distribution of scores across the music mood models (see Table 10.5). Overall, the decision for color choice and the type of music depended on the highest and second‑highest scores of the models. An average column was added to the table to reveal the average score across the three selected pieces of music.
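To make this selection rule concrete, the sketch below (a hypothetical helper, not the authors' code) averages a pair's mood scores over their three songs and picks the two strongest moods; the values reuse the 2nd pair's row of Table 10.6, reading the printed dashes as column separators rather than minus signs:

def summarize_pair(song_scores):
    # song_scores: three dicts of mood scores, one per selected song.
    moods = song_scores[0].keys()
    averages = {m: sum(s[m] for s in song_scores) / len(song_scores) for m in moods}
    top_two = sorted(averages, key=averages.get, reverse=True)[:2]
    return averages, top_two

# 2nd pair (P3 & P4) from Table 10.6:
pair2 = [
    {'happy': 0.649, 'dance': 0.708, 'aggressive': 0.030, 'relaxing': 0.596, 'acoustic': 0.543},
    {'happy': 0.166, 'dance': 0.667, 'aggressive': 0.036, 'relaxing': 0.964, 'acoustic': 0.519},
    {'happy': 0.042, 'dance': 0.192, 'aggressive': 0.039, 'relaxing': 0.959, 'acoustic': 0.645},
]
averages, top_two = summarize_pair(pair2)
print(round(averages['relaxing'], 2))  # 0.84, matching the table's average column
print(top_two)  # ['relaxing', 'acoustic']: the moods that would drive the palette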
In the second stage, gesture interaction was built on the audio‑to‑visual artwork generated in the first stage. Accordingly, the Kinect medium enabled the body‑tracing technique of TouchDesigner (TD) to capture and portray the participants' figures or silhouettes on a computer display. The experiment setup (see Figure 10.4) required two notebook computers and
Kinect devices while participants stood facing the devices to perform. Through
this stage, students act as artists or performers to communicate through their body
movements.
As can be observed from Figure 10.3, a TD node network requires multiple
nodes for different purposes. For instance, the purple‑colored node represents
the texture operator that manipulates pixels, while green‑colored channel opera‑
tors process data from multiple sources. In the second stage, participants interact
with one another using their audiovisual art generated from the first stage.
Figure 10.4 Setup and participation from participants in the second stage of the
experiment.
Figure 10.6 Sample of image data for machine learning technique to detect
human involvement and interaction.
The image classification technique could extract certain features of the dataset
(Shoumy et al., 2020). As the image data might contain one or more instances,
different image filters from WEKA were employed in the current study. One typi‑
cal algorithm, such as color layout, explores the red, green, and blue (RGB) color
scheme of the given data. The Pyramid Histogram of Oriented Gradients (PHOG), built from the histogram of oriented gradients (HOG), also captures the textural
and shape properties. Both filters divide the image data into smaller blocks before
applying additional widely used classifiers: multilayer perceptron (MLP), sequential
minimal optimization (SMO), and simple logistic (SL). MLP, a multilayer extension of the linear perceptron with a fast learning rate, can down‑weight irrelevant features across its layers. SMO trains a support vector machine whose outputs can be calibrated to yield probability estimates; it also replaces missing values and converts nominal attributes into binary, normalized ones. SL fits simple logistic regression models using LogitBoost. Classification accuracy was used to evaluate each model.
The data was loaded into the WEKA application and processed with the image classification filters. Tenfold cross‑validation was used to obtain more reliable accuracy estimates: the dataset was partitioned into ten equal‑sized folds, and in each round one fold served as the test set while the remaining nine were used for training. Each data point was thus used exactly once for testing, and accuracies were averaged across the ten rounds: in every round, a classifier was trained on the nine training folds with the chosen algorithm and then applied to the held‑out test fold.
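The chapter's pipeline was built in WEKA. As a rough scikit‑learn equivalent (a sketch under assumed parameters, not the authors' exact configuration), block‑wise color features can be extracted per image and evaluated with MLP, SVM, and logistic classifiers under tenfold cross‑validation:

import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def color_layout_features(img, grid=4):
    # Crude color-layout analogue: mean RGB per cell of a grid x grid partition.
    img = resize(img, (64, 64, 3))
    h = 64 // grid
    cells = [img[i*h:(i+1)*h, j*h:(j+1)*h].reshape(-1, 3).mean(axis=0)
             for i in range(grid) for j in range(grid)]
    return np.concatenate(cells)

def hog_features(img):
    # HOG over the whole image; a PHOG would repeat this at several grid levels.
    gray = img.mean(axis=-1)
    return hog(resize(gray, (64, 64)), orientations=8, pixels_per_cell=(16, 16))

def evaluate(images, labels):
    # images: list of RGB arrays; labels: interaction class for each image.
    X = np.stack([color_layout_features(im) for im in images])
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    classifiers = [
        ('MLP', MLPClassifier(max_iter=2000, random_state=0)),
        ('SMO-like SVM', SVC()),  # WEKA's SMO trains a support vector machine
        ('Simple logistic', LogisticRegression(max_iter=2000)),
    ]
    for name, clf in classifiers:
        model = make_pipeline(StandardScaler(), clf)
        acc = cross_val_score(model, X, labels, cv=cv).mean()
        print(f'{name}: mean 10-fold accuracy = {acc:.3f}')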
10.4 Results
10.4.1 Results of the Two‑Stage Experiment
The Cronbach’s Alpha value of the pretest was an acceptable 0.718, while the post‑test
score was 0.833 (Table 10.7). Additionally, Figure 10.7 demonstrates the box plot
results of students’ motivation, effectiveness, and satisfaction as they engaged with
CMC systems.
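For reference, Cronbach's alpha for k items equals k/(k − 1) multiplied by (1 − the sum of item variances divided by the variance of the total score). A minimal sketch, assuming responses are arranged as a respondent‑by‑item matrix:

import numpy as np

def cronbach_alpha(scores):
    # scores: 2-D array with one row per respondent, one column per item.
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# The chapter reports 0.718 for the pretest and 0.833 for the post-test.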
Though not statistically significant, the box‑plot results of students' motivation, effectiveness, and satisfaction in the preliminary and post‑stages of the experiments indicated changes between the two stages, with reported means ranging from 3.66 to 4.25.
Figure 10.7 Box plot of students' motivation, effectiveness, and satisfaction in the preliminary and post‑stage of the experiments.
Table 10.8 Results for the Image Classification Using Color Layout (classifiers: MLP, SMO, SL)
Table 10.9 Results for the Image Classification Using PHOG (classifiers: MLP, SMO, SL)
10.5 Discussion
This chapter reports a study on the relationship between using CMC and EFL
students’ engagement in interaction. The data was collected from the medium of
computer‑generated images using music as audio input and adopting gesture inter‑
action for developing multimodal interaction and communication. The main
parameter for visualization in this study was music, as it was evidenced from previ‑
ous studies that music is one of the most effective tools for information exchange
(Er & Aydilek, 2019). Various images and intellectual works were used to produce
high‑resolution, globally coherent images with hundreds of thousands of dimen‑
sions. These methods were not directly transferable to waveform synthesis because they leveraged spatial features of images that were absent in one‑dimensional audio signals. These approaches, however, were more directly relevant to two‑dimensional representations such as spectrograms. The study adopted music‑visualization techniques from past studies, ranging from Itten's color system to Russell's circumplex model of affect as applied by Dharmapriya et al. (2021), to inform the affective elements, such as color and shape, used in the microanalysis.
Because of a limited number of retrieved images and recognized interactions
from the second stage of the experiments, greater numbers of images for the dif‑
ferent groups may be included in future studies to improve the model’s accuracy.
Based on the data mining technique results, PHOG delivered lower accuracy com‑
pared to color layout because the color usage in the experiment was dominant com‑
pared to the shape usage. Thus, further research may consider improving object awareness within the Kinect device's field of view, and an image‑to‑image generative model might also be used to deliver more context to the ACMC. Using machine learning techniques for image classification might improve
users’ experience and create a co‑creation atmosphere. As mentioned earlier, the
MLP algorithm delivered the best classification accuracy. However, as image clas‑
sification usually involves large data sets, future work requires more extensive image
data with various features and properties.
Gestures and finger motions are used frequently in daily life to manipulate objects. Low‑cost depth sensors like Kinect are useful for studying Human‑Computer Interaction (HCI) with bare hands. Leap Motion could be considered an alternative; however, as the degrees of freedom increase, recognizing precise hand movements becomes significantly more complex than recognizing a Kinect skeletal gesture. The inclusion of human beings in the age of digital content impacts the centralized nature of HCI and broadens the understanding of CMC to encompass co‑creation activities via the medium of multimedia. Consequently, this realization enables new media artists and interface designers to recognize that generative and diffusion models can be employed in interactive new media art installations, and that more applications can be explored to encourage human engagement.
10.6 Conclusion
Computer‑mediated technologies facilitate more 21st‑century communications and
creativity in conveying information in daily life. The advent of multiple modes of
communication and their mediated communication channels has created a need for more research on innovative implementations and for extensive study to explore new possibilities, add value to humanity's social skills, and create more cooperative opportunities.
In this study, audiovisual‑generated arts and gesture interaction have been
implemented to increase students’ engagement. The microanalysis using WEKA
image classification resulted in high accuracy but still requires an extension to
the data quantity and more forms of human‑to‑human interaction to be defined.
Although the affective perspective of music and visualization has been discussed
throughout the study, the proper evaluation metrics for the affective communica‑
tion and aesthetic perspective are expected to be implemented in further studies.
References
Alonso‑Jiménez, P., Bogdanov, D., Pons, J., & Serra, X. (2020). Tensorflow audio models in Essentia. In ICASSP 2020‑2020 IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP) (pp. 266–270), Barcelona, Spain. https://doi.org/10.1109/
ICASSP40776.2020.9054688
Bedenlier, S., Bond, M., Buntins, K., Zawacki‑Richter, O., & Kerres, M. (2020). Facilitating
student engagement through educational technology in higher education: A systematic
review in the field of arts and humanities. Australasian Journal of Educational Technology,
36(4), 126–150. https://doi.org/10.14742/ajet.5477
Bradley, M. M., Codispoti, M., Cuthbert, B. N., & Lang, P. J. (2001). Emotion and motiva‑
tion I: Defensive and appetitive reactions in picture processing. Emotion, 1(3), 276–
298. https://doi.org/10.1037/1528‑3542.1.3.276
Kelleher, C., & Wagener, T. (2011). Ten guidelines for effective data visualization in scien‑
tific publications. Environmental Modelling & Software, 26(6), 822–827. https://doi.
org/10.1016/j.envsoft.2010.12.006
Keynan, O., Brandel, N., & Slakmon, B. (2022). Students’ knowledge on emotion
expression and recognition in computer‑mediated communication: A compara‑
tive case study. Computers & Education, 189, 104597. https://doi.org/10.1016/j.
compedu.2022.104597
Kim, D., Frank, M. G., & Kim, S. T. (2014). Emotional display behavior in different forms
of computer mediated communication. Computers in Human Behavior, 30, 222–229.
https://doi.org/10.1016/j.chb.2013.09.001
Landry, S., Jeon, M. (2020). Interactive sonification strategies for the motion and emotion
of dance performances. Journal of Multimodal User Interfaces, 14, 167–186. https://doi.
org/10.1007/s12193‑020‑00321‑3
Li, L., Gow, A. D. I., & Zhou, J. (2020). The role of positive emotions in education: A
neuroscience perspective. Mind, Brain, and Education, 14(3), 220–234. https://doi.
org/10.1111/mbe.12244
Massey, A., Montoya, M., Samuel, B. M., & Windeler, J. (2023). Presence and team per‑
formance in synchronous collaborative virtual environments. Small Group Research
(Online advanced publication). https://doi.org/10.1177/10464964231185748
Nahum‑Shani, I., Shaw, S. D., Carpenter, S. M., Murphy, S. A., & Yoon, C. (2022).
Engagement in digital interventions. American Psychologist, 77(7), 836–852. https://
doi.org/10.1037/amp0000983
Northey, G., Bucic, T., Chylinski, M., & Govind, R. (2015). Increasing student engagement
using asynchronous learning. Journal of Marketing Education, 37(3), 171–180. https://
doi.org/10.1177/0273475315589814
O’Brien, H. (2016). Theoretical perspectives on user engagement. Why engagement matters:
Cross‑disciplinary perspectives of user engagement in digital media (pp. 1–26). https://
doi.org/10.1007/978‑3‑319‑27446‑1_1
Oh, S. P. (2019). Computer‑mediated communication competencies of teachers and their
accessibility to virtual learning platform. Educational Leader (Pemimpin Pendidikan), 7,
61–74. https://ejournal.um.edu.my/index.php/PEMIMPIN/article/view/22003/11132
Pandeya, Y. R., & Lee, J. (2021). Deep learning‑based late fusion of multimodal informa‑
tion for emotion classification of music video. Multimedia Tools and Applications, 80,
2887–2905. https://doi.org/10.1007/s11042‑020‑08836‑3
Sands, M., & Isaacowitz, D. M. (2017). Situation selection across adulthood: The role of
arousal. Cognition and Emotion, 31(4), 791–798. https://doi.org/10.1080/02699931.
2016.1152954
Sauer, M., Wagner, C., Lombardi, G., Di Cesare, G., Sonesson, G., Igl, N., ... & Lensing, J. U.
(2022). Multimodality: The sensually organized potential of artistic works. Art & Culture
International Magazine, 10, 135–161. https://doi.org/10.11588/artdok.00007968
Sherblom, J. C., Withers, L. A., & Leonard, L. G. (2013). The influence of computer‑mediated
communication (CMC) competence on computer‑supported collaborative learn‑
ing (CSCL) in online classroom discussions. Human Communication, 16(1), 31–39.
University of Maine. https://scholar.google.com/citations?view_op=view_citation&hl
=en&user=IlQp164AAAAJ&citation_for_view=IlQp164AAAAJ:dhFuZR0502QC
Shoumy, N. J., Ang, L. M., Seng, K. P., Rahaman, D. M., & Zia, T. (2020). Multimodal big
data affective analytics: A comprehensive survey using text, audio, visual and physi‑
ological signals. Journal of Network and Computer Applications, 149, 102447. https://doi.org/10.1016/j.jnca.2019.102447
Chapter 11
Impacts of a Blended
Learning Model on EFL
Students’ Engagement
and English Proficiency:
Evidence from IELTS
Preparation Courses
Tuyet Thi Tran and Chau Thi Hoang Hoa
11.1 Introduction
The landscape of English education in Vietnam has evolved significantly, driven
by the demand for global communication and the diverse opportunities it offers
(Rynhart & Chang, 2014). Nevertheless, the enduring prevalence of teacher‑cen‑
tered pedagogical approaches, especially in remote areas, significantly impedes the
attainment of communicative language education objectives (Nuby et al., 2020).
Many efforts have been made to improve students' language proficiency throughout the country, such as implementing the national‑level project "Teaching and Learning Foreign Languages in the National Formal Educational System from 2008 to 2020" (Phuong, 2017).
DOI: 10.1201/9781003473916-14
11.3 Methodology
This study adopted a mixed‑methods research design. It used IELTS paper‑based
tests, observations, and teachers’ and students’ surveys as research instruments.
11.3.1 Participants
This study involved 600 students and 60 English teachers from 20 schools who
had no prior exposure to IELTS preparation. Besides the student and teacher par‑
ticipants as sources of research data, the implementation of this research project
involved the participation of staff at different positions, such as administrators, offi‑
cers, technicians, employers, parents, and the project manager, and their roles are
specified in Figure 11.1.
Together, these participants formed a cohesive network of roles that are instru‑
mental in implementing the OMO model. Each role was carefully examined within
the framework of theoretical frameworks, all with the goal of making meaningful
progress in English education in remote Vietnamese regions.
oral skills. The third set of classes, led by Vietnamese teachers, concentrated on
comprehensive assessment and knowledge consolidation. Regular mock tests were
conducted every two months to evaluate the intervention’s impact on student learn‑
ing outcomes. These tests served as practical assessments of the BL model’s effec‑
tiveness in this study, playing a vital role in achieving its objectives. For classroom
equipment, please see Figure 11.3.
The pretest results, presented in Figure 11.2, clearly show that students' language skills were relatively low before the
project implementation. In the first group, 22% of students scored below 2.0, indi‑
cating a basic understanding of English. The majority of the second group (53%)
achieved scores between 2.0 and just under 3.0, signifying a moderate level of com‑
petency. Approximately 25% of students in the third group scored between 3.0 and
3.5, demonstrating a relatively higher level of proficiency.
After implementing the OMO model, students underwent a thorough reeval‑
uation process, revealing significant progress. Notably, 31% of students improved
their scores to fall within the 2.5–3.5 range, indicating commendable progress in
English proficiency. A substantial 53% of students achieved scores ranging from
3.5 to 4.0, marking significant advancements in their language skills. Moreover,
16% of students scored between 4.0 and 4.5, showing a remarkable leap in profi‑
ciency. The remaining 6% of students who did not show improvement belonged
to various level groups but shared common characteristics, including low attendance, low homework scores, and a tendency to neglect assignments even after teacher
corrections. Notably, students in groups 2 and 3 demonstrated more significant
score increases than those in group 1. These transformative findings are depicted
in Figure 11.4.
Furthermore, the study examined the magnitude of score improvements demon‑
strated by students. While a small 6% segment showed no discernible improvement,
a significant 28% experienced commendable advancements of 0.5 bands in their
IELTS scores, signifying a positive impact on their language skills. Impressively,
39% of students demonstrated remarkable progress, achieving a complete 1‑band
improvement, shedding light on substantial enhancement. Finally, 27% of students
showcased substantial development, remarkably elevating their scores by 1.5 bands.
These findings, as depicted in Figure 11.4, underscore the profound impact of the
model on students’ IELTS scores, reaffirming its pivotal role in enhancing English
proficiency.
The student survey revealed that 94% of students found the model more satisfying than traditional
classrooms, directly correlating with increased engagement. Approximately 89% of
students attributed their improved English skills and heightened enjoyment of the
learning process to the model’s influence.
Furthermore, 86% of students acknowledged the model’s positive impact on
English competence, directly linking proficiency enhancement with improved
learning outcomes. Impressively, 91% recognized its positive influence on class‑
room engagement, actively promoting student participation. The model’s adaptabil‑
ity received commendations from 88% of students, affirming its versatility across
diverse learning styles. These findings were validated in Table 11.1, which consistently showed mean scores exceeding 4.8 for all measured dimensions,
reflecting an exceptionally high level of agreement with the model’s effectiveness.
Teachers’ overall evaluation of BL is presented in Table 11.2. The reli‑
ability of these survey responses was affirmed by robust Cronbach’s Alpha values,
ranging from 0.86 to 0.90.
Similarly, teacher survey results confirmed the OMO model’s positive influence
(see Table 11.2) on learning and teaching. A remarkable 92% of teachers observed
the model’s significant enhancement of student participation, aligning with stu‑
dent perceptions. A substantial 87% noted its positive effect on students’ English
proficiency, further substantiating its impact. Teachers attested to the model’s
transformative effects, with 89% reporting fundamental changes in their teach‑
ing experiences and 91% emphasizing the improved quality of English education.
11.5 Conclusion
The OMO‑based English approach introduces a distinctive fusion of content deliv‑
ery and pedagogical approaches that break away from conventional educational
norms. This finding suggests significant implications for student engagement and
academic achievements. The synergy between innovative pedagogy and seamless
technology integration highlights the model’s remarkable potential to drive trans‑
formative changes. It reshapes established pedagogical paradigms, significantly
enhancing student learning experience and bringing new life into education.
This investigation emphasizes the OMO model’s role in elevating English pro‑
ficiency in remote Vietnamese areas. The implications of these findings illuminate
the transformative potential embodied by the OMO model. It emerges as a power‑
ful tool capable of addressing educational disparities and creating equitable learn‑
ing opportunities, bridging gaps that were once seen as impossible. Moreover, the
model’s crucial role in aligning students with the dynamics of the post‑2020 labor
market underscores a central theme. It highlights the model’s immediate relevance
and urgency in a rapidly evolving world where digital literacy and language profi‑
ciency are more essential than ever.
This study serves as a call to action, a passionate appeal to educators, policymak‑
ers, and stakeholders. It urges them to recognize and harness the untapped potential
inherent in the OMO model as a potent catalyst for positive transformations in
remote education. The BL’s transformative power, driven by innovative pedagogical
References
Akbari, E., Naderi, A., Simons, R. J., & Pilot, A. (2016). Student engagement and foreign
language learning through online social networks. Asian Journal of Second and Foreign
Language Education, 1(4). https://doi.org/10.1186/s40862‑016‑0006‑7
Almekhlafi, A. G., & Almeqdadi, F. A. (2010). Teachers’ perceptions of technology integra‑
tion in the United Arab Emirates school classrooms. Journal of Educational Technology
& Society, 13(1), 165–175. http://www.jstor.org/stable/jeductechsoci.13.1.165
Asrifan, A., Zita, C. T., Vargheese, K. J., Syamsu, T., & Amir, M. (2020). The effects of CALL
(computer assisted language learning) toward the students ‘English achievement and
attitude. Journal of Advanced English Studies, 3(2), 94–106. http://doi.org/10.47354/
jaes.v3i2.88
Bizami, N. A., Tasir, Z., & Kew, S. N. (2023). Innovative pedagogical principles and techno‑
logical tools capabilities for immersive blended learning: A systematic literature review.
Education and Information Technology, 28, 1373–1425. https://doi.org/10.1007/
s10639‑022‑11243‑w
Bui, P. H., Pham, A. T. D., & Purohit, P. (2022). Computer mediated communication
in second language education. In R. Sharma & D. Sharma (eds.), New trends and
applications in Internet of Things (IoT) and big data analytics. Springer. https://doi.
org/10.1007/978‑3‑030‑99329‑0_4
Bui, T. K., Bui, P. H., & Hejsalem‑Brahmi, M. (2023). Qualitative research in social sci‑
ences: Data collection, data analysis, and report writing. International Journal of
Public Sector Performance Management, 12(1–2), 187–209. https://doi.org/10.1504/
IJPSPM.2023.132247
Cao, N. (2023). IELTS 4.0 can be converted into 10 marks for admission into Vietnamese col‑
leges and universities. https://giaoduc.net.vn/ielts‑40‑duoc‑quy‑doi‑thanh‑diem‑10‑kh
ong‑anh‑huong‑den‑cong‑bang‑khi‑xet‑tuyen‑post234613.gd
Chen, X. L., Zou, D., Xie, H. R., & Su, F. (2021). Twenty‑five years of computer‑assisted
language learning: A topic modeling analysis. Language Learning & Technology, 25(3),
151–185. http://hdl.handle.net/10125/73454
Graham, C. R. (2013). Emerging practice and research in blended learning. In M. G.
Moore, Handbook of distance education, 3 (pp. 333–350). Routledge. https://doi.
org/10.4324/9780203803738.ch21
Ha, C. (2020). Only 2.9 marks for admission into Grade 10 in Nghe An. https://vtc.vn/29‑
diem‑moi‑mon‑la‑do‑vao‑lop‑10‑cong‑lap‑o‑nghe‑an‑ar564480.html
Hoang, T. (2022). 239 students achieving 10 in the National Examination for General
Education in Nghe An. https://congthuong.vn/nghe‑an‑239‑thi‑sinh‑dat‑diem‑10‑ky‑
thi‑tot‑nghiep‑trung‑hoc‑pho‑thong‑215237.html
Kern, R. (2006). Perspectives on technology in learning and teaching languages. TESOL
Quarterly, 40(1), 183–210. https://doi.org/10.2307/40264516
Le, V. C. (2008). Teachers’ beliefs about curricular innovation in Vietnam: A preliminary
study. ELT curriculum innovation and implementation in Asia (pp. 191–216). https://
bit.ly/45FE6HT
Le, V. C. (2018). Remapping the teacher knowledgebase of language teacher education:
A Vietnamese perspective. Language Teaching Research, 24(1), 71–81. https://doi.org/10.1177/1362168818777525
Means, B., Toyama, Y., Murphy, R., & Baki, M. (2013). The effectiveness of online and
blended learning: A meta‑analysis of the empirical literature. Teachers College Record,
115(3), 1–47. https://doi.org/10.1177/016146811311500307
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A frame‑
work for integrating technology in teachers’ knowledge. Teachers College Record, 108(6),
1017–1054. https://doi.org/10.1111/j.1467‑9620.2006.00684.x
Nguyen, A. (2023). Unraveling EMI as a predictor of English proficiency in Vietnamese
higher education: Exploring learners’ backgrounds as a variable. Studies in Second
Language Learning and Teaching, 13(2), 347–371. https://doi.org/10.14746/ssllt.38278
Nuby, M. H. M., Rashid, R. A., Rahman, A. R. M. M., & Hasan, M. R. (2020).
Communicative language teaching in Bangladeshi rural schools. Universal Journal of
Educational Research, 8(2), 622–630. https://doi.org/10.13189/ujer.2020.080235
Pawlak, M., & Kruk, M. (2022). Individual differences in computer assisted language learning
research. Taylor & Francis.
Phuong, H. Y. (2017). Improving English language teaching in Vietnam: Voices from uni‑
versity teachers and students. Current Politics and Economics of South, Southeastern, and
Central Asia, 26(3), 285–310. https://bit.ly/466Rwg9
Rynhart, G., & Chang, J. (2014). The road to the ASEAN economic community 2015 the
challenges and opportunities for enterprises and their representative organizations. ILO
Working Papers. International Labour Organization.
Tran, T. T. (2023). Online‑Merge‑Offline Model for distance learning in English language
education: A case study. Vietnam Journal of Education, 7(3), 215–226. https://doi.
org/10.52296/vje.2023.251
Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes.
Harvard University Press.
Chapter 12
EFL Teachers' Informal Digital Learning of English
12.1 Introduction
Willingness to communicate (WTC) in an L2 (second or foreign language) has been a topic of interest for years (Khatib & Nourzadeh, 2015; Lee & Hsieh,
2019; Lee & Drajati, 2019; Mulyono & Saskia, 2021; Sato, 2019; Wang & Tseng,
2020; Zarrinabadi, 2014; Zarrinabadi et al., 2021; Zulaiha & Mulyono, 2020). The
term was initially introduced by McCroskey and Baer (1985) to denote individu‑
als’ propensity to communicate in their first language (L1) environment. Later, the
term was adopted in the context of second language acquisition by Macintyre et al.
(1998), subsequently becoming recognized as L2 WTC. They interpreted L2 WTC
as the conscious decision of an individual to engage, or not engage, in a particular
2016). Addressing this research gap, this study aims to examine the types of IDLE
activities conducted by Indonesian English L2 teachers and to what extent these
activities can predict their IWTC. Specifically, it seeks to address the research ques‑
tions below:
12.2 Methods
12.2.1 The Study Context and Participants
This study adopted a survey design to explore Indonesian EFL teachers’ IDLE activ‑
ities and the extent to which the activities could be used to predict their IWTC.
Data was collected through a convenience sampling technique, with participation
from 292 primary and secondary school educators. However, following a screening
process, only data from 229 educators proved valid and was subsequently analysed.
Table 12.1 outlines the demographic profiles of the participating teachers.
Gender: Male 63 (27.5%)
Age: <30 years old 150 (62.5%)
Teaching experience: Under 5 years 145 (63.3%)
Teacher's status: Pre‑service teachers 139 (60.7%)
Teaching level: Primary 60 (26.2%)
The valid data was then analysed to derive an item and person summary and to perform Wright map analysis. A reliability analysis was also performed on the data, showing a high level of reliability (α > 0.9). Additionally, high reliability was observed in the sub‑scales for both IWTC and receptive IDLE activities (Cronbach α > 0.9), and the sub‑construct for productive IDLE activity also reached a high level (α > 0.8) (Table 12.3).
To examine the interplay between the participants’ IDLE activities, demo‑
graphic backgrounds, and their IWTC, a Rasch‑based multiple regression analysis
was performed. In accordance with previous research (Lee & Hsieh, 2019;
Mulyono & Saskia, 2021), two regression models were developed and applied. The
initial model was designed to exclusively investigate the link between IDLE and
IWTC, while the second model analysed the correlation of teachers' IDLE and demographic data with their IWTC. Through these models, the multiple factors impacting
the teachers’ IWTC could be comprehensively examined.
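As a sketch of this two‑model design (variable and file names below are hypothetical, and this is not the authors' Rasch software output), the Rasch person measures in logits could be regressed first on the IDLE sub‑scales and then on IDLE plus demographics with statsmodels:

import pandas as pd
import statsmodels.formula.api as smf

# One row per teacher: Rasch person measures (logits) for IWTC and for the
# receptive/productive IDLE sub-scales, plus demographic categories.
df = pd.read_csv('teachers.csv')  # hypothetical file

# Model 1: IDLE activities only.
m1 = smf.ols('iwtc ~ idle_receptive + idle_productive', data=df).fit()

# Model 2: IDLE activities plus demographic predictors as categorical dummies.
m2 = smf.ols(
    'iwtc ~ idle_receptive + idle_productive '
    '+ C(gender) + C(age_group) + C(experience) + C(status) + C(level)',
    data=df,
).fit()

print(m1.rsquared, m2.rsquared)  # the chapter reports 0.149 and 0.211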
Figure 12.1 Item‑person Wright map of the IDLE scale logit values. Mp: person mean; Sp: one standard deviation of the person mean; Tp: two standard deviations of the person mean; Mi: item mean; Si: one standard deviation of the item mean; Ti: two standard deviations of the item mean.
Two regression models were compared: Model 1 (F = 39.8, p < 0.001; R² = 0.149) and Model 2 (F = 5.83, p < 0.001; R² = 0.211). Among the demographic predictors in Model 2, gender (female vs. male) had β = 0.280, SE = 0.264, t = 1.932, and teaching experience (5–10 years vs. under 5 years) had β = 0.320, SE = 0.404, t = 1.450. Note: β = standardized estimate; SE = standard error; confidence interval = 95%; *p < 0.01.
Figure 12.2 Item‑person Wright map of the IWTC scale logit values. Mp: person mean; Sp: one standard deviation of the person mean; Tp: two standard deviations of the person mean; Mi: item mean; Si: one standard deviation of the item mean; Ti: two standard deviations of the item mean.
12.4 Conclusion
In conclusion, this investigation explored the correlation between the IDLE activi‑
ties and demographic factors of Indonesian EFL teachers, and their WTC in
English as a medium of instruction. A significant number of teachers were highly
engaged in deploying digital applications for informal English learning, with approximately 49% utilizing them extensively. A further 31.87% made moderate use
of these resources while 19.21% were low‑level users. Most teachers targeted the
augmentation of receptive language skills through activities such as listening to
English songs and consuming YouTube content. Interestingly, the desire to boost
speaking and writing competencies was noticeably lower. Very few teachers ven‑
tured into the realm of conversational English with native and non‑native speak‑
ers on social platforms, and even fewer expressed interest in writing emails in English. More importantly, regardless of their age, gender, teaching experience,
status, or the level of school they taught at, the engagement in IDLE activities
among teachers remained consistent across demographic variables. This indicates
that IDLE activities are a standard practice amongst teachers, regardless of demo‑
graphical differences.
In addition, through multiple regression analysis, the study’s aim was to discern
if statistical differences exist in teachers’ willingness to utilize English based on
their involvement in IDLE activities and demographic factors. The statistical analyses yielded some insights from the models, yet they did not entirely account for the variations in teachers' WTC in English. This research highlights the link between IDLE behaviours and WTC, while remaining mindful
of the study’s limitations. The models’ relatively subdued explanatory capacity sug‑
gests the existence of other determinants influencing teachers’ instructional com‑
munication. Furthermore, the narrow demographic focus of the study and reliance
on self‑reported data potentially restrict the findings’ applicability on a wider scale.
Despite constraints, this study advances our comprehension of IDLE’s impact on
English language teachers’ WTC. This prompts further insights into the context of
language instruction and teacher development.
References
Alrabai, F. (2022). Teacher communication and learner willingness to communicate in
English as a foreign language: A structural equation modeling approach. Saudi Journal
of Language Studies, 2(2), 45–67. https://doi.org/10.1108/SJLS‑03‑2022‑0043
Amerstorfer, C. M., & Freiin von Münster‑Kistner, C. (2021). Student perceptions of
academic engagement and student‑teacher relationships in problem‑based learning.
Frontiers in Psychology, 12, 713057. https://www.frontiersin.org/articles/10.3389/
fpsyg.2021.713057
Belhiah, H., & Elhami, M. (2015). English as a medium of instruction in the Gulf: When
students and teachers speak. Language Policy, 14(1), 3–23. https://doi.org/10.1007/
s10993‑014‑9336‑9
Briggs, J. G., Dearden, J., & Macaro, E. (2018). English medium instruction: Comparing
teacher beliefs in secondary and tertiary education. Studies in Second Language Learning
and Teaching, 8(3), 673–696. https://doi.org/10.14746/ssllt.2018.8.3.7
Bui, T. H. (2022). English teachers’ integration of digital technologies in the class‑
room. International Journal of Educational Research Open, 3, 100204. https://doi.
org/10.1016/j.ijedro.2022.100204
Jensen, S. H. (2017). Gaming as an English language learning resource among young chil‑
dren in Denmark. Calico Journal, 34(1), 1–19.
Jurkovič, V. (2019). Online informal learning of English through smartphones in Slovenia.
System, 80, 27–37. https://doi.org/10.1016/j.system.2018.10.007
Khatib, M., & Nourzadeh, S. (2015). Development and validation of an instructional
willingness to communicate questionnaire. Journal of Multilingual and Multicultural
Development, 36(3), 266–283. https://doi.org/10.1080/01434632.2014.914523
Lee, J. S. (2019a). Informal digital learning of English and second language vocabulary out‑
comes: Can quantity conquer quality? British Journal of Educational Technology, 50(2),
767–778. https://doi.org/10.1111/bjet.12599
Lee, J. S. (2019b). Quantity and diversity of informal digital learning of English. Language Learning and Technology, 23(1), 114–126. http://hdl.handle.net/10125/44675
Lee, J. S. (2020). Informal digital learning of English and strategic competence for cross‑cul‑
tural communication: Perception of varieties of English as a mediator. ReCALL, 32(1),
47–62. https://doi.org/10.1017/S0958344019000181
Lee, J. S., & Drajati, N. A. (2019). Affective variables and informal digital learning of
English: Keys to willingness to communicate in a second language. Australasian Journal
of Educational Technology, 35(5), 168–182. https://doi.org/10.14742/ajet.5177
Lee, J. S., & Dressman, M. (2018). When IDLE hands make an English workshop: Informal
digital learning of english and language proficiency. TESOL Quarterly, 52(2), 435–
445. https://doi.org/10.1002/tesq.422
Lee, J. S., & Hsieh, J. C. (2019). Affective variables and willingness to communicate of EFL
learners in in‑class, out‑of‑class, and digital contexts. System, 82, 63–73.
Lee, J. S., & Lee, K. (2019). Informal digital learning of English and English as an interna‑
tional language: The path less traveled. British Journal of Educational Technology, 50(3),
1447–1461. https://doi.org/10.1111/bjet.12652
Lee, J. S., & Sylvén, L. K. (2021). The role of informal digital learning of English in Korean
and Swedish EFL learners’ communication behaviour. British Journal of Educational
Technology, 52(3), 1279–1296. https://doi.org/10.1111/bjet.13082
Lee, J. S., Xie, Q., & Lee, K. (2024). Informal digital learning of English and L2 willing‑
ness to communicate: roles of emotions, gender, and educational stage. Journal of
Multilingual and Multicultural Development, 45(2), 596–612. https://doi.org/10.108
0/01434632.2021.1918699.
Macintyre, P. D., Burns, C., & Jessome, A. (2011). Ambivalence about communicat‑
ing in a second language: A qualitative study of French immersion students’ will‑
ingness to communicate. Modern Language Journal, 95(1), 81–96. https://doi.
org/10.1111/j.1540‑4781.2010.01141.x
Macintyre, P. D., Dornyei, Z., Clément, R., & Noels, K. A. (1998). Conceptualizing willingness
to communicate in a L2: A situational model of L2 confidence and affiliation. Modern
Language Journal, 82(4), 545–562. https://doi.org/10.1111/j.1540‑4781.1998.tb05543.x
McCroskey, J. C., & Baer, J. E. (1985). Willingness to communicate: The construct and
its measurement. Paper Presented at the Annual Meeting of the Speech Communication
Association (Denver, CO, November 7–10).
Mulyono, H., Ningsih, S. K., Fausia, F., Setiawan, H., Ibarra, F. P., & Mukminin, A. (2023).
Developing an academic writing creativity and self‑efficacy among Indonesian TVET
instructors: Evaluating an online genre analysis‑based academic writing workshop.
Cogent Education, 10(2), 2237319. https://doi.org/10.1080/2331186X.2023.2237319
Mulyono, H., & Saskia, R. (2021). Affective variables contributing to Indonesian EFL
students’ willingness to communicate within face‑to‑face and digital environments.
Cogent Education, 8(1), 1911282. https://doi.org/10.1080/2331186X.2021.1911282
Mulyono, H., Saskia, R., Arrummaiza, V. S., & Suryoputro, G. (2020). Psychometric assess‑
ment of an instrument evaluating the effects of affective variables on students’ WTC in
face‑to‑face and digital environment. Cogent Psychology, 7(1), 1823617. https://doi.org
/10.1080/23311908.2020.1823617
Musthafa, B., Hamied, F. A., & Zein, S. (2018). Enhancing the quality of Indonesian teach‑
ers in the ELF era. In S. Zein (Ed.), Teacher Education for English as a Lingua Franca
(pp. 175–190). Routledge.
Ningsih, S. K., Mulyono, H., Ar Rahmah, R., & Fitriani, N. A. (2021). A Rasch‑based
validation of Indonesian EFL teachers’ received online social support scale. Cogent
Education, 8(1), 1957529. https://doi.org/10.1080/2331186X.2021.1957529
Othman, J., & Saat, R. M. (2009). Challenges of using English as a medium of instruc‑
tion: Pre‑service Science teachers’ perspective. Asia‑Pacific Education Researcher, 18(2),
307–316. https://doi.org/10.3860/taper.v18i2.1331
Pun, J. K. H., & Thomas, N. (2020). English medium instruction: Teachers’ challenges and
coping strategies. ELT Journal, 74(3), 247–257. https://doi.org/10.1093/elt/ccaa024
Rusland, S. L., Jaafar, N. I., & Sumintono, B. (2020). Evaluating knowledge creation pro‑
cesses in the Royal Malaysian Navy (RMN) fleet: Personnel conceptualization, partici‑
pation and differences. Cogent Business & Management, 7(1), 1785106. https://doi.org
/10.1080/23311975.2020.1785106
Sato, R. (2019). Fluctuations in an EFL teacher’s willingness to communicate in an
English‑medium lesson: An observational case study in Japan. Innovation in Language
Learning and Teaching, 13(2), 105–117. https://doi.org/10.1080/17501229.2017.13
75506
Sumintono, B., & Widhiarso, W. (2014). Aplikasi Model Rasch untuk Penelitian Ilmu‑ilmu
Sosial, (B. Trim (Ed.), 2nd edition). Trim Komunikata.
Suryoputro, G., Rahmanda, A., Sulthonah, F. A., Mulyono, H., & Ningsih, S. K. (2023).
Measuring Indonesian EFL teachers’ digital creativity: Validation of Hoffmann’s digital
creativity scale. International Journal of Information and Education Technology, 13(4),
763–771. https://doi.org/10.18178/ijiet.2023.13.4.1865
Taherian, T., Shirvan, M. E., Yazdanmehr, E., Kruk, M., & Pawlak, M. (2023). A longitu‑
dinal analysis of informal digital learning of English, willingness to communicate and
foreign language boredom: A latent change score mediation model. The Asia‑Pacific
Education Researcher. https://doi.org/10.1007/s40299‑023‑00751‑z
Wang, C., & Tseng, W. T. (2020). Toward an instructional WTC‑mediated model for
L2 classroom interaction. SAGE Open, 10(3), 1–16. https://doi.org/10.1177/
2158244020943524
Zadorozhnyy, A., & Lee, J. S. (2023). Informal digital learning of English and willingness to
communicate in a second language: Self‑efficacy beliefs as a mediator. Computer Assisted
Language Learning, 1–21. https://doi.org/10.1080/09588221.2023.2215279
Appendix B: Feedback
Questionnaire on
ChatGPT
1. Usability:
• On a scale of 1–10, how easy was it to use ChatGPT for English practice?
(1 = very difficult; 10 = very easy) 1 2 3 4 5 6 7 8 9 10
• Did you encounter any technical issues while using ChatGPT? (Yes/No)
• If yes, please specify: _____________________________________
2. Effectiveness:
• On a scale of 1–10, how effective do you find ChatGPT in aiding your
English learning? (1 = not effective at all; 10 = highly effective)
1 2 3 4 5 6 7 8 9 10
• Do you feel that ChatGPT provided feedback that helped improve your
English skills? (Yes/No)
• Would you recommend ChatGPT to fellow students for English practice?
(Yes/No)
3. Engagement:
• On a scale of 1–10, how engaging did you find the ChatGPT sessions?
(1 = not engaging at all; 10 = highly engaging)
1 2 3 4 5 6 7 8 9 10
• What features of ChatGPT did you find most engaging?____________
• Were there any features or aspects of ChatGPT that you found distracting
or unhelpful? (Yes/No) _______________________________________
4. Open‑ended Feedback:
• What improvements would you suggest for the ChatGPT platform for
English learning? ___________________________________________
________________________________________________________
• Share any additional comments or experiences you had while using
ChatGPT for your English practice. ____________________________
________________________________________________________