
Computers and Education Open 6 (2024) 100184

Contents lists available at ScienceDirect

Computers and Education Open


journal homepage: www.sciencedirect.com/journal/computers-and-education-open

Combining human and artificial intelligence for enhanced AI literacy in higher education
Anastasia Olga (Olnancy) Tzirides a, *, Gabriela Zapata b, Nikoleta Polyxeni Kastania a,
Akash K. Saini a, Vania Castro a, Sakinah A. Ismael a, Yu-ling You c, Tamara Afonso dos Santos d,
Duane Searsmith a, Casey O’Brien a, Bill Cope a, Mary Kalantzis a
a University of Illinois Urbana-Champaign, USA
b The University of Nottingham, United Kingdom
c National Changhua University of Education, Taiwan
d Universidade Federal de Rondônia, Brazil

ARTICLE INFO

Keywords:
Adult learning
Cooperative learning
Collaborative learning
Human-computer interface
Post-Secondary Education
Teaching/Learning Strategies

ABSTRACT

This paper seeks to contribute to the emergent literature on Artificial Intelligence (AI) literacy in higher education. Specifically, this convergent, mixed methods case study explores the impact of employing Generative AI (GenAI) tools and cyber-social teaching methods on the development of higher education students’ AI literacy. Three 8-week courses on advanced digital technologies for education in a graduate program in the College of Education at a mid-western US university served as the study sites. Data were based on 37 participants’ experiences with two different types of GenAI tools: a GenAI reviewer and GenAI image generator platforms. The application of the GenAI review tool relied on precision fine-tuning and transparency in AI-human interactions, while the AI image generation tools facilitated the participants’ reflection on their learning experiences and AI’s role in education. Students’ interaction with both tools was designed to foster their learning regarding GenAI’s strengths and limitations, and their responsible application in educational contexts. The findings revealed that the participants appeared to feel more comfortable using GenAI tools after their course experiences. The results also point to the students’ enhanced ability to understand and critically assess the value of AI applications in education. This study contributes to existing work on AI in higher education by introducing a novel pedagogical approach for AI literacy development, showcasing the synergy between humans and artificial intelligence.

1. Introduction

Artificial intelligence (AI) has experienced several advancements since its inception and publicization in the 1950s [1]. This was seen most recently in November 2022 with the emergence of large language model (LLM) chatbots, such as ChatGPT, Gemini, and Meta.ai. AI has become a pervasive part of everyday life in various domains through the massive popularity and use of personal devices. Education is a primary area of implementation due to its immense potential to transform student learning experiences [2].

The incorporation of AI into education has been occurring for approximately sixty years and has been changing how we interact with the world—first with expert programmed learning systems [3], then with systems based on hand-annotated machine learning [4], and more recently, with self-supervised, reinforcement learning [3,5]. Broadly, AI in education can be categorized into three main areas: learning “for,” “about,” and “with” AI [6]. While there is a substantial body of research focused on learning “about” and “for” AI, fewer empirical studies have explored applications [7], especially since the emergence of Generative AI (GenAI). GenAI promises to revolutionize pedagogical approaches through personalized and adaptive learning, aiming to enhance educational outcomes [2,8,9]. Consequently, it is important to investigate issues of teaching methodologies, institutional frameworks, access, ethics, equity, bias, and sustainability [10,11] for successful implementation and adaptation. The use of GenAI by both teachers and students also highlights the need for a new type of literacy - AI literacy - crucial for the effective and responsible utilization of these emerging technologies.

AI literacy is a multifaceted concept that encompasses not only the understanding of AI technologies but also their responsible and effective

* Corresponding author.
E-mail address: [email protected] (A.O.(O. Tzirides).

https://doi.org/10.1016/j.caeo.2024.100184
Received 14 February 2024; Received in revised form 9 May 2024; Accepted 13 May 2024
Available online 14 May 2024
2666-5573/© 2024 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).

use, along with the application of critical thinking to their design and implementation [12–15]. Laupichler et al. [7] and Ng et al. [16] describe this type of literacy as the capacity to critically understand, evaluate, and apply AI technologies, without the prerequisite of creating AI models independently. This also involves a spectrum of skills that enable individuals to effectively communicate and cooperate with AI systems and employ AI tools in various aspects of life, including online spaces, domestic environments, and the workplace [17]. These skills have become particularly relevant when GenAI technologies are considered. In light of this, scholars such as Steinbauer et al. [18] and Ng et al. [16] contend that the development of AI literacy should be an integral aspect of K-12 education. Indeed, this type of literacy has become vital for navigating life, academic content, and employment in an evolving, AI-dependent society. Based on this need, recent scholarly work has delved into the field of AI literacy in higher education, uncovering a nuanced landscape of both opportunity and necessity.

For example, Laupichler et al. [7] carried out a scoping literature review on the topic of AI literacy in higher and adult education. Specifically, the scholars aimed to evaluate the current state of the literature, identify thematic foci and recent research trends, and provide recommendations for research and practice. The review focused on 30 articles published since 2021 that explicitly dealt with theoretical or practical aspects of AI literacy, particularly teaching AI skills to non-experts. The analysis undertaken showed that the scope and selected methods reported in the chosen papers were broad, with most works reporting on program assessment and only seven studies (23 %) based on empirical work. Consequently, Laupichler and colleagues emphasized the need for further research in the area, as well as the clarification of relevant terminology and the identification of suitable content for all students, regardless of their backgrounds.

More recently, Sperling et al. [19] conducted a similar review, but the focus this time was on the literature that conceptualizes AI literacy in relation to teachers’ diverse forms of professional knowledge, specifically in the context of teacher education. The scholars identified 34 papers that met their inclusion criteria and analyzed them using a combination of quantitative and qualitative methods. The analysis showed that existing studies covered a wide range of topics and used different methodological approaches, but they did not broadly address important aspects of teachers’ professional knowledge. For instance, Sperling and colleagues uncovered research gaps in connection with educators’ practical and ethical knowledge, suggesting that addressing these gaps could contribute to a more comprehensive understanding of AI literacy in teaching as well as inform AI literacy education in teacher programs.

Cardon et al.’s work [20] also focused on teachers and AI technologies, investigating the challenges and opportunities of AI-assisted writing in business communication as viewed by university instructors. The study involved the participation of 343 communication instructors to understand their opinions on AI-assisted writing and its impact on instruction. The results revealed that the participants believed AI-assisted writing would be widely adopted in the workplace and would require significant changes to instruction, identifying also challenges and benefits. Similarly to Sperling et al. [19], based on their findings, Cardon and colleagues highlighted the importance of developing AI literacy both in connection with educators and students, which would entail the need for a focus on application, authenticity, accountability, and agency.

AI literacy has also been investigated in connection with educational programs or curricula. Specifically, research in this area has examined instructional practices that might result in the development of students’ AI literacy. For instance, Kong et al. [21] describe the design, implementation, and evaluation of an AI literacy course for university students with diverse backgrounds, the objective of which was to promote AI literacy and empower participants to understand and work with AI concepts. The course employed a flipped classroom learning approach and focused on conveying AI concepts, rather than technical details, through self-directed reading materials and hands-on experiences. The participants were 82 women and 38 men, who were assessed on their progress in understanding AI concepts. Data were gathered through pre- and post-course surveys and tests, as well as focus group interviews. The findings showed significant improvements in students’ understanding of AI concepts, AI literacy, and empowerment, indicating that such educational interventions can bridge gaps in AI knowledge across genders and disciplines. This study also highlights the potential for AI literacy courses to empower a broader range of students as well as foster inclusive education with AI technologies.

Fathahillah et al. (2023) have also investigated AI literacy in connection with university students, examining the opinions of 156 students enrolled in web programming courses relying on blended learning during the COVID-19 pandemic in a department of informatics and computer engineering. The study used a proportional sampling method to distribute a Google survey that probed the participants’ views on various aspects of AI literacy. The results showed that the students had a moderate level of understanding of AI concepts and applications. Additionally, Fathahillah and colleagues posited that understanding the advantages and disadvantages of AI, the implications of AI use, and the ethical and legal aspects of AI might have a significant impact on data security and privacy in blended learning instruction. The scholars also believe that further research is needed to explore the complex relationships between AI literacy, ethics, law, and data security and privacy in blended learning models.

AI literacy development has also been explored in connection with language learning in higher education contexts. For example, Hwang et al. [15] applied Oppenlaender’s [22] taxonomy of GenAI prompt modifiers in the examination of the role of prompt literacy in second language (L2) university classes. This study involved the participation of 30 L2 English students in Korea, who worked on a GenAI-powered project to create visual representations of English words. Learners showed their understanding of new L2 words through prompts they developed to visually represent the meaning conveyed by the chosen terms. The analysis of the prompts resulting from learners’ work showed that they exhibited the same iterative nature reported by Oppenlaender, which entailed exerting various changes to prompts until the desired results were achieved. The findings in Hwang et al.’s study also pointed to noticeable improvements in the participants’ vocabulary learning strategies. Additionally, participation in this work appears to have enhanced the students’ understanding of human-AI collaboration. This study highlighted the possible contributions of GenAI to L2 education, and it also showed the importance of prompt literacy in the AI era.

In their reflection on the role of GenAI in L2 writing education, Kang and Yi [23] also identified “fine-tuned prompt literacy” as a critical competency for students’ growth as effective AI users and multimodal communicators. For example, the researchers described ways in which GenAI can aid in developing multimodal and fine-tuned prompt literacy in L2 writers, and they emphasized the need to offer learners opportunities to critically assess and create AI prompts effectively. Both Hwang et al.’s [15] study and Kang and Yi’s [23] reflection underscore the potential of AI in fostering a more dynamic literacy landscape, enabling university students to create more nuanced and contextually appropriate outputs.

The studies discussed in the previous paragraphs have offered information on important aspects related to AI literacy in connection with both educators and students in higher education. Much, however, is still needed, particularly within fields such as social arts and history, media, and education, which have not been widely examined. The purpose of this work is to contribute to the existing body of work on AI literacy in higher education by addressing calls for more empirical work (e.g., [7,19]) as well as by bridging existing gaps. To do so, we investigate what students in a postgraduate education program believe are effective ways of developing their AI literacy. Specifically, this study focuses on graduate university students’ perspectives, with the objective of answering the following research question: How do university students’ exposure to


and work with AI review mechanisms and AI-image generation tools influence their perceived AI literacy development? Through this exploration, this research aims to set the stage for a future where AI literacy is not only technical expertise but rather a holistic understanding that aligns with humanistic values and ethical considerations.

To achieve this goal, the study explores machine and human collaboration through review mechanisms within the context of student AI literacy. This investigation is grounded in the notion of cognitive prostheses, which views digital technologies as learning process enrichment. That is, technological developments, such as computers, smartphones, and AI tools, not only increase the accessibility and capabilities of cognitive tools and shape how individuals interact but can also complement and augment human cognition and the capacity to convey meaning [24]. Grigsby [25] believes that the human cognitive system is limited in its capacity to perform tasks such as memory retention, attention span, sensory processing, comprehension, and visualization. Therefore, by harnessing the power of AI, we can augment human cognitive skills and create a symbiotic relationship between humans and AI. As a result, AI can be leveraged as a cognitive prosthesis to create engaging experiences that seamlessly enhance our understanding and capabilities.

This study was conducted at a public university in the US, and it involved the participation of students from three 8-week post-graduate courses. The focus of these classes was the use of advanced digital technologies in education. Two key applications were involved in this work: (1) the use of a specialized AI review tool in conjunction with human peer reviews for assessing complex essays, and (2) the employment of AI-based image generation tools for obtaining reflections on student learning experiences. In the next sections of this paper, we introduce the study and discuss its results. This is followed by the presentation of pedagogical suggestions and the limitations of this work.

2. Methods

2.1. Participants

This study was conducted across three 8-week online courses within the College of Education at a midwestern university in the United States. The recruitment of participants for this work was carried out through course announcements. Even though the study was based on curricular content and activities, participation in it was voluntary and not a requirement for course enrolment. Sixty-one students were enrolled in the courses; however, only 37 volunteered to participate in the study, completed all parts of the data collection process, and were considered in the analysis.

These participants were of mixed demographics, with the majority of them being white females between 25 and 45 years old (Table 1). All of them were pursuing graduate academic degrees ranging from certificates (5 %) and master’s (65 %) to doctoral degrees (30 %). Their main academic backgrounds were ‘Education’ (37.8 %) and ‘Humanities’ (16.2 %) (Table 2), and they were all concurrently maintaining professional careers as education professionals, either as instructors, administrators in educational institutions, instructional designers, or consultants.

Table 1
Participants’ Demographics.

Age     Percentage   Race/Ethnicity    Percentage   Gender   Percentage
18–24   11 %         White             67.6 %       Male     30 %
25–35   38 %         Hispanic/Latino   16.2 %       Female   70 %
35–45   30 %         Asian              8.1 %
45–55   16 %         Multiracial        2.7 %
55+      5 %         N/A                2.7 %

Table 2
Participants’ Academic Backgrounds.

Academic Background                                               Percentage
Education                                                         37.8 %
Humanities (English, Literature, International Studies)           16.2 %
Science (Biology, Chemistry, Mathematics)                         10.8 %
Business/Administration (Economics, Human Resources Management)    8.1 %
Engineering (Mechanical Engineering)                               2.7 %
Other (Industrial Design, Psychology, Actuarial Science)          10.8 %
N/A                                                               13.5 %

The participants reported varied levels of exposure to AI technologies prior to the study. While 22 % stated they were ‘Very familiar’ with AI and machine learning concepts, 54 % claimed moderate familiarity. These levels of familiarity are also reflected in the fact that 57 % of participants reported that they had used an AI tool, such as ChatGPT, Bing, etc., for assessment purposes in academic or professional settings. Notably, 22 % admitted to being ‘Not at all familiar’ with AI and machine learning concepts. This lack of familiarity aligns with the 73 % of participants who had no prior experience using AI image generation tools. Furthermore, 43 % of participants reported never using AI tools in their daily or professional lives, while 27 % reported usage at a cadence of ‘Once a week.’

2.2. Educational context

This study focused on students’ experience and exposure to AI concepts and tools through graduate-level education courses and their required activities. The study targeted three courses taught during one semester through a joint weekly live session and various asynchronous activities. Collectively, these three classes offered a comprehensive exploration of the dynamics between learning, technology, and pedagogy. Course A contrasted machine and human learning, delving into AI’s capabilities and implications in education, while Course B bridged learning theories and educational technology, critically exploring paradigm shifts in psychology and their practical application in digital learning environments. Course C examined diverse pedagogical approaches and the knowledge acquisition process across various educational contexts, highlighting the role of literacy and critical engagement with learning materials. These courses were pre-existing curriculum components and not designed specifically for research purposes. The research opportunity presented itself when students enrolled in these classes, allowing for the observation and analysis of their engagement with AI tools and concepts. Jointly, these classes offered students a complex view of the multifaceted intersection connecting technological advancement and educational practice.

In order to explore students’ perceived development of their AI literacy in these classes, a holistic, cyber-social approach was followed, entailing the use of various digital tools and the implementation of diverse collaborative learning practices. These included the following: (a) the application of a social learning platform’s GenAI review tool designed and developed by our research lab, accompanied by tutorial videos to facilitate student comprehension of AI tools and their functionality (see also Section 2.3.); (b) the employment of GenAI image generation tools for students’ reflections on their experience with AI and peer reviews; and (c) students’ critical exposure to topics related to educational technologies and AI through course resources, live discussions, peer lightning presentations, and project creation.

The projects that students were expected to complete in the courses in which they were enrolled (i.e., the three classes of focus) consisted of multimodal critical pieces examining technology, educational theory, and practice. Students chose their topics and then incrementally worked on their projects throughout the semester, receiving both GenAI and peer feedback at different points of the development process. Both


learners’ work and the AI and human reviews were based on a rubric1 drawn from a schema grounded in an epistemological approach to learning (Fig. 1), not solely focusing on cognition but more broadly on knowledge-making activities additionally involving material practices, embodied activity, and socio-emotional engagement [26,27]. Upon submission of a complete draft, students generated an AI review based on this elaborated rubric. Then, they revised their work based on the AI feedback received. Once revisions were finalized, they submitted their work for peer review. Each student further reviewed the work of two other students against the same rubric used by the GenAI tool. Students finally compared the human and machine feedback and then reflected on the review process before their final revision and submission. Fig. 2 showcases the steps followed in the project creation process.

2.3. AI review tool

To design and develop the AI review tool used in this study, a novel approach was implemented, termed “cyber-social research” ([3], p. 88). This methodology, inspired by modern software development practices, synergizes “agile research” approaches [28] with educational design research techniques [29]. The unique approach of this present study involves collaboration between higher education students and research team members in the iterative development of the tool. The development process is dynamic, with software updates deployed nightly, influenced by user interactions from the preceding day, leveraging agile, cyber-social research methods and practices [3].

The AI review tool (Fig. 3), a novel addition to the social learning platform used in this work, interfaces with OpenAI’s GPT through an application programming interface (API). This integration enables the provision of automated feedback on the multimodal texts developed by students in their courses, complementing peer and instructor feedback based on the criteria of the course project (see rubric1). The tool is designed to accommodate instructors’ input of various assessment rubrics, offering AI-generated evaluations of student submissions.

The instructional scaffolding for this tool was augmented with a comprehensive video tutorial, depicted in Fig. 4, which detailed the use of the AI review tool. The video provided step-by-step instructions, demonstrating the entire process—from signing into the platform to obtaining the AI-generated review. Furthermore, the tutorial explained the underlying mechanics of the tool, such as the use, as prompts, of the same rubric criteria students employ to construct their works. It also referred to the tool’s calibration for providing targeted feedback based on a knowledge base pertinent to the courses. The tutorial underlined the necessity for a critical stance towards the AI-generated reviews, recommending their use in conjunction with peer assessments to optimize feedback for student projects.

The AI review tool we designed distinguishes itself through strategic enhancements to the large language model (LLM) it employs. This includes prompt engineering, precision fine-tuning, insistence on transparency, human moderation, and the integration of high-level disciplinary ontologies as supplementary knowledge processes. These modifications were considered vital for the effective application of GenAI in educational environments.

Central to this effort is prompt engineering, which is the art of crafting queries for the chatbot to interact with the LLM, emphasizing academic literacies and the structure of knowledge representation. Generative AI excels at automating genre-specific responses [30], and the tool created and used in this research exploits this by guiding the AI to analyze student submissions based on genre characteristics defined in a rubric, thereby providing targeted feedback. Fine-tuning the process then involves curating the LLM with academically valuable texts [31], enhancing its output by prioritizing these over the vast array of less reliable internet sources. This implementation enriches the LLM with extensive scholarly writings, created by the research team and graduate students, aiming to significantly improve the quality of its outputs.

Another key aspect is that transparency and human moderation allow the LLM interactions to be visible to users and, furthermore, subject AI suggestions to human review. The researchers’ AI tool operates with a pedagogically explicit rubric, mirroring the one provided to students. This ensures that human evaluators—whether peers, the students themselves, or instructors—always cross-check AI-generated advice. In addition, ontology supplements introduce machine-understandable domain expertise, shaping the LLM’s analysis with structured human knowledge. Drawing on the knowledge processes schema outlined in Fig. 1 [32], the suite of pedagogical strategies presented in Fig. 2 was implemented. These strategies, which emphasize collaborative and reflective learning practices, were applied through the designed AI tool, aiming to enhance the pedagogical repertoire of educators and support learners in achieving specific educational outcomes. The effectiveness of these strategies is expected to vary depending on their application and adaptation across various disciplinary and educational contexts.

Through all these recalibrations, the aim is to make the AI not just a tool but a collaborator in the educational process, one that respects the nuances of discipline-specific teaching and learning and one that supports the expansion of pedagogical and knowledge repertoires within academic settings.

2.4. Procedures

This investigation employed case study methodology [33,34] using a convergent, mixed-methods approach [35,36] to explore students’ perceptions of their AI literacy development in higher education. The mixed-methods approach, integrating quantitative Likert scale data with qualitative insights from open-ended questions and student reflections, effectively addresses the limitations of Likert scales by providing deeper insights into the evolving perceptions and experiences of participants. Since our aim was only to investigate students’ views, no other methods were employed to assess AI literacy growth. That is, in this work, our focus was mostly pedagogical, as we sought to unveil technical and methodological strategies to enhance graduate students’ AI literacy based on their own experiences as well as suggest ways in which artificial and human intelligence can be leveraged to effectively achieve this goal.

The first source of data in this study consisted of pre- and post-course surveys that probed into participants’ perceived AI literacy development in the courses of focus. The surveys were designed and distributed through Qualtrics [37], and they were administered at the onset of the semester (pre-course survey) and during its concluding phase (post-course survey).

The pre-course survey (see Appendix A, Table A.1) included five questions aiming to gauge participants’ experience with AI. Students were first asked to rate their familiarity with AI and machine learning concepts through a single-select Likert scale ranging from 1 (not at all familiar) to 5 (extremely familiar), aiming to ascertain their baseline knowledge. Subsequent questions inquired whether participants had previously employed GenAI tools, like ChatGPT, Bard, or Bing, for reviewing academic or professional work and, similarly, if they had experience using GenAI image generation tools such as DALL-E, Midjourney, or Stable Diffusion for educational or work purposes. These single-select, closed questions—with response options of “No,” “Maybe,” or “Yes”—aimed to shed light on the participants’ hands-on experience with AI applications prior to the study. The pre-course survey also sought to measure participants’ self-assessed confidence in using AI tools to improve learning outcomes, as well as their confidence in crafting prompts for AI image generation tools. These two questions utilized a single-select Likert scale with options from 1 (not at all

1 For a schematic view of the rubric, check the following link: https://drive.google.com/file/d/1AeAXykz5uZ8pAXEkT6oJ-q_JLDrvHxLV/view?usp=sharing.


Fig. 1. Knowledge Processes Schema.

Fig. 2. Project Workflow with Human and AI Reviews.
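The AI-review step in this workflow rests on the rubric-as-prompt mechanism described in Section 2.3: the same rubric criteria students work from are embedded in the query sent to the LLM. A minimal sketch of that idea follows; the rubric criteria, prompt wording, and function name are illustrative assumptions, not the study’s actual tool.

```python
# Illustrative sketch of the rubric-as-prompt idea: the rubric criteria,
# prompt wording, and function name here are hypothetical, not the
# platform's real implementation.

RUBRIC = [
    "Clarity and coherence of the multimodal argument",
    "Critical engagement with educational theory",
    "Use of evidence and scholarly sources",
]

def build_review_prompt(rubric, submission):
    """Compose an LLM query that requests feedback on each rubric criterion."""
    criteria = "\n".join(f"{i}. {c}" for i, c in enumerate(rubric, start=1))
    return (
        "Review the student draft below against each rubric criterion.\n"
        "For every criterion, give specific, constructive suggestions\n"
        "rather than a grade.\n\n"
        f"Rubric:\n{criteria}\n\n"
        f"Draft:\n{submission}"
    )

prompt = build_review_prompt(RUBRIC, "AI can personalize feedback because ...")
# The platform would send `prompt` to the LLM through the provider's API
# and surface the response to the student alongside the peer reviews.
print(prompt.splitlines()[0])  # → Review the student draft below against each rubric criterion.
```

Keeping the review prompt and the student-facing rubric as one source of truth is what allows peers, instructors, and the AI to apply the same criteria to a submission.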

confident) to 5 (extremely confident), targeting the participants’ self-perceived proficiency and comfort level with AI tools.

The post-course survey (see Appendix A, Table A.2) comprised ten questions and targeted participants’ views and perceived learning outcomes concerning AI and machine learning concepts after completing the course. Specifically, the survey included four single-select Likert scale questions, which were closed questions allowing students to reflect on their level of familiarity, confidence, and perceived usefulness of AI tools on a scale from 1 (not at all) to 5 (extremely). These questions addressed the participants’ familiarity with AI and machine learning concepts, confidence in utilizing AI tools to enhance learning outcomes, confidence in creating prompts for AI image generation, and the usefulness of AI image generation tools for their learning experience. Complementing these, there were six open-ended questions that provided participants with the opportunity to express in their own words their comprehension of AI, explain their confidence levels of using AI tools for learning, describe their knowledge gains about using AI in course processes, articulate their thoughts on combining human and artificial intelligence in pedagogical activities, identify any skills they enhanced related to using AI in the course, and explain why they felt a certain level of confidence in creating prompts for AI image generation.

These surveys provided a framework for assessing shifts in students’ perceptions and competency with AI tools before and after their course exposure and use, contributing to the understanding of AI literacy


Fig. 3. AI Review of a Course Participant’s Work in the AI Review Tool.

Fig. 4. Screenshot of the video tutorial about the AI review tool.

development in higher education settings. Tables A.1 and A.2 in Appendix A present the specific questions in the pre- and post-course surveys, the response options, and each question’s type and goal.

The third source of data for this paper was the participants’ written reflections on their AI literacy progress. After students had finalized the post-graduate courses of focus, they were invited to express their perceptions of the ways in which the study’s holistic approach might have influenced their AI competence. These reflections were multimodal, and they were guided by two different prompts.

The first prompt offered students the opportunity to employ Generative AI tools to create digital images to convey their experiences with both review types (human and AI) multimodally (i.e., through the combination of visual, gestural, and spatial semiotic resources). Students were provided with a set of recommendations on possible tools they could use, but they were encouraged to employ any that would better suit their needs and preferences. The resulting artifacts were accompanied by textual reflections in which the students assessed the AI tools they had used and described the process they had followed to develop their multimodal works.

The second prompt guided participants to describe, linguistically,


their experiences with the peer and AI review processes. These written descriptions provided insights into their views of the effectiveness of both review types in relation to their work in the courses of focus, as well as the lessons learned for future peer collaboration. These reflections also involved a self-assessment of their AI competence, considering their confidence when using AI tools, their trust in AI outcomes, the impact of the course on their future use of AI in personal and professional contexts, and how they envisioned preparing for the evolving landscape of AI tools.

2.5. Data analysis

The data from the 37 participants who completed both the pre- and post-surveys were subjected to descriptive and inferential statistical analyses to determine differences between their responses after they had been exposed to AI in the courses. The first step of the analysis consisted of destringing the non-numerical variables into a numerical format for mathematical operations. This resulted in the generation of several dummy variables, primarily focusing on converting Likert-scale responses into numerical values ranging from 1 to 5, where 1 represents the lowest rating and 5 the highest. Then, to determine whether there were significant changes in participants’ reported familiarity with AI concepts, confidence in using AI tools, and prompt creation skills for image generation, a paired samples t-test was employed. This test was chosen because it is suitable for comparing the means of two related groups, i.e., in this case, the same participants’ pre- and post-survey scores.

The survey’s open-ended responses and participants’ written reflections on their AI literacy progress were subjected to thematic analysis. This type of analysis has been employed in a large number of studies that have focused on participants’ opinions and have relied on similar instruments for data collection [38]. Therefore, it was deemed appropriate for this study. The first step of the analysis consisted of the careful reading of the students’ responses and the recording of aspects common to their experiences with AI. In the next stage, themes and exemplifying statements were identified and recorded.

3. Results

This section presents the findings of our study. Our discussion is organized in three sub-sections. The first focuses on the pre- and post-course survey data on AI literacy progression. This is followed by the results of the qualitative analysis of students’ reflections accompanying the AI image generation artifacts, and of their textual comments on their overall experience with AI tools.

3.1. Pre- and post-course survey self-reported data on AI literacy progression

In the pre-course survey, the participants disclosed their prior use of AI tools, distinguishing between general applications and AI image generation tools. Even though the majority (57 %) had previous experience with general AI tools such as ChatGPT, Bard, and Bing, a significant 73 % had not used AI image generation tools, suggesting potential for educational development in this domain. After course exposure to these types of tools, 5 % regarded them as ‘Extremely Useful,’ 32 % found them ‘Moderately Useful,’ and 30 % considered them ‘Very Useful.’ These results suggest that those students with no previous experience with AI image generation appeared to recognize its practical application in an academic setting after their direct interaction and experience with it. Despite these overall positive attitudes, it is important to note that some participants seemed to find little use for this type of tool, as 22 % characterized it as ‘Slightly Useful,’ and 11 % responded ‘Not useful at all.’ This result could be related to the participants’ area of education, as some students might not have been able to see a use for this type of tool in connection with the subjects they teach or will be expected to teach in the future, or their current professional roles and responsibilities. This finding could also point to the need for customized educational strategies to maximize AI tool adoption and literacy development based on varied student needs and levels of AI literacy.

The statistical analysis of the pre- and post-course surveys reflects an increase in students’ perceived familiarity with AI and machine learning concepts. The data presented in Table 3 suggest growth in the participants’ reported understanding of AI and machine learning concepts, as evidenced by the higher mean value in the post-survey (3.22 vs. 2.62 in the pre-course survey). This might be indicative of AI literacy development. Additionally, the standard deviation and variance values in the post-survey point to more consistent responses among the participants after their courses. The decrease in these measures of dispersion suggests that the intervention’s holistic approach might have led to more consistent comprehension and understanding of the concepts among the participants. Furthermore, the inferential statistical analysis yielded a t-statistic of −3.48, which, combined with the low p-value (p < .01), rendered these results significant. These findings appear to complement the change registered in the percentage of students who initially lacked experience with AI image generation tools but, post-course, acknowledged at least a moderate utility for these tools in their learning.

The statistical analysis of the pre- and post-survey data also indicates a reported benefit in students’ ability to use AI tools for educational purposes (Table 4). For example, an increase in mean scores is noted in the statements probing into the participants’ perceived AI confidence in the post-course survey (3.27 vs. 2.41 in the pre-course survey). Specifically, post-course, 55 % of the students indicated that they felt ‘Moderately confident’ in utilizing AI tools to enhance their learning outcomes, while 30 % chose ‘Very’ or ‘Extremely confident’ to characterize their level of confidence with this technology. Additionally, lower standard deviation and variance values post-course (see Table 4) suggest a more consistent reported confidence level among participants, indicating that the study’s approach might have not only increased overall confidence, but could also have contributed to more uniform AI literacy growth across the participant group. The t-statistic and p-value resulting from the inferential analysis suggest statistically significant differences between pre- and post-survey values. These findings, aligned with the previous data suggesting increased perceived usefulness of AI image generation tools, point to the study’s possible positive influence on participants’ reported confidence and competency in utilizing AI tools for learning, which might have resulted in AI literacy development.

The analysis of the pre- and post-survey statements examining participants’ perception of their abilities to craft prompts for AI image generation also suggests positive changes (Table 5). For instance, the mean score in the post-survey was higher than in the pre-survey (3.35 vs. 2.16 in the pre-course survey). This change appears to be supported by the standard deviation and variance values, which point to higher reported AI abilities and more consistency of opinions among the participants (see Table 5). Inferential statistics offer support for these results, as differences between pre- and post-survey findings are statistically significant. These data highlight the possible positive influence of

Table 3
Statistical Insights into Participants’ Evolving Perceived Familiarity with AI and Machine Learning Concepts.

Mean Standard Deviation Variance t-statistic p-value Cohen’s d
Pre-course survey 2.62 1.06 1.13 −3.48 0.0013 0.66
Post-course survey 3.22 0.71 0.51
see a use for this type of tool in connection with the subjects they teach


Table 4
Statistical Insights into Participants’ Evolving Perceived Competency in AI Tool Utilization for Learning Outcomes.

Mean Standard Deviation Variance t-statistic p-value Cohen’s d
Pre-course survey 2.41 0.96 0.91 −4.64 0.000045 0.95
Post-course survey 3.27 0.87 0.76

Table 5
Statistical Insights into Participants’ Evolving Perceived Proficiency When Generating Prompts for Image Creation.

Mean Standard Deviation Variance t-statistic p-value Cohen’s d
Pre-course survey 2.16 1.04 1.08 −6.42 0.00000019 1.23
Post-course survey 3.35 0.89 0.79

the adopted practices on the participants’ development of their AI skills, particularly in connection with prompt generation for AI image creation. This also complements earlier findings showing most participating students’ post-course appreciation for AI image generation tools’ utility in their learning process.
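The effect sizes in Tables 3–5 are consistent with the pooled-standard-deviation form of Cohen’s d. The paper does not state which variant was computed, so the sketch below is an assumption; applied to the rounded table values, it reproduces the d reported in Tables 3 and 5 to within rounding.

```python
import math

def cohens_d_pooled(mean_pre, sd_pre, mean_post, sd_post):
    """Cohen's d with the SD pooled across the pre- and post-course groups."""
    pooled_sd = math.sqrt((sd_pre ** 2 + sd_post ** 2) / 2)
    return (mean_post - mean_pre) / pooled_sd

# Rounded summary values taken from Tables 3 and 5.
d_familiarity = cohens_d_pooled(2.62, 1.06, 3.22, 0.71)  # Table 3 reports d = 0.66
d_prompting = cohens_d_pooled(2.16, 1.04, 3.35, 0.89)    # Table 5 reports d = 1.23

print(d_familiarity, d_prompting)
```

Small discrepancies (e.g., for Table 4) are expected because the table values are themselves rounded to two decimals.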

3.2. Students’ reflections expressed with AI image generation tools

The thematic analysis of the participants’ reflections showed that the development of their multimodal artifacts had involved an iterative
process characterized by trial and error and ad-hoc strategies, both in
connection with the generative platforms employed and the creation of
textual prompts. For example, most participants first resorted to Open
AI’s DALL-E; however, when the images generated proved to be “very
bland and plastic looking” (Participant 1), they moved on to other
popular options such as Lexica, Leonardo AI, and Wepik. The findings also
revealed that platform choice had been primarily guided by the ease of
use and the creative affordances offered. For instance, Participant 28
chose Wepik because it “proved more efficient [than DALL-E]. Its flexi­
bility with longer prompts and customization options opened up new
avenues for creativity.” Nevertheless, the participants’ overall impres­
sion was that none of the platforms could really reflect the vision they
had wanted to convey. While some students found this frustrating, as
evinced in the following sample statements, most viewed imperfect AI
versions as a source of inspiration, creativity, and learning:

Fig. 5. Representation of AI Feedback Created by Participant 17.

“I wasn’t getting anything I liked. However, some images were popping up that gave me new ideas for prompts.” (Participant 12)
“In the end, I thought this was extremely entertaining. I can see how you
might become entranced with becoming a prompt-whisperer and work to
refine these generated images.” (Participant 1)
“This was such an interesting learning experience. I had no idea what I
was doing, but on my second try I found an image [to express] how I felt.”
(Participant 22)
Clearly, the participants’ image generation entailed several prompt
revisions, particularly because all of them sought to convey their
meaning metaphorically, employing a variety of semiotic resources. For
example, emotions towards and experiences with peer and AI reviews
were expressed through facial expressions, body gestures, and size and
spatial saliency as well as the use of different colors (e.g., Participant 27
chose blue to represent confusion and yellow for excitement). Some
artifacts also relied on figurative tools such as personification or the
establishment of analogies between AI and characters from films. Two
such instances can be seen in the images developed by Participant 17,
who endowed a computer with a human-face mask and also used color
symbolically, and Participant 6, who compared the AI with the No-Face
character from the animated movie Spirited Away. In the following
quotes, these students offer more information on the meaning they
attempted to embed in their AI generated artifacts, presented respectively in Figs. 5 and 6:

Fig. 6. Representation of AI Feedback Created by Participant 6.


“The masked computer shows AI posing as intelligent and human-like, but just a fake, hollow representation of one. The melting portrays the ‘melting down’ I have observed in the process, each AI review being less useful, accurate, and complete than the previous one. The jumbled wires are a mess surrounding the computer - like lifelines, but in a maze of disorganization showing generative AI as a prototype in the infancy of its development. The blank background illustrates the empty promises of AI, yet to be realized.” (Participant 17)

“No-Face eats people, and, once he’s eaten them, can speak in their voice. That’s sort of what AI is like, or how it works: it needs a database of training materials to be able to fluently ‘speak.’ No-Face also hides behind a mask, a mask that temporarily gives it an expressionless face. This is how I felt reading my AI feedback: it was convincingly written, but off-putting, because of the fact that I know it comes from nowhere: it has no perspective, no subjectivity (or at least I don’t think it does, it’s not clear to me how it could).” (Participant 6; emphasis in original comment)

The positive effects of prompt development resulting from the AI-generated image activity were also highlighted in the post-course survey results. Specifically, when asked to reflect on their confidence in creating prompts for AI image generation, 82 % of the participants characterized it as ‘Moderate’ to ‘Extremely high,’ praising the opportunity this task had offered them to develop the skill of crafting targeted prompts.

3.3. Students’ reflections on AI tools

The thematic analysis of the open-ended questions in the post-survey and of students’ textual reflections on the significance of AI in their classes suggests that they valued AI feedback and considered AI useful for the generation of ideas and content, and for the overall support offered throughout the completion of their class projects. For example, participants described AI as a powerful, intelligent, and collaborative tool that enhances productivity and helps develop cognition. This is clearly seen in the opinion expressed by Participant 10, who regarded AI as “a collaboration tool that makes human work more efficient and productive, allowing it to analyze high volumes of information and provide valuable information to improve work.” Additionally, students who had first been exposed to AI reviews in the study’s classes felt that this experience had served as an introduction to the capabilities of AI tools and had prompted a heightened interest in exploring potential AI applications in their future academic study and work areas. The reflections below offer evidence for these opinions:

“This class has definitely opened the AI door for me. I now know the capabilities of my AI knowledge and will keep developing them as I move forward.” (Participant 12)

“I have been using chat GPT for only a few months now. I do feel more confident now. I also think I understand it better.” (Participant 14)

“I hope to leverage AI more in my day-to-day life. I already have some experience in prompt generation, but I am not certain how to actually use AI in a specific program. Seeing the innovative uses of AI in this class, such as an AI feedback tool, inspires me to further my personal reach on what I can accomplish with AI.” (Participant 19)

Similar views were offered by students who had experienced the study’s AI review tool three or more times in previous courses and reported feeling more confident and comfortable incorporating AI reviews into their reviewing process. Furthermore, due to their exposure to AI reviews, these students had begun to integrate AI tools, such as ChatGPT, more extensively into their pedagogical practices and personal studies beyond the scope of their post-graduate courses. For example, these participants felt the incorporation of AI into their classes had motivated them to recognize the positive impact that AI could have on their personal and professional lives, which had encouraged them to enhance their skills and abilities in engaging with GenAI tools, such as ChatGPT, Bard, and others.

The participants’ experience with AI feedback also appears to have developed their ability to identify its advantages and disadvantages in comparison with human reviews. These quotes evince the critical assessment resulting from the students’ exposure to both types of feedback:

“Comparing AI and peer reviews revealed distinct differences in their approaches and benefits. AI excelled in adhering strictly to rubric criteria, offering an objective evaluation. However, it lacked the personalized, contextually rich feedback that peers provided. On the other hand, peer reviews, despite potential biases and occasional challenges, offered a more informal, content-specific, and empathetic critique. One instance stood out when a peer prompted me to consider what might be lost in the classroom due to the incorporation of AI technology.” (Participant 25)

“AI reviews offer rapid feedback but occasionally need help with contextual understanding, as exemplified by my paper’s misinterpretation of source titles. On the other hand, peer reviews can provide more depth and more contextually accurate feedback. I adopted AI for initial feedback, especially for identifying fundamental issues. I then used peer feedback for deeper, more contextual insights. This combined approach allowed me to benefit from the rapidity of AI while leveraging the depth of peer insights.” (Participant 20)

Overall, the majority of students in this work seem to have regarded both AI and human reviews as invaluable tools in knowledge acquisition. In their view, both types of feedback assisted them in reflecting on their written work, prompting further research to address issues identified by both machine and human peers.

Despite most participants’ welcoming attitudes towards AI, there were also recorded concerns associated with privacy, security, and inaccuracies found in some of the comments originating from machine feedback. For example, Participant 30 felt that the AI output “is not 100 % accurate and cannot be considered a source of truth, so it could cause harm in giving false information.” This quote points to a vital aspect of AI literacy involving the responsible utilization of AI, including the verification of information produced by GenAI. Participant 28 also made reference to another area of concern connected to AI—its potential biases, pondering on the “ethical considerations [that] should be taken into account when using AI for assessments, especially considering its limitations and potential biases.”

4. Discussion

The results from the three sources of data in this work suggest that the participants regarded their exposure to and work with AI tools as significant for the development of their AI literacy. Both the quantitative and qualitative responses recorded in the post-survey and textual reflections point to various reported benefits that mirror previous discussions on AI literacy development (e.g., [12,13]), including skill acquisition, critical thinking, and ethical engagement. Clearly, the participants’ engagement with AI reviews offered the opportunity to contemplate broader applications of AI and to explore additional AI technologies applicable to both their professional and personal lives. Additionally, the findings offer evidence for a spectrum of student experiences, from those inspired by imperfect outputs to others who viewed AI as a collaborative partner that augmented their cognitive processes. Most participants, however, highlighted the advantages of the combination of AI and human feedback, which can be considered a reflection of their maturing AI literacy—one that appreciates AI’s strengths as a welcome addition to human intelligence, but also critically assesses its limitations, including ethical issues connected with its lack of accuracy and possible biases. Also, both the quantitative data and the students’ narratives suggest an educational experience that moved beyond traditional learning paradigms, embracing the complexity and


potential of AI as a multifaceted tool in the landscape of higher education.

The analysis of the linguistic reflections that accompanied the AI-generated images submitted by the participants to convey their experiences with AI reviews also appears to support the growth of participants’ AI knowledge and, in turn, the development of their AI literacy. For example, the findings from the recorded reflections mirror those reported in previous studies on prompt generation and university students’ AI literacy development. The data show that the participants’ work with prompts exhibited the same non-intuitive, trial-and-error characteristics reported by Oppenlaender [22] in his analysis of prompt generation, which he described as a process entailing “iterative experimentation akin to brute force trial and error…[and as] an acquired skill that is associated with a learning curve” (pp. 5–6). The students’ comments clearly point to experimentation involving ad-hoc strategies and several attempts, characterized at times by frustration and struggle, before results matching expectations could be achieved. These results were similar to those highlighted by Hwang et al. [15] in their study on AI image generation by university students in Korea.

The complex, multimodal conceptualizations dissected by the participants in their reflections and embedded in their artifacts also underscore the opportunity that AI image generation offered them to develop both their AI and multimodal literacy. That is, tasks like the one included in this study can now be deemed essential in higher education to facilitate students’ “understanding and capability to interact with, utilize, and critically evaluate AI systems and their implications” ([15], p. 2) and thus function effectively in the era of AI literacy [7,16]. Clearly, in this work, the participating students needed to find the most effective platform and way to visually represent their ideas, resulting in a journey of discovery that strengthened their knowledge of both AI tools and prompt generation. Additionally, the learners’ assessment of the generated AI images, as well as the changes they made to them to reflect their vision, allowed them “to engage in multimodal literacy by considering the relationships among different modes of communication, deciphering intended meanings, and critically evaluating the effectiveness of multimodal presentations” ([23], p. 1). This, in turn, might have resulted in their growth as multimodal communicators and effective AI users.

The overall results of this work suggest that a pedagogical integration of AI tools within human-centric, cyber-social teaching strategies can foster students’ abilities to both critically assess and effectively employ AI technologies. The study’s data also seem to point to students’ growing proficiency and ethical awareness when using AI tools, mirroring the comprehensive educational approach advocated by Laupichler et al. [7]. This work, therefore, not only echoes calls by Long and Magerko [17] and Steinbauer et al. [18] for comprehensive AI education, but also offers possible actionable insights into the transformative potential of AI in nurturing a society adept at coexisting with advanced technologies.

Considering the implications of these findings, it is important to acknowledge the benefits and challenges of cyber-social teaching. Cyber-social teaching, as described by Cope & Kalantzis [5], offers several benefits, including enhanced cognitive and social learning, balanced learning agency, and productive diversity. It leverages digital tools and social learning strategies to create engaging and collaborative educational experiences. However, it also has limitations, including dependence on technology, risk of surface learning, challenges in assessment integrity, and epistemic provenance issues. Recognizing these limitations is crucial for developing effective and sustainable cyber-social teaching practices. Building on the experiences and insights presented in this paper, we offer pedagogical suggestions in the next section, aimed at harnessing the potential of AI in education and contributing to ongoing conversations about the future of teaching and learning in the age of AI.

5. Pedagogical recommendations

Drawing from the findings presented in this paper, we offer pedagogical suggestions for educators and educational professionals in higher education looking to promote AI literacy in their curricula. We have divided our recommendations into three groups, based on the different aspects of AI literacy highlighted in the literature (e.g., [16]). The three groups—Instructional Strategies, Reflective Learning, and Ethical and Critical Engagement—are presented separately in the next three sub-sections.

5.1. Instructional strategies

This group of suggestions includes recommended strategies for embedding AI tools into teaching practices, highlighting the value of multimodal tools that align with diverse learning preferences and foster a dynamic, experimental learning environment. These suggestions aim to provide students with opportunities to experiment with AI technologies and develop their critical thinking skills about AI.

• Incorporation of multimodal AI tools: Educators should consider integrating a range of AI tools that enable multimodal learning. This study showed that students appear to benefit from engaging with various AI platforms, which can cater to different learning styles and encourage creativity. Tools that facilitate visual, gestural, and spatial learning, such as AI image generation tools, can be particularly effective.
• Fostering an agile environment: Given the iterative nature of working with AI, as evidenced by students’ trial-and-error approaches, educators should embrace an agile pedagogical approach that encourages an experimental mindset. Creating assignments that allow for revisions and exploration of different AI functionalities can enhance students’ understanding and confidence in using these technologies.
• Provision of diverse AI exposure: This entails the development of tasks that can introduce students to a range of AI technologies early in their educational journey. This work revealed a gap in experience with AI image generation tools; thus, educators should ensure that students are not only familiar with general AI tools but also with specific applications relevant to their field.

5.2. Reflective learning

In this section, we focus on the role of self-reflection in developing AI literacy, emphasizing the importance of metacognitive activities and peer collaboration in deepening students’ critical understanding of AI.

• Encouragement of reflective practice: The use of metacognitive reflections is crucial for developing AI literacy. Students should be prompted to regularly reflect on their experiences with AI tools, both in written form and through creative expressions like image generation. This reflective practice helps students articulate their understanding and critically assess the role of AI in their learning. This reflection also needs to allow for connections with students’ lifeworld, particularly in the case of programs for teachers, such as the ones on which this work is based. This implies that, as part of their critical assessment, student teachers need to consider which technologies would be appropriate in their specific fields and how they could support their practice as well as their own students’ learning process and AI literacy development.
• Focus on collaborative intelligence: Activities that promote collaborative intelligence, where students share knowledge and feedback with peers, can be augmented with AI tools. This approach was shown in the study to enhance students’ ability to critically evaluate AI-generated content against human feedback, leading to the development of a critical stance towards AI content in general.


5.3. Ethical and critical engagement

These recommendations address the need for an ethical framework and critical thinking skills in AI education, advocating for a curriculum that balances technological proficiency with an understanding of AI’s broader societal impacts.

• Reinforcement of critical evaluation skills: Educators should guide students in evaluating the strengths and weaknesses of AI tools, fostering an analytical mindset. This includes assessing the quality of AI-generated feedback, understanding the limitations of AI systems, and recognizing the importance of integrating human insights with AI analyses.
• Balance of AI with human intelligence: This study highlighted the importance of balancing AI tools with human insights. While AI can provide rapid, objective responses, human insights offer depth and context. Educators should design assignments that utilize both types of intelligence to prepare students for the hybrid AI-human interactions they will likely encounter professionally.
• Emphasis on ethical considerations: AI literacy involves understanding the implications of AI; thus, it is important to discuss the ethical aspects of AI use. Topics such as data privacy, bias in AI algorithms, reliability of AI outputs, and steps to mitigate the associated risks should be integrated into the curriculum to ensure responsible use of AI.

The findings of this study suggest that a multifaceted approach that leverages the strengths of both AI and human intelligence, while also fostering a collaborative learning environment, can be effective in developing AI literacy in higher education. These recommendations are aimed at helping educators create a learning environment that not only develops and/or improves AI literacy but also prepares students to navigate the complexities of AI in their future academic and professional endeavors.

6. Study limitations

While this research provides some insights into the impact of interventions on AI-related skills, it exhibits the limitations often found in teaching and learning scholarship, such as the description of a short

7. Conclusions

The study presented in this paper explores the emerging landscape of AI in higher education, examining the possible dynamic interactions between human and artificial intelligence. This investigation harnessed the synergy of AI tools and human-centric pedagogical strategies to foster a comprehensive understanding of AI among higher education students. The integration of an AI review tool, tailored to course rubrics and complemented by human peer reviews, alongside the utilization of AI image generation tools, not only offered the participating students the opportunity to be exposed to and utilize AI tools, but might also have contributed to the development of their AI literacy, including the critical assessment of AI’s capabilities and its educational value both for their postgraduate learning experiences and their professional educational practice. In our work, the participating students emerged not just as passive recipients but as active and critical users and evaluators of AI, also reporting advancement in their AI literacy.

As we move forward, it is crucial that educators, policymakers, and institutions recognize the urgency and depth that AI literacy commands. This study serves as an example of AI’s possible impact and the indispensable role it might play in shaping the future of education. Our participants’ views invite us to embrace AI not simply as a tool but as an integral, multifaceted partner in the educational journey, paving the way for a society adept at thriving alongside advanced technologies. The insights garnered from this investigation thus underscore the necessity of integrating comprehensive AI education into curricula, ensuring that the next generation of learners is equipped with the knowledge, skills, and ethical framework to harness AI responsibly and innovatively.

Declaration of generative AI and AI-assisted technologies in the writing process

During the preparation of this work the authors used Grammarly in order to improve the text’s spelling and grammar. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

CRediT authorship contribution statement
instructional intervention in a specific educational context with a small Anastasia Olga (Olnancy) Tzirides: Writing – review & editing,
number of student participants and the lack of longitudinal data and Writing – original draft, Validation, Supervision, Project administration,
control group. Also, the study relies solely on self-reported data, which Methodology, Investigation, Formal analysis, Data curation, Conceptu­
introduces the possibility of response bias. Participants might provide alization. Gabriela Zapata: Writing – review & editing, Writing –
socially desirable responses, impacting the accuracy of their reported original draft, Visualization, Formal analysis, Data curation. Nikoleta
familiarity with AI concepts and utilization of AI tools. Furthermore, the Polyxeni Kastania: Writing – review & editing, Writing – original draft,
small sample size may have affected the statistical power of the analysis, Visualization, Methodology, Investigation, Formal analysis, Conceptu­
limiting the precision and generalizability of the observed effects. The alization. Akash K. Saini: Writing – original draft, Visualization,
results are further constrained by the fact that data were collected Investigation, Formal analysis. Vania Castro: Writing – original draft.
during one specific academic term. This temporal limitation restricts the Sakinah A. Ismael: Writing – review & editing, Writing – original draft.
generalizability of findings beyond this specific time frame. Seasonal Yu-ling You: Formal analysis. Tamara Afonso dos Santos: Writing –
variations, academic workload, rapid developments, including those of original draft. Duane Searsmith: Software. Casey O’Brien: Conceptu­
our study’s software, or other temporal factors may influence partici­ alization. Bill Cope: Writing – review & editing, Supervision. Mary
pants’ responses differently in other terms. Additionally, the targeted Kalantzis: Writing – review & editing, Supervision.
courses were graduate-level online courses which might influence the
study outcomes compared to undergraduate-level courses of different Declaration of competing interest
formats (e.g., in-person or hybrid). Finally, the study focuses on short-
term outcomes based only on related responses collected after the The authors declare that they have no known competing financial
courses of focus had been finalized. Long-term sustainability and interests or personal relationships that could have appeared to influence
retention of the reported benefits on students’ ongoing academic prac­ the work reported in this paper.
tices as well as the development of their AI literacy warrant further
investigation.

A.O. (O.) Tzirides et al., Computers and Education Open 6 (2024) 100184

Appendix A

Survey Questions

Table A.1
Pre-course Survey Questions.

1. On a scale from 1 to 5, how familiar or not are you now with AI and machine learning concepts?
   Response options: 1. Not at all familiar; 2. Slightly familiar; 3. Moderately familiar; 4. Very familiar; 5. Extremely familiar
   Goal: Familiarity with AI concepts. Type: Single-select Likert scale.

2. Have you previously used an AI tool (e.g. ChatGPT, Bard, Bing) for review of any academic or professional work?
   Response options: No; Maybe; Yes
   Goal: Previous experience with AI tools for review process. Type: Single-select.

3. Have you previously used an AI image generation tool (e.g. DALL-E, Midjourney, Stable Diffusion) for any academic or professional work?
   Response options: No; Maybe; Yes
   Goal: Previous experience with AI image generation tools. Type: Single-select.

4. How confident are you in utilizing AI tools to enhance your learning outcomes, e.g. your course project?
   Response options: 1. Not at all confident; 2. Slightly confident; 3. Moderately confident; 4. Very confident; 5. Extremely confident
   Goal: Confidence level. Type: Single-select Likert scale.

5. How confident or not are you in creating prompts for AI image generation?
   Response options: 1. Not at all confident; 2. Slightly confident; 3. Moderately confident; 4. Very confident; 5. Extremely confident
   Goal: Confidence level. Type: Single-select Likert scale.

Table A.2
Post-course Survey Questions.

1. On a scale from 1 to 5, how familiar or not are you now with AI and machine learning concepts after completing this course?
   Response options: 1. Not at all familiar; 2. Slightly familiar; 3. Moderately familiar; 4. Very familiar; 5. Extremely familiar
   Goal: Familiarity level with AI concepts. Type: Single-select Likert scale.

2. Based on your experience through the course, describe in your own words what AI means to you.
   Goal: Comprehension of AI. Type: Open-ended.

3. After taking this course, how confident are you in utilizing AI tools to enhance your learning outcomes, e.g. your course project?
   Response options: 1. Not at all confident; 2. Slightly confident; 3. Moderately confident; 4. Very confident; 5. Extremely confident
   Goal: Confidence level. Type: Single-select Likert scale.

4. Please explain why you feel confident or not.
   Goal: Confidence level explanation. Type: Open-ended.

5. After taking this course, how confident or not are you in creating prompts for AI image generation?
   Response options: 1. Not at all confident; 2. Slightly confident; 3. Moderately confident; 4. Very confident; 5. Extremely confident
   Goal: Confidence level. Type: Single-select Likert scale.

6. Please explain why you are confident or not in creating prompts for AI image generation.
   Goal: Confidence level explanation. Type: Open-ended.

7. How useful or not did you find the AI image generation tool for your learning experience during the course?
   Response options: 1. Not at all useful; 2. Slightly useful; 3. Moderately useful; 4. Very useful; 5. Extremely useful
   Goal: Usefulness of AI image generation tools. Type: Single-select Likert scale.

8. What do you think about combining human and artificial intelligence as a support for learning? Why?
   Goal: Feedback on human and artificial collaborative intelligence. Type: Open-ended.

9. What did you learn about using AI in your learning processes from this course?
   Goal: Discovery of takeaways of using AI in the learning process. Type: Open-ended.

10. What skills did you enhance, if any, after completing this course that are relevant to using AI in learning?
    Goal: Discovery of perceived skill development related to AI. Type: Open-ended.

