Article
Literacy in Artificial Intelligence as a Challenge for Teaching in
Higher Education: A Case Study at Portalegre
Polytechnic University
Eduardo Lérias 1, Cristina Guerra 1 and Paulo Ferreira 1,2,3,*
Information 2024, 15, 205. https://doi.org/10.3390/info15040205

Abstract: The growing impact of artificial intelligence (AI) on Humanity is unavoidable, and therefore, “AI literacy” is extremely important. In the field of education—AI in education (AIED)—this technology is having a huge impact on the educational community and on the education system itself. The present study seeks to assess the level of AI literacy and knowledge among teachers at Portalegre Polytechnic University (PPU), aiming to identify gaps, find the main opportunities for innovation and development, and assess the degree of relationship between the dimensions of an AI questionnaire, as well as identifying the predictive variables in this matter. As a measuring instrument, a validated questionnaire based on three dimensions (AI Literacy, AI Self-Efficacy, and AI Self-Management) was applied to a sample of 75 teachers in the various schools of PPU. This revealed an average level of AI literacy (3.28), highlighting that 62.4% of responses are at levels 3 and 4 (based on a Likert scale from 1 to 5). The results also demonstrate that the first dimension is highly significant for the total dimensions, i.e., for AI Literacy, and no factor characterizing the sample is a predictor, but a below-average result in the learning factor indicates a pressing need to focus on developing these skills.

Keywords: artificial intelligence; AI literacy; AI in education (AIED)
1. Introduction and Literature Review
The recent dissemination of artificial intelligence (AI) to the general public has promoted studies on its application in everyday life. The growing impact of AI on Humanity is unavoidable, and therefore, it is extremely important to understand what it is and what it can do. The set of skills involved in using, applying, and interacting with AI is currently called “AI literacy”.
The importance of this topic arises, from the outset, in the field of education—AI in education (AIED)—where this technology is having a huge impact on the educational community and on the education system itself. Studying the use of AIED is a search for solutions that can add value to the teaching–learning process, supporting teachers and students, highlighting the human factor, their thinking skills, teamwork and flexibility, management of knowledge, ethics, and responsibility [1].
The scientific term “artificial intelligence”, as a science of intelligent machines, according to [2,3], dates back to 1956. This was followed during the 1980s by a great development of intellectual skills in machines, as well as the first attempts to replicate the teaching process using AI [1]. Ref. [3] states that the entire education system should be reviewed, not only to make it more practical, but also more open to the world of work and to anticipate the transformations in knowledge. AIED began as a field of recreation and research for computer scientists, with a great impact on education [4], and today fuels the controversy referred to by [5] regarding the use of AIED and the fear that the machine will replace the teacher [6,7].
The acquisition and development of digital skills are seen as essential tools to facilitate
lifelong learning, and are therefore one of the main economic concerns in most developed
countries. “Literacy”, the ability to read and write and to perceive and interpret what is
read, undergoes an important development when it becomes clear that, despite having the
ability to write and read, some people are unable to understand the meaning of what they
read. In terms of information and communication technologies (ICTs), digital literacy has
been studied in depth, but there is still no consensual definition, because the ability to use
a computer is currently an insufficient measure to define digital literacy [8].
When AI is introduced into the concept of digital literacy, the scenario becomes
even more complex. According to [9], AI literacy is more than knowing how to use
AI-driven tools, as it involves lower- and higher-level thinking skills to understand the
knowledge and capabilities behind AI technologies and make work easier. For this author,
it will not be possible to adequately understand this technology as long as we insist on
considering it only as knowledge and skills, as AI involves attitudes and moral decision
making for the development of AI literacy and its responsible use. According to [10,11], AI
literacy is composed of different competencies, enabling individuals to critically evaluate
the use of those kinds of technologies, to communicate and collaborate with AI, and to
use it in different contexts, its objective being to describe the skills necessary for a basic
understanding of AI.
The widely reported and recognized need for AI regulation has led to new steps in this direction. On 26 October 2023, the Secretary-General of the United Nations (UN), António Guterres, launched a high-level multisectoral advisory body on AI to identify risks, challenges,
and main opportunities, while more recently the Spanish presidency of the EU Council an-
nounced that the EU co-legislators, the Council and the European Parliament, had reached
a provisional agreement on the world’s first rules for AI, advancing the preparation of a
regulation aiming to ensure that AI in use in the EU should be safe and respect European
rights and values [12].
On 26 January 2024, the Council of the European Union approved the Proposal for a
Regulation of the European Parliament and of the Council, establishing harmonized rules in
the field of artificial intelligence (Artificial Intelligence Act) and amending certain legislative
acts. Aiming to ensure a high level of protection of health, safety, and fundamental rights,
including democracy, the rule of law, and environmental protection, this prospective first AI Act includes sanctions for non-compliance, an impact assessment on fundamental rights, provisions for testing high-risk AI systems, and rules and obligations for all general-purpose AI models, regulating the development, deployment, and use of artificial intelligence systems [13].
In Portugal, the most recent document with official recommendations for the use of
AI is a guide for ethical, transparent, and responsible Artificial Intelligence in Public Ad-
ministration, published in July 2022 by the Agency for Administrative Modernization [14].
This document advances a structuring conceptualization for ethical, responsible, and trans-
parent AI, identifies barriers, challenges, and dangers, and presents recommendations
and a tool for risk assessment. Despite its very comprehensive content, the level of dissemination and the respective scope of its effective contribution to AI literacy in Portuguese public administration are unknown.
Although research into AIED has at its heart the desire to support student learn-
ing, experience from other areas of AI suggests that this ethical intention is not, in itself,
sufficient [15–18]. Complementing this, education research shows that factors such as
teacher–student interaction, educational programs, teachers’ attitudes, and their decisions
made in the classroom are related to the ethical dimension [18].
So, for the ethical dimension of AI, there is a need to consider issues such as equity,
responsibility, transparency, partiality, autonomy, and inclusion, and also distinguish
between “doing ethical things” and “doing things ethically”, in order to understand and
make pedagogical choices that are ethical and take into account the ever-present possibility
of unintended consequences [18–21]. In academia, where the citizens of the future are
prepared, there is a risk that people will use these AI tools constantly, irresponsibly, and without ethical principles in their studies [22]. As an example of this, it was recently detected that around 200 scientific papers had been written with ChatGPT and accepted by scientific journals (https://www.dailymail.co.uk/sciencetech/article-13211523/ChatGPT-scandal-AI-generated-scientific-papers.html, accessed on 28 March 2024).
According to [23], the generalized use of AIED can potentially harm teacher–student interaction and compromise the development of independent and capable students. This
threat may be amplified by marketing efforts to make the public believe in neutral and
objective AI algorithms. This reveals two dimensions of the real ethical problem of AI: the
ethical user–system relation in AI systems and ethical use of AI systems by users. Much
work needs to be done in this area and, in this context, it is recognized by [4] that most
AIED researchers do not have the training to deal with emerging ethical issues.
Indeed, Ref. [24] suggests some principles for ethical and reliable AIED that should be
considered, namely
(i) Governance and management principle: AIED governance and management must
take into account interdisciplinary and multi-stakeholder perspectives, as well as all
ethical considerations from relevant domains, including, among others, data ethics,
learning analytics ethics, computational ethics, human rights and inclusion;
(ii) Principle of transparency of data and algorithms: The process of collecting, analyzing,
and communicating data must be transparent, with informed consent and clarity
about data ownership, accessibility, and the objectives of its use;
(iii) Accountability principle: AIED regulation must explicitly address recognition and
responsibility for the actions of each stakeholder involved in the design and use of
systems, including the possibility of auditing, the minimization and communication
of negative side effects, trade-offs, and compensation;
(iv) Principle of sustainability and proportionality: AIED must be designed, developed,
and used in a way that does not disrupt the environment, the global economy, and
society, namely the labor market, culture, and politics;
(v) Privacy principle: AIED must guarantee the user’s informed consent and maintain the
confidentiality of user information, both when they provide information and when
the system collects information about them;
(vi) Security principle: AIED must be designed and implemented to ensure that the
solution is robust enough to effectively safeguard and protect data against cyber-
crime, data breaches, and corruption threats, ensuring the privacy and security of
sensitive information;
(vii) Safety principle: AIED systems must be designed, developed, and implemented
according to a risk management approach, in order to protect users from unintentional
and unexpected harm and reduce the number of serious situations;
(viii) Principle of inclusion and accessibility: The design, development, and implementation
of AIED must take into account infrastructure, equipment, skills, and social acceptance,
allowing equitable access and use of AIED;
(ix) Human-centered AIED principle: The aim of AIED should be to complement and
enhance human cognitive, social, and cultural capabilities, while preserving meaning-
ful opportunities for freedom of choice and ensuring human control over AI-based
work processes.
In turn, ref. [25] states that definitions of AI literacy differ in terms of the exact
number and configuration of skills it entails, and referring to [26], indicates that an analysis
of conceptualizations of AI literacy in education can be organized into four concepts:
(1) knowing and understanding AI, (2) using and applying AI, (3) evaluating and creating
AI, and (4) AI ethics. For that author, the vast majority of conceptualizations of AI literacy
are parallel to Bloom’s taxonomy in terms of its general configuration of skills. Considering
that this taxonomy constitutes the basis of countless formulations of competences in schools and universities, this parallel is of enormous importance and relevance to AIED.
It is still difficult to measure AI literacy. Four published scales are currently used to
carry out this measurement, three of which are not school-focused, but can be used for
more general measurement purposes. As they are not based on established theoretical models of competences, the interpretation of the latent factors of these scales seems arbitrary [25]. In fact, Carolus et al. [25] developed a new measuring instrument based on
the existing literature on AI literacy, which is modular, meets psychometric requirements,
and includes other psychological skills in addition to the classical ones of AI literacy.
Although it is not objectively clear how the development of AI can be applied to
education systems, enthusiasm is growing, with excessive optimism regarding the potential
to transform current education systems [27]. Ref. [4] sought to identify potential aspects
of threat, excitement, and promise of AIED and highlighted the importance of traditional
pedagogical values, such as skepticism, continuing to argue that the ultimate goal of
education should be to promote responsible citizens and healthy educated minds. Therefore,
the adoption of ethical frameworks for the use and development of AIED is extremely
important, ensuring that it will be continually discussed and updated in light of the rapid
development of AI techniques and their potential for widespread application [28].
At the same time, a set of questions must be carefully considered and comprehensively
addressed as soon as possible: “What will be the future role of the teacher, and other school
personnel, in education with AI systems? And how does this align with our beliefs or
pedagogical theories? Do educational leaders and teachers have enough knowledge in the
field of AI to distinguish a poorly developed system from a good one? Or how to apply
them appropriately in the education context? Furthermore, how can we protect student
and teacher data when the skills and knowledge to develop AIED systems are in the hands
of for-profit organizations and not in the education sector?”. In particular, the issue of
aligning AI with pedagogical theory must remain on the table, as any new technology
integrated into education must be designed to fill a pedagogical need [4].
Although the use of questionnaires to assess literacy in AI is still limited, mainly
because there are not many validated questionnaires yet, it is possible to find some evidence
on studies in AI literacy in higher education in the literature. For example, ref. [29]
concludes that the use of tools associated with artificial intelligence, in an exploratory
learning environment context, can benefit teaching itself. Ref. [30] relates the knowledge
of AI by teachers with data literacy, proposing an approach to reflect those data literacy
competencies to use AI. Based on a literature review and through a survey applied in
Serbia, in this case among students, ref. [31] concludes that AI, together with machine
learning, has the potential to improve the learning levels of the student population. In a
recent study, ref. [32] analyzes the adoption of artificial intelligence in higher education practices, relating it to teachers’ literacy levels and their opinions on the conditions under which the use of AI tools is defensible, identifying clear concerns with justice and responsibility, as well as a lack of knowledge about the phenomenon of AI. In fact, despite the increase in knowledge about AI applied to education, its adoption remains a challenge in the current context [33].
Considering the relevance of understanding the use of artificial intelligence in edu-
cation, in particular in the higher education context, the present study seeks to assess the
level of AI literacy and knowledge among lecturers at Portalegre Polytechnic University
(PPU), aiming to identify gaps and find the main opportunities for innovation and develop-
ment so that the education system can adopt AIED as an ally in promoting higher-quality
education better prepared for the challenges of the future. As specific objectives, we seek to
assess the degree of relationship between the dimensions of AI literacy and identify the
predictive factors.
The remainder of this paper is organized as follows: Section 2 presents the materials
and methods, in particular, the questionnaire; Section 3 presents the results; Section 4
discusses the results and concludes the analysis.
2. Materials and Methods
Management and Design (40.0%), followed by the School of Health (29.3%), the School
of Education and Sciences (22.7%), and the School of Biosciences of Elvas (8.0%). It is
noteworthy that most participants teach in more than one study cycle, with 25.3% teaching
in higher technical courses and bachelor’s degrees, 22.7% teaching bachelor’s and master’s
degrees, and 21.3% teaching only bachelor’s degrees, while 20.0% teach in the three study
cycles (higher technical courses, bachelor’s, and master’s degrees). The main areas of
basic training for participants are health (29.3%), social and behavioral sciences (13.3%),
and business sciences (12.0%).
The instrument used in the present study, designed by [25], covers the three dimensions
based on the existing literature (AI Literacy, AI Self-Efficacy, and AI Self-Management),
each containing more than one descriptor factor. The whole questionnaire can be consulted
in Table A1.
3. Results
The results of the questionnaire, presented in Table 1, reveal an average level of AI
literacy (3.28), highlighting that 62.4% of responses are at levels 3 and 4.
Factor  1 (Totally Disagree)  2 (Somewhat Disagree)  3 (Neither Disagree nor Agree)  4 (Somewhat Agree)  5 (Totally Agree)  Mean Values
AI Literacy (dimension mean 3.56)
  Use and apply AI  3.6%  10.0%  18.4%  33.6%  34.4%  3.85
  Know and understand AI  4.3%  14.1%  27.2%  40.5%  13.9%  3.46
  Detect AI  5.3%  19.1%  34.7%  32.4%  8.4%  3.20
  AI Ethics  4.0%  9.3%  21.8%  39.1%  25.8%  3.73
AI Self-Efficacy (dimension mean 2.86)
  AI Problem Solving  6.7%  14.2%  40.9%  25.8%  12.4%  3.23
  Learning  18.7%  33.8%  31.6%  12.0%  4.0%  2.49
AI Self-Management (dimension mean 3.41)
  AI Persuasion Literacy  4.9%  16.0%  40.0%  25.3%  13.8%  3.27
  Emotion Regulation  4.9%  6.2%  38.2%  30.7%  20.0%  3.55
Total  6.7%  15.6%  32.0%  30.4%  17.2%  3.28
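As an illustration only, the following Python sketch shows how the kind of figures reported in Table 1 (response-level percentages and factor means) could be computed from the raw questionnaire data. It assumes a data frame with one column per item coded 1 to 5; the file name, column names, and the partial factor-to-item mapping are hypothetical placeholders (the actual items are those of Appendix A, Table A1), and this is not the authors’ own analysis code.

```python
import pandas as pd

# Hypothetical factor-to-item mapping; the real items are listed in Appendix A (Table A1).
FACTORS = {
    "Use and apply AI": ["use_and_apply_1", "use_and_apply_2", "use_and_apply_3"],
    "Know and understand AI": ["know_and_understand_1", "know_and_understand_2"],
    # ... the remaining factors would be mapped to their items in the same way
}

def factor_summary(responses: pd.DataFrame) -> pd.DataFrame:
    """Response-level percentages (1-5) and mean per factor, in the spirit of Table 1."""
    rows = []
    for factor, items in FACTORS.items():
        answers = responses[items].stack()  # pool all answers given to the factor's items
        pct = answers.value_counts(normalize=True).sort_index() * 100
        rows.append({
            "Factor": factor,
            **{f"{level} (%)": round(float(pct.get(level, 0.0)), 1) for level in range(1, 6)},
            "Mean": round(float(answers.mean()), 2),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: one column per questionnaire item, values coded 1-5.
# responses = pd.read_csv("ppu_ai_literacy_responses.csv")
# print(factor_summary(responses))
```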
The AI Literacy dimension recorded the highest average response (3.56), highlighting
that the factor of using and applying AI had the highest average response (3.85), followed
by the ethics of AI factor with an average of 3.73. Still in this dimension, the knowing and
understanding AI factor had an average response of 3.46 and detecting AI was at 3.20 (see
Figure 1). In fact, most respondents report being able to use and apply AI, as well as to act in an ethical way regarding AI.
In turn, the AI Self-Efficacy dimension obtained the lowest average response (2.86),
highlighting that learning was the factor with the lowest average response (2.49). Specifi-
cally in this factor, 65.4% of participants responded with levels 2 and 3, reflecting that they
have more difficulty in handling problems and challenges related to AI (see Figure 2).
Finally, in the AI Self-Management dimension, which had an average response of 3.41,
the AI persuasion literacy factor had an average response of 3.27 and Emotion Regulation
had an average of 3.55, revealing respondents’ greater perception of the possibility of
controlling their emotions regarding the use of AI than in considering the influence of AI
in their daily life (see Figure 3).
Figure 1. Results of the AI Literacy dimension.

Figure 2. Results of the AI Self-Efficacy dimension.

A more detailed analysis of the means and standard deviations of each question can be seen in Appendix B (Table A2), with the respective mean values ranging from 2.40 (“Despite the rapid changes in the field of artificial intelligence, I can always keep up to date.”) to 4.25 (“I can operate AI applications in everyday life.”).

We continue our analysis by calculating the internal consistency of the instrument, obtaining a value of 0.930 for Cronbach’s Alpha. The correlation coefficients between each question and the total suggest good internal validity indices that exceed the critical index (<0.20), including the items with the lowest values, highlighting that the vast majority of items have a correlation greater than 0.35, with some items reaching an index above 0.5 (see Table 2). It is noteworthy that in the field of AI Self-Management, two factors related to AI persuasion literacy have a correlation value lower than 0.3. Even so, it was decided to keep them, as their elimination would not significantly improve the instrument and because, above all, the aim is to maintain the theoretical framework chosen for the objective of this study, that is, to assess the sample’s level of AI literacy.
Table 2. Item-Total Statistics.

Item  Scale Mean if Item Deleted  Scale Variance if Item Deleted  Corrected Item-Total Correlation  Cronbach’s Alpha if Item Deleted
Use and apply AI_1  94.55  289.278  0.395  0.929
Use and apply AI_2  94.59  289.111  0.406  0.929
Use and apply AI_3  95.00  283.405  0.492  0.928
Use and apply AI_4  94.75  286.273  0.462  0.928
Use and apply AI_5  94.92  287.507  0.417  0.929
Use and apply AI_6  95.88  288.539  0.320  0.931
Know and understand AI_1  95.51  278.632  0.663  0.926
Know and understand AI_2  95.36  276.396  0.736  0.925
Know and understand AI_3  95.36  278.828  0.671  0.926
Know and understand AI_4  95.39  277.267  0.674  0.925
Know and understand AI_5  95.11  280.772  0.633  0.926
Detect AI_1  95.53  278.793  0.662  0.926
Detect AI_2  95.76  277.023  0.705  0.925
Detect AI_3  95.52  287.415  0.436  0.929
AI Ethics_1  95.23  284.799  0.464  0.928
AI Ethics_2  94.95  279.889  0.609  0.926
AI Ethics_3  95.03  279.215  0.622  0.926
Problem Solving_1  95.32  275.626  0.728  0.925
Problem Solving_2  95.68  279.302  0.616  0.926
Problem Solving_3  95.71  277.994  0.718  0.925
Learning_1  96.16  274.812  0.770  0.924
Learning_2  96.40  277.865  0.675  0.925
Learning_3  96.37  276.156  0.724  0.925
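The reliability statistics reported above (a Cronbach’s Alpha of 0.930 and the corrected item-total correlations in Table 2) follow standard formulas, and a minimal sketch of how they can be reproduced is given below. It assumes the responses are available as a data frame with one column per questionnaire item (Likert codes 1 to 5); the file name and usage lines are hypothetical, and the snippet is an illustration rather than the authors’ actual procedure, whose output mirrors SPSS-style item-total statistics.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

def item_total_statistics(items: pd.DataFrame) -> pd.DataFrame:
    """Corrected item-total statistics, analogous to the columns of Table 2."""
    total = items.sum(axis=1)
    rows = []
    for col in items.columns:
        rest = items.drop(columns=col)  # the scale without the current item
        rows.append({
            "Item": col,
            "Scale Mean if Item Deleted": rest.sum(axis=1).mean(),
            "Scale Variance if Item Deleted": rest.sum(axis=1).var(ddof=1),
            "Corrected Item-Total Correlation": items[col].corr(total - items[col]),
            "Cronbach's Alpha if Item Deleted": cronbach_alpha(rest),
        })
    return pd.DataFrame(rows)

# Hypothetical usage: 'responses' holds one column per questionnaire item (Likert codes 1-5).
# responses = pd.read_csv("ppu_ai_literacy_responses.csv")
# print(round(cronbach_alpha(responses), 3))   # reported value: 0.930
# print(item_total_statistics(responses))
```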
To study the construct validity, the principal component factor analysis (PCFA) method with varimax rotation was chosen. Following the procedure, a Kaiser–Meyer–Olkin measure of 0.812 was obtained after rotation, which reflects a reasonable variance of the factors [39]. Bartlett’s test of sphericity is associated with a chi-square of 1783.012. Factor extraction followed the method advocated by [40], which consists of reading the scree plot. Results of the component factor analysis are presented in Table 3.
Original Dimension  Item  Component 1  Component 2  Component 3
AI Literacy
  Use and apply AI_1  0.407  0.807  −0.08
  Use and apply AI_2  0.414  0.83  −0.055
  Use and apply AI_3  0.509  0.758  0.082
  Use and apply AI_4  0.471  0.802  0.03
  Use and apply AI_5  0.431  0.787  0.001
  Use and apply AI_6  0.339  0.425  0.131
  Know and understand AI_1  0.721  −0.186  −0.052
  Know and understand AI_2  0.787  −0.096  −0.015
  Know and understand AI_3  0.733  −0.329  −0.168
  Know and understand AI_4  0.725  0.031  0.013
  Know and understand AI_5  0.684  0.037  0.012
  Detect AI_1  0.693  −0.078  0.252
  Detect AI_2  0.742  −0.222  0.236
  Detect AI_3  0.484  −0.221  0.297
  AI Ethics_1  0.523  −0.506  −0.186
  AI Ethics_2  0.666  −0.157  −0.461
  AI Ethics_3  0.676  −0.159  −0.407
AI Self-Efficacy
  Problem Solving_1  0.764  −0.098  −0.26
  Problem Solving_2  0.667  −0.096  −0.402
  Problem Solving_3  0.75  0.158  −0.067
  Learning_1  0.814  −0.057  0.02
  Learning_2  0.731  −0.258  0.034
  Learning_3  0.774  −0.061  0.006
AI Self-Management
  AI Persuasion Literacy_1  0.318  −0.549  −0.07
  AI Persuasion Literacy_2  0.288  −0.54  0.306
  AI Persuasion Literacy_3  0.431  0.023  0.286
  Emotion Regulation_1  0.332  −0.241  0.365
  Emotion Regulation_2  0.388  −0.077  0.621
  Emotion Regulation_3  0.47  0.043  0.457
After the analysis, reading of the scree plot suggested the existence of three factors. So, considering three factors, the results were relatively aligned with the reference instrument, with those factors explaining 58.6% of the variance found (the first factor explained 36.09%, the second 16.20%, and the third 6.31%).
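For readers who wish to replicate this construct-validity analysis, the sketch below implements, under stated assumptions, a principal component factor analysis on the correlation matrix followed by a classical varimax rotation, together with the usual chi-square approximation of Bartlett’s test of sphericity; the sorted eigenvalues it returns are those that would be inspected in a scree plot. It is a minimal NumPy illustration assuming `data` is a respondents-by-items matrix of Likert scores; it is not the authors’ code, and dedicated statistical packages can of course be used instead.

```python
import numpy as np

def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6) -> np.ndarray:
    """Classical varimax rotation of a loading matrix (items x factors)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion_old = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        tmp = rotated ** 3 - rotated @ np.diag(np.sum(rotated ** 2, axis=0)) / p
        u, s, vt = np.linalg.svd(loadings.T @ tmp)
        rotation = u @ vt
        criterion_new = np.sum(s)
        if criterion_new - criterion_old < tol:
            break
        criterion_old = criterion_new
    return loadings @ rotation

def pcfa(data: np.ndarray, n_factors: int = 3):
    """Principal component factor analysis on the correlation matrix, varimax-rotated."""
    corr = np.corrcoef(data, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(corr)
    order = np.argsort(eigenvalues)[::-1]               # descending, as read from a scree plot
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
    explained = eigenvalues / eigenvalues.sum() * 100   # % of variance per component
    loadings = eigenvectors[:, :n_factors] * np.sqrt(eigenvalues[:n_factors])
    return varimax(loadings), eigenvalues, explained

def bartlett_sphericity(data: np.ndarray):
    """Chi-square approximation of Bartlett's test of sphericity."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi_square = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) / 2
    return chi_square, dof

# Hypothetical usage: 'data' is a (75 respondents x 29 items) array of Likert scores.
# rotated_loadings, eigenvalues, explained = pcfa(data, n_factors=3)
# chi_square, dof = bartlett_sphericity(data)
# print(explained[:3].sum())   # the three retained components explained 58.6% in this study
```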
limitations identified. This leads to suggesting future work applying the instrument
to the student body at PPU and expanding similar studies to Portuguese polytechnic
higher education, looking for possible predictors in a broader educational community and
identifying intervention priorities to increase AI literacy in academia in Portugal. The fact
of applying this study to a single higher education institution is another limitation. In
the future, these results could be complemented with other assessments not only in other
institutions, but also in other professional fields, including technical and non-technical
professions, and in other areas where AI could be used. A final note is that this is one of
the first applications of the questionnaire to higher education, making it difficult to make
comparisons. This should also be considered in future work, even comparing different
kinds of higher education institutions.
Author Contributions: Conceptualization, E.L., C.G., and P.F.; methodology, E.L., C.G., and P.F.;
validation, E.L., C.G., and P.F.; formal analysis, E.L., C.G., and P.F.; data curation, E.L., C.G., and P.F.;
writing—original draft preparation, E.L., C.G., and P.F.; writing—review and editing, E.L., C.G., and
P.F. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by Fundação para a Ciência e a Tecnologia (grant UIDB/05064/2020).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data will be supplied on request.
Conflicts of Interest: The authors declare no conflicts of interest.
Appendix A
Appendix B
References
1. Bates, A.W. Educar na Era Digital: Design, Ensino e Aprendizagem; Tecnologia Educacional; Artesanato Educacional: São Paulo,
Brazil, 2017.
2. Ergen, M. What is Artificial Intelligence? Technical Considerations and Future Perception. Anatol. J. Cardiol. 2019, 22, 5–7.
[CrossRef] [PubMed]
3. Ganascia, J.-G. A Inteligência Artificial; Biblioteca Básica da Ciência e Cultura; Instituto Piaget: Lisbon, Portugal, 1993.
4. Humble, N.; Mozelius, P. The threat, hype, and promise of artificial intelligence in education. Discov. Artif. Intell. 2022, 2, 22.
[CrossRef]
5. Tavares, L.A.; Meira, M.C.; Amaral, S.F.D. Inteligência Artificial na Educação: Survey. Braz. J. Dev. 2020, 6, 48699–48714.
[CrossRef]
6. Oliveira, L.; Pinto, M. A Inteligência Artificial na Educação—Ameaças e Oportunidades para o Processo Ensino-Aprendizagem.
2023. Available online: http://hdl.handle.net/10400.22/22779 (accessed on 8 February 2024).
7. Ayed, I.A.H. Oman Higher Education Institutions Dealing with Artificial Intelligence. CIEd—PhD Theses in Education, 2022. Available online: https://hdl.handle.net/1822/76188 (accessed on 8 February 2024).
8. Miranda, P.; Isaias, P.; Pifano, S. Digital Literacy in Higher Education: A Survey on Students’ Self-assessment. In Learning and
Collaboration Technologies. Learning and Teaching; Zaphiris, P., Ioannou, A., Eds.; Lecture Notes in Computer Science; Springer
International Publishing: Cham, Switzerland, 2018; Volume 10925, pp. 71–87. [CrossRef]
9. Ng, D.T.K.; Wu, W.; Leung, J.K.L.; Chu, S.K.W. Artificial Intelligence (AI) Literacy Questionnaire with Confirmatory Factor
Analysis. In Proceedings of the IEEE International Conference on Advanced Learning Technologies (ICALT), Orem, UT, USA,
10–13 July 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 233–235. [CrossRef]
10. Long, D.; Magerko, B. What is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference
on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA; pp. 1–16. [CrossRef]
11. Hornberger, M.; Bewersdorff, A.; Nerdel, C. What do university students know about Artificial Intelligence? Development and
validation of an AI literacy test. Comput. Educ. Artif. Intell. 2023, 5, 100165. [CrossRef]
12. Committee on Artificial Intelligence; Council of Europe. Draft Framework Convention on Artificial Intelligence, Human Rights,
Democracy and The Rule of Law; Council of Europe: Strasbourg, France, 2023.
13. Council of the European Union. Proposal for a Regulation of the European Parliament and of the Council amending Regulation
(EC) No 561/2006 as Regards Minimum Requirements on Minimum Breaks and Daily and Weekly Rest Periods in the Occasional
Passenger Transport Sector—Analysis of the Final Compromise Text with a View to Agreement. 2024. Available online:
https://data.consilium.europa.eu/doc/document/ST-6021-2024-INIT/en/pdf (accessed on 28 March 2024).
14. AMA. Guia para uma Inteligência Artificial Ética, Transparente e Responsável na AP. Available online: https://www.sgeconomia.
gov.pt/destaques/amactic-guia-para-uma-inteligencia-artificial-etica-transparente-e-responsavel-na-ap.aspx (accessed on
2 November 2023).
15. Boulay, B. The overlapping ethical imperatives of human teachers and their Artificially Intelligent assistants. In The Ethics of
Artificial Intelligence in Education, 1st ed.; Routledge: New York, NY, USA, 2022; pp. 240–254. [CrossRef]
16. Boulay, B. Artificial Intelligence in Education and Ethics. In Handbook of Open, Distance and Digital Education; Zawacki-Richter, O.,
Jung, I., Eds.; Springer Nature: Singapore, 2023; pp. 93–108. [CrossRef]
17. Flores-Vivar, J.-M.; García-Peñalvo, F.-J. Reflections on the ethics, potential, and challenges of artificial intelligence in the
framework of quality education (SDG4). Comun. Rev. Científica Comun. Educ. 2023, 31, 37–47. [CrossRef]
18. Yildiz, Y. Ethics in education and the ethical dimensions of the teaching profession. ScienceRise 2022, 4, 38–45. [CrossRef]
19. Eaton, S.E. Postplagiarism: Transdisciplinary ethics and integrity in the age of artificial intelligence and neurotechnology. Int. J.
Educ. Integr. 2023, 19, 23. [CrossRef]
20. Howley, I.; Mir, D.; Peck, E. Integrating AI ethics across the computing curriculum. In The Ethics of Artificial Intelligence in
Education, 1st ed.; Routledge: New York, NY, USA, 2022; pp. 255–270. [CrossRef]
21. Remian, D. Augmenting Education: Ethical Considerations for Incorporating Artificial Intelligence in Education. Master’s
Thesis, University of Massachusetts, Boston, MA, USA, 2019. Instructional Design Capstones Collection 52. Available online:
https://scholarworks.umb.edu/instruction_capstone/52 (accessed on 8 February 2024).
22. Holmes, W.; Porayska-Pomsta, K.; Holstein, K.; Sutherland, E.; Baker, T.; Shum, S.; Santos, O.; Rodrigo, M.; Cukurova, M.;
Bittencourt, I.; et al. Ethics of AI in Education: Towards a Community-Wide Framework. Int. J. Artif. Intell. Educ. 2022, 32,
504–526. [CrossRef]
23. Bom, L. Regresso às provas orais. In 88 Vozes sobre Inteligência Artificial, 1st ed.; Camacho, F., Ed.; Oficina do Livro: Alfragide,
Portugal, 2023; pp. 431–437.
24. Nguyen, A.; Ngo, H.N.; Hong, Y.; Dang, B.; Nguyen, B.-P.T. Ethical principles for artificial intelligence in education. Educ. Inf.
Technol. 2023, 28, 4221–4241. [CrossRef] [PubMed]
25. Carolus, A.; Koch, M.J.; Straka, S.; Latoschik, M.E.; Wienrich, C. MAILS—Meta AI literacy scale: Development and testing of an
AI literacy questionnaire based on well-founded competency models and psychological change- and meta-competencies. Comput.
Hum. Behav. Artif. Hum. 2023, 1, 100014. [CrossRef]
26. Ng, D.T.K.; Leung, J.K.L.; Chu, S.K.W.; Qiao, M.S. Conceptualizing AI literacy: An exploratory review. Comput. Educ. Artif. Intell.
2021, 2, 100041. [CrossRef]
27. Holmes, W.; Persson, J.; Chounta, I.-A.; Wasson, B.; Dimitrova, V. Artificial Intelligence and Education: A Critical View through the
Lens of Human Rights, Democracy and the Rule of Law; Council of Europe: Strasbourg, France, 2022.
28. Birks, D.; Clare, J. Linking artificial intelligence facilitated academic misconduct to existing prevention frameworks. Int. J. Educ.
Integr. 2023, 19, 20. [CrossRef]
29. Mavrikis, M.; Geraniou, E.; Santos, S.; Poulovassilis, A. Intelligent analysis and data visualisation for teacher assistance tools: The
case of exploratory learning. Br. J. Educ. Technol. 2019, 50, 2920–2942. [CrossRef]
30. Olari, V.; Romeike, R. Addressing AI and data literacy in teacher education: A review of existing educational frameworks. In
Proceedings of the 16th Workshop in Primary and Secondary Computing Education (WiPSCE ‘21), Virtual Event, 18–20 October
2021. [CrossRef]
31. Kuleto, V.; Ilić, M.; Dumangiu, M.; Ranković, M.; Martins, O.M.D.; Păun, D.; Mihoreanu, L. Exploring Opportunities and
Challenges of Artificial Intelligence and Machine Learning in Higher Education Institutions. Sustainability 2021, 13, 10424.
[CrossRef]
32. McGrath, C.; Pargman, T.; Juth, N.; Palmgren, P. University teachers’ perceptions of responsibility and artificial intelligence in
higher education—An experimental philosophical study. Comput. Educ. Artif. Intell. 2023, 4, 100139. [CrossRef]
33. Vazhayil, A.; Shetty, R.; Bhavani, R.; Akshay, N. Focusing on Teacher Education to Introduce AI in Schools: Perspectives and
Illustrative Findings. In Proceedings of the 2019 IEEE Tenth International Conference on Technology for Education (T4E), Goa,
India, 9–11 December 2019; pp. 71–77. [CrossRef]
34. Wang, B.; Rau, P.-L.P.; Yuan, T. Measuring user competence in using artificial intelligence: Validity and reliability of artificial
intelligence literacy scale. Behav. Inf. Technol. 2023, 42, 1324–1337. [CrossRef]
35. Ajzen, I. From Intentions to Actions: A Theory of Planned Behavior; Springer: Berlin/Heidelberg, Germany, 1985.
36. Carolus, A.; Augustin, Y.; Markus, A.; Wienrich, C. Digital interaction literacy model—Conceptualizing competencies for literate
interactions with voice-based AI systems. Comput. Educ. Artif. Intell. 2023, 4, 100114. [CrossRef]
37. Cetindamar, D.; Kitto, K.; Wu, M.; Zhang, Y.; Abedin, B.; Knight, S. Explicating AI Literacy of Employees at Digital Workplaces.
IEEE Trans. Eng. Manag. 2024, 71, 810–823. [CrossRef]
38. Dai, Y.; Chai, C.-S.; Lin, P.-Y.; Jong, M.S.-Y.; Guo, Y.; Qin, J. Promoting Students’ Well-Being by Developing Their Readiness for the
Artificial Intelligence Age. Sustainability 2020, 12, 6597. [CrossRef]
39. Martinez, L.; Ferreira, A. Análise dos Dados com SPSS. Primeiros Passos; Escolar Editora: Lisbon, Portugal, 2007.
40. Cattell, R. The scree test for the number of factors. Multivar. Behav. Res. 1966, 1, 245–276. [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.