Students' Behavioural Intention To Use Content Generative AI Tools
https://doi.org/10.1007/s10639-025-13441-8
Abstract
Generative Artificial Intelligence tools have the potential to impact students' learning significantly and positively in several ways. However, the factors responsible for students' behavioural intentions to use these tools are still not fully understood, especially in the context of Nigerian higher education institutions (HEIs). To support students' use of Content Generative Artificial Intelligence (CG-AI) tools for
learning and research purposes, it is important that HEI administrators and policy
makers understand these factors. Therefore, the purpose of this study is to exam-
ine the factors that influence Nigerian students’ behavioural intentions to use CG-AI
tools for learning and research. Based on the structural equation modelling technique,
this study uses the unified theory of acceptance and use of technology (UTAUT) to
examine the relationship between six constructs and students’ behavioural intentions
to use CG-AI. Employing a paper-based survey, responses from 289 students in the Department of Computer Science at a state university in northern Nigeria were obtained. A two-step approach (Confirmatory Factor Analysis and Path Analysis)
was used to analyse the relationships between both observed and latent variables.
The findings showed that three factors, performance expectancy (β = 0.551, p < 0.001), effort expectancy (β = 0.466, p < 0.001), and social influence (β = 0.507, p < 0.001), were observed to be determinants of behavioural intentions to use CG-AI
tools. Facilitating conditions, perceived risks, and attitude towards technology, on
the other hand, showed no significant impact on students’ behavioural intention to
use CG-AI tools.
1 Introduction
In this study, the UTAUT model is used as a theoretical lens to identify the factors responsible for Nigerian students' use of CG-AI tools such as ChatGPT for learning and research. The
findings of this research will provide insights that can be used to develop policies
and strategies on the use of CG-AI tools that ensure learning outcomes are met,
to ensure fair access to the benefits of CG-AI tools, and to preserve the academic
integrity of an increasingly CG-AI-reliant society.
The rest of the study is structured as follows: firstly, we review the literature relevant to this study, with emphasis on the use of generative AI in education and on how the UTAUT framework has been used in the context of CG-AI tools in educational settings; this is followed by the hypothesized model. Next, the
research methods used to accomplish the purpose of this study are discussed. Then
the analysis of the data and the results of the analysis are presented. The next sec-
tion, the discussion, is where the results are interpreted and explained in the context
of the research questions and existing literature. Finally, we conclude the research by
stating the limitations of the study and then the study’s contributions to theory and
practice.
2 Literature review
2.1 Generative AI in education
Education has evolved considerably in recent times, and content generative AI is the latest technology to become intertwined with it. As a result,
there is an increasing number of learners who rely on online learning environments
that cater for personalized learning experiences (Denny et al., 2023). This means
that there is a need to produce high-quality electronic educational content. CG-AI is
based on large language models (LLMs). LLMs rely on the principles of deep learn-
ing to process and generate text that mimics human natural language. LLMs under-
stand the complexity and structures in language data, and they use this in processing
large data sets to capture existing semantic relationships (Hadi et al., 2024). This is
done by training models on large data sets, enabling them to understand and generate content in response to input parameters. LLMs can not only generate text but can also perform summarisation and code generation, and power conversational agents and chatbots (as they provide the language understanding and generation capabilities required to engage in conversations). LLMs are limited as they are
trained and applied to data in textual formats (Wu et al., 2023). On the other hand,
Multi-Modal Large Language Models (MLLMs) can overcome this limitation through their ability to incorporate diverse data types. Examples of MLLMs are GPT (Genera-
tive Pretrained Transformer) by OpenAI, Gemini by Google, and Microsoft Copilot.
Multimodal perception is essential in enabling artificial intelligence as it facilitates
the acquisition of knowledge and real-world interaction (Wu et al., 2023). This capa-
bility improves the functionality of LLMs.
In the context of education, CG-AI tools are used by curriculum developers for content development, assessment, research, and professional development (Kasneci et al., 2023; Kurdi et al., 2020). On the other hand, learners use CG-AI tools as
virtual tutors, for research and writing tasks, and the development of problem-solv-
ing and critical thinking skills (Kasneci et al., 2023; Lin, 2024).
For both educators and learners, there is therefore an increasing reliance on
CG-AI tools for teaching, research, and learning. This has brought about several
challenges in the realm of higher education. For example, the concept of plagiarism
in higher education is no longer as simple as it used to be (Hutson, 2024). Higher
education institutions (HEIs) have had to revise their academic policies and pedagogical practices, as students' use of AI has outpaced their educators' understanding of it. This is especially true for HEIs in developing countries, where technologies such as Turnitin that can detect the use of AI in assessments are not affordable. This means there will be differences in the factors that drive the use of CG-AI tools between students in developed countries and their counterparts in developing countries, where the policies and tools required to guide the use of such tools are non-existent.
2.2 The UTAUT model

Several models have been developed over the years in a bid to understand the factors responsible for influencing individuals' use of technologies. The Unified Theory of
Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003) introduced a
unified model that comprehensively integrated elements from eight existing technol-
ogy acceptance models, namely: the Theory of Reasoned Action (TRA), the Tech-
nology Acceptance Model (TAM), the Motivational Model (MM), the Theory of
Planned Behaviour (TPB), a combined model of TPB and TAM, the innovation dif-
fusion theory (IDT), the social cognitive theory (SCT), and the model of PC utiliza-
tion (MPCU).
In its simplest form, the result of this integration is a parsimonious research
model that consists of four fundamental constructs (Performance Expectancy, Effort
Expectancy, Social Influence, and Facilitating Conditions) to explain users’ technol-
ogy acceptance. These constructs are hypothesized to be direct determinants of an
individuals’ behavioural intention to use technology in voluntary environments, see
Fig. 1.
UTAUT has been used by a myriad of studies across varying technological con-
texts, such as e-learning systems (Abbad, 2021), mobile banking (Rachmawati et al.,
2020), health technologies (Edo et al., 2023), and several other technologies. In
the context of the use of CG-AI tools, while there are numerous studies that have
explored factors that influence students’ acceptance and use of CG-AI tools for
learning purposes (Chatterjee & Bhattacharjee, 2020; Du & Lv, 2024), only a hand-
ful were conducted in higher education institutions in developing countries.
Research suggests that the acceptance of technology is often multifaceted,
with students from differing backgrounds showing different perceptions about the
use of technology (Arenas-Gaitan et al., 2011). Considering CG-AI applications
such as ChatGPT are relatively new tools, the factors that influence new users’ per-
ception of such technologies are not yet properly understood. Previous studies have
called for research that focuses on students’ perceptions of CG-AI tools in learning
and research, as a significant gap in research exists (Zastudil et al., 2023). This study
aims to fill this gap to an extent by investigating students’ behavioural intentions
to use CG-AI tools in the context of a developing country, specifically in Nigeria,
where policies that dictate the use of such tools have not yet been fully formulated
or implemented. To achieve this, the UTAUT model was selected for this study due to its robustness, validity, and reliability in measuring students' acceptance levels (Yılmaz et al., 2023). Also, UTAUT's extensive application in various fields, includ-
ing education, make it an appropriate lens to examine students’ perception of CG-AI
tools for learning.
In studies investigating students' behavioural intentions to use CG-AI tools, all four constructs (performance expectancy, effort expectancy, social influence, and facilitating conditions) were observed to significantly influence behavioural intention and usage of CG-AI tools such as ChatGPT (Widyaningrum et al., 2024; Wu et al., 2022). Also, language barriers and ethical concerns were identified as potential obstacles to AI integration in education (Kanont et al., 2024); these findings under-
score the importance of addressing psychosocial factors and institutional support in
promoting CG-AI adoption in higher education institutions.
2.3 Hypothesis
Figure 2 below shows the hypothesized research model for this study, where the original four key constructs (performance expectancy, effort expectancy, social influence, and facilitating conditions) are retained and two additional constructs (Perceived Risks and Attitude Towards Technology) are added. Perceived risk was
added as a construct to help understand Nigerian students’ perceptions related to the
use of CG-AI tools, considering how novel these tools are. Prior studies have shown
that the higher the risks involved with using new technologies, the lower the individ-
ual’s intention is to use it (Hanafizadeh et al., 2014). In relation to learning, assess-
ment, and research, perceived risk has been used with the UTAUT framework to investigate student intentions to use e-learning systems (Alwahaishi, 2021), AI-assisted learning environments (Wu et al., 2022), and ChatGPT for assessment support (Lai et al., 2024). Attitude towards technology was adapted from the technology acceptance model (TAM) and was included in the theoretical model as it captures an individual's complete affective reaction to the use of technology. It is a determinant of behavioural intention to use technologies such as e-learning (Šumak et al., 2010) and AI-related technologies (Andrews et al., 2021).
PE is defined as “the degree to which an individual believes that using the system
will help him or her to attain gains in job performance” (Venkatesh et al., 2003, p.
447). This study adapts this definition as a student's belief that using CG-AI tools
will improve their learning and research process. Recent research has also observed
that PE is a significant predictor of behavioural intentions to use CG-AI tools in
educational settings (Du & Lv, 2024; Strzelecki & ElArabawy, 2024; Bouteraa et al.,
2024). The usefulness of CG-AI tools, specifically ChatGPT, is fully appreciated
based on the extent to which it fulfils the users’ expectations (Sharma et al., 2022).
This study thus proposes:

H1: Performance expectancy positively influences students' behavioural intention to use CG-AI tools.
According to Venkatesh et al. (2003), EE is the ease associated with using a sys-
tem. It implies that if an individual deems a system as easy to use, they are more
likely to use the system. Consistent with the UTAUT model, a myriad of studies
has supported that EE has a significant influence on students’ behavioural intention
to use technology (Du & Lv, 2024; Strzelecki & ElArabawy, 2024). EE, like PE, is
also a strong predictor of behavioural intentions to use technologies and studies have
shown that as the ease in using a system increases, then so does the level of intention
to use the technology (Tahar et al., 2020). Therefore, we propose:

H2: Effort expectancy positively influences students' behavioural intention to use CG-AI tools.
ATT is a construct adopted from the technology acceptance model (TAM) by Davis et al. (1989). It refers to an individual's perceptions (positive or negative) about using a specific technology. TAM postulates that ATT has an influence on an individual's behavioural intention to use technology. In the context of students' adoption
of CG-AI tools, several studies have supported this relationship (Kanont et al., 2024;
Ivanov et al., 2024). In this paper, ATT is defined as the positive or negative percep-
tions towards using CG-AI tools to aid in their research or learning process. This
study therefore posits:

H5: Attitude towards technology positively influences students' behavioural intention to use CG-AI tools.
In developed countries where policies and training help to guide the use of CG-AI tools, PR is expected to have an insignificant influence on the use of such tools, as the risks are reduced (Chatterjee & Bhattacharjee, 2020). However, in developing countries where these tools are yet to be fully understood, it is expected that:

H6: Perceived risk negatively influences students' behavioural intention to use CG-AI tools.
3 Methodology
This section outlines the research approach taken to investigate students’ percep-
tion of using CG-AI tools for their learning and research processes. Details on the
research design, participants, data collection methods, and data analysis techniques
are presented here. The research design is focused on addressing the knowledge gap
identified in the literature review section. The purpose of the study is to determine
the factors responsible for Nigerian students’ use of CG-AI tools for learning and
research purposes.
Based on the population characteristics and the objectives of the study, a purposive sampling method was used to recruit the study participants, and a quantitative methodology was adopted, in line with previous studies founded on technology adoption models.
To empirically validate the hypotheses formulated in this study, data was collected using a paper-based survey, and the target population was all registered students in the Department of Computer Science at a state university in northern Nigeria. This included both undergraduate and postgraduate (MSc and PhD) students. There were no restrictions on the target population in relation to age or gender; however, this information was collected to aid in describing the respondents.
A total of 304 responses were received from the participants; after eliminating incomplete surveys, 289 valid responses were obtained. The survey was administered in the national language (English). Prior to the collection of data, two reviewers assisted in reviewing the survey to establish the relevance of the questions to the research objectives and the overall structure of the survey. The reviewers were an associate professor in the field of Information Systems (PhD in Information Systems) and a professor in the domain of Computer Science (PhD in Computer Science). The two reviewers have a combined total of over 36 years of experience in teaching and researching computing subjects, and over 122 published journal articles and conference papers between them.
The survey instrument was made up of two sections. Section A collected demographic information such as age, gender, level of study, and the AI tool of choice for learning or research purposes. Age was grouped into categories (Below 20, 21–30, 31–40, 41–50, and Over 50). Dummy variables were used to code gender as 1 and 2 to represent male and female, respectively. Level of study was captured as either Undergraduate or Postgraduate. A Yes or No response was required for the question pertaining to support in using CG-AI tools from either the lecturer or the school. Finally, the question relating to the choice of CG-AI tool used by the respondents was open-ended, allowing them to respond with the name of their preferred tool.
Section B contained Likert-styled statements to collect the students’ responses
in relation to the constructs used in the research model (Performance Expectancy,
Social Influence, Effort Expectancy, Facilitating Conditions, Attitude Towards Tech-
nology, Perceived Risks, and Behavioural Intention). The constructs were measured
with 4 or 5 items using a 5-point Likert scale which ranged from strongly agree (1) to strongly disagree (5).
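As an illustration of how such Likert items are typically scored, the sketch below computes per-respondent construct scores by averaging item responses. The responses and item count are made up for illustration, not taken from the study; note that because the scale runs from 1 = strongly agree to 5 = strongly disagree, lower means indicate more positive responses.

```python
import numpy as np

# Hypothetical responses from three students to four Performance
# Expectancy items (illustrative data only). Scale: 1 = strongly agree
# ... 5 = strongly disagree, so LOWER scores are MORE positive.
pe_items = np.array([
    [1, 2, 1, 2],   # student 1
    [2, 2, 3, 2],   # student 2
    [1, 1, 2, 1],   # student 3
])

# Construct score per student = mean of that student's item responses.
pe_scores = pe_items.mean(axis=1)
print(pe_scores)          # per-student construct scores
print(pe_scores.mean())   # construct mean across respondents
```

A construct mean under 2 on this coding, as reported later for most constructs, indicates that respondents agreed with the items on average.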
Prior to administering the survey, the survey instrument was validated via a pilot test in which 20 students completed the survey, to ensure that the survey wording was unambiguous and the questions were reliable and valid. Consent was sought from the respondents, who were notified of the benefits and risks associated with the research and assured that their responses would be completely anonymous.
To determine the factors responsible for influencing students' use of CG-AI tools, the proposed research model was used and the constructs were tested using structural equation modelling (SEM). IBM's Statistical Package for the Social Sciences (SPSS) and Analysis of Moment Structures (AMOS) were the chosen tools to analyse the data and to test all six hypotheses. A two-step approach was used to investigate the relationship between the constructs (Anderson & Gerbing, 1988).
4 Results

As mentioned in the previous section, the collected data was analysed using SPSS.
To understand the respondents’ demography, descriptive analysis was performed.
Table 1 shows that 70.20% of the respondents were male while 29.8% were female.
In terms of the age groups, over half of the participants (50.20%) were between the ages of 21 and 30 years. This was followed by 28% aged 31 to 40 years; 14.20% were below the age of 20, 6.90% were between 41 and 50 years, and 0.70% were over 50 years of age. Regarding the level of studies, 66.80% identified
as undergraduates and 33.20% were at the postgraduate level. Finally, only 22.10%
claimed that they received support from their lecturers/school in using CG-AI tools
for learning and research purposes as opposed to 77.90% who did not.
Table 2 below shows the choice of AI tools as informed by the participants. Chat-
GPT seems to be the preferred AI tool with almost half the responses (47.30%),
closely followed by Gemini with 29.50%. The least used tool is Deep Chat (0.40%)
followed by Elicit (1.10%).
In the second section of the survey instrument, where Likert items were used to collect responses based on the constructs in the hypothesized model, structural equation modelling (SEM) was used to analyse the data. Firstly, multicollinearity tests were conducted on the six constructs' data with respect to the dependent variable (BI). The results (Table 3) showed that the Variance Inflation Factor (VIF) and tolerance values fall within the accepted ranges. VIF and tolerance are measures of the degree of multicollinearity (O'Brien, 2007). A VIF of over 4.00 and a tolerance of less than 0.25 indicate potential multicollinearity problems (Hair et al., 2010; Mason et al., 1989). The resulting VIF range of 1.41 to 3.04 is less than 4, while the tolerance values are all above 0.25, indicating inconsequential collinearity.
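The VIF and tolerance diagnostics described above can be reproduced with a short script. The sketch below uses simulated predictors rather than the study's data: each column is regressed on the others, and VIF = 1/(1 − R²) with tolerance = 1/VIF.

```python
import numpy as np

def vif_and_tolerance(X):
    """Return a list of (tolerance, VIF) pairs, one per column of X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns (with an intercept); tolerance = 1 / VIF.
    """
    n, k = X.shape
    stats = []
    for j in range(k):
        y = X[:, j]
        # Design matrix: intercept plus all other columns.
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        ss_res = np.sum((y - A @ coef) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        vif = 1.0 / (1.0 - r2)
        stats.append((1.0 / vif, vif))
    return stats

# Simulated predictors: x2 is correlated with x1, x3 is independent.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=200)
x3 = rng.normal(size=200)
stats = vif_and_tolerance(np.column_stack([x1, x2, x3]))
for name, (tol, vif) in zip(["x1", "x2", "x3"], stats):
    print(f"{name}: tolerance={tol:.2f}  VIF={vif:.2f}")
```

The correlated pair produces elevated VIFs while the independent predictor stays near 1, mirroring the VIF/tolerance reading reported in Table 3.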
4.1 Measurement model
The descriptive analysis for the constructs used in the research model is captured in Table 4. The means for all constructs, except for ATT and PR, are all under 2,

Table 3  Multicollinearity statistics

Construct   Tolerance   VIF
SI          0.34        3.04
EE          0.49        2.20
PE          0.72        1.41
PR          0.61        1.85
ATT         0.46        2.18
FC          0.40        2.57

Table 4  Descriptive analysis of the constructs: Construct, Mean, Std Deviation, Cronbach's Alpha, Number of items

showing that the responses for the attributes are positive. Internal reliability was
established via Cronbach's alpha. George and Mallery (2003) suggest that Cronbach's alpha values are good if they are over 0.8 and excellent if over 0.9. All the constructs displayed values of over 0.8 except for PR (0.718); this, however, is acceptable according to Salkind (2015).
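Cronbach's alpha can be computed directly from an item-response matrix. A minimal sketch, using simulated responses driven by a single latent trait rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) response matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: four items, each the latent trait plus noise
# (simulated, not the study's responses).
rng = np.random.default_rng(1)
latent = rng.normal(size=500)
items = np.column_stack(
    [latent + 0.5 * rng.normal(size=500) for _ in range(4)]
)
alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.3f}")
```

Items that share a strong common factor, as here, yield alpha well above the 0.8 threshold cited in the text.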
To understand the relationships that exist between the constructs within the research model, a two-step approach was employed. Firstly, a confirmatory factor analysis based on the maximum-likelihood estimation (MLE) method was used to estimate the model's parameters. MLE is a method used to determine parameters, such as means and standard deviations, of a normally distributed random sample (Wasserman, 2013).
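For a normal sample, these MLEs have closed forms: the sample mean, and the square root of the divide-by-n (biased) variance. A quick numerical illustration on simulated data:

```python
import numpy as np

# For x_1..x_n ~ N(mu, sigma^2), the maximum-likelihood estimates are:
#   mu_hat    = (1/n) * sum(x_i)                      (sample mean)
#   sigma_hat = sqrt((1/n) * sum((x_i - mu_hat)^2))   (divide by n, not n-1)
rng = np.random.default_rng(42)
x = rng.normal(loc=2.0, scale=3.0, size=10_000)

mu_hat = x.mean()
sigma_hat = np.sqrt(np.mean((x - mu_hat) ** 2))  # equals x.std(ddof=0)
print(mu_hat, sigma_hat)  # close to the true values 2.0 and 3.0
```

In a CFA the same maximum-likelihood principle is applied to the full covariance structure rather than to a single variable, which is what AMOS does internally.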
The next step was to conduct a goodness-of-fit test based on the proposed model. Table 4 captures the actual measurement model values in comparison to the recommended values for the fit indicators, as recommended by Kline (2015), Hu and Bentler (1999), and Hair et al. (2010). All measured indices were observed to fall within the suggested values. To confirm the reliability of the measured factors, composite reliability (CR) was used; as captured in Table 5, all items measured exhibited a CR reading above the recommended value of 0.70 (Hair et al., 2010). Validity tests were run on the constructs to measure the average variance extracted (AVE). The AVE of all constructs was observed to be above the recommended value of 0.5, and in each case the AVE was less than the corresponding CR.
Discriminant validity was established by ensuring that the ASV (average shared variance) and the MSV (maximum shared variance) were less than the corresponding AVE value (Hair et al., 2010). Also, the square root of the AVE is expected to be higher than the correlation values (Bagozzi & Yi, 1988). Based on these conditions, discriminant validity was established.

Table 5  Construct reliability, convergent validity, discriminant validity, and factor correlation matrix (columns: CR, AVE, MSV, MaxR(H), PE, SI, FC, ATT, BI, EE, PR)

CFI comparative fit index, GFI goodness-of-fit index, AGFI adjusted goodness-of-fit index, RMSEA root mean square error of approximation, RMSR root mean square residuals, NFI normed fit index

The next step was to test the hypotheses by investigating the research model (structural model), considering all constructs had been shown to be reliable and valid.
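The CR, AVE, and Fornell–Larcker checks described above can be sketched directly from standardized factor loadings. The loadings and the inter-construct correlation below are illustrative assumptions, not the study's reported values:

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum l)^2 / ((sum l)^2 + sum(1 - l^2)) for standardized loadings."""
    l = np.asarray(loadings, dtype=float)
    num = l.sum() ** 2
    return num / (num + np.sum(1 - l ** 2))

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    l = np.asarray(loadings, dtype=float)
    return np.mean(l ** 2)

# Hypothetical standardized loadings for a four-item construct
# (illustrative values, not taken from Table 5).
pe_loadings = [0.82, 0.79, 0.85, 0.77]
corr_pe_ee = 0.55  # assumed correlation with another construct

cr_pe = composite_reliability(pe_loadings)
ave_pe = average_variance_extracted(pe_loadings)
print(f"CR(PE) = {cr_pe:.3f}, AVE(PE) = {ave_pe:.3f}")

# Fornell-Larcker criterion: sqrt(AVE) must exceed the construct's
# correlation with every other construct.
print("discriminant validity:", np.sqrt(ave_pe) > corr_pe_ee)
```

With these loadings, CR exceeds 0.70, AVE exceeds 0.5, and the square root of the AVE exceeds the assumed correlation, the same three conditions applied to Table 5.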
4.2 Structural model
To examine the relationships between the constructs, a structural model was built.
Table 6 shows that the goodness-of-fit readings for the structural model are all within the recommended values, indicating that the data fits the structural model.
Based on the research model, Table 7 below shows the associations between the hypothesized variables. It captures the path coefficients, indicating that all the hypothesized relationships were supported except for hypotheses H4, H5, and H6. PE (β = 0.551, p < 0.001), EE (β = 0.466, p < 0.001), and SI (β = 0.507, p < 0.001) had a positive and significant effect on BI. PR had a negative but insignificant influence on BI, while FC and ATT were observed to have an insignificant effect on BI. Thus, hypotheses H1, H2, and H3 were supported, but there was no support for hypotheses H4, H5, and H6.
5 Discussion
As established earlier, the purpose of this study is to identify the factors that influ-
ence students’ behavioural intentions to use CG-AI tools for learning and research
purposes in the context of a developing country and specifically in northern Nigeria.
The research utilized an adapted version of the UTAUT model. The findings of this
study provide valuable insights into how CG-AI tools are perceived by students and
the factors that influence the adoption of such tools.
Owing to the disruptive effect of CG-AI tools, there has been a rise in issues pertaining to academic integrity and plagiarism. Therefore, there is a need to address CG-AI usage in HEIs (Moorhouse et al., 2023). Students need to be informed of the ethical ways to use CG-AI tools as well as the penalties that will be applied for their misuse. Educational institutions also need to develop policies and guidelines on their use (Moorhouse et al., 2023). From the descriptive statistics in Table 1, most students (77.90%) who participated in the study declared that they did not receive any guidance from their lecturers or institutions in using CG-AI tools for learning, research, or assessments. This implies that learning is not being supported and that, without checks on assessment submissions, academic integrity issues will become prevalent.
Students identified ChatGPT (47.30%) as their preferred CG-AI tool, as captured in Table 2; this was followed by Google's Gemini (29.50%). While these tools all
generate content, the quality and style differ based on the training data set used
(Bandi et al., 2023). This means that a query can have different responses based on
the CG-AI tool used. Some tools might be best suited for research, while others will
be better at providing generic information. Further investigation would be required
to understand why these tools are the preferred choice of students for research and
learning purposes.
Aligning with previous studies, the results of this study indicate that PE is a pri-
mary determinant of students’ behavioural intentions to use CG-AI tools for learn-
ing and research purposes. This has been corroborated by similar studies related
to learning tools such as ChatGPT (Strzelecki, 2024), mobile learning systems
(Almaiah et al., 2019), and learning management systems (Al-Adwan et al., 2022).
The results emphasize the benefits of CG-AI tools in relation to students' performance in their academic pursuits, which in turn encourages their intention to use such tools. The rationale behind this could be attributed to the time-saving nature of the
tools to complete learning and research activities (Strzelecki & ElArabawy, 2024)
and the ability to generate quality output in response to a user’s prompt (Chan & Hu,
2023).
With regards to H2, the relationship between EE and BI was observed to be positive and significant, aligning with previous studies related to students' perceptions of learning tools such as learning management systems (Al-Mamary, 2022) and ChatGPT (Strzelecki, 2024; Strzelecki & ElArabawy, 2024). This means that students are inclined to use CG-AI tools because they find them easy to use, requiring very little effort. This result must be interpreted with caution, as this study did not investigate how students use the tools. To use CG-AI tools effectively and ethically, effort and time are required to verify the generated content as well as to rewrite it in the student's own voice.
The final supported hypothesis was observed in the relationship between SI and BI. This finding is corroborated by studies that investigated students' behavioural intention to use learning tools such as mobile learning (Alowayr, 2022; Kang et al., 2015) and e-learning systems (El-Masri & Tarhini, 2017). However, other studies report differing results, for example, in the context of Egyptian students' use of ChatGPT (Strzelecki & ElArabawy, 2024) and Nigerian students' use of learning management systems (Yakubu & Dasuki, 2019). The findings suggest that students' use of CG-AI tools is influenced by significant figures, especially their peers, as most of the students reported that they did not receive support or guidance from the school administration or from their instructors. However, as Nigerian HEIs start to develop guidelines for CG-AI tools, the significance of SI is expected to increase.
The study revealed that FC is not a predictor of students' behavioural intentions to use CG-AI tools; thus, hypothesis H4 was not supported. This does not echo the findings of previous studies related to learning technologies such as learning management systems (Al-Adwan et al., 2022), mobile learning systems (Almaiah et al., 2019), and ChatGPT (Strzelecki & ElArabawy, 2024). However, it agrees with other research studies on the acceptance of MOOCs (Altalhi, 2021) and ChatGPT (Faruk et al., 2023; Strzelecki, 2024). The results indicate that students do not believe that access to organizational and technological infrastructure is available to facilitate the use of CG-AI tools. This can be explained, to an extent, by the demographic data showing that most students (77.9%) did not get support in using the tools from their lecturers or school. Another reason could be that students only require access to an internet-ready device, so no other support or access to complex technologies is needed to use CG-AI tools. Finally, CG-AI tools are easy to use and similar in functionality and "look and feel" to popular search tools like Edge or Google; therefore, additional knowledge and support are no longer required (Faruk et al., 2023).
In the relationship between ATT and BI, analysis of the collected data shows that students' attitude towards CG-AI tools has no influence on their behavioural intentions to use them. This means that learners' positive perceptions of the tools have no effect on their use of them. This result is contrary to findings from similar adoption studies on AI in content creation (Anandaputra et al., 2024) and ChatGPT (Saif et al., 2024). We attribute our findings to the fact that more than half the respondents (64.40%) fall within the Gen Z age group (under 30) and are more likely to use novel technologies, even if they have negative attitudes towards using technologies in general. Another factor to consider is that the students might be under pressure to submit and pass their courses/modules, which simulates a mandatory setting where the students must use the tools regardless of their attitudes.
Finally, hypothesis H6 (the relationship between PR and BI) was not supported. Even though a negative influence was observed, the results were not significant. A significant and negative effect of PR on BI was expected, as we believe that if students are aware of the penalties applied for the incorrect use of CG-AI tools, it would deter them from using them. This is especially true considering that there might be concerns pertaining to the accuracy and factuality of the responses received from CG-AI tools. Similar studies corroborate our findings (Chatterjee & Bhattacharjee, 2020). This study attributes the insignificant results to the fact that students are aware that their assessment submissions will not be checked for AI misconduct.
Also, it is envisaged that students use these tools mainly to generate learning content
to aid them with their studies. Further research on how Nigerian students use CG-AI
tools for learning and research will help better understand why this relationship was
not supported.
The findings of this study can help formulate policies and guidelines that support
the use of CG-AI tools in higher education, specifically in developing countries such
as Nigeria. Considering that PE was observed to influence students' behavioural intentions to use these tools, students evidently believe that their performance in learning will be improved by using CG-AI tools. Their use of these tools is further supported by social influence, i.e., the influence of those they consider important to them. However, the risks involved in using CG-AI for learning might not be fully appreciated. Therefore, policy makers can extend existing academic integrity policies to capture the acceptable use of technologies for learning and assessments.
Prevention and detection strategies can be employed to deter academic misconduct.
For example, alternative assessments can be modified to focus on and support higher
order thinking skills. Finally, institutional ethical frameworks can be used to address
issues such as bias, fairness, transparency, and accountability when using AI for
learning, research, and assessments.
6 Conclusion
The widespread use of CG-AI tools like ChatGPT and Google's Gemini has sparked interest in students' perspectives on these tools for learning and research. Prior research suggests that students' learning outcomes are influ-
enced by their approach to learning (Biggs, 1999). Their learning approach results
from their perceptions of the learning environments or tools they are exposed to,
their individual abilities, and the strategies used to teach them (Chan & Hu, 2023).
With regards to the learning environment/tools, this study aimed at investigating
the factors that influence Nigerian students’ behavioural intentions to use CG-AI
tools. The study adopted the UTAUT framework, incorporating “perceived risks”
and “attitude towards technology” constructs. A total of six hypotheses were tested,
of which three were supported. PE, SI and EE were all identified as determinants of
students’ behavioural intentions to use CG-AI tools. There was no support for
ATT, FC, and PR.
Structural equation modelling (SEM) was used to analyse and model the data
collected to empirically validate the significance of Nigerian students’ views of
using CG-AI tools for learning and research purposes. The main beneficiaries of the
results are developers, researchers, and academic administrators. Developers can
use the findings to enhance CG-AI tools. Academic administrators can develop
policies and strategies for integrating CG-AI tools into education. Researchers
can further investigate additional factors hypothesized to influence students’ use of
CG-AI tools.
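The path-analysis step of the SEM procedure described above can be sketched in miniature. The snippet below is an illustrative stand-in only: it generates synthetic, standardised construct scores (not the study’s data), simulates behavioural intention from assumed path weights, and recovers those weights by ordinary least squares, which is a simplification of full SEM estimation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 289  # sample size matching the study

# Hypothetical standardised construct scores (synthetic, for illustration;
# in the study these would be composites of validated survey items).
PE = rng.standard_normal(n)  # performance expectancy
EE = rng.standard_normal(n)  # effort expectancy
SI = rng.standard_normal(n)  # social influence

# Simulate behavioural intention with assumed path weights; the numbers
# below are illustrative placeholders, not the study's estimates.
BI = 0.551 * PE + 0.466 * EE + 0.507 * SI + rng.standard_normal(n)

# With standardised observed scores, the path-analysis step reduces to
# an ordinary least-squares regression of BI on the three predictors.
X = np.column_stack([np.ones(n), PE, EE, SI])
coefs, *_ = np.linalg.lstsq(X, BI, rcond=None)
print(np.round(coefs[1:], 3))  # estimates land near the simulated weights
```

A full SEM analysis would additionally model the latent constructs and their measurement error (typically with a dedicated SEM package); this sketch only conveys the structural regression at its core.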
There are significant practical and theoretical implications derived from this
research relating to the use of CG-AI tools by Nigerian students. The UTAUT
model is a comprehensive model for understanding the factors that influence an
individual’s use of technology; however, very few studies validate the model in
the context of students’ behavioural intentions to use CG-AI tools, specifically
in a developing country where policies and guidelines are still being developed.
The study therefore contributes to the content generation tools literature by
assessing the UTAUT framework in the context of a developing country, especially
given the cultural and societal differences in comparison to developed countries.
In terms of practical implications, the constructs used in this research can help
guide HEI administrators on the factors that influence students’ use of CG-AI
tools. For example, given that FC was not a determinant of behavioural intention,
university administrators can help to guide and support students in using CG-AI
tools effectively so that learning takes place and academic misconduct is avoided.
This research was conducted with students from a single department at one HEI in
northern Nigeria. Therefore, the results may not adequately capture the perceptions
of the entire Nigerian student population towards the use of CG-AI tools, let alone
those of students in other developing countries. Future research could draw on a
wider scope of students and take into consideration moderating factors such as
gender, age, and experience.
This research focused on CG-AI tools in general, which could significantly influence
the findings. Research stemming from this study could focus on a specific tool that
has been adopted by the institution, where training and support have been provided.
Quantitative research tends towards generalisation; hence, future research could adopt
a qualitative or mixed-methods approach to provide a deeper understanding of the
determinants of students’ behavioural intentions to use CG-AI tools.
Finally, alternative insights could be revealed by testing other theoretical frameworks,
for example the DeLone and McLean IS Success Model (DeLone & McLean, 1992), or by
adding unique constructs such as habit (Alhwaiti, 2023) and perceived ethics
(Gajić et al., 2024).
Age (years)
o Below 20
o 21 – 30
o 31 – 40
o 41 – 50
o Over 50
Does the University or your lecturer support or guide you in using AI for your assessments?
o Yes
o No
Which AI tool (if any) do you use for learning or research (e.g., ChatGPT, Elicit, etc.)?
______________________________________________________________________________________
… use AI technology for teaching and learning
FC2: I have the knowledge necessary to use AI for learning and research
FC3: My lecturer/school are available to assist me with using AI for learning and research
FC4: My school has all the necessary resources to use AI technology for teaching, learning and research.
Data availability The data set generated and analysed during the current study are available from the cor-
responding author on reasonable request.
Declarations
Conflicts of interest The authors have no relevant financial or non-financial interests to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source, provide a link to the Creative
Commons licence, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line
to the material. If material is not included in the article’s Creative Commons licence and your intended
use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permis-
sion directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/
licenses/by/4.0/.
References
Abbad, M. M. (2021). Using the UTAUT model to understand students’ usage of e-learning systems in
developing countries. Education and Information Technologies, 26(6), 7205–7224. https://doi.org/
10.1007/s10639-021-10573-5
Adarkwah, M. A., Amponsah, S., van Wyk, M. M., Huang, R., Tlili, A., Shehata, B., Metwally, A. H. S.,
& Wang, H. (2023). Awareness and acceptance of ChatGPT as a generative conversational AI for
transforming education by Ghanaian academics: A two-phase study. Journal of Applied Learning
and Teaching, 6(2), 1–6. https://doi.org/10.37074/jalt.2023.6.2.26
Al-Adwan, A. S., Yaseen, H., Alsoud, A., Abousweilem, F., & Al-Rahmi, W. M. (2022). Novel extension
of the UTAUT model to understand continued usage intention of learning management systems:
The role of learning tradition. Education and Information Technologies, 27(3), 3567–3593. https://
doi.org/10.1007/s10639-021-10758-y
Alhwaiti, M. (2023). Acceptance of artificial intelligence application in the post-covid era and its impact
on faculty members’ occupational well-being and teaching self-efficacy: A path analysis using the
UTAUT 2 model. Applied Artificial Intelligence, 37(1), 2175110. https://doi.org/10.1080/08839514.
2023.2175110
Almaiah, M. A., Alamri, M. M., & Al-Rahmi, W. (2019). Applying the UTAUT model to explain the stu-
dents’ acceptance of mobile learning system in higher education. IEEE Access, 7, 174673–174686.
Al-Mamary, Y. H. (2022). Understanding the use of learning management systems by undergraduate uni-
versity students using the UTAUT model: Credible evidence from Saudi Arabia. International Jour-
nal of Information Management Data Insights, 2(2), 100092. https://doi.org/10.1016/j.jjimei.2022.
100092
Alowayr, A. (2022). Determinants of mobile learning adoption: Extending the unified theory of accept-
ance and use of technology (UTAUT). International Journal of Information and Learning Technol-
ogy, 39(1), 1–12. https://doi.org/10.1108/IJILT-05-2021-0070
Altalhi, M. (2021). Toward a model for acceptance of MOOCs in higher education: The modified UTAUT
model for Saudi Arabia. Education and Information Technologies, 26(2), 1589–1605. https://doi.
org/10.1007/s10639-020-10317-x
Al-Tkhayneh, K. M., Alghazo, E. M., & Tahat, D. (2023). The advantages and disadvantages of using
artificial intelligence in Education. Journal of Educational and Social Research, 13(4), 105–117.
https://doi.org/10.36941/jesr-2023-0094
Alwahaishi, S. (2021). Student use of e-learning during the coronavirus pandemic: An extension of
UTAUT to trust and perceived risk. International Journal of Distance Education Technologies
(IJDET), 19(4), 72–90.
Anandaputra, G. V., Yungistira, G. A., Nicholas, O., & Mailangkay, A. B. (2024). Analysis on User’s
Attitude Towards Content Creation Using Artificial Intelligence in Social Media. 3rd International
Conference on Creative Communication and Innovative Technology (ICCIT), (pp. 1–6). https://doi.
org/10.1109/ICCIT62134.2024.10701251
Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recom-
mended two-step approach. Psychological Bulletin, 103(3), 411–423.
Andrews, J. E., Ward, H., & Yoon, J. (2021). UTAUT as a model for understanding intention to adopt AI
and related technologies among librarians. The Journal of Academic Librarianship, 47(6), 102437.
https://doi.org/10.1016/j.acalib.2021.102437
Arenas-Gaitán, J., Ramírez-Correa, P. E., & Rondán-Cataluña, F. J. (2011). Cross cultural analysis of the
use and perceptions of web based learning systems. Computers & Education, 57(2), 1762–1774.
https://doi.org/10.1016/j.compedu.2011.03.016
Arfi, W. B., Nasr, I. B., Khvatova, T., & Zaied, Y. (2021). Understanding acceptance of eHealthcare by
IoT natives and IoT immigrants: An integrated model of UTAUT, perceived risk, and financial cost.
Technological Forecasting and Social Change, 163, 12. https://doi.org/10.1016/j.techfore.2020.
120437
Bagozzi, R. P., & Yi, Y. (1988). On the evaluation of structural equation models. Journal of the Academy
of Marketing Science, 16(1), 74–94. https://doi.org/10.1007/BF02723327
Baidoo-Anu, D., & Ansah, L. O. (2023). Education in the era of generative artificial intelligence (AI):
Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI,
7(1), 52–62. https://doi.org/10.61969/jai.1337500
Bandi, A., Adapa, P. V., & Kuchi, Y. E. (2023). The power of generative ai: A review of requirements,
models, input–output formats, evaluation metrics, and challenges. Future Internet, 15(8), 260.
https://doi.org/10.3390/fi15080260
Biggs, J. (1999). What the student does: Teaching for enhanced learning. Higher Education Research &
Development, 18(1), 57–75. https://doi.org/10.1080/0729436990180105
Bouteraa, M., Bin-Nashwan, S. A., Al-Daihani, M., Dirie, K. A., Benlahcene, A., Sadallah, M., Zaki,
H. O., Lada, S., Ansar, R., Fook, L. M., & Chekima, B. (2024). Understanding the diffusion of AI-
generative (ChatGPT) in higher education: Does students’ integrity matter? Computers in Human
Behavior Reports, 14, 100402. https://doi.org/10.1016/j.chbr.2024.100402
Chan, C. Y. (2023). A comprehensive AI policy education framework for university teaching and learn-
ing. International Journal of Educational Technology in Higher Education, 20(1), 38. https://doi.
org/10.1186/s41239-023-00408-3
Chan, C. K., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in
higher education. International Journal of Educational Technology in Higher Education, 20(1), 43.
https://doi.org/10.1186/s41239-023-00411-8
Chang, H. H., Fu, C. S., & Jain, H. T. (2016). Modifying UTAUT and innovation diffusion theory to
reveal online shopping behavior: Familiarity and perceived risk as mediators. Information Develop-
ment, 32(5), 1757–1773. https://doi.org/10.1177/0266666915623317
Changalima, I. A., Amani, D., & Ismail, I. J. (2024). Social influence and information quality on Genera-
tive AI use among business students. The International Journal of Management Education, 22(3),
101063. https://doi.org/10.1016/j.ijme.2024.101063
Chatterjee, S., & Bhattacharjee, K. K. (2020). Adoption of artificial intelligence in higher education: A
quantitative analysis using structural equation modelling. Education and Information Technologies,
25, 3443–3463. https://doi.org/10.1007/s10639-020-10159-7
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A com-
parison of two theoretical models. Management Science, 35(8), 982–1003. https://doi.org/10.1287/mnsc.35.8.982
DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent vari-
able. Information Systems Research., 3(1), 60–95. https://doi.org/10.1287/isre.3.1.60
Denny, P., Khosravi, H., Hellas, A., Leinonen, J., & Sarsa, S. (2023). Can we trust AI-generated
educational content? Comparative analysis of human and AI-generated learning resources.
arXiv:2306.10509.
Du, L., & Lv, B. (2024). Factors influencing students’ acceptance and use of generative artificial intel-
ligence in elementary education: An expansion of the UTAUT model. Education and Information
Technologies, 1–20. https://doi.org/10.1007/s10639-024-12835-4
Edo, O. C., Ang, D., Etu, E. E., Tenebe, I., Edo, S., & Diekola, O. A. (2023). Why do healthcare workers
adopt digital health technologies-A cross-sectional study integrating the TAM and UTAUT model
in a developing economy. International Journal of Information Management Data Insights, 3(2),
100186. https://doi.org/10.1016/j.jjimei.2023.100186
El-Masri, M., & Tarhini, A. (2017). Factors affecting the adoption of e-learning systems in Qatar and
USA: Extending the unified theory of acceptance and use of technology 2 (UTAUT2). Educational
Technology Research and Development, 65(3), 743–763. https://doi.org/10.1007/s11423-016-9508-8
Ernst, C. P. (2017). The influence of disgust on technology acceptance. Twenty-third Americas Confer-
ence on Information Systems (AMCIS 2017 Proceedings), Boston, pp. 539–547.
Essien, A., Salami, A., Ajala, O., Adebisi, B., Shodiya, A., & Essien, G. (2024). Exploring socio-cultural
influences on generative AI engagement in Nigerian higher education: An activity theory analysis.
Smart Learning Environments, 11(1), 63.
Faruk, L. D., Rohan, R., Ninrutsirikun, U., & Pal, D. (2023). University Students’ Acceptance and Usage
of Generative AI (ChatGPT) from a Psycho-Technical Perspective. 13th International Conference
on Advances in Information Technology, 1–8. https://doi.org/10.1145/3628454.3629552
Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information
Systems Engineering, 66(1), 111–126.
Gajić, T., Vukolić, D., Bugarčić, J., Đoković, F., Spasojević, A., Knežević, S., ĐorđevićBoljanović, J.,
Glišić, S., Matović, S., & Dávid, L. D. (2024). The Adoption of Artificial Intelligence in Serbian
Hospitality: A Potential Path to Sustainable Practice. Sustainability, 16(8), 3172. https://doi.org/10.
3390/su16083172
George, D., & Mallery, P. (2003). SPSS for Windows step by step: answers to selected exercises. A simple
guide and reference, 63(1), 1461–1470.
Hadi, M. U., Al Tashi, Q., Shah, A., Qureshi, R., Muneer, A., Irfan, M., Zafar, A., Shaikh, M. B., Akhtar,
N., Wu, J., & Mirjalili, S. (2024). Large language models: A comprehensive survey of its appli-
cations, challenges, limitations, and future prospects. Authorea Preprints. https://doi.org/10.36227/
techrxiv.23589741.v6
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (2010). Multivariate Data Analysis. Prentice
Hall.
Hanafizadeh, P., Behboudi, M., Koshksaray, A. A., & Tabar, M. J. S. (2014). Mobile-banking adoption by
Iranian bank clients. Telematics and Informatics, 31(1), 62–78. https://doi.org/10.1016/j.tele.2012.
11.001
Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conven-
tional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal,
6(1), 1–55. https://doi.org/10.1080/10705519909540118
Hutson, J. (2024). Rethinking Plagiarism in the Era of Generative AI. Journal of Intelligent Communica-
tion, 4(1), 20–31.
Ivanov, S., Soliman, M., Tuomi, A., Alkathiri, N. A., & Al-Alawi, A. N. (2024). Drivers of generative AI
adoption in higher education through the lens of the Theory of Planned Behaviour. Technology in
Society, 77, 102521. https://doi.org/10.1016/j.techsoc.2024.102521
Kang, M., Liew, B. Y., Lim, H., Jang, J., & Lee, S. (2015). Investigating the determinants of mobile
learning acceptance in korea using UTAUT2. In: Chen, G., Kumar, V., Kinshuk, Huang, R. & Kong,
S. C. (eds.), Lecture Notes in Educational Technology, pp. 209–216. https://doi.org/10.1007/978-3-
662-44188-6_29
Kanont, K., Pingmuang, P., Simasathien, T., Wisnuwong, S., Wiwatsiripong, B., Poonpirome, K., Song-
kram, N., & Khlaisang, J. (2024). Generative-AI, a Learning Assistant? Factors Influencing Higher-
Ed Students’ Technology Acceptance. Electronic Journal of e-Learning, 22(6), 18–33. https://doi.
org/10.34190/ejel.22.6.3196
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh,
G., Günnemann, S., Hüllermeier, E., & Krusche, S. (2023). ChatGPT for good? On opportunities
and challenges of large language models for education. Learning and Individual Differences, 103,
102274. https://doi.org/10.1016/j.lindif.2023.102274
Kline, R. B. (2015). Principles and Practice of Structural Equation Modeling. Guilford publications.
Krause, S., Panchal, B. H., & Ubhe, N. (2024). The Evolution of Learning: Assessing the Transformative
Impact of Generative AI on Higher Education. arXiv preprint arXiv:2404.10551. https://doi.org/10.
48550/arXiv.2404.10551
Kurdi, G., Leo, J., Parsia, B., Sattler, U., & Al-Emari, S. (2020). A systematic review of automatic ques-
tion generation for educational purposes. International Journal of Artificial Intelligence in Educa-
tion, 30, 121–204. https://doi.org/10.1007/s40593-019-00186-y
Lai, C. Y., Cheung, K. Y., Chan, C. S., & Law, K. K. (2024). Integrating the adapted UTAUT model with
moral obligation, trust, and perceived risk to predict ChatGPT adoption for assessment support: A
survey with students. Computers and Education: Artificial Intelligence, 6, 100246. https://doi.org/
10.1016/j.caeai.2024.100246
Lin, X. (2024). Exploring the role of ChatGPT as a facilitator for motivating self-directed learning among
adult learners. Adult Learning, 35(3), 156–166. https://doi.org/10.1177/10451595231184928
Mason, R. L., Gunst, R. F., & Hess, J. L. (1989). Statistical design and analysis of experiments: Applica-
tions to engineering and science. Wiley.
Moorhouse, B. L., Yeo, M. A., & Wan, Y. (2023). Generative AI tools and assessment: Guidelines of
the world’s top-ranking universities. Computers and Education Open, 5, 100151. https://doi.org/10.
1016/j.caeo.2023.100151
Murugesan, S., & Cherukuri, A. K. (2023). The rise of generative artificial intelligence and its impact on
education: The promises and perils. Computer, 56(5), 116–121. https://doi.org/10.1109/MC.2023.
3253292
O’Brien, R. M. (2007). A caution regarding rules of thumb for variance inflation factors. Quality and
Quantity, 41(5), 673–690. https://doi.org/10.1007/s11135-006-9018-6
Rachmawati, I. K., Bukhori, M., Majidah, Y., & Hidayatullah, S. (2020). Analysis of use of mobile bank-
ing with acceptance and use of technology (UTAUT). International Journal of Scientific and Tech-
nology Research, 9(8), 534–540.
Saif, N., Khan, S. U., Shaheen, I., Alotaibi, F. A., Alnfiai, M. M., & Arif, M. (2024). Chat-GPT; validat-
ing Technology Acceptance Model (TAM) in education sector via ubiquitous learning mechanism.
Computers in Human Behavior, 154, 108097. https://doi.org/10.1016/j.chb.2023.108097
Salkind, N. (2015). Encyclopedia of measurement and statistics (1st ed.). Sage.
Sharma, S., Islam, N., Singh, G., & Dhir, A. (2022). Why do retail customers adopt artificial intelligence
(AI) based autonomous decision-making systems? IEEE Transactions on Engineering Management,
71, 1846–1861. https://doi.org/10.1109/TEM.2022.3157976
Strzelecki, A. (2024). Students’ acceptance of ChatGPT in higher education: An extended unified theory
of acceptance and use of technology. Innovative Higher Education, 49(2), 223–245. https://doi.org/
10.1007/s10755-023-09686-1
Strzelecki, A., & ElArabawy, S. (2024). Investigation of the moderation effect of gender and study level
on the acceptance and use of generative AI by higher education students: Comparative evidence
from Poland and Egypt. British Journal of Educational Technology, 55(3), 1209–1230. https://doi.
org/10.1111/bjet.13425
Šumak, B., Heričko, M., Polančič, G., & Pušnik, M. (2010). Investigation of e-learning system accept-
ance using UTAUT. International Journal of Engineering Education, 26(6), 1327.
Tahar, A., Riyadh, H. A., Sofyani, H., & Purnomo, W. E. (2020). Perceived ease of use, perceived useful-
ness, perceived security, and intention to use e-filing: The role of technology readiness. The Journal
of Asian Finance, Economics and Business, 7(9), 5. https://doi.org/10.13106/jafeb.2020.vol7.no9.
537
Tiwari, C. K., Bhat, M. A., Khan, S. T., Subramani, R., & Khan, M. A. (2024). What drives stu-
dents toward ChatGPT? An investigation of the factors influencing adoption and usage of Chat-
GPT. Interactive Technology and Smart Education, 21(3), 333–355. https://doi.org/10.1108/
ITSE-04-2023-0061
Vázquez-Madrigal, C., García-Rubio, N., & Triguero, Á. (2024). Generative Artificial Intelligence in
Education: Risks and Opportunities. In M. D. C. VallsMartínez & J. Montero (Eds.), Teaching Inno-
vations in Economics. Springer. https://doi.org/10.1007/978-3-031-72549-4_11
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information tech-
nology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Wang, Y., Liu, C., & Tu, Y. F. (2021). Factors affecting the adoption of AI-based applications in higher
education. Educational Technology & Society, 24(3), 116–129.
Wasserman, L. (2013). All of statistics: A concise course in statistical inference. Springer Science &
Business Media.
Widyaningrum, R., Wulandari, F., Zainudin, M., Athiyallah, A., & Rizqa, M. (2024). Exploring the fac-
tors affecting ChatGPT acceptance among university students. Multidisciplinary Science Journal,
6(12), 2024273. https://doi.org/10.31893/multiscience.2024273
Widyanto, H. A., Kusumawardani, K. A., & Yohanes, H. (2022). Safety first: Extending UTAUT to bet-
ter predict mobile payment adoption by incorporating perceived security, perceived risk, and trust.
Journal of Science and Technology Policy Management, 13(4), 952–973. https://doi.org/10.1108/
JSTPM-03-2020-0058
Wu, W., Zhang, B., Li, S., & Liu, H. (2022). Exploring factors of the willingness to accept AI-assisted
learning environments: An empirical investigation based on the UTAUT model and perceived risk
theory. Frontiers in Psychology, 13, 870777. https://doi.org/10.3389/fpsyg.2022.870777
Wu, J., Gan, W., Chen, Z., Wan, S., & Philip, S. Y. (2023). Multimodal large language models: A survey.
In 2023 IEEE International Conference on Big Data (BigData). IEEE, pp. 2247–2256. https://doi.
org/10.1109/BigData59044.2023.10386743
Yakubu, M. N., & Dasuki, S. I. (2019). Factors affecting the adoption of e-learning technologies among
higher education students in Nigeria: A structural equation modelling approach. Information Devel-
opment, 35(3), 492–502. https://doi.org/10.1177/0266666918765907
Yilmaz, F. G. K., Yilmaz, R., & Ceylan, M. (2023). Generative artificial intelligence acceptance scale: A
validity and reliability study. International Journal of Human–Computer Interaction, 1–13. https://
doi.org/10.1080/10447318.2023.2288730
Zastudil, C., Rogalska, M., Kapp, C., Vaughn, J., & MacNeil, S. (2023). Generative ai in computing
education: Perspectives of students and instructors. IEEE Frontiers in Education Conference (FIE),
pp. 1–9.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.