Exploring Digital Competencies: Validation and Reliability (2024)
https://doi.org/10.1007/s10758-024-09741-6
ORIGINAL RESEARCH
Abstract
Digital competencies are highly significant for integrating digital resources into educational processes. This study presents the validity and reliability of an instrument created by Carrera et al. (2011) to evaluate the basic digital competence of the three main agents of the educational community (teachers, students, and parents) across all educational stages (Early Childhood Education, Primary Education, Secondary Education, and Higher Education), covering six dimensions of digital-resource use: (1) skills in management and transfer of technological data, (2) software and hardware skills, (3) web navigation skills, (4) skills in using word processors, (5) data processing and management skills, and (6) multimedia presentation design skills. The instrument was administered to a sample of 1,149 participants from all educational stages, drawn from the entire territory of the Dominican Republic. Reliability was assessed using various measures, including Cronbach's Alpha, Spearman-Brown Coefficient, Guttman's Two Halves,
McDonald’s Omega, and composite reliability. To validate the instrument, exploratory
factor analysis (EFA) and confirmatory factor analysis (CFA) were carried out with the
purpose of understanding the validity and dimensionality of the scale (comprehension
validity, construct validity, convergent, discriminant and invariance validity). The results
demonstrated highly satisfactory reliability, and in terms of construct validity, a good fit
of the model was observed, valid for any educational agent and for any educational stage.
The final version of the instrument consists of 20 items classified into six latent factors.
Francisco D. Guillén-Gámez
[email protected]
1 Institución primogénita de Acción Pro Educación y Cultura (APEC), Santo Domingo, Dominican Republic
2 Faculty of Education Sciences, Department of Didactics and Educational Organization, University of Málaga (UMA), Blvr. Louis Pasteur, 25, Málaga 29010, Spain
1 Introduction
In this context, it is important to remember that the use of digital resources plays a determining role in the acquisition and development of DC, as pointed out by Guillén-Gámez et al. (2020). This relationship has also been examined by Ghomi and Redecker (2019), as well as in the findings of Cabero-Almenara et al. (2023) and Lucas et al. (2021), which
established a positive correlation between the availability of digital resources and the level
of digital literacy. Nevertheless, the essence does not lie solely in the frequency of use of
ICT resources, but in the way in which they are applied, since, as DC training increases, the
use of these resources becomes more effective (O’Malley et al., 2013).
Along these lines, and considering how digital resources advance and become integrated into classrooms, it is necessary to develop basic digital skills in all the subareas that make up and structure DC (Tomczyk, 2019), requiring valid and reliable instruments that allow the level of literacy to be determined for any group involved in the educational process.
However, the scientific literature remains quite scarce regarding the existence of instru-
ments that allow analyzing the basic DC of teachers, students and parents, in a heteroge-
neous and joint manner. To justify this statement, a bibliographic search of DC measurement instruments was carried out with the following inclusion criteria: (1) the phrases "digital competence instrument", "digital competence scale" or "digital competence framework" must appear in the title of the paper; (2) the instruments had to have some type of validation, whether expert validation or construct validation (exploratory factor analysis, EFA; confirmatory factor analysis, CFA); and (3) the studies had to be from the last five years. Table 1
shows that there are many instruments, each with different structural taxonomies on how
to measure DC. However, three specific aspects stand out: (1) no instrument has measured the DC of the group of parents; (2) no instrument has measured the DC of the three main educational agents in a heterogeneous and joint manner, leaving a gap in the scientific literature; and (3) very few instruments have measured multigroup invariance with the purpose of offering more robust validity across all groups.
Therefore, the contribution of this study, and consequently its purpose, is to validate a psychometric instrument with sufficient methodological rigor (including multigroup invariance) to evaluate the basic DC of the three main agents in the teaching process (teachers, students, and parents), one that serves to evaluate the DC of these agents at any educational stage (Early Childhood Education, Primary Education, Secondary Education, and Higher Education).
2 Method
A non-experimental and ex post facto design was used. Data collection was carried out
using intentional non-probabilistic sampling, as well as snowball sampling (Leighton et
al., 2021). The data was collected during the 2022/2023 academic year, from all over the
territory of the Dominican Republic. Before participants completed the questionnaire, they
were informed about the purpose of the research, which revolved around a doctoral thesis.
Data collection was carried out anonymously using an online form without any marking that
could compromise the identity of the participants.
A total of 1,335 responses were collected. However, Kline (2023) states that there are a series of important issues to consider in any survey validation process. First, missing data arise when participants do not respond to a specific item; since the survey was administered with Google Forms and all items were marked as required, the probability of omitted responses was minimized. Secondly, outliers were detected using the Mahalanobis distance (D2). Kline (2023) recommends eliminating all observations (subjects) with a p-value below 0.001 in the two probabilities (p1 and p2) associated with the distance. In this research, 166 subjects were eliminated based on the p-values reported by the AMOS software, leaving a sample of 1,169. In addition, 20 cases were also eliminated because some of the participants marked the
option of early childhood education students; this was considered a marking error since, given their chronological age, these students are not qualified to respond to the survey. The final sample was 1,149 participants. The distribution is shown in Table 2.
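As an illustration of this screening step, the following is a minimal sketch, not the authors' actual AMOS procedure, of flagging multivariate outliers with the squared Mahalanobis distance; it assumes the responses are in a hypothetical pandas DataFrame and approximates the probabilities AMOS reports with the chi-square tail probability of D2.

```python
# A minimal sketch (not the authors' AMOS procedure) of screening multivariate
# outliers with the squared Mahalanobis distance, as recommended by Kline (2023).
# `data` is an assumed pandas DataFrame of numeric item responses, one row per case.
import numpy as np
import pandas as pd
from scipy.stats import chi2

def mahalanobis_screen(data: pd.DataFrame, alpha: float = 0.001) -> pd.DataFrame:
    """Drop cases whose D2 is improbably large under multivariate normality."""
    X = data.to_numpy(dtype=float)
    diff = X - X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared distances
    # D2 is approximately chi-square distributed with p (number of items) df.
    p_values = chi2.sf(d2, df=X.shape[1])
    return data.loc[p_values >= alpha]                   # keep non-outlying cases

# Usage: clean = mahalanobis_screen(responses)  # removes cases with p < .001
```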
2.3 Instrument
To measure the DC of the different agents of the educational community, the instrument by Carrera et al. (2011) was used. However, the instrument lacked psychometric evidence of reliability and construct validity, since the authors had only carried out expert-judgment validation. The test uses a 7-point Likert format, ranging from 7 (I have the skills to do it) to 1 (I don't have the skills to do it). The instrument originally comprised 23 dimensions, of which the 6 most representative dimensions measuring basic digital skills were selected. Each of the selected factors is theoretically defined as follows:
First, the sample was divided into two subgroups drawn at random, with the objective of
examining the internal composition of the instrument, following the guidelines proposed by
Hinkin et al. (1997). Each subsample was used to analyze the construct validity process by
applying EFA and CFA.
The purpose of the EFA is to discover the underlying structures between the items, clas-
sifying them based on the correlation coefficients between them (Sencan, 2005). In other
words, the EFA assumes that the correlations (covariance) between the observed items can
be explained by a smaller number of latent factors (Mulaik, 2018). For the analysis, the Oblimin rotation method and the Principal Axis factoring method were used; the latter analyzes the common variance between the items to answer questions such as: How many factors? What are the factors? What are the relationships between the factors? (Mvududu & Sink, 2013).
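To make this configuration concrete, here is a hedged sketch of the EFA pipeline using the third-party factor_analyzer package (not the software reported in this study); `df` is an assumed pandas DataFrame of item responses, and the KMO and Bartlett checks described later in the Results are included.

```python
# Hedged sketch of the EFA described above: Principal Axis Factoring with an
# oblique Oblimin rotation. `df` is an assumed pandas DataFrame of item responses.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

def run_efa(df: pd.DataFrame, n_factors: int = 6):
    # Data suitability: KMO should exceed .80 and Bartlett's test be significant.
    chi_square, p_value = calculate_bartlett_sphericity(df)
    _, kmo_total = calculate_kmo(df)
    print(f"Bartlett chi2 = {chi_square:.3f}, p = {p_value:.4f}; KMO = {kmo_total:.3f}")

    # Principal Axis Factoring ('principal') with the Oblimin rotation.
    fa = FactorAnalyzer(n_factors=n_factors, method="principal", rotation="oblimin")
    fa.fit(df)

    eigenvalues, _ = fa.get_eigenvalues()  # retain factors with eigenvalue > 1
    loadings = pd.DataFrame(fa.loadings_, index=df.columns)
    return eigenvalues, loadings           # loadings < .40 are removal candidates
```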
For the second analysis, CFA was used to verify the relevance of the proposed theoretical
models (Perry et al., 2015). Structural equation modeling was used based on the polychoric
correlation matrix and robust maximum likelihood estimators. Convergent validity was also
verified, which refers to the degree of certainty that the proposed items measure the same
latent factor (Cheung & Wang, 2017) and was evaluated through average variance extracted
values (AVE). For discriminant validity, the MSV index (maximum squared shared vari-
ance) was considered. Finally, with the purpose of knowing whether the factorial structure of the model is invariant with respect to the type of educational agent and the educational stage, multigroup analyses were carried out to determine whether the instrument was equally valid across groups.
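For reference, these validity indices rest on standard formulas (Fornell & Larcker, 1981; Hair et al., 2010), where $\lambda_i$ denotes the standardized loading of item $i$ on a factor measured by $k$ items and $r_{jk}$ the correlation between factors $j$ and $k$:

$$\mathrm{AVE} = \frac{\sum_{i=1}^{k}\lambda_i^{2}}{k}, \qquad \mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}+\sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}, \qquad \mathrm{MSV}_j = \max_{k\neq j} r_{jk}^{2}$$

Convergent validity requires AVE > .50, composite reliability requires CR > .70, and discriminant validity requires MSV < AVE for every factor.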
Once adequate validity was obtained, the assumption of multivariate normality was verified by examining the Mardia coefficient, which is considered acceptable when its value is less than the result of the formula p(p + 2) (Raykov & Marcoulides, 2008), where p is the number of items. The assumption was checked by contrasting the multivariate kurtosis value obtained in SPSS Amos (Ping & Cunningham, 2013). The calculation was carried out with the final 20 items of the instrument: the formula returned a value of 440, while the multivariate kurtosis index was 92.562. Therefore, since the Mardia coefficient was less than the formula value, the multivariate normality assumption was confirmed.
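Writing $b_{2,p}$ for the multivariate kurtosis reported by AMOS, the check reduces to a single comparison for the $p = 20$ retained items:

$$p\,(p+2) = 20 \times 22 = 440, \qquad b_{2,p} = 92.562 < 440$$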
The last procedure was to check the internal consistency of the instrument, using different reliability coefficients: Cronbach's Alpha, Composite Reliability, the Spearman-Brown Coefficient, Guttman's Two-Halves, and McDonald's Omega. All analyses were performed using IBM SPSS version 24.0 and AMOS version 24.0.
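As a hedged illustration (the study itself used SPSS and AMOS), the following sketch computes two of these coefficients in Python with the pingouin package; `df` is an assumed DataFrame holding the items of one latent factor, and the odd/even split is one of several possible half-splits.

```python
# Hedged sketch of two of the reliability checks named above. `df` is an assumed
# pandas DataFrame holding the item responses of a single latent factor.
import pandas as pd
import pingouin as pg

def reliability_summary(df: pd.DataFrame) -> dict:
    alpha, _ci = pg.cronbach_alpha(data=df)

    # Odd/even split-half correlation, stepped up with the Spearman-Brown formula.
    odd = df.iloc[:, ::2].sum(axis=1)
    even = df.iloc[:, 1::2].sum(axis=1)
    r = odd.corr(even)
    spearman_brown = (2 * r) / (1 + r)

    return {"cronbach_alpha": alpha, "spearman_brown": spearman_brown}
```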
3 Results
Four items were eliminated from subsequent analyses: DIM1.1, DIM1.2, DIM2.7, and DIM5.6. In addition, Meroño et al. (2018) recommend eliminating items with a standard deviation (ST) smaller than 1; Table 3 shows that all remaining items meet this requirement.
The degree of discrimination of each item was also examined through the corrected correlation coefficient between the item score and its factor. The purpose of this procedure is to determine whether the reliability of a factor would increase if any of its corresponding items were eliminated. Shaffer et al. (2010) state that items must be excluded from the instrument if the item-total correlation coefficient is less than 0.40. Table 4 presents the analysis of the degree of discrimination through two parameters: corrected item-total correlation and Cronbach's Alpha if the item is deleted. No item of the instrument has a value below 0.60 in the corrected item-total correlation column, complying with the authors' recommendation. Therefore, no item was eliminated for subsequent analyses.
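A minimal sketch of this discrimination check, under the assumption that `df` holds the items of a single factor as a pandas DataFrame:

```python
# Hedged sketch of the corrected item-total discrimination check: each item is
# correlated with the sum of the remaining items of its factor, and items below
# the .40 cutoff (Shaffer et al., 2010) would be flagged.
import pandas as pd

def corrected_item_total(df: pd.DataFrame, cutoff: float = 0.40) -> pd.Series:
    corrs = {}
    for item in df.columns:
        rest = df.drop(columns=item).sum(axis=1)  # total score without the item
        corrs[item] = df[item].corr(rest)
    result = pd.Series(corrs, name="corrected_item_total")
    flagged = result[result < cutoff]
    if not flagged.empty:
        print(f"Items below {cutoff}: {list(flagged.index)}")
    return result
```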
Table 5 presents the correlation matrix between the latent factors of the instrument when applying the Oblimin rotation method, indicating that the factors are correlated. This finding suggests an underlying common dimension across the six latent factors that compose the instrument.
The results related to construct validity are shown through the application of the EFA method, following the recommendations of Gümüş and Kukul (2023). The Kaiser-Meyer-Olkin (KMO) index and Bartlett's Test of Sphericity were checked to verify the suitability of the data for factor analysis and the adequacy of the sample size. Authors such as Worthington and Whittaker (2006) establish that values greater than 0.8 for the KMO index are satisfactory; in our study, the KMO value was 0.975. Regarding Bartlett's test, a significant result was obtained with p < .05. In addition, the Chi-square value was 45184.141, with 630 degrees of freedom (DF). These values are considered appropriate according to the literature corresponding to the EFA stage (Watkins, 2021).
According to the previous analyses, the EFA was applied with a total of 36 items. In the literature, retained factors are expected to have an eigenvalue (λ) greater than one (Cattell, 1966). Furthermore, it is recommended that items with factor loadings below 0.40 be eliminated from the model, the same applying to items that do not load on their corresponding factor (Gümüş & Kukul, 2023). Table 6 shows that all items meet these criteria, so none were eliminated at this stage.
The results of the EFA revealed the 36 items grouped according to their theoretically expected factors. The emerging factors, according to the theoretical foundations, take the following names: Factor 1 (items DIM3.2, DIM3.3, DIM3.7, DIM3.6, DIM3.4, DIM3.1, DIM3.5), which explained 67.04% of the true variance in the participants' scores; Factor 2 (items DIM5.5, DIM5.4, DIM5.3, DIM5.7, DIM5.1, DIM5.2), explaining 6.35% of the variance; Factor 3 (items DIM6.4, DIM6.6, DIM6.3, DIM6.7, DIM6.5, DIM6.2, DIM6.1), explaining 5.03% of the variance; Factor 4 (items DIM4.5, DIM4.6, DIM4.3, DIM4.4, DIM4.2, DIM4.1), with 3.45% of the total variance; Factor 5 (items DIM2.4, DIM2.6, DIM2.3, DIM2.5, DIM2.2, DIM2.1), with 3.26% of the variance; and finally, Factor 6 (items DIM1.6, DIM1.4, DIM1.5, DIM1.3), with 2.04% of the variance. The rate of total explained variance was 87.17%.
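As a quick arithmetic check, the six contributions add up to the reported total:

$$67.04 + 6.35 + 5.03 + 3.45 + 3.26 + 2.04 = 87.17\%$$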
Table 3 (continued). ST = standard deviation; A = asymmetry; K = kurtosis
5.3 I know how to create, enter data, save and print a spreadsheet with Excel or another program (ST = 2.18; A = −0.58; K = −1.13)
5.4 I know how to format a spreadsheet by modifying the distance between cells, the font, or the margins, among others (ST = 2.17; A = −0.49; K = −1.20)
5.5 I know how to do simple calculations with formulas in a spreadsheet (ST = 2.20; A = −0.42; K = −1.28)
5.6 I know how to do simple calculations by entering the formulas myself (ST = 2.23; A = −0.36; K = −1.63)
5.7 I know how to create graphs from entered data (ST = 2.25; A = −0.30; K = −1.40)
DIM6. Multimedia presentation design skills
6.1 I know how to use programs to make presentations (PowerPoint, Google Presentation, Canva, Genially) (ST = 2.05; A = −0.97; K = −0.51)
6.2 I know how to design the most common presentation options (slides, background, effects, transitions, among others) (ST = 2.01; A = −1.13; K = −0.19)
6.3 I know how to make, save and export a digital presentation to other formats (ST = 2.09; A = −1.03; K = −0.46)
6.4 I know how to format a presentation by changing the background, the font or adding images, among other effects (ST = 2.08; A = −1.07; K = −0.38)
6.5 I know how to add effects and transitions between slides to a presentation (ST = 2.10; A = −0.96; K = −0.58)
6.6 I know how to add music, video and animations to the presentation (ST = 2.06; A = −1.03; K = −0.42)
6.7 I know how to make simple presentations with and without templates (ST = 2.08; A = −1.08; K = −0.36)
Note: Own elaboration
The CFA was performed to determine how well the structure obtained in the EFA fit the data (Bandalos & Finney, 2018). The goal was to achieve an instrument as simple and concise as possible, with a smaller number of items, without compromising reliability or validity. The first model examined the latent structure obtained through the EFA. Table 7 shows that this model did not meet any of the fit indices recommended by Hu and Bentler (1999), which led to the creation of a second model. In this second model, all items whose errors showed excessively high covariances with those of other items of the instrument were eliminated, as recommended by Byrne (2013). To achieve this, the modification indices (MIs) of the covariances between items were analyzed, interpreting them as interactions of errors. Specifically, the following items were eliminated: DIM2.1, DIM2.5, DIM3.1, DIM3.2, DIM3.3, DIM3.4, DIM4.1, DIM4.2, DIM5.1, DIM5.2, DIM5.3, DIM5.6, DIM6.1, DIM6.2, DIM6.3 and DIM6.6.
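To show what such a re-specification looks like in code, here is a hedged sketch using the open-source semopy package rather than the AMOS workflow the authors report; the model description is abbreviated to two of the six factors, item names are renamed with underscores for the parser, and `df` is the assumed response DataFrame.

```python
# Hedged CFA sketch with the `semopy` package (lavaan-style syntax), not the
# authors' AMOS workflow. Only two of the six factors are specified, for brevity.
import semopy

# Dots in item names are replaced with underscores to keep the parser happy;
# `df` is an assumed pandas DataFrame of the retained responses.
df = df.rename(columns=lambda c: c.replace(".", "_"))

MODEL_DESC = """
F3 =~ DIM3_5 + DIM3_6 + DIM3_7
F4 =~ DIM4_3 + DIM4_4 + DIM4_5 + DIM4_6
F3 ~~ F4
"""

model = semopy.Model(MODEL_DESC)
model.fit(df)

# calc_stats reports chi2, CFI, TLI, NFI and RMSEA, among other fit indices;
# target cutoffs from the text: CMIN/DF < 5, CFI/NFI/TLI >= .95, RMSEA < .06.
print(semopy.calc_stats(model).T)
```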
The second model was significant and met all recommended requirements. Table 7 shows the coefficients obtained for each index analyzed. The value of the normed chi-square (CMIN/DF) was 2.258, indicating a satisfactory fit, since values less than 5 are interpreted as acceptable (Kline, 2011). The comparative fit index (CFI) and the normed fit index (NFI) must be equal to or greater than 0.95 (West et al., 2012); in the second model, values of 0.988 and 0.978, respectively, were observed and interpreted as acceptable. The IFI (Incremental Fit Index) and TLI (Tucker-Lewis Index) are incremental fit indices, and the literature recommends values greater than 0.95 (Hu & Bentler, 1999). In the model these values were 0.988 and 0.985,
respectively, interpreted as satisfactory. Finally, the RMSEA (root mean square error of approximation) measures the discrepancy between the observed and predicted covariance matrices per degree of freedom, where satisfactory values must be less than 0.06 (Hu & Bentler, 1999). In the second model, a value of 0.057 was obtained, which falls within the accepted standard. Furthermore, to improve the model, covariances between error terms were specified (Schreiber et al., 2006): covariance paths were drawn between the error terms e1-e3 and e5-e7, due to the relationships between items DIM1.3-DIM1.5 and DIM2.2-DIM2.4, respectively.
Figure 1 shows the final factor model as a result of the CFA and the findings on the rela-
tionship between the latent factors and their items. The standardized correlation values can
also be observed from the CFA results.
For the AVE coefficient to be satisfactory, each factor of the instrument must have a value greater than 0.50. Furthermore, the square root of the AVE (shown on the diagonal) must be higher than the correlations between the factors (Hair et al., 2010). Table 8 shows that the AVE values were greater than 0.5 and, in turn, that the square roots of the AVE (diagonal) were higher than the correlations between the latent factors.
To evaluate discriminant validity, the MSV index (maximum shared variance) was used, where the criterion is that its value be lower than the AVE of each factor (Fornell & Larcker, 1981). Table 8 shows that, although the factors are considerably related, discriminant validity between them is maintained.
Different methods are used in the literature to measure the reliability of instruments. Nunnally (1978) considers that the minimum acceptable value of a reliability coefficient should be at least around 0.7, although values close to 0.80 are preferable. Furthermore, according to Çokluk et al. (2012), a value between 0.80 and 1.00 is considered highly reliable. The values found in this study were greater than 0.90 (Table 9); therefore, the internal consistency obtained in this study can be described as very good. Regarding the composite reliability coefficient (CR), the value must be above 0.7 for all factors (Heinzl et al., 2011); this criterion was also met. Likewise, the Spearman-Brown, Guttman Two-Halves, and McDonald's Omega coefficients reached the recommended thresholds, so the reliability of the instrument was very satisfactory for each latent factor and for the total.
To evaluate the invariance of the factorial structure of the model in relation to the type of educational agent (teachers, students, and parents) and the educational stage (kindergarten, primary, secondary, and university), a multigroup analysis was carried out. The presence of invariance by type of educational agent would be established if there were no significant differences (p > 0.05) between the Unconstrained Model and the Measurement Weights Model. Likewise, following the proposal of Cheung and Rensvold (2002), invariance could be corroborated by the CFI coefficient, where a difference equal to or less than 0.01 between the Unconstrained Model and the Measurement Weights Model would indicate the presence of invariance. For the type of educational agent, no significant differences were found between the two models (p = 0.098), meeting a minimum criterion for accepting model invariance across types of educational agents (Byrne et al., 1989; Marsh, 1993). In addition, the difference between the CFIs obtained was 0.001, allowing the metric invariance model to be accepted in both cases. It can be concluded that metric invariance establishes the equivalence of the basic meaning of the construct, through the factor loadings, across the three groups (teachers, students, and parents). For the analysis between educational stages, no significant differences were found between the two models (p = 0.583), so the instrument is also equally valid for analyzing DC at any educational stage (see Table 10).
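In terms of Cheung and Rensvold's (2002) criterion, the decision rule applied here for the agent-type comparison is:

$$\Delta\mathrm{CFI} = \left|\mathrm{CFI}_{\text{unconstrained}} - \mathrm{CFI}_{\text{weights}}\right| = 0.001 \le 0.01$$

so constraining the factor loadings to equality does not meaningfully worsen the fit, supporting metric invariance.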
After the EFA, a scale composed of 36 items grouped into six factors was obtained. The EFA results were then confirmed by CFA. For this procedure, several fit indices were used and the results were compared with the acceptable values given by Hu and Bentler (1999), Kline (2011), and West et al. (2012). Several models were tested, and the final one yielded results within the range of acceptable values specified in the literature. Subsequently, the discriminant and convergent validity of the instrument was verified, finding satisfactory values in both the AVE index and the MSV index, as recommended by Hair et al. (2010) and Fornell and Larcker (1981). The last type of validity examined was invariance by type of educational agent and educational stage, which yielded satisfactory coefficients demonstrating that the instrument is valid for any group and any educational stage.
Therefore, unlike other instruments, the basic DC scale is validated for any type of edu-
cational agent, whether teachers, students, or parents, as well as for any educational stage
(Early Childhood Education, Primary Education, Secondary Education and Higher Educa-
tion). With this scale, each group will be able to evaluate their DC in relation to fundamen-
tal technological skills and address any deficiencies that may exist, thus allowing them to
improve their capabilities in this area.
In addition to validating the instrument, it is essential to reflect on how to improve both the design and methodology of the study. One limitation lies in the type of
sample used, which was non-probabilistic. Therefore, the results obtained should be inter-
preted with caution when applying them to other members of the educational community
who have similar characteristics, thus avoiding extrapolating the findings to all teachers,
students and parents. Looking to the future, it would be relevant to consider the possibil-
ity of collecting a more representative sample of these agents, in order to achieve a more
adequate generalization of the results and guarantee that the instrument is equally valid for
the entire educational community.
Acknowledgements This scientific article is part of the doctoral thesis of Jesús Manuel Soriano-Alcántara,
assigned to the Doctorate Program in Education and Social Communication of the University of Málaga
(Spain).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as
you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons
licence, and indicate if changes were made. The images or other third party material in this article are
included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material.
If material is not included in the article’s Creative Commons licence and your intended use is not permitted
by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the
copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
References
Alarcón, R., Del Pilar Jiménez, E., & de Vicente-Yagüe, M. I. (2020). Development and validation of the DIGIGLO, a tool for assessing the digital competence of educators. British Journal of Educational Technology, 51(6), 2407–2421. https://doi.org/10.1111/bjet.12919
Alharbi, B. A., Ibrahem, U. M., Moussa, M. A., Alrashidy, M. A., & Saleh, S. F. (2023). Parents' digital skills and their development in the context of the Corona pandemic. Humanities and Social Sciences Communications, 10(1), 1–10. https://doi.org/10.1057/s41599-023-01556-7
Bandalos, D. L., & Finney, S. J. (2018). Factor analysis: Exploratory and confirmatory. In The reviewer's guide to quantitative methods in the social sciences (pp. 98–122). Routledge.
Barragán-Sánchez, R., Corujo-Vélez, M. C., Palacios-Rodríguez, A., & Román-Graván, P. (2020). Teaching digital competence and eco-responsible use of technologies: Development and validation of a scale. Sustainability, 12(18), 7721. https://doi.org/10.3390/su12187721
Bayrakci, S., & Narmanlioğlu, H. (2021). Digital literacy as whole of digital competences: Scale development study. Düşünce ve Toplum Sosyal Bilimler Dergisi, 4, 1–30.
Byrne, B. M. (2013). Structural equation modeling with Mplus: Basic concepts, applications, and programming. Routledge.
Byrne, B. M., Shavelson, R. J., & Muthén, B. (1989). Testing for the equivalence of factor covariance and mean structures: The issue of partial measurement invariance. Psychological Bulletin, 105(3), 456–466. https://doi.org/10.1037/0033-2909.105.3.456
Cabero-Almenara, J., Romero-Tena, R., & Palacios-Rodríguez, A. (2020). Evaluation of teacher digital competence frameworks through expert judgement: The use of the expert competence coefficient. Journal of New Approaches in Educational Research (NAER Journal), 9(2), 275–293. https://doi.org/10.7821/naer.2020.7.578
Cabero-Almenara, J., Gutiérrez-Castillo, J. J., Guillén-Gámez, F. D., & Gaete-Bravo, A. F. (2023). Digital competence of higher education students as a predictor of academic success. Technology, Knowledge and Learning, 28(2), 683–702. https://doi.org/10.1007/s10758-022-09624-8
Cabero-Almenara, J., Osuna, J. B., Castillo, J. J. G., & Rodríguez, A. P. (2020). Validación del cuestionario de competencia digital para futuros maestros mediante ecuaciones estructurales. Bordón: Revista de Pedagogía, 72(2), 45–63. https://doi.org/10.13042/Bordon.2020.73436
Calderón Garrido, D., Gustems Carnicer, J., & Carrera, X. (2020). La competencia digital docente del profesorado universitario de música: diseño y validación de un instrumento. Aloma: Revista de Psicologia, Ciències de l'Educació i de l'Esport, 38(2), 139–148.
Carrera, X., Vaquero Tió, E., & Balsells, M. (2011). Instrumento de evaluación de competencias digitales para adolescentes en riesgo social. Edutec: Revista Electrónica de Tecnología Educativa, 35, 1–25. https://doi.org/10.21556/edutec.2011.35.410
Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1(2), 245–276. https://doi.org/10.1207/s15327906mbr0102_10
Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9, 233–255. https://doi.org/10.1207/S15328007SEM0902_5
Cheung, G. W., & Wang, C. (2017). Current approaches for assessing convergent and discriminant validity with SEM: Issues and solutions. Academy of Management Proceedings, 2017(1), 12706. https://doi.org/10.5465/AMBPP.2017.12706abstract
Cisneros-Barahona, A. S., Marqués-Molías, L., Samaniego-Erazo, N., Mejía-Granizo, C., & De la Cruz-Fernández, G. (2023). Multivariate data analysis: Validation of an instrument for the evaluation of teaching digital competence. F1000Research, 12, 1–22. https://doi.org/10.12688/f1000research.135194.2
Çokluk, Ö., Şekercioğlu, G., & Büyüköztürk, Ş. (2012). Sosyal bilimler için çok değişkenli istatistik: SPSS ve LISREL uygulamaları (Vol. 2). Pegem Akademi.
Contreras-Germán, J., Piedrahita-Ospina, A., & Ramírez-Velásquez, I. (2019). Competencias digitales, desarrollo y validación de un instrumento para su valoración en el contexto colombiano (Development and validation of an instrument to assess digital competences in Colombia). Trilogía Ciencia Tecnología Sociedad, 11(20), 205–232. https://doi.org/10.22430/21457778.1083
European Commission (2019). Key competences for lifelong learning. Directorate-General for Education, Youth, Sport and Culture. Publications Office. https://data.europa.eu/doi/10.2766/569540
European Commission (2006). Recommendation of the European Parliament and of the Council of 18 December 2006 on key competences for lifelong learning. http://u.uma.es/epG/
Falloon, G. (2020). From digital literacy to digital competence: The teacher digital competency (TDC) framework. Educational Technology Research and Development, 68(5), 2449–2472. https://doi.org/10.1007/s11423-020-09767-4
Fan, C., & Wang, J. (2022). Development and validation of a questionnaire to measure digital skills of Chinese undergraduates. Sustainability, 14(6), 3539. https://doi.org/10.3390/su14063539
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.1177/002224378101800104
Ghomi, M., & Redecker, C. (2019). Digital competence of educators (DigCompEdu): Development and evaluation of a self-assessment instrument for teachers' digital competence. CSEDU. https://doi.org/10.5220/0007679005410548
Guillen-Gamez, F., Mayorga-Fernández, M. J., & Contreras-Rosado, J. A. (2021). Validity and reliability of an instrument to evaluate the digital competence of teachers in relation to online tutorials in the stages of Early Childhood Education and Primary Education. Revista de Educación a Distancia (RED), 21(67), 1–20. https://doi.org/10.6018/red.474981
Guillén-Gámez, F. D., Mayorga-Fernández, M. J., & Álvarez-García, F. J. (2020). A study on the actual use of digital competence in the practicum of education degree. Technology, Knowledge and Learning, 25, 667–684. https://doi.org/10.1007/s10758-018-9390-z
Guillén-Gámez, F. D., Colomo-Magaña, E., Cívico-Ariza, A., & Linde-Valenzuela, T. (2023a). Which is the digital competence of each member of the educational community to use the computer? Which predictors have a greater influence? Technology, Knowledge and Learning, 1–20. https://doi.org/10.1007/s10758-023-09646-w
Guillén-Gámez, F. D., Ruiz-Palmero, J., Colomo-Magaña, E., & Cívico-Ariza, A. (2023b). Construcción de un instrumento sobre las competencias digitales del docente para utilizar YouTube como recurso didáctico: análisis de fiabilidad y validez. Revista de Educación a Distancia (RED), 23(76), 1–23. https://doi.org/10.6018/red.549501
Guillén-Gámez, F. D., Ruiz-Palmero, J., & García, M. G. (2023c). Digital competence of teachers in the use of ICT for research work: Development of an instrument from a PLS-SEM approach. Education and Information Technologies, 1–21. https://doi.org/10.1007/s10639-023-11895-2
Gümüş, M. M., & Kukul, V. (2023). Developing a digital competence scale for teachers: Validity and reliability study. Education and Information Technologies, 28(3), 2747–2765. https://doi.org/10.1007/s10639-022-11213-2
Hair, J. F., Jr., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis: A global perspective. Prentice Hall.
Heinzl, A., Buxmann, P., Wendt, O., & Weitzel, T. (Eds.). (2011). Theory-guided modeling and empiricism in information systems research. Springer Science & Business Media.
Hinkin, T. R., Tracey, J. B., & Enz, C. A. (1997). Scale construction: Developing reliable and valid measurement instruments. Journal of Hospitality & Tourism Research, 21(1), 100–120. https://doi.org/10.1177/109634809702100108
Hu, L., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55.
Iglesias-Rodríguez, A., Hernández-Martín, A., Martín-González, Y., & Herráez-Corredera, P. (2021). Design, validation and implementation of a questionnaire to assess teenagers' digital competence in the area of communication in digital environments. Sustainability, 13(12), 6733. https://doi.org/10.3390/su13126733
Jiang, L., & Yu, N. (2023). Developing and validating a teachers' digital competence model and self-assessment instrument for secondary school teachers in China. Education and Information Technologies, 1–26. https://doi.org/10.1007/s10639-023-12182-w
Jogezai, N. A., Baloch, F. A., Jaffar, M., Shah, T., Khilji, G. K., & Bashir, S. (2021). Teachers' attitudes towards social media (SM) use in online learning amid the COVID-19 pandemic: The effects of SM use by teachers and religious scholars during physical distancing. Heliyon, 7(4), e06781. https://doi.org/10.1016/j.heliyon.2021.e06781
Kiryakova, G. (2022). Engaging learning content for digital learners. TEM Journal, 11(4), 1958–1964.
Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). The Guilford Press.
Kline, R. B. (2023). Principles and practice of structural equation modeling. Guilford.
Leighton, K., Kardong-Edgren, S., Schneidereith, T., & Foisy-Doll, C. (2021). Using social media and snowball sampling as an alternative recruitment strategy for research. Clinical Simulation in Nursing, 55, 37–42. https://doi.org/10.1016/j.ecns.2021.03.006
Lin, Y. S., Chen, S. Y., Su, Y. S., & Lai, C. F. (2017). Analysis of students' learning satisfaction in a social community supported computer principles and practice course. Eurasia Journal of Mathematics, Science and Technology Education, 14(3), 849–858. https://doi.org/10.12973/ejmste/81058
Lin, R., Yang, J., Jiang, F., & Li, J. (2023). Does teacher's data literacy and digital teaching competence influence empowering students in the classroom? Evidence from China. Education and Information Technologies, 28(3), 2845–2867. https://doi.org/10.1007/s10639-022-11274-3
Llorente-Cejudo, C., Barragán-Sánchez, R., Puig-Gutiérrez, M., & Romero-Tena, R. (2023). Social inclusion as a perspective for the validation of the DigCompEdu Check-In questionnaire for teaching digital competence. Education and Information Technologies, 28(8), 9437–9458. https://doi.org/10.1007/s10639-022-11273-4
Lomos, C., Luyten, J. W., & Tieck, S. (2023). Implementing ICT in classroom practice: What else matters besides the ICT infrastructure? Large-Scale Assessments in Education, 11(1), 1–28. https://doi.org/10.1186/s40536-022-00144-6
Lucas, M., Bem-Haja, P., Siddiq, F., Moreira, A., & Redecker, C. (2021). The relation between in-service teachers' digital competence and personal and contextual factors: What matters most? Computers & Education, 160, 104052. https://doi.org/10.1016/j.compedu.2020.104052
Marsh, H. W. (1993). The multidimensional structure of physical fitness: Invariance over gender and age. Research Quarterly for Exercise and Sport, 64(3), 256–273. https://doi.org/10.1080/02701367.1993.10608810
Martin, A. (2005). DigEuLit–a European framework for digital literacy: A progress report. Journal of eLiteracy, 2(2), 130–136.
Martínez-Piñeiro, E., Couñago, E. V., & Barujel, A. G. (2018). El papel de la familia en la construcción de la competencia digital. Revista Ibérica de Sistemas e Tecnologias de Informação, (28), 1–13.
Meroño, L., Calderón Luquin, A., Arias Estero, J. L., & Méndez Giménez, A. (2018). Diseño y validación del cuestionario de percepción del profesorado de Educación Primaria sobre el aprendizaje del alumnado basado en competencias (#ICOMpri2). Revista Complutense de Educación, 29(1), 215–235. https://doi.org/10.5209/RCED.52200
Montenegro-Rueda, M., & Fernández-Batanero, J. M. (2023). Adaptation and validation of an instrument for assessing the digital competence of special education teachers. European Journal of Special Needs Education, 1–16. https://doi.org/10.1080/08856257.2023.2216573
Mulaik, S. A. (2018). Fundamentals of common factor analysis. In The Wiley handbook of psychometric testing: A multidisciplinary reference on survey, scale and test development (pp. 209–251). https://doi.org/10.1002/9781118489772.ch8
Mvududu, N. H., & Sink, C. A. (2013). Factor analysis in counseling research and practice. Counseling Outcome Research and Evaluation, 4(2), 75–98. https://doi.org/10.1177/2150137813494766
Nikken, P., & Jansz, J. (2014). Developing scales to measure parental mediation of young children's internet use. Learning, Media and Technology, 39(2), 250–266. https://doi.org/10.1080/17439884.2013.782038
Nunnally, J. C. (1978). Psychometric theory (2nd ed.). McGraw-Hill.
O'Malley, P., Jenkins, S., Brooke, M., Donehower, C., Rabuck, D., & Lewis, M. (2013). Effectiveness of using iPads to build math fluency. In Council for Exceptional Children Annual Meeting, San Antonio, Texas, Apr 3–6.
Pérez, E. R., & Medrano, L. A. (2010). Análisis factorial exploratorio: Bases conceptuales y metodológicas. Revista Argentina de Ciencias del Comportamiento (RACC), 2(1), 58–66.
Perry, J. L., Nicholls, A. R., Clough, P. J., & Crust, L. (2015). Assessing model fit: Caveats and recommendations for confirmatory factor analysis and exploratory structural equation modeling. Measurement in Physical Education and Exercise Science, 19(1), 12–21. https://doi.org/10.1080/1091367X.2014.952370
Ping, L., & Cunningham, D. (2013). In M. S. Khine (Ed.), Application of structural equation modeling in educational research and practice (Vol. 7). Sense.
Quiroz, J. E. S., Marchant, N. A., Faúndez, G. A., & Pais, M. H. R. (2022). Diseño y validación de un instrumento para evaluar competencia digital en estudiantes de primer año de las carreras de educación de tres universidades públicas de Chile. Edutec: Revista Electrónica de Tecnología Educativa, (79), 319–335. https://doi.org/10.21556/edutec.2022.79.2333
Raykov, T., & Marcoulides, G. A. (2008). An introduction to applied multivariate analysis. Routledge.
Riquelme-Plaza, I., Cabero-Almenara, J., & Marín-Díaz, V. (2022). Validación del Cuestionario de Competencia Digital Docente en profesorado universitario chileno. Revista Electrónica Educare, 26(1), 165–179. https://doi.org/10.15359/ree.26-1.9
Romero Rodrigo, M., Gabarda Méndez, C., Cívico Ariza, A., & Cuevas Monzonís, N. (2021). Families at the crossroads of media and information literacy. Innoeduca. International Journal of Technology and Educational Innovation, 7(2), 46–58. https://doi.org/10.24310/innoeduca.2021.v7i2.12404
Schreiber, J. B., Nora, A., Stage, F. K., Barlow, E. A., & King, J. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323–338. https://doi.org/10.3200/joer.99.6.323-338
Sencan, H. (2005). Sosyal ve davranissal olçumlerde guvenilirlik ve gecerlilik [Validity and reliability in social and behavioral measures]. Seçkin Yayıncılık.
Shaffer, B. T., Cohen, M. S., Bigelow, D. C., & Ruckenstein, M. J. (2010). Validation of a disease-specific quality-of-life instrument for acoustic neuroma: The Penn Acoustic Neuroma Quality-of-Life Scale. The Laryngoscope, 120(8), 1646–1654. https://doi.org/10.1002/lary.20988
Şimşek, A. S., & Ateş, H. (2022). The extended technology acceptance model for Web 2.0 technologies in teaching. Innoeduca. International Journal of Technology and Educational Innovation, 8(2), 165–183. https://doi.org/10.24310/innoeduca.2022.v8i2.15413
Søby, M. (2013). Learning to be: Developing and understanding digital competence. Nordic Journal of Digital Literacy, 8(3), 134–138. https://doi.org/10.18261/ISSN1891-943X-2013-03-01
Tomczyk, Ł. (2019). Skills in the area of digital safety as a key component of digital literacy among teachers. Education and Information Technologies, 25(1), 471–486. https://doi.org/10.1007/s10639-019-09980-6
Tzafilkou, K., Perifanou, M., & Economides, A. A. (2022). Development and validation of students' digital competence scale (SDiCoS). International Journal of Educational Technology in Higher Education, 19(1), 1–20. https://doi.org/10.1186/s41239-022-00330-0
Vásquez Peñafiel, M. S., Nuñez, P., & Cuestas Caza, J. (2023). Competencias digitales docentes en el contexto de COVID-19. Un enfoque cuantitativo. Pixel-Bit: Revista de Medios y Educación, 67, 155–185. https://doi.org/10.12795/pixelbit.98129
Viberg, O., Mavroudi, A., Khalil, M., & Bälter, O. (2020). Validating an instrument to measure teachers' preparedness to use digital technology in their teaching. Nordic Journal of Digital Literacy, 15(1), 38–54. https://doi.org/10.18261/issn.1891-943x-2020-01-04
Wang, X., Wang, Z., Wang, Q., Chen, W., & Pi, Z. (2021). Supporting digitally enhanced learning through measurement in higher education: Development and validation of a university students' digital competence scale. Journal of Computer Assisted Learning, 37(4), 1063–1076. https://doi.org/10.1111/jcal.12546
Watkins, M. W. (2021). A step-by-step guide to exploratory factor analysis with SPSS. Routledge.
West, R. F., Meserve, R. J., & Stanovich, K. E. (2012). Cognitive sophistication does not attenuate the bias blind spot. Journal of Personality and Social Psychology, 103(3), 506–519. https://doi.org/10.1037/a0028857
Worthington, R. L., & Whittaker, T. A. (2006). Scale development research: A content analysis and recommendations for best practices. The Counseling Psychologist, 34(6), 806–838. https://doi.org/10.1177/0011000006288127
Yazar, T., & Keskin, İ. (2016). Examination of prospective teachers' digital competence in the context of lifelong learning. Uluslararası Eğitim Programları ve Öğretim Çalışmaları Dergisi, 6(12), 133–150.
Zakharov, K., Komarova, A., Baranova, T., & Gulk, E. (2022). Information literacy and digital competence of teachers in the age of digital transformation. In XIV International Scientific Conference INTERAGROMASH 2021: Precision agriculture and agricultural machinery industry, Volume 2 (pp. 857–868). Springer International Publishing. https://doi.org/10.1007/978-3-030-80946-1_78
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.