
The current issue and full text archive of this journal is available on Emerald Insight at:
https://www.emerald.com/insight/2046-9012.htm

Developing a measurement of employee learning agility

Jinju Lee and Ji Hoon Song
Department of Educational Technology, Hanyang University, Seoul, Republic of Korea

European Journal of Training and Development, Vol. 46 No. 5/6, 2022, pp. 450-467
© Emerald Publishing Limited, 2046-9012, DOI 10.1108/EJTD-01-2021-0018
Received 28 January 2021
Revised 17 March 2021
Accepted 23 March 2021

Abstract
Purpose – The purpose of this study is to develop a reliable and valid measurement of employee learning agility for use in human resources development.
Design/methodology/approach – This paper analyzed a total of 365 cases collected from Korean
organizations. This paper conducted exploratory factor analysis, confirmatory factor analysis and Rasch
analysis to validate the measurement. This paper conducted a measurement invariance (MI) test to confirm
the generalizability of the measurement and used Harman’s single factor test to assess the common method
variance (CMV).
Findings – This paper derived a learning agility measurement with six subfactors (Cronbach's α = 0.893). This paper verified that it could be applied equally to supervisors and subordinates using an MI test. Harman's single factor test indicated that CMV was not a concern. Based on I-ADAPT theory, the learning agility measurement can be applied to assess not only leader competency but also general employee competency.
Research limitations/implications – Practitioners may use this model of learning agility for developing competency-based training programs and evaluations.
Originality/value – This study is meaningful because it extends the concept of learning agility. In
particular, the MI test indicated that there are no differences between supervisors and employees regarding
the utility of the concept. The concept in this paper is distinguished from other studies by applying the Rasch
model from an item response theory perspective.
Keywords Measurement invariance, Scale development, Rasch analysis, Organizational agility, I-ADAPT theory, Learning agility
Paper type Research paper

Learning agility is the willingness and ability to learn new things to obtain results under
difficult or different conditions (De Meuse et al., 2010; Lombardo and Eichinger, 2000). The
business environment is changing rapidly, and organizations increasingly seek to determine
the potential competence of individual employees rather than their current achievements
when assessing core human resources (Corporate Leadership Council, 2005; De Meuse,
2017). In previous studies, learning agility has mainly been studied in the context of leader
success (De Meuse, 2017). Lombardo and Eichinger (2000) emphasized that greater learning
abilities are associated with greater potential within organizations and defined learning
agility as being able to rapidly practice what is learned in a new or first encounter situation
based on ability and willingness to learn from experience (Eichinger et al., 2010; Lombardo
and Eichinger, 2000). However, learning agility is not only required for leaders; it is a competency critical for all organization members because human resources that adapt quickly and continuously to changing environments are essential for the sustainability of an organization's competitive advantage (Barney and Clark, 2007; Garvin et al., 2008).
Although the learning agility of adult learners is closely related to mental and psychological factors, including experience and learning disposition (De Meuse, 2017), there
is still confusion about what learning agility is, how to measure it and its relationship to leader success (De Meuse, 2017; De Meuse et al., 2012; Gravett and Caldwell, 2016). Prior research has been conducted to derive learning agility characteristics and develop measurement techniques (Bedford, 2011; Burke et al., 2016; De Meuse et al., 2012; Mitchinson and Morris, 2012; Smith, 2015). However, as there is no consensus definition of learning agility, there is no widely accepted measure (De Meuse, 2017). Therefore, in this study, we develop and justify a measurement of learning agility for use in academia and in practice. We
also consider the mechanisms of individuals, teams and organizations as factors that affect
organizational performance in a multidimensional manner (Colquitt et al., 2011). From a
human resource development (HRD) perspective, applying individual experience and
knowledge to job performance is critical to organizational performance. People with high
learning agility must consider individual and organizational learning perspectives
simultaneously because they are continually evolving, growing and trying to maintain a
new way of thinking (Lee and Song, 2020).
The purpose of this study is to develop a valid and reliable measurement of learning agility. The measurement was developed according to the procedure presented by DeVellis (2016). Content validity was confirmed through two expert reviews, and the difficulty of the items was assessed by performing a Rasch analysis based on item response theory (IRT). We secured statistical validity by performing measurement invariance testing to confirm the generalizability of the measurement developed in this study. Collecting and reviewing data on learning agility in this study enabled logical and clear conceptual reasoning, through which the conceptual integrated model of learning agility can be justified (Whetten, 1989).
Finally, it is necessary to apply learning agility measurements to all members of an
organization rather than confining the assessment to leaders. Learning agility has been
studied in the context of leader competence (De Meuse et al., 2010). However, modern
organizational structure increasingly emphasizes the efficiency of small-scale, team-based
and project-based operations to enable more agile and expeditious responses. Thus, learning
agility is not an aptitude that only a small number of leaders need to have, but rather a
criterion that can differentiate individual competence levels among employees (Ployhart and
Bliese, 2006; Pulakos et al., 2000).
Therefore, in this study we examine existing measurement tools used to capture learning
agility and develop a valid and reliable measurement technique using statistical tests,
including Rasch analysis, based on the conceptual integration model.

Theoretical background
Concept of learning agility
Learning agility is the ability to flexibly change thoughts and behavior and adapt to new
environments based on willingness to learn from experience when faced with unfamiliar
environments (Lombardo and Eichinger, 2000). It focuses on learning types related to new behaviors or changing attitudes, including the competence and ability to learn from experience and perform successfully in new and unfamiliar environments.
Prior studies of learning agility can be largely classified into
two perspectives (De Meuse, 2017). In the bandwidth approach, learning agility is seen as a multidimensional characteristic of individuals who learn effectively through experience (De Meuse et al., 2012), whereas in the limited approach, learning agility is seen as a factor
related to the speed and flexibility of learning through experience (DeRue et al., 2012a; Ryu and Oh, 2016).
However, as there is still no standard definition of learning agility, there are differences
in research approaches. Although most studies have addressed learning agility in the
leadership domain, it is not desirable to limit considerations of learning agility to leaders’
capabilities, since every employee, regardless of position, also needs to acquire new skills and knowledge quickly for individual performance and organizational change. In the
following sections, we analyze the main models of learning agility, reflecting the flow of
learning agility studies.

Review of learning agility measurements


Learning agility has traditionally been considered an essential leadership ability, so it was revitalized as a concept by leadership research institutes such as the Center for Creative Leadership (CCL), and a learning agility model was developed by Eichinger et al. (2010). In the following sections, we examine several learning agility measurements, reflecting the flow of learning agility studies.
Lombardo and Eichinger (2000) attempted to define what top learners would learn
through experience and developed measurements to identify the potential of managers. The
Choices Architecture metric conceptualizes learning agility as the relationship between
people agility, mental agility, change agility and results agility. De Meuse et al. (2012) built on this model and measured learning agility with viaEDGE, which adds self-awareness. This measurement has limitations for measuring the learning agility of general employees because it targets only leaders.
The Learning Agility Assessment Inventory (LAAI), developed by CCL in collaboration
with Columbia University, is different from Choices Architecture and viaEDGE because it
reflects factors representing “derailment.” LAAI, developed by Mitchinson and Morris
(2012), identified five factors that promote learning agility: innovating, performing,
reflecting, risking and defending. It differs from other measurements because it includes
derailers as defending factors that hinder learning agility.
Smith (2015) found that the vast majority of members in an organization can perform
new tasks to succeed in new environments while at the same time solving unfamiliar and
challenging problems. Smith identified five factors that promote learning agility: feedback
seeking, information seeking, reflection, experimenting and agility. However, this
measurement also has limitations because learning agility and learning from experience are
difficult to distinguish.
The Bedford (2011) measurement was developed as part of the job performance rating for
measuring work performance in the workplace, and thus the measurement itself explores
the relevance of job performance. To reflect individual characteristics, Bedford (2011)
developed a learning agility measurement to diagnose job performance that included
personal characteristics and cognitive ability. Seven characteristics of individuals with high
levels of learning agility were identified: seeking feedback, actively collecting information, admitting mistakes, taking risks, learning through cooperation, taking on new challenges with bold action, and a habit of reflection. Although it is relatively easy to measure learning
agility, it is necessary to discuss whether it is appropriate to diagnose learning agility using
a single dimension, given previous research on the learning agility construct.
Moreover, the distinction between general cognitive ability and learning agility is
ambiguous, as both emphasize the speed and flexibility of learning.
Im et al. (2017) developed a learning agility measurement in the Korean context, divided
into five subfactors: self-awareness, growth-orientation, flexible thinking, reflective
behavior seeking and behavioral change. This measurement is significant because it
excludes outcome variables and considers the acquisition of information through personal learning and feedback.
The learning agility measurements reviewed above share a limitation in that they are difficult to generalize to all members of an organization, mainly because they were developed to predict potential high performance in leaders. To generalize the measurement, it is necessary to verify whether it works for various organizational positions. Accordingly, in this study we performed Rasch analysis based on IRT, which analyzes the relationship between the trait and item responses more effectively than classical test theory (Hong et al., 2003; Rasch, 1960). Furthermore, we conducted a measurement invariance (MI) test to verify that the measurement works equally for leaders and employees.

Methods
Samples
First, in this study, measurements were not focused on a specific industry group or occupation; we performed random sampling of workers in various industries and companies, because learning agility is not specific to particular industry groups and is not limited to specific positions, genders or ages in modern organizations.
Second, in this study, we conducted an MI test to verify the generalizability of the measurement. To this end, our sample was divided into employees and managers when collecting data to determine whether learning agility, which has been considered a concept for predicting the potential of leaders, can be extended and applied to general workers.
We collected data from 390 respondents by distributing online and offline surveys to 450
Korean private sector employees (response rate 86.6%). Responses with missing values and multivariate outliers, identified using the Mahalanobis D distance, were excluded from the final data set for analysis (Kline, 2015).
The total number of respondents included in the final sample was 365, including 121
males (33.2%) and 244 females (66.8%). The majority of the respondents were in their 30s,
with 1–3 years of work experience. There were 165 managers in the sample (45.2%) and 200
general employees (54.8%).

Measures
As learning agility is a complex concept, a methodological approach that includes a
multiple-item scale rather than a single-item scale should be used for measurement. We
followed the process proposed by DeVellis (2016) to develop the scale. Measurement type
(scale type, response category, response format) should be chosen to minimize questionnaire
information loss (DeVellis, 2016). The item set for the learning agility measure developed in
this study was based on a conceptual model of learning agility (Lee and Song, 2020). Factors
included self-directed learning, seeking constructive feedback, critical reflection, challenging
experience, rational problem-solving and adaptation to the job environment; for each factor, items were drafted by adapting statements similar in context from the existing learning agility measurements summarized in Table 1.
Confirmation of predictive validity requires examining criteria related to future behavior,
such as the use of reminders and purchasing intent, to predict state changes in a
measurement concept or attribute at a future point. This study measured task performance,
contextual performance and innovative work behavior, which have frequently been used as
outcome variables to assess learning agility in previous reviews.
Task performance relates to core organizational maintenance tasks and services that
contribute to the effectiveness of the organization by supporting the core function (Borman
and Motowidlo, 1993). In this study, we include six task performance items developed by
Lee and Yoo (2016).
Table 1. Sources used to develop the learning agility measurement

Citation (year); Subfactors (no. of items)
Lombardo and Eichinger (2000); People agility (6), mental agility (6), change agility (6), results agility (7)
De Meuse et al. (2012); Seeking (5), sense-making (5), internalizing (5), applying (5), self-awareness (5)
Mitchinson and Morris (2012); Innovating, performing, reflecting, risking, defending
Smith (2015); Feedback seeking (5), information seeking (5), reflection (9), experimenting (9), agility (5), other (10)
Bedford (2011); Learning agility (9)
Job performance rating (JPR, 2009); Learning agility (9), interpersonal effectiveness (3), high potential (6), building collaboration (3), leading courageously (3), creating alignment (3), team leadership (3), developing leaders (3), strategic thinking (3), business acumen (3), critical thinking and judgment (3), planning and organizing (3), managing execution (5), drive for results (3), innovation and risk taking (3), resilience (3), integrity (3), overall judgments (10)
Kember et al. (2000); Habitual action (4), understanding (4), reflection (4), critical reflection (4)
Im et al. (2017); Self-awareness (5), growth-oriented (5), flexible thinking (5), reflective behavior seeking (5), behavioral change (5)

Contextual performance is an extended concept that includes behaviors beyond task performance that support the organization. It does not directly contribute to achieving organizational goals but indirectly supports core functions (Borman and Motowidlo, 1993; Motowildo et al., 1997). In this study, we used items derived from Kim and Yoo (2002).
Innovative work behavior is defined as the deliberate creation and application of new
ideas to improve individual, group and organizational performance. Learning agility that
allows individuals to quickly acquire the knowledge needed in a new, unfamiliar
environment and adapt it to changing environments enables members of the organization to
devise ingenious methods while carrying out tasks or operationalizing new ideas.
Innovative work behaviors were evaluated using nine items developed by Scott and Bruce
(1994).

Procedures
Expert review. Two expert reviews were conducted to assess the content validity of
measures based on the concept of learning agility. First, we consulted three bilingual
Korean-English speakers in the HRD field, two with over ten years of experience in HRD
practice and a third who currently teaches at universities in the USA. The experts reviewed
the overall concept, a brief description of the concept and learning agility components and
provided e-mail feedback. Overall, the experts noted that the concept of “jobs” in learning
agility is rooted in the organizational context, so it is necessary to refine the organizational
context to define learning agility. In addition, the experts noted that the six factors lacked specificity and that including "learning agility" among the factor names could lead to multicollinearity. Accordingly, that factor was specified and renamed to meet the purpose of the measurement development. Items with readability problems or overlapping expressions were revised, and items the experts flagged as open to misunderstanding were reassessed. The wording of the items was briefly revised after the first expert review to improve readability.
The second review was conducted with seven experts, including HRD experts and holders of doctoral degrees in educational technology. The item-level content validity index (I-CVI) was calculated for each item, and items with low validity were considered for deletion (Fehring, 1987). The items that were retained had I-CVI coefficients higher than 0.80. The opinions and I-CVI coefficients collected through the first and second expert reviews are summarized in Table 2, in which the items removed based on feedback are identified and the corrections made for each item are noted.
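For reference, the I-CVI cited here is conventionally computed as the proportion of experts who rate an item as relevant (a rating of 3 or 4 on a four-point relevance scale); in the notation below (ours, not the paper's):

\[ \text{I-CVI}_i = \frac{n_i^{\text{relevant}}}{N_{\text{experts}}} \]

For example, under this formula an item rated relevant by 10 of 12 raters would have I-CVI = 10/12 ≈ 0.83 (the panel size here is illustrative only).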
Rasch analysis. In this study, we examined the six factors that made up the learning agility measurement using a rating scale model in jMetrik software (Meyer, 2009–2018). We adopted six learning agility factors derived from a conceptual model developed through an integrated literature review process (Lee and Song, 2020) and determined (a) whether each item's discrimination was similar to or different from other items within the measurement and (b) whether the measurement was composed of items with varying levels of difficulty to capture different psychological trait levels among respondents. Therefore, in this study, we tested whether the distribution of item difficulty was appropriate by analyzing item difficulty levels and item characteristic curves.
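For background, the rating scale model expresses the probability that person n with trait level \theta_n selects category k of item i with difficulty \delta_i, using thresholds \tau_j shared across items; a standard formulation (not reproduced in the original paper) is:

\[ P(X_{ni}=k) = \frac{\exp \sum_{j=0}^{k} (\theta_n - \delta_i - \tau_j)}{\sum_{m=0}^{M} \exp \sum_{j=0}^{m} (\theta_n - \delta_i - \tau_j)}, \qquad \tau_0 \equiv 0 \]

where M is the highest response category.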
Normalization. Before verifying reliability and validity, the normality of the data collected in this study was checked by examining the standard deviation, skewness and kurtosis. The skewness ranged from −0.527 to 0.041. Kurtosis refers to the degree of peakedness at the center of a distribution and equals that of the standard normal distribution when it is zero. Kurtosis ranged from −0.571 to 0.642. The data collected in this study were therefore confirmed to be acceptable according to skewness and kurtosis.
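A minimal sketch of this normality screening in Python, assuming the item responses are in a pandas DataFrame (the file and column names are hypothetical):

```python
import pandas as pd
from scipy.stats import kurtosis, skew

# Hypothetical file; items are prefixed by their factor letter (L, F, R, E, P, A)
df = pd.read_csv("learning_agility_responses.csv")
items = [c for c in df.columns if c[0] in "LFREPA"]

for col in items:
    s = skew(df[col], bias=False)
    k = kurtosis(df[col], fisher=True, bias=False)  # 0 corresponds to a normal distribution
    print(f"{col}: SD={df[col].std():.3f}, skewness={s:.3f}, kurtosis={k:.3f}")
```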
Factor analysis. The purpose of factor analysis is as follows. First, the number of
variables is decreased through factor analysis, and unnecessary subfactors are removed.
Many variables are reduced to a small number of factors by bundling the relevant items into
a limited number of relevant variables to illustrate learning agility. Second, as the related
variables are bundled, the characteristics of the variables can be better grasped. As the
bundled factors have mutually independent characteristics, the characteristics of each
variable are known. Third, the validity of the measurement can be evaluated. Finally, factor
scores can be applied to further studies, such as regression analysis or cluster analysis
(Kline, 2015; Thompson, 2004).
The first step in exploratory factor analysis (EFA) was to determine the number of factors; we used varimax rotation for the factor rotation and extracted factors with eigenvalues greater than one (Osborne et al., 2014; Williams et al., 2010; Thompson, 2004).
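A minimal sketch of this EFA procedure using the Python factor_analyzer package (assuming the same hypothetical DataFrame of item responses as above):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

df = pd.read_csv("learning_agility_responses.csv")  # hypothetical file

# Sampling adequacy and sphericity checks before factoring
chi2, p = calculate_bartlett_sphericity(df)
_, kmo_total = calculate_kmo(df)
print(f"Bartlett chi2 = {chi2:.3f} (p = {p:.4f}), KMO = {kmo_total:.3f}")

# Varimax-rotated extraction; inspect eigenvalues, loadings and communalities
fa = FactorAnalyzer(n_factors=6, rotation="varimax")
fa.fit(df)
eigenvalues, _ = fa.get_eigenvalues()   # retain factors with eigenvalues > 1
loadings = pd.DataFrame(fa.loadings_, index=df.columns)
communalities = fa.get_communalities()  # flag items below 0.50 for deletion
```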
Confirmatory factor analysis (CFA) can verify factors by analyzing the relationships between measured and latent variables and evaluating the model's overall fit (Brown, 2014; Kline, 2015; Thompson, 2004). A non-significant chi-square result would indicate that the data used in this study are consistent with the research model. However, as the chi-square goodness-of-fit statistic tends to inflate as the sample size increases, we also included several fit indices in the analysis (Hair et al., 1998, 2016) and estimated the internal consistency reliability (Cronbach's α).
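For reference, the internal consistency estimate reported throughout is Cronbach's α, which for k items with item variances \sigma_i^2 and total score variance \sigma_X^2 is:

\[ \alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma_i^2}{\sigma_X^2} \right) \]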
Measurement invariance and single factor test. We based assessments of validity on (a) test content, (b) internal structure, (c) external criteria and (d) generalization, following the validity framework of Messick (1995). MI tests should be implemented to determine whether the measurement functions equally, without discrimination, for supervisors and subordinates (Cheung and Rensvold, 2002; Van de Schoot et al., 2012; Meredith, 1993; Vandenberg and Lance, 2000; Steinmetz et al., 2009).
In addition, we minimized CMV, which can be caused by self-reporting, in the research design and data collection stages as well as the statistical processing stage. Before data were collected, the items were revised twice for content validity by expert review. For statistical processing, we conducted Harman's single factor test to check for the occurrence of CMV in the collected data (Podsakoff et al., 2003).

Table 2. Expert review to ensure content validity

Item; Comments; I-CVI; Modifications
L1; Unnecessary modifiers and vocabulary; 1.00; Item modified
L2; –; 0.92; –
L3; –; 0.67; Consider deletion
L4; The learning opportunities offered by the company are less relevant to self-directed learning; 0.50; –
L5; –; 0.92; –
L6; –; 0.92; –
L7; –; 0.75; –
L8; Inappropriate adjectives must be modified; 0.67; Consider modification or deletion
F1; Inappropriate adjectives must be modified; 0.92; Item modified
F2; –; 1.00; –
F3; –; 0.92; –
F4; Two perspectives are expressed together; 0.92; Separated into two questions
F5; Modify the sentence order considering readability; 0.92; Item modified
F6; –; 0.92; –
F7; The contents of the item overlap with F1; 0.92; –
F8; The contents of the item overlap with F1; 0.83; Consider deletion
R1; May be interpreted as a different factor; 0.92; Separate from other factors
R2; –; 0.75; –
R3; Inappropriate expression included; 0.83; Item modified
R4; –; 0.75; Consider deletion
R5; Needs to be more specific in terms of readability; 0.83; Item modified
R6; Redundant expression; 0.83; Item modified
R7; This item can cause biased responses owing to assertive expressions; 0.83; Item modified
R8; –; 0.58; Consider deletion
E1; –; 0.92; –
E2; Two perspectives are expressed together; 0.75; Item modified
E3; Modify the sentence order considering readability; 0.75; Item modified
E4; –; 0.83; –
E5; Two perspectives are expressed together; 0.75; Consider modification or deletion
E6; Need to delete unnecessary expressions for clarity; 0.92; Item modified
E7; –; 0.92; –
E8; Correct the sentence in the same order as the previous question for consistency; 0.75; Item modified
P1; Item does not describe the variable; 0.67; Consider deletion
P2; –; 0.83; –
P3; Two perspectives are expressed together; 0.58; Consider separation or deletion
P4; –; 0.83; –
P5; –; 0.67; Consider deletion
P6; –; 0.92; –
P7; –; 0.92; –
P8; –; 0.50; Consider deletion
A1; Unclear wording included; 0.83; –
A2; Redundant expression; 0.75; Consider modification or deletion
A3; Need to use clearer expressions; 0.75; Item modified
A4; –; 0.92; –
A5; Unclear wording included; 0.83; Item modified
A6; Unclear wording included; 0.83; Item modified
A7; Need to delete unnecessary expressions for clarity; 0.58; Consider modification or deletion
A8; Redundant expression; 0.92; Item modified

Notes: L = self-directed learning, F = seeking constructive feedback, R = critical reflection, E = challenging experience, P = rational problem solving, A = adaptation to the job environment

Results
Rasch analysis results
We focused on each item's difficulty level and the respondents' trait scores, and compared the distributions of the difficulty levels. An item was considered inappropriate if its infit or outfit mean square fell outside the range 0.7–1.3 or its standardized Z value was less than −2 or greater than 2 (Bond and Fox, 2001). Four items were deleted based on the results of the Rasch analysis (Table 3). In addition to the Rasch analysis, eight items were deleted based on the first and second expert reviews owing to I-CVI coefficients below 0.80.
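For readers unfamiliar with these diagnostics, infit and outfit are mean-square statistics over the standardized Rasch residuals z_{ni}; the usual definitions (our notation, not reproduced in the paper) are:

\[ \text{Outfit}_i = \frac{1}{N} \sum_{n=1}^{N} z_{ni}^2, \qquad \text{Infit}_i = \frac{\sum_{n=1}^{N} w_{ni} z_{ni}^2}{\sum_{n=1}^{N} w_{ni}} \]

where w_{ni} is the model variance of response X_{ni}; values near 1 indicate good fit, and the 0.7–1.3 band above is a common acceptance range.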

Table 3. Results of Rasch analysis

Item; Difficulty; Std. error; WMS; Std. WMS; UMS; Std. UMS
E1; 1.22; 0.09; 1.47; 5.52; 1.63; 6.86
E2; 0.17; 0.08; 1.39; 4.79; 1.44; 5.29
E3; 0.66; 0.08; 1.32; 3.89; 1.32; 3.84
E7; 0.95; 0.07; 1.31; 4.04; 1.35; 4.45

Notes: WMS = weighted mean square (infit), UMS = unweighted mean square (outfit)

Exploratory factor analysis results

Before conducting EFA, the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity were performed to determine whether the data collected for this study were
appropriate for factor analysis (Williams et al., 2010). The KMO value was 0.926, indicating that the data collected for this study were suitable for factor analysis; in general, a KMO value greater than 0.90 is considered relatively good. Bartlett's test of sphericity produced χ² = 7,446.494 and p < 0.01, significant at the 1% significance level. Items were removed according to three criteria: (a) communality below 0.50, (b) factor loading below 0.50 and (c) cross-loading on two or more factors.
The factors' eigenvalues were plotted as shown in Figure 1, in which the slope of the graph flattens after the sixth factor. The closer an item's communality is to 1, the more useful the variable. Items with communality below 0.50 were considered for deletion: L1 = 0.403, R8 = 0.413, A1 = 0.410 and P1 = 0.457. The communality of the remaining items was within the range of 0.518–0.724.
A total of 18 items were selected after removing items according to the exclusion criteria. The KMO value was 0.872, so the extracted factors could be evaluated appropriately. According to Bartlett's test of sphericity, χ² = 1,421.200 and p < 0.01, significant at the 1% significance level. The cumulative variance explained was 71%, which satisfies social science standards. The loadings of the six factors extracted from the EFA were between 0.601 and 0.860. Self-directed learning showed the highest explanatory power (12.409%) among the six factors that constitute learning agility. The results of the EFA are shown in Table 4.

Confirmatory factor analysis results


CFA was conducted using LISREL 8.8. Each indicator of the six-factor instrument met the criterion of a loading of 0.40 or higher (p = 0.001). The model fit was also examined (χ² = 202.235); divided by the degrees of freedom, the value was 1.685, less than 3, and hence indicated a suitable model (Hu and Bentler, 1999; Kline, 2015). The goodness-of-fit index (GFI) was 0.890, meaning the model accounted for approximately 89% of the observed covariance in the whole measurement model. The comparative fit index (CFI) was 0.959, where a good fit is defined as >0.90 (Thompson, 2004). The data collected in this study had a root-mean-square error of approximation (RMSEA) of 0.059 and a standardized root mean square residual (SRMR) of 0.064, indicating that the measurement model was appropriate (Table 5).
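The paper fit this model in LISREL 8.8; purely as an illustration, an equivalent six-factor CFA can be specified in Python with the semopy package (the model syntax below uses the retained items from Table 4, and all file names are hypothetical):

```python
import pandas as pd
import semopy

df = pd.read_csv("learning_agility_responses.csv")  # hypothetical file

# Six correlated factors; "=~" defines which items load on each latent variable
desc = """
SDL =~ L2 + L5 + L7
CR  =~ R5 + R6 + R7
CE  =~ E4 + E6 + E8
SCF =~ F2 + F4 + F6
AJE =~ A3 + A4 + A6
RPS =~ P5 + P7 + P8
"""
model = semopy.Model(desc)
model.fit(df)
print(semopy.calc_stats(model).T)  # chi2, df, CFI, GFI, AGFI, RMSEA, etc.
```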

Figure 1. Scree plot
Table 4. Factor analysis results

Item; L; R; E; F; A; P
L7; 0.767; 0.301; 0.222; 0.051; 0.156; 0.214
L2; 0.751; 0.061; 0.126; 0.238; 0.083; 0.112
L5; 0.743; 0.294; 0.246; 0.051; 0.145; 0.154
R6; 0.069; 0.847; 0.146; 0.034; 0.100; 0.063
R7; 0.244; 0.732; 0.164; 0.082; 0.143; 0.234
R5; 0.219; 0.682; 0.175; 0.164; 0.071; 0.102
E6; 0.108; 0.132; 0.798; 0.131; 0.197; 0.189
E4; 0.188; 0.285; 0.728; 0.150; 0.085; 0.036
E8; 0.293; 0.135; 0.705; 0.119; 0.011; 0.258
F2; 0.003; 0.113; 0.215; 0.812; 0.139; 0.036
F6; 0.077; 0.119; 0.012; 0.805; 0.122; 0.246
F4; 0.359; 0.029; 0.150; 0.719; 0.120; 0.050
A3; 0.151; 0.103; 0.019; 0.019; 0.860; 0.035
A4; 0.009; 0.171; 0.071; 0.229; 0.781; 0.169
A6; 0.186; 0.014; 0.228; 0.194; 0.601; 0.282
P5; 0.104; 0.031; 0.143; 0.102; 0.004; 0.854
P8; 0.222; 0.256; 0.129; 0.059; 0.207; 0.714
P7; 0.176; 0.278; 0.283; 0.120; 0.374; 0.608
Eigenvalue; 2.234; 2.226; 2.113; 2.094; 2.077; 2.036
Variance (%); 12.409; 12.367; 11.737; 11.632; 11.541; 11.314
Cumulative variance (%); 12.409; 24.776; 36.513; 48.145; 59.686; 71.000

Notes: L = self-directed learning, F = seeking constructive feedback, R = critical reflection, E = challenging experience, P = rational problem solving, A = adaptation to the job environment

Table 5. Results of factor analysis and fit indices

χ²; df; χ²/df; GFI; CFI; AGFI; NNFI; RMSEA; SRMR
202.235; 120; 1.685; 0.890; 0.959; 0.843; 0.905; 0.059; 0.064

Notes: χ² = chi-square, df = degrees of freedom, GFI = goodness-of-fit index, CFI = comparative fit index, AGFI = adjusted goodness-of-fit index, NNFI = non-normed fit index, RMSEA = root-mean-square error of approximation, SRMR = standardized root mean square residual

Reliability and validity


The skewness ranged from −0.485 to 0.318, and kurtosis ranged from −0.682 to 0.349. The Cronbach's α coefficient of self-directed learning was the highest at 0.812, while the reliability of adaptation to the job environment was the lowest at 0.728; every factor exceeded the Cronbach's α standard of 0.7. The analysis of the overall reliability of the learning agility measurement showed a high internal consistency of 0.893 (Table 6).
Discriminant validity tests assess whether the measured constructs are distinct from one another. Based on the average variance extracted (AVE), two of the variables developed for this study, rational problem-solving (0.454) and adaptation to the job environment (0.466), fell below the standard value of 0.50. However, validity was confirmed for the other four variables, as all AVEs were >0.50 (self-directed learning, 0.625; challenging experience, 0.675; seeking constructive feedback, 0.509; and critical reflection, 0.502).
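For reference, both statistics follow Fornell and Larcker (1981): with standardized loadings \lambda_i over a construct's n items,

\[ \text{AVE} = \frac{\sum_{i=1}^{n} \lambda_i^2}{n}, \qquad \text{CR} = \frac{\left( \sum_{i=1}^{n} \lambda_i \right)^2}{\left( \sum_{i=1}^{n} \lambda_i \right)^2 + \sum_{i=1}^{n} \left( 1 - \lambda_i^2 \right)} \]

with AVE ≥ 0.50 and CR ≥ 0.70 as the conventional thresholds.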
Correlation analysis was performed using SPSS 25.0, and we calculated construct reliability (CR). The mean, standard deviation and correlation coefficients of each variable are summarized in Table 7. As CR ranged from 0.801 to 0.869 and was therefore greater than the 0.70 criterion, the reliability of the measurement developed in this study was determined to be reasonable (Fornell and Larcker, 1981; Anderson and Gerbing, 1988). The correlations between variables were significant at the p < 0.01 level. Additionally, the variance inflation factor (VIF) values ranged from 1.227 to 1.283, indicating no multicollinearity.
We performed regression analysis because there were significant correlations between the variables. The effect of learning agility on innovative work behavior was β = 0.580, on task performance β = 0.498 and on contextual performance β = 0.437; learning agility thus had the greatest effect on innovative work behavior. This reflects the fact that learning agility is defined as learning and applying the knowledge and skills required for the job (Table 8).
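A minimal sketch of these simple regressions with statsmodels, assuming composite scores for each scale have already been computed (file and column names hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

scores = pd.read_csv("composite_scores.csv")  # hypothetical: LA, TP, CP, IWB columns

# Standardizing both sides makes the slope the standardized coefficient (beta)
z = (scores - scores.mean()) / scores.std()

for dv in ["TP", "CP", "IWB"]:
    fit = sm.OLS(z[dv], sm.add_constant(z[["LA"]])).fit()
    print(f"LA -> {dv}: beta = {fit.params['LA']:.3f}, R2 = {fit.rsquared:.3f}")
```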

Measurement invariance results


To assess MI, we used AMOS for data analysis. The recommended steps include (1) configural invariance, (2) metric invariance, (3) scalar invariance, (4) factor variance invariance, (5) factor covariance invariance and (6) latent mean invariance (Hong et al., 2003).
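Each successive step is evaluated with a chi-square difference test between the newly constrained model and the previous one:

\[ \Delta\chi^2 = \chi^2_{\text{constrained}} - \chi^2_{\text{free}}, \qquad \Delta df = df_{\text{constrained}} - df_{\text{free}} \]

Invariance is retained at a step when \Delta\chi^2 is non-significant at \Delta df degrees of freedom, as in the metric step reported below (\Delta\chi^2(12) = 13.391, p = 0.949).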
Configural invariance. For the first step, we conducted a configural invariance test (Model 1). The requirement for configural invariance is satisfied if the basic model structure is invariant across groups. The model fit was significant at 355.947, and the minimum chi-square/degrees of freedom (CMIN/df) value was 1.483. The normed fit index (NFI) and Tucker–Lewis index (TLI) did not reach the recommended standard of 0.9, but the CFI exceeded it, and the RMSEA was less than the recommended criterion of 0.08 (Hu and Bentler, 1999). Therefore, configural invariance was accepted.
Table 6. Internal consistency reliability

Factor; Cronbach's α
Self-directed learning; 0.812
Seeking constructive feedback; 0.767
Critical reflection; 0.759
Adaptation to the job environment; 0.728
Challenging experience; 0.769
Rational problem-solving; 0.774
Overall reliability; 0.893

Note: N = 365

Table 7. Correlation estimates and reliability

Constructs; M; SD; LA; TP; CP; IWB; AVE; CR; VIF
LA; 3.699; 0.449; (0.893); –; –; –; 0.558; 0.964; –
TP; 3.618; 0.619; 0.498**; (0.802); –; –; 0.509; 0.805; 1.283
CP; 3.526; 0.638; 0.437**; 0.361**; (0.797); –; 0.502; 0.801; 1.227
IWB; 3.376; 0.648; 0.580**; 0.410**; 0.361**; (0.861); 0.675; 0.865; 1.282

Notes: N = 365, M = mean, SD = standard deviation, LA = learning agility, TP = task performance, CP = contextual performance, IWB = innovative work behavior, AVE = average variance extracted, CR = construct reliability, VIF = variance inflation factor, alpha level was 0.05 for this analysis, ** p < 0.01
Metric invariance. As the configural invariance test (Model 1) requirements were achieved, we conducted the metric invariance test (Van de Schoot et al., 2012). The model fit was significant at 369.338, and the CMIN/df value was 1.466, within the 1–3 standard range. The difference between the configural invariance model (Model 1) and the metric invariance model (Model 2) was not significant: Δχ²(12) = 13.391, p = 0.949. As shown in Table 9, there was no difference in fit between the two groups (Hong et al., 2003; Steenkamp and Baumgartner, 1998).
Scalar (intercept) invariance. Scalar invariance implies that subjects with the same values on the latent construct should have equal values on the observed variables (Meredith, 1993). The model fit was significant at 391.503, and the CMIN/df value was 1.450. As the RMSEA was less than the recommended standard of 0.08, the scalar invariance model (Model 3) was accepted. The χ² difference between the metric invariance and scalar invariance models was not significant, so scalar invariance was achieved: Δχ²(18) = 23.165 (Brown, 2014).
Factor variance invariance. The next step investigated population heterogeneity of the latent variables (Cheung and Rensvold, 2002). Factor variance invariance indicates that the within-group variability of the constructs is equivalent across groups (Vandenberg and Lance, 2000). The model fit was significant at 394.849. The NFI did not reach the recommended standard of 0.9, but the TLI and CFI exceeded it, and the RMSEA was less than the recommended criterion of 0.08 (Hu and Bentler, 1999). Therefore, the factor variance invariance model (Model 4) was accepted. The χ² difference between the scalar invariance and factor variance invariance models was not significant, so there was no difference in fit between the two groups: Δχ²(6) = 3.346.

Table 8. Results of linear regression

IV; DV; B; SE; β; t; p; R²
LA; TP; 0.687; 0.063; 0.498; 10.937; 0.000; 0.248
LA; CP; 0.621; 0.067; 0.437; 9.258; 0.000; 0.191
LA; IWB; 0.837; 0.062; 0.580; 13.577; 0.000; 0.337

Notes: N = 365, IV = independent variable, DV = dependent variable, B = unstandardized regression coefficient, SE = standard error, β = standardized regression coefficient, LA = learning agility, TP = task performance, CP = contextual performance, IWB = innovative work behavior

Table 9. Measurement invariance test results

Model; CMIN; df; CMIN/df; NFI; TLI; CFI; RMSEA
1 Configural invariance; 355.947; 240; 1.483; 0.790; 0.893; 0.916; 0.050
2 Metric invariance; 369.338; 252; 1.466; 0.782; 0.897; 0.915; 0.050
3 Scalar invariance; 391.503; 270; 1.450; 0.769; 0.901; 0.912; 0.049
4 Variance invariance; 394.849; 276; 1.431; 0.767; 0.905; 0.914; 0.048
5 Covariance invariance; 403.315; 291; 1.386; 0.762; 0.915; 0.919; 0.045

Notes: CMIN = minimum chi-square, df = degrees of freedom, NFI = normed fit index, TLI = Tucker–Lewis index, CFI = comparative fit index, RMSEA = root-mean-square error of approximation
Factor covariance invariance. As all four prior steps were achieved (configural, metric, scalar and factor variance invariance), we verified whether the factor covariances were identical across groups (Koh and Zumbo, 2008). The model fit was significant at 403.315. The χ² difference between the factor variance invariance model (Model 4) and the factor covariance invariance model (Model 5) was not significant. Therefore, factor covariance invariance was established: Δχ²(15) = 8.466.
Latent mean invariance. The last step was to assess group differences in the latent means. Equality tests of latent means are analogous to comparisons of experimental group means via t-tests or analysis of variance (Brown, 2014). The estimated latent means for the employee group were higher than for supervisors, except for adaptation to the job environment. None of the six factors differed significantly at the 95% confidence level (Table 10). For further discussion, Cohen's d was calculated to compare the magnitudes of the effects (Cohen, 1988). It ranged from 0.016 to 0.277, indicating that the differences between the general employee group and the supervisor group were minor across all six factors.
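Cohen's d here is the standardized difference between the two groups' latent means; with group sizes n_1, n_2 and standard deviations s_1, s_2, it uses the pooled standard deviation:

\[ d = \frac{\bar{X}_{\text{employees}} - \bar{X}_{\text{supervisors}}}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} \]

Values near 0.2 are conventionally small (Cohen, 1988), consistent with the minor group differences reported here.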

Single-factor test results


We conducted Harman's single-factor test to check whether CMV occurred. CMV is a concern only when a single factor accounts for more than 50% of the total variance, which was not the case in the present study. Moreover, the single-factor model fit poorly, χ² = 510.201 (p = 0.001), with the other fit indices also outside acceptable ranges (Table 11), further indicating that a single common factor does not account for the data.
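A minimal sketch of Harman's single-factor check, reusing the hypothetical response DataFrame from the earlier examples (the test simply forces one unrotated factor and inspects its share of variance):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

df = pd.read_csv("learning_agility_responses.csv")  # hypothetical file

# Extract a single unrotated factor from all items
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(df)
_, proportion, _ = fa.get_factor_variance()

# CMV is flagged only if one factor explains more than half the total variance
print(f"First factor explains {proportion[0]:.1%} of total variance")
```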

Table 10. Latent mean invariance test results

Factor; Supervisors; Employees; SE; CR; p; Variance; Effect size
Self-directed learning; 0; 0.098; 0.106; 0.922; 0.357; 0.269; 0.131
Seeking constructive feedback; 0; 0.038; 0.119; 0.319; 0.750; 0.242; 0.020
Critical reflection; 0; 0.121; 0.091; 1.322; 0.186; 0.343; 0.277
Challenging experience; 0; 0.078; 0.074; 1.064; 0.287; 0.150; 0.134
Rational problem-solving; 0; 0.048; 0.077; 0.618; 0.536; 0.188; 0.016
Adaptation to the job environment; 0; 0.101; 0.092; 1.1; 0.271; 0.577; 0.055

Table 11. Fit of the single-factor model

χ²; df; χ²/df; GFI; CFI; AGFI; NNFI; RMSEA; SRMR
510.201; 135; 3.779; 0.758; 0.716; 0.694; 0.868; 0.121; 0.090

Notes: df = degrees of freedom, GFI = goodness-of-fit index, CFI = comparative fit index, AGFI = adjusted goodness-of-fit index, NNFI = non-normed fit index, RMSEA = root-mean-square error of approximation, SRMR = standardized root mean square residual

Conclusions
Findings
The results of this study can be summarized as follows. First, we derived six subfactors (self-directed learning, seeking constructive feedback, critical reflection, challenging
experience, rational problem-solving and adaptation to the job environment) based on empirical analyses of three characteristics of learning agility (individual experience, transformation and adaptation).
Second, to develop a measurement of learning agility, we conducted online and offline
surveys and analyzed the findings using assessments including EFA, CFA, MI, Rasch
analysis and single-factor tests. The overall reliability of the learning agility measurement
had a high internal consistency of 0.893. We focused on validity based on (a) test content, (b) internal structure, (c) external criteria and (d) generalization, following Messick (1995), and demonstrated significant construct validity. Additionally, we used Harman's single-factor test to check for the occurrence of CMV. The analysis indicated that CMV did not occur in the present study.
Third, we conducted an MI test and verified that the measurement can be applied across employee groups, regardless of position. We confirmed that learning agility does not differ according to organizational position. Therefore, organizational practitioners may use learning agility as a criterion for the selection and placement of employees across the organization, as well as for human resource capacity development.
Finally, we found that our learning agility measurement positively predicts organizational performance: task performance (β = 0.498), contextual performance (β = 0.437) and innovative work behavior (β = 0.580). These results agree with previous research, which found that learning agility is related to the potential for both current and long-term outcomes, as well as to the ability to apply what has been learned through personal experience in new environments.

Implications for research and human resource development practice


As changes in the business environment accelerate, the knowledge and skills required for
job performance are also rapidly changing. This underscores the importance of rapid
learning. However, it is even more important to know what knowledge is required and to use
it in applied settings rather than to learn it quickly based on cognitive learning agility.
First, learning agility has been viewed as a capacity that characterizes leaders, but it can
be extended to assessments of capabilities among general organizational members. We
confirmed that learning agility does not differ according to position. Based on our results,
the factors that constitute learning agility can be discussed in relation to individual learning
motivation and organizational culture. Therefore, organizational practitioners should
facilitate formal learning and knowledge sharing.
Second, our results confirmed that the adaptation to the job environment factor accounted for 11.541% of the total variance. This suggests that learning agility can be developed through education and training supported by the organizational environment, rather than being limited by inherent intellectual traits such as IQ. Understanding learning agility can inform education and training programs that allow organizational members to perform their duties better by developing learning agility and positively influencing organizational performance. Specifically, HRD experts suggest building a culture that provides active feedback on member performance.
Finally, we conducted Rasch analysis and MI tests to develop a measurement of learning
agility. The Rasch model is derived from IRT and has been used to evaluate the adequacy of
item discrimination and item difficulty (Rasch, 1960). It has recently attracted attention in
evaluation tool development (Bond and Fox, 2001). Using the MI test, we confirmed that the
learning agility measurement tool developed in this study could be applied equally to
leaders and employees. The item development process applied in this study provides useful
suggestions to researchers who develop and validate measurement tools.
Limitations and future research
Despite the useful implications of our results, the present study has several limitations. First, our results need to be validated before generalization to different cultural and national contexts, as the present findings reflect the Korean context. Learning agility has been actively studied in western cultures since the 2000s but in the Korean context only since 2010. Therefore, in future research, it will be necessary to determine whether our measurement technique applies to non-Korean companies and whether results differ according to the specific cultural context or country.
Second, it is necessary to study the individual and organizational dimensions that affect
learning agility. We confirmed the predictive validity of learning agility by using learning
agility as the independent variable and organizational performance as the dependent
variable. Thus, it is necessary to investigate how the organizational environment is related
to learning agility development.
Third, further research should be conducted to reflect differences in industry types and organization sizes. In the present study, we did not directly examine this issue, as the sample size was insufficient. As learning agility occurs in the course of performing tasks in the workplace, it is closely related to job duties. Therefore, future work should select samples that can reveal organizational personality and job characteristics.

References
Anderson, J.C. and Gerbing, D.W. (1988), “Structural equation modeling in practice: a review and
recommended two-step approach”, Psychological Bulletin, Vol. 103 No. 3, pp. 411-423.
Barney, J.B. and Clark, D.N. (2007), Resource-Based Theory: Creating and Sustaining Competitive
Advantage, Oxford University Press.
Bedford, C.L. (2011), “The role of learning agility in workplace performance and career advancement”,
Unpublished doctoral dissertation, University of Minnesota, [S.l.], available at: www.riss.kr/link?
id=T12725161
Borman, W.C. and Motowidlo, S.J. (1993), “Expanding the criterion domain to include elements of
contextual performance”, in Schmitt, N. and Borman, W.C. (Eds), Personnel Selection in
Organizations, Jossey-Bass.
Burke, W.W., Roloff, K.S. and Mitchinson, A. (2016), “Learning agility: a new model and measure”,
Working Paper.
Bond, T.G. and Fox, C.M. (2001), Applying the Rasch Model: Fundamental Measurement in the Human
Sciences, Lawrence Erlbaum.
Brown, T.A. (2014), Confirmatory Factor Analysis for Applied Research, Guilford.
Cheung, G.W. and Rensvold, R.B. (2002), “Evaluating goodness-of-fit indexes for testing measurement
invariance”, Structural Equation Modeling: A Multidisciplinary Journal, Vol. 9 No. 2, pp. 233-255,
doi: 10.1207/S15328007SEM0902_5.
Cohen, J. (1988), Statistical Power Analysis for the Behavioral Sciences, 2nd ed., Erlbaum.
Colquitt, J.A., Lepine, J.A.,Wesson, M.J. and Gellatly, I.R. (2011), Organizational Behavior: Improving
Performance and Commitment in the Workplace (5th ed.), McGraw-Hill Irwin.
Corporate Leadership Council (2005), Realizing the Full Potential of Rising Talent, Corporate Executive Board.
De Meuse, K.P. (2017), “Learning agility: its evolution as a psychological construct and its empirical relationship to leader success”, Consulting Psychology Journal: Practice and Research, Vol. 69 No. 4, pp. 267-295, doi: 10.1037/cpb0000100.
De Meuse, K.P., Dai, G. and Hallenbeck, G.S. (2010), “Learning agility: a construct whose time has come”, Consulting Psychology Journal: Practice and Research, Vol. 62 No. 2, pp. 119-130, doi: 10.1037/a0019988.
De Meuse, K.P., Dai, G., Swisher, V.V., Eichinger, R.W. and Lombardo, M.M. (2012), “Leadership development: exploring, clarifying, and expanding our understanding of learning agility”, Industrial and Organizational Psychology, Vol. 5 No. 3, pp. 280-286, doi: 10.1111/j.1754-9434.2012.01445.x.
DeRue, D.S., Ashford, S.J. and Myers, C.G. (2012a), “Learning agility: in search of conceptual clarity and
theoretical grounding”, Industrial and Organizational Psychology, Vol. 5 No. 3, pp. 258-279, doi:
10.1111/j.1754-9434.2012.01444.x.
DeVellis, R.F. (2016), Scale Development: Theory and Applications, 4th ed., Sage publications, Inc.
Eichinger, R.W., Lombardo, M.M. and Capretta, C.C. (2010), FYI for Learning Agility, Lominger
International: A Korn/Ferry Company.
Fehring, R.J. (1987), “Methods to validate nursing diagnoses”, Heart and Lung: The Journal of Critical
Care, Vol. 16 No. 6 Pt 1, pp. 625-629.
Fornell, C. and Larcker, D.F. (1981), “Evaluating structural equation models with unobservable
variables and measurement error”, Journal of Marketing Research, Vol. 18 No. 1, pp. 39-50.
Garvin, D.A., Edmondson, A.C. and Gino, F. (2008), “Is yours a learning organization?”, Harvard
Business Review, Vol. 86 No. 3, pp. 109-120.
Gravett, L.S. and Caldwell, S.A. (2016), Learning Agility, Springer, doi: 10.1057/978-1-137-59965-0.
Hair, J.F., Jr, Hult, G.T.M., Ringle, C. and Sarstedt, M. (2016), A Primer on Partial Least Squares
Structural Equation Modeling (PLS-SEM), 2nd ed., Sage publications, Inc.
Hair, J.F., Black, W.C., Babin, B.J., Anderson, R.E. and Tatham, R.L. (1998), Multivariate Data Analysis,
5th ed., Prentice hall, Upper Saddle River, NJ.
Hong, S., Malik, M.L. and Lee, M.-K. (2003), “Testing configural, metric, scalar, and latent mean invariance
across genders in sociotropy and autonomy using a non-western sample”, Educational and Psychological
Measurement, Vol. 63 No. 4, pp. 636-654, doi: 10.1177/0013164403251332.
Hu, L. T. and Bentler, P.M. (1999), “Cutoff criteria for fit indexes in covariance structure analysis:
conventional criteria versus new alternatives”, Structural Equation Modeling: A
Multidisciplinary Journal, Vol. 6 No. 1, pp. 1-55, doi: 10.1080/10705519909540118.
Im, C., Wee, Y. and Lee, H. (2017), “A study on the development of the learning agility scale”, The
Korean Journal of Human Resource Development Quarterly, Vol. 19 No. 2, pp. 81-108.
Kember, D., Leung, D.Y., Jones, A., Loke, A.Y., McKay, J., Sinclair, K. and Wong, M. (2000),
“Development of a questionnaire to measure the level of reflective thinking”, Assessment and
Evaluation in Higher Education, Vol. 25 No. 4, pp. 381-395, doi: 10.1080/713611442.
Kline, R.B. (2015), Principles and Practice of Structural Equation Modeling, 4th ed., Guilford
publications, New York, NY.
Kim, D.Y. and Yoo, T.Y. (2002), “The relationships between the Big Five personality factors and
contextual performance in work organizations”, Korean Journal of Industrial and Organizational
Psychology, Vol. 15 No. 2, pp. 1-24.
Koh, K.H. and Zumbo, B.D. (2008), “Multi-group confirmatory factor analysis for testing measurement
invariance in mixed item format data”, Journal of Modern Applied Statistical Methods, Vol. 7
No. 2, pp. 471-477, doi: 10.22237/jmasm/1225512660.
Lee, J. and Song, J.H. (2020), “Developing a conceptual integrated model for the employee’s learning
agility”, Performance Improvement Quarterly, Online first, doi: 10.1002/piq.21352.
Lee, C. and Yoo, T. (2016), “The effect of personality on task performance and adaptive performance: the mediating effect of job crafting and the moderating effect of leader’s empowering behavior”, Korean Journal of Industrial and Organizational Psychology, Vol. 29 No. 4, pp. 607-630.
Lombardo, M.M. and Eichinger, R.W. (2000), “High potentials as high learners”, Human Resource
Management, Vol. 39 No. 4, pp. 321-329, doi: 10.1002/1099-050X(200024)39:4<321::AID-
HRM4>3.0.CO;2-1.
Meredith, W. (1993), “Measurement invariance, factor analysis and factorial invariance”, Psychometrika, Vol. 58 No. 4, pp. 525-543.
Messick, S. (1995), “Validity of psychological assessment: validation of inferences from persons’
responses and performances as scientific inquiry into score meaning”, American Psychologist,
Vol. 50 No. 9, pp. 741-749, doi: 10.1037/0003-066X.50.9.741.
Mitchinson, A. and Morris, R. (2012), “Learning about learning agility”, White Paper, Center for Creative Leadership.
Motowildo, S.J., Borman, W.C. and Schmit, M.J. (1997), “A theory of individual differences in task and
contextual performance”, Human Performance, Vol. 10 No. 2, pp. 71-83, doi: 10.1207/
s15327043hup1002_1.
Osborne, J.W., Costello, A.B. and Kellow, J.T. (2014), Best Practices in Exploratory Factor Analysis,
CreateSpace Independent Publishing Platform, Louisville, KY.
Ployhart, R.E. and Bliese, P.D. (2006), “Individual adaptability (I-ADAPT) theory: conceptualizing the
antecedents, consequences, and measurement of individual differences in adaptability”, in Salas, E. (Ed.),
Understanding Adaptability: A Prerequisite for Effective Performance within Complex Environments,
Emerald Group Publishing Limited, Oxford, pp. 3-39, doi: 10.1016/S1479-3601(05)06001-7.
Podsakoff, P.M., MacKenzie, S.B., Lee, J.-Y. and Podsakoff, N.P. (2003), “Common method biases in
behavioral research: a critical review of the literature and recommended remedies”, Journal of
Applied Psychology, Vol. 88 No. 5, pp. 879-903, doi: 10.1037/0021-9010.88.5.879.
Pulakos, E.D., Arad, S., Donovan, M.A. and Plamondon, K.E. (2000), “Adaptability in the workplace:
development of a taxonomy of adaptive performance”, Journal of Applied Psychology, Vol. 85
No. 4, pp. 612-624, doi: 10.1037/0021-9010.85.4.612.
Rasch, G. (1960), Studies in Mathematical Psychology: I. Probabilistic Models for Some Intelligence and Attainment Tests, Nielsen and Lydiche, Copenhagen.
Ryu, H. and Oh, H. (2016), “Learning agility: issues and challenges”, The Korean Journal of Human
Resource Development Quarterly, Vol. 18 No. 4, pp. 119-147.
Scott, S.G. and Bruce, R.A. (1994), “Determinants of innovative behavior: a path model of individual innovation
in the workplace”, Academy of Management Journal, Vol. 37 No. 3, pp. 580-607, doi: 10.2307/256701.
Smith, B.C. (2015), “How does learning agile business leadership differ? Exploring a revised model of
the construct of learning agility in relation to executive performance”, Unpublished doctoral
dissertation, Columbia University, [S.l.], available at: www.riss.kr/link?id=T14028784
Steenkamp, J.-B.E. and Baumgartner, H. (1998), “Assessing measurement invariance in cross-national
consumer research”, Journal of Consumer Research, Vol. 25 No. 1, pp. 78-90, doi: 10.1086/209528.
Steinmetz, H., Schmidt, P., Tina-Booh, A., Wieczorek, S. and Schwartz, S.H. (2009), “Testing measurement
invariance using multigroup CFA: differences between educational groups in human values
measurement”, Quality and Quantity, Vol. 43 No. 4, pp. 599-616, doi: 10.1007/s11135-007-9143-x.
Thompson, B. (2004), Exploratory and Confirmatory Factor Analysis: Understanding Concepts and
Applications, American Psychological Association, Washington, DC.
Van de Schoot, R., Lugtig, P. and Hox, J. (2012), “A checklist for testing measurement invariance”, European
Journal of Developmental Psychology, Vol. 9 No. 4, pp. 486-492, doi: 10.1080/17405629.2012.686740.
Vandenberg, R.J. and Lance, C.E. (2000), “A review and synthesis of the measurement invariance
literature: suggestions, practices, and recommendations for organizational research”,
Organizational Research Methods, Vol. 3 No. 1, pp. 4-70, doi: 10.1177/109442810031002.
Whetten, D.A. (1989), “What constitutes a theoretical contribution?”, Academy of Management Review, Vol. 14 No. 4, pp. 490-495, doi: 10.5465/AMR.1989.4308371.
Williams, B., Onsman, A. and Brown, T. (2010), “Exploratory factor analysis: a five-step guide for novices”, Australasian Journal of Paramedicine, Vol. 8 No. 3, pp. 1-13.

Further reading
Argyris, C. and Schön, D. (1984), “Theories of action, double loop learning and organizational learning”,
available at: www.infed.org/thinkers/argyris.htm (accessed 23 September 2019).
Connolly, J. (2001), “Assessing the construct validity of a measure of learning agility”, Unpublished
doctoral dissertation, Florida International University, doi: 10.25148/etd.FI14060893.
Cyert, R.M. and March, J.G. (1963), A Behavioral Theory of the Firm, Prentice Hall.
De Meuse, K.P. (2019), “A meta-analysis of the relationship between learning agility and leader
success”, Journal of Organizational Psychology, Vol. 19 No. 1, pp. 25-34.
DeRue, D.S. and Ashford, S.J. (2010), “Who will lead and who will follow? A social process of leadership
identity construction in organizations”, Academy of Management Review, Vol. 35 No. 4,
pp. 627-647, doi: 10.5465/AMR.2010.53503267.
DeRue, D.S., Ashford, S.J. and Myers, C.G. (2012b), “Learning agility: many questions, a few answers,
and a path forward”, Industrial and Organizational Psychology, Vol. 5 No. 3, pp. 316-322, doi:
10.1111/j.1754-9434.2012.01465.x.
Hallenbeck, G.S. (2016), Learning Agility: Unlock the Lesson of Experience, Center for Creative
Leadership.
Hoff, D.F. and Burke, W.W. (2017), Learning Agility: The Key to Leader Potential, Tulsa, OK.
Lombardo, M.M. and Eichinger, R.W. (1994), Learning Agility: The Learning Architect User’s Manual,
Lominger Inc.
McCauley, C.D. (2001), “Leader training and development”, in Zaccaro, S.J. and Klimoski, R.J. (Eds), The
Jossey-Bass Business and Management Series. The Nature of Organizational Leadership:
Understanding the Performance Imperatives Confronting Today’s Leaders, Jossey-Bass,
pp. 347-383.
Merriam, S.B., Caffarella, R.S. and Baumgartner, L.M. (2012), Learning in Adulthood: A Comprehensive
Guide, 4th ed., Jossey-Bass.
Sessa, V.I. and London, M. (2015), Continuous Learning in Organizations: Individual, Group, and
Organizational Perspectives, Lawrence Erlbaum, doi: 10.1111/j.1744-6570.2008.00119_6.x.
Torraco, R.J. (2016), “Writing integrative literature reviews: using the past and present to explore the
future”, Human Resource Development Review, Vol. 15 No. 4, pp. 404-428.
Wang, S. and Beier, M.E. (2012), “Learning agility: not much is new”, Industrial and Organizational
Psychology, Vol. 5 No. 3, pp. 293-296, doi: 10.1111/j.1754-9434.2012.01448.x.

Corresponding author
Ji Hoon Song can be contacted at: [email protected]
