Abstract
Purpose – This study aims to reveal the role of artificial intelligence (AI) in the context of a front-line service encounter to understand how users accept AI technology-enabled service.
Design/methodology/approach – This study collected data from 454 Korean employees through an online survey and used hierarchical regression to test the hypotheses empirically.
Findings – First, clarity of the user's and the AI's roles, the user's motivation to adopt AI-based technology and the user's ability in the context of adopting AI-based technology increase the willingness to accept AI technology. Second, privacy concerns related to the use of AI-based technology weaken the relationship between role clarity and the user's willingness to accept AI technology, while trust related to the use of AI-based technology strengthens the relationship between ability and the user's willingness to accept AI technology.
Originality/value – This study is the first to reveal the role of AI in the context of a front-line service encounter to understand how users accept AI technology-enabled service.
Keywords Artificial intelligence, Clarity of role, Motivation, Ability, Willingness to accept AI technology
Paper type Research paper
Received 16 June 2020; Revised 8 August 2020, 22 August 2020, 28 September 2020, 18 October 2020, 28 November 2020; Accepted 1 December 2020
1. Introduction
Employee self-service (ESS) technology is currently an open innovation of particular interest
in the human resource management context because of anticipated cost savings and other
efficiency-related benefits (Giovanis et al., 2019; van Tonder et al., 2020). It is a class of web-
based technology that allows employees and managers to conduct much of their own data
management and transaction processing rather than relying on human resource (HR) or
administrative staff to perform these duties (Marler and Dulebohn, 2005). ESS technology can
allow employees to update personal information, change their benefits selections or register
for training. Shifting such duties to the individual employee enables the organization to
devote fewer specialized resources to these activities, often allowing HR to focus on more
strategic functions. Despite the intended benefits, the implementation of ESS technology
poses many challenges. Because ESS technology functionality is typically not associated
with the core functions of professional employees’ jobs, these employees may be less
motivated to learn and use the ESS technology (Brown, 2003; Marler and Dulebohn, 2005).
However, the full adoption of ESS technology is necessary to realize the intended benefits and
recoup the significant investments in technology. The history of technology has shown that
there is much hype about new technologies, and after the initial inflated expectations, the
trough of disillusionment usually follows (Gartner, 2016). Due to trade press and social media
posts extolling the virtues of new technologies, managers are keen to jump on a new
technology rollercoaster and adopt technological solutions without considering whether they
are worth the effort and justify their mystique/novelty.
© Youngkeun Choi. Published in European Journal of Management and Business Economics. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode

European Journal of Management and Business Economics, Emerald Publishing Limited. e-ISSN: 2444-8494; p-ISSN: 2444-8451. DOI 10.1108/EJMBE-06-2020-0158
Artificial intelligence (AI) is an example of a technology that receives much attention worldwide in the media, academia and politics (Zhai et al., 2020; Dhamija and Bag, 2020). However, attitudes toward AI range from a positive assessment of reduced human physical labor and new business opportunities (Frank et al., 2017) to a fear of humans becoming obsolete in a fully robotic society (Leonhard, 2016). Therefore, it is essential to understand the determinants of AI-based ESS acceptance to increase the chances of success when introducing AI-based ESS. However, few researchers have examined how employees adopt AI-based ESS.
To address this research gap, this study takes a closer look at employees' perspectives on how and why they embrace a narrow, business-focused AI application when service occurs.
Therefore, this study presents a conceptual framework based on previous reviews, practices
and theories to identify the role of AI in the context of service encounters and explain the
employee acceptance of AI in service research. This framework extends a range of AI beyond
conventional configuration and self-service technology acceptance theories to include AI-
specific variables such as privacy concerns and trust. A process model, organizing salient
variables contributing to employee reaction to the introduction of technology to the service
encounter, is proposed, and hypotheses testing the relationships between and among these
variables are developed. This study concludes with research issues related to the framework
that serve as catalysts for future research. It is the first study to reveal the role of AI in a front-line service encounter to understand how users accept services based on AI technology.
3. Methodology
3.1 Sample and data collection
This study adopted an online survey method using convenience sampling for data collection. This approach is useful for collecting data from a large number of individuals in a relatively short time and at a relatively low cost. The survey company approached the target companies and, with their agreement, acquired employees' email addresses through the companies' human resource management departments.
The professional survey company initially contacted 11 employees in the target
companies in Korea. Each first-level contact (or “sampling seed”) was asked to forward the
invitation email to their colleagues at their organization and to ask those recipients also to
send the email to other staff. The potential maximum number of recipients could be assumed
to include all employees of the target companies, which numbered over 500 at that time. The
seeds of this respondent-driven sampling method (also known as snowball sampling) were
diverse in demographic characteristics. However, this method has been challenged because of possible self-selection bias, or bias that may arise when the topic of the survey is controversial or when differences in the size of social networks are a factor. None of these reported biases was deemed to apply to the focus of the present study.
According to social research methodology, the response rate is less critical as long as the representativeness of the sample selection is secured, although some prerequisites apply. Since this study used a snowball method, the survey was designed to close once 500 people, about 3% of the target companies' employees, had responded. This target was considered reasonable given the survey budget and sample size.
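To make the stopping rule concrete, the sketch below simulates this kind of respondent-driven sampling in Python. The seed count and the 500-response target come from the text; the fan-out and forwarding probability are purely illustrative assumptions.

```python
import random

# A minimal sketch of the respondent-driven (snowball) sampling procedure
# described above. The forwarding probability and fan-out are illustrative
# assumptions, not values reported in the study.
SEEDS = 11                # first-level contacts
TARGET = 500              # survey closes at 500 responses (~3% of employees)

def simulate_snowball(seeds: int, target: int,
                      fanout: int = 5, forward_p: float = 0.4) -> int:
    """Count responses accumulated over forwarding waves until the target."""
    random.seed(1)
    wave, responses = seeds, 0
    while wave > 0 and responses < target:
        responses += wave                     # assume everyone in the wave responds
        invited = wave * fanout               # emails forwarded to colleagues
        wave = sum(random.random() < forward_p for _ in range(invited))
    return min(responses, target)             # stop rule: close at the target

print(simulate_snowball(SEEDS, TARGET))
```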
To increase the response rate and reduce non-response bias, the professional survey company automatically sent respondents an electronic coffee-voucher gift card upon completion of the survey, which ran for one month from January 1 to 31, 2019. All participants received an email explaining the purpose of the survey, emphasizing voluntary participation, assuring confidentiality and linking to the online survey. Upon completing the survey, participants received the electronic coffee-voucher gift card as a token of participation. Of the initial pool of participants surveyed, 500 individuals returned completed surveys, yielding a response rate of 100%. After deleting surveys with (1) no code identifiers or (2) an excessive number of missing cases, this study was left with a final sample of 454.
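A minimal pandas sketch of the two exclusion rules is shown below; the file name, the code-identifier column and the missingness threshold are assumptions, since the text does not specify them.

```python
import pandas as pd

# Illustrative sketch of the two exclusion rules above. The file name,
# the "code_id" column and the 20% missingness threshold are assumptions.
raw = pd.read_csv("survey_responses.csv")      # 500 completed surveys

has_code = raw["code_id"].notna()              # (1) drop surveys with no code identifier
missing_share = raw.isna().mean(axis=1)        # share of missing answers per respondent
few_missing = missing_share <= 0.20            # (2) drop excessive missing cases

final = raw[has_code & few_missing]
print(len(final))                              # the study's final sample was 454
```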
The participants are Korean, comprising men (47.6%) and women (52.4%). Their ages span the 20s (24.1%), 30s (25.7%), 40s (25.4%) and 50s (24.8%). Marital status includes unmarried (41.2%) and married (48.8%). Occupations include office work (66.8%) and research and development (33.2%). Education levels include middle school (0.6%), high school (16.3%), community college (21.0%), undergraduate (51.4%) and graduate school (10.7%). Annual income includes under 30,000 USD (27.1%), 30,000–50,000 USD (46.3%) and 50,000–100,000 USD (26.6%).
4. Analysis results
4.1 Verification of reliability and validity
The validity of the variables was verified through principal components factor analysis with varimax rotation. The criterion for determining the number of factors was an eigenvalue above 1.0. This study retained items for analysis only if the factor loading was greater than 0.5 (the factor loading represents the correlation between an item and its factor). The reliability of the variables was judged by internal consistency, as assessed by Cronbach's alpha. Each set of survey items was regarded as one measure only if its Cronbach's alpha value was 0.7 or higher: role clarity (0.86), extrinsic motivation (0.77), intrinsic motivation (0.81), ability (0.80), privacy concerns (0.74), trust (0.79) and willingness to accept AI technology (0.79).
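As a hedged illustration of these criteria, the following Python sketch shows how the eigenvalue > 1.0 retention rule and Cronbach's alpha could be computed for a hypothetical respondents-by-items DataFrame; the item names are assumptions.

```python
import numpy as np
import pandas as pd

# Sketch of the retention and reliability criteria described above, for a
# hypothetical respondents-by-items DataFrame `items`.
def n_factors_eigenvalue_rule(items: pd.DataFrame) -> int:
    """Number of factors retained under the eigenvalue > 1.0 criterion."""
    eigvals = np.linalg.eigvalsh(items.corr().to_numpy())
    return int((eigvals > 1.0).sum())

def cronbach_alpha(scale: pd.DataFrame) -> float:
    """Internal consistency of the items belonging to one scale."""
    k = scale.shape[1]
    item_var_sum = scale.var(axis=0, ddof=1).sum()
    total_var = scale.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

# A scale is kept as a single measure only if its alpha is 0.7 or higher,
# e.g. cronbach_alpha(items[["trust_1", "trust_2", "trust_3"]]) >= 0.7
```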
4.2 Common method bias
As with all self-reported data, there is the potential for common method variance (CMV) (MacKenzie and Podsakoff, 2012; Podsakoff et al., 2003). To alleviate and assess the magnitude of common method bias, this study adopted several procedural and statistical remedies suggested by Podsakoff et al. (2003). First, during the survey, respondents were guaranteed anonymity and confidentiality to reduce evaluation apprehension.
Further, this study paid careful attention to the wording of the items and developed the
questionnaire carefully to minimize the item ambiguity. These procedures would make them
less likely to edit their responses to be more socially desirable, acquiescent and consistent
with how they think the researcher wants them to respond when answering the questionnaire
(Podsakoff et al., 2003). Second, this study conducted Harman's one-factor test on all of the items. A principal component factor analysis revealed that the first factor explained only 34.1% of the variance. Thus, no single factor emerged, nor did one factor account for most of the variance.
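A minimal sketch of Harman's one-factor test is given below; it assumes a hypothetical `items` DataFrame containing all survey items and simply reports the share of variance captured by the first unrotated factor.

```python
import numpy as np
import pandas as pd

# Harman's one-factor test as applied above: if a single unrotated factor
# explains most of the variance, common method bias is a concern.
def first_factor_share(items: pd.DataFrame) -> float:
    eigvals = np.linalg.eigvalsh(items.corr().to_numpy())
    return float(eigvals.max() / eigvals.sum())   # share explained by factor 1

# The study reports a first-factor share of 34.1%, i.e. roughly 0.341,
# well below the level at which a single factor would dominate.
```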
Furthermore, the measurement model was reassessed with the addition of a latent CMV factor (Podsakoff et al., 2003). All indicator variables in the measurement model were loaded on this factor. The addition of the common variance factor did not improve the fit over the measurement model without it, and all indicators remained significant. These results suggest that CMV is not a major concern in this study.
                                          1        2        3        4        5        6
1. Role clarity                           1
2. Extrinsic motivation                   0.021    1
3. Intrinsic motivation                   0.012    0.024    1
4. Ability                                0.046    0.106    0.032    1
5. Privacy concerns                       0.043    0.011    0.088    0.032    1
6. Trust                                  0.026    0.061    0.042    0.057    0.051    1
7. Willingness to accept AI technology    0.021**  0.011**  0.012**  0.012**  0.111**  0.042**

Table 1. Variables' correlation coefficients
Note(s): *p < 0.05, **p < 0.01
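For illustration, a correlation matrix with significance stars like Table 1 could be produced as follows; `data` is a hypothetical DataFrame with one column per composite variable, not the study's actual data.

```python
import pandas as pd
from scipy.stats import pearsonr

# Sketch of a pairwise Pearson correlation matrix with significance flags,
# mirroring the layout of Table 1.
def corr_with_stars(data: pd.DataFrame) -> pd.DataFrame:
    out = pd.DataFrame(index=data.columns, columns=data.columns, dtype=object)
    for a in data.columns:
        for b in data.columns:
            r, p = pearsonr(data[a], data[b])
            stars = "**" if p < 0.01 else ("*" if p < 0.05 else "")
            out.loc[a, b] = f"{r:.3f}{stars}"
    return out
```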
Table 2. Hierarchical regression results (models 1–3) for willingness to accept AI technology

Regarding the relationships between the independent variables and the willingness to accept AI technology, model 2 in Table 2 shows that the independent variables have statistically significant relationships with willingness to accept AI technology. Role clarity (β = 0.031, p < 0.01) is positively related to willingness to accept AI technology. Extrinsic motivation (β = 0.019, p < 0.01) and intrinsic motivation (β = 0.008, p < 0.01) have positive relationships with willingness to accept AI technology. Ability (β = 0.017, p < 0.01) shows a positive association with willingness to accept AI technology. Therefore, P1–P3 are supported.
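The sketch below illustrates the hierarchical structure of this analysis in Python with statsmodels; all column names, and the control variables entered in model 1, are assumptions about how the data might be coded, not the study's actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hedged sketch of the hierarchical regression: model 1 enters controls,
# model 2 adds the four independent variables. Column names are assumed.
df = pd.read_csv("final_sample.csv")           # n = 454

m1 = smf.ols("willingness ~ age + gender + education", data=df).fit()
m2 = smf.ols(
    "willingness ~ age + gender + education"
    " + role_clarity + extrinsic_motivation + intrinsic_motivation + ability",
    data=df,
).fit()
print(m2.params["role_clarity"], m2.pvalues["role_clarity"])  # beta and p-value
```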
Lastly, model 3, which adds the moderators, shows the interactions between the independent variables and the moderating variables on willingness to accept AI technology. Privacy concerns were found to weaken the relationship between role clarity and willingness to accept AI technology (β = 0.063, p < 0.05). Privacy concerns showed no significant moderation of the relationships between the other independent variables and willingness to accept AI technology. Trust was found to strengthen the relationship between ability and willingness to accept AI technology (β = 0.041, p < 0.05). Trust showed no significant moderation of the relationships between the other independent variables and willingness to accept AI technology. Therefore, P4 and P5 are partially supported (see Figure 1).
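A hedged sketch of model 3, using mean-centred product terms as in standard moderated regression, follows; again, the column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Sketch of model 3: mean-centred predictors and moderators with product
# terms, the standard way to test the moderation effects reported above.
df = pd.read_csv("final_sample.csv")
for col in ["role_clarity", "ability", "privacy_concerns", "trust"]:
    df[col + "_c"] = df[col] - df[col].mean()  # centring eases interpretation

m3 = smf.ols(
    "willingness ~ role_clarity_c * privacy_concerns_c + ability_c * trust_c",
    data=df,
).fit()
# A significant negative role_clarity_c:privacy_concerns_c term corresponds
# to the weakening effect, and a significant positive ability_c:trust_c term
# to the strengthening effect plotted at low/high moderator levels in Figure 1.
print(m3.summary())
```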
5. Discussion
The purpose of this study was to examine employee acceptance of AI and to explore the effect of AI-specific moderators on that process. The results show that clarity of the user's and the AI's roles, the user's motivation to adopt AI-based technology and the user's ability in the context of adopting AI-based technology increase the willingness to accept AI technology. The results also show that privacy concerns related to the use of AI-based technology weaken the relationship between role clarity and the user's willingness to accept AI technology, and that trust pertaining to the use of AI-based technology strengthens the relationship between ability and the user's willingness to accept AI technology.

Figure 1. Interaction effects: willingness to accept AI technology (2.60–3.00) plotted across low/med/high levels, with low vs high moderator lines, for the role clarity × privacy concerns and ability × trust interactions
Relevant studies have shown that privacy considerations and awareness of privacy risks reduce users' willingness to use personalized services, although the value of personalized services may outweigh privacy concerns (Awad and Krishnan, 2006). According to a study by Lee and Rha (2016) on location-based mobile commerce, increasing confidence in service providers can help alleviate users' awareness of privacy risks. Accordingly, this study proposed that privacy concern is an essential factor affecting user acceptance of AI-based technologies. The results show that privacy concerns related to the use of AI-based technology weaken only the relationship between role clarity and the user's willingness to accept AI technology; they do not moderate the relationships between the other independent variables and the user's willingness to accept AI technology. These results suggest that privacy concerns relate to the functional process of using AI devices, and the user's and the AI's roles in using AI devices lie within that functional process.
According to Lee and See (2004), trust bridges the gap between the nature of automation, the individual's belief in its function, and the individual's intention to use and rely on it. Concerning e-commerce, Pavlou (2003) distinguishes between two aspects: trust in the supplier and trust in the trading medium. This differentiation also applies in the context of AI-supported service encounters. This study proposed that trust in service providers and in specific AI technologies contributes to user confidence in AI-supported services. The results show that trust related to the use of AI-based technology strengthens only the relationship between ability and the user's willingness to accept AI technology; trust does not moderate the relationships between the other independent variables and the user's willingness to accept AI technology. These results suggest that trust relates to the psychological judgment involved in using AI devices, and the user's ability in the context of adopting AI-based technology lies within that psychological assessment.
6. Conclusion
For its research contribution, first, this study is the first to reveal the role of AI in the context of a front-line service encounter in order to understand how users accept AI technology-enabled service. Despite its growing practical importance, there are few quantitative studies on the individual factors that affect employees' willingness to accept AI technology. This study focused directly on the individual factors of participants and, in particular, proposed a model that integrates individual factors rather than identifying fragmentary ones. Although these individual factors may not always coexist and may even conflict, this study showed that they can coexist in the context of AI use. This study revealed that people who use AI pursue the individual role, motivation and ability related to AI. Second, this study is the first to examine AI-specific moderators. The results indicate that privacy concerns are associated with the functional process of using AI devices, and the user's and the AI's roles in using AI devices lie within that functional process. The results also indicate that trust is related to the psychological judgment involved in using AI devices, and the user's ability in the context of adopting AI-based technology lies within that psychological assessment.
For practical implications, first, the results of this study show that individual factors such as role, motivation and ability are important for enhancing the acceptance of AI. Therefore, AI device developers need to ensure that AI users perceive a high level of role clarity, motivation and ability, for example, through user interfaces that make the user's role in the interaction clear. Second, the results show that privacy concerns are related to the functional process of using AI devices, and the user's and the AI's roles in using AI devices lie within that functional process. Therefore, AI device operators need to ensure that AI users experience a high level of trust. For example, it would be a good idea to make the privacy process part of the role play between users and AIs, and to allow various forms of communication (e.g. text, pictures, voice, video) between users and AIs.
Through these results, the present study offers several insights into user acceptance of AI. However, the following limitations of this research should be acknowledged. First, the present study collected responses from users in South Korea, so national cultural issues may exist in the research context. Future studies should re-test the model in other countries to ensure the reliability of these results. Second, as the variables were all measured simultaneously, it cannot be certain that their relationships are stable over time. Although the survey questions were presented in the reverse order of the analysis model to mitigate this issue, reverse causal relationships between variables remain a possibility. Therefore, future studies should consider longitudinal designs. Finally, this study uses role clarity, motivation and ability as individual factors and explores privacy concerns and trust as AI-specific moderators. However, considering the characteristics of AI, future studies may identify other individual factors and other moderating factors. For example, locus of control may be considered as another personal factor, and interaction with AI may be considered as a moderating factor.
References
Agarwal, R. and Karahanna, E. (2000), "Time flies when you're having fun: cognitive absorption and beliefs about information technology usage", MIS Quarterly, Vol. 24 No. 4, pp. 665-694.
Awad, N.F. and Krishnan, M.S. (2006), “The personalization privacy paradox: an empirical evaluation
of information transparency and the willingness to be profiled online for personalization”, MIS
Quarterly, Vol. 30 No. 1, pp. 13-28.
Blut, M., Wang, C. and Schoefer, K. (2016), “Factors influencing the acceptance of self-service
technologies: a meta-analysis”, Journal of Service Research, Vol. 19 No. 4, pp. 396-416.
Brown, D. (2003), “When managers balk at doing HR’s work”, Canadian HR Reporter, Vol. 16, p. 1.
Chellappa, R.K. and Sin, R.G. (2005), “Personalization versus privacy: an empirical examination of the
online Consumer’s dilemma”, Information Technology and Management, Vol. 6 No. 2,
pp. 181-202.
Dhamija, P. and Bag, S. (2020), “Role of artificial intelligence in operations environment: a review and
bibliometric analysis”, The TQM Journal, Vol. 32 No. 4, pp. 869-896.
Flavián, C., Guinalíu, M. and Jordán, P. (2019), "Antecedents and consequences of trust on a virtual team leader", European Journal of Management and Business Economics, Vol. 28 No. 1, pp. 2-24.
Frank, M., Roehrig, P. and Pring, B. (2017), What to Do when Machines Do Everything: How to Get Ahead in a World of AI, Algorithms, Bots and Big Data, John Wiley & Sons, Hoboken, NJ.
Gartner (2016), "Hype cycle for emerging technologies identifies three key trends that organizations must track to gain competitive advantage".
Genpact (2017), "The consumer: sees AI benefits but still prefers the human touch", available at: http://www.genpact.com/lp/ai-research-consumer (accessed 12 May 2018).
Giovanis, A., Assimakopoulos, C. and Sarmaniotis, C. (2019), “Adoption of mobile self-service retail
banking technologies: the role of technology, social, channel and personal factors”, International
Journal of Retail and Distribution Management, Vol. 47 No. 9, pp. 894-914.
Heater, B. (2017), "After pushing back, Amazon hands over Echo data in Arkansas murder case", available at: http://social.techcrunch.com/2017/03/07/amazon-echomurder (accessed 7 June 2018).
Hengstler, M., Enkel, E. and Duelli, S. (2016), “Applied artificial intelligence and trust—the case of
autonomous vehicles and medical assistance devices”, Technological Forecasting and Social
Change, Vol. 105, pp. 105-120.
Hernandez-Fernandez, A. and Lewis, M.C. (2019), “Brand authenticity leads to perceived value and
brand trust”, European Journal of Management and Business Economics, Vol. 28 No. 3,
pp. 222-238.
Hoffman, D.L. and Novak, T.P. (2017), “Consumer and object experience in the Internet of Things: an
assemblage theory approach”, Journal of Consumer Research, Vol. 44 No. 6, pp. 1178-1204.
Hunt, E. (2016), “Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter”, The
Guardian, available at: https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-
ai-chatbot-gets-a-crash-course-in-racism-from-twitter (accessed 24 March 2015).
Isaac, M. and Lohr, S. (2017), “Unroll me service faces backlash over a wide spread practice: selling
user data”, The New York Times, available at: https://www.nytimes.com/2017/04/24/
technology/personal-data-firm-slice-unroll-me-backlash-uber.html (accessed 7 June 2017).
Jarvenpaa, S.L., Tractinsky, N. and Vitale, M. (1999), "Consumer trust in an Internet store", Information Technology and Management, Vol. 1 Nos 1-2, pp. 45-71.
Jones, G.R. (1986), "Socialization tactics, self-efficacy, and newcomers' adjustments to organizations", Academy of Management Journal, Vol. 29 No. 2, pp. 262-279.
Kelly, P., Lawlor, J. and Mulvey, M. (2019), “Self-service technologies in the travel, tourism, and
hospitality sectors: principles and practice”, in Ivanov, S. and Webster, C. (Eds), Robots,
Artificial Intelligence, and Service Automation in Travel, Tourism and Hospitality, Emerald
Publishing Limited, pp. 57-78.
Lardinois, F. (2017), “Google says its machine learning tech now blocks 99.9% of Gmail spam and
phishing messages”, available at: https://techcrunch.com/2017/05/31/google-says-its-machine-
learning-tech-now-blocks-99-9-of-gmail-spam-and-phishingmessages/ (accessed 7 June 2018).
Lee, J.D. and See, K.A. (2004), “Trust in automation: designing for appropriate reliance”, Human
Factors, Vol. 46 No. 1, pp. 50-80.
Lee, J.M. and Rha, J.Y. (2016), “Personalization–privacy paradox and consumer conflict with the use of
location-based mobile commerce”, Computers in Human Behavior, Vol. 63, pp. 453-462.
Leonhard, G. (2016), Technology vs. Humanity: The Coming Clash Between Man and Machine, Fast
Future Publishing, New York.
Lowry, P.B., Gaskin, J.E., Twyman, N.W., Hammer, B. and Roberts, T.L. (2013), “Taking ‘fun and
games’ seriously: proposing the hedonic-motivation system adoption model (HMSAM)”, Journal
of the Association of Information Systems, Vol. 14 No. 11, pp. 617-671.
Lu, L., Cai, R. and Gursoy, D. (2019), “Developing and validating a service robot integration
willingness scale”, International Journal of Hospitality Management, Vol. 80, pp. 36-51.
MacKenzie, S.B. and Podsakoff, P.M. (2012), “Common method bias in marketing: causes, mechanisms,
and procedural remedies”, Journal of Retailing, Vol. 88 No. 4, pp. 542-555.
Markoff, J. and Mozur, P. (2015), “For sympathetic ear, more Chinese turn to smartphone program”,
NY Times.
Marler, J. and Dulebohn, J.H. (2005), “A model of employee self-service technology acceptance”, in
Martocchio, J.J. (Ed.), Research in Personnel and Human Resource Management, JAI Press,
Greenwich, CT, Vol. 24, pp. 139-182.
Mayer, R.C., Davis, J.H. and Schoorman, F.D. (1995), “An integrative model of organizational trust”,
Academy of Management Review, Vol. 20 No. 3, pp. 709-734.
McKnight, D.H., Cummings, L.L. and Chervany, N.L. (1998), “Initial trust formation in new
organizational relationships”, Academy of Management Review, Vol. 23 No. 3, p. 473.
Oliver, R.L. and Bearden, W.O. (1985), “Crossover effects in the theory of reasoned action: a
moderating influence attempt”, Journal of Consumer Research, Vol. 12, pp. 324-340.
Parra-López, E., Martínez-González, J.A. and Chinea-Martín, A. (2018), "Drivers of the formation of e-loyalty towards tourism destinations", European Journal of Management and Business Economics, Vol. 27 No. 1, pp. 66-82.
Pavlou, P.A. (2003), “Consumer acceptance of electronic commerce: integrating trust and risk with the
technology acceptance model”, International Journal of Electronic Commerce, Vol. 7 No. 3,
pp. 101-134.
Podsakoff, P.M., MacKenzie, S.B., Lee, J.-Y. and Podsakoff, N.P. (2003), "Common method biases in behavioral research: a critical review of the literature and recommended remedies", Journal of Applied Psychology, Vol. 88 No. 5, pp. 879-903.
PwC’s Global Consumer Insights Survey (2018), “Artificial intelligence: touchpoints with consumers”,
available at: https://www.pwc.com/gx/en/retail-consumer/assets/artificial-intelligence-global-
consumer-insights-survey.pdf (accessed 7 June 2018).
Rizzo, J.R., House, R.J. and Lirtzman, S.I. (1970), “Role conflict and ambiguity in complex
organizations”, Administrative Science Quarterly, Vol. 15 No. 2, pp. 150-163.
Rosen, G. (2017), "Getting our community help in real time", Facebook Newsroom, available at: https://newsroom.fb.com/news/2017/11/getting-our-communityhelp-in-real-time/ (accessed 7 June 2018).
Rosenberg, M. and Frenkel, S. (2018), "Facebook's role in data misuse sets off storms on two continents", The New York Times, available at: https://www.nytimes.com/2018/03/18/us/cambridge-analytica-facebook-privacy-data.html.
Tyagi, P.K. (1985), "Relative importance of key job dimensions and leadership behaviors in motivating salesperson work performance", Journal of Marketing, Vol. 49, pp. 76-86.
Upadhyay, A.K. and Khandelwal, K. (2019), “Artificial intelligence-based training learning from
application”, Development and Learning in Organizations, Vol. 33 No. 2, pp. 20-23.
van Tonder, E., Saunders, S.G. and de Beer, L.T. (2020), “A simplified approach to understanding
customer support and help during self-service encounters”, International Journal of Quality and
Reliability Management, Vol. 37 No. 4, pp. 609-634.
Venkatesh, V., Thong, J. and Xu, X. (2012), "Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology", MIS Quarterly, Vol. 36, pp. 157-178.
Wang, X., Yuen, K.F., Wong, Y.D. and Teo, C.-C. (2019), “Consumer participation in last-mile logistics
service: an investigation on cognitions and affects”, International Journal of Physical
Distribution and Logistics Management, Vol. 49 No. 2, pp. 217-238.
Weisbaum, H. (2018), “Trust in Facebook has dropped by 66 percent since the Cambridge Analytica
Scandal”, available at: https://www.nbcnews.com/business/consumer/trust-facebook-has-
dropped-51-percent-cambridge-analytica-scandal-n867011 (accessed 28 May 2018).
Wilson, H.J. and Daugherty, P. (2018), “AI will change health care jobs for the better”, Harvard
Business Review, available at: https://hbr.org/2018/03/ai-will-change-health-carejobs-for-the-
better.
Wu, C.-G. and Wu, P.-Y. (2019), “Investigating user continuance intention toward library self-service
technology: the case of self-issue and return systems in the public context”, Library Hi Tech,
Vol. 37 No. 3, pp. 401-417.
Xu, H., Luo, X.R., Carroll, J.M. and Rosson, M.B. (2011), “The personalization privacy paradox: an
exploratory study of decision making process for location-aware marketing”, Decision Support
Systems, Vol. 51 No. 1, pp. 42-52.
Zhai, Y., Yan, J., Zhang, H. and Lu, W. (2020), “Tracing the evolution of AI: conceptualization of
artificial intelligence in mass media discourse”, Information Discovery and Delivery, Vol. ahead-
of-print No. ahead-of-print. doi: 10.1108/IDD-01-2020-0007.
Further reading
Gibbs, N., Pine, D.W. and Pollack, K. (2017), Artificial Intelligence: The Future of Humankind, Time
Books, New York.
LaGrandeur, K. and Hughes, J.J. (Eds), (2017), Surviving the Machine Age. Intelligent Technology and
the Transformation of Human Work, Palgrave Macmillan, London.
Tegmark, M. (2017), Life 3.0: Being Human in the Age of Artificial Intelligence, Alfred A. Knopf,
New York.
Corresponding author
Youngkeun Choi can be contacted at: penking1@smu.ac.kr