A R T I C L E  I N F O

Keywords:
Artificial Intelligence
Machine Learning
Marketing Management
Decision-Making
Delphi Method
Ethics

A B S T R A C T

Companies neither fully exploit the potential of Artificial Intelligence (AI) nor that of Machine Learning (ML), its most prominent method. This is true in particular of marketing, where their possible use extends beyond mere segmentation, personalization, and decision-making. We explore the drivers of and barriers to AI and ML in marketing by adopting a dual strategic and behavioral focus, which provides both an inward (AI and ML for marketers) and an outward (AI and ML for customers) perspective. From our mixed-method approach (a Delphi study, a survey, and two focus groups), we derive several research propositions that address the challenges facing marketing managers and organizations in three distinct domains: (1) Culture, Strategy, and Implementation; (2) Decision-Making and Ethics; (3) Customer Management. Our findings contribute to better understanding the human factor behind AI and ML, and aim to stimulate interdisciplinary inquiry across marketing, organizational behavior, psychology, and ethics.
“We need to ask ourselves not only what computers can do, but what computers should do—that time has come!”
—Satya Nadella, CEO of Microsoft (Bittu, 2018, p. 1).

1. Introduction

Due to its potential to generate favorable outcomes in diverse sectors and industries, Artificial Intelligence (AI), and especially Machine Learning (ML), is attracting widespread attention. Frontlines include health care, where AI and ML are being deployed to manage the COVID-19 pandemic (e.g., Bragazzi et al., 2020) and to monitor and improve mental health (D’Alfonso, 2020; Kim, Ruensuk, & Hong, 2020); education, where they can enhance learning (e.g., Kumar, 2019; Mirchi et al., 2020); and agriculture, where they help improve harvests and thus fight starvation (Dharmaraj & Vijayanand, 2018). Along with their benefits, AI and ML have also been shown to have adverse effects: violations of data privacy (e.g., Martin & Murphy, 2016), fear of job replacements (Granulo, Fuchs, & Puntoni, 2019; Huang & Rust, 2018), or even reduced well-being (Etkin, 2016). Thus, a positive net effect of AI and ML appears to depend on determining what they should do rather than what they can do (Bittu, 2018). AI and ML need to be implemented to augment rather than replace human capabilities, and must ultimately serve users’ needs (Jarrahi, 2018).

Thus, AI and ML in marketing, defined as the “activity, set of institutions, and processes for creating, communicating, delivering, and exchanging offerings that have value for customers, clients, partners, and society at large” (AMA, 2017), is a promising field of study. Deploying AI and especially ML applications provides marketers with ample opportunities to improve process automation, market forecasting, and (managerial) decision-making (Paschen et al., 2019; Huang & Rust, 2021). Further, applications can be used to create value by providing real-time personal recommendations (Davenport et al., 2020), by improving services, and by responding individually to customer needs (Rust, 2020). Despite this extant research on the technological possibilities of AI and ML in marketing, little is known about the human perspective, particularly from a marketing manager’s viewpoint. Hence, we ask: How can marketing managers thrive in the age of AI and benefit from its potential to create value?

We explore this question by examining the interplay between (1) marketing management and managerial decisions, (2) psychology and individual perceptions of AI/ML, (3) technology, and (4) ethics. We thus heed calls of prior research to better understand managerial decisions in addition to consumer behavior (Wierenga, 2011), to stimulate further research on the topic of ethics and AI (Baker-Brunnbauer, 2020), and to apply an interdisciplinary and exploratory approach that is needed given the complexity of this constantly evolving topic (Keding, 2020).
* Corresponding author.
E-mail addresses: [email protected] (G. Volkmar), [email protected] (P.M. Fischer), [email protected] (S. Reinecke).
https://doi.org/10.1016/j.jbusres.2022.04.007
Received 30 April 2021; Received in revised form 29 March 2022; Accepted 2 April 2022
Available online 1 June 2022
0148-2963/© 2022 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
G. Volkmar et al.
Table 1
Influential^a Empirical Research on AI and ML in Marketing.

Authors (Year) | Focus^b | Perspective^c | Key Findings
Dietvorst, Simmons, & Massey (2015) | Behav. | Outw. | Shows the tendency of people to dismiss algorithms and have less confidence in them when algorithms make a mistake.
Huang & Rust (2018) | Strat. | Inw. | Specifies four intelligences required for service tasks. Lays out the way firms should decide between humans and machines.
Leung, Paolacci, & Puntoni (2018) | Behav. | Outw. | Demonstrates consumers’ resistance to automation/automated products if identity-relevant processes are being automated.
Castelo, Bos, & Lehmann (2019) | Behav. | Outw. | Shows consumers’ perception of algorithms as being less useful for subjective tasks. This effect is reduced when algorithms are considered as human-like.
Duan, Edwards, & Dwivedi (2019) | Strat. | Inw. | Addresses how humans and AI can be complementary in decision making. Derives research opportunities on designing information systems.
Granulo, Fuchs, & Puntoni (2019) | Behav. & Strat. | Inw. | Shows human preference for being replaced by robots rather than by other humans due to self-threat.
Logg, Minson, & Moore (2019) | Behav. | Outw. | Demonstrates human reliance on algorithmic advice over advice from other humans (i.e., algorithm appreciation).
Davenport et al. (2020) | Behav. & Strat. & M./T. | Inw. & Outw. | Presents a conceptual framework helping customers and firms anticipate how AI is likely to evolve, and derives a general research agenda.
Hildebrand et al. (2020) | M./T. | Outw. | Develops a conceptual framework and illustration of linking vocal features in human voices to experiential outcomes and emotional states.
Loureiro, Guerreiro, & Tussyadiah (2021) | Behav. & Strat. | Inw. & Outw. | Provides an AI literature overview in the business context and derives research questions for various domains.
Makarius et al. (2020) | Strat. & M./T. | Inw. | Develops a model to efficiently integrate AI within an organization.
Newman, Fast, & Harmon (2020) | Behav. | Inw. | Shows that people perceive being evaluated by an algorithm as less fair if employees perceive it as reductionistic.
Rai (2020) | Strat. | Inw. | Explores Explainable AI as critical to making AI more transparent within organizations.
Rust (2020) | Strat. | Inw. & Outw. | Explores the nature of change of technological trends and examines the implications for marketing managers, marketing education, and academic research.
Du & Xie (2021) | Strat. & M./T. | Inw. | Develops a model for managers to categorize AI-enabled products.
Dwivedi et al. (2021) | Strat. | Inw. & Outw. | Shows AI’s challenges and future opportunities for business and management, government, public sector, and technology.
Huang & Rust (2021) | Behav., Strat. & M./T. | Inw. & Outw. | Develops a three-stage framework for AI-based strategic marketing planning: Mechanical AI, Thinking AI, Feeling AI.
Kumar, Ramachandran, & Kumar (2021) | Strat. | Inw. & Outw. | Focuses on four technologies – the Internet of Things, AI, ML, and Blockchain – and their roles in marketing, and formulates research questions.
Perez-Vega et al. (2021) | Strat. & M./T. | Inw. & Outw. | Develops a conceptual framework on how firms and customers can enhance the outcomes of firm-solicited and firm-unsolicited online customer engagement behaviors and derives five propositions.
Shah & Murthi (2021) | Strat. | Inw. | Examines the transforming role of marketers and describes challenges by developing a model on how technology expands the scope and role of marketing.
Sowa, Przegalinska, & Ciechanowski (2021) | Strat. & M./T. | Inw. | Explores synergies between human workers and AI in managerial tasks by distinguishing levels of proximity between AI and humans in a work setting.
Stahl et al. (2021) | Strat. | Inw. & Outw. | Categorizes ethics into three areas: (1) issues related to ML, (2) social and political issues, and (3) metaphysical questions.

^a Given the high number of studies on AI in marketing, we focused on publications in more prestigious journals and/or highly cited papers.
^b Behav. = Behavioral; Strat. = Strategic; M./T. = Methodological/Technological.
^c Inw. = Inward; Outw. = Outward.
Our research makes two main contributions. First, based on a literature review, we propose a revised technological framework for using AI and ML in marketing. Our framework holistically links AI methodology (specifically ML) to AI capabilities and applications. We validate and expand this framework with marketing and technology experts from both academia and practice. Second, we develop research propositions on the scarcely explored human factor, especially the role of marketing managers in successfully implementing AI and ML to benefit (marketing) managers and consumers. We thereby aim to stimulate scholarly and interdisciplinary inquiry into how marketing managers should employ AI and ML internally and externally (i.e., in customer interactions), and how obstacles to adequate utilization might be overcome. On this basis, we identify influential research on AI and ML in marketing. In doing so, we distinguish a strategic and a behavioral focus, and consider an inward and an outward perspective (see Table 1).

Using a mixed-methods approach, we build on evidence from three distinct investigations: a two-round Delphi study, a quantitative survey, and two focus groups. Round 1 of our Delphi study (based on personal interviews) explored the potential of AI and ML in marketing management and gathered 30 statements from carefully selected technology experts and marketing managers with profound knowledge of AI and ML. In Round 2 (online questionnaire), the compiled statements were evaluated and discussed by the same experts, and additional statements were generated based on expert ratings and comments. After categorizing the statements according to three overarching themes, we conducted a quantitative survey with additional experienced marketing managers to evaluate these statements and themes, and to generate dimensions and research propositions. We implemented two focus groups with marketing managers (previously involved in AI and ML projects) in order to (1) further refine our research propositions, (2) exclusively link these to AI and ML, and (3) enhance our contributions by making these propositions testable and thus a promising avenue for future research. After triangulating the results, we present theoretical and managerial implications, discuss the inherent limitations of our study, and outline future research directions.

2. Theoretical Background

2.1. Understanding AI and ML

Despite their long history, which began as early as the 1956 Dartmouth Summer Conference, there is no universal definition of Artificial Intelligence and Machine Learning (Torra et al., 2019). Even worse, as the terms are often used interchangeably (e.g., Camerer, 2019), definitions remain rather vague (De Bruyn et al., 2020; van Giffen, Herhausen, & Fahse, 2022). We follow Ma & Sun’s (2020) well-established distinction: AI refers to machines that perform human intelligence tasks, while ML denotes computer programs that can learn without following strict human instructions. Following this differentiation, and as machines can perform intelligent tasks (primarily) based on trained computer programs, we integrate ML into a comprehensive AI framework. Extending Daugherty and Wilson’s (2018) AI framework, we understand ML as the predominant AI method for building AI capabilities, and ultimately AI applications (see Fig. 1 and Section 4.2). We thus follow extant research in regarding ML as an essential subdomain of AI (e.g., Mitchell, 1997; Goodfellow, Bengio, & Courville, 2016).

Another categorization of AI well-suited to stimulating interdisciplinary research distinguishes AI’s distinct capabilities: understanding, reasoning, and learning (Russell & Norvig, 2010). Understanding is the human perception and interpretation of environmental information via, for example, natural language processing and computer vision (Daugherty & Wilson, 2018). Reasoning means that informed decisions or recommendations will likely be made to optimize courses of action (Bellman, 1978; Albus, 1991; Kolbjørnsrud, Amico, & Thomas, 2016). Learning means that AI and ML acquire knowledge from distinct information and adapt to an environment, exhibiting intelligent behavior (McCarthy et al., 1955; Kurzweil, 1990; Kaplan & Haenlein, 2019). These three aspects combine AI capabilities designed to support human thinking and action.

Researchers have defined AI in terms of whether a system thinks (Bellman, 1978) or acts (Kurzweil, 1990) like a human, or whether a system thinks (Charniak & McDermott, 1985) or acts (Nilsson, 1998) rationally. These definitions have either a technological or a human focus. A technological focus emphasizes the ability of computers, machines, algorithms, or robots to think, to recognize their environment, and thus to solve complex tasks independently (McCarthy et al., 1955; Nilsson, 1998; Kaplan & Haenlein, 2019). A human focus means that technical systems require a specific intelligence to perform tasks as humans would (Kurzweil, 1990; Daugherty & Wilson, 2018).

Despite their capabilities and multidisciplinary appeal, AI and ML continue to attract skepticism and concern, as Satya Nadella’s words (quoted at the beginning) illustrate: “We need to ask ourselves not only what computers can do, but what computers should do—that time has come” (Bittu, 2018, p. 1). Considering how to enhance human capabilities raises questions about how to ensure transparent and beneficial human–machine interaction (Jarrahi, 2018). We thus focus on human reactions to AI. Taking a managerial perspective, we investigate the drivers and barriers for executives when AI and ML proliferate in firms. Given their boundary-spanning role, we focus on marketing managers and explore how they can thrive in the age of AI.

2.2. The role of AI and ML in marketing management

AI and specifically ML seem to offer infinite opportunities in marketing. Yet marketing success per definition has always depended on creating human and personal experiences (Schmitt, 1999; van Osselaer et al., 2020). This makes studying AI and ML in marketing management highly promising yet challenging. Both can significantly improve marketing performance (Wright et al., 2019). Ample opportunities exist for using AI technologies in marketing: for instance, to identify and understand existing customers (Loureiro, Guerreiro, & Tussyadiah, 2021); to generate insights from customer purchasing data (Wright et al., 2019); to identify current competitors (Huang & Rust, 2021); and to segment and target new customers (Martínez-López & Casillas, 2013; Jabbar, Akhtar, & Dani, 2020). AI, ML, and robotics have been shown to encompass all 4 Ps of marketing (Xiao & Kumar, 2021): (1) product (e.g., Google Home or Amazon Echo) and service (e.g., Walmart’s autonomous shopping cart Dash); (2) price (e.g., eBay’s auction sniper); (3) place (e.g., Tesla’s driverless semi-truck or SoftBank Robotics’ Pepper); and (4) promotion (e.g., Nike’s Chalkbot).

AI and ML help to analyze large amounts of data from various media (e.g., textual, visual, verbal) and sources (web, mobile, in-person) to gain extensive knowledge (Du & Xie, 2021). These insights support marketers in improving their decision-making capabilities (Paschen, Kietzmann, & Kietzmann, 2019)—a critical factor for firm success (Abubakar et al., 2019). In the last two decades, using AI and ML in decision-making has been a major achievement (Duan, Edwards, & Dwivedi, 2019) and will further disrupt marketers’ decision-making (Davenport et al., 2020). Today’s AI systems are capable of improving decision quality by complementing human decision-making (Jarrahi, 2018) and by reducing human error (Logg, Minson, & Moore, 2019). Gaining competitive advantage through AI and ML (Huang & Rust, 2021) is no longer about whether to employ them, but to what extent (Lilien, Rangaswamy, & De Bruyn, 2017).

G. Volkmar et al. Journal of Business Research 149 (2022) 599–614

Fig. 1. AI and ML Framework. Developed framework, based on Russell & Norvig (2010) and Daugherty & Wilson (2018), and enhanced in Round 1 of our Delphi study.

Nevertheless, research shows that humans tend to reject algorithms and AI, in particular when mistakes occur (Moon, 2003; Dietvorst, Simmons, & Massey, 2015) or when humans feel less responsible (Promberger & Baron, 2006; Dietvorst, Simmons, & Massey, 2015). Humans tend to prefer algorithmic advice over human judgment only in certain situations, such as objective or numerical tasks (Castelo, Bos, & Lehmann, 2019; Logg et al., 2019; Newman, Fast, & Harmon, 2020). Unsurprisingly, therefore, many marketing managers remain concerned about fully utilizing AI and ML in decision-making (e.g., automated decisions; Davenport & Kirby, 2016), despite their potential.

Some marketing managers have difficulty trusting AI and ML recommendations because machines do not explain their decisions (Kolbjørnsrud et al., 2016). Computers might perform very well—but for the wrong reasons. Data may “inherit” an unknown bias, or the model may fail at the slightest deviation from routine (Ransbotham et al., 2017). Further, as evidenced in a marketing context, extensive and disproportional use of AI, ML, and big data by senior managers can generate tensions between AI and subordinate managers, who may feel less valued and understood (Wortmann, Fischer, & Reinecke, 2018). Ultimately, such reactions to AI and ML may even elicit fears of robotic job replacement (Granulo et al., 2019).

While these examples highlight concerns about using AI and ML internally (to improve processes, collaboration, and decisions), marketing needs to find ways of using AI and ML externally (customer interactions). As marketing seeks to establish unique, value-creating experiences through personal relationships (Schmitt, 1999), there is an ongoing debate on whether automation and AI technologies augment rather than dilute customer experience (Waytz, 2019). There is an inherent danger that such technologies objectivize customers, and thereby damage the customer-employee relationship (Fuchs, Schreier, & van Osselaer, 2015; van Osselaer et al., 2020). For example, an identical computer-based message is evaluated as significantly more pleasurable if deemed human-generated (Gray, 2012), highlighting the importance of human interactions in a service context. Further, consumers may resist automated and AI-based products if they identify strongly with a product and if automation prevents them from demonstrating their skills (e.g., robotic advisory for experienced financial investors; robotic surgery). This raises questions about whether and when AI/ML and automation make companies lose their most valuable customers (Leung, Paolacci, & Puntoni, 2018). Finally, if AI and ML are utilized to identify, target, and retain key customers through personalized offers, companies must adapt their activities to avoid violating data privacy (Leslie, Kim, & Barasz, 2018).

Marketing organizations, then, need to manage potential tensions between humans and AI/ML both internally and externally. To fully benefit from AI, marketers need to consider strategy, ethics, and psychology alongside technology. This requires interdisciplinary cooperation and rethinking the roles and responsibilities of humans and machines (Hoffman & Novak, 2018). To explore the role of AI and ML in marketing, we therefore apply both a dual (strategic and behavioral) focus and a dual perspective (inward and outward). We used both criteria to identify influential research on AI and ML in marketing and to extend existing results (Table 1). We examined managerial tasks and explored managerial reactions to AI and ML to derive research propositions designed to stimulate further research intended to help organizations overcome the current challenges of AI and ML.

3. Overview of studies

To generate research propositions on the AI and ML challenges facing marketing managers and firms, and to stimulate future research, we used a mixed-methods approach (Fig. 2). We first conducted a two-round Delphi study (comprising expert practitioners and academics). Round 1 aimed to capitalize on expert knowledge to identify meaningful themes and novel statements on AI and ML in marketing. Round 2 sought to validate statements as well as stimulate additional ones based on those made in Round 1. Having achieved broad expert consensus after our Delphi study, we launched a survey with experienced managers working at the intersection of marketing and AI/ML. We aimed to validate expert views from the practitioner and user perspectives. Marketing managers’ assessment of experts’ themes and statements led to dimensions and testable research propositions via exploration and interpretation. To evaluate its appropriateness, we conducted two focus groups involving additional marketers with experience in AI and ML. Discussions sharpened dimensions and propositions in an AI and ML context, and provided vivid examples.

4. Delphi Study: Generating and validating statements on AI and ML in marketing

4.1. Methodological introduction and procedure

Delphi studies iteratively collect and summarize participants’ opinions and knowledge and share these with a peer group (Brady, 2015). Through multiple data collection and feedback rounds, expert panels can revise their initial ideas and opinions (Dalkey & Helmer, 1963). Anonymizing experts prevents opposing views from clashing and enables gaining multiple perspectives on a specific topic (Rowe, Wright, & Bolger, 1991). After every round, the researcher updates and aggregates experts’ answers, evaluations, and reasons (Linstone & Turoff, 1975). This serves to gather expert thoughts and insights and to elicit novel, yet converging perspectives on an interdisciplinary issue (Rowe & Wright, 1999). Thus, Delphi studies, suited to multifaceted exploratory research (Okoli & Pawlowski, 2004), open up multiple perspectives without allowing one opinion to dominate—all being critical requirements of our investigation.

Delphi studies have become popular among marketing scholars and have recently been used to study diverse topics: how to identify and combat fake news and communication (Flostrand, Pitt, & Kietzmann, 2019); challenges to organizations’ social media activities (Poba-Nzaou et al., 2016); the economic power of B2B transactions (Cortez & Johnston, 2017); and managers’ appreciation of big-data analytics (Côrte-Real et al., 2019). There are four types of Delphi studies (Paré et al., 2013): (1) Ranking-type Delphi studies (which seek to rank identified key factors); (2) Classical Delphi studies (which attempt to reach a consensus); (3) Policy Delphi studies (which define different views in social and political contexts); and (4) Decision Delphi studies (which define future directions based on a small group with decision-making power). To prioritize key statements, and to validate these and generate research propositions for our next studies, we used the ranking-type method (Poba-Nzaou et al., 2016; Côrte-Real et al., 2019).

4.2. Development of an AI and ML framework

To avoid misconceptions, Delphi study informants should have at least a common understanding of the core concepts. We therefore extended Daugherty and Wilson’s (2018) AI framework, made this available to each expert, and used it as a starting point for our Delphi study without narrowing the topic. The framework comprises and interrelates AI methods, AI capabilities, and AI applications (Fig. 1). AI methods are used to process and structure different types of data. Based primarily on ML as an essential subdomain of AI (Goodfellow, Bengio, & Courville, 2016), AI methods encompass statistical methods to endow systems and computer programs with the ability to learn (Ma & Sun, 2020).

The literature distinguishes three broad subcategories of ML as AI methods intended to create AI capabilities: supervised learning, unsupervised learning, and reinforcement learning (Bonaccorso, 2017). While training data are labeled in supervised learning (e.g., a picture of a human is categorized as a human being), in unsupervised learning computer programs independently look for patterns in unlabeled training data, with reinforcement learning providing systems with constant feedback on whether a decision or categorization was correct. AI capabilities, then, mostly result from ML-based AI methods—enabling systems to understand the environment (e.g., through computer vision). Finally, AI applications are derived from AI capabilities and culminate in use cases (e.g., facial recognition based on computer vision). Common to AI applications is direct employment by end-users (e.g., marketing executives or customers) (Rai, 2020).

We extended Daugherty and Wilson’s (2018) model in three ways: First, to update their model according to recent developments, we included expert rules, neural networks, as well as trial-and-error-based learning as AI methods; natural language processing and knowledge mining as AI capabilities; and user profiling and audience segmentation, mixed reality, emotion and voice recognition, performance optimization, adaptive learning, and decision support systems as AI applications. Second, to illustrate the exponential growth of applications, we represented the model as a funnel instead of as a circle. Third, we included a data cloud to visualize the available information in our environment and to illustrate that the data used in ML methods and AI applications are
Table 2
Overview of Expert Panel Characteristics – Delphi Study.
Expert Panel Characteristics
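The three ML subcategories that Section 4.2 distinguishes—supervised, unsupervised, and reinforcement learning—can be illustrated with a minimal, dependency-free sketch. All toy data, thresholds, and function names below are our own invention for illustration; they are not part of the paper’s framework:

```python
# Toy illustrations of the three ML subcategories named in Section 4.2.
# Everything here is invented for illustration only.

def supervised_predict(labeled_examples, query):
    """Supervised learning: training data are (feature, label) pairs.
    Predicts the label of the nearest training feature (1-nearest neighbor)."""
    return min(labeled_examples, key=lambda ex: abs(ex[0] - query))[1]

def unsupervised_cluster(values, k=2, rounds=10):
    """Unsupervised learning: finds k cluster centers in unlabeled data
    (a bare-bones one-dimensional k-means)."""
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(rounds):
        groups = [[] for _ in centroids]
        for v in values:
            nearest = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)
        centroids = [sum(g) / len(g) if g else c for g, c in zip(groups, centroids)]
    return centroids

def reinforcement_learn(reward_of_action, episodes=100, step=0.1):
    """Reinforcement learning: improves action-value estimates from repeated
    reward feedback, then returns the best-valued action."""
    values = {a: 0.0 for a in reward_of_action}
    for _ in range(episodes):
        for action, reward in reward_of_action.items():
            values[action] += step * (reward - values[action])  # nudge toward observed reward
    return max(values, key=values.get)
```

For instance, `supervised_predict([(1.0, "cat"), (9.0, "dog")], 8.5)` returns `"dog"` because the query lies nearest the labeled example `9.0`, whereas `unsupervised_cluster([1, 2, 9, 10])` recovers the two cluster centers `1.5` and `9.5` without ever seeing a label.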
604
G. Volkmar et al. Journal of Business Research 149 (2022) 599–614
Table 3
Derived Statements of First-Round Delphi Study and Statements Ranked in Second-Round.

Items | Expert Statements | % Rating 5, 6 & 7 | Mean | SD
CSI1 | AI enables humans to focus on tasks of higher value. | 88% | 6.16 | 1.08
CSI2 | Most companies start with the technology and then look for a use case for AI. | 48% | 4.44 | 1.55
CSI3 | Lack of knowledge of AI is the biggest obstacle to leveraging AI. | 56% | 4.56 | 1.77
CSI4 | Market pressure is forcing companies to implement AI. | 84% | 5.56 | 1.50
CSI5 | Automating administrative management tasks can create enormous added value for companies. | 92% | 6.58 | 0.86
CSI6 | Most people think that implementing AI will work miracles and solve everything without any effort. | 64% | 4.80 | 1.62
CSI7 | Most AI pilot projects fail—not because of technological issues but because of overly high expectations of the management. | 67% | 4.83 | 1.28
CSI8 | It is important to establish a culture of trial and error in the company to learn from mistakes. | 96% | 6.52 | 0.94
CSI9 | When humans interact with AI, they do not tolerate failure. | 68% | 4.76 | 1.48
CSI10 | AI is not allowed to make mistakes, humans are. | 64% | 4.76 | 1.92
CSI11 | Without transparency, AI won’t be accepted. | 63% | 5.04 | 1.84
CSI12 | Online marketing can be automated, as it is technically feasible. | 91% | 5.82 | 0.94
CSI13 | It will take longer to solve the ethical questions than to develop the technology and to make it feasible. | 63% | 5.42 | 1.71
DME1 | With AI, subjective decisions based on gut feeling can be avoided, and objectivity can be increased. | 72% | 5.08 | 1.62
DME2 | AI makes decisions, but people have the choice. | 61% | 4.83 | 2.26
DME3 | If the manager makes the decision based on bad AI advice, the manager is responsible. | 76% | 5.24 | 1.50
DME4 | Managers should demand that AI reasoning is made transparent to them. | 84% | 6.16 | 1.32
DME5 | The more transparent the decision-making, the less accountability needs to be discussed. | 56% | 5.04 | 2.13
DME6 | The riskier a decision becomes regarding ethical and moral values, the less people will hand over decision-making to AI. | 76% | 5.24 | 2.12
DME7 | The biggest obstacle is the predictability and understandability of AI systems. | 71% | 5.13 | 1.39
DME8 | If managers understand the functionality of AI, they are willing to give up control. | 60% | 4.92 | 1.57
DME9 | People are very skeptical of AI, because they don’t understand it. | 76% | 5.36 | 1.69
DME10 | Managers have to be able to deal with the consequences of AI. | 92% | 6.36 | 1.20
CM1 | Having (structured) access to a lot of data will be an important source of competitive advantage in the age of AI. | 96% | 6.40 | 1.10
CM2 | If you want competition in the market, you can’t have customer data privacy. | 9% | 1.96 | 1.43
CM3 | Using AI for personalized customer contact can increase customer satisfaction. | 75% | 5.58 | 1.50
CM4 | Sometimes it is better not to use the gathered customer data but to treat it confidentially, because trust in the relationship has greater value. | 92% | 6.33 | 0.94
CM5 | Users should be given higher rewards by companies using their data. | 58% | 5.25 | 1.83
CM6 | Customers are getting closer to the company through AI. | 75% | 5.25 | 1.79
CM7 | Explaining the decision-making process to customers is very important. | 72% | 5.68 | 1.38
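The per-row statistics in Table 3 (the share of experts rating a statement 5, 6, or 7 on the seven-point scale, plus the mean and standard deviation) can be reproduced from raw ratings with a short script. This is a sketch, not the authors’ analysis code: the function name is ours, the eight ratings are hypothetical, and we assume the population standard deviation (the paper does not say whether sample SD was used instead):

```python
from statistics import mean, pstdev  # pstdev = population SD; sample SD would use stdev

def summarize_ratings(ratings):
    """Aggregate one statement's 7-point expert ratings into the three
    figures reported per row of Table 3: % rating 5-7, mean, and SD."""
    pct_top3 = round(100 * sum(r >= 5 for r in ratings) / len(ratings))
    return pct_top3, round(mean(ratings), 2), round(pstdev(ratings), 2)

# Hypothetical ratings from eight experts (illustrative only):
print(summarize_ratings([7, 6, 5, 7, 3, 6, 6, 2]))  # (75, 5.25, 1.71)
```

A ranking-type Delphi study would then sort statements by the top-three share or the mean to prioritize them for the next round.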
only a fraction of the data cloud.

4.3. Delphi Study: First round

4.3.1. Expert selection and participants
Carefully selecting experts is critical to establishing validity in Delphi studies (Møldrup & Morgall, 2001). This requires recruiting heterogeneous participants with vast expertise in distinct domains of immediate relevance to the topic (Caley et al., 2014). To meet these requirements, participants, besides in-depth AI knowledge, needed to represent one of four diverse areas: research and academia, marketing, technology, consultancy. We further ensured that at least two technology experts represented each AI technology, as per the AI and ML framework (see Fig. 1). Recruited experts had the opportunity to nominate potential candidates who had to meet our predefined criteria—to achieve a balanced set of experts.

We conducted interviews until findings reached saturation—an indicator of data reliability (Morse et al., 2002). In total, n = 39 experts (response rate: 77%) from the following areas were successfully recruited to identify current and future challenges to AI in marketing: research and academia (6 participants), marketing (9), technology (14), and management consulting (10). Table 2 shows the selected experts. So as not to jeopardize their openness, experts were not asked direct personal questions.

4.3.2. Data collection and procedure
Interviews lasted 30 to 90 min (M = 42 min, SD = 15.39). They followed a semi-structured guideline to enable novel ideas and themes to surface (Jamshed, 2014). Three interviewers from different backgrounds (business administration, psychology, and engineering/technology) were employed to minimize interviewer bias (Qu & Dumay, 2011). Interviewers received our literature review to ensure content-specific competence (Meuser & Nagel, 2009). Interviews began with general questions about industry trends, opportunities, and challenges regarding AI and ML in marketing. Next, they focused on the factors (including ethical issues) influencing managers’ decision-making.

Experts were shown the extended AI and ML framework, which served as a common basis for discussion. Interviews were recorded with participant consent (revocable post-interview) and transcribed. Their focus varied based on responses, as is usual with semi-structured qualitative approaches. To reflect the generated insights, we modified the interview guide and the AI and ML framework as data collection proceeded.

4.3.3. Coding of expert interviews and results
Following grounded theory (Strauss & Glaser, 1967), we transcribed and analyzed interviews using inductive content analysis (Mayring, 2014). We used inductive coding to develop categories (Gioia, Corley, & Hamilton, 2013) and to classify interviewees’ ideas into an efficient number of categories representing similar thoughts (Weber, 1990). The coding scheme was based on the interview questions and the selected expert statements. This scheme enabled open coding and ensured systematically evaluating results (Corbin & Strauss, 2014). Two researchers independently performed open coding using the transcripts, primary findings, and participant information (Charmaz, 2014). They reproduced category-building with similar outcomes and confirmed the intercoder reliability of the content analysis.

First, we summarized and categorized experts’ statements. Following an iterative process, we identified and classified second-order topics (e.g., “Efficiency” or “Corporate Culture”). Second, we aggregated these categories to form two superordinate dimensions: “Chances and Potential” and “Challenges.” Third, we examined the resulting topics and
605
G. Volkmar et al. Journal of Business Research 149 (2022) 599–614
Table 4
Specified Statements – Derived from Second-Round Delphi Study.
Items Additional Statements
CSI5_a Some areas of online marketing can be automated, but others require human creativity.
CSI9_a Human tolerance of failure when interacting with AI is lower when the task is perceived as easy.
CSI9_b People tend to be less tolerant with AI decisions when choosing between different options (“Choose product A”) rather than providing an estimate (“Choose 90% product A”).
CSI9_c People are more forgiving with other humans who make mistakes than with AI. (Reversed pole: People are more forgiving with AI making mistakes than with other humans.)
CSI9_d People blame other humans less for making mistakes compared to AI. (Reversed pole: People blame AI less for making mistakes compared to other humans.)
CSI10_a A greater sense of responsibility is required because AI has a much more systemic impact than what humans can do on their own.
CSI10_b The fact that AI clearly states probabilities (“the result is 90% product A”) as decision output instead of merely stating the result (“the result is product A”) will help users to better understand AI.
CSI10_c The fact that AI clearly states probabilities (“the result is 90% product A”) as decision output instead of merely stating the result (“the result is product A”) will increase user acceptance.
CSI10_d Humans will be held responsible for mistakes because AI has no agency of its own (yet).
CSI10_e AI should be allowed to make more mistakes than humans, so it can learn from them rather than make the same mistake again.
CM3_a Without an effective combination of AI and humans, AI can decrease the customer experience.
CM6_a AI brings companies closer to their customers.
concepts, which involved systematically considering individual aspects (typification). Given this grouping and thematic overlaps, we developed three final overarching themes to categorize the derived statements: (1) Culture, Strategy, and Implementation (CSI); (2) Decision-Making and Ethics (DME); (3) Customer Management (CM) (see Fig. 3).
We derived 30 statements (see Table 3). Each code represents at least one statement. A statement can reflect more than one code due to overlapping themes. Statements form a basis for exploring drivers, barriers, and future developments of AI and ML in marketing management.

4.4. Delphi Study: Second round

Round 2 had two goals: (1) to determine the importance of each of the 30 statements developed in Round 1; (2) based on experts’ reasons for their assessment (Brady, 2015), to generate additional statements on AI and ML in marketing, and thus to benefit from experts’ broad knowledge of the field.

4.4.1. Expert selection and participants
The same expert panel was invited to participate in a second round (five participants admitted having language problems, which decreased interview quality in Round 1). Of 34 selected experts, we received valid responses from 25, yielding a response rate of 74%.

4.4.2. Procedure and results
Applying the ranking-type Delphi method (Paré et al., 2013), experts rated statements via 7-point Likert scales (1 = I do not agree, 7 = I fully agree) and gave reasons for their evaluations (von der Gracht, 2012). They were shown the statements with no additional information on the source(s), in order to preserve anonymity and to limit potential evaluation bias (e.g., bandwagon effect; Winkler & Moser, 2016). Ratings were calculated, and experts’ reasons for their evaluations were further analyzed to generate additional statements (O’Connor & Joffe, 2020). Combining two established criteria, consensus on a statement was achieved when it was rated as 5, 6, or 7 by at least 70% of the expert panel (Hsu & Sandford, 2007), and when its standard deviation was below the upper quartile (in our case: < 1.72) of the standard deviations of all 30 statements (Holey et al., 2007). General consensus was relatively high (see Table 3 for consensus rates). Experts agreed on 17 statements. Five statements were agreed on by at least 60% of experts, with a standard deviation below the upper quartile. For one statement (CM2), consensus was obtained by experts agreeing to disagree (83% assigned a 1, 2, or 3; SD = 1.43). The remaining seven statements generated relatively low consensus and insightful comments.
One author and an independent experienced researcher separately analyzed all comments, starting with the statements with the lowest level of expert consensus (i.e., most contested points of view). As a result, 82 potential additional statements were formulated and presented to the other two authors. They independently evaluated each potential statement, with a recommendation to add it or not. Both authors agreed to reject 31 statements and accept 22 statements (the latter were included in our pool; Table 4). Since all original statements yielded at least moderate consensus among experts, we conducted a second study (i.e., quantitative survey) with experienced marketing managers to evaluate their agreement with the 30 original statements and the 22
newly developed statements. We thus sought to assess the appropriateness of experts’ identified overarching themes and statements from practitioner and user perspectives. The survey also served to structure statements, aiming to develop testable research propositions.

5. Quantitative Survey: Cross-Validating Statements and Deriving Propositions

5.1. Sampling and procedure

5.1.1. Sampling
Via the alumni panel of a major European business school, we recruited 204 marketing managers (mean age: 50 years; 75% male) for a study titled “Artificial Intelligence in Marketing.” In return, participants received an executive summary of the study and a digital presentation of results. We also raffled prizes worth US$500. Because marketing managers came from various levels (middle and top management), and because their self-rated knowledge of AI differed (“How would you personally rate your experience of using AI?”; 1 = very low, 7 = very high; M = 3.20, SD = 1.57), we qualified our sample to ensure (externally) valid responses. We chose our final sample based on two predetermined criteria, including prespecified cutoff values: First, marketing managers needed to be in a leadership role, with direct responsibility for at least one subordinate manager. Second, they were required to have a minimum self-rated AI knowledge of 3. These requirements resulted in a sample of 101 marketing managers (mean age: 50 years, SD = 10.18; 84% male), who on average had direct responsibility for 38 subordinate managers (SD = 120) and a moderate knowledge of AI (M = 4.29, SD = 1.16).

5.1.2. Procedure
Respondents were briefly introduced to the study and given an overview of the participants of the Delphi study (experts and thought leaders). They read that they would evaluate the generated statements from the expert panel through a practitioner lens, thereby implementing a reality check. They were informed that their assessments of statements as consumers and users of AI and ML would be critical to developing meaningful research propositions, thus highlighting their pivotal role in our research. To ensure a common understanding of AI among participants, we presented our definition: “AI is a science and technology capable of implementing various tasks intelligently, of recognizing errors, and of learning from these—thereby having the capability to act adequately and intelligently in uncertain environments.”
After some introductory questions about their previous AI experience, marketing managers were asked to rate their level of agreement with (1) the original 30 statements from Round 1 of the Delphi study and (2) the 22 newly developed statements from Round 2 (1 = I do not agree, 7 = I fully agree). Statements were categorized by three themes: (1) Culture, Strategy, and Implementation; (2) Decision-Making and Ethics; (3) Customer Management. To avoid cognitive load confounding ratings, respondents received randomized statements from each theme. Finally, they were asked to provide demographics, were thanked, and could sign up for the executive summary, the virtual results presentation, and the raffle.

5.2. Results and interpretation

We analyzed respondents’ evaluations of statements in four steps. First, we considered their average assessment of statements and accordingly assigned statements to one of four categories: (1) reject (M ≤ 3); (2) tend to reject (3 < M ≤ 4); (3) tend to accept (4 < M ≤ 5); (4) accept (M > 5). Two statements were categorized as reject, five as tend to reject, 16 as tend to accept, and 23 as accept (including six statements with a semantic differential). Second, identical to the Delphi study, we calculated a consensus percentage of the assessments, thus determining consensus if at least 70% of the respondents rated a statement with 5, 6, or 7 (Hsu & Sandford, 2007). If, however, at least 70% rated a statement with 1, 2, or 3 (i.e., they agreed to disagree), we reverse-coded the statement. Third, we combined the two criteria to determine whether a statement was accepted overall or not.
Fourth and finally, categorizing statements as accepted or not accepted served as a basis for further interpretation. We retained statements meeting both criteria but did not eliminate those not categorized as accepted. This information instead formed the basis for in-depth discussion on why statements did not elicit agreement. From this exploratory process, we derived 27 topic areas to detect commonalities and differences between statements, and to better understand and refine them. Identifying and merging similar topic areas produced 10 aggregated dimensions: Avoiding a “Blame-AI” Culture; Recommendation Output and Decision Frame; Objectivity versus Human Bias; Expectation Management and Strategy; Humans in the Loop; Understandability; Decision Explainability; Responsibility and Accountability; AI and Customer Experience; Customer Data.
Appendix A.1 shows (1) the categorization of the 10 generated dimensions into our three overarching themes and (2) the topic area from which a dimension emerged. Each dimension is backed by several statements, resulting in 19 propositions on AI in marketing. All propositions comprise several statements and are based on respondents’ assessments of these statements.
We note that both dimensions and propositions were derived via interpretation, after considering marketing managers’ assessment of statements. Hence, the generated dimensions and propositions require more formal evaluation. To verify whether propositions and dimensions are adequate, to make them more AI-specific, and to collect vivid examples, we implemented two focus groups (each comprising additional marketing managers with AI and ML experience).

6. Focus groups to validate dimensions and research propositions

Focus groups enable (usually 6–12) participants to jointly discuss a problem (Prince & Davies, 2001), and thereby explore a topic in-depth (Byrne & Rhodes, 2006) and offer rich comments (Al-Qirim, 2006), while providing a more comprehensive view of the collected data (Newby, Watson, & Woodliff, 2003). Given the methodological benefits, we conducted two focus groups with experienced managers at the intersection of AI, ML, and marketing to verify and refine our derived dimensions and propositions.

6.1. Study context and methodology

6.1.1. Sampling
Via LinkedIn, we invited managers to take part (free of charge) in a focus group on AI, ML, and marketing management. We selectively recruited 11 participants with proven experience in marketing and AI (a prerequisite of our study). Participants represented diverse industries (e.g., Banking/Insurance, IT/Technology, Pharmaceutical) and held various executive positions (e.g., Global Vice President for Marketing & Consumer Intelligence). This served to ensure that our propositions and dimensions were appropriate, to refine our propositions, and to provide current examples.
We conducted our focus groups (focus group I: n = 5, one female; focus group II: n = 6, two females) (1) to determine whether participants’ views converged or diverged, (2) to focus discussion on our propositions (heavily debated in the first focus group) and (3) to foster intense interaction through a limited number of participants (Prince & Davies, 2001). While participants could indicate their preferred meeting date, we took care that groups were sufficiently heterogeneous, without heterogeneity hampering free-flowing discussions (Morgan, 1996). Discussions lasted 93 (64) minutes for focus group I (II).
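The four-step evaluation procedure described in Section 5.2 can be sketched in a few lines of Python: mean-based category assignment, the 70% consensus criterion (with reverse-coding when respondents agree to disagree), and the standard-deviation cutoff carried over from the Delphi rounds. This is an illustrative reconstruction, not the authors' analysis code; the function names and example ratings are hypothetical.

```python
# Illustrative sketch of the statement-evaluation rules (not the authors' code).
from statistics import mean, stdev

def categorize(ratings):
    """Assign a statement to one of four categories by its mean 7-point rating."""
    m = mean(ratings)
    if m <= 3:
        return "reject"
    if m <= 4:
        return "tend to reject"
    if m <= 5:
        return "tend to accept"
    return "accept"

def consensus(ratings, threshold=0.70):
    """Return (reached, reverse_coded) for the 70% consensus criterion."""
    n = len(ratings)
    agree = sum(1 for r in ratings if r >= 5) / n     # rated 5, 6, or 7
    disagree = sum(1 for r in ratings if r <= 3) / n  # agreed to disagree
    if agree >= threshold:
        return True, False
    if disagree >= threshold:
        return True, True  # consensus by disagreement -> reverse-code
    return False, False

def sd_criterion(ratings, cutoff=1.72):
    """Delphi SD criterion: sample SD below the upper-quartile cutoff."""
    return stdev(ratings) < cutoff
```

On this logic, a statement rated [6, 6, 7, 5, 6] would be categorized as accept and meet the 70% criterion, while ratings clustered at 1 to 3 (as for statement CM2) would reach consensus by disagreement and be reverse-coded.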
6.1.2. Procedure
Two authors served as moderators, with the third obtaining the role of an observer. Moderators followed a guideline comprising our 10 dimensions and 19 research propositions (Appendix A.1). They read and elaborated on the propositions. We strongly emphasized active discussion, as well as making participants feel comfortable, to ensure they would openly share their views (e.g., Malhotra, 2019). Discussions were recorded with participants’ permission, transcribed, and subjected to thematic analysis (Braun & Clarke, 2006).

6.2. Results

We reduced the number of final propositions from 19 to 13. Following participants’ recommendations, we disentangled Recommendation Output and Decision Frame into two separate dimensions (see Fig. 4). Participants largely agreed on the remaining dimensions and their labels, and propositions were categorized into various dimensions and overarching themes.

6.2.1. Culture, Strategy, and Implementation
The overarching theme of Culture, Strategy, and Implementation encompasses five dimensions: (1) Avoiding a “Blame-AI” Culture, (2) Recommendation Output, (3) Decision Frame, (4) Objectivity vs. Human Bias, and (5) Strategy and Expectation Management. Avoiding a “Blame-AI” culture proved to be a major topic and was confirmed as an independent dimension by both focus groups. They agreed that people tend to blame humans less than AI, and are more tolerant of “mistakes” committed by humans compared to AI. Implementing a trial-and-error culture, although considered very difficult, was seen as potentially effective in preventing a “Blame-AI” culture. Calibrating management expectations upfront thus was confirmed as a critical starting point.

“For image recognition we require […] 100% accuracy. Even a human cannot reach this level, but he [the superior] demanded 100% accuracy. Comparability with humans is sometimes totally disconnected.”

The dimension Recommendation Output and Decision Frame sparked vivid debate in both focus groups and resulted in two separate dimensions. Participants disagreed on the proposition related to Recommendation Output. According to participants, marketing managers prefer AI to provide clear recommendations instead of estimates.
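The contested contrast between a clear recommendation and an estimate can be made concrete with a toy sketch. This is illustrative only; the scores dictionary and both function names are hypothetical and not taken from the study. The same model output supports either framing.

```python
# Two framings of the same hypothetical model output, mirroring the contrast
# between "Choose product A" (clear recommendation) and "Choose 90% product A"
# (estimate) debated by the focus groups.
scores = {"product A": 0.90, "product B": 0.10}  # hypothetical class probabilities

def clear_recommendation(scores):
    """Decision frame without probabilities: name the best option only."""
    best = max(scores, key=scores.get)
    return f"Choose {best}"

def estimate(scores):
    """Probability framing that exposes the margin behind the choice."""
    best = max(scores, key=scores.get)
    return f"Choose {scores[best]:.0%} {best}"
```

With the hypothetical scores above, `clear_recommendation(scores)` yields "Choose product A" while `estimate(scores)` yields "Choose 90% product A"; the underlying prediction is identical, only the decision frame changes.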
“You do not expect a human being to give a finite answer, so human beings can say, e.g., 70%, 30%. However, from an AI system you expect to get an answer on every single question.”

The dimension Decision Frame also triggered lively discussion. Participants argued both in favor of and against its related proposition, highlighting the need to further investigate how to define the ideal decision frame for AI recommendations.

“I would go for the estimate, because I would always ask why I should choose A or B. This is why we use AI because we want numbers to support our decision.”

“I would clearly say A or B. Top management has no time and […] they want a clear road to follow.”

All participants agreed with Objectivity vs. Human Bias and its proposition.

“I am sure there is a human bias that affects AI somehow, either due to training or the judgment in the end.”

Regarding Strategy and Expectation Management, participants partially agreed on the first proposition, that companies are now pursuing a more solution-oriented AI strategy. They noted that companies are now aligning technology with use cases to achieve direct benefits. Participants agreed that while management expectations are still high, they are gradually becoming more realistic.

“There are still high expectations. If there weren’t any, a lot of companies wouldn’t invest that much. But at the same time, it’s balanced by being more realistic.”

Participants fully agreed on the second proposition of Strategy and Expectation Management and emphasized the importance of raising awareness of ethical issues.

“Developing technology is quite steep but, once you get there, you get there. I think ethics involves lots of soft skills, lots of nuances and consideration of cultural differences. So, it’s a bit more complex.”

6.2.2. Decision-Making and Ethics
This theme consists of four dimensions: (1) Humans in the Loop, (2) Understandability, (3) Decision Explainability, and (4) Responsibility and Accountability. Humans in the Loop addresses the need to include human judgment in the decision process, thus sparking further lively debate. Opinion was divided in both focus groups. Agreement on this proposition appeared to depend on both the use case and the ethical and moral components of a decision.

“There are so many options that we can design […] how we get to a decision. It might be different for all the systems and you as a user might not have the final choice, but the designer had a choice.”

“For certain cases when you use AI, you’re not the final decision-maker in the end. This is exactly the problem where ethics come into play.”

Understandability highlights the difficulties of implementing AI systems not fully understood by managers and customers. Participants agreed that marketing managers should possess a basic understanding of AI to justify their decisions.

“I think this is really needed. Decision-makers don’t understand what they decide, that’s a big problem.”

“They think they understand it. This is called the Dunning-Kruger effect. […] I’m confronted with some hilarious requirements and projects because they don’t understand what they want and the impact.”

The dimension Decision Explainability is crucial for understanding the AI model and for ensuring transparency. Participants agreed on the related proposition, highlighting the need to enable a basic understanding of AI functionalities and thus a certain explainability. However, the level of explanation seemed to depend on the use case and the end-user.

“Who of us knows how a CPU works? I guess, nobody. Yet we are using it all day long via our smartphone and computer. Basically, everybody should know how a transistor works but the CPU is a complex system of a lot of transistors and so on, and it’s the same with AI. Maybe we all just need to get used to it.”

The challenges of the dimension Responsibility and Accountability emerge from the complexity of AI systems. Participants agreed with the dimension and noted various levels of responsibility (i.e., shared responsibility)—with managers being ultimately involved. Thus, responsibility and accountability point to statutory issues, which must be addressed and regulated to ensure transparent AI implementation.

“When we think of advertising, maybe it’s easier, because it’s neither critical nor life-threatening. But when it’s life-threatening and life-changing, then it’s hard to go to the judge and say, well, it’s not my fault, the machine gave the wrong recommendation. In my mind, this connects the discourse on explainability. Managers need to understand the impact because this is not just experimenting or child’s play but could have far-reaching consequences.”

6.2.3. Customer Management
This theme comprises (1) AI and Customer Experience and (2) Customer Data. AI proliferation can both increase and diminish customer experience. Participants agreed with the first proposition, stating that if AI and humans are not properly combined, this may adversely affect customer experience.

“We’re at the stage where I don’t think we have fully explored the full potential [of AI]. Depending on specific use cases, we may not need human intervention, but at the same time, depending on certain use cases, I think we’ll need human intervention, because if AI is left alone it will decrease the customer experience.”

The second proposition received no full agreement. While agreeing that transparency is key to customer experience, participants identified a tension between a seamless experience on the one hand and transparency on the other. Deciding which steps need to be explained to the customer, and in which detail, appears to significantly challenge marketing managers.

“I don’t think customers necessarily want to understand the whole process. People want to have a smooth and seamless experience. They want to know that their data isn’t being mishandled, that they have control over their own data, but what then happens, and how that’s used, whether it’s an AI system or someone manually changing things. I don’t think people necessarily want to think about those things too much either.”

The dimension Customer Data outlines the potential of generating insights through AI to enable better understanding customers. Managers agreed but were unsure how companies can move closer to their customers through AI without engaging in personal relationships.

“This [focus group] will be treated with discretion. That immediately made me feel at ease. […] I think that it’s the right thing to do, to be fully transparent and to tell your audience how their data will be managed.”
“I think we are all obliged to make it as easy as possible for customers to understand and to gather data.”

7. General Discussion

According to Katrina Lake, the celebrated founder and former CEO of Stitch Fix, her company, which sends consumers personalized parcels matching their fashion style (based on insights generated from both AI/ML and human stylists), is successful because it does not train “machines to behave like humans and certainly not […] humans to behave like machines” (2018, p. 40). Instead, she highlights the importance of acknowledging that “we are wrong sometimes—even the algorithm” (p. 40), and that her company’s most critical success factor is to keep learning. Our investigation supports Lake’s view by identifying challenges of working with AI and ML that are not limited to profit-seeking organizations but also extend to NGOs.
We employed a dual strategic and a behavioral focus on the role of AI and ML in marketing, as well as an inward and an outward perspective. We examined the organizational tasks of marketing managers, how firms might strategically use AI and ML to improve internal processes and reach out to customers, and how both marketing managers’ and customers’ reactions may influence AI and ML effectiveness and hence strategy. Our findings are based on responses from a panel of experts and from experienced AI and ML users (i.e., marketing executives working with AI/ML). We thus contribute to valuable conceptual research focusing either on a single perspective (Duan, Edwards, & Dwivedi, 2019), or on deriving research questions (Davenport et al., 2020; Loureiro, Guerreiro, & Tussyadiah, 2021; Huang & Rust, 2021) by identifying and structuring research propositions from both perspectives based on empirical research. Our derived propositions (Fig. 4) thus contribute to theory and practice.

7.1. Theoretical Contributions

7.1.1. Inward Perspective
From an inward perspective, our results illustrate organizational drivers of and barriers to deploying AI and ML in marketing, outline future developments and suggest boundary conditions. Consistent with previous research (Moon, 2003; Dietvorst, Simmons, & Massey, 2015) and Katrina Lake’s statement, we suggest that managers tend to be less tolerant of failure when dealing with AI than with humans. As humans favor algorithmic advice on objective or numerical tasks (Castelo, Bos, & Lehmann, 2019; Logg, Minson, & Moore, 2019; Newman, Fast, & Harmon, 2020), managerial expectations about AI and ML are likely even higher with such tasks, making managers less tolerant of errors. While our participants confirmed this boundary condition, we also found that managers’ tolerance of AI/ML relative to human failure may be even lower when tasks are perceived as easy. This finding was also confirmed by one focus-group participant, whose superior expects AI image recognition to be 100% accurate, a success rate unattainable for human beings.
Whether low failure tolerance of AI and ML results mainly from an inherent Blame-AI culture or unrealistic management expectations is a critical question, in theory and practice. It highlights the need to tackle the challenge at its root. If expectations about AI and ML become increasingly realistic (as our study suggests), the former mechanism may prevail. Thus, less tolerance of AI and ML failure would be a novel manifestation of defensive decision behavior (Ashforth & Lee, 1990) rather than a consequence of unrealistic expectations.
The dimensions Recommendation Output and Decision Frame deserve further attention. While respondents in our quantitative survey confirmed that managers prefer AI and ML to provide estimates rather than make a choice, this proposition was heavily debated in both focus groups. Some participants agreed that estimates are preferable because decision-makers want to know why or by which margin AI and ML prefer a certain course of action. Others simply preferred AI to make a clear recommendation. Supporters of the latter view argued that top managers either lack the time to discuss possibilities or prefer AI not to behave like humans. This discussion illustrates the topic’s relevance and implies that preferring AI recommendation output and decision frames is contingent on various factors (e.g., hierarchy level).
Across our studies, participants agreed that excluding human bias seriously challenges any AI system, highlighting the importance of the dimension Objectivity versus Human Bias. Du and Xie (2021) have addressed these biases on the product level when customers interact with AI. In addition, we find that these biases can occur on different levels and involve various factors: training data, system design, human use of a system, and human judgment. Surprisingly, focus-group participants were skeptical about this pivotal issue: They observed some human bias will always exist, either in developing an AI system (e.g., selecting biased training data; Buolamwini & Gebru, 2018) or subsequently in interpreting an AI and ML outcome.
As long as AI systems are fallible and involve human bias, delegating decisions to AI and ML has a strong ethical component. As part of our overarching theme Decision-Making and Ethics, we identified the dimension Humans in the Loop as critical. Across all three studies, we consistently found that managers prefer AI systems that theoretically give them the final choice. This does not imply they always want to have a final choice, but rather the chance to decide, depending on the situation.
Further, the dimension Responsibility and Accountability revealed a moderating variable. While humans may not need to make the final decision in domains such as personalization or (programmatic) advertising, delegating the final choice to humans is critical in life-threatening decisions or with decisions having a strong ethical component (Dwivedi et al., 2021). Whether the importance of having humans in the loop decreases as AI/ML improve is both an empirical and a relevant question.
In line with the EU Commission (2019), which holds that decision explicability is crucial for building and maintaining user trust in AI, two further dimensions proved relevant from an inward perspective: Understandability and Decision Explainability. Participants agreed that decision-makers need to understand an AI system in the managerial context. While this acknowledges that expert knowledge is not required, a gap currently exists between perceived and actual understanding. Participants not only complained that superior managers tend to display overconfidence but also noted that working with AI and ML on a too superficial level may increase confidence but not actual knowledge (i.e., Dunning–Kruger effect; Kruger & Dunning, 1999). Whether and to what extent an AI system explains its decision again depends on the specific context, essential from both an inward and an outward perspective. When receiving an AI-based recommendation, consumers may want to know why they have received it. Finding the right level of explanation is challenging, as evidenced by companies (e.g., Facebook) reluctant to share details of their algorithms, hence weakening a primary competitive advantage.

7.1.2. Outward Perspective
From an outward perspective, our findings relate to the overarching theme Customer Management. They imply that AI and ML can increase customer experience and alert companies that they need to set clear goals and thoroughly understand consumer behavior. Ideally, AI and ML facilitate understanding customer needs, which enables companies to address needs faster and in a more personalized way, thus improving customer experience (Kumar, Ramachandran, & Kumar, 2021). However, if AI and ML are not used correctly or for the wrong customer (Loureiro et al., 2021), efforts may backfire, and customer experience decreases. Examples include a formerly human customer service now operating as a chatbot failing to benefit customers (Khan & Iqbal, 2020); or an identity-relevant product (e.g., cooking device) that is fully automated and denies passionate chefs the possibility to demonstrate their skills (Leung, Paolacci, & Puntoni, 2018). In both cases, AI and ML diminished customer experience, suggesting that rather than focusing on external, customer-related goals, companies focused on their own goals or targeted the wrong customers.
Similarly, whether customer data and AI systems help companies
move closer to customers was heavily debated in our focus groups. Participants agreed that companies need a clear strategy to be effective in this regard, in particular if data privacy is concerned. While customers need to be convinced to disclose their data, future research needs to center on strategies for transparently obtaining data and for using such data for the benefit of companies and customers alike. Even though exploiting customers’ ignorance about their digital footprints may be alluring (i.e., “privacy paradox”; see Kokolakis, 2017), educating customers on how to handle their data carefully is an alternative path, one leading to trustful and sustainable relations.

7.2. Managerial Contributions

7.2.1. Inward Perspective
Organizations should recognize that managers tend to be less tolerant of AI/ML versus human failure, and that lower tolerance is not limited to objective and numerical tasks. To successfully tackle managers’ strict assessment of AI/ML, organizations need to evaluate whether this is due to unrealistically high expectations about AI/ML or whether managers are simply waiting for AI/ML to fail. The two

inward and outward perspectives by employing a strategic and behavioral focus. This should not imply that our themes, dimensions, perspectives, and focuses are independent. On the contrary, they are profoundly interrelated. For example, Waymo, a subsidiary of Google’s parent company Alphabet Inc. developing autonomous driving technologies, is keen to avoid a Blame-AI culture from an outward (i.e., the customer’s) perspective. With their business resting entirely on AI and customer experience, they try to find the right amount of decision explainability, so as not to alienate their customers, while gathering as much customer data as possible to prevent system failure. In case of failure, it is critical to have human support as close as possible (i.e., humans in the loop). Hence, further research should not view our propositions as separate but further explore when and how themes and dimensions are interrelated.
We aimed to identify overarching themes and dimensions, and ultimately research propositions, in order to stimulate further innovative research seeking to improve organizations’ approach to AI. While our 13 research propositions should be seen as providing directions for further research, four concrete research questions may be particularly and immediately relevant: First, given that managers appeared less tolerant of
mechanisms require distinct strategies to achieve a more balanced view AI and ML failure, and given that this is likely due to defensive decision-
of AI/ML. Regarding the former, organizations are advised to launch making than unrealistic expectations (as AI and ML become increasingly
training programs that increase AI/ML literacy among managers (Long advanced), further research could investigate whether providing man
& Magerko, 2020), in order to understand the limitations, reduce un agers with more autonomy or allowing them to make wrong decisions
realistic optimism, and manage expectations. Regarding the latter, or reduces a Blame-AI culture and boosts tolerance of AI and ML failure.
ganizations are likely to have cultural issues. Like Stitch Fix, they need to Second, the fact that experts remained skeptical of reducing human bias in
foster fruitful human–machine interaction (Lake, 2018), in order to re AI and ML illustrates the need for psychologists, AI technology experts,
gard AI/ML as augmenting rather than threatening managers’ work, and and data scientists to jointly address this multifaceted challenge. Third,
to ultimately prevent defensive decision-making, which reflects a blame future research would need to explore the roles of humans and AI in
AI decision-making culture. important topics such as decision-making: Do we want AI to play a low or
Further, our results illustrate that organizations need to carefully a high agentic role, and what might be critical contextual factors in this
consider the extent of information given to decision-makers. AI/ML regard (e.g., Novak & Hoffman, 2019)? Finally, future investigations
output should be personalized. While preferences are individual, our should raise awareness of and uncover conflicts between inward and
results point to the importance of hierarchy level in this regard: Top outward goals, as well as investigate how to mutually achieve superior
managers prefer clear decisions and middle managers favor probabilities operational excellence and customer experience using AI and ML.
and reasoning potential courses of action. Finally, organizations should
be aware that most algorithms involve a human bias, which most likely CRediT authorship contribution statement
unfolds already when developing AI systems (i.e., selecting biased
training data) and intensifies when interpreting AI/ML outcomes. Thus, Gioia Volkmar: Writing – review & editing, Writing – original draft,
particular emphasis should be placed on receiving objective training Visualization, Validation, Supervision, Software, Project administration,
data and on rationally evaluating AI/ML decisions. Methodology, Investigation, Formal analysis, Funding acquisition, Data
curation, Conceptualization. Peter M. Fischer: Writing – review &
7.2.2. Outward Perspective editing, Validation, Supervision, Software, Project administration,
While AI and ML aim to increase operational excellence and effi Methodology, Investigation, Formal analysis, Data curation, Conceptu
ciency from an inward perspective, they should enhance customer alization. Sven Reinecke: Conceptualization, Resources, Validation,
experience from an outward perspective. Firms must understand that Writing – review & editing, Funding acquisition.
operational efficiency and enhanced customer experience can create
tensions, leading to delicate tradeoffs. For example, as illustrated, AI- Declaration of Competing Interest
and ML-based chatbots should not be exclusively regarded as a means of
alleviating employee’s workload but of (1) enhancing customer expe The authors declare that they have no known competing financial
rience via a novel channel and (2) improving critical face-to-face interests or personal relationships that could have appeared to influence
customer touchpoints by enabling employees to invest more time in the work reported in this paper.
such interactions. Thus, companies need to consider both the inward
and outward perspectives of AI and ML from a holistic strategic angle. Acknowledgements
Similarly, as data availability is a prerequisite of successful ML and AI, a
key challenge for companies will be to treat and store data responsibly We gratefully acknowledge financial support from the Swiss National
and safely (Rauschnabel et al., 2022), and to convince customers to Science Foundation [2120481].
grant access to their data. Our research shows companies need a trans
parent strategy in this regard (Brough et al., 2022). Appendix A1. Research propositions derived based on results of
the quantitative study
7.3. Limitations, Conclusions, and Future Research
Culture, Strategy, & Implementation

Avoiding a Blame-AI Culture (Topics: Mistakes & Failure, Culture, Blame, Tasks)
1. Managers have a lower tolerance for failure when dealing with AI than with other humans; therefore, a trial-and-error culture is needed to learn from mistakes. (Propositions: CSI8, CSI9, CSI9_c, CSI9_d, CSI10, CSI10_d, CSI10_e)
2. Managers’ tolerance of failure when interacting with AI is lower when the task is perceived as easy. (Propositions: CSI9_a, CSI9_d)

Recommendation Output and Decision Frame (Topics: Decision Output, Decision Frame)
1. Managers prefer the output of AI recommendations stating probabilities rather than giving a certain result, thus increasing the acceptance and understandability of AI. (Propositions: CSI10_b)
2. AI-based decisions giving an estimation rather than the choice between different options are preferable.

Decision-Making & Ethics

Humans in the Loop (Topics: Humans in the loop, Control, Decision-making)
1. AI systems are able to make decisions and can play multiple roles in the decision-making process, but ultimately humans have the final choice. (Propositions: DME1_a, DME2, DME2_a)
2. The riskier a decision becomes regarding ethical and moral values, the less people will hand over decision-making to AI. (Propositions: DME6, DME6_a)

Understandability (Topics: Understandability, Transparency, Trust)
1. A conceptual framework will be necessary to ensure understanding in the managerial context and to explain and distinguish the use of AI systems and their impact on management. (Propositions: DME8, DME8_a, DME8_b)
2. Managers will need to demand that AI reasoning is made transparent to them in order to ensure the right understanding.

Objectivity vs. Human Bias (Topics: Human Bias)
1. Even as AI is more objective than humans, an inherent human bias can hardly be excluded from the equation. (Propositions: DME1, DME1_b)

Decision Explainability (Topics: Explainability, Acceptance, Implementation)
1. Explainability represents a challenge for the acceptance of AI and is crucial for ensuring transparency. (Propositions: DME4_a, CSI11)
2. It will become necessary to explain the functionalities of AI and its impact on management to ensure a sufficient understanding, as managers tend to be skeptical of AI systems. (Propositions: DME4_a, DME9)

Customer Management

AI & Customer Experience (Topics: Customer experience, Customer satisfaction)
1. If AI is not carefully managed and combined with human expertise to enhance the customer experience, it will significantly decrease the customer experience. (Propositions: CM3, CM3_a)
2. Explaining the decision-making process to customers will become very important to enhance the customer experience and increase transparency. (Propositions: CM7)

Customer Data (Topics: Data privacy, Data, Competition)
1. As companies get closer to their customers through AI, it will become critical for them to address the use of the gathered customer data in order to ensure a trustworthy relationship with greater value and data privacy. (Propositions: CM4, CM6_a)
2. Having access to a lot of (structured) customer data will be an important source of competitive advantage in the age of AI.
References

Abubakar, A. M., Elrehail, H., Alatailat, M. A., & Elçi, A. (2019). Knowledge Management, Decision-Making Style and Organizational Performance. Journal of Innovation & Knowledge, 4(2), 104–114.

Albus, J. S. (1991). Outline for a Theory of Intelligence. IEEE Transactions on Systems, Man, and Cybernetics, 21(3), 473–509.

Al-Qirim, N. (2006). Personas of E-commerce Adoption in Small Businesses in New Zealand. Journal of Electronic Commerce in Organizations, 4(3), 18–45.

American Marketing Association (AMA). (2017). Definitions of Marketing. Retrieved from https://www.ama.org/the-definition-of-marketing-what-is-marketing/. Accessed March 10, 2021.

Ashforth, B. E., & Lee, R. T. (1990). Defensive Behavior in Organizations: A Preliminary Model. Human Relations, 43(7), 621–648.

Baker-Brunnbauer, J. (2020). Management Perspective of Ethics in Artificial Intelligence. AI and Ethics, 1(4), 1–9.

Bellman, R. (1978). An Introduction to Artificial Intelligence: Can Computers Think? San Francisco, California: Thomson Course Technology.

Bittu, K. (2018). How Microsoft, Google, AWS and Facebook Are Battling to Democratize AI for Developers. Retrieved from https://www.bestdevops.com/how-microsoft-google-aws-and-facebook-are-battling-to-democratise-ai-for-developers/. Accessed March 1, 2021.

Bonaccorso, G. (2017). Machine Learning Algorithms. Birmingham, UK: Packt Publishing Ltd.

Brady, S. R. (2015). Utilizing and Adapting the Delphi Method for Use in Qualitative Research. International Journal of Qualitative Methods, 14(5), 1–6.

Bragazzi, N., Dai, H., Damiani, G., Behzadifar, M., Martini, M., & Wu, J. (2020). How Big Data and Artificial Intelligence Can Help Better Manage the COVID-19 Pandemic. International Journal of Environmental Research and Public Health, 17(9), 1–8.

Braun, V., & Clarke, V. (2006). Using Thematic Analysis in Psychology. Qualitative Research in Psychology, 3(2), 77–101.

Brough, A. R., Norton, D. A., Sciarappa, S. L., & John, L. K. (2022). The Bulletproof Glass Effect: Unintended Consequences of Privacy Notices. Journal of Marketing Research, 59(1), 1–16.

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 77–91.

Byrne, A., & Rhodes, B. (2006). Employee Attitudes to Pensions: Evidence from Focus Groups. Pensions: An International Journal, 11(2), 144–152.

Caley, M. J., O’Leary, R. A., Fisher, R., Low-Choy, S., Johnson, S., & Mengersen, K. (2014). What Is an Expert? A Systems Perspective on Expertise. Ecology and Evolution, 4(3), 231–242.

Camerer, C. F. (2019). Artificial Intelligence and Behavioral Economics. In A. Agrawal, J. Gans, & A. Goldfarb (Eds.), The Economics of Artificial Intelligence: An Agenda (pp. 587–608). Chicago, IL: University of Chicago Press.

Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-Dependent Algorithm Aversion. Journal of Marketing Research, 56(5), 809–825.

Charmaz, K. (2014). Constructing Grounded Theory. London: Sage.

Charniak, E., & McDermott, D. (1985). Introduction to Artificial Intelligence. Reading, MA: Addison Wesley.

Corbin, J., & Strauss, A. (2014). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Thousand Oaks: Sage Publications.

Côrte-Real, N., Ruivo, P., Oliveira, T., & Popovič, A. (2019). Unlocking the Drivers of Big Data Analytics Value in Firms. Journal of Business Research, 97(4), 160–173.

Cortez, R. M., & Johnston, W. J. (2017). The Future of B2B Marketing Theory: A Historical and Prospective Analysis. Industrial Marketing Management, 66(7), 90–102.

D’Alfonso, S. (2020). AI in Mental Health. Current Opinion in Psychology, 36(6), 112–117.

Dalkey, N., & Helmer, O. (1963). An Experimental Application of the Delphi Method to the Use of Experts. Management Science, 9(3), 458–467.

Daugherty, P. R., & Wilson, H. J. (2018). Human + Machine: Reimagining Work in the Age of AI. Boston: Harvard Business Press.

Davenport, T. H., & Kirby, J. (2016). Just How Smart are Smart Machines? MIT Sloan Management Review, 57(3), 21–25.

Davenport, T., Guha, A., Grewal, D., & Bressgott, T. (2020). How Artificial Intelligence Will Change the Future of Marketing. Journal of the Academy of Marketing Science, 48(1), 24–42.

De Bruyn, A., Viswanathan, V., Beh, Y. S., Brock, J. K. U., & von Wangenheim, F. (2020). Artificial Intelligence and Marketing: Pitfalls and Opportunities. Journal of Interactive Marketing, 51(3), 91–105.

Dharmaraj, V., & Vijayanand, C. (2018). Artificial Intelligence (AI) in Agriculture. International Journal of Current Microbiology and Applied Sciences, 7(12), 2122–2128.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err. Journal of Experimental Psychology: General, 144(1), 114–126.

Du, S., & Xie, C. (2021). Paradoxes of Artificial Intelligence in Consumer Markets: Ethical Challenges and Opportunities. Journal of Business Research, 129(5), 961–974.

Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial Intelligence for Decision Making in the Era of Big Data: Evolution, Challenges and Research Agenda. International Journal of Information Management, 48(5), 63–71.

Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., … Williams, M. D. (2021). Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy. International Journal of Information Management, 57(2), 1–47.

Etkin, J. (2016). The Hidden Cost of Personal Quantification. Journal of Consumer Research, 42(6), 967–984.

EU Commission (2019). Ethics Guidelines for Trustworthy AI. Retrieved from https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf. Accessed November 2, 2021.

Flostrand, A., Pitt, L., & Kietzmann, J. (2019). Fake News and Brand Management: A Delphi Study of Impact, Vulnerability and Mitigation. Journal of Product & Brand Management, 29(2), 246–254.

Fuchs, C., Schreier, M., & Van Osselaer, S. M. (2015). The Handmade Effect: What’s Love Got to Do With It? Journal of Marketing, 79(2), 98–110.

Gioia, D. A., Corley, K. G., & Hamilton, A. L. (2013). Seeking Qualitative Rigor in Inductive Research: Notes on the Gioia Methodology. Organizational Research Methods, 16(1), 15–31.

Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. Boston, MA: MIT Press.

Granulo, A., Fuchs, C., & Puntoni, S. (2019). Psychological Reactions to Human versus Robotic Replacement. Nature Human Behavior, 3(5), 1062–1069.

Gray, K. (2012). The Power of Good Intentions: Perceived Benevolence Soothes Pain, Increases Pleasure, and Improves Taste. Social Psychological and Personality Science, 3(5), 639–645.

Hildebrand, C., Efthymiou, F., Busquet, F., Hampton, W., Hoffman, D., & Novak, T. (2020). Voice Analytics in Business Research: Conceptual Foundations, Acoustic Feature Extraction, and Applications. Journal of Business Research, 121(9), 364–374.

Hoffman, D. L., & Novak, T. P. (2018). Consumer and Object Experience in the Internet of Things: An Assemblage Theory Approach. Journal of Consumer Research, 44(6), 1178–1204.

Holey, E. A., Feeley, J. L., Dixon, J., & Whittaker, V. J. (2007). An Exploration of the Use of Simple Statistics to Measure Consensus and Stability in Delphi Studies. BMC Medical Research Methodology, 7(1), 1–10.

Hsu, C. C., & Sandford, B. A. (2007). The Delphi Technique: Making Sense of Consensus. Practical Assessment, Research, and Evaluation, 12(1), 10–17.

Huang, M. H., & Rust, R. T. (2018). Artificial Intelligence in Service. Journal of Service Research, 21(2), 155–172.

Huang, M. H., & Rust, R. T. (2021). A Strategic Framework for Artificial Intelligence in Marketing. Journal of the Academy of Marketing Science, 49(1), 30–50.

Jabbar, A., Akhtar, P., & Dani, S. (2020). Real-Time Big Data Processing for Instantaneous Marketing Decisions: A Problematization Approach. Industrial Marketing Management, 90(7), 558–569.

Jamshed, S. (2014). Qualitative Research Method: Interviewing and Observation. Journal of Basic and Clinical Pharmacy, 5(4), 87–88.

Jarrahi, M. H. (2018). Artificial Intelligence and the Future of Work: Human-AI Symbiosis in Organizational Decision Making. Business Horizons, 61(4), 577–586.

Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence. Business Horizons, 62(1), 15–25.

Keding, C. (2020). Understanding the Interplay of Artificial Intelligence and Strategic Management: Four Decades of Research in Review. Management Review Quarterly, 71(4), 1–44.

Khan, S., & Iqbal, M. (2020). AI-Powered Customer Service: Does it Optimize Customer Experience? 8th International Conference on Reliability, Infocom Technologies and Optimization, 590–594.

Kim, T., Ruensuk, M., & Hong, H. (2020). In Helping a Vulnerable Bot, You Help Yourself: Designing a Social Bot as a Care-Receiver to Promote Mental Health and Reduce Stigma. CHI Conference on Human Factors in Computing Systems, 1–13.

Kokolakis, S. (2017). Privacy Attitudes and Privacy Behaviour: A Review of Current Research on the Privacy Paradox Phenomenon. Computers & Security, 64(1), 122–134.

Kolbjørnsrud, V., Amico, R., & Thomas, R. J. (2016). How Artificial Intelligence Will Redefine Management. Harvard Business Review, 2(8), 1–6.

Kruger, J., & Dunning, D. (1999). Unskilled and Unaware of it: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134.

Kumar, N. S. (2019). Implementation of Artificial Intelligence in Imparting Education and Evaluating Student Performance. Journal of Artificial Intelligence, 1(1), 1–9.

Kumar, V., Ramachandran, D., & Kumar, B. (2021). Influence of New Age Technologies on Marketing: A Research Agenda. Journal of Business Research, 125(3), 864–877.

Kurzweil, R. (1990). The Age of Intelligent Machines. Cambridge, MA: MIT Press.

Lake, K. (2018). Stitch Fix’s CEO on Selling Personal Style to the Mass Market. Harvard Business Review, 35–40.

Leslie, K. J., Kim, T., & Barasz, K. (2018). Ads That Don’t Overstep. Harvard Business Review, 96(1), 62–69.

Leung, E., Paolacci, G., & Puntoni, S. (2018). Man versus Machine: Resisting Automation in Identity-Based Consumer Behavior. Journal of Marketing Research, 55(6), 818–831.

Lilien, G. L., Rangaswamy, A., & De Bruyn, A. (2017). Principles of Marketing Engineering and Analytics. State College: DecisionPro.

Linstone, H. A., & Turoff, M. (1975). The Delphi Method (pp. 3–12). Reading, MA: Addison-Wesley.

Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organizational Behavior and Human Decision Processes, 151(2), 90–103.

Long, D., & Magerko, B. (2020). What is AI Literacy? Competencies and Design Considerations. CHI Conference on Human Factors in Computing Systems, 1–16.

Loureiro, S. M. C., Guerreiro, J., & Tussyadiah, I. (2021). Artificial Intelligence in Business: State of the Art and Future Research Agenda. Journal of Business Research, 129(5), 911–926.

Loureiro, S. M. C., Japutra, A., Molinillo, S., & Bilro, R. G. (2021). Stand by Me: Analyzing the Tourist-Intelligent Voice Assistant Relationship Quality. International Journal of Contemporary Hospitality Management, 33(11), 3840–3859.

Ma, L., & Sun, B. (2020). Machine Learning and AI in Marketing – Connecting Computer Power to Human Insights. International Journal of Research in Marketing, 37(3), 481–504.

Makarius, E. E., Mukherjee, D., Fox, J., & Fox, A. (2020). Rising with the Machines: A Sociotechnical Framework for Bringing Artificial Intelligence into the Organization. Journal of Business Research, 120(8), 262–273.

Malhotra, N. K. (2019). Marketing Research: An Applied Orientation (What’s New in Marketing?). New York, NY: Pearson.

Martin, K., & Murphy, P. (2016). The Role of Data Privacy in Marketing. Journal of the Academy of Marketing Science, 45(2), 135–155.

Martínez-López, F. J., & Casillas, J. (2013). Artificial Intelligence-Based Systems Applied in Industrial Marketing: A Historical Overview, Current and Future Insights. Industrial Marketing Management, 42(4), 489–495.

Mayring, P. (2014). Qualitative Content Analysis: Theoretical Foundation, Basic Procedures and Software Solution. Klagenfurt.

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Retrieved from http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html. Accessed October 15, 2020.

Meuser, M., & Nagel, U. (2009). The Expert Interview and Changes in Knowledge Production. In A. Bogner, B. Littig, & W. Menz (Eds.), Interviewing Experts (pp. 17–42). London: Palgrave Macmillan.

Mirchi, N., Bissonnette, V., Yilmaz, R., Ledwos, N., Winkler-Schwartz, A., & Del Maestro, R. (2020). The Virtual Operative Assistant: An Explainable Artificial Intelligence Tool for Simulation-Based Training in Surgery and Medicine. PLoS ONE, 15(2), 1–15.

Mitchell, T. (1997). Machine Learning. New York City, NY: McGraw Hill.

Møldrup, C., & Morgall, J. M. (2001). Risks of Future Drugs: A Danish Expert Delphi. Technological Forecasting and Social Change, 67(2–3), 273–289.

Moon, Y. (2003). Don’t Blame the Computer: When Self-Disclosure Moderates the Self-Serving Bias. Journal of Consumer Psychology, 13(1–2), 125–137.

Morgan, D. L. (1996). Focus Groups. Annual Review of Sociology, 22(1), 129–152.

Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2002). Verification Strategies for Establishing Reliability and Validity in Qualitative Research. International Journal of Qualitative Methods, 1(2), 13–22.

Newby, R., Watson, J., & Woodliff, D. (2003). Using Focus Groups in SME Research: The Case of Owner-Operator Objectives. Journal of Developmental Entrepreneurship, 8(3), 237–246.

Newman, D. T., Fast, N. J., & Harmon, D. J. (2020). When Eliminating Bias Isn’t Fair: Algorithmic Reductionism and Procedural Justice in Human Resource Decisions. Organizational Behavior and Human Decision Processes, 160(6), 149–167.

Nilsson, N. J. (1998). Artificial Intelligence: A New Synthesis. Burlington: Morgan Kaufmann.

Novak, T. P., & Hoffman, D. L. (2019). Relationship Journeys in the Internet of Things: A New Framework for Understanding Interactions Between Consumers and Smart Objects. Journal of the Academy of Marketing Science, 47(2), 216–237.

O’Connor, C., & Joffe, H. (2020). Intercoder Reliability in Qualitative Research: Debates and Practical Guidelines. International Journal of Qualitative Methods, 19(1), 1–13.

Okoli, C., & Pawlowski, S. D. (2004). The Delphi Method as a Research Tool: An Example, Design Considerations and Applications. Information & Management, 42(1), 15–29.

Paré, G., Cameron, A. F., Poba-Nzaou, P., & Templier, M. (2013). A Systematic Assessment of Rigor in Information Systems Ranking-type Delphi Studies. Information & Management, 50(5), 207–217.

Paschen, J., Kietzmann, J., & Kietzmann, T. C. (2019). Artificial Intelligence (AI) and Its Implications for Market Knowledge in B2B Marketing. Journal of Business & Industrial Marketing, 34(7), 1410–1419.

Perez-Vega, R., Kaartemo, V., Lages, C. R., Razavi, N. B., & Männistö, J. (2021). Reshaping the Contexts of Online Customer Engagement Behavior via Artificial Intelligence: A Conceptual Framework. Journal of Business Research, 129(5), 902–910.

Poba-Nzaou, P., Lemieux, N., Beaupré, D., & Uwizeyemungu, S. (2016). Critical Challenges Associated with the Adoption of Social Media: A Delphi Panel of Canadian Human Resources Managers. Journal of Business Research, 69(10), 4011–4019.

Prince, M., & Davies, M. (2001). Moderator Teams: An Extension to Focus Group Methodology. Qualitative Market Research: An International Journal, 4(4), 207–216.

Promberger, M., & Baron, J. (2006). Do Patients Trust Computers? Journal of Behavioral Decision Making, 19(5), 455–468.

Qu, S. Q., & Dumay, J. (2011). The Qualitative Research Interview. Qualitative Research in Accounting & Management, 8(3), 238–264.

Rai, A. (2020). Explainable AI: From Black Box to Glass Box. Journal of the Academy of Marketing Science, 48(1), 137–141.

Ransbotham, S., Kiron, D., Gerbert, P., & Reeves, M. (2017). Reshaping Business with Artificial Intelligence: Closing the Gap between Ambition and Action. MIT Sloan Management Review, 59(1), 1–17.

Rauschnabel, P. A., Babin, B. J., tom Dieck, M. C., Krey, N., & Jung, T. (2022). What Is Augmented Reality Marketing? Its Definition, Complexity, and Future. Journal of Business Research, 142(3), 1140–1150.

Rowe, G., & Wright, G. (1999). The Delphi Technique as a Forecasting Tool: Issues and Analysis. International Journal of Forecasting, 15(4), 353–375.

Rowe, G., Wright, G., & Bolger, F. (1991). Delphi: A Reevaluation of Research and Theory. Technological Forecasting and Social Change, 39(3), 235–251.

Russell, S. J., & Norvig, P. (2010). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice-Hall.

Rust, R. T. (2020). The Future of Marketing. International Journal of Research in Marketing, 37(1), 15–26.

Schmitt, B. (1999). Experiential Marketing. Journal of Marketing Management, 15(1–3), 53–67.

Shah, D., & Murthi, B. P. S. (2021). Marketing in a Data-Driven Digital World: Implications for the Role and Scope of Marketing. Journal of Business Research, 125(3), 772–779.

Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in Knowledge Work: Human–AI Collaboration in Managerial Professions. Journal of Business Research, 125(3), 135–142.

Stahl, B. C., Andreaou, A., Brey, P., Hatzakis, T., Kirichenko, A., Macnish, K., Shaelou, L., Patel, A., Ryan, M., & Wright, D. (2021). Artificial Intelligence for Human Flourishing – Beyond Principles for Machine Learning. Journal of Business Research, 124(1), 374–388.

Strauss, A., & Glaser, B. (1967). The Discovery of Grounded Theory. Chicago: Aldine Publishing Company.

Torra, V., Karlsson, A., Steinhauer, J., & Berglund, S. (2019). Artificial Intelligence. In A. Said, & V. Torra (Eds.), Data Science in Practice (pp. 9–26). Springer.

Van Giffen, B., Herhausen, D., & Fahse, T. (2022). Overcoming the Pitfalls and Perils of Algorithms: A Classification of Machine Learning Biases and Mitigation Methods. Journal of Business Research, 144(4), 93–106.

Van Osselaer, S. M., Fuchs, C., Schreier, M., & Puntoni, S. (2020). The Power of Personal. Journal of Retailing, 96(1), 88–100.

von der Gracht, H. (2012). Consensus Measurement in Delphi Studies: Review and Implications for Future Quality Assurance. Technological Forecasting and Social Change, 79(8), 1525–1536.

Waytz, A. (2019). When Customers Want to See the Human Behind the Product. Harvard Business Review, 97(3).

Weber, R. P. (1990). Basic Content Analysis. Newbury Park: Sage.

Wierenga, B. (2011). Managerial Decision Making in Marketing: The Next Research Frontier. International Journal of Research in Marketing, 28(2), 89–101.

Winkler, J., & Moser, R. (2016). Biases in Future-Oriented Delphi Studies: A Cognitive Perspective. Technological Forecasting and Social Change, 105(4), 63–76.

Wortmann, Ch., Fischer, P. M., & Reinecke, S. (2018). The Holy Grail in Decision-Making? How Big Data Changes Decision Processes of Marketing Managers. European Marketing Academy Conference, Glasgow, UK.

Wright, L. T., Robin, R., Stone, M., & Aravopoulou, D. E. (2019). Adoption of Big Data Technology for Innovation in B2B Marketing. Journal of Business-to-Business Marketing, 26(3–4), 281–293.

Xiao, L., & Kumar, V. (2021). Robotics for Customer Service: A Useful Complement or an Ultimate Substitute? Journal of Service Research, 24(1), 9–29.

Gioia Volkmar is a Ph.D. student at the University of St. Gallen, Switzerland. She holds an M.Sc. in mechanical engineering combined with business administration from the Technical University of Darmstadt (Germany). Gioia has gained practical experience in marketing and technical sales for medical devices. Her research interests include Artificial Intelligence and managerial decision-making in marketing, with a focus on psychological and ethical aspects.

Dr. Peter Mathias Fischer is Senior Lecturer and member of the management team at the Institute for Marketing and Customer Insight at the University of St. Gallen as well as guest professor at HEC Paris. In addition, he served as a guest professor in the MBA program of the Wharton Business School. Peter’s particular areas of research include how managers and consumers alike react to data and technology, and decision biases in general. For instance, he is interested in the behavioral effects of AI and conversational interfaces, intended to improve human–machine interactions to the benefit of their users. His work has been published in leading journals such as the Journal of International Business Studies.

Dr. Sven Reinecke is Associate Professor of Marketing at the University of St. Gallen, Switzerland, and Executive Director of the Institute of Marketing and Customer Insights. He is the academic head of the English track of the Master of Arts in Marketing Management and is managing director of the Marketing Review St. Gallen. Sven teaches strategic marketing and marketing management control in the executive education programs at several leading business schools. His research focuses on strategic marketing, price management, marketing performance management, and management decision behavior. He has published in several academic journals, including the Journal of Marketing Research and the International Journal of Research in Marketing.