
Future of Business and Finance

Ralf T. Kreutzer

Understanding
Artificial
Intelligence
Fundamentals, Use Cases and Methods
for a Corporate AI Journey
Future of Business and Finance
The Future of Business and Finance book series features professional works aimed
at defining, analyzing, and charting the future trends in these fields. The focus is
mainly on strategic directions, technological advances, challenges and solutions
which may affect the way we do business tomorrow, including the future of
sustainability and governance practices. Mainly written by practitioners, consultants
and academic thinkers, the books are intended to spark and inform further discussions
and developments.
Ralf T. Kreutzer

Understanding Artificial
Intelligence
Fundamentals, Use Cases and Methods
for a Corporate AI Journey
Ralf T. Kreutzer
Campus Schöneberg
Hochschule für Wirtschaft und Recht
Berlin, Germany

ISSN 2662-2467     ISSN 2662-2475 (electronic)


Future of Business and Finance
ISBN 978-3-658-46130-0    ISBN 978-3-658-46131-7 (eBook)
https://doi.org/10.1007/978-3-658-46131-7

Translation from the German language edition: “Künstliche Intelligenz verstehen” by Ralf T. Kreutzer,
© Der/die Herausgeber bzw. der/die Autor(en), exklusiv lizenziert an Springer Fachmedien Wiesbaden
GmbH, ein Teil von Springer Nature 2023. Published by Springer Gabler Wiesbaden. All Rights
Reserved.

This book is a translation of the original German edition “Künstliche Intelligenz verstehen,” 2nd edition,
by Ralf T. Kreutzer, published by Springer Fachmedien Wiesbaden GmbH in 2023. The translation was
done with the help of an artificial intelligence machine translation tool. A subsequent human revision was
done primarily in terms of content, so that the book will read stylistically differently from a conventional
translation. Springer Nature works continuously to further the development of tools for the production of
books and on the related technologies to support the authors.

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Fachmedien
Wiesbaden GmbH, part of Springer Nature 2024
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Fachmedien Wiesbaden GmbH,
part of Springer Nature.
The registered company address is: Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany

If disposing of this product, please recycle the paper.


Contents

Part I Fundamentals of Artificial Intelligence


1 What Is Meant by Artificial Intelligence and How Can It Be Used?  3
1.1 What Is the Core of Artificial Intelligence?  5
1.1.1 Manifestations of Intelligence  5
1.1.2 Nature and Abilities of Artificial Intelligence  7
1.1.3 Types of Learning  13
1.1.4 Generative AI  21
1.2 Summary  21
1.3 What Goals Can Be Achieved with Artificial Intelligence?  22
1.4 Summary  26
1.5 Fields of Application of Artificial Intelligence  26
1.5.1 Natural Language Processing (NLP): Processing of Natural Language  28
1.5.2 Natural Image Processing—Computer Vision: Image Processing  37
1.5.3 Expert Systems  47
1.5.4 Robotics: Robots  49
1.5.5 Affective Computing: Emotional Artificial Intelligence  57
1.6 Summary  60
1.7 Ethics and Fairness of Artificial Intelligence  60
1.7.1 Current State of Research on Ethics and Fairness of Artificial Intelligence  60
1.7.2 Intelligence Explosion, Superintelligence, and Transhumanism  67
1.7.3 Explainable Artificial Intelligence  71
1.7.4 Reduction of Noise as a Success Factor  76
1.8 Summary  77
1.9 Where Do Companies in Germany Stand in the Global AI Environment?  78
1.10 Summary  93
1.11 EU Legal Framework for the Use of AI  94
1.12 Summary  101
References  102
2 Drivers of Artificial Intelligence  107
2.1 Moore's Law and the Effects of Exponentiality  108
2.2 Digitalization and Dematerialization of Products, Services, and Processes  109
2.3 Networking of Products, Services, Processes, Animals and People  112
2.4 Big Data  116
2.5 New Technologies  124
2.6 Necessity of Sustainable Corporate Governance  137
2.7 Summary  141
References  141

Part II Fields of Application of Artificial Intelligence


3 Production, Maintenance, Mobility, Transport, Agriculture  145
3.1 Production and Maintenance Sector  145
3.1.1 Trend Towards Smart Factory  145
3.1.2 Networking as a Core Element of a Smart Factory  150
3.1.3 Technologies of Smart Manufacturing  153
3.1.4 SheIn: Smart Manufacturing as a Business Model  179
3.1.5 Overall Assessment of the Development Towards Smart Manufacturing  182
3.2 Summary  190
3.3 Mobility and Transport Sector  190
3.3.1 Automation of Mobility  190
3.3.2 Developments of Autonomous Vehicles  193
3.3.3 Innovative Logistics Solutions  197
3.3.4 Predictive Risk Intelligence  201
3.4 Summary  203
3.5 Agriculture  203
3.5.1 AI Applications in Agriculture  203
3.5.2 Status of AI Use in Agriculture  205
3.6 Summary  208
References  208
4 Marketing, Sales, Customer Service  211
4.1 Customer-Company-Expectation Matrix as a Framework  211
4.2 Customer Service: From Simple Chatbot to Digital Assistant  213
4.2.1 Chatbots and Digital Assistants  213
4.2.2 Large Language Models: ChatGPT  226
4.2.3 Social Bots and Social Media Listening  237
4.2.4 From Avatar to Digital Human  240
4.3 Summary  241
4.4 Design of Voice Marketing  242
4.4.1 Corporate and Brand Language  244
4.4.2 Voice Branding  245
4.4.3 Voice Search and Voice Engine Optimization  246
4.4.4 Voice Commerce  256
4.4.5 Voice Integration  262
4.4.6 Voice Controlling  263
4.5 Summary  268
4.6 Voice Identification and Speech Analysis  268
4.7 Summary  274
4.8 With AI to Effective Language  274
4.8.1 System for Increasing Text Performance  274
4.8.2 Solutions for AI-Supported Translation  276
4.8.3 Concepts for AI-Supported Text Generation  277
4.9 Summary  280
4.10 Conversion of Text into Image and Video Content: AI Creation of Music  281
4.10.1 AI-Supported Development of Images  281
4.10.2 AI-Supported Development of Videos  285
4.10.3 AI-Supported Development of Music  288
4.11 Summary  291
4.12 AI-Supported Customer Acquisition and Advertising Design  292
4.12.1 AI-Supported Customer Acquisition  292
4.12.2 AI-Supported Analysis of Advertising Messages  296
4.12.3 AI-Supported Advertising Planning  297
4.12.4 AI Solutions for the Post-Cookie Era  298
4.13 Summary  300
4.14 Dynamic Pricing  301
4.15 Summary  303
4.16 Optimization of Product Development and Customer Experience  303
4.17 Summary  305
4.18 Retail  305
4.19 Summary  311
4.20 Hotel and Hospitality Industry  311
4.21 Summary  314
References  314
5 Health and Safety  319
5.1 Health Sector  319
5.1.1 Diagnosis and Medical Imaging  319
5.1.2 Precision Medicine and Personalized Therapy  323
5.1.3 Use of Robots in the Medical Sector  324
5.1.4 Patient Monitoring and Early Detection of Complications  326
5.1.5 Support in Drug Research  326
5.1.6 Improvement of Health Systems and Medical Administration  328
5.1.7 Bio-Hack and Self-Optimization  328
5.2 Summary  333
5.3 Security Sector—Social Scoring  333
5.3.1 AI Use in the Security Sector—Face and Object Recognition  333
5.3.2 Social Credit System in China  337
5.4 Summary  340
References  341
6 Energy Sector and Smart Home  343
6.1 AI-Supported Applications in the Energy Sector  343
6.2 AI-Supported Solutions for Smart Homes  345
6.3 Summary  354
References  355
7 Education and Knowledge Transfer  357
7.1 Challenges in the Education Sector  357
7.2 Development of Adaptive Learning Systems—Incl. Automated Evaluation  358
7.3 AI-Based Administration and Student Support  362
7.4 Development of Immersive Learning Environments—VR and AR Solutions  363
7.5 Summary  366
References  366
8 Human Resource Management (HRM)  367
8.1 Strategic HR Planning  368
8.2 AI-Supported Recruitment  369
8.3 Capture and Analysis of Resumes  375
8.4 Matching of Candidates and Job Requirements  377
8.5 Onboarding New Employees  378
8.6 Talent Management—Training and Development  379
8.7 Performance Management  381
8.8 Incentivization  383
8.9 Employee Retention Management  383
8.10 Status of the Use of (AI-Based) HR Technologies in Germany  384
8.11 Summary  389
References  389
9 Financial Services Sector  391
9.1 Wealth Management—Asset Management—Robo Advisor  392
9.2 High-Frequency Trading  393
9.3 Fraud Detection and Fraud Prevention  394
9.4 AI-Supported Process Automation in the Financial Sector  396
9.5 Summary  398
References  399
10 Military Sector  401
10.1 AI Application Areas in the Military Sector  401
10.2 Autonomous Weapon Systems  402
10.3 Summary  405
References  406
11 Metaverse  407
11.1 Characterization of the Metaverse  407
11.2 Interesting AI Applications in the Metaverse  409
11.3 Will the Metaverse Become a Success Story?  412
11.4 Summary  414
References  414

Part III AI Challenge – How to Anchor Artificial Intelligence in the Company
12 Developing an AI Journey in your Own Company  417
12.1 Phase Concept for the Company's Own AI Journey  417
12.2 Analysis of the Status Quo  419
12.2.1 Analysis of the Impact of Artificial Intelligence on Competitive Dynamics  420
12.2.2 3-Horizon Model  421
12.2.3 AI Maturity Map  423
12.3 Building AI Basics  428
12.3.1 Building Comprehensive in-House AI Competence  428
12.3.2 Development of Goals and Strategies for AI Use  429
12.3.3 Development of AI Governance  432
12.4 Optimize Processes and Realign  434
12.4.1 Identification of Exciting Use Cases  434
12.4.2 Access to AI Solution Modules  438
12.4.3 Funding Programs for AI Applications  444
12.5 Enrichment of Products and Services through AI: Product and Service Innovations—Embedding AI in the Business Model  445
12.6 Accompanying Change Management  445
12.7 Controlling the AI Journey  446
12.8 Summary  449
References  450
13 Outlook  453
13.1 Summary  457
References  457

14 Data, Law and Responsibility—Legal Challenges in the Use of Artificial Intelligence  459
14.1 The Ambivalence of Artificial Intelligence and the Role of Law  459
14.2 Artificial Intelligence and Law: Concept, Definition, Legal Questions  460
14.2.1 Concept and Definition  460
14.2.2 Legal Questions Regarding the Use of AI  460
14.3 Tsunami of the Information Society  461
14.3.1 Copyright  461
14.3.2 Related Rights  462
14.3.3 Obligation to Label  463
14.4 Generative Artificial Intelligence: Regulation Is at a Crossroads  463
14.5 Protection of Personal Data: High Requirements for Automated Decision-Making Systems  464
14.6 Reactions from GDPR Legislators and Business  465
14.7 Red Flags in Handling AI Tools  465
14.8 Outlook  466
References  467

Index  469
Part I
Fundamentals of Artificial Intelligence
1 What Is Meant by Artificial Intelligence and How Can It Be Used?

Summary The core of Artificial Intelligence is to take over tasks that were previ-
ously reserved for humans. Through the ability of AI systems to analyze large data
sets, recognize patterns, and generate “new” content, a wide range of applications
opens up. These range from processing human language and images to expert sys-
tems and robots that can independently perform a multitude of tasks. Ethical bound-
aries must be considered when using AI.

Impossible is not a fact! It is an opinion!

Artificial Intelligence is a very comprehensive research field, which is increasingly
emerging from the realm of specialists. Often we encounter Artificial Intelligence in
a way that initially does not make us think of AI. We just notice that something is
easier and more straightforward than before. Just think of the digital personal
assistants like Alexa, Google Home or Siri, which play the desired piece of music
from Spotify via voice command, create shopping lists and initiate purchases. Via
voice command, you can also arrange appointments, search for terms and even com-
pletely control your smart home. We speak of digital personal assistants because
these are no longer physically experienced assistants. In some cases, these assis-
tants—like Alexa and Google Home—have now received their own “home” in the
form of an electronic product. Then we speak of a Smart Speaker. The speaker is
“smart” in the sense that it is connected to the internet.
When you use translation aids on the internet, be it Google Translate or the
German start-up DeepL, you are also accessing AI applications. With the triumph of
ChatGPT, millions of people have experienced the quality of language processing
and generation that has already been achieved. At this point, it should also be

Supplementary Information The online version contains supplementary material available at
https://doi.org/10.1007/978-3-658-46131-7_1.

clarified what the abbreviation GPT actually means: Generative Pretrained
Transformer. A "transformer" here represents an architecture for machine learn-
ing. This is already pre-trained (“pretrained”), so unlike many chatbots, training by
the user is no longer necessary. In addition, GPT is creatively active, hence the
adjective “generative”.
Also in the case of facial recognition (for example, for access control in com-
panies or for activating your smartphone), AI algorithms for image recognition are
used. In the evaluation of medical records, X-rays, and CT scans by computers,
AI-supported expert systems are used.
A seemingly inexhaustible field of application for artificial intelligence is
robots. Their use is no longer limited to production and logistics tasks. Service
robots are being used in more and more areas—such as in hotels and hospitals.
Robots are also increasingly being used in private households. Autonomous
driving is also a field of application for AI. The corresponding vehicles use a robot
equipped with perception systems that takes over the tasks of a human driver.
The greatest challenge of artificial intelligence is still the comprehensive repli-
cation of the human brain. The technical term for this is Artificial General
Intelligence. When or if this will ever be possible is a matter of wide-ranging
estimates!
Is it worth it for you as a student or manager, is it worth it for companies and
countries to engage intensively with developments around artificial intelligence?
We unequivocally say: “Yes”! We would like to substantiate this assessment with
the following figures. Here, the market for artificial intelligence takes into account
the following aspects:

• Type of offer: Hardware, software, services
• Technology used: machine learning, natural language processing, contextual
computing, computer vision
• Place of use: on-site, in the cloud
• Company size: small, medium and large companies
• Areas of application in companies: Production, logistics, marketing and sales,
security, finance, law, human resources, others

The thus defined global market for artificial intelligence was estimated at about
120 billion US dollars in 2022. By the year 2030 a market volume of approx. US$
1600 billion is expected. This corresponds to an annual growth rate of almost
40% in the period from 2022 to 2030. It is expected that the Asia-Pacific market
will grow the fastest from 2022 to 2030 at an annual rate of 42% (see [1]).
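
The cited growth rate can be checked with a short calculation. The following minimal sketch only verifies the plausibility of the figures quoted above; the formula is the standard compound annual growth rate:

```python
# Compound annual growth rate (CAGR) implied by the market figures cited above:
# approx. US$ 120 billion (2022) growing to approx. US$ 1600 billion (2030).
start_value = 120    # market volume 2022 in billion US$
end_value = 1600     # expected market volume 2030 in billion US$
years = 2030 - 2022  # 8 years

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")  # roughly 38%, i.e. "almost 40%"
```
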
Before we delve deeper into the various fields of application of artificial intel-
ligence, we first clarify what is exactly meant by artificial intelligence and what
goals can be achieved through it. Subsequently, various fields of application are
considered to illustrate the range of AI usability. In addition, we examine where
Germany stands today in the use of artificial intelligence and what legal frame-
work is emerging in the European Union (EU). From this, a comprehensive
need for action arises—for individuals, companies, the economy, politics, and
society alike.

1.1 What Is the Core of Artificial Intelligence?

Before we approach "artificial" intelligence, it is worth taking a look at intelligence
itself. We should not narrow our view and only refer to the intelligence quotient.
Rather, it is important to consider the diverse dimensions of human intelligence.

1.1.1 Manifestations of Intelligence

In the multiple intelligence approach the following manifestations of intelli-
gence are distinguished (cf. [2], p. 488; [3]):

• Linguistic Intelligence
This refers to the ability to analyze information and understand and create con-
tent in oral and written form, such as in the form of books, presentations, and
speeches. A prerequisite for this is to develop a sensitivity for spoken and written
language. This also includes the ability to learn a language and use it context-
specifically. Linguistic intelligence even enables people to exchange ideas about
things that never existed, do not yet exist, or that may only occur in the future.
Through language, humans are able to set goals for themselves, for example.
• Musical Intelligence
Musical intelligence describes the talent to make music and compose. Also, the
attribution of meaning to different types of music is part of this.
• Logical-mathematical Intelligence
This includes the ability to logically analyze problems and perform mathemati-
cal operations. Also, the competence to work on scientific questions, recognize
cause-effect relationships, and make deductions for the future is part of this form
of intelligence.
• Visual-spatial Intelligence
This intelligence includes the ability to grasp the structure and extent of spaces
and to move safely within them.
• Bodily-kinesthetic Intelligence
This intelligence describes the ability to use the body or parts of it to solve prob-
lems or to create something new. No athletic performance is conceivable without
this intelligence.
• Interpersonal/Human Intelligence
Interpersonal intelligence refers to the ability to recognize and understand the
moods, desires, motives, and intentions of other people. This competence is
indispensable for peaceful and constructive coexistence among people.
• Intrapersonal Intelligence
Intrapersonal intelligence enables people to recognize and understand their own
moods, desires, motives, and intentions. This type of self-awareness can lead to
personal growth processes.
• Naturalistic Intelligence
This intelligence describes the ability to grasp, distinguish, and evaluate natural
phenomena. As a result, the development of the earth, the occurrence of ebb and
flow, the emergence of tsunamis, and lunar and solar eclipses can no longer be
attributed to gods, but to natural laws.
• Existential/Spiritual Intelligence
This intelligence enables humans to reflect on their own existence. For example,
one can reflect on the meaning of life.
• Creative/Inventive Intelligence
It describes the ability of humans to produce new achievements in words and
deeds (such as music, text, film, dance, painting, sculpture, architecture).
Through this intelligence, humans can also think about how their own life or
nations would have developed if certain events had not occurred or had occurred
differently. People can also invent stories that have little or nothing to do with
real life. A beautiful example of this is the novels about Harry Potter.

The different manifestations of intelligence can also be divided into the following
five levels:

1. Thinking and Deduction
This level refers to the ability to draw logical conclusions and solve problems
through structured analysis. Deduction is a logical inference process, where
specific conclusions are derived from general premises.
2. Learning and Induction
At this level, new knowledge is acquired and used to recognize patterns and
make predictions for future events or behaviors. Induction is an inference pro-
cess, that derives general rules or principles based on individual, specific obser-
vations or data points.
3. Creativity and Cognition
This involves the ability to generate new ideas or solutions and to understand
the world in a more complex and abstract way.
4. Awareness and Perception
This level refers to the ability to perceive one’s own existence and the environ-
ment and to react to it.
5. Self-Awareness and Self-Perception
At the highest level of intelligence, there is a reflection on one’s own self, which
includes the ability for self-criticism and understanding of one’s own emo-
tional state.

Humans possess—to varying degrees—all five levels of intelligence. Today's AI
systems are primarily at levels 3 and 4 (here only "perception"), without yet having
their own consciousness.

Note Box
Human intelligence arises from imagination and knowledge about the con-
nections in the real world. This requires a deep understanding—a world-
view—which artificial intelligence still lacks today.

If the AI agent were a human with emotions and its own consciousness, it would
therefore look enviously and sadly at this rich treasure of human intelligence!
At this point, it is not intended to discuss to what extent these different manifes-
tations of intelligence exist independently of each other. The list is rather intended
to give a feeling for the diversity of human intelligence. Against this background,
it will take many years or decades before machines will be able to cover the fields
of intelligence innate and learned by humans in their entirety. Here we are thinking
of the already mentioned Artificial General Intelligence. Therefore, in my opinion,
it is not only premature, but also exaggerated, to focus mainly on horror scenarios
in the AI discussion, in which AI machines take over world domination. A very
readable example of this comes from the best-selling author Frank Schätzing and
bears the beautiful title “The Tyranny of the Butterfly”.

Note Box
The study of Artificial Intelligence (AI) covers two areas. It includes, first,
research on how "intelligent" behavior solves problems. Based on the
knowledge gained in this way, systems are developed that should automatically
generate "intelligent" solutions. However, the search for solu-
tions is not limited to developing them as humans would. Instead, the aim is
to find results that may lie outside the solution space conceived by humans.
The core of Artificial Intelligence is software. And this software usually
requires large volumes of data to arrive at convincing solutions. To process
this data through the algorithms of Artificial Intelligence, very powerful
computers are needed. All of this is available today—in many countries—to
a high degree.
In addition, one must be clear about one thing: at their core, AI applications
are always about pattern recognition!
As already mentioned: All AI systems—still today—show a lack of a gen-
eral worldview—a worldview that is inherent to humans.

1.1.2 Nature and Abilities of Artificial Intelligence

There are various approaches to conceptually grasp the nature of Artificial
Intelligence. The following very flexible definition by Rich [4] is best suited for the
basic clarification. It reads:

Artificial Intelligence is the study of how to make computers do things at which, at the
moment, people are better.

This characterization of Artificial Intelligence illustrates that the boundaries of what
is possible are constantly being redefined. Or did you expect 10, 15 or 20 years ago
that a chatbot—if we had known it back then—could convincingly solve scientific tasks
in seconds?
A slightly more precise definition of Artificial Intelligence reads:

Artificial Intelligence refers to the ability of a machine to perform cognitive
tasks that we associate with the human mind.
Gartner [5] defines Artificial Intelligence as follows:

AI is the application of advanced analysis- and logic-based techniques (including machine
learning), to interpret events, support and automate decisions, and take action. (Translation
from English by the author)

In this work, the definition of the European Parliament is followed, which character-
izes Artificial Intelligence as follows [6]:

Artificial Intelligence is the ability of a machine to mimic human abilities such as logical
thinking, learning, planning, and creativity. (Translation from English by the author)

These abilities of Artificial Intelligence include perception capabilities as well as
the abilities to reason, to learn independently and thus to find problem solutions on
its own (see also [7]). Five types of evaluations—combined or isolated—can
be used:

• Description (description of the "status quo")
• Inspection (analysis of the "why")
• Prediction (prediction of the "will")
• Prescription (recommendation of the "what")
• Generation (creation of "new")

[Fig. 1.1 Classification of various tasks according to complexity and achievable value for a company: the stages Description (description of the "actual"), Inspection (analysis of the "why"), Prediction (prediction of the "will") and Prescription (recommendation of the "what") are arranged along a diagonal from low to high complexity of the task and from low to high value for the company]

Figure 1.1 shows that—starting from "Description" and moving to "Generation"—both the complexity
of the task and often the achievable value for the company increase. The
use of Machine Learning and Deep Learning enables the creation of predictions as
well as the derivation of recommendations. This allows an AI-controlled process
to make automated decisions without the need for a human decision-maker to take
action. In addition, AI systems can create “new” content.
During the development of Artificial Intelligence, an interesting phenomenon
occurred. The first tasks of Artificial Intelligence were difficult for humans, but easy
for AI systems to handle. This includes, for example, complex computational pro-
cesses. For the AI systems, it was very easy to process large amounts of data based
on rules. On the other hand, it is often much more difficult for computers to accom-
plish tasks that are easy for humans to master. This is the case with language and
object recognition. A human can very easily recognize when a physical object is a
table and when it is a chair. Both usually have four legs, but the function is different.
To learn this, the AI system often has to be shown thousands of images. Nevertheless,
this system cannot—yet—recognize the actual "meaning" of objects. The reason
for this lies in the—still—missing worldview.
If an AI system has learned the distinction between a German Shepherd and
a wolf through a large number of photos, the system can be easily misled if a
German Shepherd is seen in a picture with snow. Then it can happen that the German
Shepherd is recognized as a wolf, if during the training of the AI system many pho-
tos of wolves also showed snow. A wolf was then not recognized based on specific
features, but because the pattern “snow” was recognizable in the photos. Or the
other way around: If a wolf in a photo is wearing a leash collar, the AI system will
certainly suspect a German Shepherd, because on the training photos for the AI
algorithms, there were hardly any wolves with collars, but German Shepherds with
collars. So much for (current) computer intelligence.
The contents of Artificial Intelligence are best approached through Fig. 1.2. An
important element of Artificial Intelligence are the so-called neural networks. This
term originally comes from neuroscience. There, a neural network refers to the con-
nection between neurons, which as part of the nervous system perform certain
Perform functions. Computer science tries to replicate such neural networks. A spe-
cial feature of them is that information in the networks is not processed via linear
functions. In addition, parallel processing of information takes place, which is made
possible by the linking of the neurons and the special processing functions. In this
way, even very complex, non-linear dependencies of the original information can be
mapped. What is crucial is that neural networks learn these dependencies autonomously.
Here, different learning processes are distinguished, which will be presented later.

[Fig. 1.2 Performance components of artificial intelligence: artificial intelligence (generic term) comprises neural networks (concepts for modelling), machine learning (artificial creation of knowledge from experiences) and deep learning (special algorithms supporting machine learning)]


[Fig. 1.3 Various layers in neural networks: an input layer feeds into hidden layer 1 and hidden layer 2 (the "black box"), which feed into the output layer]

A neural network is understood to be a system of hardware and software
whose structure is based on the human brain. It thus represents the masterpiece of
Artificial Intelligence. A neural network usually has a large number of processors
that work in parallel and are arranged in several layers (see Fig. 1.3). The first layer
(the input layer) receives the raw data. This layer can be compared
with the optic nerves in human visual processing. Each subsequent layer (here
Hidden Layer 1 and 2) receives the output of the previous layer—and no longer the
data that was processed in the upstream layers. Analogously, neurons in the human
system that are further from the optic nerve receive signals from the neurons that are
closer to them. This natural process is replicated in neural networks. To process the
data, a very large number of hidden layers can be used—often not just 100, but 1000
or 10,000. Ideally, the AI system learns something additional at each transition to
another layer. The last layer (the output layer) produces the output of
the results of the AI system (see [8], pp. 801–810; see in depth [9, 10]).
Artificial Neural Networks (ANN) try to imi-
tate the functioning of the human brain. The connected units—here the artificial
neurons—are organized in layers to process the input information. Each unit can
pass a signal to another unit—as in the human brain. However, the neurons in the
human brain are connected in complex and (so far) unpredictable ways. In contrast,
the artificial neurons of AI systems are arranged in a linear sequence. The overall
process of converting inputs into outputs is based on the programming of the indi-
vidual neurons.
Each processing node has its own knowledge area. This includes not only the
rules with which it was originally programmed. Rather, it also includes the knowl-
edge and rules that were developed in the course of so-called machine learning
as a supplement or correction. This means that the "machine"
learns independently and can thus move more or less far away from the original
"knowledge". Special algorithms are used for this purpose—specifically so-called
self-adaptive algorithms. This allows the machines to learn independently later,
without programmers having to intervene in the ongoing learning process (cf. in
depth [11]). In machine learning, an algorithm or a statistical formula is created that
converts a series of data points into a single result. The algorithms of machine learn-
ing “learn” through training. They recognize patterns and correlations in the data
and use these to deliver new insights and predictions without being explicitly pro-
grammed for it.
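
To make the idea of "learning from data instead of explicit programming" concrete, the following minimal sketch trains a small decision-tree classifier on a handful of labeled data points and then applies it to new data. The library (scikit-learn) and the toy data are illustrative assumptions, not something the book prescribes:

```python
# Minimal sketch: a model "learns" a pattern from labeled examples
# instead of being explicitly programmed with rules (scikit-learn assumed).
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [number of legs, has a backrest (1/0)] -> "chair" or "table"
X_train = [[4, 1], [4, 1], [3, 1], [4, 0], [4, 0], [6, 0]]
y_train = ["chair", "chair", "chair", "table", "table", "table"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)             # training: the pattern is derived from the data

# Applying the learned pattern to previously unseen data points
print(model.predict([[4, 1], [8, 0]]))  # e.g. ['chair' 'table']
```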

Note Box
What is actually an algorithm? An algorithm is a clear action instruction
or a programmed instruction for processing data to solve a specific problem in
a specific way. The following requirements for algorithms are to be
considered:

• Uniqueness
An algorithm must not have any contradictory elements. In this sense, the
algorithm must be unique.
• Executability
An algorithm may only contain instructions that are executable.
• Finiteness
The description of the algorithm must be finite. Consequently, an algo-
rithm can only include a finite number of action instructions.
• Termination
The algorithm must deliver a result after a finite number of steps.
• Determinacy
The algorithm must always come to the identical result with the same data
situation.
• Determinism
Within the instructions, there must always be only one possibility for con-
tinuation at any point in time of execution. This means that each subse-
quent step must be clearly defined.

In this sense, a cooking recipe is already an algorithm. The formulas for
determining the Body Mass Index, for calculating the return on sales or for deter-
mining CPI (Cost per Interest) or CPO (Cost per Order) are also examples of algorithms.
Significantly more complex algorithms are used, for example, by Google and
Facebook to determine which content a specific user gets to see. Many companies
use algorithms in their recommendations (keyword Recommendation Engine). The
aim of their use is to recommend exactly those offers that have the highest probabil-
ity of purchase (cf. in depth [12]).
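
Using the Body Mass Index mentioned above, a minimal sketch can show what such a simple algorithm looks like in code; the function name and the rounding are illustrative choices, the formula itself is the standard BMI definition (weight in kilograms divided by the square of height in meters):

```python
def body_mass_index(weight_kg: float, height_m: float) -> float:
    """BMI = weight in kg divided by the square of height in meters.

    The instruction is unambiguous, executable, finite, terminating and
    deterministic - it therefore satisfies the requirements listed above.
    """
    return round(weight_kg / (height_m ** 2), 1)

print(body_mass_index(80, 1.80))  # -> 24.7
```
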
A special feature of neural networks is therefore their adaptability within a cer-
tain field of application. This leads to the neural networks changing and thus
developing independently. The insights gained on the basis of the so-called “initial
training” through the training data and through the processing of further data are
incorporated. The weighting of the respective input streams is of great importance.
The AI system independently weights those data inputs higher that contribute to
getting correct answers. In Machine Learning, programs learn from existing data
and apply this knowledge to new data or use it to predict data. In machine learning,
the AI systems independently develop new learning algorithms and improve exist-
ing ones. These algorithms make it possible to analyze large amounts of complex
data and handle various tasks.
A special configuration of neural networks and a subset of Machine Learning is
the so-called Deep Learning (cf. Fig. 1.3; cf. [8], pp. 801–839; [13, 14]). The algo-
rithms of Deep Learning have several layers of neural networks that process infor-
mation on many levels. Before the development of Deep Learning, artificial neural
networks often only had three layers. Deep Learning networks today often have ten
or more layers. The “Deep” therefore refers to the large number of layers of the
neural network. The biggest difference between Deep Learning and other machine
learning techniques is that larger neural networks continuously improve their per-
formance through larger amounts of data. Especially when processing complex and
often high-dimensional data such as images, speech and text, Deep Learning achieves
better performance than classic Machine Learning.
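
The difference between a shallow and a "deep" network can be illustrated with a few lines of code. The sketch below uses the Keras API as an illustrative assumption (the book does not prescribe a framework); it stacks ten hidden layers between input and output, which is exactly what the "Deep" in Deep Learning refers to:

```python
# Illustrative sketch of a "deep" feed-forward network with ten hidden layers (Keras assumed).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(keras.Input(shape=(100,)))               # input layer: 100 raw features
for _ in range(10):                                # ten hidden layers -> "deep" learning
    model.add(layers.Dense(64, activation="relu"))
model.add(layers.Dense(1, activation="sigmoid"))   # output layer

model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()  # shows the stack of layers between input and output
```
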
Convolutional Neural Networks (CNN) function similarly to ordinary neural
networks. The only difference is that the connections between the neural layers
resemble the part of the brain that processes images. These architectures are pro-
grammed to perceive each input as an image. Recurrent Neural Networks (RNN)
differ from other neural networks in their architecture. The neurons are connected
in such a way that they can send feedback signals to each other. As a result, the
information loops from layer to layer.

Note Box
The “machine” increasingly emancipates itself from the original inputs (data
and rules) during its use. This is the central difference to the classic rule-
based systems. In rule-based systems, the data is processed according to rules
that were defined in advance. The systems of Artificial Intelligence on the
other hand are able to learn and thus develop independently.
This process improves the existing algorithms—but also leads to
the development of new algorithms. Based on these algorithms, more con-
vincing results are to be achieved than by applying predefined rules.
Consequently, the initially used algorithms only represent the starting mate-
rial for the further development of the algorithms. If the further developed
algorithms prove to be more meaningful in the course of the learning process,
the “machine” continues to work independently with these. This process is
called Machine Learning.

To support the described learning processes, the various levels are extensively
interconnected. As Fig. 1.3 illustrates, the inputs of each node of a level “n” are con-
nected to many nodes of the preceding level “n − 1”. An exception is the input layer,
which can also have only one node (Fig. 1.3 shows three nodes here). In addition,
the outputs of level “n” are connected to the inputs of the subsequent level “n + 1”.
The described links enable the step-by-step transfer of information from layer to
layer. The second exception regarding the number of nodes is provided by the out-
put layer. There can be one (as in Fig. 1.3) or several nodes from which answers can
be read.
To describe neural networks, the depth of the model can be used. This is defined
by the number of layers that lie between input and output. Here we speak of the so-
called hidden layers of the model (hence Hidden Layer). Neural networks can also
be described by the width of the model. This takes into account the number of hid-
den nodes of the model or the number of inputs and outputs per node. Variations of
the classic neural network design allow different forms of forward and backward
propagation of information between the levels.
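
How the described wiring from layer "n − 1" to layer "n" to layer "n + 1" works can be sketched with a few matrix operations. The following minimal NumPy example is an illustrative assumption, not the book's own code; the number of layers corresponds to the depth of the model, the number of nodes per layer to its width:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_nodes):
    """One fully connected layer: weighted sum of all inputs per node, then a non-linearity."""
    weights = rng.normal(size=(inputs.shape[0], n_nodes))
    return np.tanh(inputs @ weights)

x = np.array([0.2, 0.7, 0.1])   # input layer with three nodes (as in Fig. 1.3)
h1 = layer(x, 4)                # hidden layer 1 (width 4)
h2 = layer(h1, 4)               # hidden layer 2: receives only the output of layer 1
output = layer(h2, 1)           # output layer with a single node
print(output)
```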

1.1.3 Types of Learning

Before AI systems can develop independently, they must first be trained themselves.
In machine learning, different types of learning can be distinguished:

• Supervised Learning
• Unsupervised Learning
• Reinforcement Learning (learning by reward)
• Self-supervised Learning
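
Before the individual types are described, the basic difference between the first two can be sketched in a few lines of code: in supervised learning the algorithm receives inputs together with the known correct answers, in unsupervised learning it receives only the inputs and must find structure on its own. The scikit-learn calls and toy data below are an illustrative assumption:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X = [[1, 2], [1, 1], [8, 9], [9, 8]]   # input data
y = ["A", "A", "B", "B"]               # known correct answers (labels)

DecisionTreeClassifier().fit(X, y)           # supervised: inputs plus correct answers
KMeans(n_clusters=2, n_init=10).fit(X)       # unsupervised: inputs only, structure is found autonomously
```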

Supervised Learning
In Supervised Learning, the AI system already knows the correct answers. The
system is therefore trained to respond appropriately to certain stimuli. The AI sys-
tem must develop algorithms for this that derive the correct answers as precisely as
possible from the existing data set. To do this, the learning algorithm is fed with a
series of inputs and the corresponding outputs. The result that an algorithm is sup-
posed to deliver is already known here. To create a training data set for this super-
vised learning, humans must tag the individual elements of the input data with various
features and characterize them accordingly. In addition, the output variables need to
be defined. The algorithm is trained to find a connection between the input data and
the output variables. This involves using, among other things, linear regression,
linear discriminant analysis and the decision tree method. The training is completed
when the algorithm delivers the desired results with sufficient accuracy. This algo-
rithm is then applied to new data that were not previously part of the training dataset.
The task of such an AI system to be trained can be to recognize faces of wanted
persons in photos or in video recordings. For this purpose, corresponding training
data must be created. This involves humans labeling the photos of the wanted
persons (keyword labeling; cf. Fig. 1.4). Here, the system is thus informed what the
correct answer would be for each individual input. For this purpose, the training
data are enriched by humans with the decision-relevant data. In this example, these
are labeled photos, because the AI system is to be trained on image recognition. For
training translation systems, identical texts in different languages are provided to
the system during the training phase.

[Fig. 1.4 Process of supervised learning—stage 1: unstructured data (texts, pictures, videos, sounds) plus labelling of the data by a human yields the training data]
The process of supervised learning is to be illustrated using an example. To train
a neural network accordingly, it is first supplied with large amounts of data. At the
same time, the network must be informed what the output should look like. To train
a network for the identification of faces of known actors, the system is provided
with a large number of photos of actors, non-actors, masks, statues, paintings, ani-
mal faces etc. during the initial training. Each individual photo is given a label ("label-
ing") that describes the contents of the photo as accurately as possible. These can
be the names of the actors depicted there. Alternatively, the label provides clues that
it is not an actor, but a mask or an animal.
To shorten the training phase, certain rules can also be given to the system during
the initial provision of material. This is referred to as the so-called preloading
rules. Such rules can not only accelerate training, but also make the model more
powerful. A face recognition system, for example, could be given the follow-
ing rules:

• Eyebrows are located above the eyes.
• Mustaches are located under the nose (hence also called upper-lip beard).
• Beards are found above and/or next to the mouth, on the cheeks and at the base of the neck.
• Beards are predominantly found in men; however, there are also women’s beards.
• Eyes are usually at the same height next to each other.
• Eyes are located on the right and left above the nose.
• The mouth is located under the nose.
• Etc.
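How such preloading rules could be handed to a system can be sketched as follows. This is a purely hypothetical illustration in Python: the landmark names, coordinates and thresholds are invented and are not taken from any real face recognition library.

# Hypothetical sketch: preloading rules as simple geometric checks on detected
# facial landmarks (all names and thresholds are invented for illustration).
def satisfies_preloading_rules(landmarks):
    rules = [
        # Eyebrows are located above the eyes (smaller y = higher in the image).
        landmarks["left_eyebrow"][1] < landmarks["left_eye"][1],
        landmarks["right_eyebrow"][1] < landmarks["right_eye"][1],
        # Eyes are usually at roughly the same height next to each other.
        abs(landmarks["left_eye"][1] - landmarks["right_eye"][1]) < 10,
        # The mouth is located under the nose.
        landmarks["mouth"][1] > landmarks["nose"][1],
    ]
    return all(rules)

example = {
    "left_eye": (80, 100), "right_eye": (140, 102),
    "left_eyebrow": (80, 85), "right_eyebrow": (140, 86),
    "nose": (110, 130), "mouth": (110, 160),
}
print(satisfies_preloading_rules(example))  # True for this plausible face layout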

Through the provision of descriptive information and possibly preloading rules, the model can adjust its internal weights. In this way, it learns to continuously improve its operation. For example, nodes A, B, and D in Fig. 1.3 can inform node BB of the next layer that the input image is a photo of Daniel Craig. In con-
trast, node C believes it has recognized Roger Moore in the picture. This misassign-
ment may be due to the fact that the corresponding photo also shows an Aston
Martin, which both used in James Bond films. If the training program now confirms
that the photo actually shows Daniel Craig, node BB will reduce the weight of the
input from node C because it has delivered a wrong evaluation. At the same time,
the system will increase the weights for nodes A, B, and D because their results
were correct. Each node decides independently which inputs from the previous
layer are sent in which form to the next layer. To make these decisions, neural net-
works use rules and principles. For classification, decision trees, random forests, linear classifiers, naive Bayes classifiers, and k-nearest-neighbor classification can be used. For regression, linear and logistic regression as well as polynomial regression can be used.
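The weight adjustment described above can be reduced to a toy sketch (hypothetical numbers in Python; real neural networks adjust their weights with gradient-based methods such as backpropagation rather than with a fixed step):

# Toy sketch of the weight update described above: node BB weights the votes of
# the nodes A, B, C and D; weights of wrong voters are reduced, correct ones raised.
weights = {"A": 0.25, "B": 0.25, "C": 0.25, "D": 0.25}
votes = {"A": "Daniel Craig", "B": "Daniel Craig", "C": "Roger Moore", "D": "Daniel Craig"}
correct = "Daniel Craig"

for node, vote in votes.items():
    if vote == correct:
        weights[node] += 0.05   # reward: the node delivered the right evaluation
    else:
        weights[node] -= 0.05   # punishment: the node delivered a wrong evaluation

print(weights)  # node C now contributes less to the decision of node BB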
Based on these training data, algorithms are developed to achieve the predefined
results. The goal of such an AI system can also be to clearly recognize photos of
certain people (such as criminals) in a multitude of photos. As long as the desired
hit quality is not yet achieved, further training data must be generated by labeling
and made available for training the algorithm (see Fig. 1.5).
Such a trained AI system was used in 2018 for a test of face recognition at the Berlin-Südkreuz station. The task was to recognize people marked as “wanted” in
advance in the crowds that use the station every day. Regardless of whether a
“wanted” person wore glasses or a scarf, the AI system was able to reliably identify
the relevant faces in over 80% of cases. The evaluation of the test results made it
clear that automated face recognition systems can provide significant added value
for police work. The rate of false hits—the system recognizes person A, although it
is person B—was on average below 0.1%. Out of 1000 matches, only one would be
faulty. By combining different systems, these “false positive” erroneous hits can be reduced to up to 0.00018% (see [15]). The use of such systems can support police work if the legislator creates the necessary framework conditions.

Fig. 1.5 Process of supervised learning—stage 2: training data (from unstructured data such as texts, pictures, videos and sounds) and further data labelling feed a model of machine learning, which delivers description, inspection, prediction and prescription
Another exciting task for supervised learning is to identify those people in social media who are particularly susceptible to believing false messages or trusting conspiracy theories, to commenting positively on them and to forwarding them. The phenomenon to be explained here is the “susceptibility”, which
needs to be identified—by whatever characteristics—in the gigantic dataset of
social media. Such insights may lie outside of what people would suspect. Such an
application became known through the Cambridge Analytica/Facebook scandal in
2016 in relation to the manipulation of the US presidential election. Here too, the
core issue is pattern recognition, on which then manipulative messages are based
(cf. in depth [16], pp. 82–84).

Unsupervised Learning
In contrast to supervised learning, the AI system in unsupervised learning does not
have predefined target values, but must independently recognize similarities and
thus patterns in the data. The user of the AI system is therefore not necessarily aware
of such patterns in advance. The insights gained by the system can thus also lie
outside of what has been previously “humanly conceivable”. In unsupervised learn-
ing, the AI system is therefore not given the correct answer. It is rather expected to
come to relevant results on its own. This happens by exploring the data to find some
kind of structure or pattern. The AI system uses its experiences in solving one prob-
lem to solve another, related problem. As input, the algorithm receives unlabeled
data. In these, it should independently recognize a structure (e.g. the characteristics of churners or potential new car buyers—or of susceptible people). To this end, the
algorithm identifies data groups that exhibit similar behavior or similar characteristics (cf. Fig. 1.6). Hierarchical and k-means clustering are used, among other things.

Fig. 1.6 Process of unsupervised learning: an algorithm of machine learning processes unstructured data (texts, pictures, videos, sounds) and delivers description, inspection, prediction and prescription
To convey the basic principle of unsupervised learning, a very simple example is
sufficient. Imagine, in a large shopping basket are all your purchased products.
These represent the data input. The AI system now has the task of recognizing
patterns in the shopping basket. For this, one product after another is taken out of
the basket and examined. First of all, packaged and unpackaged products can be
recognized and grouped. Among the packaged products, cold products can be

recognized and those that are at room temperature. Furthermore, these products can
be grouped by weight, volume, color, packaging material, etc. Whenever something
suitable is found, the assignment is made—e.g. the tenth pack of UHT milk to the
corresponding group of packaged products. With the unpackaged products, longer
ones (e.g. bananas, leeks), round ones (lemons, oranges, grapefruits) and scaly ones
(pineapples, whole fish) can be recognized. If the comparison process does not
result in a pair, a new group is defined. This process is carried out until all products
are assigned. The result is various groups or clusters. A structure in the source data
becomes visible. In unsupervised learning, the AI system thus finds independent
solutions. Here too, the core is again pattern recognition, to which further process-
ing steps can be connected.
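The grouping described in the shopping-basket example corresponds to clustering. A minimal sketch with k-means follows (Python with scikit-learn assumed; the product features, here weight in grams and length in centimeters, are invented for illustration):

# Minimal sketch of unsupervised learning: k-means clustering without any labels.
# Each product is described only by invented features [weight_g, length_cm].
from sklearn.cluster import KMeans

products = [[120, 20], [130, 22], [1000, 10], [950, 11],   # e.g. bananas vs. milk packs
            [200, 8], [180, 7]]                             # e.g. citrus fruits

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
groups = kmeans.fit_predict(products)    # no correct answers are given in advance
print(groups)                            # e.g. [0 0 1 1 2 2] – a structure becomes visible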

Reinforcement Learning
In Reinforcement Learning (learning through reward or through trial and error), there is no optimal solution path at the beginning of the
learning phase. The system must iteratively try out solution paths through a trial-
and-error process independently. The paths taken must then be discarded and/or
further developed—depending on whether a desired or an undesired result was
achieved. This iterative process is driven by “rewards” (for good solution ideas) and
“punishments” (for bad approaches). The AI system memorizes every experience
and uses the results achieved for subsequent experiences. This learning concept is
often used when only a few training data are available or the ideal result cannot be clearly defined. It is also used when something can only be learned from inter-
action with the environment.
In the course of reinforcement learning, the algorithm makes a decision and acts
accordingly. If the action brings the AI system closer to the goal (e.g. a victory in a chess game or a successful investment decision, keyword Robo Advisor in Sect. 9.1), the system receives a reward. Alternatively, it experiences a punishment if it moves away from the goal. This can be, for example, the falling
price of a just bought share, if the system has not bet on falling prices. The algo-
rithm optimizes its actions independently, by continuously correcting itself (see
Fig. 1.7).

Fig. 1.7 Process of reinforcement learning: the AI agent receives status and reward from the environment and responds with an action
The suggestions at Netflix, Spotify & Co. use reinforcement learning. After all,
the AI systems here learn very quickly which recommendations are followed—and
when listening to a song or watching a certain movie is interrupted. Through
“reward” (user listens/watches the recommendation completely) or “punishment”
(recommendations are ignored or only partially used) the AI agent learns the prefer-
ences of each individual user better and better. This makes individualized recom-
mendations possible—one-to-one—for millions of users.
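A strongly simplified sketch of this reward-driven learning follows (a hypothetical epsilon-greedy recommendation agent in Python; the titles and the simulated user reaction are invented and have nothing to do with the real systems of Netflix or Spotify):

# Toy sketch of reinforcement learning: an agent recommends titles, observes a
# reward (1 = watched completely, 0 = ignored) and shifts towards what works.
import random

titles = ["Drama A", "Comedy B", "Thriller C"]
value = {t: 0.0 for t in titles}    # estimated value of recommending each title
count = {t: 0 for t in titles}

def simulated_user(title):
    # Invented preference: this user mostly finishes thrillers.
    return 1 if title == "Thriller C" and random.random() < 0.8 else 0

for step in range(1000):
    if random.random() < 0.1:                      # exploration: try something new
        choice = random.choice(titles)
    else:                                          # exploitation: use what was learned
        choice = max(titles, key=lambda t: value[t])
    reward = simulated_user(choice)                # "reward" or "punishment"
    count[choice] += 1
    value[choice] += (reward - value[choice]) / count[choice]   # incremental average

print(value)   # the thriller ends up with the highest estimated value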
What Reinforcement Learning means can be explained particularly vividly using
the example of the game of Go (see Sect. 1.2). The first AI system used, AlphaGo, was still classically trained with thousands of human amateur and professional
games to learn the game of Go. Here, supervised learning was first used—the algo-
rithm was supposed to achieve a certain goal, here winning the game. The further
developed AI system AlphaGo Zero on the other hand skipped this training step and
relied on reinforcement learning. For this TensorFlow was used, a special frame-
work for data stream-oriented programming. Noteworthy is the fact that the neural
network used knew nothing about the game Go on the launch day—except for the
rules to be followed. Unlike the earlier versions of AlphaGo, AlphaGo Zero only
perceived the stones of the board. Information about unusual Go board positions
and other historical data were not communicated to the system. The AI system
learned independently through reinforcement. It played again and again against
itself, until it could predict and recognize which moves would positively or nega-
tively influence the outcome of the game.
AlphaGo Zero thus learned the game solely by playing Go games against itself.
After just three hours, AlphaGo Zero reached the level of a human beginner in the
game of Go. After 19 h, the system behaved like an advanced player and after 70 h
like a real super professional. In this way, it quickly surpassed the human level of
play and defeated the previously published version of AlphaGo, which had beaten the champion Lee Sedol, by 100 games to 0. To achieve this learning success, AlphaGo
Zero played a total of 4.9 million games against itself in quick succession in the first
3 days. The earlier AI system AlphaGo, on the other hand, still needed several
months of training to reach the same level. After 40 days, AlphaGo Zero achieved
the world’s best results—without human intervention and without having been
trained with historical data beforehand (cf. [17]). A video vividly illustrates this
development: https://deepmind.com/blog/article/alphago-zero-starting-scratch.

Food for Thought


What happens when self-learning machines acquire knowledge that is hardly or no longer comprehensible for humans?

Self-Supervised Learning
A new way to train AI systems is Self-supervised Learning. To enable such self-
supervised learning, computers must be able to understand the world and the data
describing it in its entirety, without having been programmed for a specific task.
Tasks can be solved by these AI systems solely through the knowledge about the
structure of the provided data. What is meant by this can be explained as follows: A
baby has learned by the age of about 9 months that unsupported objects fall to the
ground due to gravity. It has, so to speak, built an intuitive model of the world. The
baby has experienced through several experiences, what happens when you drop
loose objects—be it balls, dolls or towels. This knowledge leads to the fact that in
the future certain objects (e.g. favorite dishes) are no longer dropped. For this, the
baby never had to see a cup fall to the ground. The baby abstracts from the indi-
vidual experiences and derives from them generally valid laws. If then nevertheless
a plate of spinach moves towards the ground …
Something similar happens when we hear the words house or city. Here, people immediately have
a Concept of it—independent of a specific house or a specific city. Humans have the
ability to learn from a small amount of data. Humans know the “concept of a
house” and the “concept of a city”. This enables us to recognize and correctly
distinguish between different manifestations of houses and cities from huts or vil-
lages. AI models cannot do this—yet.
Here the question arises about the nature of this learning process. How do
humans and animals learn to understand the world in its entirety through observa-
tion and experiences, independent of specific tasks? How is it possible to derive
insights for the future from just a few examples or attempts (“stove = hot”)? At the
same time, humans can also abstract during the learning process: if “stove = hot” applies, then “pot on stove = hot” and possibly even “soup in pot on stove = hot” also apply. This is about learning processes that abstract from specific tasks. The core
issue is to convey an understanding of the world to AI systems—to breathe com-
mon sense into them, so to speak. What is needed for this? A large amount of data,
powerful algorithms and above all: very large computing power. The development
of AI systems that go in this direction has picked up speed (cf. [18], p. 19).
How exactly self-supervised learning works will be described here. In this
type of machine learning, the model learns from the data itself, without needing
explicit—human-assigned—labels. In this form of learning, a prediction task is
defined by hiding part of the existing data. The model then tries to predict the hidden
part based on the visible part. This is to be explained for the use of texts using the
sentence “I love to walk on the beach.” In the process of self-supervised learning,
for example, the word “walk” would be hidden. Then the model is asked to predict
the missing word. In this case, the model learns from the context that the missing
word is probably an activity that one does on the beach. Since the AI engine knows
the correct solution, no human needs to check the correctness of the answer any-
more. The AI system itself already knows the correct answer. That’s why it’s called
“self-supervised”. If the AI engine can predict words well, the task can also be more
difficult. Then whole sentences and later even whole sections have to be predicted.
Here too, the learning system already knows what the correct answer is. Therefore,
much faster learning progress can be achieved here. After all, the results no longer
have to be checked by humans.
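The masking described above can be sketched in a few lines (a toy illustration in Python; real systems mask tokens in huge text corpora and train large neural networks on the prediction task):

# Toy sketch of self-supervised learning on text: training pairs are created by
# hiding one word; the correct answer is known from the data itself, no human labels.
sentence = "I love to walk on the beach".split()

training_pairs = []
for i, word in enumerate(sentence):
    masked = sentence.copy()
    masked[i] = "[MASK]"                      # hide part of the existing data
    training_pairs.append((" ".join(masked), word))   # (visible part, hidden part)

for context, target in training_pairs:
    print(f"{context!r}  ->  predict {target!r}")
# e.g. 'I love to [MASK] on the beach'  ->  predict 'walk'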
Self-supervised learning is also used for images. Suppose we have a picture of a
dog. Then part of the picture is hidden, for example, the tail of the dog. The model
is asked to predict or “fill in” the missing part of the picture—again based on the
visible part of the picture. Here too, the correct answer is “known”. In both cases,
the model learns to recognize patterns in the data. These patterns are used to make
predictions regarding the missing parts. Thousands, hundreds of thousands or mil-
lions of learning processes can be carried out without human intervention—without
labeling (cf. Fig. 1.4).

Note Box
The big questions still are:

• How can AI systems acquire an understanding of the world?
• And by when will this possibly have been achieved for a multitude of applications?

Note Box
For the demystification of Artificial Intelligence one could put it quite bluntly:
The core of Artificial Intelligence consists in independently processing large amounts of data, independently recognizing patterns in it and making decisions and/or predictions based on these patterns or generating “new” content. AI applications are often faster and—depending on the system—also more cost-effective than humans or rule-based systems.

Food for Thought


There are—still—significant differences in the learning processes of humans
and machines. Humans learn and orient themselves in the world by consis-
tently building on their previous experiences. In this process, everything that
humans experience is contextualized and thus placed in a specific context.
Today’s AI systems lack such background knowledge. Moreover, they lack
instinct and intuition to further secure decisions.
Furthermore, for humans, information often simultaneously acts as an emotional trigger. Words, images, numbers, smells, taste impressions etc.
often—in connection with memories—trigger entire chains of emotions. And
these chains of emotions influence human action. Such processes are (still)
foreign to AI systems.
Here it applies: Knowledge is different from mere presence of informa-
tion. And Wisdom in turn is something quite different from knowledge. And
today’s AI systems are still far from wisdom.

1.1.4 Generative AI

Increasing importance is today given to so-called Generative AI—creative Artificial Intelligence. This is a sub-discipline of Artificial Intelligence. It
deals with the generation of new data, images, texts or other content through the use
of algorithms and models. In contrast to other applications of Artificial Intelligence
for the analysis and evaluation of data, Generative AI aims at creating “new” content.
Generative AI also uses various methods of machine learning to recognize what
certain data structures look like and how they can be generated. The algorithms are
trained by analyzing a large amount of data and trying to find patterns in this data.
Once the model is trained, it can be used to generate “new”, unseen data.
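The basic idea of learning patterns from data and then generating “new” sequences can be conveyed with a toy sketch (a simple word-level Markov chain in Python; this only illustrates the principle and is far removed from the large models behind tools such as ChatGPT):

# Toy sketch of the generative principle: learn which word tends to follow which,
# then sample "new" text from these learned patterns.
import random

corpus = "the cat sat on the mat the dog sat on the rug".split()

follows = {}                                   # pattern: word -> possible successors
for current, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(current, []).append(nxt)

random.seed(1)
word, generated = "the", ["the"]
for _ in range(6):
    word = random.choice(follows.get(word, corpus))   # sample from learned patterns
    generated.append(word)
print(" ".join(generated))     # e.g. a new word sequence that was not in the corpus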
Exemplary applications of Generative AI include the creation of artworks, the
generation of texts (keyword ChatGPT), the creation of realistic-looking images or
the creation of virtual worlds. Generative AI has the potential to revolutionize vari-
ous industries by demonstrating new ways of creating content. However, there are
also concerns about the potential misuse of Generative AI, especially in terms of
creating fake content or creating Deepfake videos. Therefore, it is important to
understand and address the risks and challenges associated with this technology to
ensure safe and responsible use.

Food for Thought


Geoffrey Hinton, known as the Godfather of Artificial Intelligence, worked
at Google in AI development until 2023. He formulated an interesting thought:
There are few examples where a more intelligent entity is controlled by a less
intelligent entity. In AI applications, the less intelligent entity is humanity (see
[19], p. 24).

1.2 Summary

• Artificial Intelligence with its various applications has already arrived in
many areas of our everyday life—without being visible everywhere.
• The facets of human intelligence are so diverse that many years of development
work and high budgets are necessary to even come close to human intelligence.
The challenge here is: Creation of an Artificial General Intelligence.
• The applications of Artificial Intelligence build on the insights gained through
neural networks. Concepts of Machine Learning or Deep Learning are used.
• The processing of information in AI systems takes place in various layers in the
sense of processing layers.
• To advance the automated learning process, the learning forms supervised
learning (Supervised Learning), unsupervised learning (Unsupervised
Learning) and reinforcement learning (Reinforcement Learning) are used.
• A relatively new learning concept is self-supervised learning (Self-supervised Learning). For this, computers must be able to understand the world and the data describing it in its entirety, without having been programmed for a specific task.
• Through Generative AI or creative Artificial Intelligence “new” content is
created.

1.3 What Goals Can Be Achieved with Artificial Intelligence?

Humans have always strived to imitate nature and emulate the solutions found there.
From a claw, the knife was derived. The ability of birds to fly inspired humans to
develop various flying devices. Fire, with its various properties, was “domesticated”
by humans into a stove, a light bulb, and a heater. Humans have now taken on a task
that has not yet been solved: the machine replication of human intelligence.
Work on the first calculating machines began as early as the seventeenth century.
The abacus, a mechanical calculating aid still in use today, is even dated back to the
second pre-Christian millennium. The development towards the computer was sig-
nificantly driven forward in the 1940s by the German developer Konrad Zuse. The
machines Zuse Z3 and Zuse Z4 were the first universally programmable computers.
Even then, the primary goal was to become equivalent to human intelligence with
the help of technology. Since then, further groundbreaking breakthroughs have been
achieved. Figure 1.8 shows in which areas automation and digitalization—increas-
ingly with AI support—are taking place. Here, a distinction is made between the
type of task with the manifestations “manual” or “cognitive” and the routine level
of the task with the dimensions “high or low routine level”.

Fig. 1.8 Fields of application of automation and digitalization—increasingly with AI support: (I) manual tasks with a high degree of routine: already largely automated/digitized; (II) manual tasks with a low degree of routine: many tasks have already been automated/digitized; (III) cognitive tasks with a high degree of routine: the automation/digitization process has started; (IV) cognitive tasks with a low degree of routine: current focus of new business models and solutions
Manual tasks with a high routine level are already largely automated or digi-
talized (Field I in Fig. 1.8). Even manual tasks that only have a low routine level are
already often automated or digitalized (Field II). In both areas, there are a variety of
tasks in production or logistics processes. Here, AI agents are increasingly being used to make the processes more efficient.
Only much later could automation and digitalization, as well as the use of AI,
penetrate into areas where cognitive tasks have to be mastered. This was first
achieved in Field III. Here, the cognitive tasks still have a high routine level. An
example that made the world sit up and take notice was the competition between the
then reigning Chess World Champion Garry Kasparov and the chess computer
Deep Blue. In 1997, a computer managed to beat a world champion in chess for the
first time. Why is this a cognitive task with a high routine level? Quite simply:
because there is only a limited number of allowed moves in chess.
It was not until 2011 that, in the quiz show Jeopardy, peppered with wit, irony, and free association, the reigning champions Ken Jennings and Brad Rutter were defeated. Here, cognitive tasks with a low degree of routine had to be
solved (see Field IV in Fig. 1.8). The winner was the IBM computer Watson.
Interesting was the comment by Ken Jennings [20]:

Brad and I were the first knowledge-industry workers put out of work by the new generation
of ‘thinking’ machines.

It is important to understand what prerequisites had to be met for the victory of the
IBM computer Watson. First, Watson was fed with the contents of millions of
books with encyclopedic knowledge, including dictionaries and lexicons. Watson
was also equipped with an AI to understand natural language and recognize irony.
In addition, the AI system was able to decode abstract statements, access knowledge
specifically and make quick decisions. Watson also learned the special structure of
the questions of this quiz show by analyzing the question and answer sequences of
previous game shows and using them for training. In addition, Watson was trained
to not only search for keywords in the content—like a classic search engine. Watson
also analyzed the sentences in the context of keywords to recognize larger contexts.
Through this, Watson could additionally draw on supplementary sources from dif-
ferent contexts. Based on this comprehensive database and a very intelligent pro-
cessing of the questions as well as the available “knowledge”, Watson developed a
way to find the answers to new questions within fractions of a second.
How exactly did the question-and-answer game Jeopardy proceed? Watson could
already respond in natural spoken language. However, Watson could not yet under-
stand the natural spoken language in 2011. Therefore, the computer was given the
quiz questions as text in written form. Subsequently, algorithms searched the knowl-
edge archive for words that are in a relationship to the request. Watson could access
online Wikipedia as well as the last 10 years of the New York Times. From this,
50–60 information units were selected and a ranking of up to 200 hypotheses was
created. The questions to be answered were about geography, exact dates or even
word games. Based on many thousands of Jeopardy questions from previous game
rounds, Watson determined which algorithms best answered which category of
questions. For this, over 1000 algorithms worked in parallel processes. Watson
defeated the human geniuses in a field where—unlike chess—ambiguities, irony
and word games have their place (see [21]).

The Go competition reaches even deeper into Field IV of Fig. 1.8. However, it took another 5 years for AI to achieve a victory in the world’s most difficult board
game. It wasn’t until March 2016 that an AI system managed to beat the reigning
Go world champion, Lee Sedol from South Korea, 4:1. The winner was—as already
mentioned in Sect. 1.1—the AI system from Google named AlphaGo. Lee lost four
out of five games against the self-learning, continuously improving software. Before
the game, the world champion was still confident of victory. After all, the game of
Go is much more varied than chess. The game board does not have 64, but 361
fields. This results in many more game possibilities—a challenge for both humans
and machines.
Recognized specialists had previously claimed that a computer could never achieve
this! Why? The 19 × 19 large Go game board has 361 positions. Each position can
either be unoccupied or occupied by a white or black stone. Therefore, the number of all possible positions is 3 to the power of 361, which corresponds to roughly 10 to the power of 172. Experts claim that this number exceeds the number of atoms in the universe, as their number is “only” about 10 to the power of 80—in comparison to the possible Go positions, almost a negligible number.
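The order of magnitude can be checked with a few lines of simple arithmetic (Python; no AI involved):

# Rough check of the numbers mentioned above.
import math

positions = 3 ** 361                     # each of the 361 points: empty, black or white
print(math.log10(positions))             # ≈ 172.24, i.e. about 10 to the power of 172
print(80 < math.log10(positions))        # far more than the ~10^80 atoms in the universe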
To compete in this competition, the world champion only had one—albeit very
well trained—brain. AlphaGo on the other hand, could access two neural networks
with millions of connections. The computer could “think” and predict the most
likely moves of its opponent. The special thing was the combination of knowledge
with intuition. Deep learning algorithms not only enabled an analysis of thousands
of game moves. Through trial and error, the neural network also trained itself to
learn from its own experiences—like a human, but much faster (see Reinforcement
Learning under Sect. 1.1.3).
After the competition, Lee Sedol said two things: The computer had repeatedly
surprised him with moves that no human would make and that had never been
played before. At the same time, he often had the feeling of playing against a human.

Food for Thought


The Sputnik moment for the USA was October 4, 1957. At this time, the Soviet Union unexpectedly managed to launch the first human-made satellite
named Sputnik into space. This triggered a space race between the then super-
powers, culminating in the moon landing by the Americans in July 1969.
The Sputnik moment for China was the year 2016 (see [22], p. 3). In this year, the Go world champion Lee Sedol lost to AlphaGo, the aforementioned Artificial Intelligence system by Google DeepMind. The leadership in China recognized through the defeat of the Go world champion: If
Artificial Intelligence is capable of winning in Go, then this technology has enor-
mous potential that needs to be utilized. Until the defeat in Go, only a few spe-
cialists in China dealt with the topic of Artificial Intelligence. That was over
afterwards. From 2016 on, a real AI rush began, which made hundreds of thou-
sands of researchers curious about the new technology and motivated the
Chinese state to make large investments—with considerable progress in this field.
Will there be such a Sputnik moment for Germany or Europe as well? And
if so, how long do we have to wait for it?

The triumph of AI continues to advance. One thing is already becoming apparent: so far, the improvisational ability of AI systems has been very limited. AI applications mostly followed algorithms that only allowed them to act in a certain situation
cations mostly follow algorithms that only allowed them to act in a certain situation
in a certain way. In new (untrained) situations, the AI systems were quickly over-
whelmed—they lacked the “common sense”. Now, however, AI systems are slowly
starting to surpass humans in more and more areas.
The AI named Libratus developed by the Carnegie Mellon University defeated
four of the world’s best professional poker players in a 20-day poker marathon
in 2017. Originally, it was said that an AI system could not win at poker because
it was not capable of bluffing. The opposite was proven in this competition at the
Rivers Casino in Pittsburgh. To achieve this goal, a meta-algorithm of the AI
system analyzed which gaps the pros had identified and exploited in the strategy
of Libratus. These gaps were then prioritized and closed overnight by newly
developed algorithms. Libratus was the first computer to defeat four professional
poker players. Due to the magnitude of the victory, experts were 99.98% sure
that the victories were statistically significant and therefore not due to chance
(see [23]).
In 2021, an AI computer named Dr. Fill beat 1287 competitors at an annual
American crossword puzzle tournament. Here, participants from around the
world compete to solve eight original crossword puzzles. The most successful par-
ticipant in 2021 was no longer a human, but an AI system, the development of which
involved a research team from the University of California, Berkeley. Dr. Fill solved
the last puzzle in 49 s. It was more than two minutes faster than the best human
participant. This success was preceded by a comprehensive training phase. During
this training, the AI system learned to read the clues in the crossword puzzles and
generate possible answers. Here, two stages have to be completed. First, several
answer alternatives have to be developed for each clue. Then it has to be checked which of the possible answers fit into the respective context of the crossword
puzzle. For each answer alternative, the probability was calculated that it would be
the correct answer (cf. [24]).
Finally, the publication of ChatGPT at the end of November 2022 ensured that
AI applications were discussed in public in a way never seen before. People who
had never actively used AI applications before suddenly recognized the perfor-
mance of AI systems (cf. in depth Sect. 4.2.2). This effect was further amplified by
the fact that with DALL-E 2, Midjourney and Stable Diffusion applications became
available for the general public to generate images based on texts. Other systems
now allow music to be generated from texts, for example with Riffusion or Google
MusicLM. Artificial intelligence has arrived in the breadth of society (cf. in depth
Sect. 4.6).

Note Box
The introduction of ChatGPT at the end of November 2022 was for artificial
intelligence the iPhone -Moment—the breakthrough for the technology
called Artificial Intelligence.

Food for Thought


The importance individual companies attach to artificial intelligence is made
clear by the statement of the CEO of Google, Sundar Pichai. He refers to
Google now as an “AI-first” company. According to this, all further developments are primarily focused on expanding the AI competence of Google. Why this is
so, is made clear by another assessment by Pichai:
AI is more important than fire or electricity.

You don’t have to fully agree with this statement, but artificial intelligence is
definitely playing in a league with electricity!
Artificial Intelligence is the key technology of this century!

1.4 Summary

• Artificial Intelligence is not just a mere imitation of human intelligence. It also serves to carry out activities that humans have not done before, could not do as quickly and/or could not perform as well.
• The fields of application for automation and digitalization can be distin-
guished by the type and degree of routine of tasks and are further driven by the
use of AI.
• AI-based systems are opening up new fields of application year by year. At
regular intervals, services are provided that could never before be generated by a
combination of hardware and software. A convincing example of this is ChatGPT.
• The target system of AI solutions is open-ended!
• The time to get into AI is now !

1.5 Fields of Application of Artificial Intelligence

There is currently no uniform approach to describing the different fields of application of artificial intelligence. Some experts focus more on the IT reference. This
results in AI categories like “Machine Learning”, “Deep Learning”, “Modeling”,
“Problem Solving” or “Uncertain Knowledge” (cf. [8, 25]). Such classifications are,
in my opinion, not very useful, because they rather aim at the basics of artificial
intelligence and thus on its “engine room” and not on the exciting areas of use.
Against this background, we see the most important fields of application of artifi-
cial intelligence as shown in Fig. 1.9.

Fig. 1.9 Fields of application of artificial intelligence: Natural Language Processing (NLP), i.e. recording, processing and answering in natural language (e.g. speech-to-text, text-to-speech, speech-to-speech, data-to-text, text-to-music); Natural Image Processing/Computer Vision, i.e. recognition, storage, processing and generation of images (e.g. text-to-picture); Expert Systems, i.e. collection, storage and processing of different types of information to derive recommendations/action instructions; Robotics, i.e. mechanical, computer-controlled systems that can perform a variety of different tasks; as well as Emotional AI/Affective AI
However, it should be noted at this point that many AI solutions use several of the
fields of application of artificial intelligence shown in Fig. 1.9 at the same time. This
becomes clear in the example of a self-driving vehicle:

• If the driver enters his destination by voice command and the car confirms the
destination via natural spoken language with: “The destination Königswinter has
been recorded”, then speech processing takes place at input and output.
• An autonomous or semi-autonomous car must continuously process a large
amount of image information from various cameras and from further sensors.
Only in this way are red traffic lights, stop signs and speed restrictions as well as
pedestrians, cyclists and other road users recognized. The basis for this is image processing.
• During the journey, the passenger can be informed about the cheapest gas sta-
tions, tourist attractions and interesting restaurants and hotels. For this purpose,
expert systems are used.
• Finally, the entire vehicle with its integrated technologies (including speech and
image recognition) represents a particularly powerful robot. Its task is to trans-
port passengers and/or things safely and economically from A to B.

Many AI applications today are thus already hybrid forms of the various fields of
application of artificial intelligence.

Note Box
Artificial intelligence is a cross-cutting technology—just like computers,
automobiles, telephony, and the internet itself. Therefore, AI applications will
penetrate all industries and all stages of value creation—sooner or later, more
or less comprehensively.

The following will shed more light on the individual fields of application of arti-
ficial intelligence.
1.5.1 Natural Language Processing (NLP): Processing of Natural Language

The processing of natural language is discussed here as the first important field of
application of artificial intelligence (see Fig. 1.9). Natural languages are those spo-
ken by humans. This is to be distinguished from programming languages like Java
or C++. Natural Language Processing (NLP) or speech processing (Speech
Recognition) deals with computer programs that enable machines to understand
human language—in word and writing. This involves a specific form of automated
pattern recognition, called linguistic intelligence. The processing of natural lan-
guage enables an intuitive form of communication between humans and intelligent
systems. NLP controls modern interactive voice dialogue systems (IVR—
Interactive Voice Response), by processing speech to improve communication.
Chatbots are the most common application of NLP in business.
The importance of speech recognition in the future is indicated by the expected
growth rates in this segment. The global market for natural language processing
is expected to grow from about $27 billion in 2022 to about $160 billion in 2029.
This corresponds to an annual growth of 18% during this period (see [26]). This
development is being driven by companies like Amazon, Google, Hewlett Packard
Enterprise Development LP, IBM, Microsoft/OpenAI and SAP. This enormous
growth and the commitment of the tech giants provide good reasons for you to also
deal with the corresponding fields of application of artificial intelligence.
The following forms of application of Natural Language Processing are to be
distinguished here:

• Speech-to-Text (STT)
In this application, the spoken word is immediately converted into digital text.
This is the case when using Siri (Apple), when dictating emails or notes into the
smartphone. This text can then be further processed in other applications.
• Speech-to-Speech (STS)
Such an application is present in Google Translate, when there is a voice input
in German, for example, and an immediate translation into Japanese or Chinese
is generated with voice output. The so-called Natural Language Generation
(NLG) is used for the output of speech. Question and answer sequences when
using digital personal assistants (like Alexa or Google Home) also use this vari-
ant. However, it should be more precisely stated here: STT—Processing—
TTS. The digital assistants first convert the spoken language into a digitally
available text. This text is then interpreted and processed. The answer is initially
again a digital text. This is finally output through spoken language—and all this
in a few seconds.
Google already demonstrated in 2018 what AI systems are capable of here.
Google Assistant made a hairdresser appointment—without the person on the
other end of the line realizing it. The video can be found here: https://www.youtube.com/watch?v=yv_8dx7g-WA.

Film Tip
An inspiring thought experiment on how natural future Speech-to-Speech communication with a digital assistant can feel is shown in the film HER by Spike Jonze.

• Text-to-Speech (TTS)
This application creates a spoken version of the text based on digital documents.
Emails, SMS and other content (e.g. from books or websites) can be “read out”
in this way. Also, acoustic announcements in voice dialogue systems belong to
this category. This function can be particularly helpful for visually impaired peo-
ple, enabling them to “read” screen information.
Amazon Polly uses AI technologies to synthesize naturally sounding human
speech (see [27]). This allows written texts to be converted into natural language.
Amazon provides dozens of lifelike voices in a wide range of languages for
voice-activated applications. This option is suitable for RSS feeds and website
content. The voice outputs from Amazon Polly can also be saved and played back
during calls.
• Data-to-Text (DTT)
Data-to-Text solutions are based on structured data. This can be attributes, for
example, that are already available in table form. These are product features, busi-
ness results of a company, or results of sporting competitions. The user retains
control over the text output during the generation of the text. The texts generated
in this way are consistent and meaningful. The text output is multilingual; i.e., the
output is possible in various languages simultaneously. Larger amounts of text
can be generated based on structured data sets with variable details through Data-
to-Text solutions. Such applications are suitable for product descriptions in e-commerce, in the financial sector, and in sports reporting (cf. [28]; a simple template-based sketch follows after this list).
• Text-to-Text (TTT)
In TTT applications, for example, an electronically available text is translated
into another language—also in text form—using a translation program like
DeepL or Google Translate. In applications like ChatGPT, a task or question
formulated in natural language is answered also in natural language and output
as text. The answers can only be generated one after the other—and always only
in one language. The user has no control over the generated text.
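The basic mechanism of such Data-to-Text solutions can be sketched with a simple template approach (a hypothetical Python example; the data fields and wording are invented, and commercial systems are considerably more sophisticated):

# Hypothetical sketch of Data-to-Text: structured data (e.g. a sports result in
# table form) is turned into a short report text via templates, in several languages.
templates = {
    "en": "{home} beat {away} {home_goals}:{away_goals} in front of {fans} fans.",
    "de": "{home} schlug {away} vor {fans} Fans mit {home_goals}:{away_goals}.",
}

match = {"home": "Team A", "away": "Team B",
         "home_goals": 3, "away_goals": 1, "fans": 41500}

for language, template in templates.items():      # multilingual output from one data set
    print(language, "->", template.format(**match))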

AI systems have now made great strides in processing spoken language and texts. In
doing so, they are increasingly mastering the challenges that lie in the fact that every
person has an individual oral and written expression. The human language con-
sists of an individual mix of the following aspects (cf. [29], pp. 141–143):

• Dialect
• Accent
• Vocabulary (such as colloquial language, technical terminology)

• Phonology (this is about the sounds in a language system)
• Morphology (this looks at the formation and structure of words)
• Syntax (this is essentially about the study of sentence construction; it is about
the usual connections of words into word groups and sentences in a language as
well as the correct linking of linguistic units in a sentence)
• Semantics (this is about the meaning of linguistic signs and sequences of signs;
essentially, it is about the meaning or content of words, sentences, or entire texts)
• Pragmatics (it examines the context-dependent contents of specific linguistic
utterances)

NLP applications must be able to understand the “true” meaning of a statement as much as possible, despite sometimes significant differences in all these facets—just as a human brain tries to do, even if not always correctly! A data dilemma
that is still difficult for AI systems to overcome arises when language wit, irony, sar-
casm, puns, and rhetorical phrases are used in communication. These phrases include
statements like “I’m just saying”, “just now”, “now”, “at the end of the day”, etc.

Food for Thought


The following example of a translation by DeepL shows what challenges AI-supported translation still has to overcome in 2023.

• Source sentence: According to Sinek, the fundamental difference between the “Apples” of the world and everyone else is that they start with “why”.
• In the DeepL translation of this sentence, “Apples” was rendered as the fruit rather than being recognized as the company name.

Although the term “Apples” starts with a capital letter, the program does not
recognize the pun that “Apples” does not mean apples, but tech giants from
the Apple league.

The AI process responsible for processing spoken language is referred to as Natural Language Understanding (NLU). This form of text processing is about
giving the text meaning and extracting information from which meaningful insights
can be derived. The particular challenge lies not only in the pure comprehension of
a sentence, but also in the multi-layered meaning that can be associated with a sin-
gle sentence—especially depending on the respective context.
This challenge can be illustrated using the example of so-called 4-Ear-Listening. This is also referred to as the Four-Ears-Model or the Message-Square
(see Fig. 1.10; see [30]). According to this, every verbal message can be checked for
four different dimensions of communication:

1. Factual Content
This is about the specific, the “pure”, the “objective” information of a state-
ment. But which recipient listens “purely objectively”?

2. Self-Revelation
With a message, the sender simultaneously transmits—intentionally or unin-
tentionally—information about himself that he wants to share with the other
person—or not.
3. Relationship
With the terms we use and the way we emphasize them, we also “reveal”
something about how we think about the other person and what our relationship
is with this person.
4. Call to Action
Often a message also contains a request or a demand directed at the other
person. But this does not have to be the case.

Fig. 1.10 4-Ear-Listening—the four aspects of a message: factual content (how are the facts to be understood?), self-revelation (what kind of person is this? what about her?), relationship (how does this person talk to me? who does he think he is talking to?) and appeal for action (what should I do, feel, think based on the message?)

What the recipient hears from a message from us is therefore not clearly defined
and is highly subjectively colored—based on previous experiences. Our conversa-
tion partner potentially listens with all four ears and decides—consciously or
unconsciously—independently about which dimension he hears or wants to hear
from a message.

Note Box
Many misunderstandings in everyday communication—private and pro-
fessional—are due to the fact that we are usually not aware of all four aspects
of a message we send or receive.
Consequently, misunderstandings are a logical consequence—however,
these are avoidable in many cases.
You just have to talk about it!
A well-known example should illustrate this. Imagine the following situation: She is at the wheel of their shared car, he is in the passenger seat. Now he says: “The
light is green!” What can be heard—depending on the quality of the relationship
and the experiences made so far between the two protagonists?

• Factual Content
The light is green and we can drive. Here, only a neutral fact is precisely
described.
• Self-Disclosure
I am much better qualified to drive a car than you. Because I have—compared to
you—already noticed that the light has long been green.
• Relationship
“Always” I have to tell you what to do. “Never” do you become active on
your own.
• Call to Action
Now finally start driving!

The stability of the assumed relationship between the two people largely depends on with which of the four ears the message is received and interpreted, in this and a thousand other contexts.

Note Box
Use the 4-Ear-Listening for a few days in your professional and private
everyday life—and recognize which misunderstandings occur when we are
not aware of the different dimensions of our messages. Here we can only
get better!
How can this be achieved? By communicating—for example in the case of
unexpected reactions from the other person—what kind of message we “actu-
ally” wanted to send (for example, that the traffic light was “green”—and
not more).

This ability of 4-Ear-Listening—not fully developed in humans either—must be developed by an AI system in the future if it wants to become an empathetic,
compassionate and appreciative conversation partner. Many applications are still far
from this, as we can experience day by day.
Everything that belongs to the field of Natural Language Processing is shown
in Fig. 1.11 (see [31]). Here the term Natural Language Understanding (NLU)
appears again. Understanding language is a prerequisite for Natural Language
Generation (NLG). Functions that go beyond pure language understanding or pure
language reproduction are assigned to the area of Natural Language Generation.
Both fields of application together are referred to as Natural Language Processing.

Fig. 1.11 Functions within natural language processing: Natural Language Generation covers translation by machines, creating summaries, dialog agents and answering questions; Natural Language Understanding covers semantic parsing, syntactic parsing, relationship extraction, paraphrases, sentiment analysis, word type annotation, proper name recognition and coreference resolution
Natural Language Understanding refers to the decoding of natural language.
This is about the machine processing of information input, which is presented as
text or spoken words (see Fig. 1.11). This is done by semantic parsing (Semantic
Parsing) or the extraction of information. The semantics deals with the meaning of
linguistic signs and sequences of signs and tries to recognize the content of a word,
a sentence or a text. The term parsing stands for dissection or analysis. In addition
to NLU, the syntactic parsing (Syntactic Parsing) is used. In contrast to semantics,
the syntax is the study of how sentences are formed. This is about how the connec-
tions of words and groups of words in sentences usually occur. Consequently, the
grammatical structures of a text are analyzed and used to represent a context-free
relationship of the individual word elements. Through the combined use of Semantic
Parsing and Syntactic Parsing, it becomes possible to understand not only individ-
ual words or sentences, but to grasp the entire content.
Through the relationship extraction (Relationship Extraction), the content of
texts is captured. In addition, multiple relationships within the sentences are ana-
lyzed in context. If, for example, in a text the board member Kirsten Gabriel of the Jahns & Friends AG answers journalists’ questions, this means that Kirsten Gabriel is employed at Jahns & Friends AG. When later in the text Mrs. Gabriel is men-
tioned, it should always be kept in mind that she is a board member of Jahns &
Friends AG. Therefore, her statements must be attributed to Jahns & Friends AG.
In addition, paraphrases are used. These are rewordings of a linguistic expres-
sion with other words or expressions. The aim of a paraphrase is to reproduce the
original meaning of a word or expression even when it is reworded. In the context
of artificial intelligence, it is mainly about determining a semantic identity of differ-
ent sentences.
Through a sentiment analysis (Sentiment Analysis), evaluative information
from voice messages is specifically identified. This often distinguishes between
positive, neutral, and negative moods. Thus, from Twitter comments or Facebook
posts, it can be inferred whether the sender of this message is critical, neutral or
positive towards a politician, a party and/or certain political projects. The same can
be done with regard to brands, managers and companies. This sentiment analysis is
of great importance in the context of online and social media marketing (see in
depth [32], pp. 88–90).
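A very reduced sketch of a lexicon-based sentiment analysis might look as follows (hypothetical word lists in Python; productive systems use trained models and far larger vocabularies):

# Toy sketch of sentiment analysis: count invented positive and negative signal words.
POSITIVE = {"great", "love", "excellent", "good"}
NEGATIVE = {"poor", "bad", "expensive", "breaking", "terrible"}

def sentiment(text):
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this brand, the quality is excellent!"))                     # positive
print(sentiment("The headphones were way too expensive and the plug is breaking."))  # negative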
To ensure the understanding of language, the so-called part-of-speech annota-
tion (Part-of-Speech Tagging) is used. The term annotation means note or addition.
Specifically, in the context of Natural Language Understanding, this means that
additional explanations or additional information are added to words or entire texts
to increase understanding. This is intended to exclude ambiguities. This type of
addition or annotation can be explained using the following example sentence:
“The woman works in the company.”
The corresponding annotations are:

• “the” (Annotation: definite article, female, singular, nominative)
• “woman” (Annotation: noun, female, singular, nominative)
• “works” (Annotation: finite verb, present tense, 3rd person singular, indicative,
derived from the base form “work”)
• “in” (Annotation: preposition)
• “the” (Annotation: definite article, neuter, singular, dative)
• “company” (Annotation: noun, neuter, singular, dative)
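Part-of-speech annotations like these can also be produced automatically, for example with the open-source library spaCy. The following sketch assumes that spaCy and its small English model en_core_web_sm are installed:

# Sketch of automatic part-of-speech tagging with spaCy
# (requires: pip install spacy && python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The woman works in the company.")

for token in doc:
    print(token.text, token.pos_, token.tag_)   # word, coarse and fine-grained tag
# e.g. "The DET DT", "woman NOUN NN", "works VERB VBZ", ...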

Another method in connection with information extraction is the named entity rec-
ognition. Here, all proper names, such as first names, last names, brand names,
company names, are identified and assigned accordingly. If this is done correctly,
the author duo “Kreutzer/Land” will no longer be mistakenly translated into
“Kreutzer/Country” (as happened in a test run). Then also “Twitter” is translated
correctly in content and no longer—contextually wrong—as “tweeting”! Translation
results like “Facebook, Tweeting and email are the most important communication
channels for the company” will only be avoided when named entity recognition
works cleanly.
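Named entity recognition is available in the same library. A minimal sketch (again assuming spaCy with the model en_core_web_sm) shows how proper names are identified and typed:

# Sketch of named entity recognition with spaCy: proper names are detected and typed,
# so names such as "Kreutzer" are treated as names rather than ordinary words.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Ralf Kreutzer wrote about Twitter and Facebook in Berlin.")

for ent in doc.ents:
    print(ent.text, ent.label_)    # e.g. "Ralf Kreutzer PERSON", "Berlin GPE"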
In the further processing of a text, the coreference resolution determines which
words belong to the same entity (a size or unit). Only through this are relevant con-
nections recognized. An example can illustrate this: In one sentence, “Volkswagen”
is discussed. The following sentence then follows: “The company looks back on a
long tradition of car manufacturing, in which it was able to grow in the long term.”
In this case, “company” and “it” belong to the entity “Volkswagen”. This assign-
ment is an important basis for a deep understanding, as is indispensable in NLU tasks.

Note Box
Programs of Natural Language Understanding start with the pure content
decoding of the text or the spoken word. In addition, the text is analyzed for
grammatical structures. Words are assigned to certain word groups. In addi-
tion, overarching connections are made that go beyond the actual content of
the text. Only the interaction of the various analysis steps enables a compre-
hensive understanding of the correct relationship of words—as a basis for
successful communication in the course of a Natural Language Processing.
The transition to Natural Language Generation (NLG) takes place when answering questions that are not based on “canned text” but are generated in
real time. This is the case, for example, with ChatGPT and Bard. Also, the transla-
tion of a text into another language requires a Natural Language Understanding.
This is indispensable for processing the original text and making a synonymous
translation—another exciting field of application for Natural Language Generation.
The automatic translation of texts from one language to another has evolved from
the primary use of predefined rules to the use of complex statistical models to the
use of neural networks to mimic human thinking.
In the summarization of texts, NLG is also used. The most compressed form of
summary is the categorization of a text, which summarizes the entire content in one
word or a group of words. This is to be illustrated using customer feedback. The
original text read: “The headphones were way too expensive!!! I have never experi-
enced such poor workmanship and the plug is also already breaking.” The comment
can be assigned to the “price” category by the keyword “expensive”. However, it is
often useful, as here, to make multiple assignments. Because in this example, the
categories “product” (through the term “headphones” and “plug”) and “quality”
(recognizable by the terms “workmanship” and “breaking”) are also addressed. A
concise summary of the customer feedback would read here: “Product too expen-
sive”, “Quality not convincing” and “Plug defective”. With many thousands of com-
ments that an online retailer receives, the evaluation is greatly facilitated if the
feedback is summarized in such categories.
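A heavily simplified sketch of such a keyword-based categorization could look like this (the category lists are invented; real systems typically use trained text classifiers):

# Toy sketch: assign customer feedback to categories via invented keyword lists;
# multiple assignments per comment are possible.
CATEGORIES = {
    "price": ["expensive", "price", "cheap"],
    "product": ["headphones", "plug", "cable"],
    "quality": ["workmanship", "breaking", "defective"],
}

def categorize(feedback):
    text = feedback.lower()
    return [cat for cat, keywords in CATEGORIES.items()
            if any(word in text for word in keywords)]

comment = ("The headphones were way too expensive!!! I have never experienced "
           "such poor workmanship and the plug is also already breaking.")
print(categorize(comment))   # ['price', 'product', 'quality']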
The goal of many NLP applications is to enable machines to communicate with people through natural language. In addition to a human-to-
machine communication, corresponding programs today also enable improved
human-to-human communication, by compensating for speech, writing and/or
reading disabilities through the use of artificial intelligence.
Natural Language Processing is of particular importance in human-to-machine
communication through so-called dialogue agents (Dialogue Agents). These
include chatbots, which can now be found on many websites. Also, digital personal
assistants, like Alexa, Bixby, Cortana, Google Assistant, Siri & Co., are among the
dialogue agents (see in depth Sect. 4.2.1). Here, we also speak of Interactive Voice
Response (IVR).
The term Chatbot is composed of “Chat” for “chatting” and “Bot” as a short
form of “robot”. A chatbot is a software with which a user can converse in written
or spoken language. A chatbot can answer questions on selected topics or perform
certain tasks (see [33], pp. 19–28):

• The first variants of chatbots were purely text-based dialogue systems, which
allowed chatting between a human and a technical system. For this, the chatbot
offers an area for text input and text output. Thus, a human can communicate in
natural, written language with a system. The text inputs are interpreted by Natural
Language Understanding. Subsequently, pre-formulated answers are presented.
• In voice-based dialogue systems, an input can also be made by the spoken word.
However, here too, only pre-formulated answers are presented, which are oriented towards an expected dialogue pattern. These are therefore rule-based bots in both cases (a minimal sketch of such a rule-based bot follows after this list). If users break out of these expected dialogue patterns and thus out of the predefined rules of a dialogue tree, the chat comes to a halt—or a human steps in.
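The following sketch shows how such a rule-based chatbot works in its simplest form: the input is matched against trigger words, and a pre-formulated answer is played out. The triggers, answers and fallback text are illustrative assumptions.

```python
# A deliberately simple, rule-based chatbot: trigger word -> canned answer.
RULES = {
    "opening hours": "We are open Monday to Friday from 9 am to 6 pm.",
    "return":        "You can return any item within 30 days.",
    "price":         "You will find all prices on the product pages of our shop.",
}
FALLBACK = "I did not understand that. A human colleague will take over."

def reply(user_input: str) -> str:
    text = user_input.lower()
    for trigger, answer in RULES.items():
        if trigger in text:
            return answer           # pre-formulated answer is played out
    return FALLBACK                 # dialogue tree is left -> hand over to a human

print(reply("What are your opening hours?"))
print(reply("Can I pay with Bitcoin?"))
```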

To control the chatbot, an avatar is often used. An avatar is an artificial person or a


graphic figure, which can be clearly assigned to the virtual world. Most users are
familiar with such figures from computer games. In the context of chatbots, these
are virtual helpers that are supposed to make communication with the system more
“natural” (see further Sect. 4.2.4).

Note Box
Not every chat window on a website is a chatbot. Many websites today offer
a live chat. This means that the user can communicate here with a real person.
The use of artificial intelligence is then not necessary. In live chat, natural
intelligence is used!
Furthermore, not every chatbot relies heavily on artificial intelligence.
Often, they are dialogue systems. Here, the questions are interpreted by
Natural Language Understanding. However, the answers are predefined and
are played out based on the corresponding trigger words.

Artificial intelligence is only used extensively in the so-called digital (personal)


assistants for language input and output. With digital assistants, the answers are
also generated by artificial intelligence—in real time. This is an important applica-
tion field of Natural Language Generation.
The distinction between chatbots and digital (personal) assistants is not always
easy. Simple, rule-based chatbots use speech recognition to understand the written
or spoken texts. The answers are then assigned based on predefined keywords.
However, a dynamic dialogue in which the AI system generates independent
answers does not occur here. This can only be achieved by a digital (personal) assis-
tant (see Sect. 4.2.1).
A subgroup of chatbots are the so-called social bots. These are active in social
media and operate from one account. They can create texts and comments, like and
forward content. When they enter into direct dialogue with users, their functionality
corresponds to that of chatbots. Social bots analyze posts and tweets and could
automatically become active when they recognize certain hashtags or other key-
words defined as relevant. As a result, social bots can amplify content (text and
image) circulating in social media and thus—depending on the evaluation—act eco-
nomically and politically manipulative. Such activities of social bots were observed
in election campaigns in the USA and Great Britain (see [34]). If these social bots
pretend to be real people, they are fake accounts with falsified user profiles. Such
fake accounts are often operated by countries that want to weaken Western democ-
racies through misinformation (so-called fake news) and conspiracy theories. Social
bots can also identify themselves as machines (see Sect. 4.3).

Through text-to-speech applications, AI agents can be trained with recordings


of speakers, politicians or actors so that these AI systems can adopt the voice color,
speech melody, emphasis and even the tempo of the original. As a result, all people whose speech samples are available online can have words and sentences put into their mouths that they never said. The following systems can be used for this, among others:

• Deepdub : https://deepdub.ai/
• DeepZen : https://deepzen.io/
• Firefly by Adobe: https://firefly.adobe.com/?workflow=route-­to-­path
• Respeecher: https://www.respeecher.com/

In the future, dubbing actors may no longer be needed—the translation is done by AI agents trained with these applications.
Meanwhile, more and more AI application fields are being presented that are
based on text input. Concepts such as Text-to-Music come to mind. Riffusion gen-
erates music from text specifications. This is based on the open-source project
Stable Diffusion. The AI engine is trained with the spectrograms of music as well as
with descriptive texts. A spectrogram is the visual representation of the temporal
course of the frequency spectrum of a signal—here of music. Through machine
learning, it is determined what relationship text and music have to each other. As a
result, Riffusion is able to generate “new” music from text input.
Users of Riffusion are asked to name their favorite styles and their favorite instru-
ments such as saxophone, piano and/or violin. In addition, genres such as jazz or
gospel, but also dances like tango or rumba can be entered as Text-Prompt. A
Prompt is an input that the AI model responds to. If desired, sounds like church bells
or rain are also integrated into the music. The result reflects the wishes, but often still
sounds very synthetic. You can test it here yourself: https://www.riffusion.com.
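To make the notion of a spectrogram more tangible, the following sketch computes one for a synthetic test tone. The test signal, sampling rate and window length are assumptions for illustration; Riffusion-like systems would feed spectrograms of real music into a generative model.

```python
# Sketch: computing a spectrogram -- the temporal course of a signal's
# frequency spectrum -- with NumPy and SciPy.
import numpy as np
from scipy.signal import spectrogram

fs = 22050                                    # sampling rate in Hz (assumption)
t = np.linspace(0, 2.0, int(2.0 * fs), endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)          # two seconds of a 440 Hz tone

frequencies, times, sxx = spectrogram(signal, fs=fs, nperseg=1024)
print(sxx.shape)    # (frequency bins, time frames) -> the "image" of the sound
```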
Google presents with MusicLM another text-to-music concept (see [35]). Based
on text input, the corresponding music is generated. To support further research, the
company published MusicCaps. This is a dataset with 5500 music-text pairs as well
as extensive text descriptions provided by human experts. This is intended to make
it easier for other researchers to work with it. These examples can be found here:
https://google-­research.github.io/seanet/musiclm/examples/.
Text-to-Picture concepts will be presented in the next section because they rep-
resent a particularly innovative form of computer vision.

1.5.2 Natural Image Processing—Computer Vision: Image Processing

The Image Processing represents another important field of application for Artificial
Intelligence (see Fig. 1.9). Image processing (also Natural Image Processing or
Computer Vision) refers to the processing of signals that represent images. The
result of image processing can either be an image again or a data set that represents

the features of the processed image (compare [36]). In the latter case, we speak of
image recognition or Image Recognition and of machine vision. Image recogni-
tion can refer to still images (photos) and to moving images (videos). In a subse-
quent step, the image information is processed to initiate decisions or further process
steps. This also involves a specific form of automated pattern recognition, which
visual intelligence is called. This forms the basis for the development of applica-
tions such as self-driving cars, autonomous drones, and automatic inventory checks
in retail stores. This form of image processing is distinct from image editing, where
the contents of images themselves are changed (e.g., through Adobe Photoshop).
An evaluation of still images (photos) is present, when people are to be recog-
nized in images. The process of image recognition is referred to as tagging. It was
used—intensely criticized—among others by Facebook. As a result, until 2021,
users could be automatically recognized in photos and videos uploaded to Facebook,
without having been previously marked by others. For this task, Facebook used the
profile pictures of the users as well as photos on which the persons had already been
clearly marked. Based on this data, a so-called digital identification tag was cre-
ated, which then served as a search grid over existing or newly uploaded image
material. Through this approach, Facebook received interesting data about activities
of and the networking between users:
Person A knows Person B. Persons A, C, and G were together at a party, hiking,
at the beach, on the Great Wall of China, etc. Person X was traveling alone in
Thailand.
This evaluation was heavily criticized by many parties. Besides the violation of
personal rights, there was primarily the risk that democratic and non-democratic
states could extensively monitor their citizens in this way. This form of identifica-
tion of individuals is commonplace in China today. Thus, people who had participated in demonstrations against the Corona regulations could later be identified on photos and videos and arrested. This is also facilitated by the surveillance cameras installed in public areas. The goal is to install one camera per eight people in public spaces in China. That would be about 175 million cameras (see in-depth on social scoring).
Image recognition is also used to find images that resemble a template. This
application can be found at Google Reverse Image Search. How convincingly this
already works today is shown by the search results in Fig. 1.12. The input of this
search was based on the photo, which can be seen in the figure at the top left. First
of all, I was clearly identified as “Ralf Kreutzer” based on the photo. As you can see,
this search also delivers quite convincing results—far beyond photos. In contrast to
earlier research, no misassignments were displayed on the first page of hits. The
meta-data of the photos may have contributed to this.
However, it is also often reported where image recognition still fails today. A
particular challenge for image recognition is shown in Fig. 1.13. Why do many AI
systems still not always succeed in correctly distinguishing between a dog and a
muffin? This is easy for humans! The explanation is quite simple: The algorithms
used today are trained by hundreds of thousands of images that show different
objects and are labeled with corresponding descriptions (see Supervised Learning

Fig. 1.12 Results of an image-based search on Google Reverse Image Search

Fig. 1.13 Limits of image recognition

under Sect. 1.1). The systems, however, do not understand the content meaning of
the photo, but focus on pure pattern recognition. The human intellect, on the other
hand, can easily distinguish a living creature from a pastry, because human intelli-
gence recognizes more than just vague patterns in the images.
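The pattern-recognition approach described above can be sketched with a pretrained image classifier. Using the torchvision library and a ResNet-50 model is an assumption of this sketch, and the file name "muffin.jpg" is a placeholder; the point is that the model only compares visual patterns with learned classes and has no understanding of what a muffin or a dog actually is.

```python
# Sketch: classifying an image with a pretrained convolutional network.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing pipeline
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("muffin.jpg").convert("RGB")   # placeholder file name
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    scores = model(batch)
print(scores.argmax().item())   # index of the most likely ImageNet class
```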
The limits of image recognition were also recently visible at Facebook. Users who had watched the tabloid Daily Mirror video “White man calls police because of black men at the harbor” were subsequently asked: “Do you want to continue watching videos about primates?” A Facebook spokeswoman was subsequently forced to issue the following statement: “This was clearly an unacceptable error!” This happened in the AI year 2021 (see [37]). Due to the current limits of AI-supported image

recognition, thousands of people—so-called “cleaners” or euphemistically “content


moderators”—are in use in many countries around the world. They check visual
posts in social media for unacceptable content (see Sect. 1.5).

Film Tip
“The Cleaners” is a documentary by directors Moritz Riesewieck and Hans
Block from 2018. This film describes the tasks and problems of content mod-
erators in the Philippines who have to delete “inappropriate” content in social
networks. Very worth seeing!

Note Box
The limits of image recognition by AI systems are (still) determined by the
fact that artificial intelligence only compares visual patterns with each other.
The meaning behind the patterns remains hidden to the systems (still).
In contrast, our thinking and our natural intelligence also rely on our abil-
ity to recognize the essence of a thing and to distinguish a thing from its
surface. Our perception thus goes beyond the superficial impression because
we associate additional content with the visual impression (cf. [38], p. N4).

The reason for this is simple: algorithms perceive facts differently than humans because they lack a “model of the world” as a general store of experience.
What AI systems (still) do not have is a sense of body and an intuitive understanding
of physics. We humans learn this “on the side” during our socialization. Therefore,
we often only need a single encounter with an object (a training unit) to reliably
recognize an animal again. Intuitively, we compare the new object with our first
experience: a body with four legs, fur, and a snout? That must be an animal! Very
easy for us, still extremely difficult for AI systems (cf. [39], p. 33).
AI systems still lack the ability to create a higher-level symbol (cf. Malsburg
[40], p. 11). Such a higher-level symbol is, for example, the image that we create in
our minds when we think of Easter Sunday morning. This image is composed of
a multitude of stored memories:

• Experiences (fruitless Easter egg hunt in our childhood garden)


• Images (a beautifully set breakfast table with daffodils)
• Smells (a wonderful roast lamb)
• Tastes (like the colorful fondant eggs)
• Sounds (the ringing of the city church)
• Feelings (when touching a Steiff Easter bunny)
• Moods (as when reciting Goethe’s Easter walk “From the ice freed...”)

The complexity of this linkage of very different memories at the trigger word
“Easter” can still not be achieved by any AI system today. We humans perform such
mental journeys—without any effort!
Today’s AI systems, on the other hand, have no biography of their own. They do
not have the phenomenal consciousness that is inherent to humans. Phenomenal consciousness refers to the totality of all mental states and events experienced by a human being that have a subjective character (see [41], p. 267). AI sys-
tems still lack a concept of consciousness. This presupposes that a living being
perceives the world with its senses and participates in it in order to develop and to
survive (see [42], p. 18).

Note Box
AI systems to this day still do not possess a common sense. They don’t even
know that they exist—and they can’t perceive themselves.

Through computer vision, known individuals can be recognized. Through facial


recognition and analysis, the similarity of a face to another image or to an image
from its own image database can be determined. Such facial comparisons can be
used in onboarding and authentication workflows to verify the identity of registered
users. This allows, for example, warnings in a smart home when unauthorized per-
sons are on the property. If, on the other hand, authorized persons approach the
entrance, the apartment or house can be adjusted to the respective arrival. This
includes, for example, the automatic turning on of lights in predefined rooms, play-
ing the preferred music up to the regulation of the heating to the desired tempera-
ture. In the course of facial recognition, other attributes can also be determined,
such as open eyes, glasses and facial hair. Text recognition makes it possible to extract machine-processable text from skewed and distorted lettering in images and videos of street signs as well as from posts in social media and from product packaging.
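How such a facial comparison can work is sketched below: two faces are mapped to numeric embedding vectors and compared via cosine similarity. The function face_embedding() is a hypothetical placeholder, and the threshold value is an assumption; real systems obtain such vectors from a trained face-recognition network and calibrate the threshold on reference data.

```python
# Sketch: comparing two faces via embedding vectors and cosine similarity.
import numpy as np

def face_embedding(image_path: str) -> np.ndarray:
    """Hypothetical helper returning a fixed-length face embedding vector."""
    raise NotImplementedError("replace with a real face-recognition model")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(img_a: str, img_b: str, threshold: float = 0.8) -> bool:
    # threshold is an assumption and must be calibrated on real data
    return cosine_similarity(face_embedding(img_a),
                             face_embedding(img_b)) >= threshold
```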

Note Box
It should only be pointed out for the sake of completeness that with applica-
tions for facial recognition, the relevant laws of the respective countries on
data protection must be taken into account.

Facial recognition systems are already being used in many areas today. With
Touch ID, Apple had already enabled authentication through a fingerprint. The
Face ID from Apple now allows authentication via facial recognition—intuitive to
perform and secure. For this, a quick glance at the display is enough to securely
unlock iPhone or iPad Pro. With Face ID, purchases in the iTunes Store, the App
Store and the Book Store as well as payments with Apple Pay and other transac-
tions can also be authorized. App developers can also allow users to log in to their
apps using Face ID. Another use case for facial recognition is the access control

for employees and suppliers of companies. Event attendees, too, can authenticate themselves through facial recognition systems before they pass through the entrance.
An equally fascinating and frightening use of facial recognition started in
Moscow: With the payment system via facial recognition named FacePay, the face
replaces the required ticket in the Moscow subway. The passengers can pay here
with a glance into a camera. To use this payment system, users must register in
advance on the website of the Moscow Metro. Here, the phone number and bank
details must also be provided. In addition, a current personal photo must be uploaded
(cf. [43]).
What speaks for this solution? Convenience! But what risks are associated with
it in a country that is not exactly known for its democratic and pluralistic society?
Through FacePay, very comprehensive movement profiles of people can be cre-
ated. This can determine when people go to demonstrations or to “politically sensi-
tive places”. These places include, for example, prisons or scenes of court
proceedings against political prisoners. The state has direct access to this data—
digitally provided by the respective users themselves! If the data from FacePay is
linked with data from other stationary and mobile surveillance cameras, a compre-
hensive transparency about the activities of political opponents, activists, journal-
ists, and others can be created. Especially in Russia, facial recognition software is
intensively used—and in Moscow alone, more than 200,000 surveillance cameras
are already installed.

Fictional Reading Tip


Anyone who wants to understand in an exciting way the consequences of
comprehensive video recording for each individual and society as a whole
should read the very exciting book by Dave Eggers “The Circle”. The epony-
mous film with Emma Watson is also good; but in my opinion, the book is
far better.

Interesting applications of computer vision today offer solutions like DALL-E 2


(belongs to OpenAI), Midjourney and Stable Diffusion (belongs to Stability AI).
These are text-to-picture generators. By entering a text description, images can be
created. This is also a generative artificial intelligence. In which areas can text-to-
picture generators be used effectively? AI-generated images can be used to illus-
trate blogs and advertising campaigns, but also for book covers for example. An
example of such a cover is shown in Fig. 1.14. There are no limits to creativity
regarding further areas of application (cf. in depth Sect. 4.6).
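How a text-to-picture generation can be triggered in practice is sketched below. Using the open-source Stable Diffusion model via Hugging Face’s diffusers library is an assumption of this sketch (DALL-E 2 and Midjourney are accessed through their own services), and the prompt and file names are placeholders.

```python
# Sketch: generating an image from a text prompt with Stable Diffusion.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")   # a GPU is assumed here

prompt = "a watercolor book cover showing a robot reading in a library"
image = pipe(prompt).images[0]
image.save("cover_draft.png")
```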
The potential copyright infringements by Text-to-Picture generators are cur-
rently being intensively discussed, especially when existing image material is used
in the creation process. If the training of the AI agents used image material with a
corresponding Creative Commons license, complaints against these services will be difficult. Here, it is often difficult for artists to concretely prove the extent to which their own creations have been incorporated into a final image.

Fig. 1.14 AI-generated book cover (source: © Springer Fachmedien Wiesbaden GmbH)
However, Text-to-Picture concepts also pose other dangers. Here, one can think
of Deepfakes. To create fake, but deceptively real-looking images as well as audio
or video recordings, image or audio material from the real world is collected, ana-
lyzed, and then systematically manipulated. This creates content that appears very
realistic, but is fake. Deepfakes are not only used—completely legally—in films,
but also illegally, for example by turning personalities into porn actors. It becomes
particularly critical, however, when misinterpretations and rumors are spread in this
way, intended to undermine democracies. If convincing misinformation is intro-
duced into public discourse, this can weaken trust in state organs, government
action, and thus democracy as a whole. States like China and Russia have a great
interest in this, in order to be able to “shine” themselves. An example of Deepfakes
is provided by the following video with Barack Obama : https://www.youtube.com/
watch?v=cQ54GDm1eL0.
Based on the precise calculation of image points, familiar faces can be inserted
into all possible settings. In addition to the positive effects of new design

possibilities, this advance carries significant dangers. In the future, the image con-
tent of any photo on social media can be questioned, as the alleged evidence photo
of a crime could be a forgery. These creative forgeries not only pose challenges to
the justice system, but also to enlightened humanity in terms of the credibility of
image material. These possibilities for forgery can also be transferred to videos, as
the above example of Barack Obama impressively shows. The AI system used was
fed with 13 h of video material of the former US president. This allowed it to accu-
rately capture the movement of his mouth and apply it when entering a new speech.
With such an application, any interested user can hear their own words from the
mouth of Barack Obama. Such forgeries are possible for any person for whom enough digital image material is available.

Food for Thought


Fake News 2.0 : The possibilities of forgery in photos and videos will make
it increasingly difficult to detect forgeries. Since images leave a particularly
lasting impression on viewers, the danger of comprehensive manipulation of
individuals and entire societies should not be underestimated.
The exciting question therefore is: What effects can be expected when any
event can be visually staged at any time—by whoever—regardless of whether
it ever took place in that way?
The technical term coined for these forgeries is Deepfakes.

In light of such developments, Fake Detection represents an important task field


of Artificial Intelligence. The core issue here is to identify false reports from correct
news in the different online sources. The collection of training data often poses the
greatest challenge in the development of such AI algorithms. After all, information
must be available about which news is “correct” and which is “false”—and this in
an infinitely comprehensive gushing flood of information that constantly generates
new content. This can lead to content that was false yesterday being correct today.
For example, the statement that Elon Musk has laid off 3000 employees at Twitter
could have been false yesterday. Today the layoffs were carried out—consequently,
the statement is now correct.
Here, the attribute “false” can appear in various forms:

• Posts can be glaringly incorrect, i.e., not truthful.


• Posts can represent a correct result, but make (some) false interpretations.
• Posts can “dress up” as “pseudoscientific”, i.e., establish an apparent scientific
reference, which is not actually given (e.g., in a non-representative survey).
• Posts can be news disguised as opinions, offers and/or company
recommendations.
• Posts can be distorted because a Pro-domo effect is present (here a company can
speak positively about its own services, even if this is not true).
• Posts can ironically or sarcastically express the opposite of what is actually
meant—a form of expression that not everyone may recognize as such.
• Posts can contain quotes from other sources, whose statement the author agrees with—or not.
• Posts can be taken out of context, distorting the meaning, and thus convey a completely different content than originally intended by the sender.

Food for Thought
The significance of discovering biased and/or false news has been shown by the developments in the US election campaign and the Brexit vote. Current observations of how individual groups aim for targeted disinformation of broader population groups, and thus a weakening of Western democracies, underline the relevance of Fake Detection.
Among the tasks to be mastered here is also the identification of fake accounts,
which have nestled in the social media (see also Sect. 4.2.3). Such fake accounts
are also called sock puppet or sockpuppet. This term is derived from a ventrilo-
quist with a hand puppet. Fake accounts refer to (additional) user accounts, which
are used to pursue various goals. The use is legitimate for the protection of one’s
own privacy. However, they can also serve to represent opinions within a commu-
nity with multiple voices—in order to distort the mood. Fake accounts are also
used to circumvent the rules of a community and deliberately provoke or disrupt
dialogues.
It is regularly reported that Facebook or Twitter have again identified and closed
hundreds of such fake accounts. It can be assumed that at the same time, new fake
accounts are being opened by the so-called troll factories. A troll in the online
environment is a person who through their communication primarily emotionally
provokes other participants in the conversation, hinders communication on the
Internet in a destructive way and/or wants to spread biased contributions. These
trolls cause trouble not only in social networks but also in discussion groups, blogs,
and chat rooms. In addition, these propagandists try to place their “contributions” in
wikis as well, in order to manipulate the public perception and opinion. In addition,
ratings of videos and other contributions in social media are falsified. At the same
time, attempts are made to give their own posts greater visibility. For this purpose,
the troll can provoke and motivate people or chatbots to comment or share contribu-
tions at a certain time in a certain way. This can dramatically distort the supposed
popularity of a message.
It is not a trivial task for platform operators to identify and exclude these black
sheep. If the criteria are set too “sharp”, accounts of “uninvolved” are also closed—
possibly because they have repeatedly spread false reports to draw attention to the
problem. If the criteria are defined too “loosely”, many “black sheep” remain unrecognized. Artificial intelligence can make a significant contribution to recognizing pat-
terns that indicate manipulative bots and posts. Appropriate triggers can be the
timing and frequency of posts, the focus on a specific target audience as well as the
dominant contents and their tonality.
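The triggers mentioned above (timing and frequency of posts, content repetition) can be combined into a simple heuristic score, as the following sketch shows. The thresholds and weights are assumptions for illustration only; real detection systems learn such patterns from labeled data.

```python
# Heuristic sketch: scoring an account on bot-typical posting patterns.
from datetime import datetime

def bot_score(post_times: list[datetime], texts: list[str]) -> float:
    """Combine posting frequency, timing and repetition into a rough score."""
    posts_per_day = len(post_times) / max(
        (max(post_times) - min(post_times)).days, 1
    )
    night_share = sum(t.hour < 6 for t in post_times) / len(post_times)
    repetition = 1 - len(set(texts)) / len(texts)   # share of duplicate texts

    score = 0.0
    if posts_per_day > 50:      # unusually high posting frequency (assumption)
        score += 0.4
    if night_share > 0.5:       # most posts in the middle of the night
        score += 0.3
    score += 0.3 * repetition   # many identical or copied posts
    return score                # values near 1.0 suggest a manipulative bot
```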

Food for Thought


As artificial intelligence progresses in the identification of fake news, its use
for generating these fake news will also be improved.

Despite these challenges, the human eye is still indispensable. For this purpose, Facebook, Google and Co. employ the already mentioned content moderators. In the corresponding service centers, these employees have the task of checking content that cannot be clearly evaluated before it is blocked or released. AI-driven
algorithms are still not sufficient for this.
The early detection of false information can be relevant for very different busi-
ness areas (see [44]). Here, one should think of the marketing department, which
should quickly recognize tendentious (false) representations. Risk management, the
research and development department, sales, and even the human resources depart-
ment can benefit from early detection. Companies are attacked in the following
ways, which are not mutually exclusive:

• Spreading defamatory content about the company, its representatives and/or its
offers and brands
• Deterrence of potential customers through misinformation about the quality of
products and/or services
• Impairment of the employer image through fake reviews by employees, who
have never worked there

Note Box
There will be more and more AI systems in the future that manipulate texts,
images, and videos. At the same time, more and more AI systems are being
developed to detect such manipulations.
However, it should be noted:
A manipulation can only be recognized as such after its publication. But by
then, this fake news may have already spread worldwide, sowing hatred and
discord, and possibly triggering regional and global crises.
It is also questionable how many of the worldwide recipients of a manipulated message will even notice the note “Beware, manipulation!” that is added later.
Here too, only a comprehensive development of media competence can
help to recognize forgeries as an informed recipient.
The “Digital Natives” (born from 1980 onwards) are often mistakenly
attributed comprehensive digital competence. However, this is often not the
case. The digital competence of these individuals often does not extend
beyond a mere operational competence of applications. In addition, they
are often not able to distinguish between credible and incredible content on
the Internet. This is where the lack of media competence is particularly
evident.

1.5.3 Expert Systems: Expert Systems

Expert systems are another exciting application area of artificial intelligence (see
Fig. 1.9). Expert systems are computer programs that assist people in solving com-
plex issues—like a human expert. For this purpose, the programs derive concrete
recommendations for action based on a systemically available knowledge base. To
do this, the systems need to be provided with a multitude of information. An impor-
tant data basis are the so-called if-then relationships, which make human knowledge
comprehensible for computers. The use of artificial intelligence has significantly
advanced the expert systems that have been in use for many decades. On the one
hand, AI systems can evaluate increasingly extensive data sets. In addition, new
insights are generated through the various forms of machine learning—often even
in real time! We will be able to observe significant developmental leaps here in the coming years.
The following components of expert systems can be distinguished:

• Knowledge acquisition component


This component is about building and expanding a knowledge base to base deci-
sions on the most relevant data possible. The challenge is, among other things, to
tap into the data streams of Big Data (Second and Third Party Data) in addition
to internal company data (First Party Data). The relevant information for deci-
sions must be filtered out from the variety of this data.
• Component for developing problem solutions
Various approaches can be used in the solution-oriented evaluation of the knowl-
edge base. On the one hand, a development in the future can be inferred based on
the available data. This is referred to as a data-driven or forward-chained approach (a minimal sketch of such rule-based forward chaining follows after this list). This is used, for example, with Robo Advisors in investment advice (see Sect. 9.1). Developments are predicted from past data.
On the other hand, it can be “calculated back” from an identified development
status in the future. For example, it should be determined which triggers were
responsible for certain developments. With this event-induced or backward-
chained approach, hypotheses about the course of already completed processes
can be formed. Global warming is a good example of this. Based on the knowl-
edge gained in this way, decisions or decision recommendations can be gener-
ated, for example, to ensure the extent of “acceptable” global warming. Such
concepts are used, for example, by the Intergovernmental Panel on Climate
Change (IPCC).
• Component for communicating solutions
Another important component of an expert system is the “output function”. This
is about explaining the found solutions to the user of the system. The quality of
this explanation is an important acceptance criterion for the found solutions.
After all, users of AI systems generally do not like to blindly submit to an incom-
prehensible recommendation (see Sect. 1.7.3). In the case of global warming, for
example, the challenge is to prepare the recognized relationships in such a way

that they are actually comprehensible for interested persons and ideally show
concrete impulses for action.
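The if-then relationships and the data-driven (forward-chained) evaluation described above can be illustrated with a minimal rule engine: rules are applied to known facts until no new conclusions can be derived. The rules and facts below are illustrative assumptions.

```python
# Minimal sketch of forward chaining over if-then rules.
RULES = [
    ({"temperature_high", "pressure_drop"}, "turbine_at_risk"),
    ({"turbine_at_risk"}, "schedule_maintenance"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Apply the rules to the facts until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)   # fire the rule
                changed = True
    return derived

print(forward_chain({"temperature_high", "pressure_drop"}))
# -> includes 'turbine_at_risk' and 'schedule_maintenance'
```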

It is easy to see why such expert systems are of very high importance in the context
of artificial intelligence. AI-supported systems can—based on a certain starting
knowledge constellation—learn independently and thus expand and potentially
overcome the horizon of human knowledge. The knowledge gained can either be made available to the users; then humans put the results and recommendations into action. Such an expert system focused on creation was used to
assist music specialists in the composition of the tenth Beethoven Symphony (see
Sect. 4.6.3).
The insights gained can also be directly incorporated into ongoing processes
without human intervention. This happens, for example, in production for quality
assurance and in logistics for controlling the flow of goods (see Chap. 3). Expert
systems are also used in the evaluation of X-ray and CT images in the healthcare
sector (see Sect. 5.1).
The access to powerful expert systems will be made increasingly available to
“normal” users in the future. This is where so-called (digital) Self-Service
Technologies (SST) come into play. A simple application of this kind underlies
every Google search query. Translation aids from DeepL, Google Translate, Skype &
Co. also use corresponding expert systems for Text-to-Text—as well as for Speech-
to-Speech translation in real time. Users often completely rely on the translation
services achieved in this way. Also, ChatGPT can be interpreted in this sense as an
expert system, but its results should still be critically reflected (see Sect. 4.2.2).

Food for Thought


In our everyday life and work environment, we often make only good-enough
decisions due to a lack of or insufficient data (see [45], p. 110). Why are there
huge waiting halls at airports? They have to provide space for passengers to
wait. The reason for their waiting is due to an information gap that each
individual traveler has to compensate for with a time buffer. Due to insuffi-
cient networking of different information, a just-in-time arrival is not pos-
sible. For this, we lack information about

• The availability of taxis at the starting point,


• The traffic conditions on the feeder roads (for your own car, a taxi, or
the bus),
• The extent of the tardiness and unreliability of the German Railways and
other train companies,
• The parking conditions and the traffic volume at the airport,
• The shortest routes on the airport grounds,
• The queues at the baggage check-in and the security checks as well as
• The actual boarding time and the real departure time of the booked flight.

Therefore, we must plan for many buffer times if we do not want to miss our
flight. Perhaps in the distant future, an autonomously acting vehicle will pick
us up at the latest possible time and take us to the airport, without us ever
missing a flight—and all without stress. For this, the relevant information
streams would be combined in a traffic expert system. A minimal prelimi-
nary stage of this is the app of the German Railways, in which at least the
expected delays are displayed relatively precisely—and often also the seat,
when trains enter the station in reverse car order. Sometimes recommenda-
tions for alternative means of transport are already being made when once
again “delays in the operational sequence” have occurred. However, you cannot always rely on this information, even though it is often available in real time!

Note Box
Expert systems will also increasingly penetrate the working world in the
future. Thanks to AI systems, hopefully, fewer good-enough decisions will
have to be made in the future based on an insufficient data basis.

1.5.4 Robotics: Robots

Robots represent another field of application for Artificial Intelligence (see Fig. 1.9).
The term robot refers to technical apparatuses that take over human—so far mostly
mechanical—work or other tasks. The following types of robots can be distin-
guished. The demarcation between the individual categories is not always clear-cut.

• Classification of robots according to fields of application
  – Industrial robots (e.g., in the automotive industry or for filling cosmetics and processing food)
  – Medical robots (for direct/indirect patient care, nursing, performing surgeries)
  – Service robots
    – Business use (e.g., for check-in at hotels and airports)
    – Private use (such as a vacuum, window cleaning, weed pulling, or lawn mowing robot)
  – Exploration, firefighting, and military robots (for exploring the surface of Mars, for firefighting or for defusing mines and unexploded ordnance; such robots are also used as drones)
  – Toy robots (children can design models with motors and sensors using Lego Boost and activate them with code commands; other toy robots are offered by Tinkerbot and Cozmo)
  – Navigation robots (e.g., for autonomous driving for people and materials)
• Classification of robots according to their degree of mobility
  – Stationary robots (integrated into production lines, e.g., in the manufacture of chips and batteries)
  – Mobile robots (for example, for logistics processes in delivery by self-driving vehicles and drones or for self-controlled use as a mowing robot); an exciting example can be found here: https://www.youtube.com/user/BostonDynamics
• Classification of robots according to the degree of their interaction with humans
  – Classic robots (work independently of humans; are often located in fenced areas, so that humans are not harmed by robots)
  – Cobots/Collaborative robots (can work “hand in hand” with humans, because the robots recognize humans and act accordingly “carefully”; new variants of cobots even react to touch)
• Classification of robots according to the degree of their “human appearance”
  – Machine-like robots (look like machines and also act like them)
  – Humanoid robots (resemble humans and increasingly mimic human behaviors; their human-like appearance opens up new fields of application, e.g., as sex dolls with adjustable moods and additional features, for example to conduct conversations and tell jokes)

Note Box
Robotics has so far mainly been about the development and training of robots so that they can interact with people and the world in a predictable way. Now robots are increasingly emancipating themselves from a predefined task fulfillment. Through the increased use of AI, robots can learn independently to act appropriately even in unplanned situations and to show a certain degree of “digital” self-awareness.

Food for Thought


In recent years, the motor skills of robots have already improved enormously. Today, robots can handle raw eggs and move safely on legs through difficult terrain. However, it will probably take decades before robots independently empty the dishwasher and distribute the rinsed objects correctly in the cabinets. If at all!

The following basic components of robots can be distinguished:

• Sensors for capturing the environment


Robots are equipped with different sensors with which they can capture their envi-
ronment. In autonomous driving, information is obtained through cameras as well
as radar and LiDAR systems (Light Detection and Ranging). LiDAR is a capture
of the environment through three-dimensional laser scanning.
Through the information gained via sensors, robots can react to the continuously
changing environment. This perception can refer to a suddenly appearing pedes-

trian, to the next workpiece to be processed or to a drop in pressure at turbine 13.


Movements are also perceived via sensors and interpreted by AI, e.g. to interact
with people (keyword Cobot). Finally, direct instructions can also be given to the
robot via spoken language, which are then converted into actions.
• Set of functions
Depending on the complexity level of the robot, it can only perform “hard-wired”
functions. This can be the setting of 24 welding points or the painting of a body.
An AI-controlled robot, on the other hand, can learn independently and increase
the efficiency and range of its use continuously.
• Movement components
Simple industrial robots are firmly anchored and are separated from humans by a cage, because humans would not be recognized by the robot and could be injured.
“Seeing” robots (for example in logistics use) can independently navigate
through warehouses and avoid obstacles.
• Interaction with the environment
For interaction with the environment, gripping arms and the like may be present
to perform the programmed functions. In addition, an interface for interaction
with the robot is required to make its tasks and other data available to it. This can
be done classically through program codes. Alternatively, the robot can perceive
its environment through sensors. Then the robot recognizes independently what
to do next. Or it learns through “pre-lived” motion sequences that humans show
it. Alternatively, an auditory interface can also be used. Then, for example, the
command is: “Request next workpiece!” The robot’s reaction can be a special
movement—or the output of information by text or spoken language.

Compared to humans, robots have been able to play out a multitude of advantages
for many years. These include above all:

• Strength
• Endurance (no need for breaks or holidays; no classic states of exhaustion; no
reservations about night and weekend shifts)
• Precision
• Speed
• Unflappability (such as through mood swings or distractions of all kinds)
• Lower hourly wage (including all maintenance costs)
• No representation by trade unions

Today, another essential component is added, which will massively increase the
triumph of robot use in the coming years: artificial intelligence.

Note Box
Through artificial intelligence, robots increasingly have a very essential addi-
tional strength: Intelligence.
The resulting additional fields of application will fundamentally change
the world!

Fig. 1.15 Communication with a humanoid robot named Pepper

The briefly mentioned humanoid robots will now be discussed in more detail.
In the course of the development of these robots, many technical challenges had to
be and still have to be overcome. Artificial intelligence has made a significant con-
tribution to this. Humanoid robots are supposed to interact autonomously with
their respective environment and also move independently. For this purpose, either
legs or a platform with wheels are used. Their human likeness is given to the
robots through artificial arms and hands and a face modeled after humans (see
Fig. 1.15).
The “cute” form of the humanoid robot Pepper does not represent the end of the
development of this type of robots. The stage of development that has already been
reached is demonstrated by the example of the robot named Sophia, which was
introduced as early as 2016. An impression of Sophia is provided by the following
video: https://www.youtube.com/watch?v=W0_DPi0PmF0. Here, one can no lon-
ger simply speak of a “human-like” face. So far, humanoid robots like Pepper have
been deliberately portrayed as cute to avoid scaring people and to prevent fears of
substitution. However, this time is now over.
Now, humanoid robots are becoming increasingly human-like. These robots can
increasingly also show human facial expressions. You can see what this might look
like here, using the example of the robot named Ameca: https://www.youtube.
com/watch?v=IPukuYb9xWw. If you combine the human appearance of these
robots with the knowledge of IBM Watson or other AI systems, then humans are
facing increasingly powerful competition. After all, comprehensive research proj-
ects are underway to enable robots to also have human-like perception. This should
enable robots to also perceive moods and emotional states. In addition, they should

recognize and correctly interpret the gestures and facial expressions of their
counterparts.

Note Box
A copy of a human combined with the gigantic learning and performance
capabilities of computers is no longer a fiction.
Such a robot could learn new languages every day via download from the
cloud, have the latest scientific findings and other new “tricks” at its disposal.
Necessary updates can be made in real time to always be up to date!

What does dealing with robots look like today—and in which direction should
developments be driven? At the moment, people in Germany want robots that speak
like humans, behave like humans, and can also recognize human emotions. But they
should not look like real humans—not yet!
In this context, the term Uncanny Valley is used. This refers to a “creepy valley”
or a “horror trench”, which describes the acceptance gap for “too human” robots.
The acceptance of technically simulated behavior depends on the human likeness of
the carrier (e.g., robot, avatar). As Fig. 1.16 shows, acceptance initially increases
steadily with increasing human likeness. Then there is a sharp drop. The high human
likeness of the robots becomes uncomfortable for humans. This drop is referred to
as the “Uncanny Valley”. This means: humans sometimes find highly abstract, com-
pletely artificial figures more sympathetic and acceptable than figures that are par-
ticularly human-like or naturally designed. The Uncanny Valley could potentially be
overcome by robot versions that can no longer be distinguished from real humans. It’s all a matter of habit!
Fig. 1.16 Uncanny Valley—the creepy trench (familiarity plotted against human likeness from 0 to 100%: acceptance rises from industrial robots via plush animals to humanoid robots, drops sharply into the valley around corpse- and zombie-like figures, and rises again toward the healthy person; moving figures amplify the effect)

The already mentioned autonomously driving vehicle is also essentially a complex robot that draws on a multitude of artificial intelligence functions. First, several cameras record the vehicle’s surroundings. The obtained images are
evaluated and decisions are derived from them—all in real time. If a red light rele-
vant to the own lane is recognized, the car stops—oriented to further environmental
information (e.g., which vehicles ahead are also braking and which are following). If a
speed limit of 70 km/h is recognized as relevant for the own direction of travel, the
vehicle is automatically slowed down to this target speed if it was previously travel-
ing faster. Since human lives can be directly affected here, as several deaths in con-
nection with the use of autonomously driving vehicles have shown, particularly high safety standards must be met here.

Food for Thought


Perhaps we will soon be talking about a Generation R or Generation
Robotic, the Robotic Natives. The members of this generation will interact
with robots as naturally as kids today do with smartphones and the internet—
and in the past with Lego bricks and the Märklin train set.

Especially the perception of the environment has always been a great challenge
for robots. Even early robot models in the 1970s (like e.g. ELIZA) were programmed
to recognize a wall in a room. Today, however, it is about much more. A robot
should not only locate a building, but ideally also map it. This task is referred to as
Simultaneous Localization and Mapping (SLAM)—as simultaneous position
determination and mapping. It is a skill that humans master at a very young age:

• Where is the door?


• Through which room do I have to go to get to the bathroom?
• Where are which objects in the room?
• Where am I myself?

The exploration of unknown environments and safe orientation in them is intuitive


for humans. The sensory organs eyes, skin, ears, and nose pick up the relevant infor-
mation for this. They enable absolute and relative orientation in space.
For robotics, the autonomous exploration to create a map of an unknown envi-
ronment is a fundamental problem. The quality of the map has a direct influence on
the quality of subsequent robot operation. Instabilities in Simultaneous Localization
and Mapping can lead to maps of poor quality and navigation errors during or after
exploration. Therefore, various AI systems have been developed that can perform a
SLAM safely and effectively. For this, the systems are equipped with various sensors. These include ultrasound, cameras or LiDAR. Only the three-
dimensional capture of the environment allows a robot to move locally safely, to
detect obstacles in time and to avoid them. If the absolute position of the robot is
known (for example through GPS sensors), a map can be built up.
Only through SLAM is a robot therefore able to map its surroundings and at the
same time determine its position within this environment. So far, SLAM has been a
challenge for AI systems. After all, the robot had to determine its position in real
time and at the same time create a map of the environment. This requires complex

data processing and analysis as well as the ability to move in real time in a changing
environment. The great advances in machine learning and in image processing have
now led to a new generation of SLAM systems. These systems can perform real-
time positioning and mapping at a higher level of accuracy and are more robust and
faster than previous SLAM systems.
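To make the mapping half of SLAM more concrete, the following highly simplified sketch writes range measurements taken from a known robot pose into an occupancy grid. The grid size, pose and measurements are illustrative assumptions; real SLAM additionally has to estimate the pose itself at the same time, which is exactly what makes the problem hard.

```python
# Highly simplified sketch of occupancy-grid mapping (the "M" in SLAM).
import math
import numpy as np

GRID = np.zeros((20, 20), dtype=int)        # 0 = unknown/free, 1 = obstacle
ROBOT_X, ROBOT_Y = 10.0, 10.0               # assumed, already localized pose

# (bearing in radians, measured distance in grid cells) from a laser scan
measurements = [(0.0, 4.0), (math.pi / 2, 3.0), (math.pi, 6.0)]

for bearing, distance in measurements:
    obstacle_x = int(round(ROBOT_X + distance * math.cos(bearing)))
    obstacle_y = int(round(ROBOT_Y + distance * math.sin(bearing)))
    if 0 <= obstacle_x < GRID.shape[0] and 0 <= obstacle_y < GRID.shape[1]:
        GRID[obstacle_x, obstacle_y] = 1    # mark the detected obstacle

print(GRID.sum(), "obstacles mapped")
```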
Examples of AI systems that master SLAM safely include self-driving cars,
drones, and robots in industrial automation. These systems can determine their posi-
tion in real time and map their environment. As a result, these robots can avoid
obstacles and navigate effectively. Although SLAM is already an advanced AI tech-
nology, there is still room for improvements and advances. This particularly affects
the accuracy and reliability of positioning and mapping. Today there are already
many providers of SLAM solutions. These include, among others, the following
companies:

• Microsoft
Spatial Mapping provides a detailed representation of the real surfaces in the
environment of the HoloLens and allows developers to create a compelling
mixed-reality experience. By merging the real world with the virtual world, an
application can make holograms appear real. Applications can also adapt more
naturally to user expectations by offering familiar behaviors and interactions
from the real world (see [46]).
• Amazon
Amazon has developed a series of robots with Amazon Robotics that are used in
industrial automation. They master SLAM-based navigation and the correspond-
ing mapping (see [47]).
• Intel
Intel has developed a SLAM system called RealSense that is integrated into cam-
eras and sensors. It allows developers to create applications for robotics, automa-
tion, and augmented reality (see [48]).

There are also many smaller companies that offer SLAM solutions. This is espe-
cially the case with robotics and industrial automation. As the demand for SLAM
technologies in various industries and application areas continues to grow, the num-
ber of providers of SLAM solutions will also continue to increase.
Artificial intelligence technologies and especially robots can be differentiated
according to their degree of automation. The five-stage model in Fig. 1.17 visual-
izes the possible division of labor between human and machine action. The
degree of automation of decisions depends on the complexity of the issue and the
performance of the AI system used. The following examples make clear the legal,
ethical, and economic questions associated with the chosen division of labor.

Fig. 1.17 Five-stage model of decision automation (from the human deciding alone through 1 assisted, 2 partial, 3 checked, 4 delegated to 5 autonomous decision-making)

1. In assisted decision-making, an AI system supports humans in their decisions. This can be an AI algorithm that suggests purchases on Amazon or leads to auto-completion of our searches on Google. The autocorrect on the smartphone is another example of assisted decision-making. Many funny and annoying examples of correspondingly “optimized” text messages make it clear that some users
already allow the AI system to make autonomous decisions.
2. In partial decision-making, the AI system already makes decisions for the user.
This is the case with online search processes. Here, information is presented or
withheld from the user according to certain (non-transparent) algorithms.
Ultimately, however, it is up to the searcher to decide whether he is satisfied with
the results or searches for further content with a modified query.
3. In checked decision-making, possible decisions of the AI system are checked
by humans. This leads to a cross-validation. If the AI system and the human
come to the same result, it must fit. The currently available AI-based translation
programs—especially DeepL—are already in the transition area from checked
decision-making to delegated decision-making. However, a critical analysis of
the translation results achieved today can—especially with more complex
issues—still reveal many errors. But there are fewer and fewer.
4. In delegated decision-making, (partial) tasks are consciously shifted from
humans to an AI system. This is often the case with quality controls in produc-
tion. Here, corresponding systems independently decide whether a product
meets the quality requirements or not. Depending on the result, process steps
for error prevention and/or error correction automatically follow. These can be,
for example, a more in-depth quality check or an immediate removal of the
products.
5. In autonomous decision-making, entire task complexes are shifted to an AI
system. Tasks and decisions are executed or made there without further human
intervention or review. Autonomous driving is a convincing example of this.

Such a delegation of decisions to AI systems has different consequences. It is of lesser relevance which purchase recommendations are made to a customer at Amazon or Zalando solely AI-supported and thus without human intervention.
Errors by automated translation systems like Google Translate or—much more
powerful—DeepL will in most cases not have serious effects on life and survival.
The situation is quite different with autonomous driving. Here, the term alone
makes it clear that the driver has handed over the complete decision-making respon-
sibility to a robot. The clearest sign of this is when these vehicles lack both the
steering wheel and the gas and brake pedals! AI systems must make all decisions in
real time—where it always involves life and death. Because even a slight and brief
deviation from one’s own lane can endanger one’s own and other people’s lives
(compare to the trolley problem).
Autonomous decision-making also exists in the social media. Here, users are
presented with or withheld information according to certain (non-transparent) algorithms of the platform operators. If a user does not balance this one-sided information supply by using other media that are not algorithmically curated or are played out by other algorithms, a very one-sided worldview can emerge. Then the so-called filter bubble or an echo chamber emerges, in which each individual only receives the information
that matches the algorithmically determined preferences. Then everyone lives in
their own (apparent) world, which can be more or less far removed from reality (see
fundamentally [49]; on the associated “digital seduction” [16]).

1.5.5 Affective Computing: Emotional AI—Emotional Artificial Intelligence

The development of Artificial Intelligence has in recent years focused primarily on the further development of the fields of application shown in Fig. 1.9. This was mainly about increasing linguistic, mathematical and logical thinking skills, improving image recognition and translating insights into actions, for example through robots. A focus was also on Generative AI. Increasing importance is now given to emotional Artificial Intelligence (also Emotional AI). Often the term Affective Computing is used. This is about the development of algorithms and systems to recognize, interpret, process, and simulate human emotions:

• Emotional AI for texts analyzes the written word through NLP and sentiment analysis. For example, in reviews, comments or messages, it is analyzed whether the contents are positive, neutral or negative (a minimal illustrative sketch follows after this list).
• Emotional AI for spoken language analyzes natural human language. Here,
calls in the service center or in emergency call centers can be analyzed and evalu-
ated. This way, not only speech patterns can be recognized, but also the contents
can be interpreted holistically.
• Emotional AI for video content analyzes eye and facial movements as well as
gait and body language. These signals provide information about emotional
states and can also be integrated into application processes (see Sect. 8.2).
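To make the idea of sentiment analysis more concrete, here is a deliberately simple, lexicon-based sketch in Python. The word lists and the scoring rule are illustrative assumptions only; real Emotional AI systems rely on trained language models rather than fixed word lists.

# Illustrative word lists; production systems learn such signals from data.
POSITIVE = {"great", "excellent", "love", "helpful", "fast", "friendly"}
NEGATIVE = {"bad", "terrible", "hate", "slow", "broken", "rude"}

def sentiment(text: str) -> str:
    # Count positive and negative words and compare the two counts.
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The support was friendly and fast"))            # positive
print(sentiment("Terrible service, the device arrived broken"))  # negative

Even this toy example shows the basic pattern: text is mapped to an emotional polarity that downstream systems (chatbots, service dashboards) can react to.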

Note Box
Affective Computing refers to the study and development of systems and
devices that can recognize, interpret, process, and simulate human emotions.
It is an interdisciplinary field that combines computer science, psychology,
and cognitive sciences. The goal of Affective Computing is to create
machines that can respond to human emotions in a way that resembles human
reaction, to enable a more natural and intuitive interaction between humans
and machines (see fundamentally [50]).

The idea behind Emotional AI is to give machines a kind of emotionality to enable more human-like interactions. This includes, for example, recognizing
aggression or affection in language as well as in photos and videos. Expert systems
can then also convey their responses—in writing and speech—in different emo-
tional ways. This gives an AI system much more options for action. Through an
emotion analysis, companies can, for example, better understand the emotional
state of their customers and employees. Then chatbots can react to the respective
emotional state of the users. Virtual assistants would be able to provide emotional
support. The emotional machine intelligence is also indispensable when robots are
to take on more tasks in the care and support of people. Such machine emotional
intelligence could also make a decisive contribution to overcoming the Uncanny
Valley—the “creepiness gap”—shown in Fig. 1.17. Perhaps the most successful AI systems will soon have such emotional machine intelligence, making their interactions feel very similar to human interaction.
The following shows selected application areas of Affective Computing (see
also Sect. 4.4):

• In the healthcare sector, Affective Computing can be used in therapy sessions to recognize and monitor the emotions of patients. It can also be used to detect and treat mood disorders, for example through remote monitoring.
• In the education sector, Affective Computing can be used to contribute to rec-
ognizing the emotional state of learners. This allows both human and digital
instructors to individually approach each person—through further learning hints,
different training material, and the like (see in depth Chap. 7).
• In customer service, affective computing can contribute to improving customer
satisfaction by enabling customer service staff and automated systems to better
respond to customers’ emotions.

Despite its great potential, affective computing still has some challenges to over-
come. One of these is the accuracy of emotion recognition. People express emo-
tions in different ways, and it can be difficult to account for these differences in a
computer model. AI-supported emotion recognition becomes more reliable when
additional data is taken into account. This includes the voice, breathing and heart
rate, and body posture.

In addition, there are also ethical concerns associated with Emotional AI. This
involves the protection of personal data used to determine emotional states. The col-
lection and analysis of emotional data can be seen as an invasion of privacy.
Moreover, artificially generated “human emotions”—the simulation of feelings—
also offer numerous opportunities for manipulation. The development of guidelines
and regulations is therefore crucial. This ensures that the benefits of emotional arti-
ficial intelligence are utilized without violating users’ privacy.
One thing to keep in mind:

Note Box
The lack of human emotions in AI systems has led to these systems mak-
ing fewer mistakes. After all, such AI systems were unfamiliar with mood
swings that can cloud human analysis, judgment, and decision-making
behavior.
What happens when AI systems are now given “human emotions”? Will
the AI solutions then become more human—and possibly automatically
worse? Or can better results be achieved by combining emotional and (analy-
sis-related) cognitive intelligence, if AI agents can also reflect their own feel-
ings and take them into account in their work?
After all, high emotional intelligence in humans is associated with the
ability to control one’s own impulses and suppress strong emotions—a pre-
requisite for acting rationally and minimizing emotional disturbances.
In any case, affective computing has the potential to fundamentally
change the way we interact with technology.

Many of the AI systems used today are trained to perform a single task. So far,
however, it is often not yet possible to transfer the skills acquired in one field to
other areas of application. For example, chess computers cannot give investment
advice and translation programs cannot generate images. This can either be affectionately referred to as island intelligence or, more harshly, as narrow specialization. In contrast, humans are true universalists. Here, human emotions play the role of an important navigator for finding one's way in new situations and applying other skills there. Gut feeling is often a convincing navigator. If it were possible to equip AI systems with similar abilities, this could be a major breakthrough for the self-supervised learning described in Sect. 1.1!
AI systems with emotional intelligence can be used in various fields. Sensors
today already make it possible to perceive not only words, but also different facial
expressions, the respective body posture, gestures used, and the language used with
voice frequency, volume, and speech pauses, etc. Combined with intelligent pro-
cessing of the content thus obtained it is possible to respond to the emotional state
of the human counterpart. Various companies are already incorporating emotional
machine intelligence into their applications, for example in recruiting processes
(see Sect. 8.2).

1.6 Summary

• The fields of application of artificial intelligence are closely interlinked.


• An important field of application of artificial intelligence is the processing and
generation of natural language. It enables new forms of communication between
humans and machines and will—for example through ChatGPT—fundamen-
tally change the search and writing behavior of people.
• The possibility of processing and generating images allows AI agents to take on
new tasks. This makes it possible for machines and humans to work hand in
hand. In addition, artificial intelligence is increasingly becoming the creator of
“new” visual content.
• However, the “new creation” of voices, photos, and videos is also to be critically
assessed. In the future, due to the emerging deepfakes, it will be increasingly
difficult to distinguish between fiction and truth, because AI-based forgeries will
hardly be recognizable anymore. This poses a great potential threat to
democracies.
• The provision of comprehensive knowledge through expert systems offers the
chance to make better decisions. However, a prerequisite for this is a trustworthy
database. In addition, it is necessary to check on the basis of which premises and
values decisions are made by these expert systems.
• Intelligent robots often use several or all fields of application of artificial intelli-
gence at the same time. They can “naturally” hear and see, make informed deci-
sions and execute them independently. They will have the greatest impact on
companies and economies.
• The possible division of labor between human and machine action is made visi-
ble by the five-stage model.
• Emotions are now also making their way into AI systems. They will enable
entirely new—more human—fields of application. This is the task field of
Affective Computing.

1.7 Ethics and Fairness of Artificial Intelligence

1.7.1 Current State of Research on Ethics and Fairness of Artificial Intelligence

Artificial Intelligence should not be used without a discussion of ethics and fairness. The state of affairs is documented by a study from Stanford University
[51]. According to this, the research on fairness and transparency in Artificial
Intelligence has increased explosively since 2014. The number of relevant publi-
cations at ethics conferences has quintupled. Algorithmic fairness and AI bias
are no longer purely academic issues. Rather, such questions have developed into
a mainstream research topic with far-reaching implications. With the dynamic
development of AI applications, various phenomena are emerging (cf. [51], pp. 11 f., 100–138):

• The language models used today are more powerful than ever before—but often
also more biased. A model with 280 billion parameters, developed in 2021, has a
29% higher triggered toxicity than a model with 117 million parameters from
2018. The systems are becoming increasingly powerful over time, but with
increasing power, the potential severity of their biases also increases.
• Multimodal models, which can process various data (text, image, sound etc.),
learn multimodal prejudices. By training multimodal text-image models,
unprecedented results have been achieved in image classification and the cre-
ation of images from text descriptions. However, these often reflect societal ste-
reotypes and prejudices in their results. For example, images of black people are
twice as likely to be classified as non-human as the images of other ethnicities.
• Commercial facial recognition systems and resume verification systems can
lead to discriminatory results based on race, age, and/or gender. The reason for
this is that AI models can reflect and amplify human social bias. Algorithms, on
the other hand, are considered fair when they make predictions that neither favor
nor discriminate against individuals or groups based on protected characteristics
that cannot be used for decision-making (see Sect. 8.3).

Such phenomena have led to many speculations about the future development of
Artificial Intelligence and the relationships between humans and machines.
Stephen Hawking has urgently warned that full AI could mean the end of human-
ity (see [52]). Also, the philosopher Nick Bostrom discusses in his much-noticed
book Superintelligence the possibility that computers could overtake humans in
terms of cognitive abilities (see [53]). In light of recent advances and the resulting
benefits of AI as well as the necessity of human intervention at various levels, a
cooperative approach between artificial intelligence and humans is expected or
demanded for the future. Since the use of AI cannot be stopped worldwide, the
question arises about the indispensable ethical requirements for artificial
intelligence.
When ethical questions are discussed, the core issue is the distinction between
“good” and “evil”. An important topic within ethics is the question of morality. The
empirical or descriptive ethics analyzes the behavior, customs, values, and morality of different groups, cultures, and/or countries. Artificial intelligence can make important contributions to this through description, inspection, and prediction (see Fig. 1.1). Normative ethics, on the other hand, deals with the “ought”: it defines what should happen. This refers to the area of prescription shown in Fig. 1.1. Especially here the importance
of morality and the distinction between “good” and “evil” becomes apparent. The
ability of an AI system to distinguish between these two poles is indispensable when
AI systems trigger autonomous decisions that are no longer questioned by humans.
Due to many uncertainties, further AI development is not predictable. Even specialists find it difficult to define when which results are to be expected, when or if a technological singularity or an intelligence explosion will be reached, and which developments in the field of transhumanism are to be expected. Thus, the necessity of defining ethical goals remains.

Food for Thought

Artificial intelligence can be used very comprehensively and—yes—in world-changing ways. The possible fields of application of artificial intelligence are—technically—(almost) limitless. Therefore, the boundaries should be set by ethical norms.
However, it must be seriously doubted that an agreement on ethical norms could succeed worldwide. After all, the prevailing and often peace- and prosperity-creating multilateralism is increasingly being questioned. In its place, a veritable system competition is increasingly emerging between the democratic countries on the one hand and a growing number of autocracies and dictatorships on the other.
A prominent example of the relevance of ethical goals is the so-called trolley
problem with autonomously acting vehicles:

• What should the system decide in a dangerous situation?


• Should it endanger the life of a child in an unavoidable accident?
• Or cause the death of an older couple?
• Or should it crash into the wall and thus risk the life of the driver and possibly
other passengers?
• Can we, in advance, categorize human lives into more and less valuable groups
through a program, thereby deciding on life and death?
• Or do we want to use a random generator that makes this final decision indepen-
dently of the programmers’ specifications?

Food for Thought


In this context, we also speak of the death algorithm. Because ultimately, an
algorithm is supposed to decide who will survive and who will not.
What is interesting in this discussion is that higher moral standards are
defined for the “machine” than for the self-driving humans themselves.
Because even with humans as drivers, there is no binding rule on how to
decide in such a case.
After all, Article 1 of the German Basic Law states:

Human dignity is inviolable.

This applies equally to all potential accident victims mentioned. However, this fundamental promise of values does not contribute to the solution of the trolley problem!

In autonomous driving, there is also the problem of the semantic gap—with a risk to life and limb. The term semantic gap describes a specific discrepancy: it arises between the often unspoken expectations of an autonomous driving system on the one hand and the ability to specify, implement, and test this system accordingly on the other. The causes of this semantic gap are based on the following three factors (cf. [54], p. 21):

• The operating environments of systems are very complex and in their diverse
design possibilities unpredictable. Here, for example, one should think of the
environment in which a vehicle moves. The range extends from a well-lit, less
frequented highway to a downhill, unmarked country road in heavy rain at night.
• The systems themselves have a high complexity. Therefore, not all possible
manifestations can be played through in advance of a deployment. Here one
should think, for example, of the multitude of data that is collected during auton-
omous driving through the use of cameras as well as radar and LiDAR systems.
• At the same time, a self-learning system also has an internal dynamic of change. After all, AI applications learn independently and can therefore react completely differently in a similar situation than they did before.
These changes are as such hardly predictable. Here, for example, one should
think that an autonomously driving car realizes that hardly any road user meticu-
lously adheres to the given speed limits. Most vehicles drive a little faster—and
anyone who adheres correctly to the speed becomes a traffic obstacle and is
urged to drive faster with flashing lights. What does the AI system learn from
this? And what should it learn from these experiences?
• The largest semantic gap arises from the necessity of transferring human decisions to the systems. People make their decisions
based on a more or less developed ethical judgment. Additionally, human deci-
sions incorporate knowledge of law-abiding behavior with an anticipation of
possible penalties for misconduct. For example, one might consider whether to
park in a no-parking zone. This decision weighs the convenience on one hand
and the amount of the possible fine on the other, weighted by the probability of
actually being caught.
• Additionally, the human intuition towards the behavior of other road users
comes into play. Humans anticipate that a vehicle on the highway in front of
them will suddenly (without indicating) swerve—and it does!

Note Box
Artificial Intelligence is not yet capable of closing relevant semantic gaps.
Will it ever be—in a complex world like ours?

The missing answers in this semantic gap can have lasting social, political, eco-
logical and economic implications. The fact that mastering AI technology is not
trivial is already evident today, as people often can no longer understand how some
AI programs arrive at their decisions. This is because Artificial Intelligence uses
different algorithms. The result of a classic decision tree can still be well under-
stood. However, when concepts such as Reinforcement Learning or Deep
Learning (see Sect. 1.1) are used, the traceability of process and result in the pro-
cessing of possibly many millions of parameters is difficult or impossible to achieve
even for specialists.

Note Box
When using AI algorithms, there is increasingly a trade-off between trace-
ability and precision. Users must decide whether they value the traceability
of the approach or the accurate results more. Both together are often not
achievable.
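The following minimal sketch illustrates this trade-off, assuming that scikit-learn and one of its bundled demo datasets are available; the exact accuracy figures depend on data and settings. The point is that the shallow tree's decision rules can be printed and read line by line, while an ensemble of several hundred trees usually scores somewhat better but can no longer be inspected in the same way.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow decision tree: every decision path can be read and explained.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("Tree accuracy:  ", round(tree.score(X_test, y_test), 3))
print(export_text(tree, feature_names=list(X.columns)))  # human-readable rules

# An ensemble of 300 trees: usually more accurate, but far less traceable.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("Forest accuracy:", round(forest.score(X_test, y_test), 3))

Whether the gain in accuracy justifies the loss of traceability is exactly the trade-off users have to decide on.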

Whether AI-generated solutions will be better or worse than those made by humans can only be decided based on values. Consequently, various ethical questions arise:

• Who defines the values that AI systems base their decisions and actions on—still
humans or the machine itself?
• What would be the “right” values?
• How is “good” distinguished from “evil”?
• What would be “fair”—and from whose perspective is this defined?
• What happens when the values of humans and machines no longer coincide?
• Who decides in corresponding conflict situations—possibly in real-time—about
the relevant set of values?
• Which auditing bodies would be capable of this value assessment?
• How long will humans still have the say?
• Will humans still be involved in decision-making situations by AI systems—and
if so, by whom and when and with what consequences?

The relevance of values is to be illustrated by the following thought experiment:

Food for Thought


Imagine an AI system is to develop a pesticide that does not endanger bees.
The entire process of developing the pesticide, its production, and the drone-
based application of the pesticide are fully automated.
Suddenly it is discovered that all humans in the area of pesticide use have
died. One had simply forgotten to give the AI system the “side condition” that
humans should also survive the pesticide use!
This thought experiment makes one thing indispensable: Responsible
companies (and governments) must ensure that ethical principles in the use
of AI are necessarily considered. Parallel to the development of ever new
application fields, the ethical questions of AI must be thought through and
answered.

If AI-based decisions—despite a lack of transparency—are to be followed, these AI systems must have firmly programmed “values” on the basis of which decisions are to be made. This raises fundamental questions:

• What happens if the AI system finds that the programmed values severely restrict
the relevant solution space and prevent a supposedly “best” solution? After all,
the values defined by humans based on today’s knowledge may be outdated—
given a much more comprehensive, AI-generated knowledge.
• Can the system independently further develop and thus change the values in such
a situation?
• When can or when may an AI agent deviate from the originally defined values?
• And who has to give their OK for this—the AI agent itself or a human?
• And if a human, then which one?
• And who checks their decisions?
• And based on which values?
• And who decides which values are to be used for this?

The relevance of these questions becomes clear in the following development.

Food for Thought


In the course of the discussion about ChatGPT, the Chinese state demanded that such chat robots must internalize “socialist values” (cf. [55], p. 22). The indoctrination of such values is therefore also to be ensured through
appropriate AI applications. This underlines once again that China is increas-
ingly moving from being a partner and competitor towards becoming a sys-
temic rival. This is also due to the fact that China is increasingly moving away
from its original position as a duplicator and becoming a global innovator.

Food for Thought


What would happen if an AI system determined that the survival of planet
Earth is only possible in the long term with a population of one billion peo-
ple—or without people and their massive interventions in nature? What deci-
sions would have to be made and who would have to enforce them?
Or is the premise wrong that planet Earth should continue to exist—when
there are plenty of other planets (whether inhabited or not)?
Or should we focus solely on the further growth of humanity—no matter
the cost?
Questions upon questions that cannot be answered without a value
framework.
But who is allowed to develop this value framework—and who legiti-
mizes it?
Where is there a hotline to higher truth, which defines universally
valid values for all people worldwide, which all countries and people
must then adhere to?

It must be defined in any case in the run-up to the use of AI to what extent arti-
ficial intelligence can decide independently and where the human control instance
is indispensable. We should define this limit in advance. Perhaps—or even cer-
tainly—this limit will continually shift towards the autonomy of AI systems,
because we have had good experiences with the results. Can we therefore create a
safe and pro-human artificial intelligence at all? At the latest when AI systems are used by the military, we reach massive limits (cf. Chap. 10).
How companies implement ethical principles of AI use in guidelines is shown
by the example of SAP [56]. The company has formulated the following guiding
principles for artificial intelligence :

1. We are driven by our values.


2. We design for people.
3. We enable business beyond bias.
4. We strive for transparency and integrity in all that we do.
5. We uphold quality and safety standards.
6. We place data protection and privacy at our core.
7. We engage with the wider societal challenges of artificial intelligence.

Food for Thought


• How much autonomy do we want to grant AI technologies as users and as
decision-makers in companies?
• Where do we set the ethical boundaries for artificial intelligence—in gen-
eral and in our own company?
• How can we determine that this boundary setting is effective?
• What goals guide us?

Deloitte [57] has developed a framework for Trustworthy AI. Here, the company
places trust at the center of its actions. This AI framework is designed to assist
companies in the development of ethical safeguards in six key dimensions. This is
intended to ensure that the risks associated with artificial intelligence are managed
and that the benefits dominate when used. Such a trustworthy AI requires gover-
nance and compliance with legal regulations throughout the entire AI lifecycle:
from the idea to the design, development, and deployment to the operation of AI
engines. The following dimensions of a trustworthy artificial intelligence should
be considered:

• Fair and impartial processes and results


• Robust and reliable results
• Respectful of privacy
• Secure and protected information processing
• Responsible and accountable use
• Transparent and explainable use

Such AI governance must be integrated into the technologies, the processes, and
especially into the training of users. This also includes compliance with applicable
regulations. These include, for example, a risk assessment, the establishment of
control mechanisms, and general compliance with regulations. In the interplay of
governance and compliance, ethical and trustworthy AI use can take place (see
[57]; see also on AI compliance Sect. 9.4).

1.7.2 Intelligence Explosion, Superintelligence, and Transhumanism

In the definition of AI challenges, there are different approaches. A common approach is to divide Artificial Intelligence as follows:

• Weak Artificial Intelligence (also Weak/Narrow Artificial Intelligence)


• Strong Artificial Intelligence (also Artificial General Intelligence, AGI)

This classification was first introduced in 1980 by the American philosopher John
Searle (see [58], pp. 5–7). Weak Artificial Intelligence is satisfied with performing
a task at least at a human level. This includes playing chess, providing customer infor-
mation, and internet searches. The primary focus here is not on imitating human
abilities, but on solving complex problems. In addition, tasks should be solved more
convincingly than human cognition and physical capabilities allow. Applications of
weak AI can now be found in many everyday objects—for example, in digital assis-
tants or in cars as parking aids. Online recommendation algorithms or translation
programs also use weak AI.
Strong Artificial Intelligence, on the other hand, describes the endeavor to rep-
licate human abilities through technology in many areas of our everyday life, to
optimize them and even to push into new performance spheres. For this, it would be
important to imbue AI with a kind of “consciousness” in order to perceive and inter-
pret the world much more comprehensively than before. While weak AI is capable
of beating humans at a specific task, an Artificial General Intelligence could far
surpass humans in almost any cognitive task in terms of quality and speed.
While applications of weak artificial intelligence have dominated so far, research-
ers are increasingly pushing into applications of strong artificial intelligence. It is
expected that AI technologies through their self-learning ability will cross a critical
“knowledge mass” in the medium term. This self-learning ability leads to an AI
system being able to supplement its knowledge base and thus optimize its problem-
solving behavior without external support, based solely on the experience data
gained, its own observations and conclusions. This can result in a true intelligence
explosion, leading to a superintelligence—an intelligence that overcomes the lim-
its of human thinking, feeling and acting (see Fig. 1.18; see [53, 59]).
An intelligence explosion in Artificial Intelligence refers to the idea that a strong
AI is capable of improving its own intelligence and developing new AIs. This can
lead to an exponential increase in intelligence.

Fig. 1.18 Development towards an intelligence explosion (system capacity over time: human cognition is surpassed by a superintelligence after an intelligence explosion; phases: present, transition period, takeover; paths: adaptation, merger)

Thus, a superintelligence could develop that is far superior to human intelligence. Then no more humans would be
needed to develop AI systems (further)! Artificial Intelligence continues to evolve
autonomously... This concept carries both opportunities and risks. A highly devel-
oped AI could solve complex problems better and replace human labor. However,
such a superintelligence could also get out of control and lead an unwanted life of
its own.

Note Box
Currently, the idea of an intelligence explosion is still a purely theoretical
phenomenon. After all, as of today, no AI system would be capable of inde-
pendently improving itself to such an extent that it would trigger an intelli-
gence explosion.
Today’s AI systems do not have consciousness. They do not experience
their own feelings and also do not exhibit their own will.

Instead of intelligence explosion, some also speak of technological singularity. This technological singularity marks the point at which machines improve them-
selves through artificial intelligence at such a speed that they accelerate technologi-
cal progress to an extent that no longer allows a prediction of the future of humans
and humanity (cf. [60, 61]). Thus, the technological singularity refers to a hypo-
thetical point in the future when technological development increases exponentially
and fundamentally changes human civilization. For example, problems that were
previously considered unsolvable could be solved. Another aspect of the techno-
logical singularity is the idea that developments would progress so quickly that they
could get out of humanity’s control. The consequences of such a development are
difficult to estimate, as the technologies would be so complex and far-reaching.

A superintelligence would be the epitome of strong artificial intelligence or Artificial General Intelligence. Such a development could help end crises and
wars, as well as fight diseases and poverty. This could become one of the greatest
events in human history. However, there are also legitimate concerns that this step
towards superintelligence could also become the last event in human history. This
would be the case if humans are not able to align the goals of artificial intelligence
with the goals of humans before it becomes superintelligent and emancipates itself
from the humans developing it.

Food for Thought


Almost any technology has the potential to cause harm in the wrong hands, but with
AI and robotics, the hands might be operating on their own. …
We can hope that a robot that is smart enough to figure out how to terminate the
human race is also smart enough to figure out that that was not the intended utility
function; but in building intelligent systems, we want to rely not just on hope, but on
a design process with guarantees of safety. ([8], p. 1056)

A superintelligence could emancipate itself from human intelligence and come up with different solutions than humans have conceived so far—based on more data,
faster processing, and a (more objective?) evaluation.
When the intelligence explosion shown in Fig. 1.18 could occur is unknown. The
expected time of entry has been repeatedly pushed back by decades. Part of the
knowledge community even believes that humans cannot determine this point in
time themselves (cf. [8], p. 1055).

Note Box
Even if it is still unclear whether or when an intelligence explosion could
occur, we must consider the possibility of developing a superintelligence.
Because one thing is undisputed: If a superintelligence will occur, it will
have dramatic effects on our society!

Fictional Reading Tip


Anyone who wants to see these developments illuminated in a good, fictional novel should read Frank Schätzing's already mentioned bestseller “The Tyranny of the Butterfly”.

In the context of superintelligence, the terms uploading and upshifting are used
to describe the transition of a human mind or consciousness into a digital form.

• Uploading refers to the idea that the human consciousness could be copied into
a digital form to upload it to a computer or another digital medium. This digital
copy would be able to think and act like a human brain. This transferred con-
sciousness would encompass the entire personality of a person with all his mem-
ories, experiences, emotions etc. Uploading is often considered a possibility to
overcome the limits of the human body and the human mind.
• Upshifting refers to the idea that human consciousness can be expanded
through technological changes. These changes can lead to the human conscious-
ness improving and expanding itself to achieve higher intelligence and abilities.
Through upshifting, human intelligence is to be expanded and improved without
crossing the limits of the human body.

It should be noted that uploading and upshifting are still theoretical concepts.
Currently, there is no technology that would allow us to transfer human conscious-
ness into a digital form or to expand it in other ways that go beyond simple bio-
hacking (cf. Sect. 5.1.7). In addition, there are also many ethical and philosophical
questions that need to be clarified before these concepts should actually be
implemented.
In connection with the idea of technological singularity and the creation of a
superintelligence, the term Transhumanism is used. Transhumanism is a philo-
sophical movement that uses the application of technology to expand the limits of
human biology and human abilities. This expansion can refer to mental/intellectual as well as physical competencies. This goal is to be achieved through a fusion of
humans and technologies. For this purpose, for example, the biological extension of
humans through the use of technology and artificial intelligence is used. The trans-
humanist movement advocates the improvement of human health and life expec-
tancy through technology. This is achieved, for example, through the improvement
of prosthetics and implants, the artificial enhancement of sensory perceptions, or the
use of AI for the diagnosis and treatment of diseases. Transhumanists believe that
humanity through the use of technology will be able to expand its physical and
mental abilities and thus achieve a higher stage of evolution (cf. [62, 63]).
This “extension” may sound abstract at first, but let’s take a closer look at a medi-
cal development. Technology has always played a crucial role in medicine. For
example, prosthetics in any form serve as extensions of impaired body parts and
replace destroyed functions. What started with wooden legs, glasses and later pace-
makers, is now increasingly expanding on a neuronal level. The suffering of patients
with Parkinson’s, epilepsy or mental illnesses such as depression is to be alleviated
through interventions directly in the brain. For this purpose, neurotechnological
implants are used. Electrodes are implanted in certain brain areas and restore func-
tions that are disturbed or have failed. This can make the life of people with depres-
sion and Parkinson’s patients more bearable.
Against this background, the following questions need to be answered:

• How much transhumanism is already present in our current research?


• What impact do such interventions directly into the brain have on human person-
ality and our society?
• What defines us as humans?

• In what form should we incorporate the goals and possibilities of transhumanism into political, societal and economic decisions?
• What ethical boundaries need to be observed here—and who defines these?

Food for Thought


The development towards transhumanism makes it clear that the current
limits of artificial intelligence are no longer in the technology. There is rather
an urgent need for action to clarify the associated ethical questions before the
technological developments proceed unreflectively!
Otherwise, visionary scientists will take over the AI steering wheel—from
whatever regions of the world they come from and with whatever value foun-
dation they may be equipped with.

1.7.3 Explainable Artificial Intelligence (XAI)

Against this background, an indispensable task for any AI application is the creation of an Explainable Artificial Intelligence (XAI/Explainable AI). This refers to the attempt to avoid a Black Box “Artificial Intelligence” (Black Box AI; see Fig. 1.3) and to create at least a Grey Box “Artificial Intelligence” (Grey Box AI). It is intended to enable at least partial traceability of results and decisions. A White Box “Artificial Intelligence” (White Box AI) is not pursued. It would be a utopian goal to convey complete transparency to users and those affected by AI systems.

Note Box
We still do not understand how artificial intelligence comes to its
statements.
This even applies to certified AI specialists! Perhaps we will never under-
stand how complex AI engines operate.

Through the construction of a Grey Box “Artificial Intelligence”, the process and the results of AI use should be made more comprehensible (see in depth [64]).
To create a Grey Box, three fields are distinguished—the transparency of the data,
the algorithms and the delivery of the data:

• Transparency of the data
Since the quality and “incorruptibility” of artificial intelligence depends on the
available data, the interested user should be able to check the data basis of the
AI application. If the training data contains biases or if irrelevant populations
are represented by the training data, the results of the AI system cannot be trusted.
This leads—even with highly developed algorithms—to the so-called GiGo effect: “Garbage-in-Garbage-out”. The GiGo effect describes the phenomenon
that only poor insights can be gained from bad data. Colloquially, this effect is
also called “Shit-in-Shit-out”.
However, it should be noted that it often requires expert knowledge to critically
review the data basis of AI systems. For outsiders, this is usually not possible—
and often not desired, when company internal data is processed. Here, certifica-
tion processes with corresponding seals of approval for the used data could at
least help in selected areas of application.
A particular challenge is to achieve a fairness of artificial intelligence. People
who define preloading rules (see Sect. 1.1) and feed data into the systems for
training purposes, are inherently biased—it’s in our nature. Thus, the rules used
here as well as the data can show a bias (a distortion). A bias can affect later
evaluations and decisions without this being (easily) recognizable. How can AI
systems develop a bias? Systems develop a bias when the data from which the AI
systems are supposed to learn, are incomplete and/or unbalanced. The data can also be structured in a way that favors a specific result. However, since the input data
for an AI system initially represents the only “learning material”, the imbalances
or biases contained in the input data become the basis of the AI results.
For example, in creditworthiness checks, “female customers aged 20–30” could
be denied a loan if in the non-representative training dataset many customers
of this age have not met their payment obligations. This dataset would therefore
show a distortion—a bias. Hence the note that this dataset would be “non-representative”. Here is another example of a bias: An AI system is supposed to
learn to recognize shoes. If the system was only fed with photos of sneakers dur-
ing the training phase, it could later not recognize high heels, Birkenstock san-
dals, or western boots as shoes in photos.
• It has significantly more serious effects when an HR department selects candi-
dates for leadership positions using AI—and the system was trained with data
from the past. Since leadership positions in many areas were primarily filled with
men, the system learns that men appear to be more suitable for leadership posi-
tions. This data-based insight will be the basis for the AI system’s evaluation of
resumes and its hiring recommendations will primarily be directed at men. If
leadership positions in a company are evenly filled with men and women and it
is checked after 5 years which gender was able to hold a leadership position, the
data world looks different again. An AI recommendation based on this would
then reflect the reality of the respective company.
• How quickly such distortions can occur is also shown by a study by Lambrecht and Tucker [65]. Here it was recognized that Facebook advertise-
ments were played out in a gender-discriminatory manner. These were job
advertisements in the STEM field, which were shown less frequently to women
than to men. This unconsciously built-in discrimination resulted from the fact
that young women on Facebook are a sought-after target group. Consequently, it
is more expensive to place an advertisement with them. So if the algorithm had
the choice, given the same click rates, to decide between a man and a woman, the
choice fell on the cheaper option—in this case the man.

• An example from the judiciary can also impressively demonstrate this danger
of a bias. In the USA, an AI system was supposed to make court judgments. It
was trained using old court judgments. An interesting phenomenon was observed
when it was used: If the skin color of the defendants was changed from white to
black, the sentence suddenly increased. It became clear that the often complained
about harsher sentences for black offenders, which were documented in the pre-
vious judgments used as training material, were unreflectively transferred by the
AI system to the new legal cases (see in depth [66]).

Food for Thought


A major source of error in AI applications: Bias in—Bias out!
After all, the reality on which artificial intelligence is based depends on the
data with which it was trained and continues to be fed. And this data often
comes from the past.
The more AI models are reflected in everyday actions, the more these
actions remain trapped in patterns of behavior from the past. The more diffi-
cult it becomes to break out of such traditional patterns in order to avoid an
eternal feedback loop that only reproduces variations of the past.

However, one thing must also be pointed out here: If certain groups of people do
not receive an invitation to interviews, it may simply be due to the fact that members
of this group have not proven themselves in everyday business. And if banks assess
the risk of non-repayment of loans as higher for certain groups than for others, this
may simply be due to the fact that such people have not repaid their loans on time.
Therefore, it is important to distinguish between bias on the one hand and represen-
tative data bases on the other. However, “unequal treatment” of people is always
justified when they themselves also behave unequally. Then it is not a matter of
discrimination, but a reaction to certain patterns of behavior.
The effects of unbalanced training of face recognition can be seen in Fig. 1.19.
These are results of a facial analysis software from the Chinese AI company
Sensetime. In two analyses of the author's face in China, conducted 2 years apart, the age was determined, among other things. In 2018, the author's age was estimated at 43
years (left photo)—and in 2020 in two different runs at 40 years. This is flattering
for the author, but was 20 years off the real value. This shows a weakness of AI
systems when they are trained only with data from the Chinese population—and
so-called “long noses” did not appear in the training data set.
However, AI systems can also act “compensatory” and avoid distortions where
people are at risk of losing their “objectivity” due to their previous experiences, the
respective emotional state or other changing conditions. It has often been observed
in jurisprudence that judgments are harsher when the “own” football club loses,
when the weather is bad—or simply in the afternoon of a strenuous working day.

Fig. 1.19 Facial recognition leads to further data

• Transparency of the algorithms
With AI applications, it would be interesting to know on the basis of which steps of an algorithm a certain result was achieved. However, since the machine learns independently, this process is not easy to understand. For the acceptance of the AI results, it is indispensable to at least be able to recognize the main influencing factors of a decision. Thus, the user of an AI system for the credit rating of customers should recognize what the central factors of the decision were. It should therefore be possible to understand why, in an AI-supported credit rating, person A does not receive a loan compared to person B (a minimal sketch of how such influencing factors can be surfaced follows after this list). This knowledge is also indispensable for a recommendation that a certain person should be approached for a new offer via a certain channel in three days. Also in an AI-supported preparation of court judgments, it should become clear why a probationary sentence is proposed for defendant X and why defendant Y should go to prison.
Today's AI systems usually cannot explain why they come to this or that result. These systems can indeed solve many tasks competently. However, AI agents often still lack the declarative knowledge to explain the correlations they have found and to transmit these to the users of AI systems as an answer to the question “Why?”. But as a user and as an “object of decision”, one wants to know why decisions were or are being made in certain situations.
• Transparency of data delivery
The results of AI systems must be prepared for the users and/or the affected par-
ties in such a way that even a person with little mathematical and/or statistical
training can understand the gained insights. However, this necessity is already
obsolete today in cases where results are automatically incorporated into subse-
quent processes.
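As announced above, the following minimal sketch shows one way to surface the main influencing factors of a model decision, here with scikit-learn's permutation importance on purely synthetic "credit" data. The feature names, the labeling rule, and the model choice are illustrative assumptions; real credit-scoring systems and explanation methods (such as SHAP values) are considerably more elaborate.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(3000, 1000, n)   # monthly income (synthetic)
debt = rng.normal(10000, 5000, n)    # existing debt (synthetic)
age = rng.integers(18, 70, n)        # age in years (synthetic)

# Hypothetical rule behind the synthetic labels: repayment depends mainly
# on income and existing debt, not on age.
repaid = (income - 0.1 * debt + rng.normal(0, 500, n)) > 2000

X = np.column_stack([income, debt, age])
feature_names = ["income", "debt", "age"]

model = GradientBoostingClassifier(random_state=0).fit(X, repaid)
result = permutation_importance(model, X, repaid, n_repeats=10, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>6}: {score:.3f}")
# Expected: income and debt dominate, age contributes almost nothing.

Such an overview of influencing factors does not turn a black box into a white box, but it gives users and affected persons at least the grey-box view described above.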

What could a solution look like to avoid bias in the development of AI systems and
at the same time increase the explainability of AI systems? First of all, we must real-
ize that it is people who set the framework conditions for the algorithms with their
decisions. It is also people who provide (selected) data for training AI systems. And
1.7 Ethics and Fairness of Artificial Intelligence 75

it is again (often) people who process the results of AI. The challenge is therefore to achieve a collaboration between human and machine that is as professional and bias-free as possible. This is primarily achieved through diversity: in the responsible AI programming team, there should be high diversity (by age, gender, nationality, ethnicity, etc.), so that neither in the selection of training data sets nor in the development of preloading rules are (unconscious) stereotypes and prejudices of the programmers included. In addition, diversity or—better—representativeness of the used training data must be ensured.

Note Box
To avoid possible biases in your data, you should use different (reliable) data
sources. The data should each be representative of the total population to
which an AI system is to be applied later.
A Data Audit can provide valuable support here by systematically check-
ing the quality of the incoming data. It is particularly valuable if such a data
audit is carried out by third parties.
High diversity in your teams leads—almost automatically—to the fact
that prejudices or stereotypes in AI systems can be avoided.
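What one step of such a data audit might look like in its simplest form is sketched below, assuming pandas is available; the column names, the expected shares, and the tolerance are illustrative assumptions. A real data audit would of course examine far more than the share of a single attribute.

import pandas as pd

def audit_group_balance(df, column, expected_share, tolerance=0.05):
    """Compare the observed share of each group with its expected share."""
    observed = df[column].value_counts(normalize=True)
    for group, expected in expected_share.items():
        share = float(observed.get(group, 0.0))
        flag = "OK" if abs(share - expected) <= tolerance else "CHECK"
        print(f"{column}={group}: {share:.1%} in data, "
              f"{expected:.1%} expected -> {flag}")

# Synthetic training data: women are clearly under-represented.
training_data = pd.DataFrame({
    "gender": ["f"] * 300 + ["m"] * 700,
    "repaid": [True] * 900 + [False] * 100,
})
audit_group_balance(training_data, "gender", {"f": 0.5, "m": 0.5})
# gender=f: 30.0% in data, 50.0% expected -> CHECK
# gender=m: 70.0% in data, 50.0% expected -> CHECK

Whenever such a check flags a group, the training data should be rebalanced or at least the limitation documented before the AI system is trained.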

Food for Thought


The algorithms and consequently the results of an AI system are significantly
influenced by the respective design and the used training data. AI systems are
designed, partially trained, and used by fallible humans.
What should therefore be avoided in the development and use of AI?
That the fallibility of humans is magnified and multiplied by AI systems!

Without Explainable Artificial Intelligence, AI applications remain a black box, making a critical analysis of the underlying data, the processes applied, and the
results obtained difficult or impossible. Then users cannot understand how a result
is achieved—and it is harder to “trust” the results anyway. The research on explain-
able AI tries to develop techniques so that AI systems not only deliver results, but
also explanations of how they came about.
With a (at least partial) transparency about AI processes and the results
achieved by them, the willingness to trust AI systems stands or falls. After all,
(today) nobody wants to rely on systems and their decisions that are not at least
roughly comprehensible. Or have we already started to blindly trust AI systems and
their decisions and recommendations? Like the robo advisor who automatically and
incomprehensibly makes our investment? Or music recommendations from Spotify
and film recommendations from Netflix, which we follow unchecked? Even with the
information provided to us via social media, we do not question the underlying AI
algorithms. We usually deal with the content that appears in the news feed without
thinking that there could be completely different news there.

Where will the limits of AI acceptance be in 5 or 10 years? Will we rely exclusively on AI agents for partner selection and career choice? Because they know us
better than we know ourselves? Or will the question of transparency no longer arise
in 10 years because everyone has learned that AI decisions are “the better ones”?

Note Box
Building trust is a key success factor for AI use! Trust is created through
transparency. Therefore, you should rely on Explainable Artificial
Intelligence from the start—even if your AI specialists would rather do
without it.
Without a certain transparency about data, processes and results, you
will—as of today—find it difficult to gain acceptance among users for AI
applications and their results. The people involved must at least be able to
understand the basics of artificial intelligence.

1.7.4 Reduction of Noise as a Success Factor

Reducing noise can contribute to reducing the fallibility of man and machine. Noise refers to everything that disturbs, obscures, reduces or otherwise impairs the clarity or precision of an ongoing process. This can be the transmission of a message or the making of a deci-
sion (cf. [67]). Noise leads to a random and unwanted scattering of decisions.
Kahneman et al. [68] have pointed out that even important decisions can turn out
completely differently if only minimal changes are made in the context and the noise
level changes. Moreover, different people—even under otherwise identical condi-
tions—often make very different decisions depending on the respective noise level.
Why is it important to be aware of the effects of noise and interference in
decision-making situations? Disturbing noises make it impossible to predict deci-
sions precisely. The range of possible decisions can be enormous. Depending on the
judge, the sentence for a bank robber can be 5 years or 15 years in prison. But not
only in the courtroom does noise affect decisions. The approach of doctors—with
the same symptoms of a patient—also changes with the respective noise level. The
grades of exams as well as the results of job interviews also vary—depending on the
noise currently affecting the decision-making process and the people involved (cf.
in depth [68]).
What are possible solutions to reduce the effects of noise? Here, the four-eyes
principle and again the already recommended diversity can help. Also, a greater
amount of time can reduce distorting influences. Then decisions do not have to be
made out of a good or bad mood. Therefore, it is important that not only the influ-
ence of bias, but also of noise is more strongly addressed in AI applications. This
can identify a blind spot in decision-making that is often still prevalent and then
eliminate it.
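A well-known statistical effect behind the four-eyes principle is that averaging several independent judgments reduces the random scatter: with k equally noisy, independent assessors, the noise of the average shrinks roughly with the square root of k. The following minimal simulation illustrates this; the "true value" and the scatter level are illustrative assumptions.

import random
import statistics

random.seed(0)
TRUE_VALUE = 10.0  # e.g. the "appropriate" sentence in years (illustrative)
NOISE_SD = 3.0     # random scatter of a single judgment (illustrative)

def one_judgment():
    return random.gauss(TRUE_VALUE, NOISE_SD)

def averaged_judgment(k):
    return statistics.mean(one_judgment() for _ in range(k))

for k in (1, 4, 16):
    samples = [averaged_judgment(k) for _ in range(10_000)]
    print(f"{k:>2} independent assessor(s): scatter = {statistics.stdev(samples):.2f}")
# Expected output: roughly 3.0, 1.5 and 0.75, i.e. the noise shrinks with sqrt(k).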
Overall, artificial intelligence needs to be supplemented by human intelligence.
This way, implausibilities can be detected early on that are based on various distor-
tions. For this purpose, “Human-in-the-loop” systems are used. The human in the loop is a person who trains, tests, and optimizes AI applications to achieve more
reliable results. This can be illustrated with an example. An AI system for distin-
guishing different bird species can easily encounter difficulties because many birds
have very similar characteristics. Here, a human can intervene and point out impor-
tant distinguishing features that the system should consider (keyword preloading
rules). This guides the AI system to more convincing results. Through this collabo-
ration, two different types of intelligence are used simultaneously: human and arti-
ficial intelligence. Based on the data provided by a human, the AI system can
continuously check its progress and determine the quality of the results. Human
knowledge is supplemented by the speed of the AI system, and any disadvantages
of the involved partners are balanced out.
It’s almost as if we need to install a babysitter for the AI. Or to put it another
way: “AI can help humans with bias—but only if humans are working together to
tackle bias in AI” [69].
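A minimal sketch of this division of labor is shown below, staying with the bird example; the model, the labels, the confidence threshold, and the helper functions are all stand-ins, not a real classifier.

def classify_with_human_in_the_loop(items, model_predict, ask_human,
                                    confidence_threshold=0.9):
    """Let the model decide where it is confident; otherwise ask a human.

    Human answers are collected so they can later be used to retrain and
    improve the model.
    """
    labeled, review_queue = [], []
    for item in items:
        label, confidence = model_predict(item)
        if confidence >= confidence_threshold:
            labeled.append((item, label, "model"))
        else:
            human_label = ask_human(item)             # e.g. a labeling UI
            labeled.append((item, human_label, "human"))
            review_queue.append((item, human_label))  # future training data
    return labeled, review_queue

# Toy usage with stand-in functions
def toy_model(item):  # pretends to be an AI bird classifier
    return ("sparrow", 0.95) if "small" in item else ("unknown", 0.4)

def toy_human(item):
    return "woodpecker"  # the human expert supplies the missing label

results, to_retrain = classify_with_human_in_the_loop(
    ["small brown bird", "bird with red crest"], toy_model, toy_human)
print(results)
print(to_retrain)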

Note Box
In many areas of application, an AI engine should and can—still today—only be a co-pilot, but not an auto-pilot. The rule here is:
The boundaries between co-pilot and auto-pilot will increasingly shift towards the auto-pilot in the coming years.
This depends on our experiences with AI systems. And perhaps our grandchildren will one day ask:
Why, for heaven’s sake, did you ever put yourself behind the wheel of a
vehicle, choose partners “out of love” and professions “out of affection”
instead of relying on AI-based recommendations right away?

1.8 Summary

• A common division of Artificial Intelligence is into weak Artificial Intelligence and strong Artificial Intelligence.
• Weak Artificial Intelligence aims to achieve human abilities at the same or a
slightly higher level (as in chess, for example).
• Strong Artificial Intelligence describes the effort to achieve human abilities through artificial applications in almost all areas of our everyday life.
• In the development of strong Artificial Intelligence, phenomena such as intelli-
gence explosion, technological singularity, superintelligence and transhu-
manism occur.
• A major challenge is to make the processes and results of Artificial Intelligence
understandable. This is referred to as Explainable Artificial Intelligence.
• Explainable Artificial Intelligence refers to the transparency of the data used,
the transparency of the algorithms used as well as to the transparency of the
delivery of result data.

• Creating at least partial transparency (Grey Box AI) is a prerequisite for the
acceptance of AI systems—within and outside the company.
• It is important to ensure that the data for the training of the algorithms as well
as the initially used algorithms are free of biases, prejudices and stereotypes.
Otherwise, neutral results cannot be achieved.
• AI is a powerful technology that brings many potential dangers with it. These
include lethal autonomous weapons, unintended side effects and errors, and
deliberate misuse. People who work with AI technology have an ethical obliga-
tion to effectively avoid sources of danger.
• Before the development of strong Artificial Intelligence, what is therefore needed—ideally—is a global ethical agreement on the use of AI, which will probably never come about.

1.9 Where Do Companies in Germany Stand in the Global AI Environment?

To unlock the almost unlimited potentials of AI deployment, various sciences must leave their silos and ivory towers and contribute their knowledge to corre-
sponding AI networks. Further development requires a large number of scientists
and their extensive networking. Which sciences are of particular importance for the
development of artificial intelligence? These include biology, cognitive sciences
(such as psychology, philosophy, and linguistics), economics, computer science,
mathematics, and engineering.
Biology provides the basics from which the “ideal image” of AI systems is
derived. To develop a humanoid robot, comprehensive knowledge from the fields of
anatomy and psychology as well as neuroscience is required. After all, a humanoid
robot should not only mimic humans in appearance but also imitate human behavior
and make social decisions. For this, insights from the cognitive sciences must be
incorporated.
Mathematics in turn provides the tools to develop the algorithms of AI
technologies. It enables computer scientists to write powerful AI programs.
Engineers implement the cognitive and physical performances of robots and
machines in conjunction with these algorithms. For these developments to reach
market maturity, it ultimately requires economists, who recognize customer needs early on and feed them into the upstream work steps. Where work goes beyond basic research, economists are also largely responsible for achieving a
return on investment (ROI) so that AI investments become profitable in the long
term. Finally, philosophers can work on discussing values relevant to AI applica-
tions. How artificial intelligence is embedded in total with its fields of application
and methods is shown in Fig. 1.20.
The question arises as to how well this networking has already been achieved in
Germany and Europe to unlock the potentials of artificial intelligence. Important
data on the state of AI in Germany is provided by the AI Monitor of the BVDW
[70]. The central results are summarized here:



Fig. 1.20 Input sciences, methods, and application fields of artificial intelligence

• The value of the AI index is 122 points in 2022. Compared to the base year
2019 with the index value 100, this represents an increase of 22 index points.
This shows a positive development.
• The AI index has, however, deteriorated by 1.22 index points in 2022 compared to 2021. This decline is significant, as a positive development in the future technology of AI would have been expected and would have been desirable, indeed necessary.
• Compared to the dynamic development in previous years, the positive momen-
tum was lost in 2022.
• A look at the individual categories of the AI index provides interesting insights.
The category framework conditions has made a negative contribution every
year. This means that the framework conditions for the use of AI in Germany
have deteriorated from year to year. This includes the digital infrastructure.
Although this has improved compared to the previous year, it still has significant
catching up to do in an international comparison. The importance of collabora-
tions between companies and research institutions and the number of final
exams in the field of computer science have remained almost unchanged. On
the other hand, the importance of AI in the Bundestag and in scientific publica-
tions has decreased.
• The largest contribution to the development of the AI index is made by the econ-
omy. In 2022, however, the contribution of this category was negative for the

first time. This could be due to the inflation, the Russian war of aggression
against Ukraine, and the aftermath of the Corona pandemic. The decline in this
category results from the fact that the use of AI, the assessment of the impor-
tance of AI, the mention of AI in business reports and AI job advertisements
showed moderate declines in the reporting year 2022. Only the AI patent appli-
cations have developed positively.
• Interestingly, the category society contributed positively to the index develop-
ment in 2022. This increase results from an increased AI knowledge of the
population. The state of the discussion of AI in society is still neutral. The
search interest in AI on Google Search has decreased—possibly due to an over-
all increasing knowledge.

Comparable results are provided by a Bitkom study published under the title “Artificial Intelligence—Where does the German Economy stand?”. As part
of this study, 606 companies with 20 or more employees from all sectors in Germany
were surveyed. A central finding of the study is [71]:

Note Box
German companies recognize the opportunities of Artificial Intelligence and increas-
ingly see advantages in the use of this technology—but practical use of AI is hardly
progressing.

Specifically, for the year 2022, the question “Do you see Artificial Intelligence more as an opportunity or more as a risk for your company?” yielded the following assessments (see [71]):

• 65% see AI as an opportunity for their own company.


• 21% describe AI as a risk for their own company.
• 13% assume that AI has no influence on their own company.

In response to the question “To what extent does your company use AI or plan or discuss its use?” the results shown in Fig. 1.21 are displayed.

Fig. 1.21 To what extent does your company use AI or plan or discuss its use?

Only 9% currently indicate a use of AI in their own company—in an economy that still
describes itself as a world leader! Compared to the previous year, this is only
a very small increase of one percentage point. The percentage of companies that discuss or plan the use of AI has even fallen from 30 to 25%. At the same time, the proportion
of companies for which AI is not an issue has increased from 59 to 64%. The expla-
nation for the fact that less attention is paid to artificial intelligence as a future
technology includes various factors. These also include increased energy costs, high
inflation rates, and disrupted supply chains. The development of AI use is largely
going in the wrong direction here.
A more detailed evaluation shows that the use of artificial intelligence depends
to a large extent on the size of the company (see [71]):

• Companies with 20–99 employees only use AI solutions in 5% of cases.


• For companies with 100–499 and 500–1999 employees, it is already 18%.
• 48% of companies with 2000 or more employees use artificial intelligence
solutions.

The answers to the question “What advantages do you see in the use of AI in companies?” are shown in Fig. 1.22 (cf. [71]). The surveyed companies particularly mention faster and more precise problem analyses (52%), accelerated processes (43%) and lower resource consumption (39%). Only 11% expect cost reductions. Further advantages are seen in personnel: the avoidance of human errors (38%) and the possibility of bringing expert knowledge into the company through the use of artificial intelligence (36%). 26% see AI as enabling employees to focus on other tasks. AI can also make valuable contributions to the company's own business model: 46% expect strengthened competitiveness through AI, 27% rely on AI for improved products or services, and 21% for completely new ones.

(Fig. 1.22 compares 2021 and 2022 for the following benefit categories: enables faster and more precise problem analysis; accelerates processes; reduces resource consumption and thus environmental impact; avoids human errors in everyday work; provides expert knowledge that otherwise wouldn't be available; allows employees to concentrate on more important tasks; enables the improvement of existing products or services; enables completely new products or services; saves costs for the company.)

Fig. 1.22 What benefits do you see in the use of AI in companies? (data source: [71])

New IT security risks: 79%
Violations of data protection regulations: 61%
Application errors in AI usage: 49%
Lack of traceability of results: 49%
Errors in programming: 48%
Lack of controllability of AI systems: 48%
Error sources in training data sets are hard to detect: 47%
Lack of training data sets: 42%
Liability obligations for damages: 40%
Loss of know-how in the company: 39%
Reputational damage due to critical public debate on AI: 33%
Insecurity among employees: 27%
Loss of control/competence among executives: 27%
Promotion of employee dependency: 16%

Fig. 1.23 What risks do you see in the use of AI in companies? (data source: [71])

Given these important benefits of using AI, it is all the more difficult to understand that the focus on AI has decreased in 2022 (see Fig. 1.21).
In addition to looking at the benefits, potential risk areas also need to be high-
lighted. The following question was asked: “What risks do you see in the use of
AI in companies?” The answers are shown in Fig. 1.23 (cf. [71]). First of all, it
should be noted that all surveyed companies see risks in the use of AI. At the top of
the list are IT security risks (79%), followed by possible violations of data pro-
tection regulations (61%) and expected application errors in the use of AI (49%).
49% of companies are concerned about the lack of traceability of the results dis-
cussed. Furthermore, 48% expect errors in programming as well as a lack of
control over AI systems. The hard-to-detect error sources in the training data sets (47%) and a general lack of training data sets (42%) are also pointed out. Risks relating to the workforce are likewise mentioned, albeit at a lower level. 27%
see the risk of uncertainty among employees or a loss of control or competence
among managers. The statements underline the importance of significantly increas-
ing the level of knowledge about the field of artificial intelligence in Germany as
a whole.
Investments in Artificial Intelligence will increase in the coming years—albeit slowly (cf. [71]):

• 6% of companies had already invested in AI in 2022.


• 10% plan an AI investment for 2023.

• 20% want to invest in 2024 or later.


• The vast majority—64%—have not yet invested in AI and do not plan to do so in
the future.

Note Box
Waiting is not a strategy!
International competitors are not waiting for us! Consequently, we should
not wait for a German or a European Sputnik moment. That would take far
too long!
Perhaps the iPhone moment of ChatGPT will now be enough to prompt many companies, as well as society and politics, to rethink for the long term.

Another question was: “In which areas of your company are AI tools used or
in which area do you consider future use likely?” The answers shown in Fig. 1.24
come from 54 companies that already use AI and from 539 companies without AI
use. The percentages for the answers “very likely” and “rather likely” are shown (cf.
[71]). Today, AI applications are mainly found in marketing (81%) and especially
for customer retention (61%). Furthermore, artificial intelligence is used in pro-
duction (54%), in purchasing (54%) and in accounting (50%). It is interesting that
AI systems are used less in strategy development (38%) and in IT and logistics (35% each). In the HR department (23%) and in research and development (15%), AI is hardly used at all.
One look at the evaluation in Fig. 1.24 clearly shows: The companies that do not
yet use AI solutions could imagine their use in practically all areas of the company.

(Fig. 1.24 covers the following areas: marketing, customer loyalty, production, procurement, accounting, management support in strategy development, IT department, logistics, HR department, research and development, legal and tax department; responses are shown separately for companies that use AI and companies that do not.)

Fig. 1.24 In which areas of your company are AI tools used or in which area do you consider
future use likely? (data source: [71])

Here, percentages above 70 and sometimes 80% are found almost consistently.
From this it can be concluded:

Note Box
There is no shortage of potential applications for artificial intelligence in
companies!

The question “Who is currently driving the topic of artificial intelligence in your company?” provides interesting answers. The results are shown in Fig. 1.25
(see [71]). It is interesting that in companies that already use AI or are specifically
planning to do so, the topic is rarely driven by top management. Only in 14% of the
companies surveyed are the management or board the responsible drivers. Much
more often, the IT management or a CIO is responsible for the use of AI (49%). In
24% of the companies, the technical management or a CTO is the central driver of
AI use. A digitalization management or a CDO is only responsible in 8% of the
companies. This is probably mainly due to the fact that overall responsibility for
digitalization is organizationally anchored in only a few companies.
How many people in the surveyed companies are primarily dealing with the
topic of AI (see [71])?

• In most companies (65%), these are teams with fewer than 5 people.
• 17% of companies employ 5–9 people for this purpose.
• In 6% it is 10–19 people and in 2% even 20 or more.

It is interesting that in the majority of companies AI is a man's job. In 70% of companies, no woman is involved in the topic. Only in 20% of cases is at least one woman involved in the AI team.

IT management/CIO: 49%
Technical management/CTO: 24%
Management or Executive Board: 14%
Head of Digitalization/CDO: 8%
Other: 14%

Fig. 1.25 Who is currently driving the topic of artificial intelligence in your company? (data
source: [71])

However, Sect. 1.4.3 has shown the great relevance
of diversity—also in the team—for the success of AI applications.
Only 1% of all companies surveyed see themselves at the forefront of artificial intel-
ligence—13% still see themselves among the pioneers. A remarkable 43% consider
themselves to be latecomers and 42% believe that their company has already missed
the boat. The assessment is somewhat more positive among companies that already use
AI. Here, 12% see themselves at the forefront and a majority of 54% among the pio-
neers. Only 34% of AI users consider themselves to be a laggard (cf. [71]).

Note Box
The vast majority of managers in Germany see a great need for catch-up in
their own company when it comes to artificial intelligence.

Finally, the question was also asked: “What are the biggest obstacles to the use of
AI in your company?” The answers are shown in Fig. 1.26 (cf. [71]). The biggest
obstacles to the use of AI are still lack of personnel resources and lack of data (each
62%). 50% of companies complain about a lack of financial resources and 49% about
uncertainty due to legal hurdles. Other obstacles are lack of technical know-how
(48%) and lack of time (46%). A lack of AI acceptance by employees (37%) and a
general lack of trust in AI (33%) complete the picture. Only 22% still lack use cases
for AI in the company—a gap that could be quickly closed with reference to Fig. 1.24.
Against this background, the question “What measures would help your com-
pany to advance the use of AI?” becomes particularly relevant. The answers can
be found in Fig. 1.27 (cf. [71]). To increase the use of AI in their own company, the
respondents primarily want financial support for their own AI projects (78%).
68% would like an exchange with AI-experienced companies as well as support
in the legal and ethical assessment of the development and use of AI solutions.


Fig. 1.26 What are the biggest obstacles to the use of AI in your company? (data source: [71])


Fig. 1.27 What measures would help your company to advance the use of AI? (data source: [71])

55% have better availability of AI experts on the labor market on their wish list, followed by more information about marketable AI applications (54%) and better access to the data needed for training AI systems (45%).

Food for Thought


In summary, it can be stated: Possible use cases are known in companies and
the necessary AI tools are available. What is lacking in companies is time,
personnel, and financial resources.

In light of these results, one must agree with the statement by Achim Berg, the
President of Bitkom, [72]:

No people, no money, no time—these should not be reasons against AI. The future-oriented
allocation of resources is a top management task. Artificial intelligence must be on every
company’s agenda.

Food for Thought


The greatest risk of artificial intelligence would be not to deal with it!

Further interesting results on the status of AI use in Germany are provided by a study by the Federal Ministry for Economic Affairs and Climate Action (BMWK) ([73], p. 7 ff.).
This examined which technical and organizational competencies AI-active com-
panies in Germany have. It also analyzed the role of cooperations in the develop-
ment and application of artificial intelligence. The representative survey of about
550 AI-active companies was conducted in 2021 and 2022. The following trends
among AI-active companies became apparent:

• More than 10% of the companies active in Germany are actively using artificial
intelligence. This means almost a doubling of the number of AI-using compa-
nies compared to 2019.
• The use of AI is particularly intensive in various service industries. These
include the sectors information/communication, financial services as well as
technical and scientific services.

• The most common areas of AI application are speech recognition, text mining
and process automation and optimization.
• Above all, the management is driving the use of AI. Only a few companies rely
on their own AI teams to develop use cases within the companies.
• The majority of AI-active companies have built their own AI competencies,
which relate to data management and software development. Comprehensive methodological competence for machine learning and for the explainability of AI results is, on the other hand, less widespread.
• The majority of AI-active companies lack sufficient competencies in legal mat-
ters. Also, the ability to assess consequences and to evaluate risks of AI applica-
tions is hardly present. To strengthen AI competencies, it is particularly important
to expand the further education of employees.
• Every second AI-active company is investing heavily in the technical prerequi-
sites for AI use. In addition, companies are increasingly developing explicit
data strategies.
• AI-active companies cooperate intensively with other companies or institutions,
for example to share AI know-how and jointly develop AI applications. Access to
data is often also designed cooperatively. Such cooperations can compensate for
the lack of own technological or organizational capabilities.
• 90% of all companies in Germany rely on cooperation with other companies or
institutions when using AI. The cooperation partners mainly include IT service
providers (71%) and scientific institutions (55%). Customers and companies in their
own industry—including competitors—are also important cooperation partners.
• Only 26% of AI-active companies develop their AI applications themselves. 32%
work together with a cooperation partner and in 41% the AI development is
mainly carried out by third parties. Most of these cooperations within Germany are regionally oriented; among the “AI professionals” they are international.
• The most important challenges for AI cooperations are the technical interfaces,
the lack of compatibility of software solutions and an insufficient common
understanding of AI. Data protection regulations also make comprehensive
cooperation difficult for many AI-active companies.
• A more comprehensive use of AI requires more AI specialists as well as the
provision of data protection-compliant cloud offerings, which at the same
time guarantee highest data security.
• More public funding for AI cooperation projects, the definition of standards
for AI applications as well as an adaptation of data protection law are further
prerequisites to facilitate intensive cooperation in AI applications.

Consequently, companies should strive for cooperation. Among the exciting coop-
eration approaches at the European level is the project CLAIRE. CLAIRE was
founded in 2018 as a bottom-up initiative of the European AI community. Its goal is
to build European excellence in AI research and innovation. For this purpose,
an extensive network in the sense of a pan-European alliance of research laborato-
ries for artificial intelligence is being developed (see [74]).

The so-called CLAIRE Hub aims to promote AI talents through the exchange
and interaction of AI researchers at all stages of their careers. In this hub, Europe’s
brightest AI minds are to work together temporarily. This is intended to support the
flow of knowledge between European researchers. At the same time, the knowledge
gained here should flow back into the institutions of the individual countries.
Specifically, CLAIRE has set the following goals [74]:

• CLAIRE will focus on a trustworthy AI that promotes human intelligence, rather than replacing it, and thus benefits the people in Europe.
• CLAIRE will advocate for a significant increase in resources to strengthen existing AI,
for new research opportunities, and the associated fundamental European interests.
• CLAIRE will work with the key players in this field to find mechanisms for citizen
engagement, collaboration between industry and the public sector, as well as for
innovation-driven spin-offs and innovation-driven growth.
• In this way, CLAIRE will define and address challenges in various areas and for a
variety of applications, including health, production, transport, scientific research,
financial services, and entertainment.

All companies and many other organizations are called upon to consider concrete
participation in this and other institutions. The annually published Map of AI Start-
ups in Germany issued by appliedAI can contribute to this. This map shows in
which fields start-ups in Germany are driving the use of AI. The transparency thus
created is intended to contribute to the establishment of partnerships between start-
ups and companies. All start-ups included in the German AI Start-up Landscape
were founded in Germany after 2011 or carry out their core activities here. In addi-
tion, these companies have a business model based on machine learning (cf. [75]).
On the Map of AI Start-ups in Germany for 2022 a total of 304 start-ups are
listed. 228 of these start-ups were already on this list last year. 76 start-ups have
been newly added to this list. It is interesting to see what has become of the 50 com-
panies that are no longer represented here (cf. [75]):

• 32% of these start-ups were acquired.


• 20% are in liquidation.
• 4% no longer rely on Artificial Intelligence.
• 2% have gone public.
• The remaining 40% were founded more than 10 years ago and thus fall out of the
definition of a start-up used here.

In addition, it can be noted that the concentration of start-ups in Berlin and Munich
has decreased. Instead of 64% only 57% of start-ups are located in these two cities
in 2022. The majority of start-ups focus on the needs of a single industry, above all healthcare, production, and transportation (see Fig. 1.28).
Only 14% of start-ups develop AI tech stacks. Tech stack, short for technology stack, refers to a combination of tools, programming languages and frameworks with which teams develop different applications, such as databases, infrastructure solutions or business tools. In Germany, mainly AI applications and platforms are developed (see Fig. 1.29).

Fig. 1.28 Map of AI start-ups in Germany—Industry companies ([75], published under the CC-BY 4.0 license without changes)

Fig. 1.29 Map of AI start-ups in Germany—AI technology stack ([75], published under the
CC-BY 4.0 license without changes)

The focus of the business functions is on customer service and marketing—followed by IT & Security (see Fig. 1.30).
The start-ups that focus on the field of Enterprise Intelligence can be found in Fig. 1.31. Here the focus is on computational linguistics and computer vision.
After looking at the economy, the question now is: What is the German pub-
lic’s stance on Artificial Intelligence? A representative study by Bitkom [76] pro-
vides answers, in which 1007 people aged 16 and over were interviewed by phone.
Here are the most exciting results—representative for Germany (cf. [76]):

• 73% of Germans now believe that AI is an opportunity. Here, 26% see AI exclusively as an opportunity and 47% rather as an opportunity.
• 24% see AI as a danger; 14% rather as a danger and 10% even exclusively as a danger.
• At the same time, 79% are convinced that AI can strengthen the competitive-
ness of the German economy. A significant increase compared to 2020, when
the value was still at 66%.
• Today, 66% want AI to be used if it brings them concrete benefits (such as in the
medical sector or traffic).
• Already 51% state that they regularly use AI-based products and services.
These include, for example, voice assistants.

Fig. 1.30 Map of AI start-ups in Germany—business functions ([75], published under the CC-BY 4.0
license without changes)

Fig. 1.31 Map of AI start-ups in Germany—enterprise intelligence ([75], published under the CC-BY 4.0 license without changes)

• 35% fear that humans will be disempowered by AI systems. In 2020, however, this concern was still mentioned by 46% of people.
• 88% of the population want AI software to be comprehensively reviewed and only used after approval.
• 34% demand a ban on the use of AI in certain application areas.

The Allensbach Institute for Public Opinion Research [77] asked 1101 people, rep-
resentative of the population in Germany aged 16 and over, the following question:
“Do you perceive computer applications based on Artificial Intelligence as progress, or do they worry you?”. The responses were as follows:

• 40% are rather worried.


• 31% see AI as progress, but are also worried.
• 9% are undecided.
• Only 20% see AI-based applications as progress.

Food for Thought


The responses of the German population signal an ambivalent attitude of the
German population towards Artificial Intelligence:
People see both the—sometimes already daily experienced—benefit, but
also possible risks. That’s why it’s so important that business and politics
involve society in the AI journey integrates and does not leave the discursive
field to AI doomsday prophets and conspiracy theorists.

1.10 Summary

• A Bitkom study emphasizes that the current status of artificial intelligence in German companies is not convincing, especially as the focus on AI has weakened.
• The development shown on the AI index of the BVDW shows a decreasing
dynamic of dealing with AI.
• The lack of time, budgets and especially of AI personnel limits the use of AI in
German companies.
• A study by the BMWK indicates that AI-active companies are promoting their
own competence development and many challenges are being addressed
through cooperation.
• A look at the map of AI start-ups in Germany shows in which sectors AI foun-
dations have been established in Germany.
• Such start-ups represent interesting cooperation partners.
• There is an ambivalent attitude towards artificial intelligence in the German
population.

1.11 EU Legal Framework for the Use of AI

What is the state of AI development in Europe? It must be noted that Europe is (with open eyes) forgoing a (leading) role in the future of artificial intelligence and thus the chance to gain strategic competitive advantages. The European General Data Protection Regulation (GDPR) that came into force in 2018 makes it increasingly
Protection Regulation (GDPR) that came into force in 2018 makes it increasingly
difficult for European companies to access relevant data bases for the development
of artificial intelligence when it comes to personal data. A simple equation applies:
No comprehensive data bases = no high-performance AI systems.
The uncertainty that still exists in companies 5 years after the implementation of the
GDPR is shown by a study by Bitkom [78]. As part of this study, 602 companies
with 20 employees or more in Germany were interviewed by telephone. The results
obtained are representative of the overall economy. Overall, it shows that the
European General Data Protection Regulation is perceived in large parts of the
German economy as an obstacle to growth and prosperity through innovations
in the digital world. The answers to the question “Have you ever stopped plans for
innovation in your company in connection with the use of data due to legal require-
ments or uncertainties?” are shown in Fig. 1.32 (see [78]). In total, 60% of companies
answer this question with “Yes”!
In addition, the companies were asked the question: “To what extent do the fol-
lowing statements on data policy apply or not apply from your company’s perspec-
tive?”. The results are shown in Fig. 1.33 (see [78]). 62% of the companies hesitate
to use data—out of fear of violating data protection. 63% even say that the strict
rules of the GDPR in Germany stifle innovative data-driven business models or
drive them out of the country. This development is evident in cross-border coop-
eration projects, in medical research as well as in the digitization of the healthcare
system and the administration overall. Especially small and medium-sized compa-
nies do not have the necessary practical aids to develop and implement innovative


Fig. 1.32 Innovative data projects are often slowed down: Have you ever stopped plans for inno-
vations in connection with the use of data due to legal requirements or uncertainties? (data
source: [78])

Strict rules are stifling innovative data-driven business models in Germany or driving them out of the country.
We are reluctant to use data because we are afraid of violating data protection laws.
Germany is missing out on opportunities for growth and prosperity by not using data.

Fig. 1.33 Rules for data usage restrict companies: To what extent do you agree or disagree with
the following statements? (data source: [78])

business ideas in the data-driven digital age. 58% of companies even believe that
Germany is giving up on data usage opportunities for growth and prosperity.
Here is a statement from Achim Berg, President of Bitkom [78]:

Food for Thought

A uniform data protection law for the whole EU was and is a great project for citizens as
well as for the EU as an economic area. After 5 years of the General Data Protection
Regulation, however, one must note: The GDPR has not fulfilled its promise to provide
uniform, understandable and practical data protection rules across Europe. Instead, the
independent interpretation of the rules by each national and regional authority leads to legal
uncertainty. Many companies therefore refrain from developing new technologies and ser-
vices—or relocate their projects abroad. This is evident not least in bans on innovative
technologies like ChatGPT in individual EU member states, which cause massive uncer-
tainty.... The existing leeway of the GDPR is hardly used in Germany. We must understand
data processing as an opportunity, not always just as a risk. If we continue like this for the
next 5 years, we weaken our innovation and competitiveness.

Therefore, it is necessary in Germany and Europe to establish a balance between privacy and business interests. For this, the legal regulations need to be fundamentally reconsidered. And as quickly as possible! This also applies to the AI legal framework discussed at the EU level—the AI Act [79]. The European Union
is also convinced that artificial intelligence can contribute to solving many prob-
lems. However, particular emphasis is placed on the trust that people develop in this
technology. To promote trust in artificial intelligence, the EU Commission has
proposed corresponding measures to promote AI excellence. The AI Act is
intended to ensure the safety and fundamental rights of people and businesses. AI
investments and AI-driven innovation are to be promoted in all EU countries. At the
same time, regulations are proposed that are intended to help ensure that AI systems
used in the EU are safe, transparent, ethical, impartial, and under human control.
For this purpose, the various applications are classified into risk classes and regulated with varying degrees of strictness, as the following list and the sketch after it show.

• Prohibited
All AI concepts that are seen as a clear threat to EU citizens are prohibited. This
includes, for example, the official assessment of social behavior (keyword Social Scoring; see Sect. 5.2.2). Toys with voice assistants that could lead children into risky behavior are also prohibited.
• High risk
A high risk is assumed for the critical infrastructure, where the lives and health
of citizens could be endangered. This includes, for example, the transport infra-
structure. Such a risk is also seen in the AI assessment of exams and the
AI-supported evaluation of resumes, if a person’s access to education and profes-
sional life depends on their results. For safety components of products, a high
risk is also assumed. This is assumed, for example, for AI applications for robot-
assisted surgery. A high risk is also seen in central private and public services, including the AI-supported assessment of creditworthiness, as well as in the verification of the authenticity of evidence in law enforcement and of travel documents. In all
these use cases, the AI systems are to be thoroughly tested before deployment.
This testing obligation also exists throughout the entire lifecycle of such an
AI system.
• Limited risk
For certain AI systems, such as chatbots, only a certain transparency must be
maintained. The transparency achievable in this way should enable users to make
informed decisions.
• Minimal risk
This category includes many AI systems that pose no or only a minimal risk to citizens' rights and/or security, such as AI-supported video games or spam filters; many other AI applications also fall into this category.
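
How such a risk classification might be represented in software, for example as a first step of an internal compliance check, is sketched below. This is a simplified, hypothetical illustration: the tier names follow the list above, but the lookup table of use cases and the obligation texts are assumptions for this example and not an official mapping under the AI Act.

from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"


# Hypothetical lookup table derived from the examples in the list above.
USE_CASE_RISK = {
    "social scoring by authorities": RiskTier.PROHIBITED,
    "AI assessment of exams": RiskTier.HIGH,
    "AI-supported evaluation of resumes": RiskTier.HIGH,
    "creditworthiness assessment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
    "AI-supported video game": RiskTier.MINIMAL,
}

# Very rough summary of the obligations per tier, paraphrasing the text above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "deployment not permitted in the EU",
    RiskTier.HIGH: "conformity assessment, registration, CE marking, monitoring",
    RiskTier.LIMITED: "transparency obligations towards users",
    RiskTier.MINIMAL: "no specific obligations",
}


def check_use_case(use_case):
    """Return the assumed risk tier and obligations for an example use case."""
    tier = USE_CASE_RISK.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.value} -> {OBLIGATIONS[tier]}"


if __name__ == "__main__":
    for case in USE_CASE_RISK:
        print(check_use_case(case))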

When high-risk AI systems are developed, the following stages must be completed
(see [80]):

• Step 1: A high-risk AI system is developed.


• Step 2: The AI system must undergo a conformity assessment. This checks
whether it meets the AI requirements. For selected applications, a notified body
must be involved.
• Step 3: There must be a registration of standalone AI systems in an EU database.
• Step 4: In addition, a declaration of conformity is required. Also, the AI system
must bear the CE marking. The AI system can then be placed on the market.

If significant changes are made to high-risk AI systems during their lifecycle, step 2
must be repeated. After the AI system is launched on the market, the market is moni-
tored by the authorities. In addition, users themselves must supervise and control
the AI system. Also, the providers of the AI system are obliged to develop a post-
launch monitoring system. In addition, providers and users are obliged to report
serious incidents and malfunctions.
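
The four steps and the re-assessment obligation just described can be summarized schematically. The following sketch is only an illustration of the sequence, not a legal tool; the step descriptions paraphrase the list above.

STEPS = [
    "1. Develop the high-risk AI system",
    "2. Conduct the conformity assessment (with a notified body where required)",
    "3. Register the stand-alone AI system in the EU database",
    "4. Sign the declaration of conformity and affix the CE marking",
]


def steps_to_complete(significant_change=False):
    """Return the steps still to be completed for market access."""
    if significant_change:
        # A significant change during the lifecycle re-triggers step 2.
        return [STEPS[1]]
    return list(STEPS)


if __name__ == "__main__":
    for step in steps_to_complete():
        print(step)
    print("After a significant change:", steps_to_complete(significant_change=True))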

Note Box
These regulations will become valid in this or a similar form in 2025 for the
countries of the EU. All companies are well advised to align their AI develop-
ments with these guidelines today.

It becomes visible how the EU strongly regulates the market of AI applications.


Whether this has resulted in a viable compromise between risk avoidance on the one
hand and the creation of competitive advantages based on AI on the other hand, remains
to be seen. After all, several points can be critically noted about this AI Act (see [81]):

• There is a risk that the EU, through too much focus on risks and through an excess
of regulation, may slow down AI development in Europe. This could lead to inter-
national competitive disadvantages for the European economy. These disadvantages loom if relevant AI developments take place outside of Europe in the future and AI experts from Germany and other European countries migrate there.
• The presented risk classification is very vaguely formulated. To use this risk-
based approach of the EU Commission effectively, the individual risk categories
need to be defined much more precisely. A blanket classification of use cases is
not sufficient for this.
• With the planned additional regulation for systems with many possible uses
(“General Purpose AI”), such as in image or speech recognition, there is still
considerable need for clarification to avoid uncertainties in implementation.
• Europe is at a crossroads with artificial intelligence. Either the AI Act results in
a hindrance to AI innovations in Europe—or it opens up a perspective of
opportunity for AI in Europe.

At the same time, the Coordinated Plan for AI developed in the EU as early as
2018 was updated in 2021. This is intended to accelerate investments in AI, implement AI strategies in a timely manner, and align AI strategies across the EU. The
following goals are being pursued [80]:

• Creating favorable conditions for the development and dissemination of AI


• Building a strategic leadership role in highly effective sectors
• Making the EU the right location for successful AI
• Ensuring that AI technologies serve people

To achieve these goals, the EU Commission is planning the following measures in the AI sector (see [80]):

• Public-private partnerships for Artificial Intelligence, Data and Robotics are to be established. These serve the development, implementation and promotion of a common strategic research, innovation and introduction agenda for Europe.
• By creating networks of AI excellence centers the exchange of know-how
should be facilitated and cooperation with industry promoted.

• The establishment of test and experiment facilities should make it possible to test state-of-the-art technologies under real conditions.
• By establishing digital innovation centers, central contact points for access to technical expertise and experiments are created.
• Through a European AI platform, the central AI resources (know-how, algorithms, software, development tools etc.) should be made accessible to industry and the public sector.
• In addition, AI projects are supported by EU research funds.

The EU Commission wants to invest one billion € annually in AI from the programs “Digital Europe” and “Horizon Europe”. Within this decade, AI investments of more than 20 billion € per year are to be mobilized in the EU (cf. [80]). It will be interesting to see in what form the emerging conflicts of objectives can be resolved.

Food for Thought


It is left to the creativity of all of us how something should be regulated that
is dynamically evolving itself. When the first texts for the AI Act were formu-
lated, ChatGPT did not yet exist. And even before the AI Act comes into force, many more AI fields of application will have been opened up that are not, or not comprehensively, covered by the law.
Artificial Intelligence is and remains a moving target—also for
regulation!

The restrictions on the use of AI in Europe make it understandable why the development of AI applications is often dominated by American and Chinese
companies today. These are the so-called GAFAM companies (Google/Alphabet,
Apple, Facebook, Amazon and Microsoft) from the USA. The Chinese counterparts
Baidu or also ByteDance (parent company of TikTok), Alibaba, Tencent and Xiaomi
are summarized under the term BATX companies. These and many other Chinese
companies are investing extensively in their own AI solutions. It is foreseeable that
their solutions will dominate the global market in the future.

Food for Thought


It has been confirmed at the latest by the Facebook-Cambridge Analytica
debate what we actually knew all along: Our data is being systematically
abused. Unfortunately, the current GDPR does not manage to prevent such
abuses. Instead, even small craft businesses, medium-sized companies and
start-ups now have to master extensive bureaucratic challenges in order not to be targeted by the cease-and-desist (warning letter) industry. This can quickly push such companies to
the brink of existence, as reports in the daily press regularly show. Is this the
progress actually desired for our society? Certainly not!

On the one hand, it is certainly commendable that Europe wants to protect the rights of the individual. However, the GDPR in its current form often overshoots the mark. European companies strive to take into account the new requirements defined there. Largely unaffected by this, the mega players like Amazon, Facebook, Google & Co. simply obtain permission for further data use. We usually grant it—already annoyed—reluctantly, without having read the often dozens of pages of regulations anew every time. After all, we want to continue our Google searches as well as our Facebook, Instagram or TikTok use, and we want to be distracted as little as possible while doing so. Thus, the data krakens will continue to grow. At the same time, non-market-dominating companies will continue to lose data and thus power, influence, and competitiveness.
Do we want to become publicly transparent people in Europe like in
China? Certainly not! But what economic options do these massive slowdowns leave companies in Germany and Europe to achieve AI leadership of their own beyond the mere use of machine data? None in the long term! If we
do not create a responsible handling of data, which forms the basis of every
AI technology, we must also accept that solutions developed outside of Europe
will also be used here. Such solutions may be more powerful because they
were able to draw on comprehensive data sets during development. And if
these solutions convince the users, new or old mega-players will dominate the
world market—also in Europe and Germany!

Since European companies have so far hardly emerged in the race for leadership in AI development, the use of artificial intelligence will probably remain, or become, a US-Chinese duopoly. Perhaps China will even win this race because there—as
already shown—huge amounts of data are available for training the systems.
There is no data like more data!
Therefore, the EU cannot afford over-regulation on the subject of artificial
intelligence!
As Sundar Pichai, CEO of Google, so aptly put it [82]:

Artificial intelligence is too important not to regulate—and too important not to regulate well.

Note Box
The application fields of artificial intelligence do not ask on what basis
these systems were developed. For success in the world market, it (unfortu-
nately) only matters who has the more powerful systems. Here, Germany and Europe currently hold a weak hand because the regulatory framework is too narrowly defined.

It is exciting that the different use of AI will change the relative competitive
position of countries. The USA and China are leading the AI race—many other
industrial nations in Europe as well as Japan and South Korea are trying to follow.
It can be assumed that the differences in AI use between countries will tend to
increase over the years. The introduction and adoption of AI technologies could
trigger a growth spurt in the slowly growing industrial countries. Could!
At the same time, AI technologies can lead to a performance gap among com-
panies between front-runners on the one hand and slow users and non-users on the
other. The AI front-runners will include companies that fully integrate AI tools
into their value chains in the coming years. They have the chance to benefit dispro-
portionately from the use of AI. These potential AI front-runners will face a large number of AI laggards who do not use AI technologies at all or do not use them consistently.
Such divergent development can also occur at the level of employees. The
demand for jobs will shift even more from work with repetitive activities to social
and cognitive tasks to be mastered. Professions that are characterized by repetitive
activities and/or that require only low digital skills, may experience the greatest
decline in total employment. On the other hand, the demand will increase for workers for non-recurring tasks and for activities that require high digital skills.
This exacerbates the already visible War for Talents in crafts and management even
more—and not only for people who have skills for the development and use of
Artificial Intelligence systems. However, the forecasts on the employment effects
of AI use are not uniform. Often a polarization of the labor market is expected.
In addition to the already described need for highly qualified people, there could
also be an increasing need for low-skilled people. As it is nicely said ([83], p. 136):

After all, someone has to clean the apartment, serve the food, and fill the coffee to go cup
for the digital staff in a very analog way.

Today it is assumed that primarily jobs with a medium qualification profile will be
lost. Such a development has already been observed in Germany in the last two
decades.

Note Box
You can determine for yourself how threatened your own or your targeted profession is today with the Job-Futuromat of the Institute for Employment Research:
https://job-futuromat.iab.de/
The use of Artificial Intelligence will have sustainable and pervasive
effects on employees, companies and economies. As with many new tech-
nologies, these effects are initially limited in the early years. The comprehen-
sive effects will only show later.

Food for Thought


The effects that will occur in individual cases depend on the speed and extent
of AI use in individual companies and in the entire economy. If Artificial
Intelligence is primarily used for efficiency increases, a large value creation
potential of the new technology remains unused. If companies use Artificial
Intelligence more for product and service innovations or for the develop-
ment of new business models, much more comprehensive effects on the eco-
nomic results will occur (see in depth [84]).

Think-Box: Questions You Should Ask Yourself!


• Is there a comprehensive understanding of artificial intelligence in my
company—even at the highest hierarchical level?
• Have we systematically dealt with the various fields of application of arti-
ficial intelligence?
• Does such an analysis take place in our company rather decentrally in vari-
ous functional areas or departments—or are the findings from various
areas consolidated across the company?
• Where would the entry into AI solutions be particularly suitable? Where do
the famous “low-hanging fruits” exist?
• Where should AI deployment be intensified?
• Have specific goals for AI deployment already been formulated?
• To what extent are the relevant resources of time, budget, and personnel
available?
• Have we already sufficiently dealt with the ethical requirements of AI
deployment?
• Where in my company is the overall responsibility for the topic “Artificial
Intelligence”?

1.12 Summary

• The use of artificial intelligence will have a massive impact on employees, com-
panies, and economies as a whole.
• Employees will partially lose their jobs, while new jobs are created at the same
time. Overall, the responsibility of everyone to continuously qualify for future
requirements is increasing.
• Companies face the challenge of recognizing and exploiting the opportunities of
artificial intelligence. At the same time, the risks must be identified and managed.
• The position of globally operating economies will change depending on the use
of artificial intelligence. Existing imbalances can be both increased and reduced.
• Each economy for itself and each group of countries is called upon to recognize
and use the potential for change of artificial intelligence for its own area of
responsibility early on. Waiting is not an acceptable strategy here either!

• The EU is working on an AI Act to define the framework conditions for AI investments and AI-driven innovation.
• It is important that the framework conditions are not defined too narrowly, so that
Europe and European companies do not lose touch with relevant AI developments.

References
1. Precedence Research. (2023). Artificial intelligence (AI) market. Retrieved March 28, 2023,
from https://www.precedenceresearch.com/artificial-­intelligence-­market.
2. Gardner, H., Davis, K., Christodoulou, J., & Seider, S. (2011). The theory of multiple intel-
ligences. In R. Sternberg & B. Kaufman (Eds.), The Cambridge handbook of intelligence
(pp. 485–503). Cambridge University Press.
3. Gardner, H. (2020). A resurgence of interest in existential intelligence: Why
now? Retrieved November 26, 2021, from https://www.howardgardner.com/
howards-­blog/a-­resurgence-­of-­interest-­in-­existential-­intelligence-­why-­now.
4. Rich, E. (1983). Artificial intelligence. McGraw-Hill.
5. Gartner. (2023). What is artificial intelligence? Retrieved March 28, 2023, from https://www.
gartner.com/en/topics/artificial-­intelligence?utm_campaign=RM_GB_2022_ITLDR_C_
NL1_JANUARY14&utm_medium=email&utm_source=Eloqua&cm_mmc=Eloqua-­_-­Email-­
_-­LM_RM_GB_2022_ITLDR_C_NL1_JANUARY14-­_-­0000.
6. EU. (2021). What is artificial intelligence and how is it used? Retrieved June 6, 2023,
from https://www.europarl.europa.eu/news/en/headlines/society/20200827STO85804/
what-­is-­artificial-­intelligence-­and-­how-­is-­it-­used.
7. Ertel, W. (2021). Basic course artificial intelligence: A practice-oriented introduction.
Springer Gabler.
8. Russell, S., & Norvig, P. (2022). Artificial Intelligence, A modern approach (4th ed.). Pearson.
9. Brenner, W., van Giffen, B., Koehler, J., Fahse, T., & Sagodi, A. (2021). Building blocks of
artificial intelligence management. A position determination. Springer Gabler.
10. Buxmann, P., & Schmidt, H. (Eds.). (2021). Artificial intelligence. With algorithms to eco-
nomic success (2nd ed.). Springer Gabler.
11. Schölkopf, B., & Smola, A. J. (2018). Learning with kernels: Support vector machines, regu-
larization, optimization, and beyond. MIT Press.
12. Kreutzer, R. T. (2021). Customer dialogue online and offline. The big 1x1 of customer acquisi-
tion, customer retention and customer recovery. Springer Gabler.
13. Arel, I., Rose, D., & Karnowsk, T. (2010). Deep machine learning—A new frontier in Artificial
Intelligence research. Research Frontier, 11, 13–18.
14. Domingos, P. (2015). The master algorithm: How the quest for the ultimate learning machine
will remake our world. Basic.
15. BMI. (2018). Project for facial recognition successful. Retrieved March 28, 2023, from
https://www.bmi.bund.de/SharedDocs/pressemitteilungen/DE/2018/10/gesichtserkennung-­
suedkreuz.html.
16. Kreutzer, R. T. (2020). The digital seduction. Why we should also deal with the dark sides of modern developments. Springer Gabler.
17. DeepMind. (2017). AlphaGo Zero: Starting from scratch. Retrieved December 15, 2020, from
https://deepmind.com/blog/article/alphago-­zero-­starting-­scratch.
18. LeCun, Y. (2021, July 26). The AI of the future. Frankfurter Allgemeine Zeitung, 19.
19. Hinton, G. (2023, May 5). Deadly intelligence. Frankfurter Allgemeine Zeitung, 24.
20. Kairos Future. (2015). Future disruptive technologies, do you fully understand how new dis-
ruptive technologies will reshape your industry? Retrieved November 29, 2021, from http://
www.kairosfuture.com/research/programs/future-­disruptive-­technologies.
21. Kramer, A. (2011). IBM Supercomputer wins quiz show. Retrieved November 30, 2021, from http://
www.heise.de/newsticker/meldung/IBM-­Supercomputer-­gewinnt-­Quizshow-­1191298.html.

22. Lee, K.-F. (2018). AI super-powers, China, Silicon Valley, and the new world order. Houghton
Mifflin Harcourt.
23. Spice, B. (2017). Carnegie Mellon artificial intelligence beats top Poker Pros, historic win at
Rivers Casino is first against best human players. Retrieved November 25, 2021, from https://
www.cmu.edu/news/stories/archives/2017/january/AI-­beats-­poker-­pros.html.
24. Sairam, A. (2021). This year’s American Crossword Puzzle Tournament was won by an
artificial intelligence system named Dr. Fill. UC Berkeley researchers played a role in Dr.
Fill’s creation. Retrieved November 30, 2021, from https://www.dailycal.org/2021/05/21/
artificial-­intelligence-­system-­dr-­fill-­wins-­american-­crossword-­puzzle-­tournament/.
25. Görz, G., Schmid, U., & Braun, T. (Eds.). (2020). Handbook of artificial intelligence (6th ed.).
William Kaufman.
26. Fortune Business Insights. (2023). Key market insights. Retrieved March 29, 2023, from https://www.
fortunebusinessinsights.com/industry-­reports/natural-­language-­processing-­nlp-­market-­101933.
27. Amazon. (2023). Amazon polly. Retrieved June 8, 2023, from https://aws.amazon.com/
de/polly/.
28. AXSemantics. (2023). GPT4 explained. Retrieved March 29, 2023, from https://de.ax-­
semantics.com/gpt-­4-­ueberblick-­und-­die-­unterschiede-­zu-­gpt-­3/.
29. Nilsson, N. (2010). The quest for artificial intelligence—A history of ideas and achievements.
Cambridge University Press.
30. Schulz von Thun, F. (2022). The communication square. Retrieved January 4, 2022, from
https://www.schulz-­von-­thun.de/die-­modelle/das-­kommunikationsquadrat.
31. MacCartney, B. (2014). Understanding natural language understanding. Retrieved November
30, 2021, from https://nlp.stanford.edu/~wcmac/papers/20140716-­UNLU.pdf.
32. Kreutzer, R. T. (2021). Practice-oriented online marketing. Concepts—Instruments—
Checklists (4th ed.). Springer Gabler.
33. Hundertmark, S. (2021). Digital friends. How companies can successfully use chatbots. Wiley.
34. Bendel, O. (2023). Social bots. Retrieved May 25, 2023, from https://wirtschaftslexikon.
gabler.de/definition/social-­bots-­54247.
35. Agostinelli, A., Denk, T., Borsos, Z., Engel, J., Verzetti, M., Caillon, A., Huang, Q., Jansen,
A., Roberts, A., Tagliasacchi, M., Sharifi, M., Zeghidour, N., & Frank, C. (2023). MusicLM:
Generating music from text. Retrieved March 30, 2023, from https://research.google/pubs/
pub52118/.
36. Bajpai, M. K., Singh, K. K., & Giakos, G. (Eds.). (2021). Machine vision and augmented intel-
ligence—Theory and applications. Springer.
37. o. V. (2021, September 6). Scandal about Facebook’s software. Frankfurter Allgemeine
Zeitung, 23.
38. Hofstadter, D. R. (2018, June 27). The last refuge of human intelligence. Frankfurter
Allgemeine Zeitung, N4.
39. Wolfangel, E. (2018, July 14/15). Artificial stupidity. Süddeutsche Zeitung, 33.
40. von der Malsburg, C. (2018, November 28). Control center à la Mother Nature. Bonner
Generalanzeiger, 11.
41. Hagendorf, H., Müller, H.-J., Krummenacher, J., & Schubert, T. (2011). General psychology
for bachelor: Perception and attention. Springer.
42. Barthelmeß, U., & Furbach, U. (2021, July 5). Computers on the way to consciousness.
Frankfurter Allgemeine Zeitung, 18.
43. Wilczynski, M. (2021). Face replaces ticket. Retrieved December 7, 2021, from https://www.
tagesschau.de/ausland/asien/facepay-­moskau-­u-­bahn-­101.html.
44. Grothe, M. (2019). Corporate security: Social listening and the digitalization of disinforma-
tion—systematically discovering unknown Unknowns through algorithms. In P. Gentsch (Ed.),
Artificial intelligence for sales, marketing and service. With AI and bots to an algorithmic
business—Concepts, technologies and best practices (2nd ed., pp. 237–251). Springer Gabler.
45. Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction machines: The simple economics of
artificial intelligence. Harvard Business Review Press.
46. Microsoft. (2023). Spatial mapping. Retrieved March 30, 2023, from https://learn.microsoft.
com/en-­us/windows/mixed-­reality/design/spatial-­mapping.
47. Amazon. (2023). Amazon Rekognition. Retrieved March 29, 2023, from https://aws.amazon.
com/de/rekognition/.
48. Intel. (2023). Introduction to Intel® RealSense™ Visual SLAM and the T265 Tracking
Camera. Retrieved March 30, 2023, from https://dev.intelrealsense.com/docs/
intel-­realsensetm-­visual-­slam-­and-­the-­t265-­tracking-­camera.
49. Pariser, E. (2017). Filter bubble: How we are disempowered on the Internet. Penguin Press.
50. Picard, R. (1997). Affective computing. MIT Press.
51. Stanford University. (2022). Artificial intelligence index report 2022. Retrieved March 29,
2023, from https://de-­statista-­com.ezproxy.hwr-­berlin.de/statistik/studie/id/111696/dokument/
studie-­ueber-­kuenstliche-­intelligenz-­weltweit/?locale=de.
52. Hawking, S. (2014). BBC interview from 2.12.2014. Retrieved December 13, 2021, from
https://www.youtube.com/watch?v=fFLVyWBDTfo.
53. Bostrom, N. (2016). Superintelligence: Scenarios of an upcoming revolution. Suhrkamp.
54. Burton, S. (2021, September 6). The pitfalls of autonomous driving. Frankfurter Allgemeine
Zeitung, 21.
55. Ankenbrand, H. (2023, April 12). China’s fragile gold rush around ChatGPT. Frankfurter
Allgemeine Zeitung, 22.
56. SAP. (2023). SAP’s guiding principles for artificial intelligence (AI). Retrieved March 30,
2023, from https://www.sap.com/products/artificial-­intelligence/ai-­ethics.html.
57. Deloitte. (2023). Trustworthy AI. Retrieved May 8, 2023, from https://www2.deloitte.com/us/
en/pages/deloitte-­analytics/solutions/ethics-­of-­ai-­framework.html.
58. Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–457.
59. Krüger, S. (2021). The AI decision. Artificial intelligence and what we make of it. Springer.
60. Chalmers, D. (2010). The singularity—A philosophical analysis. Journal of Consciousness
Studies, 9–10, 7–65.
61. Vinge, V. (1993). Technological singularity. In VISION-21 symposium.
62. Eichenberg, A., & Haid, C. (Eds.). (2020). The end of humans? Paths through and out of trans-
humanism. Verlag am Goetheanum.
63. Markhoff, E. (2021). Evolution, Eugenics and Transhumanism. Tredition.
64. Bitkom. (2019). A look into the black box. Traceability of AI algorithms in practice.
65. Lambrecht, A., & Tucker, C. (2017, July). Algorithmic bias? An empirical study into appar-
ent gender-based discrimination in the display of STEM career ads. Presented at the NBER
Summer Institute.
66. Ludwig, J., & Mullainathan, S. (2021). Fragile algorithms and fallible decision-makers:
Lessons from the justice system. National Bureau of Economic Research.
67. APA. (2023). Noise. Retrieved March 30, 2023, from https://dictionary.apa.org/noise.
68. Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise—What distorts our decisions and
how we can improve them. Siedler.
69. Manyika, J., Silberg, J., & Presten, B. (2019, October). What do we do about the biases in
AI? Harvard Business Review. Retrieved November 6, 2021, from https://hbr.org/2019/10/
what-­do-­we-­do-­about-­the-­biases-­in-­ai.
70. BVDW. (2022). AI-monitor 2022, status quo of artificial intelligence in Germany, expert
opinion. Retrieved March 30, 2022, from https://www.iwkoeln.de/studien/christian-­rusche-­
kuenstliche-­intelligenz-­in-­deutschland.html.
71. Bitkom. (2022). AI is considered a future technology in the German economy—but is rarely
used. Retrieved March 31, 2023, from https://www.bitkom-­research.de/de/pressemitteilung/
ki-­gilt-­der-­deutschen-­wirtschaft-­als-­zukunftstechnologie-­wird-­aber-­selten-­genutzt.
72. Bitkom. (2021). Artificial intelligence is gradually advancing in companies.
Retrieved March 31, 2023, from https://www.bitkom.org/Presse/Presseinformation/
Kuenstliche-­Intelligenz-­kommt-­in-­Unternehmen-­allmaehlichvoran.
73. BMWK. (2022). Competencies and Cooperations in artificial intelligence. Results of a survey
of AI-active companies in Germany.
74. CLAIRE. (2023). CLAIRE. Retrieved April 3, 2023, from https://claire-­ai.org/?lang=de#.
75. Initiative for Applied Artificial Intelligence. (2022). German AI startup map 2022. Retrieved
April 3, 2023, from https://www.appliedai.de/de/hub/2022-­deutsche-­ki-­startup-­landkarte.
76. Bitkom. (2023). Three quarters see artificial intelligence as an opportunity.
Retrieved March 31, 2023, from https://www.bitkom.org/Presse/Presseinformation/
Drei-­Viertel-­sehen-­Kuenstliche-­Intelligenz-­Chance.
77. Institute for Demoscopy Allensbach. (2023, April 25). Fear of AI. Frankfurter Allgemeine
Sunday Newspaper, 22.
78. Bitkom. (2023). Five years of GDPR: companies are still uncertain. Retrieved May 22, 2023,
from https://www.bitkom.org/Presse/Presseinformation/Fuenf-­Jahre-­DS-­GVO.
79. EU. (2023). A European approach to artificial intelligence. Retrieved March 30, 2023, from
https://digital-­strategy.ec.europa.eu/en/policies/european-­approach-­artificial-­intelligence.
80. EU. (2023). Artificial intelligence—Excellence and trust. Retrieved April 3, 2023, from
https://commission.europa.eu/strategy-­and-­policy/priorities-­2019-­2024/europe-­fit-­digital-­age/
excellence-­and-­trust-­artificial-­intelligence_en.
81. Bitkom. (2023). Bitkom on the vote on the AI act. Retrieved June 23, 2023, from https://www.
bitkom.org/Presse/Presseinformation/Bitkom-­zur-­Abstimmung-­ueber-­AI-­Act.
82. Pichai, S. (2023, May 31). Artificial intelligence is too important not to regulate. In
A. Armbruster & C. Budras, Interview. Frankfurter Allgemeine Sunday Newspaper, 19.
83. Bollmann, R. (2018). The future of work. Frankfurter Allgemeine Quarterly, 8, 132–140.
84. Kreutzer, R. T. (2021). Toolbox for Digital Business. Leadership, business models, technolo-
gies and change management for the digital age. Springer Gabler.
2 Drivers of Artificial Intelligence

Summary This chapter discusses the central drivers of Artificial Intelligence. These include Moore’s Law, the digitization and dematerialization of products and processes, networking via the Internet of Everything, the availability of Big Data, and various new technologies. The focus on sustainable corporate governance also represents an AI driver.

Knowledge is power!

The increasingly visible triumph of Artificial Intelligence is not due to individual factors alone. Rather, it results from the interaction of various mutually reinforcing developments (cf. [1], p. 277). The following drivers of Artificial Intelligence are particularly noteworthy:

• Exponential development of the performance of IT systems and the technologies based on them
• Advancement of digitization and dematerialization into more and more areas of
value creation
• Increasing networking between objects, data, processes, and living beings
• Availability of Big Data—often even in real time
• Access to new technologies

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/978-3-658-46131-7_2.


Note Box
The combination of several developments means that change will never again be as slow as it has been up to now.
Let’s enjoy the “slowness” of the changes experienced so far.
It won’t stay this comfortable, because the rate of change is only now
really increasing!

2.1 Moore’s Law and the Effects of Exponentiality

The combination of the mentioned drivers of Artificial Intelligence leads to a tipping point, i.e., an important trend reversal towards an exponential development of AI systems. To understand what exponential growth means, the following thought experiment helps:

• How many meters does a person cover who takes 31 analog steps of one meter
each? Exactly 31 m.
• How many meters would a person cover who could take 31 exponential steps, i.e., if the step length doubled from step to step? With the 31st exponential step, this person would have covered more than one billion meters! (A short calculation sketch follows this list.)
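The difference can be made tangible with a minimal calculation sketch (plain Python; the step lengths are simply the hypothetical values from the thought experiment above):

```python
# Linear vs. exponential steps: 31 steps of 1 m each vs. 31 steps whose
# length doubles from step to step (1 m, 2 m, 4 m, ..., 2**30 m).
linear_total = sum(1 for _ in range(31))            # 31 m
exponential_total = sum(2 ** k for k in range(31))  # 1 + 2 + 4 + ... + 2**30

print(f"Linear walker:      {linear_total} m")
print(f"Exponential walker: {exponential_total:,} m")  # 2,147,483,647 m, well over one billion meters
```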

This exponentiality is the basis of so-called Moore’s Law. Based on empirical observations, Gordon Moore, one of the founders of Intel, derived the following “law” in 1965: approximately every 2 years, a doubling of the performance of integrated circuits can be achieved. If we date
the construction of the first integrated circuit to the year 1958, we have already
experienced more than 32 doubling cycles. This means that each doubling today
takes place at an already extremely high level of performance.
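The order of magnitude behind this statement can be sketched in a few lines (an idealized back-of-the-envelope calculation; the strict two-year rhythm and the reference year 2024 are simplifying assumptions):

```python
# Idealized Moore's Law: performance of integrated circuits doubles roughly every 2 years.
first_ic_year = 1958    # construction of the first integrated circuit
reference_year = 2024   # assumed "today" for the calculation
doubling_period = 2     # years per doubling (idealized)

cycles = (reference_year - first_ic_year) // doubling_period
performance_factor = 2 ** cycles

print(f"Doubling cycles since {first_ic_year}: {cycles}")        # 33, i.e. more than 32
print(f"Idealized performance factor: {performance_factor:,}")   # about 8.6 billion
```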

Food for Thought


If the automotive industry had achieved the same technological breakthroughs
as the computer industry, the VW Beetle from 1971 would today reach a speed
of 480,000 km per hour—at a purchase price for the vehicle of four Euro cents
(cf. [2]). These would be the consequences of the described exponential
development spurts—transferred to the automotive industry!

An end to this development is not yet in sight, even though the development
dynamics of integrated circuits have somewhat decreased in recent years. Here, the
mechanics of miniaturization are reaching their physical limits. Nevertheless, the
upcoming technology and performance leaps will further overshadow everything
achieved so far. The next gigantic push is now expected from quantum computing,
which overcomes the dichotomy of “0” and “1”. These quantum computers pro-
cess data completely differently than the systems commonly used today.
Conventional computers can only process data as series of 0 and 1 (binary code).
Quantum computers can use 0 and 1 and a multitude of positions in between simul-
taneously. The smallest computing units of quantum computers are quantum bits
(qubits). These can connect to each other according to special laws of quantum
mechanics. This allows them to assume a much more complex overall state. This
entanglement of the qubits into an overall state is the core of the innovation of quan-
tum computers.
The ability of quantum computers to perform many calculations in parallel sig-
nificantly shortens the processing time of computing processes. AI systems on
quantum computers could solve tasks in seconds that even the most powerful
supercomputers today would need hundreds or thousands of years for. This greatly
expands the solution space and thus also the application area of AI. The first quan-
tum computers are being used in selected research labs worldwide. It will take many
years before these can be used in everyday applications due to the complexity of
quantum computers. In the future, this technology will have a massive impact on
machine learning, encryption technology, and data analysis. This will provide the additional computing power that is necessary for far-reaching AI applications.
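To make the idea of “positions between 0 and 1” and of entanglement a little more concrete, the following simplified state-vector sketch can help; it uses numpy, is purely illustrative, and does not simulate real quantum hardware:

```python
import numpy as np

# A classical bit is either 0 or 1. A single qubit is described by a state
# vector a*|0> + b*|1> with |a|^2 + |b|^2 = 1; the squared amplitudes are
# the probabilities of measuring 0 or 1.
qubit = np.array([1.0, 1.0]) / np.sqrt(2)    # equal superposition of 0 and 1
print("P(0), P(1):", np.abs(qubit) ** 2)     # [0.5 0.5]

# Two entangled qubits (a Bell state) form one overall state of four
# amplitudes that cannot be split into two independent single-qubit states.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)       # (|00> + |11>) / sqrt(2)
print("P(00), P(01), P(10), P(11):", np.abs(bell) ** 2)  # [0.5 0. 0. 0.5]

# n qubits span a state space of 2**n amplitudes, which hints at why certain
# computations can be parallelized so dramatically.
for n in (10, 50):
    print(f"{n} qubits -> {2 ** n:,} amplitudes")
```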
In 2021, Germany’s first quantum computer was launched as a research platform
IBM Quantum System One. The Fraunhofer Society for Applied Research oper-
ates the first universal quantum computer developed by IBM in a joint project. Here,
researchers are to develop application-related quantum algorithms, test the new
technology, and gain further know-how. To achieve these goals, ten leading compa-
nies in the German economy have joined forces in the Quantum Technology and
Application Consortium (QUTAC). They want to make quantum computing indus-
trially usable and economically successful with this research platform. Small and
medium-sized enterprises as well as start-ups can also use the quantum computer
(cf. [3]).
In addition, since 2022 a computer with more than 5000 qubits has been used at
the Jülich Research Center. The linking of quantum and supercomputing technolo-
gies that has taken place here provides the basis for many further development
steps—also and especially in AI applications (cf. [4]).

2.2 Digitalization and Dematerialization of Products, Services, and Processes

Parallel to the exponential developments, a digitalization and thus also a dematerialization of products, services, and processes is taking place in many areas.
Data (e.g., about our customers) and processes (such as consulting, sales, payment
processes, but also training, workshops, meetings, application and coaching conver-
sations) are increasingly being digitized and thus made available online. In addition,
products that have so far been predominantly provided physically are also losing
their physicality. This simultaneously overcomes physical boundaries that have
previously been of great importance in many business models and often formed
their basis. The transfer of physical objects into data makes them available for pro-
cessing in AI processes. This means that the overcoming of the physicality of prod-
ucts, services, and processes associated with dematerialization often only creates
the conditions to make these areas accessible to artificial intelligence.
The extent to which dematerialization has already been achieved is shown in
Fig. 2.1. Today, more and more activities are shifting to the smartphone or other
smart devices, which thus mutate into smart service terminals.

• Independent products, such as the landline telephone, a standalone camera, the wristwatch, the travel alarm clock, but also the spirit level, the flashlight, the
compass, and the makeup mirror are being displaced by the smartphone or cor-
responding apps—often without replacement. This also applies to the out-of-
fashion dictation machine and—albeit only partially—to standalone game
consoles. Navigation and appointment management are also increasingly digital.
• In addition, NFTs are making a triumphant advance in the metaverse. An NFT
(Non-fungible Token) is a digital asset that is created and published on a block-
chain platform. Unlike traditional cryptocurrencies such as Bitcoin, an NFT is
unique and non-exchangeable, making it a popular means for trading digital art-
works, collectibles, and other virtual objects (see Chap. 11; [5]).
• With the dematerialization of products and services, the underlying processes
can also be comprehensively digitized. This includes consultation processes
through chatbots. Banking services, especially payment processes, are increas-
ingly being dematerialized and processed via smartphones. This includes Apple
Pay, Google Pay, WeChat Pay and Ali Pay. Bitcoin is the “almost perfect” dema-
terialization of money—a dematerialized scarcity!

Fig. 2.1 Dematerialization of products, services and processes—the development of the smart-
phone into a smart service terminal

• The greatest shift of processes into the digital world has occurred in online shop-
ping. In 2022, online sales accounted for 15.7% of the total retail sales of 717
billion € in Germany (see [6]). Moreover, the Corona pandemic has led to an
increase in trainings, meetings, workshops etc. taking place in the virtual
realm. A large part of the work and workplaces initially forced into home office
will remain there in the future. This opens up further exciting business models.
• Also, access control is increasingly being digitized. The spectrum ranges from
keyless drive in one’s own car to online check-in at hotels, flights, and cinemas.
Access control at events and trade fairs is also increasingly digital. This allows
for the collection of many more data about users.
• The smart service terminal also brings together the relevant reception channels:
TV, radio, telephone, and internet. This leads to more and more people, for exam-
ple, doing without TV sets and radios.
• Finally, the smartphone becomes the central content platform: books, newspa-
pers, magazines, catalogs, but also CDs and DVDs were previously physically
produced and transported to the customer. Today, a download or streaming of
many contents is sufficient to be entertained and informed. In addition, one can
be active at any moment—24/7—for example, through search processes, shop-
ping or engagement in social media.
• Traditional maps (such as city maps or road maps) are also increasingly being
dematerialized, as navigation support is available online. This often even includes
real-time updates with information about current traffic jams, detours, and delays
in public transport. Traditional flight schedules (e.g. from Lufthansa) have not
been available for years—and nobody has missed them (except for the printers,
who lost many lucrative orders). Coupons are also increasingly or even exclusively transmitted online. And instead of in a photo album, photos are now quickly presented on a tablet or smartphone.

The combination of further applications and the here generated data streams
continuously create new fields of application for artificial intelligence. The digi-
tally available data can be evaluated in real time if necessary, to—possibly also in
real time—optimize products, services and processes. An example of this is the
linking of Google Maps with the navigation system, which—in real time—receives
traffic radio data and enables dynamic traffic jam avoidance. Depending on the travel time already achieved, a restaurant on the diversion route can also be recommended to the driver, whose body functions may be tracked via a wearable. This restaurant can be selected based on customer reviews that match the driver’s profile and are therefore considered relevant. In addition, preferences for certain restaurants that the driver has expressed on Facebook or Yelp can be taken into account. These recommendations can be transmitted through AI-generated
natural language—for example, through a digital assistant like Alexa. For some a
horror vision—for others just comfort and relevance!
In view of this unstoppable development, you should comprehensively deal with
the following question:

Note Box
How can not only your own communication, but also the products and/or services offered as well as parts of the value chains be made available on mobile devices?
This question must be answered in both the B2B and B2C markets!

In addition, as already indicated, digital offers often also eliminate the need to transfer content to the buyer and to store it physically on a terminal device. This step
is no longer necessary when content is stored in the cloud and only made available
via streaming at the moment of use. A decentralized data storage per user is
replaced here by a central data storage in the cloud. This trend towards shifting to
the cloud is not limited to data, but includes further business processes (see in depth
[7]). These digital processes in turn generate a multitude of data that can be used for
further optimizations.

2.3 Networking of Products, Services, Processes, Animals and People

The discussed developments are reinforced by a trend towards the networking of “things”. The dimensions expected here are shown in Fig. 2.2. Since the networking takes place via the Internet, it is referred to as the Internet of Things (IoT).

Fig. 2.2 Achieved and expected development of the intensity of linkage between objects—in billions of connected IoT devices (installed base; values from 2021 onward are forecasts) (data source: [8])

Fig. 2.3 Components of the Internet of Everything: people (connecting people in a value-adding way), processes (delivering the right information to the right people and/or machines at the right time), data (use of data as helpful support for decision-making), and things (connecting physical objects via the internet for better decision-making; Industrial Internet of Things, Artificial Intelligence of Things)

It is
important to note that the values indicated in Fig. 2.2 do not include computers,
laptops, smartphones, tablets, or landline phones, but only other objects. The indi-
cated percentages show how high the average annual growth of connected objects
has been or is projected to be in three time periods (see [8]):

• 2015–2020: 26%
• 2020–2021: 9%
• 2021–2025: 22% (Forecast)
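As a rough cross-check, these growth rates can be recalculated as compound annual growth rates (CAGR) from the device counts; the values below are read off Fig. 2.2 and are therefore only approximations:

```python
# Approximate numbers of connected IoT devices in billions, read off Fig. 2.2.
iot_devices = {2015: 3.6, 2020: 11.3, 2021: 12.3, 2025: 27.1}

def cagr(start_year: int, end_year: int) -> float:
    """Compound annual growth rate between two years."""
    years = end_year - start_year
    return (iot_devices[end_year] / iot_devices[start_year]) ** (1 / years) - 1

for start, end in [(2015, 2020), (2020, 2021), (2021, 2025)]:
    print(f"{start}-{end}: {cagr(start, end):.0%}")
# Prints roughly 26%, 9% and 22%, matching the percentages listed above.
```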

However, the dynamics of connectivity today are not limited to “things” only. Not
only products, but also services, processes and people are interconnected. Therefore,
we should speak of an Internet of Everything. The fields to be distinguished here
are shown in Fig. 2.3.

• The Internet of Everything initially includes the Internet of Things itself. In the
private sector, things such as watches, refrigerators, toothbrushes, TVs, cars,
houses, shutters, dolls etc. are connected to the internet and thus made accessible
via it. These are consumer-oriented solutions. In the Internet of Things, human
interaction with the objects dominates.
• When the Internet of Things is used in a business context, it is referred to as the
Industrial Internet of Things (IIoT). In IIoT, complex industrial systems
(machines and entire production plants) are interconnected in such a way that
their data can be transmitted over the internet for remote monitoring and perfor-
mance analysis. Sensors and network technologies are the main IIoT drivers.
Such applications can be found in the manufacturing or industrial environment.
IIoT technologies are used less for interaction with humans. Rather, the interac-
tion machine-to-machine is at the center.
• In addition, more and more processes are interconnected. In the private sector,
for example, various purchasing, payment and delivery processes are interconnected. In the Industrial Internet of Things, it is about the networking of industrial processes and procedures. The core of the IIoT solution consists in the
control and monitoring of production processes. This is intended to increase
operational efficiency and reduce costs. In addition, new business models can be
developed through networking (keyword Predictive Maintenance, see Sect. 3.1.3).
• Furthermore, via the internet, data from the most diverse sources can be merged
and evaluated in a consolidated manner. The described user interfaces via lan-
guage and image, as well as the increased use of sensors, generate additional data
for AI applications. These opportunities arise in both the private and business
environment. For the Industrial Internet of Things, the data generated by sensors
are particularly important. These form the basis for comprehensive automation
and for an increased use of artificial intelligence in the production area.
• Today, we already talk about a Sensor Economy because sensors are being used
in more and more areas. In addition, the Industrial Internet of Things makes it
possible to develop physical objects in the digital space based on digitally avail-
able data. The production of prototypes as well as small or large series can then
also be initiated via the Industrial Internet of Things. The Metaverse also offers
a variety of solutions for this (see Chap. 11; see also in depth [5]).
• Finally, people are increasingly being directly connected to the internet. This is
achieved, for example, via fitness trackers or—in the case of cyborgs—directly
via implanted chips. A Cyborg is a person who has permanently supplemented
his body with artificial parts (here: chips). The term Cyborg is derived from Cybernetic Organism. The chipping of people is referred to as Body-Hacking or
Bio-Hacking (see Sect. 5.1.6).
• The use of Augmented Reality (AR) or Virtual Reality (VR) solutions is often
based on such networking. Augmented reality glasses can support the field ser-
vice of service staff, for example in maintenance and repair work. VR applica-
tions are used, for example, for training. Many of these developments come
together in the Metaverse (see Chap. 11).
• The interplay of these developments opens up new use cases for artificial intel-
ligence almost daily—for consumers and companies alike. Through the Industrial
Internet of Things, in conjunction with AI systems, the use of facilities and
processes can be optimized. In addition, employee productivity can be increased
and the management of the supply chain can be improved. Furthermore, cus-
tomer experiences can be optimized and innovations in products, services,
and business models can be promoted. These applications will be explored in
more depth in the second part of this work.

Note Box
The Industrial Internet of Things is much more than just a connection of
objects. This network incorporates objects, processes, data, and even people.
Therefore, it would be more aptly named Industrial Internet of
Everything.

• However, the critical aspects of the (Industrial) Internet of Everything should not be overlooked. The use of such networks brings an extremely high complex-
ity with it—during development, but also in daily use. The more objects, pro-
cesses, data and people are integrated into such a network, the larger are the
possible entry points for hackers. Therefore, an (Industrial) Internet of
Everything requires the highest security standards. Because once hackers have
penetrated such a network, they can, in the worst case, bring a global system to a
halt, which has happened several times.
• Vulnerabilities of such systems are also created by missing international stan-
dards and proprietary protocols of various providers. This leads to the fact that
IIoT applications from different producers may not be compatible with each
other. Also, the management of the processes for the extraction, processing, and
storage of the enormous data volumes can create security gaps and data leaks.

Note Box
The opportunities associated with the development of the (Industrial) Internet
of Everything have a dramatic impact on individual companies, entire indus-
tries, and every single country. Therefore, these developments are not only to
be observed, but actively shaped!

An advancement of the (Industrial) Internet of Things is the Artificial Intelligence of Things (AIoT). The Artificial Intelligence of Things is an innovative fusion of Artificial Intelligence and the Internet of Things. This connection aims to combine the possibilities of data collection and data communication of the IoT with the capabilities of Artificial Intelligence. This combination enables a performance-enhancing
interaction of the two concepts. Through the Internet of Things, a multitude of data
(often in real-time) is collected and merged. These data can be evaluated by the AI
agents (often also in real-time). This enables quick analyses that can trigger further
impulses for product development, production, logistics, sales, and marketing.
Through AI, it is possible to compare the obtained real-time data with reference
results of other machines or processes—if necessary, worldwide, to come to ever
more convincing solutions.
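A minimal, purely hypothetical sketch of this pattern: simulated sensor readings are compared against reference statistics with a simple deviation check; in a real AIoT setting the reference values could be aggregated from comparable machines worldwide and the analysis would be far more sophisticated:

```python
import random

# Simulated vibration readings from one machine (arbitrary units).
random.seed(42)
readings = [random.gauss(10.0, 0.5) for _ in range(100)]
readings.append(14.2)  # an unusually high value, e.g. hinting at a bearing problem

# Reference statistics, e.g. aggregated from comparable machines (assumed values).
reference_mean, reference_std = 10.0, 0.5

def is_anomaly(value: float, threshold: float = 3.0) -> bool:
    """Flag readings that deviate by more than `threshold` standard deviations."""
    return abs(value - reference_mean) / reference_std > threshold

alerts = [round(v, 2) for v in readings if is_anomaly(v)]
print("Readings flagged for closer inspection:", alerts)
```

In practice, such a simple check would only be the first stage; flagged readings would feed into the kind of predictive maintenance scenarios addressed later (see Sect. 3.1.3).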
Through more efficient IoT operations and a more powerful data management,
previously untapped learning potentials are unlocked. In addition, the interactions
between humans and machines can be significantly improved. Based on machine
and sensor data about temperature, vibration, noise, and other factors, production
chains can be fine-tuned more and more. These developments have a direct impact on the Overall Equipment Effectiveness or the Total Plant Efficiency. At the
same time, an Artificial Intelligence of Things can enhance innovation capability as
well as competitiveness. As will be shown in the further course of this work, there
are already a multitude of AIoT applications.

Note Box
The merging of Artificial Intelligence with the Internet of Things to form the
Artificial Intelligence of Things (AIoT) forms a perfect symbiosis.
After all, the collection and merging of data are only purposeful if they can
also be evaluated quickly and innovatively. In addition, most AI agents can
only fully exploit their potential when large amounts of data are available.

Food for Thought


Data is the new oil! The Internet of Everything provides the pipelines to pump data around the globe. Artificial Intelligence is the refinery that
refines data and extracts the most diverse products and services from it.

2.4 Big Data

The advance of AI applications into ever more areas of application is not only due
to more powerful software and hardware but above all to the abundant availability
of Big Data. AI applications require—especially in the case of Deep Learning—
large amounts of data to deliver convincing results.

Note Box
Big Data is the indispensable raw material for AI systems.
And as it is so convincingly said:
Who owns the data, owns the business, owns the industry!
Access to data is becoming vital in more and more markets!

In almost all areas of life and business, a tremendous growth of data volume can be observed. Not without reason is it referred to as Big Data.
Big Data can be characterized by the following features (see Fig. 2.4):

• Volume (in terms of data volume or amount of data)
“Volume” describes the extent of the available data volume. The width and depth
of the available data affect this volume. User-generated content (e.g. in social
media), the increasing use of sensors and the further progressing networking of
objects generate ever larger data streams in both private and business areas. The
widespread use of intelligent devices (smart devices) and the increased use of the
(Industrial) Internet of Everything contribute to this.
• Velocity (in terms of the speed of data generation)
“Velocity” describes the speed at which data is either generated or existing data is updated, analyzed, processed, and/or deleted. Today, much data can be captured, documented, and evaluated in real time in order to immediately initiate further processes—partly automated.

Fig. 2.4 The five Vs of Big Data
• Variety (in terms of the multitude of data sources and data formats)
“Variety” refers to the multitude of internal and external data sources. On the
other hand, “Variety” also refers to the different data formats (such as structured,
semi- and unstructured data) that usually need to be evaluated in parallel.
Structured data have a normalized form. They can be directly stored in a row
and column-oriented database. In contrast, unstructured data are in a non-identifiable and non-normalized data structure. This includes, for example, natural language texts, presentations, images, and audio and video files. With
unstructured data, only the file type is known. However, the content is not in the
form of interdependent data fields. Therefore, unstructured data must be appro-
priately prepared before processing.
In contrast, semi-structured data have a certain basic structure. An email is an
example of semi-structured data. An email has a consistent structure—with
sender, recipient, subject line. The actual content of the mail can contain natural
language text, emojis, photos, and videos. This content is therefore unstruc-
tured data.
• Veracity (in terms of the quality of data and data sources)
“Veracity” (in terms of “truthfulness”) refers to the quality of available data and
data sources. Compared to the subsequent criterion “Value”, “Veracity” is not
about the meaning of the data in terms of semantics, but solely about their formal information content. The quality of the data in “Veracity” refers to the following dimensions (a small code sketch following the five Vs illustrates such checks):
–– Correctness (in terms of freedom from errors)
–– Completeness (in terms of covering all relevant fields)
–– Consistency (in terms of freedom from contradictions)
–– Timeliness (in terms of the validity of the data)

This also involves the question of the trustworthiness of the data, in terms of free-
dom from systematic distortions. Here, it is particularly important to critically
evaluate the statements of Pro-domo sources. “Pro domo” literally means “for
the house” and in a figurative sense “in one’s own cause” or “for one’s own
benefit”.
Here is an example of a pro-domo effect: When the Association of Automobile
Industry or the Association of Research-Based Pharmaceutical Manufacturers
presents or interprets statements or analysis results, it can be assumed that such
presentations or interpretations are rather favorable for the activities of the repre-
sented companies. Thus, there may be a (partial) “distortion” of the data or their
interpretation. This applies in principle to all associations as well as to the publi-
cations of companies that want to present their services in a positive light.
If there is a lack of veracity, i.e. the quality of the data and/or the data sources, then
the already mentioned GiGo effect occurs. Incorrect or “dirty” input data leads
to incorrect results if such errors cannot be detected and corrected in the process-
ing process.

• Value (in terms of the value of the data)
By “Value” we mean the value and thus the relevance of the data with regard to a specific application. Logically, this dimension is at a higher level and is influenced by the previously mentioned four Vs.
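The quality dimensions listed under “Veracity” can be pictured as simple checks on a single record; the sketch below uses a hypothetical customer record and deliberately crude rules, purely for illustration:

```python
from datetime import date

# A hypothetical customer record with a deliberately missing field.
record = {
    "customer_id": 4711,
    "email": "jane.doe@example.com",
    "postal_code": "10115",
    "country": "DE",
    "last_updated": date(2021, 3, 1),
}
required_fields = ["customer_id", "email", "postal_code", "country", "city"]

completeness = all(field in record for field in required_fields)          # False: "city" is missing
correctness = "@" in record["email"] and record["postal_code"].isdigit()  # crude format checks
consistency = not (record["country"] == "DE" and len(record["postal_code"]) != 5)
timeliness = (date.today() - record["last_updated"]).days < 365           # data older than a year?

print({"complete": completeness, "correct": correctness,
       "consistent": consistency, "timely": timeliness})
```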

The relevance and interaction of these criteria are to be illustrated using a vehicle
whose control is carried out via AI systems. Already today, a networked car gener-
ates a data volume of approx. 25 gigabytes per hour (Volume). These data are gen-
erated and change in real time and must—partly—also be processed in real time
(Velocity). An autonomous driving car must simultaneously evaluate data about the
weather, the oncoming traffic, current traffic notices, the road conditions, the desti-
nation and much more. These data are both structured as well as semi- and unstruc-
tured (Variety). It is important that the processed information about traffic jams is
also correct. Otherwise, a traffic jam diversion may be recommended, although the
traffic jam has already dissolved an hour ago. Also, the sensors must not be dirty,
because they otherwise send faulty data (Veracity). Finally, the data must be rele-
vant for the respective vehicle. Traffic jam notices on routes that are not even driven
are useless in this sense. The same applies to weather information that refers to
areas that the vehicle does not touch. Hints about speed traps on roads that are not used are also irrelevant (Value).

Note Box
The mastering of the five Vs of Big Data represents the ultimate challenge
for Artificial Intelligence. The quality of all applications based on it stands
and falls with the quality of data handling.

It may sound paradoxical, but the entry into the age of Big Data should be
achieved through a comprehensive processing of Small Data—a utilization of the
data that already exists in various places within your company (cf. in depth [9],
pp. 71–84). Many companies possess a gigantic treasure trove of data, which was
obtained through their own business operations. However, this treasure trove of
data—here referred to as Small Data—is often neither seen nor utilized. Therefore,
your first challenge in data management initially consists of systematically cap-
turing the company’s own data and data sources and transferring them into a Single
Point of Truth (also Single Source of Truth). This “single point of truth” describes
a set of data that can claim correctness for itself.
Your second challenge in data management consists in checking for which
processes this data should be used. These can be activities for the development of
business model innovations or for the development of new offers. Even customer
communication can often be much more successfully designed with data support, as
far as this is permissible within the narrow legal framework of the GDPR. This regu-
lation contains provisions for the protection of natural persons in the processing
of personal data. This means that the processing of data generated by devices, machines, and/or plants without reference to persons is subject to significantly fewer restrictions. You should use this leeway as intensively as possible for
the further development of your options for action.

Food for Thought


What type of data protection officer is active in your company?

• A designer, who helps you to use the legal possibilities creatively?
• Or a preventer, who out of sheer fear and anticipatory obedience blocks all creative usage possibilities?

With the data sources, different categories need to be distinguished. We speak of First Party Data when you are the “first party” that has collected this data (the core of Small Data). The data you collect yourself are often the most important of all possible data. These include, for example, all information about prospects and customers, which you gain in the course of acquisition, retention, and win-back.
Also all reactions to your communication initiatives, the transactions carried out,
calls to the service center and information about the visit to your website (if it was
connected with a log-in) are included. In addition to the First Party Data, another term is increasingly being used today: Zero Party Data. This is information that customers voluntarily pass on to companies. Zero Party Data is thus a subset of First Party Data and should therefore not be considered as a separate category.
The information from procurement, production, and distribution processes also belongs to the First Party Data. It is not only necessary for monitoring,
controlling, and optimizing ongoing processes, but can also provide impulses for
optimization and for the further development of your business model.

Note Box
In addition to customer-related information, all other data that you generate
directly from your applications, machines, and systems belong to First
Party Data.
First Party Data is often a particularly valuable data basis for AI applica-
tions. After all, your company is the only instance that has access to this
data—apart from successful hackers.

After a comprehensive evaluation of your First Party Data, you should check
which second- and third-party data you should also access. Second and Third Party Data are data from other parties. Second Party Data is usually not “traded”; it can often only be accessed through an agreement with trusted partners. If you, for
example, build a production network with suppliers and/or a marketing network
with sales partners, important information can be exchanged in these. Your cre-
ativity is required to develop value-creating partnerships with other companies
for the joint use of relevant data sources. Companies that operate outside these net-
works cannot access this data.
Third Party Data is traded on markets and is available to all companies autho-
rized to access it who are willing to pay for it. Either you can purchase such data
and integrate it into your systems. Or you at least use such data. These data pro-
viders (also data aggregators) include, for example, providers of information on
weather, traffic conditions, commuter flows in city centers, fashion trends, price
developments on the commodity and stock markets, etc. Also address publishers
and data service providers, who can systematically support you in customer
acquisition and customer development through comprehensive data pools and
powerful analysis tools, are among these important data sources (see in depth [9],
pp. 101–103, 116 f.).
The advantage of Third Party Data lies in their large quantity. Therefore, it is
especially here that we can rightly speak of Big Data. However, when accessing this
data, you usually acquire no exclusivity—unless you are willing to pay for it
(“exclusive data use” or “competition exclusion”). Customer-related third-party
data is usually not 1:1 data on individual persons (data protection!), but information
that describes a larger group of consumers, managers, or companies. Nevertheless,
you should check the relevance of such data for your company development (see the
example of customer dialogue [9], pp. 110–112).

Note Box
A distinction is made between First Party Data (self-acquired), Second Party Data (generated and/or used jointly in partnerships), and Third Party Data (provided by third parties for use).
First Party Data is of greatest importance in many areas because no competitor has exactly this data. This can lead to competitive advantages.

Against this background, it would be more than desirable if the European project Gaia-X were crowned with success. The name Gaia comes from Greek mythology and stands for the personified Earth as one of the first deities. Richard Wagner, by the way, calls this deity Erda. The goal of the Gaia-X project is to create a secure and networked European data infrastructure for the states, companies, and people in Europe. This is intended to build digital sovereignty. At the same time, the promotion of data-based innovations is aimed for.
The infrastructure to be built as part of Gaia-X is intended to become the
nucleus of an ecosystem in which data and services are brought together and made
available in a trustworthy manner. In this context, uniform standards for data protec-
tion and for the exchange of data are also to be developed. Both the companies and
the users should collect and share data within the project Gaia-X. The control over
the data should always be maintained. To achieve this, the architecture of Gaia-X
is based on the principle of decentralization. This is about the interaction of various
individual platforms, all based on the Gaia-X standard. A data infrastructure is to be
developed that ensures openness, transparency and trust. The goal is not the devel-
opment of a cloud, but of a networked system, which connects many cloud service
providers (see BMWK, [10]). One area of application is shown in Sect. 3.1.5 for the
production sector.
The development of data volume has now reached unimaginable dimensions (see Fig. 2.5; [11]). Also in terms of data availability for private individuals and companies, an exponential increase is emerging. The amount of data created, captured, copied, and used worldwide reached over 120 zettabytes in 2023. A zettabyte equals a sextillion bytes; that is a thousand exabytes or a billion terabytes. By 2025, the global data volume is expected to grow to more than 180 zettabytes. The jump in digitization forced by the Corona pandemic has led to additional growth dynamics. Not only have companies further digitized their processes; consumers have also used more digital media than ever before, especially during the lockdowns. This includes not only activities on social media, but also access to streaming services and online games—and with each use, more data is generated.
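For orientation, the orders of magnitude mentioned above can be spelled out in plain numbers (a small sketch; the 120 and 180 zettabyte figures are the rounded values cited in the text):

```python
# Byte prefixes in decimal powers: kilo = 10**3, ..., zetta = 10**21.
ZETTABYTE = 10 ** 21   # = 1,000 exabytes = 1 billion terabytes

global_data_2023 = 120 * ZETTABYTE   # rounded value for 2023
global_data_2025 = 180 * ZETTABYTE   # forecast for 2025

print(f"2023: {global_data_2023 / 10**12:,.0f} terabytes")
print(f"2025 forecast: {global_data_2025 / 10**12:,.0f} terabytes")

# Implied average annual growth between 2023 and 2025:
growth = (global_data_2025 / global_data_2023) ** (1 / 2) - 1
print(f"Implied annual growth: {growth:.1%}")   # about 22.5 % per year
```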
What are the most important data sources behind this exponential growth of
data volumes in Fig. 2.5? Here, a distinction must be made between the private sec-
tor and the corporate sector. In the private sector, especially things and processes
themselves generate more and more data about their own use. This often happens
via the sensors already mentioned.

Fig. 2.5 Current status and forecast of the volume of data created, captured, copied, and used worldwide from 2010 to 2025—in zettabytes (data source: [11])

In their “smart” form (i.e., networked via the Internet) as Smart Watch, Smart Home, Smart Refrigerator etc., they make their
data available via the Internet. Also, when using digital processes, such as streaming
services from Amazon Prime Music/Video, Spotify, Maxdome or Netflix, a multitude
of data about user behavior is generated incidentally. The spectrum ranges from the
type of content viewed or listened to, the time and place, to the scene at which viewers or listeners stopped the streaming process. On the other hand, the human being is also an increasingly data-generating and—above all—a data-sharing creature.
These developments are based on Zuckerberg’s Law [12]:
I would expect that next year, people will share twice as much information as they share this
year, and the year after, they will be sharing twice as much as they did the year before. That
means that people are using Facebook, and the applications and the ecosystem, more
and more.

When using smart devices, a multitude of metadata is also generated. These “data
about data” indicate who, when, from where, and with whom communication took
place and for how long. These metadata describe, for example, telephone behav-
ior—regardless of the content of the conversation. With Google searches, in addi-
tion to the content focus, it is also recorded, for example, from which device, with what intensity, with what result, and for how long a search was conducted. All these data form the
so-called digital shadow that we cast with all our online activities—whether we
want to or not. These data also belong to the gigantic stream of data that is so excit-
ing for AI applications.
In addition to the data sources mentioned here, in the corporate environment,
the most diverse production processes and the use of aggregates (e.g., the engines of
airplanes) and systems also generate a never-ending stream of data. These diverse
data sources are actually the best prerequisite for building powerful AI systems also
in Germany and Europe. Actually!

In Europe, politicians in particular are enthusiastically discussing the General Data Protection Regulation (GDPR), which came into effect on May 25, 2018. It is declared a milestone that the comprehensive use of data by companies has finally been stopped. Large budgets and a lot of energy have been invested in the development of processes to meet the GDPR principle of “Prohibition with permission reservation” as well as the rules “Privacy by design”, “Privacy by default” and “Data economy” (see also Sect. 1.6).

Food for Thought


Perhaps no one in the EU has noticed yet, but the requirements of the GDPR
and the “more data” that is indispensable for AI developments are mutually exclusive in many areas!

Many overlook what was already described in 2009 as the Law of Disproportionality of Information ([13], p. 69):

The more information is available about a consumer or a decision-maker or a company, the more precisely offers can be placed. This means: We need more information about prospects and customers in order to send them less, but relevant information.

This principle applies especially to the use of Artificial Intelligence!


Therefore, it is very questionable whether the Data economy required by the
GDPR is the right way if Germany and Europe want to play in the first league on the
topic of Artificial Intelligence. However, Europe has decided that the companies
active here will no longer be fed personal data in pipelines, but only drop by drop.
In addition, companies now inevitably have to deal with the legally compliant
implementation of the GDPR. This paralyzes existing business processes, because companies constantly have to clarify in what form they may still contact their customers and store their data. On the other hand, it distracts from other important topics—such as the strategic confrontation with Artificial Intelligence. How can Artificial Intelligence, whose performance depends on the available information, be made successful against this background? It is not for nothing that in the quoted Bitkom survey, 61% of
companies named “violations of data protection regulations” as the greatest risk
(see Fig. 1.23). At the same time, an “easier access to data” would help companies
to advance the use of AI (see Fig. 1.27).
A critical voice on comprehensive data use is also quoted here.
Shoshana Zuboff, emeritus professor at the Harvard Business School, has coined
the term Surveillance Capitalism for the emerging developments. By this she understands

... a mutation of modern capitalism. Its raw material is data, which is obtained from the
surveillance of human behavior. This data, how someone behaves, is transformed by sur-
veillance capitalism into predictions of how someone will behave—and these predictions
are sold in new markets. Surveillance capitalism has its roots in the digital milieu and domi-
nates it today. It rose to dominance because it opened the first efficient way to monetize
online, as it quickly and reliably converts investments into capital. [...]
124 2 Drivers of Artificial Intelligence

Surveillance capitalism [...] must penetrate ever deeper into our everyday life, our per-
sonality, our emotions, in order to be able to predict our future behavior. [...]
In surveillance capitalism [...] we are hardly customers and employees anymore, but
primarily sources of information, data material of an apparatus, the workings of which
remain largely hidden from us. It is not capitalism for us, but over us. It observes us in order
to develop its products. [...]
It is wrong to say: ‘Let them scan my experiences, I have nothing to hide.’ I say:
Whoever has nothing to hide, is nothing. Our inner life, our private experiences, attitudes,
feelings, and desires are what make us human beings. They are our moral home ([14], p. 68;
in depth [15]).

Reading Tip
If you want to delve deeper into this topic, we recommend the book by inter-
net pioneer Jaron Lanier:
“Ten Reasons Why You Should Delete Your Social Media Accounts
Right Now”
Lanier groups his explanations around the term BUMMER. This acronym
stands for “Behaviors of Users Modified and Made into an Empire for Rent”.
This can be roughly translated as: The behavior of users is changed and turned
into a rental empire.
My own work “The Digital Seduction” also provides a lot of worthwhile
reading material for reflection in this context.

2.5 New Technologies

When analyzing the market penetration of new technologies, it is important to con-


sider that their economic effects often only become apparent after many years. For
companies, the question here is which technologies to focus on—and which can be
neglected. An important guide for companies is provided by the annually updated
Gartner Hype Cycle for new Technologies. This shows in which phase of the
lifecycle cross-industry relevant technologies are located. These technological life
phases are defined by Gartner based on the expectations placed in the technologies. In
the context of artificial intelligence, the Hype Cycle for Artificial Intelligence is of
particular interest (see Fig. 2.6; [16]).
Regarding the expectations of the technologies, Gartner defines five different
phases that provide information about the state of expectations and the market
acceptance of new technologies.

1. Innovation Trigger (“Technological Impulses”)


In this phase, information about new technologies is published and eagerly
picked up by the media. Whether these technologies will find sustainable use is
not yet foreseeable at this early stage.
2. Peak of Inflated Expectations (“Peak of exaggerated expectations”)
In this phase, initial success stories are published that fuel expectations for a new technology. At the same time, initial failures in the use of the technology can become visible, pushing expectations to their limits. The technological use remains limited to a few companies.

Fig. 2.6 Gartner® Hype Cycle for Artificial Intelligence—2022 (Gartner and Hype Cycle™ are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.)
3. Trough of Disillusionment (“Low point of disillusionment”)
This trough in the technological life cycle is based on the realization that
many expectations of new “wonder weapons” have not been met. In this phase,
the wheat is separated from the chaff.
4. Slope of Enlightenment (“Rise of knowledge/enlightenment”)
Now it becomes visible for more and more companies how a technology can
be used beneficially. Technological developments of the second and third gen-
eration of the initial technology are offered and increasingly adopted and inte-
grated into the workflow by innovation-friendly companies.
5. Plateau of Productivity (“Productivity plateau”)
The technology is now widely used, as its advantages are not only visible, but
also comprehensively calculable. The use as mainstream technology is predes-
tined. The use in more and more companies and application areas is only a mat-
ter of time.

In addition, Gartner presents in the Hype Cycle a forecast of when the productivity plateau is expected to be reached. The range extends from “less than 2 years” to “more than 10 years”. It is also indicated whether technologies may already become obsolete before they have reached the productivity plateau.
The Gartner Hype Cycle 2022 identifies the most important innovations in the
field of AI technologies and AI techniques that go beyond everyday AI applica-
tions. It is recommended to pay particular attention to the AI innovations that could
become mainstream in 2–5 years. These include Composite AI, Decision
Intelligence, and Edge AI. Companies that adopt these AI applications early can
gain significant competitive advantages. The AI innovations in the Hype Cycle presented below reflect complementary and sometimes contradictory priorities in
four main categories (see [16–18]):

• Data-centered AI
AI users have often focused on improving the results of AI solutions through
their own optimization measures. With data-centered AI, the focus shifts to
improving and enriching the data used to train the algorithms. Innovations in
data-centered AI include synthetic data. These are artificially generated by
machine learning. They mimic the structure and statistical properties of data
obtained from real events, without using the identifying properties of these data
(such as names and personal details). Synthetic data can partially replace the use of personal data for training models (a brief illustrative sketch follows after this list).
Today, large amounts of data are still a prerequisite for successful AI solutions.
In the future, Small and Wide Data approaches will enable meaningful AI appli-
cations and thus reduce the dependence on Big Data. The development of Small
Data approaches aims to achieve useful insights even with less data. In Wide
Data approaches, various small and large, unstructured and structured data
sources are evaluated synergistically to achieve relevant results. More and more
companies will be forced to shift their focus from Big Data to Small and
Wide Data.
• Model-centered AI
Innovations in this area include, among others, composite AI, causal AI, and
generative AI. With Composite AI, different AI techniques are merged. This is intended to improve the efficiency of learning and
expand the level of knowledge representations. Since there is no one-size-fits-all
solution for a single AI technique, Composite AI provides a platform to solve
various business problems more effectively. Composite AI will establish itself in
2–5 years. The range of Composite AI extends from a combination of different
learning processes to a networked use of Natural Language Processing, Computer
Vision, expert systems, and robots. The result is a composite AI system that
solves a wider range of business problems more efficiently. This enables the use
of AI for companies that do not have access to large amounts of historical or
labeled data, but have considerable human expertise.
Causal AI encompasses various techniques such as causal diagrams and simula-
tions. With their help, causal relationships can be uncovered to improve decision-
making. It will probably take another 5–10 years for causal AI to become widely
accepted. Causal AI can increase the efficiency of applications. Expert knowl-
edge is added to create causal AI models even with smaller data sets. Through
easier to interpret cause-and-effect relationships, AI-based results can also be
more easily understood.
Generative AI uses machine learning algorithms to generate original media con-
tent such as texts, images, audio and video content based on training data, which
have the appearance of “real” content. Here, one can think of ChatGPT or
DALL-E. However, the application of generative AI also lends itself to fraudulent or
criminal actions. Generative AI can also create synthetic data and models of
physical objects. Generative AI was used, for example, to develop a drug for the
treatment of obsessive-compulsive disorders in less than 12 months. Gartner
estimates that by 2025 more than 30% of new drugs and materials will be sys-
tematically discovered using generative AI techniques.
The use of AI resources (personnel, data, models, and computing capacities) must
be made more efficient overall. In addition to Composite AI and Generative AI,
Multi-Experience can also contribute to this. To achieve a Multi-Experience,
various modalities (e.g. analogue and digital touchpoints such as apps and
devices) are used to create a seamless experience for customers. A four-stage
Multi-Experience model should map a digital user journey as follows:
–– Sync me: Store information about a user that they can easily find and retrieve
at any time
–– See me: Understand the entire user context (including the current location,
the respective situation as well as the current and historical preferences) in
order to present the user with relevant information and interactions in
this context
–– Know me: Use of predictive analytics to provide the user with relevant action
suggestions
–– Be me: Take actions on behalf of the user, if permission has been granted and
the actions are in the interest of the user
• Application-centered AI
Innovations in this area include decision intelligence and edge AI. Decision intelligence will become established in about 2–5 years. It serves to improve decision-making. To do this, it must be explicitly
understood how decisions are made and how the results are evaluated and
improved through feedback. Such decision intelligence helps to reduce the
unpredictability of the outcomes of decisions. This can be achieved through cor-
rect recording and consideration of various uncertainty factors from the business
context.
Also, Edge AI will only establish itself in 2–5 years. Edge AI refers to the use of
AI techniques embedded in endpoints, gateways, and edge servers of the Internet
of Things. Edge AI is therefore, in contrast to cloud computing, a form of decentralized AI-supported data processing "at the edge of the network". Here, AI applications and data are moved away from central nodes (e.g., data centers) towards the edges of the network. This can also include applications used, for example, in autonomous vehicles or in systems for visual inspection in manufacturing. Edge AI reduces latency (i.e., time delays) in decision-making because analyses are performed locally. In addition, Edge AI contributes to reducing connection costs because the data traffic between the edge and the cloud is reduced. The networks are also relieved of transmitting very high data volumes because data streams are processed more directly on site. At the same
time, a permanent availability of the solution is achieved because it is indepen-
dent of network connectivity.
• Human-centered AI
Responsible AI and digital ethics are counted as human-centered AI. When AI systems replace human decisions, they can amplify both good and bad results equally. Therefore, a focus should be on responsible AI. This is an umbrella term for the aspects that lead to appropriate business and
ethical decisions when introducing AI (see Sect. 1.4). It is estimated that it will
take another 5–10 years for responsible AI to establish itself.
This is related to the topic of digital ethics. This includes the value systems and
moral principles for behavior in electronic interactions between people, organiza-
tions, and things. This is about the protection of privacy and the avoidance of bias.
After all, more and more people are aware that their data is valuable. At the same time, there is a lack of transparency about its use, and data protection violations are commonplace. Companies are therefore often trying to reduce the risks associated with the management and security of personal data. At the same time, many governments are passing increasingly strict laws on data usage. Many companies, however, still neglect the implementation of digital ethics. Gartner predicts that by 2024 only 30% of large companies will use a new Voice-of-Society metric. This metric should enable companies to quickly respond to social issues and assess their impact on their business activities. Companies must integrate digital ethics comprehensively into their AI strategies. This can help companies strengthen their influence and reputation with
customers, employees, partners, and society.
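To make the idea of synthetic data mentioned under data-centered AI more tangible, the following minimal Python sketch fits a very simple statistical model (column means and covariance) to a small, purely fictitious numeric data set and then draws artificial records from it. All numbers, column names, and the choice of a multivariate normal distribution are illustrative assumptions, not a description of any specific product or method from the text.

import numpy as np

# Purely fictitious "real" data: rows = customers, columns = [age, purchases per year, basket value]
real = np.array([
    [34, 12, 55.0],
    [45, 20, 80.5],
    [29,  8, 40.0],
    [52, 25, 95.0],
    [41, 15, 60.0],
])

# Fit a simple statistical model: column means and covariance matrix
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Draw synthetic records that mimic the statistical structure of the original data
# but contain no identifying attributes of real persons
rng = np.random.default_rng(seed=42)
synthetic = rng.multivariate_normal(mean, cov, size=1000)

print("Means of real data:     ", np.round(mean, 1))
print("Means of synthetic data:", np.round(synthetic.mean(axis=0), 1))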

The AI market is still in an evolutionary state. At the beginning of the Hype Cycle
is—still—the already mentioned Artificial General Intelligence (AGI). This refers
to the replication of human intelligence. The goal is for a system to be able to suc-
cessfully accomplish any intellectual task that a human is capable of. AGI is the
epitome of “strong artificial intelligence”. According to Gartner’s estimate [16], it
will take more than 10 years for this to reach the level of productivity—if ever at all.

Note Box
Artificial General Intelligence aims to work out the principles of intelli-
gence that function independently of a specific task or predefined context. For
this, the AI would have to be able to draw on past experiences and analyze
them to find the best solution for a new problem. Such principles are intended
to enable machines not only to master any intellectual task that a human is
capable of, but even to go beyond that.

The mentioned forms of causal AI, data-centered AI, Decision Intelligence, Composite AI and Generative AI are on the ascending part of the expectation curve (see Fig. 2.6). Edge AI and digital ethics have just barely passed the peak of expectations. For autonomous vehicles, the trough of disillusionment has now been reached on a broad front. It will probably take more than 10 years for this technology to be available at level 5 (see Sect. 3.2). Intelligent applications, Data Labeling and Computer Vision are already on their way to comprehensive use.

Food for Thought


Now your entrepreneurial spirit is required to develop profitable business cases from technologies and technical use cases. Because one thing is certain: AI applications will once again fundamentally change the world. This is due to
the fact that AI technologies have a particularly high level of innovation and
consequently a great disruptive potential. It has therefore become unavoidable
to deal with AI technologies in the context of digital transformation, but also
in the context of ecological transformation.

At this point, it is essential to note that the economic impacts of AI use—like all
new technologies—will initially only show gradually. Only in the next few years
will a comprehensive and dynamic growth of AI applications set in. Thus, the use of artificial intelligence by companies will follow the S-curve shown in Fig. 2.7. This S-curve ideally illustrates how innovations are adopted in the market overall, by companies and customers alike.

Although such an S-shaped development is regularly observed, it always surprises the actors anew! What is the cause of this? With innovations, a linear development is most often expected, not an S-curve. People are not trained to understand non-linear growth. Most people intuitively expect that new things will grow quite quickly at first before settling into a constant linear growth path. Such a development is often observed in nature, not least in the growth of people themselves!
An S-curve, however, shows a completely different course. In the first phase of slow adoption, it seems as if not much is happening. This is the moment when managers are frustrated about the slow progress and skeptics step onto the stage. Then you hear statements like:

• This technology will never catch on.


• The market is not yet ready for this technology.
• You always have to invest; successes are not apparent.
• The timing is just not right.
• Let’s just quickly end the AI project.

The paradox is that after the introduction of groundbreaking new technologies, such as the introduction of electric light in factories at the end of the nineteenth century and the advent of computers in the 1990s, there was initially a slowdown in productivity. This correlation was highlighted by Chad Syverson, an economist at the University of Chicago. According to him, it was only after several years that an actual increase in productivity occurred. The basic technology only showed positive effects once it was supplemented by innovations and exciting use cases (cf. [1, 19], pp. 125–127).

Note Box
Digitization and AI technologies are per se not yet success factors that auto-
matically lead to economic growth and societal prosperity. Only AI-based
innovations can unlock growth, efficiency enhancement, and prosperity
potentials. This requires ideas, budgets, courage as well as creative and com-
mitted employees to make big ideas a reality.

Often only after many months or even years do innovative technologies enter the second phase of dynamic growth according to the S-curve (see Fig. 2.7). Then the tone of communication also changes:

• Why didn’t we invest in AI earlier?


• Why don’t we have our own AI expertise on board?
• Why can’t we integrate AI solutions into more products and services yet?
• Why don’t we have an AI roadmap for our company?
• Who dropped the ball on the AI topic?
• Why aren’t there sufficient budgets available for the AI concepts?
Fig. 2.7 S-curve of the adoption of new technologies (performance plotted over time, in three phases):
• Phase of slow adaptation: experiments on the use of the technology are required; areas of application are only partially convincing; prototypes are used to test new applications; high uncertainty about the opportunities and risks of the technology
• Phase of dynamic growth: commercial use becomes economically attractive; new areas of application are developed; increasing revenues are generated; investments in further developments are necessary
• Phase of stagnation: mature technology; stable or declining yields; alternative technologies in use

The dynamic growth of the second phase can last for years and decades. With arti-
ficial intelligence, it will probably be decades before more and more, and perhaps
even all, areas of life and work will be permeated by AI solutions. Only then will the
dynamics of development weaken and remain on a plateau of productivity. Then the
third phase of stagnation is reached (see Fig. 2.7). At the latest here, but often
much earlier, the start of the S-curves of new technologies can be observed. Thus,
one S-curve is replaced by the next S-curve. It’s always just a matter of time.
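The S-shaped course described above can be approximated with a logistic function. The following short Python sketch is purely illustrative; the parameter values (saturation level, growth rate, inflection point) are arbitrary assumptions chosen only to make the three phases of slow adaptation, dynamic growth, and stagnation visible.

import math

def logistic_adoption(t, saturation=100.0, growth_rate=0.6, midpoint=10.0):
    # Illustrative S-curve: adoption level (in % of saturation) at time t (in years)
    return saturation / (1.0 + math.exp(-growth_rate * (t - midpoint)))

for year in range(0, 21, 2):
    level = logistic_adoption(year)
    bar = "#" * int(level // 2)
    print(f"Year {year:2d}: {level:5.1f}%  {bar}")

The printout shows hardly any movement in the early years, a steep rise around the inflection point, and a flattening towards the saturation level, which is exactly the pattern that surprises actors who expect linear growth.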

Note Box
To avoid missing both the entry into innovative technologies and the exit from
technologies, a deep understanding of the S-curves is indispensable. Then a
longer phase of adaptation is also accepted—and the point at which to bet on
a new technology with a new S-curve is not missed.

In the exploitation of AI potentials, the new technologies described in Sect. 2.5 play a special role. On the one hand, they enable new business models, for example based on the Internet of Things or the Internet of Everything (see in depth [7]). At the same time, new technologies also pose risks for companies if they do not recognize their relevance to users and do not bet quickly enough on the corresponding technologies. Then the phenomenon of the "selection" of no longer viable business models, known as digital Darwinism, occurs (see in depth [20]).
As already hinted, the success in the global AI race depends on the availability
of high-quality data (see Sect. 2.4) and correspondingly powerful computers.
Therefore, a look is taken here at where the most powerful computers in the world are located today (see Fig. 2.8; [21]). Five of these computers are located in the USA, two in China, and one such top computer each in Japan, Finland, and Italy.

Fig. 2.8 Computing power of the most powerful supercomputers worldwide in November 2022 (in TeraFLOPS) (data source: [21]):
• Frontier (HPE, USA): 1,102,000
• Supercomputer Fugaku (Fujitsu, Japan): 442,010
• LUMI (HPE, Finland): 309,100
• Leonardo (Atos, Italy): 174,700
• Summit (IBM, USA): 148,600
• Sierra (IBM/Nvidia/Mellanox, USA): 94,640
• Sunway TaihuLight (NRCPC, China): 93,010
• Perlmutter (HPE/Nvidia, USA): 70,870
• Selene (Nvidia, USA): 63,460
• Tianhe-2A (NUDT, China): 61,440

The computing power in Fig. 2.8 is given in TeraFLOPS. FLOPS stands for Floating Point Operations Per Second. "Operations" refers, for example, to the addition and multiplication of numbers, while "floating point" stands for the number representation typically used in IT. Thus, FLOPS indicates how many calculations a computer can perform per second. The larger this number, the more powerful the computer.
For the determination of the FLOPS, the total computing power of main memory,
processor, bus, compiler etc. is measured. If a computer has a computing power of
one TeraFLOPS (TFLOPS), this computer can perform 1,000,000,000,000 opera-
tions per second. The supercomputer Frontier shown in Fig. 2.8 achieves a comput-
ing power of around 1,102,000 TeraFLOPS. Thus, this computer was the most
powerful supercomputer worldwide in November 2022.
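The order of magnitude behind these figures can be made tangible with a small calculation. The following Python sketch converts Frontier's roughly 1,102,000 TeraFLOPS into FLOPS and ExaFLOPS; the comparison value for a typical desktop processor is a rough, purely illustrative assumption.

TERA = 10**12  # one TeraFLOPS = 10^12 floating point operations per second
EXA = 10**18   # one ExaFLOPS = 10^18 floating point operations per second

frontier_tflops = 1_102_000            # Frontier, November 2022 (see Fig. 2.8)
frontier_flops = frontier_tflops * TERA

print(f"Frontier: {frontier_flops:.3e} FLOPS")           # about 1.1 * 10^18
print(f"Frontier: {frontier_flops / EXA:.2f} ExaFLOPS")  # i.e. roughly 1.1 ExaFLOPS

# Rough, illustrative comparison: a desktop CPU in the order of 100 GigaFLOPS
desktop_flops = 100 * 10**9
print(f"Frontier corresponds to roughly {frontier_flops / desktop_flops:,.0f} such desktop processors")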
Such supercomputers are mainly used in research and development. Here, high-
performance computers for science and business enable simulations, data evalua-
tions and more reliable forecasts. Supercomputers can tackle problem areas that conventional computers could not solve, or could solve only with difficulty. This includes com-
plex climate models. Consequently, it is also an exciting question in which countries
the most supercomputers can be found. Most of the 500 most powerful supercom-
puters worldwide are today located in Asia (cf. Fig. 2.9; [22]). China leads this list
unchallenged with 173 supercomputers. The USA follows with 149 supercomput-
ers. After Japan (32), Germany is in 4th place with 26 supercomputers. In total,
only four European countries are among the top 10 of supercomputer locations.
Even if all supercomputers stationed in Europe are added together, the resulting 98 does not even reach the level of the USA, let alone the equipment in China.
Fig. 2.9 Locations of the 500 most powerful supercomputers worldwide by countries—2021 (data source: [22]). The leading locations are China (173 systems), the United States (149), Japan (32), and Germany (26); the remaining systems are spread across further European, Asian, and other countries.

An important prerequisite for the described technologies to unfold their (positive) effects is a powerful and as comprehensive as possible mobile network. Here, the 5G mobile standard is of particular importance. "5" stands for the 5th generation. This new standard can be described by the following characteristics (cf. [23]):

• Significantly faster data transmission compared to 4G networks (transmission rates of up to 10 gigabits/second instead of 500 megabits/second; a 20-fold increase)
• Use of higher frequency ranges
• Increase in frequency capacity allows for a higher data throughput
• Real-time transmission supports the control of 100 billion mobile devices worldwide
• Extremely short latency times (in terms of reaction, delay or transmission time) of only one to three milliseconds even over greater distances (humans need about ten times this time for the transmission from the eye to the brain); the latency in the LTE network (LTE or Long Term Evolution, the predecessor mobile standard) is about 50 ms
• Significant reduction in energy consumption in data transmission com-
pared to 4G
• 5G can connect a million devices per square kilometer; ten times more than the
4G standard

The speed of 5G becomes clear in the following example. If the content of a DVD
(4.5 GB) is to be downloaded, the following time requirements result from the dif-
ferent mobile standards:

• 2G (at 0.25 megabits/s): 1 day, 16 h
• 3G (at 42.2 megabits/s): 14 min, 33 s
• 4G (at 500 megabits/s): 1 min and 13 s
• 5G (at 10,000 megabits/s): 3.7 s
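The arithmetic behind these values is straightforward: file size in bits divided by the transmission rate. The following Python sketch reproduces the calculation; decimal units are assumed, so small rounding differences to the figures above are possible.

def download_time_seconds(size_gb, rate_mbit_s):
    # Time to transfer size_gb gigabytes at rate_mbit_s megabits per second (decimal units)
    size_megabits = size_gb * 1000 * 8  # 1 GB = 1,000 MB = 8,000 megabits
    return size_megabits / rate_mbit_s

dvd_gb = 4.5
for standard, rate in [("2G", 0.25), ("3G", 42.2), ("4G", 500), ("5G", 10_000)]:
    seconds = download_time_seconds(dvd_gb, rate)
    hours, rest = divmod(seconds, 3600)
    minutes, secs = divmod(rest, 60)
    # 2G yields 40 h, i.e. 1 day and 16 h; 5G finishes in under 4 s
    print(f"{standard}: {int(hours)} h {int(minutes)} min {secs:.1f} s")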
The low latency times of 5G networks are also crucial for self-driving cars. Here
data transmission and reaction must occur in real time to enable safe navigation.
Decisions must be made in fractions of a second. In humans, the reaction time from
recognizing a danger to braking is about one second. In this time, a vehicle traveling
at 100 km/h covers approximately 28 m. With a latency time of one millisecond, an autonomous vehicle can react 1000 times faster than a human; the braking maneuver is then initiated after less than three centimeters of travel.
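The distances mentioned here follow directly from speed multiplied by reaction time. The short sketch below recalculates them; it is a plain illustration of the arithmetic, not a traffic-engineering model.

def distance_during_reaction(speed_kmh, reaction_time_s):
    # Distance in meters covered at speed_kmh during reaction_time_s seconds
    speed_m_per_s = speed_kmh / 3.6
    return speed_m_per_s * reaction_time_s

# Human driver: roughly one second from recognizing a danger to braking
print(f"Human (1 s):    {distance_during_reaction(100, 1.0):.1f} m")          # ~27.8 m
# Autonomous vehicle on a 5G link: latency in the order of one millisecond
print(f"Vehicle (1 ms): {distance_during_reaction(100, 0.001) * 100:.1f} cm") # ~2.8 cm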
Low latency times and high connection speeds are also important for applica-
tions of the Industrial Internet of Things. Here, guaranteed reaction times are particularly important, not just peak performances achieved only occasionally. The faster data transmission of 5G networks, in combination with AI and Edge Computing, enables reliable communication between machines. This creates the conditions
for automated and autonomous production. The 5G-based connectivity is an impor-
tant prerequisite for the further digitization of the industry. Through virtual network
layers, mobile robots can be flexibly used in production.
A real-time data communication based on 5G speed will play a key role in
more and more use cases. The introduction of 5G will not only enable a real-time
machine-to-machine communication, but also real-time human-to-machine
communication. 5G provides the basis for the development of new business mod-
els. New forms of interaction are supported, for example in the field of Industry 4.0
and in medicine. An important prerequisite for this is not only the network infra-
structure but also the development of common data transmission standards
(cf. [23]).
Note Box
With the use of 5G technology, it's not just about speed, but also about the availability and reliability of the networks. Together, this creates the conditions for improving existing business models as well as for developing new ones.
Artificial Intelligence and 5G support each other.
5G provides high transmission rates even with large amounts of data. AI enables the rapid evaluation of high data volumes, often in real time. Thus, AI-supported insights can be immediately incorporated into the processes.

A key question with any new technology, however, is: Will this technology be accepted? This question must be addressed by companies both internally and externally and specified as follows:

• Will the company’s own executives and employees accept the new technology
and use it to create value?
• Will the company’s own customers as well as other important stakeholders (e.g.
cooperation partners, societal interest groups) welcome the AI technology and
its use?
Fig. 2.10 Technology acceptance model (source: based on [25], p. 188). The model links the influencing factors experience, voluntariness, subjective norm, image, relevance (for the profession), quality of the output, and unambiguity of the results with the perceived benefit and the perceived user-friendliness; these shape the attitude towards use, which drives the behavioral intention to use and, finally, the actual technology use.

To answer these questions, the Technology Acceptance Model (TAM) is used. Based on this model, it can be determined
why people use a (new) technology or not (cf. fundamentally [24]). The concept
shown in Fig. 2.10 represents a further development of the initial model and is based
on Venkatesh and Davis [25].
According to the Technology Acceptance Model, the willingness to use a (new)
technology is based on two central factors:

• Degree of perceived usefulness of a technology: Perceived usefulness refers to a person’s subjective feeling that the use of a technology brings benefits.
• Degree of perceived user-friendliness of a technology: Perceived user-friendliness describes how much or how little (additional) effort is involved in learning and using a new technology.

These two factors influence the attitude towards using the technology. This in
turn affects the intention to use it. This ultimately determines the actual deploy-
ment of the technology.
As Fig. 2.10 illustrates, other influencing factors affect the process of technol-
ogy acceptance. These factors include the subjective norms of the (potential)
users. These are specified as perceived social pressure to perform or not perform a
certain behavior. The respective image that a user builds when using the technology
is also behaviorally relevant. In addition, the (professional) relevance, the quality
and the clarity of the results achieved affect the technology acceptance. If there is
a lack of quality and clarity of the results, this reduces the willingness to use the
technology. Additionally, the previously gained experiences and the perceived
degree of voluntariness of use affect acceptance.
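To illustrate the structure of these relationships (not the empirically estimated model itself), the following toy Python sketch combines illustrative factor ratings into a usage-intention score. All weights, scales, and values are invented purely for demonstration and are not part of the TAM literature.

from dataclasses import dataclass

@dataclass
class TamAssessment:
    # Ratings on an illustrative scale from 0 (very low) to 5 (very high)
    perceived_usefulness: float
    perceived_ease_of_use: float
    subjective_norm: float
    relevance: float
    output_quality: float

    def attitude(self):
        # Toy weighting: usefulness and ease of use shape the attitude towards use
        return 0.6 * self.perceived_usefulness + 0.4 * self.perceived_ease_of_use

    def usage_intention(self):
        # Toy weighting: attitude plus social and task-related influences
        return (0.5 * self.attitude()
                + 0.2 * self.subjective_norm
                + 0.2 * self.relevance
                + 0.1 * self.output_quality)

ai_in_an_average_company = TamAssessment(perceived_usefulness=3, perceived_ease_of_use=2,
                                          subjective_norm=2, relevance=2, output_quality=3)
print(f"Illustrative usage intention: {ai_in_an_average_company.usage_intention():.2f} of 5")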
If one applies the Technology Acceptance Model to Artificial Intelligence in an average company, the result, as of today, is often as follows:

• Experience: Rather low and therefore inhibiting acceptance.
• Voluntariness: Usually high, which is why many companies have not yet dealt with Artificial Intelligence at all. High voluntariness generally promotes acceptance, but with AI it often leads to not dealing with the topic at all.
• Subjective norms: The use of AI is (still) often viewed critically in our culture. Those who express concerns about the use of AI can often (still) score points today. Those who use AI, on the other hand, sometimes have to expect attacks. Overall, this does not promote the acceptance of this technology.
• Image: Horror stories about the use of AI are still often told. It is more often reported what has gone wrong with AI support than where AI can already provide great benefits today. This is also due to the fact that many users do not (or cannot) recognize the AI applications they use as such. Therefore, AI users outside of one’s own immediate environment are often met with skepticism. This also has a negative impact on the acceptance of AI.
• Relevance: Many decision-makers and employees do not yet recognize how relevant the topic is for their own industry, their own company, and their own employability. The rule here is: where no relevance is seen, there is no need to deal with a new technology; it is still “voluntary”!
• Quality of the output: As already hinted, many AI reports in the popular media focus on “AI failures”. It is often extensively reported what nonsense AI has done again. The everyday success stories about AI-supported solutions, on the other hand, make it into the media less often. This also has a negative impact on the acceptance of AI. Here, ChatGPT, DALL-E and many other applications of generative AI have triggered a rethinking process since the end of 2022.
• Clarity/unambiguity of the results: The results of AI systems are often very clear when it comes to concrete decision-making processes. However, with AI there is often no way to recognize in detail how these results came about. The challenge of “Black Box AI” was already discussed in Sect. 1.4.3. The exact insights on which the AI based a certain result are not always comprehensible, even for the greatest AI specialists. This lack of comprehensibility of the results is also a major brake on acceptance.

Looking at this situation, it is easy to see what comprehensive educational work still needs to be done to create broad acceptance of Artificial Intelligence. This concerns the necessary societal acceptance as well as the acceptance in companies themselves. Often, there is a lack of balanced knowledge about Artificial Intelligence, in politics, business, and society alike.
An up-skilling offensive is needed for broader AI use to close the serious knowledge gaps that still exist in many companies (see also the results of the Bitkom study in Sect. 1.5). Only then can reservations be reduced and the AI potentials be recognized on a broad front, and not just in management.
This is an indispensable prerequisite for setting the course much more strongly towards a more comprehensive engagement with AI, starting today.
This also involves the financial and human resources that are indispensable for the
use of AI.
2.6 Necessity of Sustainable Corporate Governance

On a slightly different level, there is another driver of artificial intelligence that is not recognized at first glance. This is about the promotion of sustainable corpo-
rate governance. The associated requirements are to be clarified using the so-called
ESG criteria. To understand this AI driver, the content of the ESG criteria itself
must first be conveyed. We can increasingly read something about the ESG criteria
in the opinion-forming media. These criteria are often mentioned in the context of
financial investments—but increasingly also discussed as a benchmark for
responsible corporate governance in the twenty-first century. But what is this all
about? And what relevance do these criteria have for my own company or for entre-
preneurial action?
The letters E, S and G define further areas of responsibility for companies to
meet the requirements of the twenty-first century (cf. in depth [26]). Today, it is
no longer sufficient to run a company “only” profitably in the long term. However,
an economically healthy company is still a prerequisite for surviving in the market
in the long term. Today, however, additional requirements must be met. These new
requirements are increasingly being made by investors—and gradually also by cus-
tomers. Companies that have their money supply cut off or whose customers
migrate cannot survive in the long term. Consequently, it is about meeting further
requirements in addition to a legitimate pursuit of profit. These are described with
the ESG criteria (see Fig. 2.11).

Fig. 2.11 ESG criteria as a requirement profile for companies and corporate management:
• Environment: reducing the impact of business activities on climate change; protection of natural resources; increasing the efficiency of resource use; implementation of a circular economy; use of renewable energies; manufacturing sustainable products; use of sustainable technologies and processes; sustainable building management; sustainable water management; sustainable mobility and logistics concepts
• Social: respect for human dignity and compliance with human and employee rights; safe and ergonomic design of workplaces; non-discrimination; diversity; "fair" treatment and payment of employees throughout the entire supply chain; comprehensive training and development opportunities for employees; non-cooperation with authoritarian governments; assumption of social responsibility beyond the company's core performance; fair treatment of customers
• Governance: publication of the company's relevant values and guidelines; compliance with the relevant laws and regulations; legally compliant payment of taxes; transparent documentation of the processes for managing and controlling the company; availability of easily comprehensible remuneration and promotion guidelines; implementation of communication geared towards transparency, internally and externally; fairness in competition; independent control bodies

• “E” for Environment in the sense of environmentally compatible/sustainable action
• “S” for Social in the sense of behavior that not only corresponds to the aspects of
occupational safety and health, but also includes social commitment
• “G” for Governance in the sense of sustainable corporate governance

ESG represents verifiable criteria, which increasingly influence or should influence investment decisions. The aspect of “verifiability” is important here, because
greenwashing should be avoided. Greenwashing occurs when companies commu-
nicatively put on a “green cloak”—but their actions are still not environmentally or
socially compatible.
The “E” in the ESG criteria stands for Environment. In
relation to this criterion, it is examined to what extent a company’s activities affect the
environment. This includes, for example, the resource consumption overall and the
efficiency of this consumption. In addition, the emission of greenhouse gases and
other pollutants is examined. Significant reductions in resource use can be achieved
through the use of artificial intelligence in manufacturing. This also applies to the use
of materials in the products themselves and in the use of products and services.
Against this background, it is necessary to examine how the use of artificial
intelligence in the manufacturing and usage phase of products and services can
support. To do this, convincing answers to the following questions must be found:

• How can AI-supported measures contribute to the reduction of harmful emissions?
• How can AI increase the efficiency of resource use?
• How can artificial intelligence promote concepts of the circular economy (key-
word Circular Economy)?
• How can AI contribute to the production of sustainable products (sustainabil-
ity regarding the materials used or sustainability of the product itself or sustain-
ability of product use)?
• How can a sustainable building management be supported?
• How can a sustainable water management be established (including water sav-
ing, reuse/treatment, environmentally friendly wastewater treatment)?
• In what form can artificial intelligence contribute to the development of sustain-
able mobility and logistics concepts?

Due to its great importance, the core idea of the circular economy should be deepened
here. The circular economy is the counterpart to the linear economy. The linear econ-
omy is also referred to as throwaway economy. The motto here is: From cradle to
grave! The dominant pattern here is: take, manufacture, consume, throw away—and
just don’t think about it! This model assumes an inexhaustible amount of cheap, easily
accessible resources. This approach has been the dominant economic system for many
centuries. In such an economic system, the processed raw materials or the products
produced with them are mostly either deposited or burned after their use.
In a circular economy, on the other hand, a renewal-oriented approach is pursued, a regenerative system, so to speak. The motto here is: From cradle to cradle! This is a resource-saving production and consumption
model. Raw materials and products should be used as long as possible, reused,
repaired or overhauled (keyword: refurbishing) or reprocessed (keywords: refabrication or remanufacturing). Recycling should also take place, but only when the lifespan can no longer be economically extended. For this
purpose, raw material and usage cycles should be closed. In addition, it is about
developing more sustainable designs and ending the built-in obsolescence of prod-
ucts. The technical lifespan of products, limited by artificial aging, should no longer
motivate customers to buy new. Therefore, a “right to repair” should be created. It is
also about reducing waste in production and consumption (see in depth [26]).
To this end, the European Parliament adopted a resolution on the new Action
Plan for the Circular Economy in February 2021. It calls for additional measures.
The EU’s goal is to achieve a carbon-neutral, ecologically sustainable, toxin-free
and fully circular economy by 2050. To this end, stricter recycling regulations and
more binding targets for the use and consumption of materials are to be defined by
2030 (see [27]). Therefore, it is highly recommended for companies to consider
compliance with ESG criteria when using AI.
AI systems can also support the examination of the extent to which your company
is currently dealing with the topic of “Social” and consequently with the social and
societal aspects of its own activities. It is particularly important to monitor your own
supply chain to identify possible violations of workers’ rights and the demand for
non-discrimination. These standards are reflected, for example, in the OECD
Guidelines for Multinational Enterprises. The ILO Core Labour Standards and
the ten Principles of the UN Global Compact also formulate corresponding require-
ments. These are also laid down in the ISO 26000, a guide to the social responsibility
of organizations. The Supply Chain Due Diligence Act also defines additional
responsibilities for companies (see in depth [26]).
The AI-supported examination of the extent to which your company practices
responsible corporate governance—keyword “Governance”—is also challenging.
For example, an AI-supported examination can be carried out to see whether the
company complies with the relevant laws and regulations (Compliance).
Recognized standards have already been defined for this area as well. These include
the German Corporate Governance Code. This set of rules mainly contains rec-
ommendations and suggestions for good corporate governance for listed compa-
nies. The already mentioned UN Global Compact also contains relevant
regulations on governance, here with a focus on corruption prevention (opposition
to all forms of corruption, including extortion and bribery). The ISO 37000 defines
international standards for “Good Governance”. The G20/OECD Principles of
Corporate Governance support policy makers in evaluating and improving the
legal, regulatory, and institutional framework of corporate governance. Further
guidance is provided by the German Sustainability Code, as it defines a cross-
sector transparency standard for reporting on corporate sustainability activities.
Guidelines for the preparation of sustainability reports are also defined by the
Global Reporting Initiative.
The depth of engagement with the various criteria illustrates their relevance—for
the long-term survival of companies and humanity alike. Since strategic investors are interested in the long-term survival of companies, ESG criteria are increasingly at the center of investment decisions. To facilitate investment decisions, sustainability rating agencies are used. In contrast to rating agencies like Fitch, Moody's and Standard & Poor's, a sustainability rating is not created on behalf of the issuers,
but on behalf of the investors.
Customers, too, are asking, albeit still at a low level, for “fair and sustainable offers”. Here, however, the desire for a “clear conscience” is far ahead of actual purchasing behavior. Otherwise, providers like Primark and SheIn, which stand for fast fashion, would not continue to achieve great successes. But a rethinking process
has begun, albeit very slowly. In the development of sustainable systems, processes,
products and services on the corporate side, AI systems can provide important sup-
port (see in depth [26]).

Note Box
Companies face the challenge of building sustainable value chains that con-
nect economy, ecology, social aspects and convincing governance. Therefore,
orientation towards ESG criteria is no longer just a nice-to-have element. It
has become a “must-have”!
Consequently, when using AI, it should always be checked whether it can
contribute to the fulfillment of the ESG criteria and other requirements of
sustainable corporate management.
Digital transformation is an important enabler of ecological
transformation.

Think-Box: Questions You Should Ask Yourself!


• What effects of an exponential development are emerging in my industry?
• Which (potential) competitors are already benefiting from it today?
• What solutions should we consider?
• How does digitization and dematerialization of products, services and pro-
cesses affect my industry and my company?
• What opportunities and risks result from this?
• To what extent can we already access relevant data streams today?
• What additional data sources should we tap into for ourselves?
• Are we fully exploiting the legal possibilities of data usage?
• Do we have the competencies to competently evaluate Small and Big Data?
• What new technologies should we tap into for our company?
• To what extent are our (potential) competitors already active here?
• What is the acceptance of AI technologies in my company, in our industry,
and among our customers?
• What contribution can Artificial Intelligence make to achieving sustainable
corporate management?
• Who in my company is systematically dealing with these questions?
2.7 Summary

• The effects of exponentiality will continue to massively drive the performance of AI applications in the coming years (keyword Moore’s Law).
• The digitization and dematerialization of products, services, and processes
offer increasingly comprehensive opportunities for Artificial Intelligence.
• The networking of objects, data, processes, and living beings creates new
fields of application for AI systems.
• Comprehensive data pots—characterized by the terms Volume, Velocity,
Variety, Veracity, and Value—provide the necessary fodder for AI developments.
• The S-curve ideally describes the acceptance of new technologies.
• Many new technologies of Gartner’s Hype Cycle for Artificial Intelligence support AI applications and will successively open up new areas of use.
• Gartner identifies four different AI trends: data-centric AI, model-centric AI,
application-centric AI and human-centric AI.
• The number and performance of installed supercomputers in the various coun-
tries and regions affect the achievable AI applications.
• 5G networks are an important prerequisite for many AI applications.
• To determine the acceptance of new technologies, the Technology Acceptance
Model is used.
• When using AI, it is consistently necessary to check whether requirements of sustainable corporate management can be fulfilled better, more easily, or at all.

References
1. Brynjolfsson, E., & McAfee, A. (2014). The second machine age: How the next digital revolution will change all our lives. W. W. Norton.
2. Hohensee, M. (2015). Gordon Moore’s machines could replace more than half of the jobs. Retrieved June 26, 2018, from http://www.wiwo.de/unternehmen/it/gordon-moore-maschinen-koennten-mehr-als-die-haelfte-der-jobs-ersetzen/11805510.html.
3. Federal Government. (2021). Germany’s first quantum computer in operation. Retrieved December 28, 2021, from https://www.bundesregierung.de/breg-de/suche/kanzlerin-quantencomputer-1927088.
4. FZ Jülich. (2022). Europe’s first quantum annealer with more than 5000 qubits started in Jülich. Retrieved April 3, 2023, from https://www.fz-juelich.de/de/aktuelles/news/pressemitteilungen/2022/2022-01-17-juniq-europas-erster-quantencomputer-mit-5000-qubits.
5. Kreutzer, R. T., & Klose, S. (2023). Metaverse compact. Springer Gabler.
6. Rusche, C. (2022). The significance of E-commerce for retail and Christmas business. IW-Short Report, No. 100, Cologne.
7. Kreutzer, R. T. (2021). Toolbox for digital business. Leadership, business models, technologies and change management for the digital age. Springer Gabler.
8. IOT Analytics. (2021). State of IoT 2021: Number of connected IoT devices growing 9% to 12.3 billion globally, cellular IoT now surpassing 2 billion. Retrieved December 28, 2021, from https://iot-analytics.com/number-connected-iot-devices/.
9. Kreutzer, R. T. (2021). Customer dialogue online and offline. The big 1x1 of customer acquisition, customer retention and customer recovery. Springer Gabler.
10. BMWK. (2023). The German Gaia-X Hub. Retrieved April 3, 2023, from https://www.bmwk.de/Redaktion/DE/Dossier/gaia-x.html.
11. Statista. (2021). Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2025. Retrieved December 29, 2021, from https://www-statista-com.ezproxy.hwr-berlin.de/statistics/871513/worldwide-data-created/.
12. Hansell, S. (2008, November 6). Zuckerberg’s law of information sharing. The New York Times.
13. Kreutzer, R. T. (2009). Practice-oriented dialogue marketing. Concepts—Instruments—Case studies. Springer Gabler.
14. Zuboff, S. (2018, September 29). There is an unbearable longing in many of us. Der Spiegel, 68–70.
15. Zuboff, S. (2018). The age of surveillance capitalism. Campus.
16. Gartner. (2022). What’s new in artificial intelligence from the 2022 Gartner Hype Cycle. Retrieved April 13, 2023, from https://www.gartner.com/en/articles/what-s-new-in-artificial-intelligence-from-the-2022-gartner-hype-cycle.
17. Gartner. (2021). Gartner identifies four trends driving near-term artificial intelligence innovation. Retrieved April 14, 2023, from https://www.gartner.com/en/newsroom/press-releases/2021-09-07-gartner-identifies-four-trends-driving-near-term-artificial-intelligence-innovation.
18. Gartner. (2021). The 4 trends that prevail on the Gartner Hype Cycle for AI, 2021. Retrieved April 14, 2023, from https://www.gartner.com/en/articles/the-4-trends-that-prevail-on-the-gartner-hype-cycle-for-ai-2021.
19. Syverson, C. (2013). Will history repeat itself? Comments on ‘Is the information technology revolution over?’. International Productivity Monitor, 25, 37–40.
20. Kreutzer, R. T., & Land, K.-H. (2016). Digital Darwinism—The silent attack on your business model and your brand (2nd ed.). Springer Gabler.
21. Statista. (2022). Computing power of the most powerful supercomputers worldwide in November 2022. Retrieved April 13, 2023, from https://de-statista-com.ezproxy.hwr-berlin.de/statistik/daten/studie/193104/umfrage/rechenleistung-der-leistungsstaerksten-supercomputer-weltweit/.
22. Statista. (2022). Locations of the 500 most powerful supercomputers worldwide in November 2021. Retrieved January 4, 2022, from https://de-statista-com.ezproxy.hwr-berlin.de/statistik/daten/studie/36482/umfrage/anzahl-der-supercomputer-weltweit-nach-laendern/.
23. Deutsche Telekom. (2023). What is 5G? Basic knowledge about the mobile standard. Retrieved April 14, 2023, from https://www.telekom.com/de/konzern/details/was-ist-5g-grundwissen-zum-netz-der-zukunft-542352.
24. Davis, F. D., Granic, A., & Marangunic, N. (2022). The technology acceptance model: 30 years of TAM. Springer.
25. Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204.
26. Kreutzer, R. T. (2023). The path to sustainable corporate management. Springer Gabler.
27. European Parliament. (2021). Circular economy: Definition, importance and benefits. Retrieved January 3, 2022, from https://www.europarl.europa.eu/news/en/headlines/economy/20151201STO05603/circular-economy-definition-importance-and-benefits.
Part II
Fields of Application of Artificial Intelligence

“Make it simple but significant!” Don Draper

Before delving into a multitude of AI application areas, an important aspect needs to be clarified. Current AI systems are always focused on specific tasks. This means that the artificial intelligence that was used to win against the Go world champion would fail miserably at chess and also at Jeopardy. Such systems are thus far from comprehensive human intelligence.
These limitations need to be taken into account when it is subsequently shown
how potential use cases can become real business cases. These should not only be
innovative, but also contribute sustainably to business success. The specific require-
ments of various industries will be addressed. This can provide important impulses
for the analysis of your own business processes.
It will become clear how AI integration into products and services can take place.
Also, processes of procurement, innovation/creation, production, distribution, and
marketing/sales, communication, etc. can not only gain efficiency through the inte-
gration of artificial intelligence, but also provide new benefits for customers.
Exciting fields of application are also offered in the medical sector, in security tasks,
and in the field of education. However, it would almost be easier to define which sectors will not benefit from AI use or will at least not be affected by it.

Note Box
The limits of future AI use are not even remotely recognizable today.
Consequently, it primarily depends on your creativity and responsibility to
recognize the emerging opportunities early and actively use them.

The following will highlight exciting fields of application of artificial intelligence in various industries and different business functional areas. Since artificial intelligence is a cross-cutting technology, the use cases do not follow traditional industry or functional boundaries. Instead, they lead to networking across industry and functional boundaries. Nevertheless, an attempt was made to make a sensible and reader-friendly assignment to both industries and business functions.
3 Production, Maintenance, Mobility, Transport, Agriculture

Summary A variety of AI solutions can be used in the areas of production, maintenance, mobility, transport, and agriculture. Every company is called upon to identify the specific and particularly rewarding fields of application for itself. Exciting example applications are discussed in this chapter.

3.1 Production and Maintenance Sector

3.1.1 Trend Towards Smart Factory

Before discussing the use of artificial intelligence in the production sector, central developments in production are presented first (see Fig. 3.1). It becomes clear that artificial intelligence plays a major role in many areas in mastering the described challenges.
At the center of AI application in the production environment is the so-called
Smart Factory (an “intelligent”, networked factory). In
Germany, the term Industry 4.0 was coined for this. The core is about a networked
computerization of manufacturing technologies. A key feature of the networked
factory is the data-driven optimization of end-to-end processes.

Note Box
The ultimate goal of a Smart Factory is the development of a self-optimizing
factory,

• That independently adapts to changes in the supply (prices and quantity)
• That optimally utilizes available production capacities (this is based on a company-wide perspective; possibly, capacities from other providers are also included)
• That anticipates changes in (global) demand and translates them into requirements for resource procurement and production, and
• That also independently uses changes in (global) demand for the further development of products and services

This makes one thing clear:
Whoever sees the core of Industry 4.0 solely in efficiency increases and cost reduction neglects many opportunities that the Industrial Internet of Things opens up in connection with artificial intelligence. This also involves
the creation of new products and services as well as the development of
innovative business models (see in depth [1]).

Fig. 3.1 Important changes in production (trends, challenges, and enablers):
• Trend: small batch sizes, greater variety of offerings, trend towards individualization of offerings. Challenge: increasing complexity in production with limited predictability of demand. Enabler: networking and automation of the value chain across company boundaries (IIoT).
• Trend: shorter product life cycles, faster product launches, high flop rates. Challenge: disruptive developments in production, integration of robots, high software/hardware costs. Enabler: software for controlling production via robotics, expertise in the use of robots, AI.
• Trend: cost and efficiency benefits in automation, use of collaborative robots. Challenge: balancing the use of humans and robots for cooperative use. Enabler: professional human resource management, development of collaborative robots, AI.
• Trend: increasing demands on employee expertise to master Industry 4.0. Challenge: shortage of highly qualified specialists (e.g. data scientists, AI specialists). Enabler: investment in the qualification of the company's own employees, targeted acquisition of specialists.

With the Smart Factory, the achievement of the following developments is in focus:

• The boundaries between product/service design, supply chain management, production processes, and sales/demand management are being torn down.
• Virtual tracking of facilities and processes as well as of products and services becomes possible.
• Relevant information, along the entire supply chain and value chain up to demand development, is available in real time, visually prepared and with action impulses.
• Real-time collaboration with suppliers is intended to identify and avoid supply chain risks at an early stage.
• A rationalization of business processes as well as an optimization of demand and supply becomes possible, with high flexibility.
Smart manufacturing transforms businesses into proactive, autonomic organizations that predict and fix potentially disruptive issues, evolve operations and delight customers, all while increasing the bottom line ([2], p. 3).
This sentence describes at least the ideal; achieving it requires hard work, a large investment, and a team of highly qualified specialists.
The global market for intelligent manufacturing systems was estimated at
about $250 billion in 2022. It is expected to grow at an annual rate of 14.9% from
2023 to 2030 (see [3]). The main growth drivers are the increasing acceptance of
Industry 4.0 and a stronger government commitment to supporting industrial
automation. In addition, there is increasing automation in industrial manufactur-
ing processes. Moreover, it is necessary to manage the increasing complexity of
supply chains and additionally meet further requirements for sustainable corpo-
rate management.
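For a quick sense of what such a growth rate implies, the following Python sketch projects the market volume forward with a constant compound annual growth rate. This is a purely arithmetical illustration based on the figures above, not an independent forecast.

def project_market(start_value_bn, cagr, years):
    # Compound growth: start value (in billion USD) grown by cagr per year for the given number of years
    return start_value_bn * (1 + cagr) ** years

market_2022 = 250.0   # estimated market volume 2022 in billion USD (see above)
cagr = 0.149          # expected annual growth rate 2023-2030

for year in (2025, 2030):
    value = project_market(market_2022, cagr, year - 2022)
    print(f"{year}: ~{value:.0f} billion USD")  # roughly 380 (2025) and 760 (2030)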
What are the key factors driving the growth of this market so dynamically? These
include the following points:

• Dynamic penetration of Industry 4.0 solutions
• Increasing automation of industrial manufacturing processes
• Increasing support for industrial automation by governments
• Growing importance of compliance with legal regulations
• Increasing complexity of supply chains
• Rising demand for software systems that reduce time and costs
• Availability of relevant technologies:
• Programmable Logic Controller (PLC)
• Supervisory Control and Data Acquisition (SCADA)
• Enterprise Resource Planning (ERP)
• Distributed Control System (DCS)
• Human Machine Interface (HMI)
• Smart Product Solutions (SPS)
• Product Lifecycle Management (PLM)
• Manufacturing Execution System (MES)
• Machine Vision System, Control Devices, Robotics, Communication
Segments, Sensors

In many countries, initiatives for intelligent manufacturing are being established.


Through the collaboration of industrial companies, technology providers, laborato-
ries, and universities, work is being done on various Smart Manufacturing Platforms
and concepts of Smart Factories. The use of the above-mentioned technologies
(such as PLC, SCADA, ERP, DCS, HMI, PLM, and MES) allows data to be col-
lected and evaluated in real time in order to derive decisions from it. This can reduce
product errors, decrease downtime, and better plan maintenance phases. More and
more M2M connections (Machine-to-Machine) will support IoT applications.
Manufacturers around the world are relying on robotics and automation
technologies to optimize their own production in terms of productivity, quality, safety, and cost. This is also supported by the increasing expenditure for the use
of robots.
It is estimated that the market for the Industrial Internet of Things (IIoT) will
have the largest share of the market for intelligent manufacturing in the future. The
IIoT integrates various technologies to optimize processes. These technologies
include the use of sensors, RFID, industrial robotics, distributed control systems,
condition monitoring, smart meters, electronic shelf labels, cameras, smart beacons,
and yield monitoring systems. Satellite navigation systems (GNSS for Global
Navigation Satellite System) are used for localization. GNSS is the collective term
for the various satellite navigation systems used worldwide. These include GPS
(Global Positioning System) of the USA, GLONASS (Global Satellite Navigation
System) of Russia, Galileo of the European Union, and Beidou of the People’s
Republic of China. The use of the IIoT helps in analyzing the data collected through
various devices and processes. This supports effective decision-making.
The industrial communication will continue to make up the largest share of the
smart manufacturing market during the forecast period. A combination of compo-
nents, software, and standard protocols is used for this industrial communication.
These enable communication from human to machine and from machine to machine.
More efficient, reliable, and secure industrial communication increases operational
efficiency. At the same time, overall manufacturing costs can be reduced. It is neces-
sary for companies to invest substantial sums in smart manufacturing solutions that
enable real-time monitoring of the facilities. Manufacturers are primarily focused
on controlling component supply, improving product quality, shortening the pro-
duction cycle, and using efficient manufacturing processes.
The importance of Smart Factories is shown in Fig. 3.2. For this, 551–555
production managers, board members, and managing directors in German industrial
companies with 100 or more employees were surveyed each year from 2019 to
2021. It turns out that in 2021, a total of 62% of respondents said they were already
using special applications for Industry 4.0. In 2019, it was only 53% (see [4], p. 11).

Fig. 3.2 What is the importance of Industry 4.0 for your company? (data source: [4])

Fig. 3.3 What are the biggest obstacles to the use of Industry 4.0 in your company? (data source: [4]):
lack of financial resources 77%; data protection requirements 61%; IT security requirements 57%;
lack of specialists 55%; complexity of the topic 52%; susceptibility of the systems to faults 48%;
lack of exchange with other companies 29%; lack of time 25%; lack of external advice 14%;
lack of knowledge of best practice examples 12%; lack of standards 11%; availability of marketable
solutions 10%; lack of acceptance in the workforce 9%; insufficient degree of automation 0%;
uncertainty about economic benefits 0%
The answers to the question of what are the biggest obstacles to the use of
Industry 4.0 in companies are also informative (see Fig. 3.3). For this, 551 produc-
tion managers, board members, and managing directors in German industrial com-
panies with 100 or more employees were surveyed. In 2021, 77% of respondents
saw a lack of financial resources as an obstacle to the use of Industry 4.0 applications.
In second place, at 61%, are data protection requirements, which are increasingly appearing
as an innovation brake in more and more areas. The already mentioned requirements for IT
security rank third at 57%, followed by the lack of skilled workers at 55%. The complexity
of the topic is cited as an obstacle by 52% (see [4], p. 18).
The question of the goals of digitization in production is also important. To
determine these, a survey with 40 interviews in Germany was conducted by Roland
Berger [5]. In this study five industry segments were covered. These included tech-
nical consumer products, mechanical engineering, the basic materials industry,
automotive suppliers, and the automation industry. The respondents were predomi-
nantly medium-sized companies with up to five billion € annual turnover. The inter-
viewees were employed in top and middle management and in project management.
In addition, specialists from technology, IT, and production were interviewed.
Interesting results were provided by the question about the essential drivers of
digitalization in production (cf. [5], p. 7). In the past, three goals were at the center
of similar studies: efficiency increases, cost reductions and quality improve-
ments. Figure 3.4 shows that this is also the case in this new study. In the digitaliza-
tion of the factory, 64% of the respondents put process improvements in the first
place. This is followed by 44% aiming for cost reduction. A quality improvement
is pursued by 24%. Flexibility/agility in production, which the market often expects, is a
major driver for only 16%. The improvement of the supply chain follows with 12%. Surprisingly,
only 12% of the respondents consider innovation a driver for digitalization in manufacturing.

Fig. 3.4 Key drivers of digitalization in production (data source: [5], p. 7): process improvements 64%,
cost reduction 44%, quality improvement 24%, flexibility/agility 16%, improvement of the supply
chain 12%, innovation 12%

Note Box
The challenge for manufacturing companies is to make their manufacturing
processes more flexible and robust in order to increase resilience. In addition, the
resource consumption needs to be reduced. Also, high security require-
ments must be taken into account so that production systems cannot be shut
down by cyber attacks.
At the same time, it is necessary to improve the Overall Equipment
Effectiveness (OEE) or the total plant effectiveness. OEE is the gold stan-
dard for measuring manufacturing productivity. An OEE value of 100%
means that only good parts are produced—as quickly as possible and without
downtime: 100% quality (only good parts), 100% performance (as fast as pos-
sible) and 100% availability (no downtime).
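
To make the OEE logic from the note box tangible, the following minimal Python sketch calculates OEE from its three factors. All shift figures used here are invented for illustration and are not taken from the studies cited in this chapter.

def oee(availability: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness as the product of its three factors (each between 0 and 1)."""
    return availability * performance * quality

# Illustrative shift figures (assumed values):
planned_time = 480          # minutes in the shift
downtime = 47               # minutes of unplanned stops
ideal_cycle_time = 1.0      # minutes per part at rated speed
total_parts = 400
good_parts = 392

availability = (planned_time - downtime) / planned_time
performance = (ideal_cycle_time * total_parts) / (planned_time - downtime)
quality = good_parts / total_parts

print(f"OEE = {oee(availability, performance, quality):.1%}")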

3.1.2 Networking as a Core Element of a Smart Factory

In a visionary design, Smart Factories (manufacturing plants including quality control and
logistics systems) organize themselves and operate independently, i.e., without human
intervention. A crucial basis for this are the so-called cyber-physical systems.
Cyber-physical systems (CPS for Cyber-Physical Systems) connect information technology and
mechanical components via software. Such a cyber-physical system is a network of IT and
software components with mechanical and electronic parts that communicate with each other
via a data infrastructure. This communication or data exchange (including control or
steering) takes place over a network (usually the internet) in real time. The following
components of cyber-physical systems are to be distinguished (cf. [6]):

• Systems (including procurement, production, logistics, communication) and networked
objects and processes (components of the Internet of Everything) for control and
monitoring of processes, possibly using cloud services and Edge AI
• Sensors and wireless communication techniques (such as Bluetooth or RFID)
for registration and processing of data from the physical world
• Stationary and mobile facilities, devices and machines (e.g., robots)
• Actuators (such as drive elements) that act on the physical world (such as in the
control of production processes or as impulses for robots)
• Technologies for evaluating Big Data, as here often very large amounts of data
are generated in real time, e.g., for supporting quality control through AI systems
• Cyber Security Modules (to protect the processes against internal and/or exter-
nal cyber attacks)
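
How such components interact can be illustrated in a strongly simplified way. The following Python sketch shows a minimal sense-decide-act loop of a cyber-physical system; the sensor, the actuator, and the temperature threshold are purely hypothetical stand-ins and are not part of the source material.

import random
import time

TEMP_LIMIT_C = 85.0  # assumed threshold, purely illustrative

def read_temperature_sensor() -> float:
    """Stand-in for a real fieldbus/OPC UA read; returns a simulated value."""
    return random.uniform(70.0, 95.0)

def set_cooling_valve(open_percent: float) -> None:
    """Stand-in for an actuator command in a real cyber-physical system."""
    print(f"Cooling valve set to {open_percent:.0f}%")

# Minimal sense-decide-act loop of a cyber-physical system
for _ in range(5):
    temperature = read_temperature_sensor()
    # The "decision logic" is kept as a simple rule; in practice an AI model could sit here
    set_cooling_valve(100.0 if temperature > TEMP_LIMIT_C else 30.0)
    time.sleep(0.1)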

In the mentioned networking of objects and processes, two forms are to be distin-
guished. The internal networking links the internal components of the production
process at one location. The external networking includes the (independent) com-
munication with other smart factories or other networked plants and processes. This
allows—AI-supported—cross-site learning from the successes and failures of other
production units—ideally even in real time.

Note Box
Cyber-physical systems are the prerequisite for communication between real
and virtual components. They create the foundations for Smart Factories.

The effects this networking can have are shown in Fig. 3.5. A dramatic change in the
classic value chain is taking place here: it is being penetrated by a digital
(informational) value chain. The digital value chain can make a deci-
sive contribution to overcoming internal and external data silos through its compre-
hensive networking of various performance areas. This is achieved by building a
data and process ecosystem. In addition to merging internal information flows,
further information from the business environment, especially from suppliers
and customers, needs to be integrated through an outside-in process. This networking
makes it possible to respond much more quickly and comprehensively to necessary changes
on the supply and demand side. The digital value chain thus represents an informational
value chain—for networking internal and external information flows.

Note Box
Companies are called upon to build end-to-end data solutions—instead of data
and process silos. This can achieve more demanding efficiency and effectiveness
goals in production. In addition, it can succeed in generating additional corporate
value through more convincing performance. Furthermore, by networking the
physical with the digital value chain, new business models can also be developed.

Fig. 3.5 Networking of physical and digital value chain (the digital value chain spans information
networks with suppliers, production partners, other producers, sales and logistics service providers,
and customers across the core processes of procurement, production, sales, outbound logistics, and service)

Fig. 3.6 System of value chains (value chains of indirect suppliers, value chains of direct suppliers,
own value chain, value chains of direct customers, value chains of indirect customers)

If not only the own value creation stages are connected with each other but also
the value chains of further upstream and downstream companies are interlinked,
systems of integrated value chains (also Value Systems; see Fig. 3.6) are created.
The own value chain is linked on the input side with the value chain of direct and
indirect suppliers. On the output side, there is a networking with the value chain of
direct and indirect customers. This form of networking is not only relevant in the
B2B area, but can also involve consumers. For example, smart home concepts
directly involve consumers.

Through comprehensive informational networking, additional efficiency and effectiveness
reserves of value creation can be exploited. This can succeed on both
the supplier and the customer side. This networking often also forms the core for the
development of new business models (see further [1]).
An evaluation and optimization of such complex and extensively networked
data and process flows cannot be managed without comprehensive AI systems. A
necessary classic programming would exceed the limits of feasibility in terms of
time, complexity and cost.

Note Box
Those who want to survive in the digital world as a manufacturer in the long
term must not be content with just digitizing their own production processes.
The manufacturers themselves or networks of manufacturers are called upon
to develop powerful platforms to achieve the networking outlined.
Otherwise, there is a risk that platforms of established digital corpora-
tions will push themselves between our company and our suppliers and cus-
tomers—and thus direct contact is lost. This would degrade our company to
an interchangeable service partner “without a face” towards business partners.
This is already the case in many consumer-related applications. Platforms
such as Airbnb, Amazon Marketplace, Booking.com, Check24, Flixbus, HRS,
TripAdvisor, Zalando and many others have positioned themselves between
the providers and demanders—as gatekeepers. As gatekeepers, platforms dominate the
customer interface, and the providers have to pay for access to suppliers and customers
as well as for the use of the data associated with their "own" business relationships!
Often only the gatekeepers have direct contact and are the only ones who can access all
relevant data, evaluate it, and use it for their own business strategy.

3.1.3 Technologies of Smart Manufacturing

The following discusses central technologies of Smart Manufacturing. Overall,
Smart Factories are about using digital technologies to significantly improve
productivity, quality, flexibility, and service. For this, the high connectivity mentioned
above first needs to be established. The Industrial Internet of Things is used to achieve
comprehensive collection and merging of data from various systems (see Sect. 2.3).
By expanding the Industrial Internet of Things, ecosystems of intelligent manufac-
turing can emerge. For this, various objects and processes are linked to achieve
certain goals more efficiently or effectively. A major task of the Industrial Internet
of Things is to network manufacturers with other manufacturers and further
service providers in order to advance intelligent manufacturing.
Given high requirements by the ESG criteria, by the Supply Chain Due
Diligence Act (see Sect. 2.6) and due to rising customer expectations, the
logistics required here no longer represents a mere cost factor. Intelligent
supply chains often form the backbone of business success today. For companies
and consumers, short production and delivery times as well as high transparency at
low costs are becoming increasingly important—combined with the desire for indi-
vidualization of the offer and a focus on sustainability. These challenges can only
be met through an intelligent and integrated supply chain management—
embedded in the concept of the Smart Factory. Through Smart Manufacturing,
environmentally friendly action can also be achieved be achieved. Thus, the CO 2
footprint of companies can be reduced and the process of a circular economy can
be supported. The more efficient use of energy, water and other resources, a smarter
waste management as well as optimized use of the manufacturing facilities contrib-
ute significantly to the development towards a sustainable corporate management
(cf. in depth [7]).

3.1.3.1 Production Control Center


These developments enable a production control center, as shown in Fig. 3.7.
Through the Industrial Internet of Things, a continuous flow of information can be
established, which connects previously separate business areas with each other. To
master these data streams, so-called Manufacturing Execution Systems (MES)
are used. An MES is an operating level of a multi-layered management system for
production processes that is close to the process itself. Here we also speak of a
production control system. The core of these systems is a direct networking of the
distributed systems; it requires the integration of different machines and plants as
well as the simultaneous processing of different data formats. This enables
(automated) process management in real time.
For this real-time process management it is necessary that various data sources
(operational data, machine data and personnel data) are recorded and evaluated in
the ongoing process. This comprehensive operational transparency is an indis-
pensable prerequisite for the optimization of processes and products. Only through
such transparency can potential production bottlenecks and the causes of quality
defects be detected early. Comprehensive transparency is also the prerequisite for
companies to react faster to further unexpected problems. To build this transparency
the data of the machines and plants as well as the people working there need to be
merged. The challenge is to transform the multitude of collected data into relevant
and practical insights. Only then can a data-supported product and process opti-
mization succeed.
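
A very reduced illustration of how such data sources can be merged for operational transparency is sketched below in Python using pandas; the machine events, order numbers, and the derived key figure are invented examples and not data from the text.

import pandas as pd

# Illustrative (invented) records from two of the data sources named above
machine_data = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-05-06 08:00", "2024-05-06 08:05", "2024-05-06 08:10"]),
    "machine": ["M1", "M1", "M1"],
    "state": ["running", "fault", "running"],
})
operational_data = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-05-06 08:00", "2024-05-06 08:10"]),
    "order_id": ["A-4711", "A-4712"],
})

# Attach the order that was active when each machine event occurred
merged = pd.merge_asof(machine_data.sort_values("timestamp"),
                       operational_data.sort_values("timestamp"),
                       on="timestamp")

# Simple transparency KPI: number of fault events per order
print(merged[merged["state"] == "fault"].groupby("order_id").size())
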
The use of artificial intelligence makes it possible to also evaluate very large—
structured, semi-structured and unstructured—data sets quickly. This makes it possible,
for example, to predict failures of machines and plants in very complex production
environments much faster than by conventional means. The future of such AI-supported
systems lies in the fact that not only the plants exchange information with each
other. The shelves, the workpieces, and the materials necessary for their completion
can also communicate independently with each other and with the manufacturing plants.
Thus, a supplied part or a product in the manufacturing process can carry the data
necessary for further processing in
machine-readable form itself. These can be provided on an RFID chip or as a QR code.
Based on this information, the further manufacturing steps are initiated independently.
With correspondingly configured manufacturing plants, a high degree of flexibility in
production can thus be achieved.

Fig. 3.7 Areas of action of smart factories—production control center, AI-supported automation and real-time management
• Production control center. Control levers: Industrial Internet of Things, Track & Trace,
remote control, predictive analytics, MES (Manufacturing Execution System), SCADA (Supervisory
Control & Data Acquisition). Achievable results: reduction of the necessary stock levels,
increase in OEE (Overall Equipment Effectiveness), easier control of processes.
• AI-supported automation. Control levers: MES, SCADA, cobots, additive manufacturing/3D printing,
quality analysis through industrial image processing. Achievable results: automation of repetitive
tasks, increasing flexibility in production, quality improvements, reduction of rejects.
• Real-time information management. Control levers: MES, SCADA, PLM (Product Lifecycle Management),
use of AR (augmented reality) and VR (virtual reality). Achievable results: acceleration of
decision-making processes, rapid escalation in the event of crisis developments.

3.1.3.2 AI-Supported Automation


In addition, a comprehensive Track & Trace system can be used to precisely trace
the delivery routes of materials and products (see Fig. 3.7). This allows a more
comprehensive form of visibility. So far, for many production processes it could only
be determined at batch level (i.e., for a batch of products) where, when, and by whom a
larger number of products were produced. In the future, such an assignment is possible
at unit level, i.e. for each individual product. This means that even after-
wards it can be determined to the second what conditions prevailed at the time of
manufacture of a particular product.
In addition, a remote monitoring is possible—across machine, plant, company
and national borders. This remote management of production plants ensures ongo-
ing monitoring of production processes and allows for quick countermeasures to be
initiated and possibly even automated in case of problems.
An indispensable field of action in the development and use of Smart Factories is
the industrial cybersecurity. It includes the security of industrial control systems
as well as automation and production processes. This also includes the protection of
plants and critical infrastructure from cyber attacks. The need for a strengthening of
industrial cybersecurity results from the fact that industrial control systems are par-
ticularly vulnerable to cyber attacks. These include programmable logic controllers
(PLCs), distributed control systems/process control systems (DCS for Distributed
Control System) and higher-level control and data acquisition systems (SCADA for
Supervisory Control and Data Acquisition).
SCADA refers to a computer system that monitors and controls technical pro-
cesses of automated manufacturing. To protect these systems from cyber attacks,
various technologies such as DDoS protection solutions (Distributed Denial of
Service), intrusion detection and prevention systems (Intrusion Detection and
Prevention), antivirus programs and products like gateway and network devices are
used. This is intended to ensure uninterrupted service delivery. This is not only
indispensable for utility companies (for water and electricity) and transport systems
(such as airports, trains and subways), but also for production processes.

• A—possibly cloud-based—data management and analysis creates the prerequisites for
various Business Analytics, also in the form of Predictive Analytics. In predictive
analytics, various AI applications are used to reduce prediction errors and, for example,
to avoid revenue losses due to a lack of delivery capability. In addition, qualified
forecasts can optimize inventory management and simultane-
ously increase efficiency in production. The overarching goal is to increase the
Overall Equipment Effectiveness. For this, it is indispensable to integrate the
control and business systems in order to execute and monitor orders company-­
wide. It is important that relevant context information is also included.

• Artificial Intelligence also contributes to the increase in overall equipment
effectiveness by exploiting the potential for increasing yields. This is achieved by
evaluating machine data in real time. In the linked evaluation of thousands of
variables across different machines, process stages, and production sites, the
insights relevant for optimization are gained and possible “yield killers” are
identified. AI-supported root cause analysis allows for quick decisions about
necessary optimization measures. The following effects can be achieved:
–– Reduction of test costs
–– Reduction of error/rejection rates through AI-based identification of errors
and yield killers before production starts or during the ongoing process
–– Reduction of production-related yield losses
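
As a very first, simplified illustration of how "yield killers" might be searched for in process data, the following Python sketch correlates invented process variables with a pass/fail outcome. Real AI-supported root cause analyses across thousands of variables are, of course, far more sophisticated than this correlation check.

import pandas as pd

# Invented process log: each row is one produced unit with process variables and yield outcome
df = pd.DataFrame({
    "oven_temp":  [210, 215, 230, 208, 228, 231, 212, 229],
    "line_speed": [1.0, 1.1, 1.0, 0.9, 1.2, 1.1, 1.0, 1.2],
    "good_part":  [1,   1,   0,   1,   0,   0,   1,   0],
})

# Correlation of each process variable with the pass/fail outcome as a first
# hint at potential "yield killers"
correlations = df.corr()["good_part"].drop("good_part")
print(correlations.sort_values())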

In addition, further concepts shown in Fig. 3.7 for AI-supported automation can
be used.
The global AI spending for automation applications is expected to double
between 2020 and 2024. The amount is expected to increase from $50 billion in
2020 to $110 billion in 2024 (see [8], p. 14). Of great importance here is the use of
industrial robots. Figure 3.8 shows that the sales of industrial robots will increase
significantly, especially in the Asia-Pacific region. In Europe, a less dynamic growth
is expected. Also in North America, the use of industrial robots will only slightly
increase overall (see [9]).
Advanced robotics opens up new fields of application and enables collaboration
between robots and humans. These Cobots or Collaborative Robots can work
“hand in hand” with humans. A prerequisite for this is that the robots recognize
humans and act “carefully” in collaboration with them (see Fig. 3.9). Collaborative
robots represent a large growth market. The market for Cobots will grow from
US$ 0.6 billion in 2020 to approximately US$ 1.5 billion by 2026. This corresponds
to an average growth rate of 16% per year (see [8], p. 13).

Fig. 3.8 Sales of industrial robots by region—worldwide (in million US$) (data source: [9])

Fig. 3.9 Use of collaborative robots changes human-to-machine interaction: (1) a human guides the
robot's learning process; (2) once the robot has learned procedures, it repeats them independently
and adapts them autonomously; (3) "natural" cooperation: the robot hands over objects to the human.
Various sensors (including cameras) feed the algorithms and enable human-machine interaction.
Collaborative Robots or Context-Aware Robots make "hand-in-hand" work between humans and
robots possible. Here, the robots can react independently to changes in the work
environment. This allows for the dissolution of the fenced-off "robot work areas", which
so far had to protect humans from robots and caused high costs. A significant contribution
to this is made by AI-based image recognition. It allows for efficiency-enhancing
collaboration between humans and robots, even for tasks that cannot be fully automated.
An entry-level model for a collaborative robot is provided by ABB under the
name YuMi. The name stands for “you and me”. This robot resembles a human
upper body. It stands on a trolley and has two arms, a camera for capturing its sur-
roundings, and suction grippers. With these, even the smallest parts can be moved.
YuMi can be used as a collaborative two-arm robot in production and work hand
in hand with human colleagues. The advantage of such collaborative robots lies in
the fact that they increase the flexibility of assembly processes. This way, even small
batch sizes can be produced in short cycles. Humans focus on the tasks in produc-
tion that require special human skills. “Colleague robot”, on the other hand, takes
over the recurring tasks with great endurance and precision (see [10]).
When Cobots react to touch, they are sometimes already referred to as Cobots
2.0. For this, parts of the robot are covered with a “digital skin”. This robot skin
allows it—similar to human skin—to react to temperature, touch, acceleration, and
proximity. Robots equipped with this skin can already be used in production and for
medical applications (see [11]). Bio-Tech refers to artificially grown and biologically
inspired muscles. Although this technology is still in development,
it could eventually lead to skin and tissue growing over the exterior of a robot, mak-
ing it pressure-sensitive. This would be a next step towards humanoid robots (see in
depth [12]).

An exciting addition or alternative to the classic training of robots is
demonstration-based training. The programming of robots (especially for production
processes) is a complex, time-consuming, and expensive task that requires a high degree
of expert knowledge. When tasks, processes, and/or the production environment change,
the robots used there must be reprogrammed. Here, so-called Wandelbots offer a solution,
also known as a No-Code Robotics Platform (cf. [13]).
With Demonstration-based Teaching robots can be programmed without writ-
ing new programs. In this way, users without programming knowledge can train
robots. The robots are shown how to perform a specific task. The control program
independently learns the necessary processes. This allows human experts to train
the robots even in dynamic and complex environments. The robot can learn new
tasks in a few minutes without the need for specialist knowledge. During the learn-
ing process, the robot’s sensors and possibly additional external sensors also capture
the characteristics of the environment necessary for the learning process (cf. [13]).
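
The basic idea of demonstration-based teaching can be reduced to recording and replaying poses. The following Python sketch is a purely conceptual illustration; the pose format and the class name are assumptions for this example and do not describe the Wandelbots product.

from dataclasses import dataclass, field
from typing import List, Tuple

Pose = Tuple[float, float, float]  # simplified: x, y, z of the tool center point

@dataclass
class TeachByDemonstration:
    """Records poses while a human guides the robot, then replays them."""
    recorded_path: List[Pose] = field(default_factory=list)

    def record(self, pose: Pose) -> None:
        # In a real system this would be fed by the robot's joint and force sensors
        self.recorded_path.append(pose)

    def replay(self) -> None:
        for pose in self.recorded_path:
            print(f"Moving to {pose}")  # stand-in for a motion command

teaching = TeachByDemonstration()
for pose in [(0.0, 0.0, 0.3), (0.2, 0.1, 0.3), (0.2, 0.1, 0.1)]:
    teaching.record(pose)
teaching.replay()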

Note Box
You should check to what extent it is possible to use demonstration-based
training when training robots.

Autonomous Robots download the programs required for production automatically from a
cloud library. This process can be triggered by the next workpiece to be
processed. Self-Programming Robots can—based on AI technologies—indepen-
dently develop their performance based on the experiences gained through “Self-­
Learning”. Here, robots performing the same tasks globally can compare and
optimize their performances at the push of a button or automatically. This enables
automated benchmarking with automatic optimization—across time, language,
culture, and national boundaries. These self-learning systems ensure that the expe-
riences gained in one place are automatically transmitted to other locations in a
global network to make corresponding optimizations there.
An important contribution to intelligent automation of processes is also made by
industrial 3D printing (Additive Manufacturing). This involves man-
ufacturing processes in which various materials are applied layer by layer to create
three-dimensional objects. The layer-by-layer use of material is computer-­
controlled. 3D printers already process a wide variety of liquid or solid materials
today. These include plastics, resins, carbon, graphite, ceramics, concrete, and spe-
cially prepared metals. 3D printers can produce three-dimensional objects based on
given dimensions and shapes without the need for special casting molds, for exam-
ple. Industrial additive manufacturing is not only used in the development of proto-
types, models, and tools today, but increasingly also for the production of spare
parts and small series. This video shows how houses can be built using 3D printing
today: https://www.youtube.com/watch?v=69HrqNnrfh4.

Fig. 3.10 3D printing and 4D printing: in 3D printing, material is processed by a 3D printer into
the static structure of an object; in 4D printing, smart material is printed into a static structure
that, triggered by a stimulus, changes into a dynamic structure

4D printing is a further development of 3D printing (see Fig. 3.10). In 4D printing,
materials are printed that can independently deform or move under certain
conditions. Unlike 3D printing, which only prints in three dimensions, 4D printing
allows the integration of a fourth dimension: time. The principle of 4D printing is
based on the use of materials that react to certain environmental conditions. These
include temperature, humidity, and light. The materials are therefore manufactured
in 4D printing in such a way that the printed objects change or move over time under
certain external conditions.
4D printing offers great potential for the production of self-moving or adaptive
objects, opening up numerous applications in areas such as medicine, architecture,
robotics, textiles, and also in aerospace. Selected examples are shown below:

• One example of a 4D printing application is the production of medical implants, which
can independently adapt to the body after implantation. Programmed
stents could travel through the human body and open when they have reached
their destination.
• Piping systems of a plumbing system can dynamically change their diameter
depending on the flow rate and water demand. Self-repairing pipes could auton-
omously repair themselves when they tear or break.
• A 4D printer can also produce self-assembled furniture. Initially, only a flat
plate is printed, which unfolds into a chair through water or light.
• 4D-printed clothing could change—oriented to the weather or the respective
activities. Shoes could vary their shape when users start running to provide better
comfort.

Note Box
There are almost no limits to creativity in the application of 3D and 4D
printing!

3.1.3.3 Real-Time Information Management


The already mentioned concepts of the Industrial Internet of Things, Manufacturing
Execution Systems (MES), and Supervisory Control & Data Acquisition (SCADA)
are indispensable for the implementation of a Real-time Information Management
(see Fig. 3.7). Only these components make it possible for the relevant information
to be available in real time. This information is indispensable for many AI applica-
tions in manufacturing. They contribute to accelerating decision-making processes
and enabling faster and often also automated escalation in case of problems.
An important field of application in the production context is industrial image
processing. For this purpose, mechanical, optical, and electronic systems are combined
via software. Concepts of industrial image processing are
used to detect errors in objects, surfaces, materials, and manufacturing processes
during the ongoing production process (see Fig. 3.11). This can further increase
the quality of the products and the efficiency in production overall. The auto-
mated quality control based on image processing also allows the production
processes themselves to be further automated. Industrial image processing
replaces manual inspections and measurements with AI-supported image process-
ing techniques. This is also referred to as image or video-based digital quality
control. Its use is intended to increase production throughput and product quality
at the same time. Such systems have already achieved productivity increases of up
to 50%. This is due to the increase in the error detection rate by up to 90% com-
pared to human error detection.
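
A strongly simplified illustration of image-based defect detection is sketched below in Python with OpenCV: the current camera image is compared against a "golden" reference image. Both images are generated synthetically here; productive systems rely on trained AI models as described above rather than on such a simple reference comparison.

import cv2
import numpy as np

# Synthetic stand-ins for a "golden" reference image and a camera image of the current part
reference = np.full((100, 100), 200, dtype=np.uint8)
current = reference.copy()
cv2.circle(current, (60, 40), 5, 0, -1)   # simulate a dark blemish on the surface

# Pixel-wise difference to the reference, thresholded to a defect mask
diff = cv2.absdiff(reference, current)
_, defect_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

defect_pixels = cv2.countNonZero(defect_mask)
print("Defect detected" if defect_pixels > 20 else "Part OK")
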
An important module of Realtime Information Management is Product Lifecycle
Management (PLM). PLM describes the company-wide—holistic—management
and control of all product data and the processes associated with development, man-
ufacturing, distribution, and marketing. Such a PLM approach covers the entire
product lifecycle from development through manufacturing, installation/use, and
maintenance to the disposal of the products. For this purpose, information along
the entire value chain is centrally consolidated. The central goal of Product
Lifecycle Management is to achieve a central and consistent data management to
map the entire lifecycle of a product informatively. The information available
throughout the entire development, production, and usage phase allows the product

itself and all associated processes to be continuously optimized. Such a form of
information aggregation is also a prerequisite for capturing the Product Carbon
Footprint. This is indispensable for the development towards sustainable corporate
management (see [7]).

Fig. 3.11 AI-based quality control: cameras supply the AI engine with data for quality control;
the AI engine processes thousands of images to detect errors; employees are made aware of errors
without having to search for them themselves
Realtime Information Management also includes the use of Augmented Reality
(AR) and Virtual Reality (VR). Augmented Reality, i.e., enhanced or enriched reality,
describes the computer-aided extension of the perception of reality. This enrichment can
address all senses. However, Augmented Reality is usually understood as the enrichment of
reality through a visual display of appropriate information.
Here, images or videos are overlaid with additional information. In addition, virtual
objects can be inserted into the perceived reality by means of insertion or overlay.
Augmented Reality became known to the general public through the game Pokémon
Go. Here, players had the task of catching virtual fantasy creatures with a smart
device that were inserted into the perceived reality. More mundane fields of applica-
tion are the display of a carpet selected at IKEA in the home living room. In the
production area, technicians can have the upcoming work step displayed directly in
their field of vision. Parts of a system can also be labeled accordingly to support a
fitter. This can prevent human errors. In mechanical engineering, Augmented Reality
applications can be used for communication with customers: physical objects can be linked
with computer-generated information to demonstrate various machines to interested parties.

Note Box
The use of augmented reality applications can illustrate maintenance, repair,
and manufacturing processes in such a way that even less qualified individuals
can be entrusted with demanding tasks.

With Virtual Reality, a completely virtual, i.e., non-existent world is created.
This is a computer-generated "reality" that may have nothing in common with the
context in which a person is actually located. In addition to a three-dimensional
image, sound recordings can also be made. Such a virtual reality can be created by
large screens. However, VR glasses are used much more frequently. In the produc-
tion sector, there are numerous VR applications. Manufacturing plants are often
very complex—and often must never stand still. Under such conditions, it is diffi-
cult to conduct training. Here, VR training in manufacturing can help—and
ongoing processes do not need to be interrupted. Also, the planning of production
areas can be supported by VR application. The creation and testing of prototypes
can also be supported by VR applications. In many cases, the costs for developing a
virtual reality app may be lower than for the creation of physical prototypes.
Corresponding applications for this can also be implemented in the Metaverse.
At the automotive supplier Bühler Motor a Virtual Reality Experience in the
Metaverse was realized. A large engine production plant was commissioned in
China through a VR experience. Due to Corona, this had to be done without special-
ist personnel from Bühler Motor on site. To create this VR experience in the

Metaverse, 360° films and pictures as well as detailed 2D films were used to train
the users. This was not just about commissioning. The employees were also trained
for the rectification of any errors. The VR experience could be accessed in German
and Mandarin. The implementation in the Metaverse not only made the commis-
sioning possible as a whole, but also helped to save time and money (cf. [14]; for
more details see Chap. 11).

3.1.3.4 Predictive Maintenance


An important field of application of artificial intelligence is Predictive Maintenance
(see Fig. 3.12). Plants and machines should not only be maintained when they fail or
errors occur. Through a
proactive approach, failure risks should be remedied before the plants or machines
fail. However, predictive maintenance is not limited to use within smart factories. A
particularly exciting (additional) business model for manufacturers lies rather in
offering predictive maintenance service for the machines and plants that are installed
at customers’ sites (see [1]).
What are the goals of Predictive Maintenance? The use of predictive mainte-
nance is also about the increase of Overall Equipment Effectiveness. The
smoother and more trouble-free production processes run, the higher the achievable
effectiveness of the entire plant. This is also supported by the reduction of repair
time, referred to as the Mean Time to Recovery (MTTR). The average time span between
machine failures (Mean Time Between Failures, MTBF), in turn, is to be extended.
Especially the reduction of downtimes or the increase in the operational readiness of
plants is an important lever to increase the overall effectiveness of a factory. In
addition, the maintenance and personnel requirements and thus the maintenance costs can
be reduced.
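
The relationship between MTBF, MTTR, and availability can be expressed in one line. The following Python sketch uses the common formula availability = MTBF / (MTBF + MTTR); the hour values are invented and only illustrate the direction of the effect.

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Inherent availability from mean time between failures and mean time to recovery."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative numbers: predictive maintenance extends MTBF and shortens MTTR
before = availability(mtbf_hours=300, mttr_hours=8)
after = availability(mtbf_hours=450, mttr_hours=3)
print(f"Availability before: {before:.2%}, after: {after:.2%}")
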
To enable Predictive Maintenance, sensors capture measurement and production
data from components, machines and entire plants in the ongoing process. From this
information, AI algorithms derive maintenance and replacement needs. In addi-
tion, emerging disturbances can be predicted. The planning of a repair or the
replacement of wear parts can thus take place at a time when the aggregates are still
running correctly.
The approach of Predictive Maintenance differs significantly from the still often
dominating maintenance logic of aggregates. Maintenance has so far mostly been
oriented towards periods and/or machine running times, as is the case, for example,
in the TÜV inspections for cars. Often, parts are replaced during routine mainte-
nance that could possibly have lasted longer. Such specifications “enforce” a
replacement of still functional parts. However, modules can also fail before the
scheduled replacement time and lead to unwanted and cost-intensive interruptions
of processes.
Fig. 3.12 Areas of action for smart factories—predictive maintenance, simulation of processes and resource management
• Predictive maintenance. Control levers: Industrial Internet of Things, predictive analytics.
Achievable results: increase in OEE (Overall Equipment Effectiveness), reduction of MTTR
(Mean Time to Recovery), extension of MTBF (Mean Time Between Failures), reduction in
unplanned production stops.
• Simulation of processes. Control levers: Industrial Internet of Things, use of digital twins,
simulations using AR (augmented reality) and VR (virtual reality). Achievable results:
optimization of machine utilization, virtual commissioning, reduction of production risk
through simulations before use.
• Resource management. Control levers: intelligent resource management through the use of
sensors and networking via the Industrial Internet of Things, predictive analytics,
AI-controlled logistics processes. Achievable results: saving of the necessary resources,
stabilization of the use of resources, efficient logistics, mass customization.

The following work steps of Predictive Maintenance are to be accomplished:

• At the beginning are the capture, digitization, and merging of data from various
performance components of a machine, a plant, etc. For this purpose, IIoT sensors are used
and maintenance logs are read out. In addition, reference files on performances and
disruptions of the same or similar plants in use at other locations can
be included. Also, data from the relevant context can be taken into account. These
include room temperature, air pressure and humidity of the area where the
machines or plants are used, if these are important for the use of the aggregates.
• The analysis and evaluation of the captured data is also done here with the
aim of pattern recognition through AI algorithms. This is about identifying pat-
terns that indicate future machine failures or a necessary replacement of parts.
• Based on the identified patterns, the probabilities of occurrence for disturbances,
maintenance needs, etc. are determined. If overheating, uneven operation, or other
deviations from the standard are detected in the production operation, signals are
automatically sent to the responsible service stations.
• Based on the determined probabilities of occurrence, specific maintenance processes are
initiated and action recommendations are derived. In production plants, timely
replacement of wear parts can be carried
out—and necessary maintenance work is scheduled during downtime. For air-
planes, ships, and cars, the service staff already know upon landing, entering the
harbor, or visiting the car workshop, which measures need to be taken. The nec-
essary spare parts can therefore be procured in advance. For an elevator, mainte-
nance and cleaning measures are recommended, so that it continues to function
smoothly and does not fail during peak usage times. Through predictive mainte-
nance, maintenance work can then be carried out when the elevator is not or at
least less frequently used.
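
A heavily simplified version of such a pipeline (capture data, learn a pattern of normal operation, flag deviating readings) could look as follows in Python. The sensor values are simulated, and the use of an Isolation Forest from scikit-learn is merely one possible modelling choice for illustration, not the method prescribed in the text.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Invented sensor history: vibration [mm/s] and temperature [degrees C] of a healthy machine
healthy = np.column_stack([rng.normal(2.0, 0.2, 500), rng.normal(60.0, 2.0, 500)])

# Train an anomaly detector on normal operation only
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings arriving from the IIoT sensors; the last one drifts towards a fault pattern
new_readings = np.array([[2.1, 61.0], [2.0, 59.5], [3.4, 78.0]])
flags = model.predict(new_readings)          # +1 = normal, -1 = anomalous
for reading, flag in zip(new_readings, flags):
    status = "schedule maintenance" if flag == -1 else "normal"
    print(reading, "->", status)
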
Fig. 3.13 shows the interaction of the various internal and external data sources on
which the forecast model of Predictive Maintenance is based.

Fig. 3.13 Model of predictive maintenance. The forecast model determines the probability that a
system will fail, based on the current framework conditions of the system. Company data: dynamic
system data (e.g. intensity of use, vibrations, stress level, temperature), static system data
(e.g. age, type, material), historical dynamic data (e.g. previous failure data). Data from third
parties: dynamic data (e.g. air temperature, humidity, air pressure and their fluctuations), static
data (e.g. installation environment, material fatigue limits), historical dynamic data (e.g. failure
data of similar types).

Predictive Maintenance is already used in many areas today. These include, among others:

• Monitoring of production plants
• Monitoring of engines, turbines, etc. (e.g. in airplanes, ships, cars)
• Monitoring of wind turbines
• Monitoring of elevators
• Monitoring of long-distance lines, pipelines, etc.

Note Box
The core of Predictive Maintenance is the proactive detection of action
requirements for maintenance, repair etc. of machines and systems—even
before a malfunction has occurred. Industrial Analytics evaluate relevant data
in real time through appropriately trained AI agents to determine the optimal
time of “intervention”. Required spare parts etc. can be ordered in time.
This can reduce downtime and better control the use of service staff. In
addition, the storage of necessary spare parts etc. can be optimized. Based on
the data obtained, impulses for the further development of machines and sys-
tems can be gained in the next step.

The concept of Predictive Maintenance allows existing business models to be extended as
well as new business models to be developed. Instead of selling
systems or units (like elevators or turbines), the customer acquires an “availability”.
The corresponding pricing concepts are Power by the hour, Pay as you go or Pay
per Use. This often results in much more intense customer relationships, because a
comprehensive and continuous exchange of information between manufacturer and
user takes place. The responsibility of the manufacturing company does not end
with the delivery of a product. Rather, this delivery represents the interface
between a production-dominated and a service-dominated sphere in the
manufacturer-­customer relationship (cf. [1], pp. 203–206).

Note Box
The global market for Predictive Maintenance was estimated at four billion
US dollars in 2021. It is expected to grow to US$ 19.3 billion by 2028. An
average annual growth rate of 30% is expected between 2022 and 2028 (cf.
[15]). This dynamic development should be incentive enough for you to con-
sider the use of Predictive Maintenance for your company.

An interesting application of Predictive Maintenance has been developed by
TÜV NORD for monitoring container and road bridges as well as pipelines.
Sensor-based strain gauges are attached at critical points. These can not only mea-
sure material strain, but also surface temperatures. The continuously collected data
is fed into an online monitoring system and examined for abnormalities, which
could indicate cracks, for example. Remedial action can be taken before the occur-
rence of usage-impairing damage (cf. [16], pp. 20 f.).

Predictive Maintenance is also used by the semiconductor manufacturer
Globalfoundries in Dresden. The costly 24/7 production of chips had to be inter-
rupted repeatedly to carry out prescribed maintenance work on the plant for the
preparation of ultra-pure water for the manufacturing process. Similar to the main-
tenance requirements for cars, parts were replaced—often regardless of their actual
wear. The maintenance schedules prescribed this. The company’s goal was now to
reduce the extent of interruptions in production due to failures and maintenance as
much as possible through the use of Predictive Maintenance.
Therefore, a team examined possibilities to improve the monitoring of plants and to
predict potential production failures specifically. The need for this is particularly great
because in the chip production more than 1000 manufacturing steps in often sensitive
and above all time-critical processes have to be coordinated. If a part of the manufac-
turing plant fails or is not fully efficient, this affects the entire manufacturing process.
Even if only a defective part has to be replaced, the overall process stalls and can only
be slowly ramped up again. The pressure to act was consequently very high (cf. [17]).
For the production of chips, wafers are used. These have to be cleaned of chem-
ical residues beforehand. For this purpose, tanks with ultra-pure water are used. The
inflow and outflow to such tanks is regulated by a multitude of valves. In the
past, these valves had to be regularly checked for damage in person and on site.
Despite the highest qualifications of the people involved, defects in the valves were
hardly predictable. In addition, there were neither indicators nor measuring instru-
ments available to indicate critical conditions of the valves early on. Therefore,
human visual inspections were carried out.
To close this source of error, a system for the data-supported and sensor-based
prediction of valve failures was developed. This now makes it possible to capture
comprehensive audio and vibration data—and in real time. An AI system evaluates
this data to assess the condition of each valve. For this purpose, the sensor data is
preprocessed by machine learning algorithms “on edge” and then transferred to the
cloud. Through these processing steps, it becomes possible to act before a failure
and to plan repair work specifically:
Unplanned production stops are replaced by planned maintenance work.
The findings of this AI system are presented in a dashboard. For this purpose, the
platform Cloud Shopfloor Intelligence from T-Systems is used. The data merged there
is available to the internal production teams. But also the manufacturers of components
or of entire plants can access this data and continuously improve their components. In
total, the losses associated with downtime could be significantly reduced (cf. [17]).
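
The preprocessing "on edge" mentioned above can be illustrated with a small Python sketch that compresses a raw vibration signal into one RMS value per time window before anything is sent to the cloud. The sample rate, the window length, and the signal itself are invented for illustration and do not describe the Globalfoundries system.

import numpy as np

SAMPLE_RATE_HZ = 1_000
WINDOW_S = 1.0

def rms_per_window(signal: np.ndarray, sample_rate: int, window_s: float) -> np.ndarray:
    """Compress a raw vibration signal into one RMS value per window ("on edge"),
    so that only compact features need to be transferred to the cloud."""
    window = int(sample_rate * window_s)
    usable = len(signal) - len(signal) % window
    windows = signal[:usable].reshape(-1, window)
    return np.sqrt((windows ** 2).mean(axis=1))

# Invented raw signal: mostly normal vibration with a short, stronger burst at the end
rng = np.random.default_rng(1)
raw = np.concatenate([rng.normal(0, 0.5, 5_000), rng.normal(0, 2.0, 1_000)])

features = rms_per_window(raw, SAMPLE_RATE_HZ, WINDOW_S)
print("RMS features sent to the cloud:", np.round(features, 2))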

3.1.3.5 Simulation of Processes and Material Flows


Another important field of action for Smart Factories is the simulation of processes
and material flows (see Fig. 3.12). Through simulation models, the processes of
complete production systems can be digitally mapped. The processes can thus be
extensively tested before implementation or before an adjustment of existing sys-
tems. Simulation models can identify potential disturbances in the processes and in
the material supply even before corresponding findings from production reality are
available. Such simulations of material flows are used in various planning phases.

These include feasibility, concept and detailed planning. In this way, a virtual com-
missioning can take place before its integration into operational production. In order
planning, possible production bottlenecks can be identified early on through a simu-
lation. Disturbing influencing factors on existing production systems can be specifi-
cally—temporarily—faded out in order to systematically investigate the effects of
individual factors. This is not possible in the real application environment. This
requires the development of digital twins (Digital Twins).
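
How such a material flow simulation can reveal bottlenecks is sketched below with the discrete-event simulation library simpy (an assumption chosen for illustration, not a tool named in the text); the process times and capacities are invented.

import simpy  # assumed discrete-event simulation library (pip install simpy)

PROCESS_TIMES = {"milling": 4, "assembly": 6}   # minutes per part, invented values

def part(env, name, milling, assembly):
    """One part flowing through two stations; waiting times reveal bottlenecks."""
    with milling.request() as req:
        yield req
        yield env.timeout(PROCESS_TIMES["milling"])
    start_wait = env.now
    with assembly.request() as req:
        yield req
        waited = env.now - start_wait
        yield env.timeout(PROCESS_TIMES["assembly"])
    print(f"{name} finished at minute {env.now} (waited {waited} min for assembly)")

env = simpy.Environment()
milling = simpy.Resource(env, capacity=2)
assembly = simpy.Resource(env, capacity=1)   # deliberately scarce -> bottleneck
for i in range(5):
    env.process(part(env, f"part-{i}", milling, assembly))
env.run()
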
A digital twin is a virtual model of a physical plant, a process or a system (cf.
[18, 19]). This is a digital representation of a material or immaterial object. The
Digital Twin is the virtual (digital) representation of a real object through a three-­
dimensional CAD model (Computer-Aided Design). It represents a virtual mirror
image. Processes can also be represented in the form of a Digital Twin. A digital
twin consists of data and models of the represented object or the process and further
descriptive information. This makes the twin "calculable". What is exciting here is
that—contrary to the term "twin"—an object or a process in the analog world
does not (yet) have to exist in order to create a digital twin. This allows the digital
version to be thoroughly tested before the physical twin is born in the real world.
In industrial applications, digital twins are used, for example, for the develop-
ment of products and services, but also for machines and systems. A digital twin, for
example, of cruise ships, can exist before the birth of the analog twin. Various usage
scenarios can be simulated on such digital models and their effects determined. Of
particular importance is also the possibility of dispensing with physical prototypes
without losing the option of usage simulations. A digital twin can take various
forms. It can be a 3D model, a behavioral model or a functional model that accu-
rately represents the real properties. Digital Twins can therefore be used for product
design, for monitoring machines and systems as well as for predictive maintenance.
Also an optimization of functions can—initially—be done on the digital twin. Also
technical condition, wear, maintenance and required repairs can be digitally simu-
lated and thus predicted. The use of digital twins allows a production to be com-
pletely digitally planned before the first setup. This way, errors and bottlenecks can
already be identified in the digital twin and not only in the finished system.
In process simulations, several digital twins can be linked together just like it can
later be the case with the analog twins. This allows an analysis and a subsequent
optimization across several value creation steps. A digital twin allows a multi-
tude of iterations of possible process steps to be carried out. Here, various processes
and technologies can be used. This way, comprehensive knowledge can be built up,
even before a new process or a new system is used in the analog area. If an analog
twin is already alive, data from its use in the digital twin can be analyzed and used
for optimizations. In sum, digital twins not only enable forecasts about future pro-
cesses, but can also significantly contribute to reducing operating costs and increas-
ing the lifespan of devices and systems. In addition, the use of digital twins can
reduce the times and costs of development processes—and thus also enable less
financially strong companies to develop innovative solutions.
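
In a strongly reduced form, the idea of a digital twin (a model that is synchronized with sensor data and can be run into the future) can be sketched in Python as follows; the motor example and its wear model are invented for illustration only.

from dataclasses import dataclass

@dataclass
class MotorDigitalTwin:
    """Extremely reduced digital twin of a motor: a parameterised model that is
    kept in sync with sensor data and can be run into the future ("what if")."""
    temperature_c: float = 40.0
    wear: float = 0.0            # 0 = new, 1 = end of life (abstract scale)

    def sync_with_sensor(self, measured_temperature_c: float) -> None:
        self.temperature_c = measured_temperature_c

    def simulate_hours(self, hours: float, load: float) -> None:
        # Invented, simplified wear model: wear grows with load and temperature
        self.wear += hours * load * (1 + max(0.0, self.temperature_c - 60.0) / 100.0) * 1e-4

twin = MotorDigitalTwin()
twin.sync_with_sensor(measured_temperature_c=72.0)
twin.simulate_hours(hours=2_000, load=0.8)      # a "journey into the future" on the twin
print(f"Predicted wear after 2,000 h: {twin.wear:.2f}")
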
With the increasing spread of the (Industrial) Internet of Things the concept of
the digital twin is becoming increasingly important. Among the most important
players in the market for digital twins are ABB, General Electric, IBM and Microsoft

(see [20]). The global market for Digital Twins is expected to generate sales of
US$ 95 billion by 2033. This corresponds to an average annual growth of 34.3%
between the years 2023 and 2033. The automotive and transport sector will grow by
20% annually during this period. A lack of skilled workers to develop appropriate
solutions is seen as an obstacle to growth in this sector (see [20]).
Also in the German industry, digital twins are becoming more and more the
standard, as a representative study by the Bitkom shows. For this, 603 companies in
Germany were surveyed, including 163 industrial companies (see [21]):

• 44% of companies are already using digital twins.
• 8% are planning this and another 14% can basically imagine doing so.
• Only 10% cannot imagine using digital twins even in the future.
• However, 20% are not yet familiar with the technology at all.
• 63% of industrial companies consider digital twins to be indispensable in order
to be successful in international competition.

The use of digital twins will be standard in the industry in the future. This is
shown by further results of this survey:

• 59% of German industrial companies assume that digital twins contribute to more
sustainable production.
• 49% believe that digital twins enable completely new business models.
• Only 17% of the companies surveyed believe that digital twins are just a pass-
ing trend.

Note Box
The interaction with a Digital Twin enables completely new forms of net-
working between the analog and the digital world:
• A Digital Twin enables teleportation. A user can enter the digital model of
a machine or a scientific model to make changes there or gain new insights.
• A Digital Twin allows a journey into the past, to understand a constella-
tion from 2 h ago or 3 months ago. This can, for example, identify the
causes of errors that have occurred in the present.
• A Digital Twin supports a journey into the future. Here it can be simu-
lated how certain units and plants, comprehensive Smart Factories and—
prospectively—also people behave if certain interventions are made today.
For machines, for example, it can be checked what wear is to be expected on which
components in 2 years. Such a journey into the future
makes it possible to anticipate alternative developments. These can also
refer to cities, countries or the whole planet.
To support such areas of application, the relevant processes and components
need to be standardized. These standards should be defined uniformly at least
EU-wide, better still worldwide, to enable comprehensive networking.
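The "journey into the future" described above can be illustrated with a very small simulation sketch. The linear wear model, the component, and all parameter values are purely illustrative assumptions and do not stem from a real digital twin:

```python
# Minimal sketch: a "journey into the future" with a digital twin - projecting
# the expected wear of a component 2 years ahead under an assumed load profile.
# The linear wear model and all parameters are illustrative assumptions.

def simulate_wear(initial_wear: float, hours_per_day: float, days: int,
                  wear_per_hour: float = 0.00001) -> float:
    """Return the simulated wear level (0 = new, 1 = end of life)."""
    wear = initial_wear
    for _ in range(days):
        wear += hours_per_day * wear_per_hour
    return min(wear, 1.0)

# Where will a bearing that is already 20% worn stand after 2 years of 16-h shifts?
projected = simulate_wear(initial_wear=0.20, hours_per_day=16, days=2 * 365)
print(f"Projected wear after 2 years: {projected:.0%}")   # roughly 32%
```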
By simulating the availability of raw materials, personnel, machine capacities


and other production resources, an extended production planning is also possible
(see Fig. 3.12). Not only is the development of primary demand simulated. Primary
demand refers to the demand based on forecasts or customer orders. The develop-
ment of secondary demand can also be simulated. All elements necessary for pro-
duction flow into this material requirements planning. This includes individual
parts, intermediate products, and assemblies. But also the necessary raw, auxiliary
and operating materials as well as the required personnel and machine deployment
are simulated here. Based on this data, a planning of the production process focused
on efficiency and effectiveness is enabled, which considers batch sizes, costs, dead-
lines, and capacity utilization equally. This can be accompanied by a rearrangement
of production lines.
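How the derivation of secondary demand from primary demand can look in a simplified form is sketched below. The bill of materials, the product names, and the quantities are purely illustrative assumptions and are not taken from the sources cited above:

```python
# Minimal sketch: deriving secondary demand (components, raw materials) from
# primary demand (forecasts or customer orders) via a bill of materials (BOM).
# Product names, BOM structure and quantities are illustrative assumptions.

from collections import defaultdict

# Illustrative BOM: product/assembly -> list of (component, quantity per unit)
BOM = {
    "pump_unit": [("housing", 1), ("motor", 1), ("screw_set", 4)],
    "motor":     [("rotor", 1), ("stator", 1), ("bearing", 2)],
}

def explode_demand(primary_demand: dict[str, int]) -> dict[str, int]:
    """Translate primary demand recursively into secondary demand."""
    secondary = defaultdict(int)

    def explode(item: str, qty: int) -> None:
        for component, per_unit in BOM.get(item, []):
            secondary[component] += qty * per_unit
            explode(component, qty * per_unit)   # components may have sub-BOMs

    for product, qty in primary_demand.items():
        explode(product, qty)
    return dict(secondary)

# Example: a simulated order book of 120 pump units
print(explode_demand({"pump_unit": 120}))
# -> housing 120, motor 120, screw_set 480, rotor 120, stator 120, bearing 240
```

In a real system, the simulated secondary demand would additionally be matched against available machine capacities, personnel, and batch-size and deadline targets.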
These developments have also found their way into the production of batteries,
which are essential for the expansion of electric mobility. Already today, hundreds
of mega and giga factories for electric vehicles are in operation worldwide, under
construction or in planning. However, an expansion of traditional production meth-
ods and technologies alone is not sufficient to meet the global battery demand. This
is expected to more than triple by 2025. To produce the required quantities, new
approaches in production are necessary. Only then can speed, flexibility, and
throughput be sustainably increased.
To meet the growing demand and at the same time take into account the dynamic
changes in battery technology, not only powerful, but above all flexible production
plants and machines are necessary. Intelligent, AI-supported data collection and
data evaluation play a key role in this. Networking with raw material suppliers as
well as with other service partners is indispensable to achieve the necessary flexibil-
ity. Track-and-trace applications improve insight into the individual stages of the
manufacturing processes (cf. [22]).
A convincing example of a consistent transformation in industry is provided
by Siemens [23] with its Digital Enterprise in the Siemens electronics factory in
Amberg (EWA). The challenges to be mastered here are impressive:

• 350 production changes daily


• Portfolio of around 1200 different products
• 17 million Simatic components per year

The components manufactured at the Amberg site are used for the areas of automa-
tion and control technology as well as on the manufacturing execution level. To
ensure that production runs smoothly, around 50 million process and product data
points are continuously evaluated and used for ongoing optimizations. Applications of arti-
ficial intelligence, industrial edge computers, and cloud solutions are used for this.
These enable very flexible, efficient, and reliable processes.
Through the use of edge computing, the data is processed where it is generated,
directly at the plants and machines. The data generated by sensors is transferred to
a cloud. AI algorithms are used to identify relevant patterns for quality assurance.
An AI-controlled model provides reliable statements about the probability of
product errors. The results obtained here are fed directly into production by Closed
Loop Analytics. This significantly increases the quality of the products.
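A minimal sketch of how such a probability of product errors could be estimated from process data and turned into an intervention decision is shown below. The feature names, the simple logistic regression model, and the threshold are illustrative assumptions and do not describe the actual implementation at Siemens:

```python
# Minimal sketch: estimating the probability of a product error from process data
# and deciding whether to intervene (closed-loop idea).
# Feature names, model choice and threshold are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical process data: [solder_temperature, placement_offset_um, line_speed]
X_train = np.array([
    [245.0,  30.0, 1.0],
    [250.0, 120.0, 1.2],
    [248.0,  40.0, 0.9],
    [252.0, 150.0, 1.3],
])
y_train = np.array([0, 1, 0, 1])   # 1 = product error found in inspection

model = LogisticRegression().fit(X_train, y_train)

def needs_intervention(process_values: list[float], threshold: float = 0.5) -> bool:
    """Return True if the predicted error probability calls for an intervention."""
    p_error = model.predict_proba([process_values])[0][1]
    return p_error >= threshold

# A new board coming off the line: re-inspect or adjust process parameters?
print(needs_intervention([251.0, 140.0, 1.25]))
```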
In addition, a combination of edge computing and AI is also used for Predictive
Maintenance. Based on production data, an algorithm determines real-time corre-
lations between abnormalities in the process data and downtime. These are fed back
into production and lead to the necessary interventions there. Through a Performance
Insight App, the relevant results are made available to users. The plant operators are
thus informed between 12 and 36 h before a possible system failure and can act in
time. Siemens [23] also works with the concept of the digital twin.
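The basic idea of such an early warning can be sketched as follows. The anomaly score, its mapping to a remaining time until failure, and the alert logic are illustrative assumptions; only the 12 to 36 h warning window is taken from the description above:

```python
# Minimal sketch: early warning for a possible system failure, based on the idea
# that anomalies in process data correlate with later downtime.
# The score-to-hours mapping and the thresholds are illustrative assumptions.

def hours_to_expected_failure(anomaly_score: float) -> float | None:
    """Map an anomaly score (0..1) to an estimated remaining time until failure.
    Placeholder for a model learned from historical process and downtime data."""
    if anomaly_score < 0.6:
        return None                      # no failure expected
    return 48.0 * (1.0 - anomaly_score)  # higher score -> failure expected sooner

def maintenance_alert(anomaly_score: float) -> str:
    remaining = hours_to_expected_failure(anomaly_score)
    if remaining is None:
        return "OK - no intervention needed"
    if remaining <= 36:                  # warn plant operators 12-36 h in advance
        return f"ALERT: possible failure in ~{remaining:.0f} h - schedule maintenance"
    return "Monitor: anomaly detected, failure not yet imminent"

print(maintenance_alert(0.4))   # OK - no intervention needed
print(maintenance_alert(0.7))   # ALERT: possible failure in ~14 h ...
```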
So-called Sustainability Twins make an important contribution to increasing sus-
tainability or the circular economy. A Sustainability Twin is a digital model that is
networked with its real counterpart. The Digital Twin continuously informs about the
performance achieved, a possible need for repair, and unused efficiency potentials.
This allows the Sustainability Twin to continuously generate impulses for more sus-
tainable manufacturing or more sustainable products. This can help to reduce
resource consumption and thus manufacturing costs as well as the ecological foot-
print. Digital twins can reach an economic value of US$ 1.3 trillion by 2030. They
can help to avoid 7.5 gigatons of CO2 emissions (see [24]). Such use of Digital Twins
can support the transition to a more circular economy. For this purpose, parts and
products must be designed in such a way that they can be easily reused or recycled.
This is intended to reduce or completely avoid waste (see Sect. 2.6; in depth [7]).
Networked and intelligent factories also create the possibility for a highly indi-
vidualized production. Through intelligent networking of the production plants
with a high degree of self-organization of the plants, production processes can be
designed much more individually and thus flexibly. This makes it technically and
economically possible to produce small series and manufacture individual pieces.
This can be referred to as Mass Customization—a mass production of individual
pieces, which used to be a contradiction: Custom-made products from the
assembly line become possible. The intelligence of the production plant makes it
possible to combine customer-specific solutions with the advantages of a process-
optimized mass production. For this, the customer can assemble his desired prod-
uct from a modular system. Based on the respective customer wishes, the
manufacturing processes are independently optimized based on time and cost tar-
gets. An informational prerequisite for this is the networking of production plants
with customers.
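A minimal sketch of how a customer configuration from a modular system could be translated into a production order with cost and time figures is shown below. The modules, prices, and processing times are illustrative assumptions, not real product data:

```python
# Minimal sketch: turning a customer configuration (modular system) into a
# production order with material cost and processing time.
# Modules, prices and times are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Module:
    name: str
    cost_eur: float      # material cost
    minutes: float       # processing time on the line

CATALOG = {
    "base_module":    Module("base_module", 200.0, 40.0),
    "premium_finish": Module("premium_finish", 80.0, 15.0),
    "glass_door":     Module("glass_door", 90.0, 20.0),
    "extra_drawer":   Module("extra_drawer", 60.0, 10.0),
}

def build_order(selected: list[str]) -> dict:
    modules = [CATALOG[m] for m in selected]
    return {
        "modules": [m.name for m in modules],
        "material_cost_eur": sum(m.cost_eur for m in modules),
        "processing_minutes": sum(m.minutes for m in modules),
    }

# A customer configures an individual product in the web shop:
print(build_order(["base_module", "glass_door", "extra_drawer"]))
```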
A convincing example of mass customization is provided by deinSchrank.de.
The company has been relying on the automatic production of custom-made furni-
ture for many years. Customers can select the construction, material, colors, interior
equipment, doors, and handles online via the website (see Fig. 3.14). To
meet demand, more and more production robots are being used. The number of
robots that can independently process large chipboards is expected to double in the
future. Each robot costs approximately 250,000 €. You can find an exciting video
from this company here: https://www.youtube.com/watch?v=xd7a3DAoo7k.
When such production facilities are connected to a customer interface, customers
can change their preferences up to shortly before the start of production—or even
Fig. 3.14 Custom-made cabinet via the website (source: [25])

Food for Thought


Robots are increasingly used at deinSchrank.de because the lack of qualified
employees for production with demanding physical work in shift systems
would otherwise limit growth. However, the programming of robotics as well
as their maintenance and optimization also require qualified personnel else-
where. These technologically demanding jobs are highly interesting for young
professionals and those employees who want to further qualify themselves.
According to the company’s management, the manufacturing industry in
Germany can only be sustained in the long term if the rising personnel costs and the
lack of “human labor” (regardless of qualification) are compensated by tech-
nology in the hardware (robotics)—supported by powerful software. In the
short term, the market will no longer forgive inefficient work and the associ-
ated higher costs (see [25]).

during the ongoing production process. How late such changes can still be
made depends on the duration of the necessary procurement processes and the setup
times of the plants used. The customer here—within limits—has fairly extensive
scope to intervene in production. This meets the expectations of customers.
After all, customers also experience in other fields that they are increasingly operating in
real time—thanks to WhatsApp and many streaming offers. At the same time, a new
form of customer experience becomes possible.
Note Box
With Smart Production, it’s less about the “what” of production, but rather
about the “how”! A “product-neutral” manufacturing consists of several flex-
ible and robot-based production units.
This is referred to as a matrix production.
The centerpiece is an AI application that optimizes the overall process tak-
ing into account the respective cycle and delivery times according to given
parameters.

3.1.3.6 Resource Management


Intelligent resource management is another exciting field of action for Smart
Factories (see Fig. 3.12). The Industrial Internet of Things in conjunction with a
multitude of sensors in the machines and systems support a resource-saving use.
Intelligent control of production facilities can contribute to energy savings and
reduction of peak consumption. Another important field of application is the
AI-based optimization of logistics processes. AI algorithms can optimize the fol-
lowing areas of the logistics chains and thereby contribute to efficiency increase,
cost reduction and thus also to more sustainable business management:

• The optimization of delivery processes can reduce energy and storage costs and at
the same time shorten delivery times and increase delivery reliability (a minimal
routing sketch follows this list).
• Sensors enable a monitoring of the performance of integrated means of trans-
port (such as cars, trucks, airplanes, drones). This can optimize energy consump-
tion. In addition, predictive maintenance can reduce the downtime of logistics
modules and again shorten delivery times and increase the delivery reliability.
• Special sensors serve the monitoring of vehicle drivers and enable real-time
coaching. While dynamic traffic jam avoidance is already standard today,
advanced systems can contribute to the optimization of driving style and accident
prevention. In addition, impulses can be given as to when—in addition to the
legal requirements—rest periods are necessary. For this, ongoing monitoring of
the driver’s gestures and facial expressions through image recognition can pro-
vide important impulses.
• In addition, autonomous transport vehicles can be used in the production area
and in the warehouse. These sensor-equipped vehicles continuously scan the
environment. This will enable autonomous driving in the long term.
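The routing aspect mentioned in the first bullet can be illustrated with a very small sketch: a greedy nearest-neighbor ordering of delivery stops. The coordinates and the heuristic itself are illustrative assumptions; productive systems use far richer optimization models (time windows, traffic, vehicle capacities):

```python
# Minimal sketch: ordering delivery stops with a greedy nearest-neighbor
# heuristic to shorten the driven distance. Coordinates are illustrative
# assumptions; real systems add time windows, traffic and capacity constraints.

import math

def nearest_neighbor_route(depot: tuple[float, float],
                           stops: list[tuple[float, float]]) -> list[tuple[float, float]]:
    route, current, remaining = [], depot, stops[:]
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

depot = (0.0, 0.0)
deliveries = [(5.0, 1.0), (1.0, 1.0), (4.0, 4.0), (0.5, 3.0)]
print(nearest_neighbor_route(depot, deliveries))
# -> visits (1.0, 1.0) first, then the next-closest remaining stop, and so on
```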

To develop and implement the processes described here, the following companies,
among others, offer their support:

• ASMPT
The company ASMPT supports smart process integration through powerful hard-
ware and software solutions in electronics manufacturing. These solutions have
been combined into a powerful Shopfloor Management Software Suite. Through
ASMPT Works, the prerequisites for a Smart Factory are created (see [26]).
• Dassault
The company offers with the 3DEXPERIENCE platform a collaborative envi-
ronment. It allows companies to have a holistic view of their business activity and
their ecosystem—in real time. To this end, people and ideas as well as data and
solutions are brought together in a single environment. The 3DEXPERIENCE
platform connects the entire value chain as a single source of truth. Therefore, this
information is available regardless of location, time, and system. Through this
3DEXPERIENCE platform, tailored industrial solutions for various industries are
offered to implement digitization strategies. This supports agile and scalable
action in response to changing market requirements. Thus, potential disruptions
can be virtually simulated in advance—long before they actually occur (see [27]).
• Fabasoft
The company Fabasoft enables with the Fabasoft Business Process Cloud an
efficient processing of digital content across company, IT infrastructure and
national borders. Features such as intuitive creation of automated workflows,
seamless versioning as well as a digital signature and a full-text search contribute
to this. Through an easily adaptable and scalable platform, solutions tailored to
individual customer requirements can be developed. The data storage of the
cloud-based platform is located in Europe (see [28]).
• InterSystems
The data platform InterSystems IRIS supports the creation of high-performance,
machine-learning applications by connecting existing data and application silos.
This accelerates data-intensive applications and simplifies their development.
Through a holistic Cloud-First data platform there is no need to integrate dif-
ferent technologies first. In addition, interoperability—the ability of different
systems, technologies, or organizations to interact—increases. This approach leads
to less programming effort and a lower consumption of system resources. In
addition, less maintenance effort is required (see [29]).
• Leadec
Leadec offers an end-to-end solution to map all service processes across systems
and digitally. This aims to achieve high transparency over the planning and pro-
vision of all services—starting with the first offer to legally compliant documen-
tation and invoicing. Customers of the company can use the data from the digital
business platform Leadec.os via their own platforms and IT systems to continu-
ously improve their own production processes. For this, the data from different
sources is collected, structured, evaluated, and made available in the cloud-based
solution Leadec.os. This platform can be integrated into the digital ecosystems
of the customers and enables predictive maintenance (see [30]).

Note Box
The key insight is that many steps towards the Smart Factory cannot be taken
alone. Often, it requires long-term partnerships with various companies to
master the challenges.
At the same time, it is important not to enter into too close and exclusive
relationships, so that future flexibility does not suffer!
Is this a partial contradiction? Yes!
Fig. 3.15 Technologies for digitizing production (data source: [5], p. 8): cloud 44%, Big Data 40%, Smart Analytics 40%, integrated MES 32%, digital twins 24%, predictive maintenance 20%, linking with customers' machines/systems/processes 20%, Artificial Intelligence 16%, autonomous systems 16%

Which technologies are used to what extent is shown by the already cited study by
Roland Berger [5]. This ranking is led by cloud technologies, which are used by 44%
of the respondents (see Fig. 3.15). Big Data and associated Smart Analytics follow
with 40% usage each. Integrated Manufacturing Execution Systems (PPS—
Production Planning and Control) are used by 32% of the companies. Digital Twins are
used by 24% of the respondents in production. 20% already use Predictive Maintenance.
A linkage of their own production facilities with customer facilities is also used by
20%. Artificial Intelligence and autonomous systems are used by 16%. It should be
noted that the evaluation of Big Data, Smart Analytics, the use of Digital Twins as well
as Predictive Maintenance are also applications of Artificial Intelligence.
The question of the dominant areas of application of the digital factory is also
exciting. The study by Roland Berger [5] provides interesting insights on this as
well. In 25% of the companies surveyed, a combination of Big Data, AI and
Analytics is used (see Fig. 3.16). Digital Twins are also included in this area of
application. Another large field of application is Automation and Robotics,
accounting for 23%. This relies on a Manufacturing Execution System for pro-
duction planning and control. 18% use optimized Reporting. In fourth place, with
15%, is Quality Management.
In the topic of quality management, another interesting development is emerg-
ing: Predictive Quality—a “proactive quality”. The goal of Predictive Quality is
the optimization of product and process-related quality. Based on forecasts of
the expected product quality, decision-making bases are developed. This enables the
user to steer future quality-influencing events in the desired direction. This is a spe-
cial application of Predictive Analytics. The prerequisite for this is a seamless link-
ing of the relevant data points from the manufacturing process (cf. further [31, 32]).
Roland Berger [5] points out a deficiency identified in the digital factory: The
digital applications used by companies rarely follow a holistic strategic approach.
One reason for this is seen in the fact that companies approach the digitization of
Fig. 3.16 Dominant areas of application of the digital factory (data source: [5], p. 8 f.): Big Data, AI and analytics 25%; automation 23%; reporting 18%; quality management 15%; logistics and materials management 8%; augmented operator 8%; others 3%

processes and technologies in individual projects. This results in different priorities


in the implementation—which do not necessarily lead to a convincing whole.

Note Box
A digitization of production must follow a digital target image and a
Masterplan Smart Factory. Otherwise, digitization remains piecemeal—
focused on efficiency increase, cost reduction, and quality improvement.
Without a convincing strategic digital target image and without a
Masterplan Smart Factory, neither a sustainable competitive advantage is
achieved nor can new business models be developed.
For this, a holistic view of the entire production landscape and its inte-
gration into networks with suppliers and customers is indispensable.

What such a holistic solution can look like is illustrated by the Digital Thread of the
globally operating technology company PTC. The Digital Thread refers to a central,
reliable data source that enables consistent information and cooperation between differ-
ent functions through real-time data synchronization. The word “thread” here stands for
the thread that connects all information sources and process steps. A scalable database
accessible to all enables continuous access to information about products, processes,
and people throughout the company. This comes with various advantages (cf. [33]):

• Optimization of technical development


The quality is improved, the rework reduced, and the product development and
time to market accelerated.
• Increase in manufacturing efficiency
Better insights into assets, throughput, and employee productivity make
processes more effective.
• Promotion of product and service innovation


The Digital Thread can discover and utilize new business models and business
opportunities that have a positive impact on revenue.
• Optimization of the service area
The effectiveness of the technicians is increased and the downtime of the facili-
ties is reduced.
• Improvement of the sales and marketing experience
New interaction possibilities can improve the customer experience and strengthen
customer loyalty.

Figure 3.17 shows a Digital Thread developed and supported by PTC. Here it
becomes visible how the individual areas are consistently interlinked. In addition, it
is shown here what effects can be achieved in the various fields (cf. [33]).
The necessity for such holistic action is confirmed by further results from Roland
Berger [5]. Most digitization projects have only just started (cf. Fig. 3.18). Almost
two thirds of digitization projects are still in the idea finding or planning phase. In
40% of companies, a large discrepancy between target and status quo is identified.
25% are planning projects—but rather on a small scale. At 5%, only a few projects
are in the budgeting phase. It is surprising that the topic of hardware
received no mentions—and only 10% of companies rely on software. In the long
term, it will be dangerous that personnel development is not given any attention.
The study by Roland Berger [5] also shows, however, that companies have set
out on this path:

• On average, 20–30% of all investments are invested in the development of a digital
factory. Investments in software often make up the largest share of this.
• 70% of respondents expect a rising investment share in the future.
• On average, companies estimate a period of 2 years for the implementation of
the planned investments.
• 60% of companies use standard procedures for ROI determination in their
investments. However, future returns are often difficult to quantify at the begin-
ning. Sometimes project-related calculations are used—in some cases, however,
return calculations were completely omitted.

Note Box
One thing is certain: The path to a Smart Factory is long and rocky—and will
never end.
Therefore, the construction of a Smart Factory is not a project with a
defined start and end point, but a process!
Fig. 3.17 Digital thread—connects and synchronizes upstream and downstream process stages: engineering excellence, manufacturing efficiency, service optimization, product and service innovation, and sales and marketing experiences (source: [33])
Fig. 3.18 The path to the implementation of Industry 4.0 is rocky (data source: [5], p. 10): idea 40%, planning 25%, budgeting 5%, project implementation 20%, hardware 0%, software 10%, personnel 0%

3.1.4 SheIn: Smart Manufacturing as a Business Model

A fascinating use case for Smart Manufacturing is SheIn (shein.com, originally
started as SheInside), founded in 2008 by Chris Xu. It is one of the largest and
currently fastest-growing online shops from China and is currently focused on
the fashion segment. The company primarily targets women of Generation Z
(born between 1997 and 2010) and young parents worldwide. Men’s and children’s
clothing are now also on offer. SheIn has a high proportion of 18–29-year-old cus-
tomers in Germany (see [34], p. 3). What makes this company special is neither the
use of fashion bloggers nor the bonus points program, which rewards regular use of
the app.
What is exciting is that SheIn was able to reduce the design and production
time for fashion items from 21 days—the benchmark set by Zara—to 3 days. Zara, one of the
inventors of Fast Fashion, manages to bring about 1000 new styles to the market
per month. This was achieved through shorter production cycles, a just-in-time
logistics and high investments in flagship stores in many city centers around the
world. The speed and variety of the product range were the most important drivers
of success for many Fast-Fashion companies.
The increasing digitization of consumers as well as providers (manufacturers
and retailers alike) now enabled a further development leap towards Ultra Fast
Fashion and Real Time Fashion or Real Time Retail. The networking allows the
company SheIn to bring 3000–7000 new products to the market—per day. Assuming
an average of 5000 new products per day, this can be over 1.8 million new products
per year. This is also made possible by the fact that SheIn needs significantly less
time for product tests than Zara (cf. [35]).
Note Box
Ultra Fast Fashion and Real Time Fashion or Real Time Retail reduce the
lead time for new fashion products from a few weeks to a few days!

Unlike fast-fashion retailers, who still have to invest in logistics and warehous-
ing as well as in physical retail stores, Real-Time-Fashion providers shift their
business largely to the internet. Here the focus—apart from logistics and
warehousing—is on the online presence and especially the online shop (cf. [36]).
The Real-Time-Fashion company SheIn, with the mission “Everyone can enjoy
the beauty of fashion”, strives to offer every customer worldwide stylish quality
products quickly and at attractive prices [36]. To achieve this,
SheIn hired designers from different parts of the world. Today, the company has a
design team of about 800 people. SheIn uses Google Trends and various social lis-
tening tools to anticipate new clothing trends in fabrics, colors and styles. To evalu-
ate this real-time data stream, sophisticated AI systems are used. These derive
highly automated new product ideas from the search and social media behavior of
individual countries. The insights gained from this are implemented by designers
who are closely tied to the SheIn community. These designs are automatically for-
warded to the suppliers via a central SheIn ERP system. There, the products are
initially produced in limited quantities and offered via the online shop in various
regions. For the products that are particularly successful in the viewing and buying
behavior in the webshop, the order volume for production is increased in real time
via the ERP system. SheIn is therefore strongly guided by the consumers in its offer.
It is important to know:
SheIn does not have its own style!
The company rather produces the goods that meet the greatest response in the
various regions of the world. This working method can be compared with the
engagement algorithm of TikTok. On TikTok, those new videos also automatically
appear in the timeline of more and more people, which have previously proven
themselves with a limited number of viewers. This creates a data-based feedback
loop, which continuously improves the digital customer experience and promotes
the production of customer-relevant products. At the same time, inventory is reduced
and the production of hard-to-sell offers is avoided. SheIn is able to reflect local
fashion taste in real time—and to generate a different, but always relevant offer for
over 150 countries. This does not require traditional market studies, but “only” an
evaluation of user habits in real time (see [35]).
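The basic logic of such a data-based feedback loop can be sketched in a few lines. The metrics, thresholds, and scaling factors are purely illustrative assumptions and do not represent SheIn's actual algorithm:

```python
# Minimal sketch of a demand-driven feedback loop: items that perform well in the
# web shop get their production volume increased, weak items are dropped.
# Metrics, thresholds and scaling factors are illustrative assumptions.

def adjust_order_volume(item: dict) -> int:
    """item: {'views': int, 'purchases': int, 'current_batch': int}"""
    conversion = item["purchases"] / max(item["views"], 1)
    if item["views"] > 10_000 and conversion > 0.03:
        return item["current_batch"] * 5       # scale up a proven design
    if conversion < 0.005:
        return 0                               # stop production of a weak design
    return item["current_batch"]               # keep testing in small quantities

print(adjust_order_volume({"views": 25_000, "purchases": 1_200, "current_batch": 500}))  # 2500
print(adjust_order_volume({"views": 8_000,  "purchases": 10,    "current_batch": 500}))  # 0
```

Running such an adjustment continuously for every item and region is what turns viewing and buying behavior into a real-time production signal.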
Intensive collaboration with various manufacturers enables the implementation
of the concept Consumer-to-Manufacturer (C2M). The special feature of this
C2M concept is that it leads to a direct relationship between customers and
manufacturers (see Fig. 3.19). As a result, the stages of intermediate trade are
eliminated in C2M, for example, the involvement of wholesalers.
What is the difference between C2M (Consumer-to-Manufacturer) and D2C
(Direct-to-Customer)? In D2C, too, the intermediaries are eliminated. However,
D2C is only a specific sales concept—also called direct sales, in which there is
Fig. 3.19 Consumer-to-manufacturer concept: consumer and manufacturer interact directly in an ongoing exchange (advertising impulses, orders by type and quantity, customer data, consolidated feedback), with direct delivery and without intermediaries

direct contact between the customers and the manufacturer. There, orders are placed
and delivery is triggered from there. Many traditional manufacturers (like Adidas,
Montblanc, Nike) use their own online shops for this direct sales. This is usually
done in parallel with indirect sales through wholesalers and retailers.
In contrast to the D2C approach, the consumer in the C2M concept is the imme-
diate and not just the indirect trigger of specific production processes. Here, cus-
tomers have a high degree of influence on what products look like and in what
quantities they are produced—almost in real time.

Note Box
In contrast to the supply-driven system Direct-to-Customer (D2C),
Consumer-to-Manufacturer (C2M) is a demand-driven system. Ideally,
only the pieces that actually meet demand are produced—and only in the
required quantity. In addition, customers are supplied directly from the manu-
facturer—and not from the company responsible for marketing.

To achieve this, customers in the C2M model—as at SheIn—are directly or indirectly
connected to the manufacturers via online platforms (see Fig. 3.19).
Intermediary distribution stages are eliminated, enabling more intensive and direct
information exchange between customers and producers. Demand is bundled via
the platforms and forwarded to the production sites. There, the desired items are
produced in the ordered quantities and delivered directly to the customers after a
few days or weeks. At the same time, feedback loops are created that allow com-
prehensive feedback from consumers to manufacturers. Only comprehensive net-
working and the digitization of information make such a business model
possible—combined with AI-supported evaluation options. This promotes the trend
towards individualization in the fashion sector (see [37]).
A convincing presence on social media—involving various influencers—and an
online shop with a good customer experience are prerequisites for the point system
implemented by SheIn to have its effect. This point system rewards users for a variety
of actions—be it verifying the email address, writing a product review, or adding a
photo with a SheIn item. Additional points are earned through daily log-in, participa-
tion in outfit competitions, or live events. The earned points are worth money and can
be used to pay for a purchase. Thus, gamification elements are used for systematic
customer retention—and comprehensive customer engagement is promoted (see [35]).
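The mechanics of such a point system can be sketched in a few lines. The point values and the conversion rate into a purchase discount are illustrative assumptions, not SheIn's actual reward table:

```python
# Minimal sketch of a gamified point system: actions earn points, points can be
# redeemed against a purchase. Point values and the conversion rate are
# illustrative assumptions.

POINTS_PER_ACTION = {
    "verify_email": 100,
    "product_review": 10,
    "review_with_photo": 15,
    "daily_login": 5,
    "outfit_contest": 50,
}

def redeemable_discount(actions: list[str], points_per_eur: int = 100) -> float:
    """Translate completed actions into a discount in EUR."""
    points = sum(POINTS_PER_ACTION.get(a, 0) for a in actions)
    return points / points_per_eur

print(redeemable_discount(["verify_email", "daily_login", "product_review"]))  # 1.15
```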
In evaluating the business concept of SheIn, it should be noted that the company
often faces plagiarism accusations. Both large brands and smaller designers are
affected, whose designs are often almost identically copied. Due to production in
China, deliveries abroad often result in long delivery times. Also, Real Time Fashion
has nothing to do with sustainable fashion. Another point of criticism is that the
company and the acting persons at SheIn remain very opaque. A detailed analysis of
the business model and the profitability and sustainability is therefore not possible.
Nevertheless, investors now value SheIn at US$ 100 billion—more than Inditex
(Zara) with about US$ 70 billion and H&M with about US$ 20 billion combined.

Food for Thought


It is interesting that SheIn has become the shooting star of fast fashion and is
making billions with it. At the same time, the very active core target group
Generation Z seems to have no problem reconciling Fridays for Future and
Ultra Fast Fashion.

3.1.5 Overall Assessment of the Development Towards Smart Manufacturing

3.1.5.1 Data and Transaction Spaces as Drivers of Smart Manufacturing
An important prerequisite for Smart Manufacturing to be used even more compre-
hensively than before is a consolidation of data flows. These data, obtained from
modern facilities from individual machine tools or entire production systems (such
as in the automotive or pharmaceutical industry), often still do not adhere to binding
standards. The interfaces of the individual systems are only partially designed for
comprehensive data exchange. In addition, sophisticated software and analysis sys-
tems are needed to convert these real-time available data into concrete action
impulses. To be more successful in this area, so-called data and transaction spaces
come into play for the control and monitoring of manufacturing processes.
A data space is an area where data is stored and processed. In manufacturing
processes, these are, for example, production data, sensor data, maintenance data,
quality data. This data can be collected at various levels. This can be done at the
machine or system level, at the process level, or at the company level. An important
aspect in the design of data spaces is the integration of various data sources to obtain
a holistic picture of the manufacturing process.
A transaction space is an area where transactions are processed. In manufactur-


ing, these are, for example, the ordering of materials, the transfer of production
orders, the booking of maintenance work, or the handling of payments. Transaction
spaces can also exist at the machine or system level, at the process level, or at the
company level. Transaction spaces are intended to ensure efficient and reliable pro-
cessing of transactions and thus smooth execution of production processes.

Note Box
Data spaces and transaction spaces represent a federated and open infra-
structure for sovereign data exchange. The data and transaction spaces are
based on common agreements, rules, and standards (see [38]).

In data and transaction spaces, digitization and networking play an important
role in enabling optimal control and monitoring of production processes.
Aspects such as the security of data and transactions, the scalability of systems, and
the integration of different systems and data sources are of central importance. If
data from various companies flow together in these data and transaction spaces,
virtual marketplaces can emerge. On these, data owners, data managers, and data
users can meet to exchange digital data.
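How data sovereignty in such a data space can be expressed technically is sketched below. The policy fields, the participants, and the values are illustrative assumptions and are not taken from a specific data space standard:

```python
# Minimal sketch: a data space entry with a usage policy attached by the data
# owner, who decides who may access the data, for what purpose and until when.
# Field names, participants and values are illustrative assumptions.

from dataclasses import dataclass
from datetime import date

@dataclass
class UsagePolicy:
    allowed_partners: set[str]
    allowed_purpose: str
    valid_until: date

@dataclass
class DataSpaceEntry:
    owner: str
    payload: dict
    policy: UsagePolicy

    def may_access(self, partner: str, purpose: str, today: date) -> bool:
        return (partner in self.policy.allowed_partners
                and purpose == self.policy.allowed_purpose
                and today <= self.policy.valid_until)

entry = DataSpaceEntry(
    owner="supplier_a",
    payload={"machine_id": "M17", "oee": 0.82},
    policy=UsagePolicy({"oem_x"}, "quality_monitoring", date(2026, 12, 31)),
)
print(entry.may_access("oem_x", "quality_monitoring", date(2025, 6, 1)))   # True
print(entry.may_access("oem_y", "quality_monitoring", date(2025, 6, 1)))   # False
```

The decisive point is that the policy travels with the data: access is granted or denied according to the rules the data owner has defined, not by the platform operator.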
To achieve success here, the Manufacturing-X Initiative was launched as a joint
effort by business, politics, and science. This is intended to advance the digitization of
supply chains in the industry and make the German industry fit for the data economy.
For this, companies from the industrial sector will network with each other. So far, data
could only contribute to transparency within individual companies, while data exchange
between different companies only took place on a point-by-point basis. The sharing of
data along Sharing within the supply chain was often difficult. With Manufacturing-X
this sharing becomes significantly easier and entire value creation networks can be bet-
ter coordinated. Companies can thus react early to disruptions in supply chains, adjust
production faster and avoid delivery delays. In addition, thanks to Manufacturing-X
completely new, data-based business models in the industry are possible. Consequently,
this collaboration enables digital innovations for more resilience and competitive
strength. Moreover, Manufacturing-X also represents an important step towards a
closed-loop economy and towards more sustainable production (cf. [39]).
At the core, data ecosystems are being built, which are based on open standards
and support a variety of applications. The goal is to create a functioning data space
Industry 4.0. This should ensure simple data networking and enable the multilateral
sharing of data. This can reorganize value creation networks. To achieve this, the col-
laboration of all actors in value creation is necessary. The initiative works with a part-
ner network that wants to build digital ecosystems across industries (cf. [39]).
To achieve such effects, German car manufacturers—by the way, based on the
rules of Gaia-X (see Sect. 2.4)—have built a network with their suppliers called
Catena-X. This is a collaborative, open data ecosystem for the automotive industry.
It networks international actors into continuous value chains. This enables a stan-
dardized, global data exchange. This should be guided by European values. An
indispensable claim is the assurance of data sovereignty. Whoever provides data for
this network also retains control over it. The data owner can decide who participates
in the data exchange how, when, where and under what conditions. The network of
Catena-X is intended to provide secure and reliable implementation (cf. [40]).
To deepen the relevance of such an approach, the advantages of an AI-based
Smart Factory that uses such data ecosystems are summarized below:

• Continuous, partially autonomous optimization of production processes according
to time and cost specifications, drawing on information that is available in real time
• High flexibility in manufacturing
• Reduction of time-to-market (e.g. for innovations or changed product require-
ments or for individualized offers)
• Reduction of inventory holding costs through automated ordering processes,
consumption-based supply of components and raw materials, and optimized
delivery processes for the finished products
• Enabling mass customization, i.e. manufacturing of individual pieces and
small series (almost) at the cost of mass production
• Transparency across the entire value chain, starting with the supply chain
through production to delivery to customers in real time
• Provision of relevant information for more sustainable action

However, in the development of smart manufacturing, the following disadvan-


tages of an AI-based Smart Factory must also be considered:

• High to highest complexity of processes due to the networking of various com-


ponents, different types of data, a multitude of interfaces, different software solu-
tions with possibly diverging update rhythms and standards
• Dependence on a few specialists or system providers, who can (still) manage
the complexity of a smart factory
• Necessity of a high “trust” in the quality of AI-based solutions (even autono-
mous systems can make wrong decisions, follow inappropriate rules or misinter-
pret data)
• Susceptibility to disturbances with high internal and external dependencies
• Risk of cyber attacks due to communication over the internet and many differ-
ent interfaces to other systems and applications
• Reduction of jobs for less qualified employees (can also be seen as an advan-
tage depending on the perspective)

Note Box
Even though many processes in a smart factory organize themselves and thus
a high degree of automation exists, employees play a significant role in terms
of planning, control, and optimization of corresponding cyber-physical
systems.
These processes are supported by artificial intelligence—but often not car-
ried out independently. Therefore, the establishment of a smart factory
must be preceded by the development of highly qualified employees!
3.1.5.2 Reshoring Through Smart Manufacturing


The described developments towards increasingly intelligent processes with less
personnel support a process known as reshoring. In contrast to offshoring,
it refers to the relocation of production processes from abroad
back to high-wage countries. The use of artificial intelligence in the production sec-
tor leads to the fact that “Made in Germany” suddenly pays off again. Companies
have already made new location decisions for the high-wage location Germany—
especially for highly automated production processes that require less personnel.
Even in the clothing industry, reshoring or at least nearshoring is being consid-
ered—because low-wage countries have largely exhausted their potential for cost
reduction and proximity to the main markets is becoming increasingly important in
terms of time (cf. [41]). Nearshoring refers to the relocation of production to coun-
tries that are closer to the home country.
To increase the resilience of supply chains, now—also in the context of smart
factories—friendshoring is being considered. This involves partnerships with com-
panies in countries that are more likely to be considered “friends” because they
share the same values and norms and are interested in a rule-based community. The
hope is that these can be relied upon even in times of crisis. However, it also
applies here:
Countries have no friends, only interests!
What makes steps towards reshoring possible today? In smart factories, person-
nel costs only make up a very small proportion of the manufacturing costs. Often it
is less than 5%. This makes it easier to bring production back to high-wage coun-
tries, which usually have a good hard and soft infrastructure (energy supply, logis-
tics, education, legal system etc.). Thus, the developments around smart factories
can lead to a reduction in the dynamics of global goods exchange, with production
settling closer to the target markets.
The following arguments for a reshoring are often discussed:

• Shortening of the lead time until market launch, to react faster to market
developments (this is especially important in the fashion sector)
• Reduction of dependence on complex supply chains (keyword resilience of
the supply chain)
• Reduction of inventory levels, which were previously necessary to increase the
resilience of a global supply chain
• Reduction of freight costs through cost-effective ground-based transport (trucks
or trains)
• Ensuring a consistent product quality or improvement of product quality by
involving performance partners who apply the same quality standards
• Access to qualified workers, who can be used more flexibly
• Reduction of dependence on critical developments in (distant) production
countries (including rising labor costs, uncertain exchange rate developments,
critical weather events, and political instabilities, such as political interventions
or loss of intellectual property)
• Positive Country-of-Origin effect with “Made in Germany” or “Made


in Europe”
• Financial incentives for industrial settlement
• Reduction of total operating costs through increased digitalization of
manufacturing
• Avoidance of financial risks, which arise from the requirements of the supply
chain due diligence law
• Opportunities to reduce harmful emissions through optimized logistics and
local production facilities

An increase in Smart Factories with a high degree of automation (including the
use of robots) in an industrial country leads to fewer production relocations abroad
(offshoring) and to more production being brought back (reshoring). However, reshoring will
not lead to a job boom. Many manual production steps, which led to offshoring
years and decades ago, will be taken over by robots in the future. At the same time,
the advance of smart factories is a response to the foreseeable shortage of workers
in countries like Germany. A very informative study on the topic of reshoring has
been presented by the European Parliament [42]. At the European level, this is pri-
marily about the discussion of strategic autonomy, to reduce or avoid dependence
on third parties in particularly sensitive areas. This is intended to achieve greater
resilience for Europe as a whole. This has been and still is lacking, for example, for
computer chips and many medications.

Note Box
With reshoring, efficiency is—partially—exchanged for greater resilience!

3.1.5.3 Examples of Companies for Reshoring


Which companies have already taken the step towards reshoring? And which com-
panies have decided to establish production facilities in Germany in recent years?
The company Gigaset manufactures telephones in Germany. In a state-of-the-art
Gigaset factory in Bocholt, humans and machines work hand in hand. Through an
Industry 4.0 production, the production processes become faster, more flexible, and
more effective. This production also allows the fulfillment of special requests. This
service is referred to as Production on Demand. The production facility has
already been awarded several times as “Factory of the Year” (see [43]).
The world market leader for pumps, Wilo, decided as early as 2015 to reunite
production, administration, customer service and product development at its head-
quarters in Dortmund in a new structure by 2022. An important step in this direction
was the establishment of the Smart Factory in 2019. This production facility laid
the foundation for more efficient production—supported by optimized material flow
and intelligent logistics. In parallel, the so-called Wilopark was developed. In this,
in addition to the Smart Factory, the Pioneer Cube for administration, the Solution
Cube for product development and the Networking Cube for dialogue with custom-
ers were located (cf. [44]).
For the Smart Factory, the company Wilo has developed a production and logis-
tics concept that covers all processes from the supplier to the customer. This con-
cept allows continuous planning and control of customer orders through a networking
of production and logistics processes and a close integration of suppliers. For this,
the machines and products were networked. This enables the real-time monitoring
of production processes throughout the entire production. This constant data
exchange also makes it possible to respond to short-term changes in demand. For this, the
ongoing production program is automatically adapted to the new requirements (cf. [44]).
To meet the customer’s desire for a higher individualization of products, cus-
tomers can in the future configure the desired products from certain modules online
themselves and order directly. The individual products and components carry
information with them so that they automatically register for the necessary processing
steps. In addition, the status achieved can be communicated. Furthermore, the employees in
the Smart Factory are supported by further digital tools. Through these various
measures of Smart Manufacturing, several goals are achieved at Wilo. On the one
hand, the required inventory can be reduced. On the other hand, production times
are shortened and individual customer wishes can be fulfilled. As a result, the
Overall Equipment Effectiveness increases (cf. [44]).
Bosch put into operation one of the most modern chip factories in the world in
Dresden in 2021 (cf. [45]). Here, silicon wafers go through the manufacturing pro-
cess fully automated for the first time. Wafers are thin slices of semiconductor mate-
rial, onto which integrated circuits are applied. For their production, a completely
digitized and highly networked semiconductor factory was built, which is also the
first AIoT factory (Artificial Intelligence of Things) from Bosch. To enable this
high-tech manufacturing, Bosch invested around one billion €. The achieved full
automation of manufacturing in connection with real-time data exchange between
the machines makes the chip production particularly efficient.
Through the use of the Artificial Intelligence of Things, the company creates
the prerequisites for a data-driven, continuous improvement in production. The
gigantic data stream from production processes can be continuously evaluated via
AI agents. This way, even the smallest product errors can be detected early, their
causes determined and eliminated before these errors could potentially compromise
the reliability of a product. This results in a significantly higher degree of process
stability. Consequently, the mass production of semiconductor products
can occur faster. In addition, a predictive maintenance can proactively provide
impulses for maintenance or readjustment of manufacturing machines or robots.
Additionally, the very complex production control is supported by AI. After all,
the goal is to navigate the wafers safely as well as time- and cost-effectively through
a production process with up to 700 steps at around 100 facilities (cf. [45]).

3.1.5.4 Status of AI Use in the Production Area


To determine the extent to which artificial intelligence has been integrated into
manufacturing, IDC [46] conducted a corresponding survey. As part of this study,
decision-makers and opinion leaders were interviewed, who deal with the digitiza-
tion of manufacturing processes. The respondents come from 650 companies
from various manufacturing industries in Europe and the USA. The most important
study results are presented below (cf. [46], p. 4 f.):

• The focus is on data-driven and AI-capable Smart Factory solutions


Companies that have so far hardly dealt with the topic of Smart Factory are now
showing growing interest in AI-capable and data-driven applications. These
include image-based error detection, autonomous transport vehicles, and the
digital mapping of production processes. Another key area of action is predictive
maintenance as well as comprehensive monitoring of production processes and
facilities. In addition, support for workflows through augmented reality tech-
nologies is planned. Through augmented reality technologies, manufacturing
employees can be provided with the information necessary for safe mastery of
workflows.
• Investments in the Smart Factory bring measurable effects
Through powerful data management, higher flexibility and agility in manufactur-
ing are achieved. A higher product quality leads to greater customer satisfaction.
In addition, new market segments can be opened up through more flexible pro-
duction. In the course of the study, it was determined that manufacturers were
able to increase their overall plant efficiency by 10%, while at the same time
saving an average of 10% of costs.
• High data availability as a success factor
However, it is not always easy to access the relevant data—preferably in real-­
time. The study revealed that so far only an average of 34% of production facili-
ties are networked. The networking of machines, which is what makes comprehensive
recording and evaluation of device data possible in the first place, is not yet well
developed. The existing machines and facilities can, however, be retrofitted with
appropriate technologies. Yet this often involves a high expenditure
of time and resources. In the case of a large inventory of plants, a gradual
approach is recommended.
For this, comprehensive cooperation between the IT and the Operational
Technology (OT) is indispensable. Operational technology includes the hard-
ware and software components that enable direct monitoring and/or control of
machines and production plants. Here, operational technology relies on the sup-
port of the IT department. However, the study also showed reservations about
integrating previously largely isolated production systems more deeply into the
company’s own IT environment. 51% of respondents expressed concern that the
OT systems would become more vulnerable to security gaps as a result.
• Lack of know-how as a bottleneck
The survey shows that many of the employees responsible for production and
logistics have little or no knowledge of predictive analytics (32%) or data analy-
sis in general and KPI dashboards (25%). 45% cite a lack of know-how in AI
applications and 44% see a lack of tools and platforms for data management and
evaluation.
• Avoid dependence on individual manufacturers
It is a dangerous undertaking to rely solely on individual providers when setting up


smart factories, especially if they offer proprietary systems. This reduces the poten-
tial for scaling. To prepare the entry competently, the involvement of manufacturer-­
independent consultants can be helpful. The aim is to analyze the entirety of the
production processes in order to achieve scaling throughout the company.

Based on these study results, the following recommendations for action for enter-
ing the field of smart factories are derived (see [46], p. 20):

• To increase the Overall Equipment Effectiveness, a comprehensive data man-


agement must be set up first. This should ensure that the relevant data—focused
in time and location—is made available.
• The entry should start with smaller projects or with feasibility studies. This
type of “finger exercises” helps to build know-how and trust in the technologies
of smart factories. Both are indispensable prerequisites for success. To achieve
this, application fields should first be opened up where data collection is less
complicated, data interoperability (possibility of data exchange) is ensured and
possible data silos can be more easily overcome. In addition, there must be
enough data to train the AI agents.
• In the course of these projects, machines and plants should be gradually net-
worked and IT and OT data or IT and OT processes should be merged and
harmonized. Various AI solutions should be used for data evaluation. For this,
solutions must be developed on the IT and OT sides (OT for Operational
Technology) to dispel legitimate security concerns.
• It is indispensable here—as in the other fields of application of artificial intelli-
gence—that the necessary know-how is built up quickly and, ideally, within the
company itself. No company should have to rely permanently on external consultants
to operate a Smart Factory. An initial collaboration with external partners in the
development and implementation of Smart Factory solutions ensures quick
results. However, the collaboration with the manufacturers of plants, with system
integrators and consulting companies should be thought of as help for self-help
and should only be temporary.

Note Box
If you want to keep up to date with the latest developments around the topic
of Smart Factory, you should take a look at the Demo Factory Aachen.
Here, the future world of production is being thought ahead.
https://demofabrik-aachen.rwth-campus.com/
Also, a visit to the Center for Efficient Factory Senftenberg shows the
direction in which developments will go.
https://www.b-tu.de/fg-fabrikplanung/zentrum-effiziente-fabrik/die-modellfabrik
Think-Box: Questions You Should Ask Yourself!


• What does the trend towards Smart Factory mean for my company?
• Which AI-based technologies can we access in the short term?
• Where would we see the greatest leverage for deployment?
• How far has the networking in production already been achieved in our
company?
• Which fields of application have our competitors already explored?
• Are there developments towards Smart Manufacturing in my industry?
• What are the consequences for our business model?
• How do we deal with the S-curve progression of new technologies in the
company—wait or co-design?

3.2 Summary

• Artificial Intelligence can unfold its comprehensive effects especially in the


production area.
• Therefore, the goals of a Smart Factory should not only be focused on efficiency
increases, cost reductions and quality improvements.
• A higher flexibility in production can support new business models.
• Your own digital value chain should be networked with upstream and down-
stream service partners.
• The development of a system of value chains can contribute to increasing the
switching costs of business partners and at the same time provide impulses for
new business models.
• The concept of predictive maintenance has already begun its triumph and will
continue to spread in the coming years.
• With predictive maintenance, new usage-based rather than ownership-based
business models can be developed.
• AI-supported D2M business models can be implemented—Direct-to-
Manufacturer.
• Smart factories represent a successful concept—today and even more so
tomorrow.
• All companies should deal with the possibilities of a comprehensive digitization
of production.
• Across all stages of the value chain, significant improvements in important KPIs
can be achieved through the use of AI.

3.3 Mobility and Transport Sector

3.3.1 Automation of Mobility

A major field of application for artificial intelligence is the various applications in


the mobility and transport sector. The challenges here are particularly great because
the still increasing globalization of production and goods flows is accompanied
by an increase in logistics tasks. Also, the increased urbanization and the triumph
of e-commerce promote the growth of transport services. At the same time, in many
regions of the world, the existing or under-construction infrastructure can no longer
cope with this volume growth. Therefore, in this sector intelligent logistics solu-
tions are in demand.
Artificial intelligence can also make a significant contribution to achieving better
and at the same time more sustainable mobility. AI-based systems allow the vari-
ous modes of transport to merge—new mobility services can emerge. Already
today—unnoticed by many users—AI algorithms have a major influence on our
daily mobility. The offers of sharing companies are optimized with the help of
AI. At the same time, more and more public transport services use AI systems to
safely transport users from A to B. This includes, for example, the transfer of pas-
sengers at airports, where trains already operate without drivers. The Dubai Metro even operates an entire network of more than 70 km with driverless trains.
Meanwhile, the Deutsche Bahn uses AI engines to limit the unimaginably high
number of delay minutes.

Note Box
In the context of AI and mobility, we must not only think about individual
mobility. The perspective is much broader!
Here, it is important to integrate all means of mobility and the associated
systems—including booking, payment, rebooking, complaint—in one appli-
cation. This applies to the end consumer market as well as to B2B solu-
tions alike.

AI systems play a special role in the automation of mobility. In a self-driving
vehicle, all the fields of application of artificial intelligence shown in Fig. 1.9 are
used simultaneously. Also, the access to mobility services will change. Today, sev-
eral apps are often still needed to plan the use of different means of transport. In the
future, there may be only a few providers who then think holistically about mobility
and gather all relevant information in one app to present us with an optimal solution.
In the future, all necessary processes will be controlled by voice—and the payment
will be handled by a digital assistant (see Sect. 4.2).
There are different forms of automated mobility to distinguish:

• Transport of material in closed systems, e.g. in production and storage environ-
ments of manufacturing facilities
• Transport of material outside closed systems, such as by robots on the ground or
with drones in the air, such as deliveries to businesses and consumers
• Transport of people in closed systems, e.g. in trains or buses, such as at airports
or on trade fair grounds and in amusement parks

• Transport of people outside closed systems, such as by autonomous vehicles on
the ground or air taxis in the air (often only in the testing stage today)

These concepts often involve several goals and expectations for autonomous
mobility. These are specified as follows (see also [47], p. 497):

• Reduction of resource consumption for transport through intelligent control
of traffic flows (e.g. optimized route and transport planning, ideal speeds)
• Reduction of emissions from the transport means used (especially less
exhaust, abrasion, noise etc.) through more efficient use
• Limiting the number of necessary transport means (vehicles, trains, air-
planes, drones, robots etc.), among other things, by a more economical equip-
ping of the transport means and optimized route planning
• Reduction of resource consumption for the transport capacities through a
more efficient use of the transport infrastructure (fewer roads, rails, ports, air-
ports, parking spaces, storage capacities etc.)
• Support for people with limited mobility
• More efficient use of time during transport by the travelers
• Significant decrease in accidents—ideally to “zero”

For example, Uber uses AI in its food delivery service UberEats to calculate the delivery
time taking into account traffic, vehicle position, and the type of ordered food. In
addition, machine learning is used to generate restaurant recommendations and
integrate popular payment options. This is intended to make transactions quick
and easy.
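
How such a delivery-time estimate can be approached is sketched below in Python with scikit-learn. The features and training values are invented for illustration; this is not Uber's actual model, only a minimal example of a regression learned from traffic, distance and preparation time.

from sklearn.ensemble import GradientBoostingRegressor

# Each row: [traffic index 0-10, distance to customer in km, preparation time in min]
X = [
    [2, 1.5, 10],
    [7, 4.0, 20],
    [5, 2.5, 15],
    [9, 6.0, 25],
]
y = [14, 38, 24, 52]  # observed delivery times in minutes (fictitious)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
print(model.predict([[6, 3.0, 18]]))  # estimated delivery time for a new order
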
To achieve these goals and meet expectations, a variety of AI technologies are
used for demand analysis and demand forecasting as well as for optimal use of
available logistics capacities. However, it should also be pointed out which reserva-
tions about autonomous mobility exist (see also [47], pp. 497–499):

• Greater convenience of autonomous mobility can lead to an increase in traf-
fic volume.
• Lower requirements for autonomous driving (e.g. no driver’s license, no own
vehicle required) and the increase in traffic volume enable the development of
further target groups for use.
• The development of further target groups can lead to an increase in the number
of autonomous vehicles, unless there is a comprehensive focus on sharing econ-
omy offers.
• Lower prices for autonomous driving can lead to a further increase in demand
as well as to a higher resource consumption.
• The convenience of individual autonomous mobility promotes a cannibalization
of public local transport.
• Autonomous mobility increases the attractiveness of peripheral residential loca-
tions as well as distant shopping and workplaces and can lead to an increase in
the number and length of trips.

• Intelligent control of traffic flows presupposes a high penetration of traffic with autonomous vehicles.
• Until such a penetration is achieved, the complexity of traffic planning increases, because autonomous and non-autonomous vehicles share the public space.
• With autonomous vehicles, there is a high risk of cyber attacks.
• Autonomous vehicles can lead to a reduction in jobs for vehicle drivers (truck
drivers, taxi drivers, courier drivers etc.).

Note Box
Autonomous mobility can lead to rebound or boomerang effects. The poten-
tial savings from autonomous mobility can be overcompensated by an increas-
ing demand for corresponding offers.

3.3.2 Developments of Autonomous Vehicles

Autonomous vehicles and aircraft play a special role in autonomous mobility. The
development towards an autonomous vehicle takes place in various stages, as
Fig. 3.20 shows.

• Level 1: Driver Assistance
The driver of a vehicle is merely supported by assistance systems. For example,
warnings are given if the vehicle leaves the lane or approaches too closely. A
cruise control automatically maintains the chosen speed. The driver is fully
responsible and acting alone in every situation. Therefore, he must constantly
keep an eye on the traffic.
• Level 2: Assisted Driving—Partial Automation

Level 1 (support for the driver): The vehicle can assist the driver with selected functions, such as steering or accelerating and braking. Strong human intervention.
Level 2 (partial automation): The vehicle takes over selected functions; the driver must be ready to take control at all times and is responsible for most safety functions. Strong human intervention.
Level 3 (high level of automation): The vehicle has full control under defined conditions; the driver takes over the control if necessary. Selective human intervention.
Level 4 (conditional full automation): The vehicle is designed to fulfill many safety-critical functions and react to many dynamic driving situations; only minimal human intervention is required. Selective human intervention.
Level 5 (full automation): The vehicle can move independently in all situations that are normally mastered by a human driver, without the need for a driver. No human intervention.

Fig. 3.20 Development of autonomous driving

Here, for example, a parking assistant and a lane keeping assistant actively
support the driver by taking over parking or lane keeping independently. A traf-
fic jam assistant can take over vehicle control in traffic jam situations. Even
here, the driver must constantly control his vehicle and keep an eye on the traffic
for this. Only under defined conditions does the vehicle act autonomously, keeps
the lane, brakes or accelerates. In these situations, the driver can briefly take his
hands off the wheel. However, he must always monitor the assistance systems
and intervene in case of malfunctions.
• Level 3: Highly Automated Driving
In the highly automated mode, the driver does not need to constantly monitor the
system. The vehicle independently performs selected functions within a limited
period and under suitable conditions specified by the manufacturer. These
include, for example, overtaking, braking and accelerating, as well as maintain-
ing a lane. While driving, the driver can engage in other activities. If necessary,
the system alerts the driver with a warning time to immediately take over the
vehicle’s control again. Level-3 cars will initially be on highways because the
complexity to be managed here is lower. For instance, there is—usually—no
oncoming traffic. Also, the highways are fully captured on digital maps.
• Level 4: Fully automated driving in certain areas
Here, the system temporarily takes over the control of a vehicle. Only when the
system can no longer handle the driving tasks is the driver asked to take over the
vehicle’s control. The driver thus becomes a passenger for a longer time while
the vehicle independently takes over certain tasks. This could be highway driving
or searching for a parking space in a parking garage. At the end of a fully auto-
mated drive, the occupants can take over the vehicle’s control again. If the pas-
sengers are unable or unwilling to do so, the car must independently reach a safe
state (such as a parking space). Such a vehicle may also drive without occupants.
The system recognizes the limits of its own performance in time to avoid dangers.
• Level 5: Fully automated driving—autonomous mode
Here, the system permanently takes over the control of a vehicle. A driver is no
longer required. Except for setting the destination and starting the system, no
human intervention is required. The vehicle does not need a steering wheel and
pedals—there are only passengers and no driver. The system is capable or bet-
ter—should be capable—of mastering all traffic situations independently.
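
For software that manages mixed fleets, the levels described above can be represented compactly, for example to decide whether a human driver still has to be available. The following Python sketch only illustrates the level logic from Fig. 3.20; it is not an excerpt from any standard.

from enum import IntEnum

class AutomationLevel(IntEnum):
    DRIVER_ASSISTANCE = 1        # assistance only, driver fully responsible
    PARTIAL_AUTOMATION = 2       # selected functions, driver must monitor constantly
    HIGH_AUTOMATION = 3          # system drives under defined conditions
    FULL_AUTOMATION_LIMITED = 4  # no driver needed within approved operating areas
    FULL_AUTOMATION = 5          # no driver needed at all

def driver_required(level: AutomationLevel) -> bool:
    # From level 4 upwards the vehicle must reach a safe state on its own,
    # so no human driver has to be on board.
    return level < AutomationLevel.FULL_AUTOMATION_LIMITED

print(driver_required(AutomationLevel.HIGH_AUTOMATION))  # True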

The company Waymo, which belongs to Alphabet, started developing a self-driving
car as early as 2009. Today, the development of the autonomous driving technology
called Waymo Driver is at the heart of the company. The Waymo Driver is already
considered the most experienced driver in the world today. This AI system has
already covered millions of kilometers in countless situations on public roads. In
addition, billions of kilometers have been completed in driving simulations. Based
on this data, the technology for autonomous driving is continuously being devel-
oped. The AI agent Waymo Driver can be used in various vehicles. The range here
extends from minivans and SUVs to trucks (see [48]).
The Waymo Driver aims to implement Level 5—the autonomous mode. This
should take full control from pickup to destination. The passengers here do not even
need to know which route the vehicle is taking. The prerequisite for safe navigation
is a comprehensive mapping of the usage area. This includes not only road markings
and stop signs but also curbs and pedestrian crossings. In addition to external data
such as GPS, the Waymo Driver uses detailed maps that are matched with sensor
data in real time. This allows the exact road position to be determined at any time.
In addition, data on pedestrians, cyclists, vehicles, construction sites, traffic light
colors, temporary signs, and more are collected and interpreted using artificial intel-
ligence. The aim and requirement is to predict situations before they happen. While driving, hundreds of moving objects may be encountered, whose behavior patterns differ. In
order for a correct recording of the environment to succeed, three different systems
are combined in autonomous driving:

• LiDAR stands for Light Detection and Ranging. Here, light in the form of a pulsed laser is used to detect and classify objects. LiDAR sensors are distributed around the vehicle. They send millions of laser pulses in all directions and measure how long it takes for them to bounce back from objects. This creates a 3D image of the vehicle environment—regardless of the time of day. This allows traffic lights, construction sites and other objects to be reliably detected even in poor light conditions (a minimal time-of-flight calculation is sketched after this list).
• In addition, cameras allow a 360° view around the vehicle.
• Radar uses millimeter wave frequencies to determine other important details.
These include the distance and speed of an object. It is crucial that radar also
provides reliable data in rain, fog and snow.
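
The distance measurement behind LiDAR is conceptually a time-of-flight calculation: the sensor measures how long a light pulse needs for the round trip and converts this into a distance. A minimal illustration in Python (the pulse duration is an invented example value):

SPEED_OF_LIGHT_M_S = 299_792_458

def distance_from_round_trip(round_trip_seconds: float) -> float:
    # The pulse travels to the object and back, so the path is divided by two.
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A pulse returning after 200 nanoseconds corresponds to roughly 30 m.
print(round(distance_from_round_trip(200e-9), 2))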

The Waymo Driver evaluates this real-time information—and processes it using AI,
based on the experience gained from test drives and simulations. This is intended to
predict the behavior of other road users. Based on these assessments, the vehicle's own movement pattern is calculated—including route guidance, speed, lane and steering maneuvers. Altogether, this should enable safe autonomous driving (see [48]).

• In Germany, the Autonomous Driving Act was passed in 2021. This allows the
use of autonomously driving vehicles outside of mere test operation. At the core of the law are the changed rights and duties of the vehicle driver within an auto-
mated driving phase. Under certain conditions, automated systems (Level 3) can
take over the driving task. A driver is still necessary at this stage. However, this
driver may turn away from traffic and vehicle control in automated mode (see
[49]). This is the big difference to the previously used assistance functions,
which still require full attention of the vehicle driver.
• However, this law primarily addresses commercial providers of mobility ser-
vices. After all, autonomous vehicles can only be used in previously approved
operating areas. This makes it possible to align the vehicles to specific usage
areas. These include e.g. feeder trips at airports and on trade fair grounds. But
also in city centers, autonomous vehicles can be used. To receive a permit for this
in Germany, it must be proven to the Federal Motor Transport Authority that the
vehicles are capable of independently mastering journeys in the defined operat-
ing area. In order to react appropriately even in dangerous situations, the AI must
weigh up alternative damage options. The protection of human life has the high-
est priority here (see [50], p. 16).
• An exemplary application of these possibilities is shown by the elevated railway
[51] in Hamburg. There, in 2019, a research and development project named
HEAT was launched to test the use of autonomous driving as part of public trans-
portation. HEAT stands for Hamburg Electric Autonomous Transportation. This
project in Hamburg was one of the first concepts to implement autonomous driv-
ing in public transportation on a predefined route—without planned intervention
by the driving staff. As early as October 2020, passengers were driven autono-
mously through the HafenCity at speeds of up to 25 km/h. From summer 2021
until the end of the project, HEAT was in operation with passengers on a route of
1.8 km with five stops.
• To achieve safe navigation, the three subsystems already described were also
used in this project. In the vehicle itself, perception was achieved through cam-
eras as well as radar and laser measurements. In addition, masts with sensors
were erected along the test track to extend the field of vision of the buses and
enable anticipatory driving even at higher speeds. Another element was the mon-
itoring by a special control center. This was responsible for incident manage-
ment—intervening in critical situations (cf. [51]).

Bitkom Research [52] provides interesting results on the question of whether people
can generally imagine using various autonomous vehicles. For this purpose, 1003
people aged 16 and over in Germany were interviewed by telephone. This survey is
representative of the total population. An artificial intelligence at the wheel no longer prevents people in Germany from trusting an autonomously driving vehicle:

• 99.8% of respondents can generally imagine using an autonomous means of
transport. This value was still at 93% a year ago. Two years ago it was even
only at 77%.
• In the eyes of the respondents, it is most feasible to do without a human in the
driver’s cab in the case of subways and suburban trains. 73% would get on
autonomously driving trains. 66% would use autonomous buses.
• 62% could imagine using a self-driving private car, a self-driving mini shuttle
bus or a self-driving taxi.
• 59% would also use autonomously operating regional or long-distance trains.
• However, only 32% would entrust themselves to an autonomous ship.
• Only 30% would use an autonomously flying airplane.

However, most respondents are also aware that in these fields no quick break-
through is to be expected (see [52]):

• Only 13% believe that in Germany in 10 years more autonomous cars will be
registered than conventional ones.
• 18% expect this only in 15 years. 31% even only in 20 years.

Fig. 3.21 Would you use an autonomous car? Responses (yes, no, don't know) as shares of respondents, broken down by total, men, women, up to 39 years and more than 40 years (data source: [53], p. 3)

• 22% expect a breakthrough only in 25 years.
• Another 12% assume that a breakthrough will not even be likely by then.

Aral conducted a survey in 2021 to gauge the interest of German car drivers in the
use of autonomous vehicles. A total of 1000 car drivers aged 18–65 years were
surveyed. The results are shown in Fig. 3.21 (see [53], p. 3). The acceptance and
rejection of using autonomous vehicles is balanced at 30% and 33% respectively
and is therefore lower among the sub-target group of car drivers than in the Bitkom
study. However, a large proportion of respondents are still unsure. Only among men and among respondents up to 39 years of age is the willingness to take an autonomous ride somewhat more pronounced.
The results show that there is still some way to go before autonomous offers such as Transportation as a Service (TaaS) or Mobility as a Service (MaaS) meet broad acceptance. At the moment, however, there is still a lack of convinc-
ing solutions. With positive reports about the experience of autonomous driving, the
acceptance of this new technology will continue to increase in the future.

3.3.3 Innovative Logistics Solutions

Autonomous vehicles enable further innovative logistics solutions. To achieve an
intelligent link between the different vehicles in use, many logistics companies have
now set up their own AI research departments. These are intended to contribute to
optimizing the customer experience through autonomous driving. For example,
arrival times can be predicted more accurately. In addition, a passenger can be
offered exciting additional services via voice control. In addition, on-demand
transport services allow for dynamic pricing, which is not only based on supply
and demand, but also on the respective traffic volume. AI-based solutions have the
potential to significantly increase the convenience of use and accelerate
development towards the mass market. In addition, the available resources can be
better combined and thus used more efficiently.
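
A toy illustration of such a dynamic-pricing rule is sketched below in Python; the weighting of demand, supply and traffic as well as all numbers are assumptions made purely for illustration and do not reflect any provider's actual pricing logic.

def dynamic_fare(base_fare: float, open_requests: int, free_vehicles: int,
                 traffic_index: float) -> float:
    # The fare rises with the demand/supply ratio and with the current traffic level.
    demand_factor = max(1.0, open_requests / max(free_vehicles, 1))
    traffic_factor = 1.0 + 0.05 * traffic_index  # e.g. index 0 (free roads) to 10 (jam)
    return round(base_fare * demand_factor * traffic_factor, 2)

print(dynamic_fare(base_fare=8.0, open_requests=30, free_vehicles=12, traffic_index=6))
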
Usage behavior will change not only among private individuals, but also in the upstream logistics chains. In addition to autonomously driving trucks, other autonomously driving transport vehicles can be used. The latter are also referred to as automated guided vehicles (AGVs). These are used not only in intralogistics (i.e.,
within a company), but also in extralogistics (outside the boundaries of a company).

Food for Thought


This development requires that the product liability law be aligned with
digital products (see the following [54], p. 16). After all, the manufacturer’s
liability increases in many places due to digitalization: unlike with conventional products, a manufacturer of autonomous vehicles retains control over its products far beyond the point of putting them on the market. The EU Commission therefore first expands the product concept, which was previously limited to physical objects. Software as well as construction instructions for 3D printing are now also to be included,
so that product liability also includes these digital solutions. In addition, lia-
bility should also apply to digital services if they are integrated into the prod-
uct or connected to it. This would also apply to autonomous vehicles for the
uploading of weather, position, and traffic data.

This context even requires a dynamic understanding of the error concept: even if the product was error-free at the time of putting it on the market, it can become
faulty, for example, through software or data updates. At the same time, there
can be a duty for updates if this can eliminate errors that have been discov-
ered in the meantime. This makes one thing clear:
The responsibility and thus also the liability of the manufacturer are
massively expanded by these developments.

AI technologies have long been integrated into aircraft technology. Artificial
Intelligence increasingly supports the pilot as an autopilot, predicts delays, corrects
errors during flight, and is also involved in the process of predictive maintenance.
Now more and more drones are conquering the sky—not only in private hands. A
drone is an unmanned air or underwater vehicle, which is either remotely controlled
by humans or controlled by an integrated or external system. A drone can there-
fore be partially or fully autonomous (see [55]).
Autonomously flying transport vehicles pose a particular challenge. They can
be used not only for the transport of goods, but also for people. Autonomously fly-
ing transport vehicles can sometimes be developed more easily than autonomous
ground transport vehicles. On the one hand, the airspace is already heavily
monitored today. On the other hand, humans as another road user are largely elimi-
nated as an “unpredictable disruptive factor” in the air—apart from gliders and kite
flyers as well as parachutists. In the development of autonomous flying transport, however, there are many regulatory and societal challenges to overcome. The range of tasks to be accomplished extends from safe autonomous navigation to the
placement of landing sites to the prevention of crashes.
In contrast to autonomously acting vehicles, autonomously flying transport
means are already extensively in use. A particular challenge for the use of drones
for transport, however, is the handover of goods. Various concepts are used for this:

• Construction of drone ports
Drones can take off and land at such ports. Here they take over goods from a
stationary logistics chain or hand over goods to it.
• Dropping of goods
Alternatively, well-packaged goods can also be dropped in a defined target area
(e.g., for supplying the population in the event of natural disasters).
• Lowering of goods
For sensitive goods, these can be lowered on site from the drone.

The following examples show that these variants are already being used in various drone
projects. After all, the use of drone delivery services is already being intensively
tested in various regions of the world.

• In Singapore, drones are already being used today for the delivery of food.
• In several European countries, delivery service providers are testing the deliv-
ery of letters and packages by drone. It was important in advance that the nec-
essary legal framework conditions were created for this.
• In Japan, the company Rakuten is testing the drone delivery service. For this
purpose, Rakuten has developed the multicopter drone Tenku. This allows a fully
automated flight—from takeoff to landing. Safety is increased by a newly devel-
oped parachute. There is a loudspeaker announcement at takeoff and landing.
The control is done via a dashboard. With this, the flight of the drone can be
monitored. A special management system makes it possible to check and manage
the entire process from ordering to delivery in real time (see [56]).
• At the end of 2022, Matternet installed one of the world’s longest urban drone
delivery routes. The new test track connects various hospitals in Zurich and
shows the potential for more efficient patient care through demand-oriented
delivery of diagnostic samples. The test operation runs on a five kilometer long
BVLOS route (beyond-visual-line-of-sight). Matternet drones transport the
medical goods with a secure end-to-end custody chain and operate autono-
mously with remote monitoring by the Mission Control Center of Matternet in
Zurich. The use of drones reduces the transport time between the two hospitals
and their laboratories to just seven minutes. By reducing the number of delivery
vehicles on the roads, higher sustainability is achieved at the same time—com-
bined with an improvement in the urban CO2 balance (see [57]).

• The Irish company Manna is now carrying out 50–100 drone deliveries per day
in a pilot phase in Galway, Ireland. For this purpose, Manna has set up a drone
launch platform on the roof of a Tesco store. From here, orders received by Tesco
and other companies are delivered. The average time for a drone delivery from
launch is about three minutes. This allows food and drinks to be delivered not
only contactless, but also hot. The drones travel at a speed of 80 km/h at a height
between 50 and 80 m.
• When customers place an order, they also specify a certain area on their property for
the drone to fly to. To support this navigation, customers select the desired landing
area on a Google Maps view of their property in an app. When the drone
arrives, it hovers 15 m above this spot and lowers the food on a biodegradable line.
Such a Manna drone can carry out between six and seven deliveries per hour
(see [58]).
• Walmart is also testing drone delivery for selected household goods and food.
These packages are delivered by drones from the Israeli company Flytrex. The
packages are dropped from a height of about 20 m in the target area, thus avoid-
ing a landing.
• At the end of 2021, Walmart announced its first commercial drone deliveries
from drone airports. The deliveries take place from 8:00 to 20:00 7 days a week.
Selected Walmart customers can thus be supplied from the air in just 30 min. By
2023, 36 Walmart stores in seven US states already had drone delivery centers.
These are operated by DroneUp, Flytrex and Zipline. For 2022, Walmart reports
more than 6000 completed drone deliveries—but only a minimal share of total
sales. The top five products delivered by drones are Great Value Cookies, ice cream,
a bag of lemons, grilled chicken, Red Bull and Bounty paper towels. However,
Walmart already offers up to 20,000 items today, which can be delivered by drone.
• The use of drones is obvious for Walmart. The company already has a large part
of the necessary infrastructure. This includes the 4700 stores with more than
100,000 of the best-selling items. Within a radius of 10 miles of these stationary
stores, 90% of the US population is located. Therefore, ground-based automatic
delivery is also to take place. For this purpose, fully electric autonomous vehicles
from the company Cruise are to be used (see [59]).
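
Whether a customer address lies within such a delivery radius is a simple great-circle calculation. The following sketch uses invented coordinates and is only meant to illustrate the eligibility check, not any provider's actual routing logic.

from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points on the Earth's surface.
    earth_radius_miles = 3958.8
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_miles * asin(sqrt(a))

store = (36.16, -86.78)      # invented store location
customer = (36.21, -86.72)   # invented customer location
print(haversine_miles(*store, *customer) <= 10)  # True: within the delivery radius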

In a more comprehensive use of drones for logistics tasks, overcoming obstacles
when flying in an urban environment is of great importance. In addition, legal
restrictions on the use of airspace (at security-relevant facilities, such as near air-
ports or over military installations) as well as general safety aspects must be consid-
ered. Finally, the drones must be well protected against hacker attacks to not become
a security risk.

Food for Thought


How quickly could ever more impatient customers be satisfied if airships were circling over the megacities of this world to supply customers by drone from these flying warehouses?

Fictional Reading Tip


Where an intensive use of drones can lead can be read in the very exciting book "Drone Land" by Tom Hillenbrand.

3.3.4 Predictive Risk Intelligence

A particularly innovative and relevant field of application for artificial intelligence
is Predictive Risk Intelligence (PRI). The field of action of Predictive Risk
Intelligence can make an important contribution to ensuring that companies have
processes to understand past risks (e.g. in the supply chain). Then future risks can
be more easily predicted—and the companies can prepare accordingly. The follow-
ing types of risk management can be distinguished:

• Reactive Risk Management
This approach, also known as After Fact Approach, is about managing risks
that have already occurred. These are to be mastered with prepared contingency
plans. At the same time, similar events in the future are to be prevented. This
approach tends to identify risks that have already occurred in the past.
• Ongoing Risk Management
In this approach, risk monitoring is already integrated into the business processes. This allows risks to be identified in real time, so that quick reactions are possible. The occurrence of risks is determined based on various criteria with predefined threshold values and ideally avoided (a minimal sketch of such threshold monitoring follows after this list).
• Proactive/predictive Risk Management
In this concept, also known as the Before Fact Approach, current and historical
data are used to identify future risks early on. This is the area of application for
Predictive Risk Intelligence—a kind of anticipatory risk avoidance.
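
The threshold logic of ongoing risk management mentioned above can be illustrated in a few lines; the metrics and limits below are invented examples, not a reference implementation.

THRESHOLDS = {
    "supplier_delay_days": 5,
    "open_customs_cases": 10,
    "port_congestion_index": 70,
}

def check_risks(current_metrics: dict) -> list:
    # Compare each monitored metric against its predefined threshold value.
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = current_metrics.get(metric, 0)
        if value > limit:
            alerts.append(f"{metric}: {value} exceeds threshold {limit}")
    return alerts

print(check_risks({"supplier_delay_days": 8, "port_congestion_index": 40}))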

Predictive Risk Intelligence relies on AI solutions to identify future risks much ear-
lier and assess their impact on the company. To achieve this, additional data sources
that have not been considered in risk management so far need to be tapped. This can
significantly increase corporate resilience.

Example: Prediction of Strikes by Evaluating Twitter Data


A major challenge for companies is to identify disruptions in the supply chain
as early as possible. This includes, for example, strikes—whether by ware-
house workers, sailors, pilots, air traffic controllers, security personnel or
truck drivers. In the past, primarily press reports and other macroeconomic
data were evaluated for this purpose. Based on this information, only ongoing
risk monitoring was usually possible. The lead time for risk management was
therefore rather short.
Today, for predictive risk management, additional, very exciting infor-
mation is available. Companies can, for example, access publicly available
posts on Twitter or Facebook to identify disruptions in the supply chain. The
necessary data for this is made accessible for analysis by many online plat-
forms via corresponding interfaces.
Of course, it is very easy to predict the likelihood of strikes when the cor-
responding keywords already appear in the posts. However, the strength of the
relevantly trained AI solutions lies in recognizing patterns in the data that indi-
cate strikes and similar risks before concrete steps become visible. To achieve
this, AI applications match data on historical strikes with social media data
from the corresponding period. In doing so, they look for linguistic patterns
that can serve as predictors. Predictors are variables or factors used to make
predictions about future events, trends or behaviors. In this case, these could be
terms like “staff cuts”, “job cuts”, “overtime”, “poor working conditions”, etc.
that accumulate in certain regions before a strike actually breaks out. If such
patterns are recognized, risk warnings can be triggered automatically.
The better it is possible to predict upcoming strikes, the earlier alternative
routes or sources of supply can be identified or production planning can be
adjusted. This form of early detection of such risks increases corporate resil-
ience. Resilience.
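
The core idea of the example above, learning which wording in posts preceded past disruptions, can be sketched with a standard text classifier. The training posts and labels below are fabricated toy data; a productive system would of course be trained on far larger, historically matched data sets.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "union announces vote after months of overtime and staff cuts",
    "poor working conditions at the port, colleagues talk about walking out",
    "great shift today, the new canteen menu is excellent",
    "company picnic next weekend, families welcome",
]
labels = [1, 1, 0, 0]  # 1 = post preceded a disruption, 0 = no disruption

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

new_post = ["drivers discuss job cuts and unpaid overtime, strike ballot planned"]
print(model.predict_proba(new_post)[0][1])  # estimated disruption probability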

Predictive Risk Intelligence can also be used in other areas. For example, in the
financial services sector, it can be checked whether employees show behavior pat-
terns that indicate compliance risks. A Before Fact Approach can help here to avoid
reputational damage to the company and further regulatory requirements.

Think-Box: Questions You Should Ask Yourself!


• What use cases have we already analyzed in our mobility and trans-
port sector?
• Where do we see the best approaches to optimizing our processes?
• Have we already dealt with autonomously acting logistics solutions (e.g.,
drones)?
• What is the level of knowledge about Predictive Risk Intelligence in our
company?

3.4 Summary

• With autonomous vehicles, artificial intelligence has the potential to significantly reduce traffic jams and accidents caused by human error, while also contributing to sustainability.
• However, it will take many years before autonomous vehicles have reached lev-
els 4 and 5.
• In addition to an optimization of ground-based logistics chains, the airspace
for logistics tasks will gain importance.
• The drone delivery is currently being intensively researched in many areas of
the world.
• In addition, the existing logistics infrastructure can be better utilized with the
help of AI, thereby reducing resource consumption and avoiding emissions.
• A simultaneous increase in delivery volume—especially to private house-
holds—could reduce or even overcompensate such effects.
• Such rebound or boomerang effects must be taken into account when evaluat-
ing these technologies.
• Predictive Risk Intelligence helps companies to identify risks as early as possible in order to increase their own resilience and to avert damage to the
company.

3.5 Agriculture

3.5.1 AI Applications in Agriculture

Agriculture represents a specific production area. Various AI systems are already being used here. The most important fields of application are discussed
below. The range extends from the AI-supported detection of pests and plant
diseases to the use of drones and satellites to predict crop yields. For this, the most
diverse processes on a large agricultural area must be monitored and controlled. At
the same time, the development of the weather (sun, wind, rain, temperature) and
the behavior of birds, insects, and the farm’s own livestock when using fertilizers
and plant protection products must be taken into account. The multitude of (obtain-
able) data and the complexity of their linkage is a rewarding field of application for
artificial intelligence. For the sorting of good and bad fruit and vegetables in agri-
cultural operations, AI-supported systems for quality control are also used.
All these measures can and should contribute to increasing the crop yields to
feed the growing world population. After all, by the year 2050, about ten billion
people will live on Earth. At the same time, ideally, higher environmental standards
should be taken into account to avoid a further increasing overuse of natural
resources such as soil and water (see in depth [7]).
According to BI Intelligence Research by Forbes, the global spending on intel-
ligent, networked agricultural technologies and systems will triple by 2025 and
reach US$ 15.3 billion. The spending on AI technologies and AI solutions in
agriculture will increase from US$ 1 billion in 2020 to about US$ 4 billion in 2026.
This corresponds to an annual growth rate of 25.5%. The largest growth is expected
in the IoT-supported monitoring in agriculture (IoTAg) (see [60]).

• Continuous and comprehensive monitoring of soils and plants
An indispensable prerequisite for sustainable use of soils in agriculture is the
determination of nutrients in the soil. These nutrients affect the health of the
plants and thus also the quantity and quality of the crop yields. The development
of the plants shows how well the interaction between the plants themselves and
soil, climate, and applied fertilizers and plant protection products has succeeded.
In the past, the monitoring of soil and plants was carried out by humans. However,
this approach is often not only very time-consuming, but also prone to errors.
Today, drones are often used for the intelligent monitoring of soils and plants
(see [61]). The data obtained from aerial images are evaluated by special AI
agents. In this way, the health status of the plants—even on large cultivation
areas—can be completely monitored. A potential under- or over-supply of plants
can often be detected more quickly and spatially precisely. The data obtained
from aerial images about the respective growth status of the plants are also used
to forecast crop yields.
Also, the monitoring and estimating of the growth and maturity of plants is
very labor-intensive. Here, AI agents can provide support. This also applies to
analyses of soil quality. Previously, samples had to be evaluated in laboratories.
Now, AI systems have already been trained to evaluate the image data from sim-
ple microscopes (see [62]).
• Detection and control of pests and plant diseases
The use of AI systems also allows for a detection of plant diseases and pests.
The challenge here is to detect diseases or a possible pest infestation at an early
stage—for example, in a remote corner of the cultivation area. Before the dam-
age can affect the entire area, countermeasures can be initiated early and focused.
It is essential that the AI models are trained to detect early signs of disease and
the harbingers of a pest infestation. This also includes the ability to distinguish
and count occurrences of bees, flies, mosquitoes, and moths using image recog-
nition applications.
Precise localization of pests and diseases is a prerequisite for being able to com-
bat them very specifically. AI-controlled drones can apply pesticides and fertil-
izers in a targeted manner. Here, entire fields no longer need to be covered, but
only the affected areas. Since the application is based on real-time data, the drone
sprayers can apply the amount to be sprayed with high precision. This also con-
tributes to more sustainable agriculture.
• Monitoring of animal health and welfare
AI systems are increasingly being used for monitoring of livestock. The use of
overhead cameras allows for the monitoring of the health status of cattle. This
allows a large number of cattle to be monitored remotely—and even in real time.
Special alarm functions can inform farmers as soon as a problem arises—and in
real time. Through the image processing of AI agents, animals can also be reli-
ably counted on large areas. In addition, diseases can be detected that manifest
themselves through unusual behaviors of the animals. If the animals are chipped,
many other vital parameters can be evaluated in addition to the daily activity
level and food intake. The monitoring of births can also be supported by AI. For
this, the data obtained from cameras and drones are evaluated (see [60]).
• Automatic weeding of weeds
An indispensable prerequisite for automatic weeding is the reliable identification of weeds and thus their distinction from crop plants. The ability to
physically remove weeds saves the farmer a lot of work and at the same time
reduces the need for herbicides. The great challenge, however, is to pass on
knowledge about existing weeds to a robot so that it can then specifically remove
the weeds. Various applications are already being tested for this purpose. In some
cases, weeding is combined with the measurement of soil moisture and selective
irrigation. Here we are talking about AI-controlled agricultural robots (cf. [62]).
• Quality control of agricultural products
A very labor-intensive process in agriculture is still quality control. Here, the
good products have to be distinguished from the less good ones. AI-supported
image processing can automate the sorting and classification process of fruits
and vegetables in terms of size, shape, color, volume, and overall quality.
Applications have repeatedly shown that the achieved accuracy and speed far
exceed those of trained specialists (cf. [62]).
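
The image-based detection tasks described in this list, from pest and disease detection to quality sorting, are in essence supervised classification problems. The following deliberately simplified sketch uses invented colour features instead of full images; productive systems work with deep neural networks trained on large labelled photo collections.

from sklearn.ensemble import RandomForestClassifier

# Features per leaf photo: [share of green pixels, share of brown pixels] (invented)
X = [
    [0.82, 0.03],
    [0.79, 0.05],
    [0.45, 0.38],
    [0.50, 0.31],
]
y = ["healthy", "healthy", "diseased", "diseased"]

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([[0.55, 0.25]]))  # probably classified as "diseased"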

Note Box
The use of AI in agriculture makes it possible to react much faster to pest
infestation as well as possible under- and over-supply of the plants. In addi-
tion, the measures can be much more focused. For this purpose, plant protec-
tion products or fertilizers are only applied by drones in narrowly defined areas.

Throughout human history, new technologies have always been used to improve
the quality and the quantity of yields in agriculture. Without the results achieved in
this way, we would not be able to currently feed about eight billion people. The use of
AI offers the opportunity to achieve further quantitative growth not primarily through
further deforestation of primary forest, but through a more efficient use of existing
resources—and through a more selective use of plant protection and fertilizing agents.
It is to be hoped that the use of AI will not only benefit large companies, but that smaller agricultural businesses can also profit from the developments that are emerging
here. For this, farmers also have to start their very own AI journey (cf. Chap. 12).

3.5.2 Status of AI Use in Agriculture

Where Germany stands in terms of AI use in agriculture is shown by a study by Bitkom [63]. For this purpose, 500 managers of agricultural businesses with at least 20 ha
in Germany were interviewed by phone and online. The study provides insights into
the role that digital technologies and applications play in modern agriculture. It not
only shows the current state of digitalization on agricultural farms, but also provides
information about challenges and future plans in this sector.

Fig. 3.22 shows the effects of digitalization on agriculture and the environment
(cf. [63]). Here it becomes clear that digitization is a central lever for saving resources,
promoting environmentally friendly agricultural production, and reducing costs.
Furthermore, digitization can contribute to the improvement of animal welfare.
The current state of digitization in agricultural operations is shown in Fig. 3.23
(cf. [63]). Overall, 79% of the farms use at least one digital technology or digital
process. GPS-guided agricultural machines are the most common, specifically
used by 58% of the farms. Agricultural apps are now used by 39% of the farms.
Systems for farm or herd management are in use in 32% of the farms.
Figure 3.23 also shows which technologies or procedures are planned or being
discussed. The biggest jump from “in use” to “planned/discussed” is seen in the use
of field robots. The user base could multiply by eleven here! A significant increase
is also expected in AI/Big Data. Here, usage could increase sixfold.
How the investments in digitization are seen is shown by the following results
(cf. [63]):

Digital technologies enable more environmentally friendly agricultural production. Digital technologies can help to improve animal welfare. With the help of digital technologies, farms can reduce their costs in the long term. Digital technologies help to save fertilizer, pesticides and other resources.

Fig. 3.22 Statements on digitization in agriculture (source: [63])

Technologies compared (each with shares for "in use" and "planned/discussed"): GPS-controlled agricultural machinery, agricultural apps for smartphone/tablet, farm and herd management systems, automatic feeders/intelligent feeding systems, sensor technology for animal husbandry and plant cultivation, drones, milking/stable robots, AI/Big Data, field robots.

Fig. 3.23 Technologies and procedures in agriculture—implemented or planned/discussed (source: [63])

• 17% of the farms plan to invest in digital technologies and applications in the
next 12 months.
• 83% of the farms perceive high investment costs as an obstacle for digitization.

Data is also an important success factor in agriculture. How the topics of data management and data protection are viewed is shown here (cf. [63]):

• 87% of agricultural farms are willing to share data under certain conditions.
• 56% of the farms wish for the establishment of a central agricultural platform
for their own Data Management.
• 95% of businesses advocate for user-friendly and free access to geo-, opera-
tional and weather data.

These results underline the importance of digitalization for the agricultural sector
and the willingness of businesses to adapt to this development. They provide valu-
able information for policy makers and stakeholders in the agricultural industry. In
order for even more businesses to use corresponding digital technologies, further
investments in technology are required. Then it could be possible to use sensors for
analyzing plant and soil health as well as AI-based fertilization and application
of pesticides on a much larger scale. Also, in times of water shortage—via the
Internet of Things—a more efficient irrigation of fields could be implemented.

Think-Box: Questions You Should Ask Yourself!


• What are the specific challenges or bottlenecks in our agricultural business
that we want to solve with AI?
• Have we as an agricultural business ever thought about where artificial
intelligence can be used in our processes?
• What data is already being collected in our business and is available? Do
we need additional data and how can we collect it?
• Which AI methods and techniques are best suited to meet our specific
requirements?
• Do we have the necessary technical infrastructure to effectively use AI?
• Who would be exciting partners to drive the entry into the use of AI or the
intensification of AI use in a targeted manner?
• What legal and ethical aspects do we need to consider when using AI in
agriculture? Are there data protection regulations, animal welfare and
environmental protection regulations or questions about the handling of
personal data that need to be considered?
• How can we evaluate the success and performance of the use of AI? What
criteria and metrics should we establish to measure the effectiveness of the
AI models?
• How can we continuously optimize and make adjustments to maximize the
benefits of using AI?

3.6 Summary

• AI enables the analysis of large amounts of data to more precisely control the
cultivation of plants and animal husbandry—the path to a precision agriculture
is mapped out.
• AI-supported image recognition and sensor technology help in the detection
of diseases, weeds, pest infestation and soil quality.
• AI-driven drones and robots assist in the monitoring and inspection of fields,
irrigation, and harvesting.
• With Predictive Analytics, predictive models are used to forecast optimal harvest times or feed requirements.
• AI aids in Resource Optimization with regard to the use of water, fertilizers,
and pesticides.
• AI enables the Automation of Processes such as sorting and packaging of prod-
ucts to increase efficiency.
• AI-based analyses overall provide farmers with valuable information for better
decision-making and increasing productivity.

References
1. Kreutzer, R. T. (2021). Toolbox for Digital Business. Leadership, business models, technolo-
gies and change management for the digital age. Springer Gabler.
2. O’Marah, K., & Manenti, P. (2015). The internet of things will make manufacturing smarter.
Retrieved June 29, 2018, from http://www.Industryweek.com.
3. Grand View Research. (2023). Smart manufacturing market size, share & trends analy-
sis report by component, by technology (product lifecycle management, 3D printing,
enterprise resource planning), by end-use, by region, and segment forecasts, 2023–2030.
Retrieved April 13, 2023, from https://www.grandviewresearch.com/industry-­analysis/
smart-­manufacturing-­market.
4. Statista. (2021). Smart factory. Retrieved January 5, 2022, from https://de-­statista-­com.
ezproxy.hwr-­berlin.de/statistik/studie/id/46015/dokument/smart-­factory/.
5. Roland Berger. (2021). The production of the future, successfully realizing digital factories.
Retrieved January 11, 2022, from https://www.rolandberger.com/de/Insights/Publications/
Industrie-­4.0-­%C3%9Cbergreifende-­Digitalisierungsstrategie-­erforderlich.html.
6. Bendel, O. (2023). Cyber-physical systems. Retrieved May 25, 2023, from https://wirtschafts-
lexikon.gabler.de/definition/cyber-­physische-­systeme-­54077.
7. Kreutzer, R. T. (2023). The path to sustainable corporate management. Springer Gabler.
8. Statista. (2021). In-depth report: Artificial Intelligence 2021. Retrieved November 25, 2021,
from https://de-­statista-­com.ezproxy.hwr-­berlin.de/statistik/studie/id/50489/dokument/
artificial-­intelligence/.
9. Statista. (2023). Global size of the market for industrial robots from 2018 to 2020, with a fore-
cast for 2021 through 2028. Retrieved April 21, 2023, from https://www-­statista-­com.ezproxy.
hwr-­berlin.de/statistics/1272269/industrial-­robot-­market-­size-­worldwide-­by-­region/.
10. ABB. (2023). YuMi®. Retrieved April 10, 2023, from https://new.abb.com/products/robotics/
de/roboter/kollaborative-­roboter/yumi.
11. Cheng, G., Emanuel, D.-L., Bergner, F., Olivera, J. R. G., & Leboutet, Q. (2019). A compre-
hensive realization of robot skin: Sensors, sensing, control, and applications. Proceedings of
the IEEE, 107(1).
12. Wang, J., Sun, B., & Zhu, Z. (2023). Biochip systems for intelligence and integration. Systems,
11 (1). Retrieved May 25, 2023, from https://doi.org/10.3390/systems11010043.
13. Wandelbots. (2023). Everyone can work with robots. Retrieved March 18, 2023, from https://
wandelbots.com/de/.

14. Ostler, C. (2023). Interview connected reality. In R. T. Kreutzer & S. Klose (Eds.), Metaverse
compact. Springer Gabler.
15. Globenewswire. (2023). Predictive maintenance market size will cross $19.3 billion
by 2028. Retrieved April 18, 2023, from https://www.globenewswire.com/en/news-­
release/2023/02/21/2612393/0/en/Predictive-­Maintenance-­Market-­Size-­Will-­Cross-­19-­3-­
Billion-­by-­2028-­Vantage-­Market-­Research.html.
16. Stenkamp, D. (2018). Early warning. DUB Entrepreneur Magazine, 20 f.
17. Wotzka, M. (2021). Why globalfoundries relies on predictive maintenance through
edge computing. Retrieved November 19, 2021, from https://smart-­systems-­hub.
de/10-­fragen-­an-­michael-­wotzka-­von-­globalfoundries-­zur-­digital-­product-­factory-­2.
18. Armendia, M., Ghassempouri, M., Ozturk, E., & Peysson, F. (2019). Twin-control: A digital
twin approach to improve machine tools lifecycle. Springer.
19. Farsi, M., Daneshkhah, A., Hosseinian-Far, A., & Jahankhani, H. (Eds.). (2019). Digital twin
technologies and smart cities (Internet of Things). Springer.
20. Globenewswire. (2023). Digital twin market size worth US$ 95 billion by 2033 at of CAGR
34.3%: Fact.MR research. Retrieved April 18, 2023, from https://www.globenewswire.com/
news-­release/2023/01/18/2591244/0/en/Digital-­Twin-­Market-­Size-­worth-­US-­95-­Billion-­
by-­2033-­at-­of-­CAGR-­34-­3-­Fact-­MR-­Research.html.
21. Bitkom. (2023). Digital twins are becoming the standard in the industry.
Retrieved April 24, 2023, from https://www.bitkom.org/Presse/Presseinformation/
Digitale-­Zwillinge-­Industrie-­Standard.
22. Miles, J. (2021). The race is on: Meeting EV battery demand with smart manufacturing.
Retrieved January 6, 2022, from https://www.automotivemanufacturingsolutions.com/voice/
the-­race-­is-­on-­meeting-­ev-­battery-­demand-­with-­smart-­manufacturing/41703.article.
23. Siemens. (2023). Digital Transformation: Leading by example. Retrieved April 18,
2023, from https://new.siemens.com/global/de/unternehmen/stories/industrie/elektronik-­
digitalenterprise-­zukunftstechnologien.html.
24. Bentley, S., & Murdzhev, T. (2021). Accelerating sustainability with virtual twins. Retrieved
January 10, 2022, from https://www.accenture.com/us-­en/blogs/industry-­digitization/
accelerating-­sustainability-­with-­virtual-­twins.
25. Cousland, G. (2022). Interview with the managing director of DeinSchrank.de, 13.1.2022.
26. ASMPT. (2023). Enabling the digital world. Retrieved April 18, 2023, from https://smt.
asmpt.com/.
27. Dassault. (2023). The 3DEXPERIENCE Platform. Retrieved April 18, 2023, from https://
www.3ds.com/de/3dexperience.
28. Fabasoft. (2023). One shared business reality. Retrieved April 18, 2023, from https://www.
fabasoft.com/en/fabasoft-­proceco.
29. InterSystems. (2023). InterSystems IRIS: The next generation data platform. Retrieved April
18, 2023, from https://www.intersystems.com/de/datenplattform/.
30. Leadec. (2023). Leadec.os—the digital business platform. Retrieved March 14, 2023, from
https://www.leadec-­services.com/de/fabrik-­der-­zukunft/leadecos.
31. German Society for Quality. (2022). Predictive quality—Higher quality based on good deci-
sions. Retrieved January 14, 2022, from https://blog.dgq.de/predictive-­quality/.
32. Schmitt, R. H., Kurzhals, R., Ellerich, M., Nilgen, G., Schlegel, P., Dietrich, E., Krauß, J.,
Latz, A., Gregori, J., & Miller, N. (2020). Predictive quality—Data analytics in manufacturing
companies. Retrieved June 30, 2023, from http://www.awk-­aachen.com.
33. PTC. (2023). Digital thread. Retrieved June 20, 2023, from https://www.ptc.com/de/
industry-­insights/digital-­thread.
34. Statista. (2021). eCommerce: shein.com in Germany 2021 brand report. Retrieved November
18, 2021, from https://de-­statista-­com.ezproxy.hwr-­berlin.de/statistik/studie/id/97520/
dokument/ecommerce-­sheincom-­in-­deutschland-­brand-­report/.
35. Verhaeghe, S. (2021). How Shein is revolutionizing the fashion industry. Retrieved
November 19, 2021, from https://www.dukeandgrace.com/en/insights/articles/
how-­shein-­is-­revolutionizing-­the-­fashion-­industry#.
36. SheIn. (2023). About us. Retrieved April 18, 2023, from https://us.shein.com/About-­Us-­
a-­117.html.

37. Scherrer, A. (2021). Explaining the C2M model of Shein. Retrieved November 18, 2021, from
https://blog.carpathia.ch/2021/06/02/das-­c2m-­modell-­von-­shein-­erklaert/.
38. BMWK. (2022). What is a data room? Definition of the data room concept.
39. BMWK. (2023). Manufacturing-X. Retrieved April 24, 2023, from https://www.plattform-­i40.
de/IP/Navigation/DE/Manufacturing-­X/Initiative/initiative-­manufacturing-­x.html.
40. Catena-X. (2023). Why Catena-X? Retrieved April 21, 2023, from https://catena-­x.net/de/.
41. Fjellström, D., Lui, L. Y., & Caceres, W. (2017). Knowledge transfer in reshoring. In A. Vecchi
(Ed.), Reshoring of manufacturing: Drivers, opportunities, and challenges. Springer.
42. European Parliament. (2021). Study—Post Covid-19 value chains: Options for reshoring pro-
duction back to Europe in a globalised economy.
43. Gigaset. (2023). Why “Made in Germany” is a success story. Retrieved April 18, 2023, from
https://www.gigaset.com/de_de/cms/made-­in-­germany.html.
44. Wilo. (2023). Dortmund—Development of the location. Retrieved April 18, 2023, from https://
wilo.com/de/Pioneering/Wilo-­builds-­the-­Future/Wilopark/.
45. Bosch. (2023). Diverse solutions for the chip factory. Retrieved April 18, 2023, from https://
www.boschbuildingsolutions.com/de/de/trends-­und-­themen/halbleiterwerk-­dresden/.
46. IDC. (2021). AI in manufacturing: How to realize measurable benefits with your pro-
duction data. Retrieved January 8, 2022, from https://www.intersystems.com/de/
whitepaper-­ai-­in-­manufacturing/.
47. Dangschat, J. S. (2017). Automated traffic—what is coming our way? Journal for Political
Science, 27, 493–507.
48. Waymo. (2023). The world’s most experienced driver. Retrieved April 18, 2023, from https://
waymo.com/.
49. BMVI. (2021). Law on autonomous driving comes into force. Retrieved November 15, 2021,
from https://www.bmvi.de/SharedDocs/DE/Artikel/DG/gesetz-­zum-­autonomen-­fahren.html.
50. Opper, K.-U. (2021, July 14). The robot cars are coming. Frankfurter Allgemeine Zeitung, 16.
51. Hochbahn. (2023). The future leads autonomously. Retrieved April 19, 2023, from https://
www.hochbahn.de/de/projekte/das-­projekt-­heat.
52. Bitkom Research. (2021). Growing trust in autonomous vehicles. Retrieved November 18, 2021,
from https://www.bitkom.org/Presse/Presseinformation/Wachsendes-­Vertrauen-­autonome-­Fah
rzeuge#msdynttrid=VYzNwWFNWqJpLc-­6kODRnAwkXJ5UgANKbofm1DuUM1I.
53. Statista. (2022). Autonomous vehicles. Retrieved January 11, 2022, from https://
d e -­s t a t i s t a -­c o m . e z p r o x y. h w r-­b e r l i n . d e / s t a t i s t i k / s t u d i e / i d / 3 0 0 6 5 / d o k u m e n t /
autonome-­fahrzeuge-­statista-­dossier/.
54. Wagner, G. (2023, April 12). EU wants to close liability gaps. Frankfurter Allgemeine
Zeitung, 16.
55. Brendel, O. (2023). Drone. Retrieved April 21, 2023, from https://wirtschaftslexikon.gabler.
de/definition/drohne-­54115/version-­384618.
56. Rakuten. (2023). Drone delivery. Retrieved April 21, 2023, from https://drone.rakuten.
co.jp/en/.
57. Matternet. (2022). Matternet launches world’s longest urban drone delivery route in Zürich.
Retrieved May 8, 2023, from https://mttr.net/.
58. Manna. (2023). Drone delivery made simple. Retrieved May 8, 2023, from https://www.
manna.aero/.
59. Walmart. (2023). Walmart drone delivery by the numbers. Retrieved May 8, 2023, from https://
corporate.walmart.com/newsroom/2023/01/05/walmart-­drone-­delivery-­by-­the-­numbers.
60. Columbus, L. (2021). 10 ways AI has the potential to improve agriculture. Retrieved
January 24, 2022, from https://www.forbes.com/sites/louiscolumbus/2021/02/17/10-­
w a y s -­a i -­h a s -­t h e -­p o t e n t i a l -­t o -­i m p r o v e -­a g r i c u l t u r e -­i n -­2 0 2 1 / ? s h = 1 9 f e 0 9 6
f7f3b.
61. Draganfly. (2023). Quadcopters and Multirotors. Retrieved May 8, 2023, from https://dragan-
fly.com/products/quadcopters-­multirotors/.
62. Rizzoli, A. (2021). 8 practical applications of AI in agriculture. Retrieved May 8, 2023, from
https://www.v7labs.com/blog/ai-­in-­agriculture.
63. Bitkom. (2022). High-tech in the barn, AI in the field: Digitization secures the future of agri-
culture. Retrieved June 27, 2023, from https://www.bitkom.org/Presse/Presseinformation/
Digitalisierung-­sichert-­Zukunft-­der-­Landwirtschaft.
4 Marketing, Sales, Customer Service

Summary Marketing, sales, and customer service are particularly interesting fields for the use of AI. Diverse areas of application can be identified here that can improve the customer experience. In addition, AI systems can also optimize processes of which the customer only sees the results, for example in customer guidance or campaign planning.

4.1 Customer-Company-Expectation Matrix as a Framework

Before examining where artificial intelligence can provide support in marketing, sales, and customer service, a look at the expectations of customers and companies is necessary. For this purpose, the Customer-Company-Expectation Matrix (also known as the Value Irritant Matrix) is suitable. The originally dialogue-oriented analysis is further developed here with regard to customer-oriented processes in general. This allows you to systematically determine to what extent such processes meet company and/or customer expectations (see Fig. 4.1; see [1]).
In the field Eliminate, the expectations of companies and customers go in the
same direction (see Fig. 4.1). Such processes and interactions only lead to costs for
the companies without corresponding value creation and are also not desired by the
customers. They can be removed from the scope of services, as long as they are not
indispensable due to legal requirements.
With Automate, the expectations of companies and customers diverge (see Fig. 4.1): While the company would like to avoid the corresponding processes, these are desired and expected by customers in the form of advice, support, etc. To find a balance of interests here, automation (e.g., via chatbots) or self-service offers can be used. This way, frequently recurring questions can be answered cost-effectively and the company's own employees can be relieved.



Fig. 4.1 Customer-company-expectation matrix (source: based on [1]). The matrix crosses what the company wants (benefits such as learning, increasing profits, and reducing costs versus no interaction, since dialogs cost money) with what the customer wants (benefits such as saving money, recommendations, and support versus no interaction, since dialogs are just annoying). The resulting fields are Eliminate (processes and interactions that are not desired or do not add value are eliminated), Automate (dialogs and processes are automated and/or designed as self-service), Simplify (improvement or simplification of the underlying processes), and Leverage (increased investment in value-adding interactions with customers).

In addition, AI-supported personal recommendations can be made, for example, which are managed via concepts of marketing automation (see in-depth on marketing automation [2]).
In the case of Simplify, the expectations also diverge (see Fig. 4.1). Here, the company sees value-adding opportunities. These include obtaining email permissions, check-in procedures, follow-up calls, and requests for reviews, all of which are of great importance to the companies. Customers, in the worst case, feel annoyed by such approaches and processes. Here, a simplification and improvement of the processes are necessary. In addition, the users need to understand the added value of the respective steps.
In the field of Leverage, there is again a match of expectations (see Fig. 4.1).
Here, it is necessary to invest in the underlying processes to exploit the existing
potentials. Value-adding dialogues can be achieved by presenting the call agent—in
real time—with optimal next-best offers and other recommendations. Such recom-
mendations can be developed using AI as part of so-called recommendation engines.
This can sustainably improve the customer experience. With “Leverage”, the joint
development of solutions can also be attempted, which can lead to an increase in
customer loyalty.
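To make the logic of the matrix more tangible, the following minimal sketch assigns customer-facing processes to one of the four fields based on a simple yes/no assessment of whether the company and the customer want the interaction. All process names and assessments are hypothetical examples; in practice, such an assessment would be based on process costs, value contribution, and customer feedback.

# Minimal sketch of the Customer-Company-Expectation Matrix logic.
# All process names and assessments are hypothetical examples.

def classify(company_wants_interaction: bool, customer_wants_interaction: bool) -> str:
    """Map the two expectation dimensions to one of the four fields."""
    if not company_wants_interaction and not customer_wants_interaction:
        return "Eliminate"  # no value for either side: remove the process
    if not company_wants_interaction and customer_wants_interaction:
        return "Automate"   # customers expect it, company wants low cost: chatbot/self-service
    if company_wants_interaction and not customer_wants_interaction:
        return "Simplify"   # company sees value, customers feel annoyed: simplify and explain
    return "Leverage"       # both sides benefit: invest in the interaction

# Hypothetical assessment of some customer-facing processes:
# (company wants the interaction?, customer wants the interaction?)
processes = {
    "request for a product review": (True, False),
    "delivery status inquiry": (False, True),
    "personal next-best-offer consultation": (True, True),
    "unsolicited print mailing": (False, False),
}

for name, (company, customer) in processes.items():
    print(f"{name}: {classify(company, customer)}")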

Note Box
The Customer-Company-Expectation Matrix provides you with an impor-
tant framework to define priorities for the use of AI—and always keep cus-
tomer expectations in mind!

To be successful in using AI in marketing, sales, and customer service, it is first necessary to develop a convincing AI strategy for these areas. This can be started with a stocktaking of the status quo in customer service. The following questions need to be answered:

• Which tasks dominate in customer service?
• Which tasks occur in what numbers?
• Which of these tasks require—as of today—a human intervention?
• Which tasks could be fully automated?
• For which tasks is a combination of human interaction and automation suitable?

Based on the answers, you can set priorities for the use of AI in customer service. Here, you can start with the most important and frequent tasks—the so-called Pareto tasks—that can be completely automated. In this phase, the processes for service staff also need to be optimized. It should always be checked whether the use of artificial intelligence and automation can at least maintain or even improve a convincing customer experience. A focus on possible cost reductions alone will not do justice to the task at hand, as this could lead to revenue and/or profit losses.
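How such a prioritization can be supported with data is outlined in the following minimal sketch: it identifies the Pareto tasks from a hypothetical task inventory and flags those that could be fully automated. Task names, volumes, the 80% threshold, and the automation flags are assumptions for illustration only.

# Minimal sketch: identify "Pareto tasks" from a hypothetical task inventory.
# A task qualifies for the first automation wave if it belongs to the tasks that
# together cover about 80% of the service volume and can be fully automated.

tasks = [
    # (task, monthly volume, fully automatable?)
    ("delivery status inquiry", 12000, True),
    ("password reset",           9000, True),
    ("invoice question",         6000, False),
    ("complaint with refund",    2500, False),
    ("contract cancellation",    1500, False),
]

total_volume = sum(volume for _, volume, _ in tasks)
cumulative_share = 0.0
first_wave = []

for task, volume, automatable in sorted(tasks, key=lambda t: t[1], reverse=True):
    cumulative_share += volume / total_volume
    if cumulative_share <= 0.8 and automatable:
        first_wave.append(task)

print("Candidates for the first automation wave:", first_wave)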
The advantage of automated support systems lies in their 24/7 availability—
such systems meet the customer expectation of “immediate” and “convenience”.
These solutions are often also more cost-effective than human customer service
representatives. At the same time, such systems can relieve service staff of repetitive
tasks. This allows employees to focus on service tasks that require human support.
Automated systems can also provide customers with proactive support. This
support can refer to the completion of online purchasing processes interrupted by
the customer. The so-called shopping cart abandoners can be automatically moti-
vated to complete the purchase. On the other hand, customers can also be informed
in advance about expiring warranty periods, upgrade opportunities, and attractive
additional purchases for already used products and services. Also, notices of upcom-
ing maintenance work on purchased products can be delivered in a timely manner.
When automating service processes, you should also be aware of the possible disadvantages of automation. For example, the personal touch of human contact can be lost. Therefore, bridges to human employees should be provided for cases in which customers signal a corresponding need.

4.2 Customer Service: From Simple Chatbot to Digital Assistant

4.2.1 Chatbots and Digital Assistants

Until a few years ago, the textual interaction between companies and customers was primarily limited to postal and digital correspondence. Today, in addition to social media and messenger solutions like WhatsApp, chatbots are increasingly being integrated as an additional channel in communication with customers.

Fig. 4.2 Established and new forms of interaction with companies (source: [3], p. 280). The figure contrasts established and new interaction channels along three modes: for texts, e-mail, letters, ads, and flyers are complemented by chatbots; for speech, the telephone and personal contact on site are complemented by smart speakers and digital assistants; for the face, ID checks and being personally known are complemented by face scanners.

The verbal exchange via face-to-face communication or by phone is now supplemented by dialogues with digital assistants. These forms of communication were only made possible by the use of AI solutions. This also applies to new interaction formats via facial recognition: traditional ID checks or personal acquaintance can be omitted if authentication is done by facial recognition software, for example at entrance controls (see Fig. 4.2; see [3], p. 280).
Conversational AI agents, also known as advanced virtual assistants, use con-
versational user interfaces to provide functions beyond chatbots. These virtual
assistants observe behaviors, create and maintain data models, predict and recom-
mend actions to assist people in performing tasks.
The distinction between chatbots and digital assistants can be confusing, as both applications use AI solutions to recognize and respond to natural language. Chatbots are computer programs that use text input and output to enable dialogues in human language based on predefined, text-based content (cf. [4, 5]). At their beginning was a purely text-based communication interface (TTT): the user had to type his or her question into a text input field, and the answer was also presented in text form in the text output field.
Meanwhile, many chatbots have been developed into a completely conversa-
tional interface that supports a dialogue in spoken language. This is referred to as
a Conversational Interface. For this, neither a screen nor a keyboard nor a mouse
is required. The input and output is done through spoken dialogues. Such digital
assistants can understand natural language and generate independent responses.
They conduct dynamic dialogues, understand the context of a conversation and can
respond to it. Digital assistants can also act as personal assistants, performing vari-
ous tasks such as managing calendars, booking travel and making online purchases.
The digital assistants will open up further fields of application in the coming years—
not only in customer service.
If we summarize the current and future tasks of text-based chatbots (TTT), the
following picture emerges:

• Chatbots for optimizing customer-initiated communication
Text-based chatbots help users solve everyday online tasks more efficiently. Problems can be solved without having to click through FAQs for a long time or spend hours in telephone queues.
An example of this is the chatbot TOBi from Vodafone. The virtual assistant
is designed to provide support for various concerns. Users can ask it for information about new tariffs, request help with their bill, or describe technical problems, to which TOBi responds with solutions. The user-friendliness of TOBi is
evident in its constant availability. Users can contact the chatbot at any time of
the day or night, without having to accept waiting times. TOBi can be reached via
various platforms: the MyVodafone app, the Vodafone website and via WhatsApp.
This multitude of access options allows users to choose the most convenient way
for them and TOBi can be reached from any device (cf. [6]).
Another example is the Assono AI chatbot, which is based on the language
processing services of IBM Watson. By using an integrated AI translation func-
tion, the required content can be made available in more than 24 languages. In
addition, appointments can be arranged and the appropriate contacts can be pro-
vided. Depending on the issue, the user’s questions can be answered by texts,
images, videos, links and/or downloads. For this purpose, this chatbot can be
integrated into the internal systems of the companies. This also includes net-
working with CRM databases. If human support is required, a live chat can be
initiated (cf. [7]).
• Chatbots for proactive (individualized) communication
These chatbots have the task of becoming active within predefined use cases. The
aim is to pass on relevant information to the users at the right time. The impulses
for this are often based on concepts of marketing automation. AI algorithms can
be used to define the relevant triggers. Such triggers can, for example, ensure that customers receive a "come-back impulse" after 2 months without an order (a simple sketch of such trigger logic follows after this list). After a complaint has been processed, a friendly follow-up can be made to determine whether a satisfactory solution has been achieved. Chatbots can also be used to support
further qualification of leads. For this purpose, appropriate questions are gener-
ated based on AI, using the already available data.
A further example of proactive chat use is the KLM Messenger. Through this,
the airline KLM offers to provide flight documents via messenger. After booking
a flight on KLM.com, the passenger can choose whether he wants to receive his
booking confirmation, check-in notification, boarding pass and flight status
updates via messenger. Further questions can also be answered directly via the
messenger (cf. [8]). This proactive provision of information is advantageous for
the company as it can reduce customer-initiated (costly) communication
approaches.
Proactive and individualized information is also provided by the chatbot of WetterOnline. With the WetterOnline Assistant, it provides users with the weather forecast and much other information about the weather. Users can have the weather for a desired location sent to them at their desired time every day for free (cf. [9]).

• Chatbots for proactive (general) communication
Those who offer editorial content are interested in it being easily found. A
chatbot can be used for this purpose. Based on the desired information catego-
ries, a chatbot can examine the content with regard to the relevant keywords.
The relevant search results can then be made proactively available. As a result,
similar information can be sent to certain customer segments with comparable
interests.
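As announced above for proactive (individualized) communication, the following minimal sketch illustrates rule-based trigger logic. The customer attributes, thresholds, and messages are hypothetical; in practice, they would come from the CRM system and the marketing automation rules.

# Minimal sketch of rule-based triggers for proactive chatbot messages.
# Attributes, thresholds, and message texts are hypothetical examples.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Customer:
    name: str
    days_since_last_order: int
    complaint_closed_days_ago: Optional[int]  # None = no recently closed complaint

def triggers(customer: Customer) -> List[str]:
    """Evaluate simple, predefined activation criteria for proactive messages."""
    messages = []
    if customer.days_since_last_order >= 60:
        messages.append("Send come-back impulse with a personalized offer.")
    if customer.complaint_closed_days_ago == 7:
        messages.append("Send friendly follow-up: was the solution satisfactory?")
    return messages

for c in (Customer("Customer A", 75, None), Customer("Customer B", 10, 7)):
    print(c.name, triggers(c))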

What does the performance of chatbots look like in more complex communication, and where do limits need to be considered? The risk associated with letting a company's chatbot learn and act freely is illustrated by the Microsoft disaster of 2016. The company presented the chatbot Tay on Twitter. Here, it was intended to
demonstrate how far AI developments at Microsoft had already progressed. This
goal was achieved—albeit differently than expected. What happened? After just 1
day, Microsoft had to shut down the chatbot Tay again, because the AI system had
started to spread hate messages within a few hours.
It all started quite innocently: The chatbot was supposed to act like a roughly
18–24-year-old woman from the USA. Accordingly, the developers had created
profiles for Tay on Facebook, Instagram, Snapchat and Twitter. On a Wednesday,
the chatbot was activated to interact with other people on these platforms, network
and communicate. The description roughly stated:

The more you interact with Tay, the smarter she becomes. This could make the experiences
with her even more personal.

Initially, the start looked promising. Tay started the communication and sent almost
100,000 short messages to users of the platforms. Among them were such harmless
posts as: “Please send me a funny photo; I’m so bored”, or: “How are you?” Tay also
posted jokes and integrated emojis into the messages. Now, some Twitter users
intervened in the ongoing learning process and literally fed the chatbot with racist
phrases and insults. After just a few hours, Tay herself started to post such racist
phrases. The hate and insults were also aimed at blacks and Jews (see [10]). The
filters against obscene terms integrated by the developers were not sufficient to
“tame Tay”.
The simple invitation to Tay “Repeat after me!” became a trap for the chatbot,
because Tay could be motivated to repeat all possible statements. Unfiltered! What
was the reaction from Microsoft? The Twitter account @TayandYou was simply
deactivated. What was the explanation given? Tay had to sleep now after so many
conversations. Microsoft simply stated after this crash landing that a few adjust-
ments would have to be made to Tay. “A few” will probably not be enough!

Food for Thought
What can we recognize from this case? Tay has learned—as an AI algorithm should. The learning material in this case was "contaminated". The algorithm did not recognize this. "More of the same" led to sending out the same messages. The resulting Filter Bubble became Tay's downfall.
What was missing was a value instance integrated into the AI application, capable of distinguishing "good" from "evil" and "acceptable" from "unacceptable". The absence of such an integrated value instance, acting as a moral guardian, led to disaster. A simple filter against obscene terms is not enough. This is a lesson Microsoft has learned painfully.
But who should, may, can define for the systems of Artificial Intelligence
what is “good” and what is “evil”? Whoever defines these values decides on
the content of communication and thus on the goal and direction of a conver-
sation: “pro or contra Brexit”, “pro or contra politician X”, “pro or contra
democracy” etc.
Whoever defines the values of an AI system determines and censors
the content of communication without this having to become transparent
to the outside world (cf. in depth).

Chatbots are now encountered in many applications. After all, the added value
of chatbots is based on various facets:

• Chatbots enable automation of processes that occur in a similar form over and
over again.
• Chatbots are available 24/7—without waiting times for users.
• Central information can be delivered by chatbots in consistent quality, even if
the same question is asked for the thousandth time.
• Through chatbots, employee resources can be saved and errors reduced.
• Monitoring and optimizing the performance of chatbots is easy to carry out
because no personal data needs to be evaluated here.
• Chatbots can be integrated into other applications and websites via APIs.

These added values clearly show that the use of chatbots deserves more comprehensive attention. When integrating chatbots into customer service, it is helpful to proceed according to the following phase concept (cf. also [11], pp. 179–184):

• Definition of the goals of chatbot use in customer service
Before designing the chatbot deployment, you should first define the goals you want to achieve in the short, medium, and long term in customer service. Too often, companies focus solely on cost reduction, often at the expense of the customer experience. This can quickly have a negative impact on sales and profits. It is more promising to aim, at least additionally, at improving the customer experience through the use of chatbots.
After all, customer orientation today primarily means the management of the customer experience!
The master plan to be developed for this should be agile, because technology
continues to advance and step by step enables new fields of application (cf. on
agile management [12], pp. 207–235). In addition, you should check for what
type of chatbot (text and/or voice-based) the necessary resources are available in-
house or externally. It is also necessary to clarify where the responsibility for the
use of the chatbot is to be concentrated—in the IT area or rather in the specialist
department? It is often advisable to have IT on board as a supporter, but to locate
the content responsibility in the specialist departments. In addition, the following
questions need to be answered:

–– Where and how can a chatbot be effectively integrated into communication activities (customer acquisition, customer service, sales)?
–– What expectations does the customer have in different phases of their cus-
tomer journey towards my company?
–– What additional benefit can be achieved for my customers through the use of
chatbots?
–– Which usage situations need to be considered (mobile, stationary)?
–– What advantages for my own company does the chatbot have (cost savings,
availability 24/7, higher reaction speed, relief from routine tasks)?
–– Can the customer journey be effectively shortened or deepened by a chatbot
to increase the conversion/retention rate?
–– Which contents transmitted via a chatbot fit the defined brand identity?
–– Which tonality should be chosen in accordance with the brand values?
–– What data for training the chatbot is needed to teach the chatbot the neces-
sary “intelligence”?
–– Do we have these relevant training data or can we acquire them (where)?
–– Which interfaces need to be covered to integrate the chatbot seamlessly into
existing processes (e.g. of e-commerce, of CRM) and into other applications
(such as the website, in apps)?
–– Which interfaces need to be programmed for this?
–– Which processes are “simple” enough to be completely covered by the use of
today’s AI technologies?
–– Which entry and exit points to human customer service representatives
should be provided (for example, in escalation stages)?
–– Which of the possible platforms should be used because they are relevant to
customers (Alexa, Bixby, Cortana, Facebook Messenger, Google
Assistant etc.)?
–– Which values should be given to the chatbot?

The short- and long-term goals need to be precisely described. The short-term goals
should focus on the classic Pareto tasks. These are the tasks that often make up
70 or 80% of the service volume. This is often where the greatest leverage for
efficiency increases and/or cost reductions lies. In addition, it should be clarified

in this phase in which languages the chatbot should be used. In addition, the Pareto channels that should be covered in the first step need to be defined. Again, you should initially focus on the channels that cover the most traffic.
A precise goal formulation also provides the basis for success control. For this,
the goal content, the desired quantity, the target dates and the target area need to
be precisely defined. Only in this way can you determine whether the invest-
ments made achieve the desired return—and whether the use of the chatbot
should be expanded accordingly (cf. for concrete goal formulation [12],
pp. 39–46).

• Modeling of target dialogs
The modeling of target dialogs is based on the experiences gained in "real" dia-
logs. This gives you an overview of how dialogs typically proceed. Through a
more in-depth evaluation, you can determine which dialogs occur particularly
frequently (Pareto dialogs) and which can be automated.
Of great importance in modeling is the question of the tonality of the dialogs.
It should be a matter of course to refrain from an instructive, arrogant or cynical
approach. It remains to be decided whether a more formal tone (with formal
address) or a more partnership-like tone (with informal address) is used. In mak-
ing this decision, the expectations of the target group should be taken into
account. The modeling of the target dialogs also includes a solution for how to
respond to questions that are formulated on public platforms. Should these be
answered publicly or rather privately?
• Integration of the chatbot into service processes
This integration can take various forms. For this, the important entry and exit
points for the chatbot need to be defined. For the entry points so-called activa-
tion criteria (also known as triggers) need to be determined. These decide when
a chatbot “jumps in”. Should it react to private or public messages on social
media, for example? Which keywords must be included in a message for the
chatbot to become active?
One speaks of delegation when an agent hands over a dialogue to a chatbot. This can happen when the agent recognizes that the subsequent dialogue steps can be optimally processed along a predefined dialogue path and no human accompaniment is needed anymore. One speaks of escalation when dialogues on a
predefined dialogue path do not lead to a desired result. In this case, the chatbot
escalates to a human agent when predefined abort criteria are reached. The rele-
vant entry and exit points can be continuously developed during the learning
process of a chatbot deployment. An autonomous dialogue management is
present when the chatbot is activated by predefined triggers and completely
guides the user through a dialogue.
• Selection of the Chatbot Software
This phase is about selecting a software that optimally maps the already defined
dialogue paths with the different entry and exit points. In addition, it must be
ensured that the software has a large scalability, which not only refers to the
quantity of dialogues to be managed, but also covers a wide range of dialogue

quality. It must also be ensured that the software partner has sufficient substance
to integrate new developments and new data sources into the software in a
timely manner.
When selecting the software, you should also ensure that its channel management can serve more than just the Pareto channels relevant for the start. On the one
hand, the usage intensity of the channels can change over time, so that additional
(already existing) channels may need to be integrated. On the other hand, further
channels may emerge during the use of the chatbot, which need to be newly inte-
grated. It should also be ensured that the software has powerful monitoring and
controlling tools. Ideally, these processes can be controlled via a dashboard.
If you want to use a chatbot, you don’t have to program it from scratch your-
self. Development can rely on already prefabricated solutions. This makes it pos-
sible to create chatbots without programming knowledge and to adapt them to
your own requirements. For example, IBM offers such a solution with Watson
Assistant. In addition, the company provides numerous code templates (patterns)
for special applications. These can be easily adopted and individualized (see [13]).
How is a chatbot individualized with IBM Watson Assistant? The user's intention is identified here with the help of so-called intents. For each intent, multiple possible user requests are defined. Entities serve to recognize important elements of an intent, and the responses are adapted to these. In the dialogue, the defined intents and entities become a trigger or a condition to which the chatbot always has to give a certain answer. Different requests such as "What's the weather like in Königswinter?", "Is the sun shining here today?" or "Should I take an umbrella?" are summarized under the intent #Weather. City names like Königswinter serve as entities to predict the weather for a specific city (cf. [13]; a simplified sketch of this intent and entity logic follows after this phase overview).
• Test deployment of the chatbot and transition into everyday use
Before the chatbot is used in everyday service, it should be tested internally.
Here, the relevant dialogue variants need to be checked. This internal test
should be followed by a test by a small circle of external users who do not have
operational blindness and give honest feedback to the company. As part of this
test phase, the defined entry and exit points—”agent to chatbot” and “chatbot to
agent”—need to be checked. It is also important to ensure that in a multilingual
dialogue the correct language is linked to the chatbot or agent.
• Monitoring and controlling of the chatbot deployment
The quality of the chatbot dialogues needs to be checked continuously—at
least on a random basis. Only in this way can it be detected early on if dialogues
are going in unacceptable directions. If the chatbot redirects to websites or spe-
cific landing pages, these interfaces also need to be monitored continuously. In
addition, as part of the monitoring, it needs to be checked whether the promised
content can always be found there. Furthermore, intervention points need to
be defined, at which agents need to intervene in the chatbot-supported dialogue.
For the controlling of the chatbot deployment, various KPIs are used. The
following questions can be used for this (results can be expressed in percentage
values):

–– How many dialogues are autonomously "successfully" completed by the chatbot? What "successful" means needs to be precisely defined.
–– In how many dialogues does a chatbot hand over to an agent?
–– In how many dialogues does an agent hand over to a chatbot?
–– How often does an escalation occur?
–– How many dialogues are aborted by the user?
–– How often is a successful handover to a website/a landing page made?
–– How often are the targeted conversions achieved?

In addition to these quantitative results, a qualitative survey can be used, especially at the beginning of the chatbot deployment, to determine how satisfied or dissatisfied users are. A cohort analysis should be carried out to determine these values, for example, for different age groups and for different language groups (see in depth on cohort analysis [2], p. 133). A minimal sketch of how the quantitative KPIs can be derived from logged dialogues follows below.
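As announced, the following minimal sketch shows how such KPIs could be derived from logged dialogues. The log structure, field names, and outcome labels are hypothetical and depend on the chatbot software actually used.

# Minimal sketch: derive chatbot KPIs from hypothetical dialogue logs.
# Each log entry records how a dialogue ended; the field names are assumptions.

dialogues = [
    {"outcome": "resolved_by_bot"},
    {"outcome": "handed_over_to_agent"},      # escalation to a human agent
    {"outcome": "aborted_by_user"},
    {"outcome": "resolved_by_bot"},
    {"outcome": "handover_to_landing_page"},
]

def share(outcome: str) -> float:
    """Percentage of dialogues that ended with the given outcome."""
    return 100 * sum(d["outcome"] == outcome for d in dialogues) / len(dialogues)

print(f"Autonomously resolved by the chatbot: {share('resolved_by_bot'):.1f}%")
print(f"Escalations to an agent: {share('handed_over_to_agent'):.1f}%")
print(f"Aborted by the user: {share('aborted_by_user'):.1f}%")
print(f"Handover to a website/landing page: {share('handover_to_landing_page'):.1f}%")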
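And, as announced under "Selection of the Chatbot Software", a deliberately simplified sketch of the intent and entity logic described there: intents are matched here via plain keyword lists, whereas real systems such as IBM Watson Assistant use trained NLP models. All intents, keywords, and city names are hypothetical examples.

# Simplified sketch of intent and entity recognition via keyword matching.
# Real assistants use trained NLP models; the lists below are hypothetical.

INTENTS = {
    "#Weather": ["weather", "sun", "umbrella", "rain"],
    "#OpeningHours": ["open", "opening hours", "closed"],
}
CITY_ENTITIES = ["Königswinter", "Berlin", "Hamburg"]

def recognize(utterance: str):
    """Return the matched intent (or None) and any recognized city entities."""
    text = utterance.lower()
    intent = next((name for name, keywords in INTENTS.items()
                   if any(keyword in text for keyword in keywords)), None)
    entities = [city for city in CITY_ENTITIES if city.lower() in text]
    return intent, entities

print(recognize("What's the weather like in Königswinter?"))  # ('#Weather', ['Königswinter'])
print(recognize("Should I take an umbrella?"))                # ('#Weather', [])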

The use of chatbots will significantly influence the future of customer service—
and thus also the Customer Experience. For this to be empathetic and not just
efficient for the customer—even with increasing automation—it depends on the
successful interaction between agents and chatbot. To achieve this, the described
monitoring and controlling of the chatbot deployment must be planned from the
outset. The expectations of both your company and your customers must be consis-
tently taken into account.

Note Box
The integration of chatbots into dialogue communication requires meticulous planning, competence-based implementation, and ongoing monitoring and controlling. After all, chatbot solutions do not run themselves; once started, they cannot simply be left uncontrolled.

The following exciting areas of application for chatbots can be distinguished:

• Chatbots support the customer service. This can be a money transfer, a reserva-
tion, or the blocking of a lost credit card. A chatbot can also provide information
about delivery conditions.
• Chatbots can—based on specific user requirements and/or context information
(e.g. weather forecasts)—make individual product recommendations.
• Chatbots prepare purchase transactions or complete them entirely.
• Chatbots offer a concierge service, for example when booking theater tickets or
reserving restaurants.
• Chatbots can provide general information—whether about the weather, current
news, etc.
• Chatbots are the first point of contact for new employees, to answer questions
about the company and job offers.

• Chatbots can also support internal company processes. Here, for example,
questions about applying for leave or travel expense accounting are answered.

What is the current willingness to communicate with a chatbot in German-speaking Europe today? Answers to this are provided by a representative
study, with end users in focus. For this purpose, around 1000 people from
Germany, Austria and Switzerland were surveyed. The study was conducted
by aiaibot in collaboration with the Zurich University of Applied Sciences.
Central findings are presented here (see [14], pp. 18 f., 24, 26):

• Advantages of using chatbots
Figure 4.3 shows that high availability, neutrality, and quick and uncomplicated help are seen as dominant advantages, which in total lead to a time saving.
• Disadvantages of using chatbots
The biggest disadvantage of chatbots is seen in the impersonality of communica-
tion (see Fig. 4.4). In addition, users are uncertain about what happens to the data
provided. Furthermore, it is noticeable that the answers are often still inaccurate
and the user does not feel understood.
• Frequency of use of chatbots
In Fig. 4.5 it becomes apparent that the use of chatbots is not yet part of every-
day life. The answers for “rarely” and “now and then” dominate the usage
behavior—today.
• Experiences with chatbots
The largest group of respondents (42.3%) has so far had neither good nor bad experiences. 4.8% speak of very positive and 35.8% of positive experiences. Only 12.3% and 4.8% respectively report bad or very bad experiences.
• Top use cases
The most important areas of application for chatbots are obtaining news, sched-
uling appointments, and checking the delivery status. Purchases are not yet
among the top ten use cases (see Fig. 4.6).

Note Box
Check at which customer interaction touchpoints chatbots can be used in a value-adding way for both sides in your company. The future of chatbots lies primarily in the support of everyday tasks. This is where acceptance of this technology is most likely to be found, especially among younger target groups.

What characterizes a digital assistant as opposed to a chatbot? The input and output, and thus the control of such applications, is done via natural spoken language.
The crucial difference to the chatbot is that the digital assistant does not play out
predefined answers. Such a solution was first presented by Apple with Siri on
04.10.2011 at the presentation of the iPhone 4s. At the market launch, communica-
tion with Siri was possible in English, German and French through spoken lan-
guage. Since then, a lot has changed with digital assistants.

Fig. 4.3 What advantages do you see from using chatbots? (data source: [14], p. 18). Average agreement values on a scale from 1 (do not agree at all) to 7 (completely agree), n = 910: always available 5.9, neutrality 5.3, quick help 5.3, uncomplicated 5.1, time saving 4.9.

Fig. 4.4 What disadvantages do you see from the use of chatbots? (data source: [14], p. 24). Average agreement values on a scale from 1 (do not agree at all) to 7 (completely agree), n = 910: impersonal 5.3, unclear what happens to my data 4.7, answers are often too imprecise 4.5, I don't feel understood 4.2, I'm not comfortable interacting with a chatbot 3.5.

Fig. 4.5 How often do you interact with chatbots? (data source: [14], p. 26). Shares of respondents (n = 567): rarely 52.0%, once in a while 36.0%, once a month 7.2%, weekly 3.9%, daily 0.9%.

Fig. 4.6 Top ten use cases for chatbots (data source: [14], p. 19). For each use case, the study distinguishes between "use is conceivable" and "has already been used" (n = 910).

Thus, the digital assistants received their own “home” through the development
of smart speakers. A Smart Speaker is a speaker that is connected to the internet
and “houses” a digital assistant. Smart speakers are now offered by many compa-
nies. The best-known providers (devices) include:

• Alibaba (AliGenie)
• Amazon (Amazon Echo, Amazon Alexa)
• Apple (HomePod with Siri)

• Baidu
• Deutsche Telekom (Hallo Magenta)
• Google (Google Assistant, Google Home)
• Huawei (Celia)
• Microsoft (Microsoft Smart Speaker with Cortana Voice Assistant)
• Samsung (Bixby Smart Speaker Samsung Galaxy Home)
• Xiaomi

The quality already achieved here is demonstrated by a smart-speaker study by Gentsch and Wall [15]:

• Google Assistant takes first place in this study. It recognized the intent in 58 out
of 71 questions and provided a good answer. This means that it was able to
answer about 82% of the questions asked. Particularly noteworthy is the result of
the Assistant in the technical knowledge category, where it answered 100% of
the questions. None of the other voice assistants achieved this in any category.
This is probably due to Google’s expertise in the field of search engines.
• Amazon Alexa came in second place. Alexa recognized the intent in 55 out of
71 questions and gave a good answer. The rate of correctly answered questions
was about 77%. Alexa particularly shone in the categories of functions and
conversation, where she took first place. This can be explained by the integra-
tion of the Amazon online store and the numerous skills from third-party pro-
viders for Alexa.
• In the overall ranking, Apple Siri took third place. Siri was able to recognize the
intent in 49 out of 71 questions and give a suitable answer. Thus, Siri answered
about 69% of the questions asked. In the three question categories of personality,
technical knowledge, and functions, Siri took second place and shared second
place in the category of functions with the Google Assistant. Only in the category
of conversation could Siri not convince at all and took fourth place. This could
be due to the fact that Apple developed the voice assistant with the intention of
complementing the operating systems, and not for a specifically designed pur-
pose like Amazon.
• The fourth and last place in the survey was taken by Microsoft Cortana. Cortana
was able to recognize the intent in 31 out of 71 questions asked and give a good
answer. This corresponds to a rate of about 44%. Thus, Cortana is the only tested
voice assistant that could answer less than 50% of the questions asked. Only in
the category of conversation was she better than Siri and reached the third place.
This result can be explained by the fact that Microsoft pursues a different goal for
Cortana and focuses on Windows desktop devices.

The strongest international competitors mainly come from Asian countries. Particularly Bixby from Samsung in South Korea, which is mainly developed for
Samsung’s mobile devices, is a major competitor. Further competition comes from
the Chinese company Huawei with its voice assistant Celia. This is currently only
available in a few languages (German, English, French, and Spanish), as it is still
relatively new to the market. Huawei plans to further expand its market share in the

future. Another competitor is the Alibaba Group from China. The Chinese equiva-
lent to Amazon offers similar functions with AliGenie as Alexa (cf. [15], p. 24).
Many smart speakers have been supplemented with a screen in the meantime.
These are Smart Displays. Digital assistants are also increasingly being integrated
into Smart TVs and other products (such as vehicles). It becomes visible: Digital
assistants and smart speakers are pushing into ever more application areas.

4.2.2 Large Language Models: ChatGPT

Large language models (LLMs) are powerful technical tools that can process and produce human-like text. The language models have acquired this ability by "reading" very large amounts of human text data during the training phase. For this, entire libraries and half the internet were "read" in order to gain a structural understanding of language and of the information collected about the world.
In the world of large language models, roughly the following three categories can be distinguished:

• Transformer-based models
In the discipline of natural language processing (NLP), transformer-based mod-
els play a leading role. The name of these models comes from the underlying
Transformer architecture. Through the use of self-attention mechanisms, position encoding, and multi-layer neural networks, this architecture makes it possible to analyze and generate text. Here, the understanding of context and dependencies
in the text is in the foreground, which leads to precise and coherent results.
The best-known representative of this category is the GPT model from
OpenAI. These types of models are often referred to as autoregressive because
they generate text sequentially, i.e., they predict the following word based on the
previous words.
• Recurrent neural network models
The recurrent neural network models represent another category of large lan-
guage models. They also analyze word sequences, but are particularly useful
when the word order plays a critical role in understanding the overall text. Thanks
to their ability to “remember”, they can recognize sequential dependencies in the
input text. In addition, they improve their performance by learning from their
own generated outputs, which they feed back into the network.
• Hybrid models
The latest category of large language models are the hybrid models. Their goal
is to combine the best of both worlds—the transformer-based and the recurrent
neural models. These models combine the sequential processing ability of the
recurrent neural models with the parallel processing power of the transformer
models. They have already demonstrated their potential in applications such as
text generation tools, chatbots, and virtual assistants.

Large language models are trained on the basis of so-called partially masked texts.
The neural network is confronted with texts from which parts have been obscured
in order to predict them. This prediction is then compared with the actual text con-
tent. This process is continuously repeated, while the neural network adjusts its
parameters based on the results. Over time, it develops a mathematical model that
represents how words relate to each other and in what order they appear in phrases
and sentences.
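The principle can be illustrated with a deliberately tiny sketch: instead of a neural network with billions of parameters, a simple count-based model estimates which word most probably follows a given word. The training text is a hypothetical toy example; real language models work with far larger contexts and data sets.

# Toy illustration of next-word prediction, the core task language models are trained on.
# A count-based bigram model stands in for the neural network.
from collections import Counter, defaultdict

training_text = "the customer asks a question and the chatbot answers the question"
words = training_text.split()

# "Training": count which word follows which word in the text
follows = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word from the training text."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("chatbot"))  # 'answers'
print(predict_next("the"))      # 'customer' (ties are broken by first occurrence)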
The performance of a large language model is closely related to the size of the
neural network: the larger the network, the higher its learning capacity. In addition,
the quality and size of the dataset with which the model is trained influences the
accuracy of its predictions. A model trained with high-quality and carefully com-
piled texts can learn a greater variety of word sequences and thus make more precise
predictions. There are a multitude of language models developed by various research
teams and companies around the world. Here are some of the most well-known:

• BERT (Bidirectional Encoder Representations from Transformers)
Developed by Google, BERT revolutionized the NLP world by introducing bidirectional processing of text data.
• GPT (Generative Pretraining Transformer)
Developed by OpenAI, GPT has caused a stir as it can generate texts with aston-
ishing coherence and accuracy.
• RoBERTa (A Robustly Optimized BERT Pretraining Approach)
A model from Facebook AI, which builds on Google’s BERT and achieves better
results through an optimized pretraining method.
• LLaMa (Large Language Model Meta AI)
LLaMA is an open model from Meta that can run on its own hardware and is
customizable.
• Bard
Bard is Google's response to OpenAI's chatbot ChatGPT. Bard can conduct
human-like dialogues and create texts. The name Bard (English for poet or bard)
was chosen because the chatbot is optimized for the formulation of language.
• DALL-E
A model from OpenAI, which is based on GPT and specifically trained to gener-
ate images from textual descriptions.

These models represent the pinnacle of progress in language modeling and open up
a multitude of applications. The following will delve deeper into the large language
model GPT. GPT stands for Generative Pretrained Transformer. The US com-
pany OpenAI already developed a particularly powerful, deep learning-based and
pretrained language model in 2020. The system, named Generative Pretrained Transformer 3 (GPT-3), can independently write, summarize, supplement, simplify
and translate texts. It is also capable of answering questions and conducting entire
dialogues. In doing so, it takes into account syntactic dependencies even across
sentence boundaries. Even though the texts created by GPT-3 are sometimes hard to

distinguish from human-made ones, a final check by humans is not only sensible,
but simply indispensable.
To achieve this level of performance, the language model was trained with a
dataset of 45 terabytes. This corresponds to about 9600 DVDs. This includes the
contents of Wikipedia, online forums, web texts as well as databases with the con-
tents of books. Due to this amount of data, GPT-3 can answer questions and solve
tasks for which the system was not explicitly trained. Therefore, the system is con-
sidered “pretrained”. It does not need to be retrained to answer questions that have
never been asked before. Currently, the language model is based on 175 billion
parameters. For comparison: The language model Turing-NLG from Microsoft has
"only" 17 billion parameters (cf. [16]). GPT uses Reinforcement Learning from Human Feedback (RLHF).
How did OpenAI manage to create unprecedented visibility for applications of artificial intelligence? It was the chat interface implemented by OpenAI that allowed convenient and intuitive use of an AI solution and became famous as ChatGPT. Once again, convenience has promoted worldwide acceptance of a new solution. The version of ChatGPT introduced in November 2022 was based on GPT-3. The number "3" indicated that there were already predecessors of this language model, which had largely remained invisible to the broad public. The comfortable user interface of ChatGPT ensured the global breakthrough and triggered shockwaves at the search engine operator Google, including the proclamation of "Code Red", its highest alert level.

Note Box
Convenience and low costs are the decisive drivers for the breakthrough of a
technology. This is also the case with ChatGPT!
ChatGPT reached one million users after just 5 days, and 100 million after 2 months!

ChatGPT can understand and respond to questions and orders in natural language in real time. Thus, ChatGPT is an AI chatbot. The language model used here quickly provides users with helpful answers to all kinds of questions. Various languages can be used. This AI chatbot can be used, for example, in customer service. ChatGPT and other systems can also create code in various programming languages, such as CSS (Cascading Style Sheets), JavaScript and Python.
An exciting application for programmers is offered by the Copilot of GitHub. The company promotes its solution as follows [17]: "GitHub Copilot uses the OpenAI Codex to suggest code and entire functions in real-time, right from your editor." Such applications make it possible to create computer programs, even without extensive programming knowledge.

Note Box
The exciting thing about GPT applications is that the language model itself does not understand what it is actually talking about. The texts generated by GPT are solely the result of mathematical calculations and not an expression of an understanding of the nature of the world. In plain language, this means: Even if the statements sound ever so well-formed, these AI systems cannot evaluate the truth content of their own statements.
The reason: GPT has no world knowledge of its own, even if it sometimes seems so. The texts of GPT are based solely on the probability of a sequence of words! To put it simply:
GPT is a next-word predictor!
GPT compensates for the lack of understanding of facts based on insight with enormous computing power and a gigantic amount of data. This is why we should actually speak of simulated intelligence instead of artificial intelligence!

Compared to Data-to-Text applications, the user has no control over the gener-
ated text when using GPT. Anyone who has worked with ChatGPT is familiar with
surprising results—which often have nothing to do with reality. This is especially
the case when there were few clues to answer questions in the training material.
This is often the case with niche topics and specialized questions. If there are few
training data available, it is referred to as Closed Domains. In order to still be able
to provide answers, facts are invented. This is referred to as Hallucination—GPT
compensates for ignorance with freely invented, but quite reasonable sounding
statements. Therefore, GPT-generated texts must always be checked for truthfulness
and accuracy before they are circulated. Consequently, GPT solutions should primarily be used for brainstorming and thus as a source of inspiration. However, one's own intelligence should not be delegated to GPT.
What happens when ChatGPT begins to hallucinate will be illustrated here. In April 2023, the author asked the paid version ChatGPT Plus (which in mid-2023 works with GPT-3.5 or GPT-4.0) the question: "What can you tell me about Professor Ralf T. Kreutzer?" The answer given on 26.04.2023 can be found in Fig. 4.7. All the
answers circled there are 100% wrong!
On 28.4.2023 the author made another attempt. The answers can be found in
Fig. 4.8. Here too, the answers are mostly wrong. The incorrect answers are also
circled in red.
To find out why ChatGPT was so wrong, the question shown in Fig. 4.9 was
asked on the same day. This answer is not really convincing.
Especially with one's own person, one has, ideally, the most knowledge. Then one can easily check the results presented by ChatGPT. But what about when users trust the confidently and fluently formulated statements in areas where their own knowledge is lacking? Especially when no verification in the sense of a cross-validation is carried out, that is, a comparison with the results from other (more reliable) sources.

Fig. 4.7 Question to ChatGPT on 26.04.2023—and mostly wrong answers



Fig. 4.8 Question to ChatGPT on 28.04.2023—and mostly wrong answers



Fig. 4.9 Question to ChatGPT about its own response behavior on 28.04.2023

Fig. 4.10 Question to ChatGPT about literature sources on the Stakeholder-Onion-Model

This caution in dealing with ChatGPT is also indispensable when studying
sources. This is shown in Fig. 4.10. Here, ChatGPT was asked for sources on the
Stakeholder-Onion-Model. None of the sources mentioned there are correct—nei-
ther title nor co-authors nor publication date. All made up.
Here too, it is exciting to see how ChatGPT explains its erroneous statements
(see Fig. 4.11). One would like to shout at ChatGPT: "If you had kept silent, you
would have remained a philosopher.” Depending on the dialect, it could also be “If
you had kept your mouth shut, no one would have noticed that you’re talking non-
sense” or “If you had kept your mouth shut, no one would have noticed that you’re
stupid.”

Food for Thought
An additional danger lies in the fact that texts generated by hallucination are
circulated by people without checking. As long as other relevant and correct
texts are still missing, the content generated incorrectly by GPT becomes the
new GPT training material. This increases the spread of misinformation...

Fig. 4.11 Question to ChatGPT about the erroneous literature sources

Example: AI-Generated Magazine of the Burda Publishing House
In light of such results, an experiment by the Burda Publishing House probably should not have taken place: The publisher had a special edition of the magazine Lisa Cooking & Baking created entirely by ChatGPT. The issue contained 99 "Brilliant Pasta Recipes for Gourmets", which were formulated by ChatGPT. The pictures did not show any actually prepared dishes but were created by the image generator Midjourney. However, the readers were not informed about the AI-driven production of the magazine. The publisher justified this by saying that the readers should approach the product without prejudice and evaluate it under realistic conditions. Perhaps, with "AI" named as the sender, it would not have been bought for €2.99 either.
The experiment met with criticism, especially from the Bavarian Journalists
Association. This association described the experiment as reckless. The asso-
ciation argued that the trust of the readers would be undermined. Furthermore,
the obligation to tell the truth and the duty of care of the press code should
also apply when working with AI. The association hinted that the approach of
the Burda publishing house would show a disregard for journalistic work (cf.
[18], p. 14).

The further developed version GPT-4 was released by OpenAI on March 14,
2023. GPT-4 was trained on Microsoft Azure AI supercomputers. According to
OpenAI, GPT-4 is ten times more powerful than its predecessor GPT-3.5. GPT-3
was trained with 175 billion parameters. GPT-4, on the other hand, is said to have
been trained with 100 trillion parameters. This also involves a better alignment.

Alignment describes the most precise possible match between the user’s question
(prompt) and the text output generated by GPT (response). To increase alignment,
for example, context information of a text is taken into account to improve the
model’s predictions. This should make it even more similar to human language use
than previous GPT language models. In addition, OpenAI claims that GPT-4 pro-
vides factually correct answers with a 40% higher probability than was possible
with GPT-3.5. GPT-4 was also designed as a multimodal model. Now users can
also use images as input (cf. [19]).
ChatGPT and other large language models will have a massive impact on various
business areas. ChatGPT is capable of independently creating texts for content
marketing. For this, texts can be automatically summarized or rephrased. In addi-
tion, content can be classified and all kinds of questions can be answered. This is
where Google sees the greatest risk for its search engine—especially since ChatGPT
has already been integrated into the Microsoft search engine Bing. Who wants to
work through thousands of hits on Google when ChatGPT presents the answer in
well-formulated text? Another exciting field of application is the translation and
conversion of language (including programming languages). Many companies are
reorganizing their software development to exploit the possibilities of ChatGPT.
Four different possibilities for the use of the ChatGPT technology can be dis-
tinguished (cf. [20]):

• Classic Approach
By entering a text prompt, results are obtained via a web-based interface. This form of use dominates today's view of ChatGPT.
• Prompt Engineering without APIs
In prompt engineering without APIs, a service like ChatGPT is used in con-
junction with other technologies as part of a workflow. Such a workflow can
be created manually or with the help of technologies like Robotic Process
Automation (RPA).
• Prompt Engineering via APIs
In prompt engineering via APIs, prompts are created programmatically and the results are also evaluated programmatically (see the sketch after this list). In this way, ChatGPT can be integrated directly into other applications.
• Custom Creation/Direct Interaction with a Base Model
In this approach, a company-owned version of GPT or another language model
is used for a customized implementation.
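
To illustrate the API-based variant, the following is a minimal sketch in Python. It assumes the official openai package (version 1.x) and an API key in the environment variable OPENAI_API_KEY; the model name, the helper function summarize_feedback and the example prompt are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: programmatic prompt engineering via the OpenAI API.
# Assumes the "openai" Python package (v1.x) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_feedback(feedback_text: str) -> str:
    """Send a programmatically built prompt and return the model's answer."""
    prompt = (
        "Summarize the following customer feedback in two sentences "
        "and state whether the tone is positive, neutral, or negative:\n\n"
        f"{feedback_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # low temperature for more consistent output
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize_feedback("The delivery was late, but the support team fixed it quickly."))
```

The returned text can then be processed further in the same program, for example by routing feedback classified as negative to a service employee.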

One thing is certain: By opening up further fields of application for ChatGPT and
other large language models, work in many corporate areas will change. New jobs will be created, while others will be redefined or become obsolete. The net effects on staff numbers will vary greatly depending on the
industry, location, company size, and business model (see [20]).

Note Box
When using the various AI engines, we should always keep in mind that in
most cases we are expanding the training data pool of the AI systems with our data. In this way, our usage helps to feed these systems.
Therefore, we should refrain from entering and processing internal informa-
tion, personal data, trade secrets, and other sensitive data through third-party
AI applications. In any case, the respective data protection guidelines of the
AI providers must be checked before use.

An interesting use case for the GPT model is provided by QNOVA, a solution
from AC Süppmayer (see [21]). Without training, QNOVA can provide deep insights
into various aspects of customer service. The system analyzes and answers, among
other things, the following questions:

• What were the topics in customer contact?
• Was the customer’s concern resolved?
• How satisfied was the customer?
• Were the customers correctly authenticated by the staff?
• Did the employee behave empathetically?

This makes it possible to automatically evaluate the processes in customer dialogue. A management dashboard supports role-specific, multidimensional evalua-
tion as well as real-time monitoring—regardless of the location of the call center
agents. In addition, QNOVA enables digital onboarding and much more.
What is the general willingness in Germany to use ChatGPT and other large
language models? To determine this, 603 companies with 20 or more employees in
Germany were surveyed in a representative study commissioned by the Digital
Association Bitkom. The specific questions were: “Have you ever read or heard
about ChatGPT?”, “Are such AI applications for text generation like ChatGPT
already in use in your company?” and “To what extent do you agree or disagree with
the following statements about AI-based text generators like ChatGPT?” (see [22]):

• The discussion about ChatGPT has reached the executive floors of companies.
There is hardly anyone left who has not heard or read about ChatGPT. 16% have heard something but are still unsure what exactly it is. 41% have a rough idea.
42% of the respondents can explain the topic well.
• 56% of companies see AI-supported text generation as the biggest digital revo-
lution since the smartphone. However, 40% also suspect it is just a passing hype.
• 70% expect that the AI for text generation will become part of everyday life in
the future.
• 17% and thus every sixth company plans to use such AI applications. Another
23% do not have specific plans yet, but can imagine using them.

• It is surprising that almost every third company (29%) excludes such AI use for itself. It would be interesting to find out how many of the employees of these companies already use ChatGPT—privately and/or professionally!
• It is interesting that every fourth company (exactly 25%) has not yet dealt with
possible areas of application. Is a new technology being overlooked in German
companies?
• Officially, such AI applications are not yet in use in any of the companies.
• The impact on the labor market is viewed differently: 51% expect a lower staffing requirement, and 40% even expect certain professions to disappear.

Food for Thought


The large language models like ChatGPT will become part of everyday busi-
ness sooner rather than later—just like the internet today. Therefore, all
companies should deal with the various fields of application. This also offers the opportunity to compensate for the foreseeable shortage of skilled workers, but only if these opportunities are actually used. The same applies here:
Waiting is not a strategy!

With the large language models, a small step is also taken towards the development of an Artificial General Intelligence (AGI). Auto-GPT or Autonomous GPT contributes to this. The AI agent tries to achieve a goal defined in natural language. To do this, the task is broken down into subtasks that are processed step by step. The AI agent uses the GPT-4 or GPT-3.5 APIs from OpenAI. This development is intended to enable the autonomous processing of tasks in the future. Auto-GPT is an open-source application.
Another development is Baby Artificial General Intelligence or Baby AGI. This is an AI concept for creating, organizing, prioritizing and executing tasks. Baby AGI works by creating tasks with predefined goals, each building on the result of a previous task. By using the natural language processing of OpenAI, new tasks are created and processed based on the goals. The system runs in an endless loop and is executed in four steps (cf. [23]); a simplified sketch follows the list:

• The first task is selected from the task list.
• The task is sent to the execution agent, who executes the task depending on the
context using the OpenAI API.
• The result is stored in Pinecone, a platform for machine learning.
• New tasks are created and prioritized based on the goal and the result of the pre-
vious task.
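
A strongly simplified sketch of such a task loop is shown below. The language model call and the Pinecone vector store are replaced by placeholder functions and an in-memory list, so execute_task, create_new_tasks and the bounded loop are illustrative assumptions rather than the actual Baby AGI code.

```python
# Strongly simplified sketch of a Baby-AGI-style task loop.
# The LLM call and the Pinecone vector store are replaced by placeholders.
from collections import deque


def execute_task(task: str, context):
    """Placeholder for the execution agent (would call an LLM API in reality)."""
    return f"Result of '{task}' (context size: {len(context)})"


def create_new_tasks(objective: str, last_result: str):
    """Placeholder for task creation; a real agent would ask the LLM for follow-up tasks."""
    return []


objective = "Collect arguments for introducing a voice assistant"
task_list = deque(["Draft an initial task breakdown"])
results = []  # stands in for the vector store

for _ in range(5):  # bounded loop instead of an endless loop
    if not task_list:
        break
    task = task_list.popleft()            # 1. select the first task from the task list
    result = execute_task(task, results)  # 2. execute it, taking the context into account
    results.append(result)                # 3. store the result
    for new_task in create_new_tasks(objective, result):
        task_list.append(new_task)        # 4. create and prioritize new tasks
```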

We see here the beginning of further exciting developments that build on the large language models. It is important not to lose touch and to open up promising areas of application early on.

Food for Thought


With the large language models like ChatGPT as well as with the generation of image and video content, a large legal issue opens up: All AI engines are trained with material created by people—wherever and whenever. Thus, the AI engines draw on the intellectual property of third parties. However, these third parties—at least so far—are neither asked for their consent, nor are they compensated for their intellectual achievements. This use of AI pushes copyright law to its limits.
ChatGPT was trained with the protected code of human programmers so that it can write code itself—again without their permission. Stability AI, for example, is alleged to have used twelve million photos from the image agency Getty without permission to train its own AI engine. Future judgments will show where the legal boundaries are to be drawn and how the rights holders are to be involved (see [24], p. 53).

4.2.3 Social Bots and Social Media Listening

So-called Social Bots are a special form of bots. These are language-based sys-
tems that are active on social media. These bots can independently like and forward
posts. But they can also write texts themselves, communicate content from third
parties and enter into a direct dialogue with other users. For this, the bots indepen-
dently analyze the content disseminated on social media. As soon as the bots recog-
nize predefined keywords or hashtags, they can become active independently and
join these dialogues. For this purpose, the Social Bots operate from their own
accounts on social media.
The problematic aspect of Social Bots is that they sometimes pose as real people.
In this case, they are so-called Fake Accounts. These are social media accounts
whose profiles look like those of real people, but actually stand for robots. These
fake accounts are often used to reinforce certain political positions and thus manipu-
late elections. Such bots also regularly try to connect with real people. For this,
photos of women with an erotic aura are often used. This is intended to increase the
willingness to connect with them. In this way, these bots not only build a large net-
work, but can also access features of the network partners. In sum, this allows for a
greater spread of certain topics—from politics to economics to socially and health-
relevant issues. Social Bots can, however, also transparently identify themselves as robots in order to offer various forms of assistance.

Note Box
Due to the comprehensive reporting on the manipulative use of Social Bots in
election campaigns and other (political) disputes, Social Bots are viewed very
critically in Germany. This should be taken into account when considering
their own use.

The internet offers a wealth of exciting information. This can be vital for a com-
pany. For example, potential problems with a company's own offers can be detected early through
statements on social media. But also the general opinions of customers about com-
panies, brands, products and services can be found here. In addition, a variety of
suggestions for innovations can be gained here. The only question is:
How do you find the most relevant needles in this digital haystack?
To find these needles, Social Listening Tools are used. Social
Listening refers to the practice of monitoring and analyzing online communication
and conversations on social media, forums, blogs and other digital platforms. Social
listening involves tracking and analyzing opinions, reviews, and comments on spe-
cific brands, products, topics, or trends. The goal of social listening is to gain
insights and knowledge about public opinion, customer needs, market trends, and
competitive information. By obtaining this information, companies can improve
their measures and ideally make informed business decisions.
To achieve these goals, social listening involves searching the posts on social
media for relevant keywords. Such keywords can be the company’s own name, its
own brands, but also those of competitors. Other keywords relevant to the company
or the industry can also be included in the monitoring. This makes the global hay-
stack smaller, but it can still be gigantic in size.
A first and freely available option for social listening is the use of Google Alerts.
After defining important search terms under google.de/alerts, Google automatically
generates emails when online posts appear for the defined search terms. This can help
to receive news from certain areas in a timely manner, monitor competitors or identify
industry trends. Additionally, it can be tracked whether entries about one’s own per-
son, one’s own offers and brands or one’s own company appear. For this, primarily the
search functionality of Google is accessed, without the use of artificial intelligence.
The next step is to analyze and evaluate the information obtained. Once again,
this is about pattern recognition—a focus of artificial intelligence. For example,
topic clusters such as “enthusiasm”, “complaints/claims” or “wishes” can be
formed. For this, it is indispensable to also capture the tonality of posts. AI-supported
Social Listening Tools help to recognize and interpret corresponding emotions. A
particular challenge is to process not only texts, but also the contents of photos and
videos. AI-supported data tools are capable of analyzing structured, semi-structured and also unstructured data.
The great challenge is not only to capture the expressions, but also to recognize
their relevance and tonality. In determining the relevance of a statement for a com-
pany, it is necessary, for example, to distinguish between:

• Statements by unknown individuals
• Posts by (globally) well-connected opinion leaders and influencers
• Results of neutral market research institutes
• Official statements by governments or by parties in opposition
• Publications of proposed laws or of adopted laws and
• Publications of court decisions (depending on the level of affiliation to the ordi-
nary judiciary, separated by local courts, regional courts, higher regional courts,
Federal Court of Justice)

Additionally, the tonality of a statement needs to be determined. This is the area of application for sentiment analyses. Their task is to separate positive from negative
and neutral statements. Ideally, this is also successful for those that carry an ambig-
uous message. This is the case with this statement:

That was really GREAT service!?!

Is this now praise or criticism with an ironic undertone? In classifying such posts,
an AI-supported sentiment detection is increasingly used. The information
obtained is often classified into the categories “positive”, “neutral” and “negative”
and substantiated with examples in corresponding result reports. The great chal-
lenge in evaluating and assessing information from the net and especially from
social media is the distinction between fact, opinion, and populism.
Another key question is: What is the sender’s intention?
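
How an AI-supported sentiment detection can look in a first experiment is illustrated by the following sketch. It assumes the Hugging Face transformers library; the default (English) model used here is an assumption for illustration, and genuinely ironic statements such as the example above will still often be misclassified.

```python
# Minimal sketch of AI-supported sentiment detection with Hugging Face transformers.
# Assumes: pip install transformers (a default English model is downloaded on first use).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

posts = [
    "That was really GREAT service!?!",
    "Delivery took three weeks and nobody answered my e-mails.",
    "The new app works fine.",
]

for post, result in zip(posts, classifier(posts)):
    # Each result contains a label (POSITIVE/NEGATIVE) and a confidence score.
    print(f"{result['label']:8} ({result['score']:.2f})  {post}")
```

In practice, such labels and confidence scores would feed into the result reports described above.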
Today, various AI tools for social listening are available. The company SentiOne [25] promises to detect about 80% of the mentions of your own brand in online discussions outside your own channels. The real-time observation can take
into account an unlimited number of keywords and monitors over five million
sources. This makes it possible to track in real time how people are talking online
about your own company, your own brands, competitors, and the industry. The use
of AI helps to interpret the statements in terms of content and to identify positive,
neutral, or negative user voices.
A comparable concept is offered by Emplifi [26]. The concept also allows you
to be notified 24/7/365 about sudden changes and anomalies in online activity
directly on the platform or by email. This support is provided in more than 100
languages and in real time. The most important insights from social listening can be
directly forwarded to the responsible teams. Here too, AI is used to analyze men-
tions and brand-related content in social media. It becomes possible not only to
determine snapshots of the current mood, but also to continuously get an overview
of the trends and insights important for your own company. Through this Always-On
Social Listening, critical phases of the customer journey can be identified and elim-
inated early on.
The provider Mention [27] monitors over a billion sources on the internet every
day. These also include press articles, news and review sites, as well as forums and
blogs. The company helps to identify relevant trends in conversations and to iden-
tify important content in social media. The concept of Levity [28] also enables the
immediate classification of free text data. This includes, for example, online reviews,
interview protocols, or email feedback. Keyhole [29] also offers an exciting
approach to social listening.
Summetix (formerly known as ArgumenText) uses Argument Mining. This is intended to analyze customer feedback and identify technology and
innovation trends. Summetix is a spin-off of the Ubiquitous Knowledge Processing
Lab at the Technical University of Darmstadt. Modern text mining methods can
assign relevant meaning to unstructured texts. If scalable infrastructures are used for
this purpose, even very large amounts of textual data can be processed in a short
time. The AI-based software from Summetix reduces customer feedback to central
core information in the first step. These can focus, for example, on specific error
patterns or strengths of the offer. In the second step, similar statements are grouped
into clusters. In the third step, the clusters formed can be evaluated in connection
with metadata from customer feedback. Such metadata includes, for example, the
manufacturing date, the product category, and the respective sales market. The
insights gained in this way can be used for product development and for optimizing
communication campaigns (cf. [30]).
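
A minimal sketch of the second step, grouping similar statements into clusters, might look as follows. It uses scikit-learn with a simple TF-IDF representation as a stand-in; the actual Summetix software works differently, and the example feedback and the number of clusters are arbitrary assumptions.

```python
# Minimal sketch: clustering similar customer statements (step 2 of the process).
# Assumes scikit-learn; TF-IDF plus k-means is only a simple stand-in for the vendor's method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

feedback = [
    "The battery drains far too quickly.",
    "Battery life is disappointing after the update.",
    "Great design, I get compliments all the time.",
    "The charger broke after two weeks.",
    "Looks beautiful on my desk.",
    "Charging cable stopped working.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(feedback)
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(vectors)

for cluster_id in sorted(set(labels)):
    print(f"Cluster {cluster_id}:")
    for text, label in zip(feedback, labels):
        if label == cluster_id:
            print(f"  - {text}")
```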

4.2.4 From Avatar to Digital Human

The chatbots already introduced can also be represented with an avatar. The term
avatar originally comes from Hindu mythology and describes the descent of a deity
to earth in an incarnated form or a manifest shape. An avatar is therefore the incar-
nation of a god. In the digital age, an avatar is understood to be a static or moving
image or another graphic representation that acts as a representative for a person.
This avatar can embody a real person (e.g., in a video game) or represent an
employee of a company.
A chatbot avatar can give the chatbot a more personal and/or company-typical
character. When designing a chatbot avatar, the following characteristics can be
distinguished, among others:

• Avatars in the form of a cartoon or an illustration
• Avatars as a picture of a person

When designing the chatbot in general and thus also the avatar, it must be decided
how the tonality of the dialogues should turn out. This starts with addressing the users (informal “Du” or formal “Sie” in German). It must also be determined how serious or humorous a dia-
logue should be conducted. The decisions made here should match the brand values
of the company (cf. in depth [3], p. 268–270). It is important that users feel that they
are being digitally guided by an avatar to complete a certain task.
Avatars are further developed in the form of Digital Avatars and Digital Humans.
Digital Avatars correspond to comic characters and are therefore only human-like
(humanoid). Digital Humans, on the other hand, look like real people and should
appear as human (android) as possible. Examples of this can be found in Fig. 4.12.
A Digital Human can use human body language. Supported by AI, such a Digital
Human can interpret the activities of the dialogue partners and act not only with the
appropriate facts, but also with appropriate non-verbal reactions. A Digital Human
uses artificial intelligence for a human-like conversation. Both Digital Avatars and Digital Humans can be used as digital service staff to answer customer inquiries.
They move their mouths, change their facial expressions, and also use supportive
gestures.

Fig. 4.12 Digital humans and digital avatar in customer service (© Zhuiyi Technology, All Rights
Reserved)

This type of digital service staff is penetrating more and more fields. Their use is
easily scalable. They can support many thousands of customer contacts every day—
at manageable costs. The question is how the acceptance of these digital service
forces will turn out among customers (cf. [31, 32]).

Think-Box: Questions You Should Ask Yourself!


• What fields of application for chatbots and digital assistants do we see in
our company?
• In what way can ChatGPT and other large language models be used in the
company?
• Which performance fields are threatened by these developments?
• Where is the use of large language models like ChatGPT particularly
worthwhile for our company?
• Can we use social bots effectively?
• Do we use systems for social media listening?
• What opportunities for the use of avatars are available?
• Where could Digital Humans be used to add value?
• Who is dealing with these issues in our company?

4.3 Summary

• Chatbots are already widely used.
• Digital Assistants make it easier for many people to handle a variety of tasks.
• ChatGPT and other large language models are opening up more and more
fields of application.

• Social Bots have a great influence on communication in social media.
• Companies should use Social Media Listening to keep track of relevant discus-
sions in social media.
• Avatars and Digital Humans enable new forms of communication—often in
several languages.

4.4 Design of Voice Marketing

Digital assistants offer a multitude of fascinating application possibilities in the field of marketing. Voice Marketing refers to the development, implementation and
monitoring of all corporate activities that focus on current and future markets and
use spoken language as a communication, sales and/or distribution channel. Many
of these activities are already present in conversations with digital assistants and
contribute significantly to brand building and revenue growth. In addition, voice
marketing is becoming increasingly important in building and maintaining long-
term profitable customer relationships (see customer dialogue in [2]). Furthermore,
voice marketing also deals with the question of how voice applications can be inte-
grated into existing products and services to strengthen the brand and the associated
customer relationships and/or increase the added value (see [33–35]).
Currently, companies that use digital assistants still show a certain reluctance to
engage more intensively with the options of voice marketing. It seems that many
responsible executives have not yet fully grasped the potential of voice marketing.
The driving forces behind the development of current voice systems are mainly the
providers of digital assistants themselves. They invest a lot of time and resources
to increase their revenues and consolidate their market shares. On the one hand, the
underlying technology is constantly improved, on the other hand, new generations
of smart speakers are continuously introduced, which link voice interfaces with
photo and video content. In this dynamic between customers, companies and sys-
tem providers, the companies have so far shown the greatest reluctance. However, it
is urgently necessary to overcome this reluctance soon, as the clever use of voice
interfaces can offer competitive advantages.
How dynamically the sale of smart speakers has developed is shown in Fig. 4.13
(see [36]). Since the start of sales in mid-2016, the sales volumes have continuously
grown. The highest sales figures in the 4th quarter of the analyzed years are due to
the Christmas business.
In 2016, Amazon with its product Amazon Echo held the market leadership. To
this day, Amazon as First Mover in smart speakers holds the largest global market
share. Google entered this market as Second Mover and was also able to establish itself in a top position. Baidu, a Chinese provider, started its activities in this
area only in 2018 and today, just behind Google, ranks third in terms of market
share. The Chinese companies Alibaba and Xiaomi, which started their activities
significantly later, are mainly successful in their home country. Apple, the company
that introduced the first digital assistant with Siri, entered the smart speaker market
late, but has since also gained a dominant market position.

Fig. 4.13 Global sales of smart speakers by manufacturers—in million units per quarter (data source: [36]); manufacturers shown: Amazon, Google, Baidu, Alibaba, Xiaomi, Apple, JD.com, Sonos, Harman Kardon/JBL, others

Fig. 4.14 Types of use of voice assistants in everyday life 2022 in Germany (data source: [37], p. 9): control household appliances (89%), play music or listen to the radio (84%), start calls (77%), call up traffic news (47%), search queries/internet research (41%), find out sports results (39%), get cooking recipes (34%), ask for directions (33%), check the weather forecast (29%), find out bus and train departure times (25%)

In 2021, every third household in Germany already had a smart speaker—a trend that is still rising. Here, Amazon Echo (Alexa) dominates the market with a
75% market share, followed by Apple HomePod (Siri) with 13% and Google Home
(Google Assistant) with 12%. The dominant applications of voice assistants in
everyday life in Germany are shown in Fig. 4.14 (cf. [37], p. 9).
The importance of the Voice Engine Optimization described in Sect. 4.4.3 arises
from the realization that already 41% of users start search queries or research via
the voice interface. Every company that aims for online visibility should therefore
also strive for auditory visibility—in other words, audibility. From a search query
started via “Voice” to an order placed via “Voice” is then only a small step!
To make intensive use of the possibilities of voice marketing, you should deal with the following fields of action (see Fig. 4.15). The indi-
vidual tasks build on each other. The success of implementation must be thoroughly
checked at each stage.

Fig. 4.15 Areas of responsibility in voice marketing: corporate/brand language (how does the company speak, from the language to the choice of terms to the grammar?), voice branding (how does the company sound when it speaks, and what content is provided?), voice search (how easily are the company and its offers found in language searches?), voice commerce (how well does the voice sell?) and voice integration (how is the voice integrated into products and services?), all accompanied by voice controlling (how successful has voice marketing been so far and what opportunities are there for optimization in the future?)

First, the Corporate Language or the Brand Language must be defined in
order to enable communication with users via voice. After that, Voice Branding
deals with the question of how a company or a brand should sound when it uses
language. It also includes the creation of relevant content for output via voice. In the
area of Voice Search, the optimization of the voice search engine is important, the
so-called Voice Engine Optimization. If convincing work has been done here, the next step towards Voice Commerce can be taken. Here, the sale is to be concluded via a voice interface. Another step is Voice Integration. Here, voice
applications are integrated into products and services. As with any marketing con-
cept, the success of voice marketing is ensured by regular checks and optimizations
in the area of Voice Controlling (cf. [3], pp. 279–308).

4.4.1 Corporate and Brand Language

Voice marketing begins with the definition of the corporate or brand language. With
the Corporate Language, the focus is on the company’s own communication,
while the Brand Language focuses on specific product or service brands. First, it is determined in what language and tone communication should take place, regardless of the actual spoken word. Should the communication be rather informal, like in a
conversation with close friends? Or should the language express a certain formal
distance? In Germany, it is important to decide whether “Du” or “Sie” is used. IKEA
uses the address “Hej Ralf”, while banks (with the exception of N26) and airlines
like Lufthansa prefer the formal “Sie” form. Eurowings, on the other hand, has
recently switched to “Du” and addresses all customers in this way, regardless of
their preferences.
When designing a corporate or brand language, the identity of the company or
product brand is translated into auditory brand elements. This also includes the
determination of keywords that should be used regularly in communication and be
associated with their own offers. Here, guidelines for the use of language can be
established both in written and oral form. It is essential that the transmitted message
is clear and distinct for the recipient. It must be decided whether the communication
should be sophisticated or rather simple. Companies and brands that target an edu-
cated middle class can use a more sophisticated vocabulary than those that target a
less academic audience. Overall, the selected language and tonality must match the
target group and brand identity. A careful definition of the corporate or brand lan-
guage allows for a consistently effective communication across various language-
based channels. A more sophisticated language is often used in the B2B market
when specialists speak with specialists. In the B2C market, the communication
often targets a broad audience. Therefore, the communication here should not be too
sophisticated in order to ensure a high level of text comprehension.
The linguistic guidelines can be integrated into the creation of a brand persona.
While the Buyer Persona represents a specific target group of the company (see
[12], pp. 63–68), the Brand Persona defines the characteristic features of the own
brand. It is crucial to specify which characteristics generally characterize the brand
and which are particularly relevant in the context of voice marketing in the voice
dialogues and should always be recognizable. The brand persona serves as a source
of inspiration for the brand communication and helps to optimally design and con-
vey the brand personality. In addition, a Claim or slogan can be used as part of
voice marketing to complete the linguistic self-image (cf. [3], p. 288).

4.4.2 Voice Branding

Voice Branding refers to the aspect of brand building that focuses on the use of spo-
ken language. It builds on the specifications of the corporate or brand language and
makes it audible. In addition to language, voice branding can also include other audi-
tory elements such as music, sounds or a unique jingle. Therefore, voice branding is
a specific area of sound branding or audio branding, which deals with the acoustic
brand management. This area serves to underpin the desired image of a company or
a brand and to set acoustic anchor points for recognition (cf. [33], p. 227 f.).
Sound Branding defines how a company (Corporate Sound) and/or a brand
(Brand Sound) should sound. It represents an element of the corporate or brand
identity. The main goal is to strengthen the desired image of a company or a brand
through the use of sound and establish acoustic components for easy recognition.
The specific sound of a brand can be realized through a jingle, a sound logo or an audio logo. Sometimes the term sonic logo is used, which describes a short and
easily recognizable melody of a brand. The sound logo of the Deutsche Telekom is
a striking example. With only five tones, the acoustic identifiability of the brand is
ensured and every listener immediately knows which company is represented.
When using spoken language, the question arises whether a male or female
voice should be used. Various studies have shown that users tend to react more posi-
tively to female voices than to male voices. A Bitkom study shows the following
results (cf. [37], p. 19):

• 66% of respondents prefer a female voice.
• 71% of men prefer a female voice, 60% of women do.
• Only 10% prefer a male voice—11% of women and 8% of men.

It’s no coincidence that the digital assistants Alexa and Siri use voices that are per-
ceived as female by most users. However, in the case of Alexa, they can be adjusted
via the two skills “as a man” and “as a child”.
Danish researchers have developed a digital voice named Q that sounds gender-neutral. This voice lies at a frequency of about 153 Hz, at which the
human brain does not identify the voice as either male or female (cf. [38]).
Alternatively, an individual voice can be used, which is specifically developed for
a company. This is referred to as a Custom Voice. For the development of a Custom Voice, only a set of audio files with the corresponding transcriptions is needed. Companies can get support in the development of a Custom Voice from
specialized providers.

Example: Development of a Custom Voice


To create a custom voice (Custom Voice), companies need an Azure account
and a subscription to the Speech Service at Microsoft [39]. This provides
access to speech recognition, speech synthesis, speech translation, and the
Custom Voice portal. After a Custom Voice project has been created, the rel-
evant data (audio and text) can be uploaded either via the Custom Voice portal
or via the Custom Voice API. Subsequently, pronunciation evaluations can be
carried out. Based on the uploaded data, the model is trained, if necessary also
in different languages. The generated results can be checked over and over
again. If the created voice quality meets the expectations, the custom voice
model can be used in products, tools and applications.
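
How a trained custom voice could then be addressed from an application is sketched below. The example assumes the azure-cognitiveservices-speech Python package; the subscription key, region, voice name and deployment ID are placeholders that would have to be replaced with the values of your own Speech resource and custom voice model.

```python
# Minimal sketch: using a trained custom voice via the Azure Speech SDK.
# Assumes: pip install azure-cognitiveservices-speech; key, region, voice name
# and deployment (endpoint) ID are placeholders for your own Speech resource.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_SPEECH_KEY",  # placeholder
    region="westeurope",             # placeholder
)
speech_config.speech_synthesis_voice_name = "YourBrandNeuralVoice"  # placeholder
speech_config.endpoint_id = "YOUR_CUSTOM_VOICE_DEPLOYMENT_ID"       # placeholder

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Welcome! How can I help you today?").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Speech output played on the default speaker.")
```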

It is important to integrate the keywords from the Corporate or Brand Language. With a Custom Voice, the desired tone, the use of grammar and sentence structure,
as well as the use of dialects in the auditory area of the language must also be taken
into account. If the brand has a regional reference, such as Seitenbacher Müsli or a
football club with strong regional roots, a dialect can contribute to strengthening
identity. For supra-regional or internationally operating companies like BASF or
Siemens, this would be less suitable and could rather cause confusion among the
listeners.
The selection of a specific voice has a significant influence on the emotional
resonance of the company or the brand. In the optimal case, the brand not only
looks appealing, but also sounds pleasant. The determination of what is perceived as
“pleasant” is now no longer in the eye, but in the ear of the beholder. These prefer-
ences should ideally be anchored in the acoustic design guidelines of the brand.

4.4.3 Voice Search and Voice Engine Optimization

The Voice Engine Optimization (VEO) pursues a goal similar to Search Engine
Optimization (SEO), which is to appear at the top of organic search results gener-
ated by voice input. It is particularly critical to take the top spot, as usually only this
result is verbally reproduced. To achieve this spot, the provided information, such as
offers, locations and opening hours, must be structured in such a way that they
increase the chances of being output as a result of a voice search.

Important
Voice input will be the new clicking or typing!
Now, we no longer swipe, we speak: “Alexa” or “Hey Siri”.

The Voice Engine Optimization is gaining importance, as voice search is currently considered the fastest growing form of online search. Smartphones and smart
speakers are the most popular devices among voice search users (see [40]). However,
other devices, such as watches, wearables, televisions, and household appliances,
are increasingly offering voice search features, which further drives the voice trend.
Already today, the majority of US citizens prefer voice search over traditional key-
board input. However, the use of voice search and voice control is often still limited
to general information searches, online shopping, or the control of smart home
devices. These possibilities are particularly relevant for people with physical limita-
tions who cannot or can only limitedly use a keyboard.

Food for Thought


Have you ever thought about how your content can be integrated into digital
personal assistants? Unlike a search result list on Google with potentially
hundreds of thousands of hits, the output on Alexa, Siri and other digital assis-
tants is limited to a single answer. If you, for example, search for the best
health insurance company, a nice Italian restaurant near you, or a store where
you can buy the digital camera Sony DSC-W 830, in the future probably only a single answer will be displayed, or at most two. Traditional search
engine optimization is no longer sufficient when it comes to:
Voice first!

Currently, the competition in voice searches for individual keywords is still relatively low, as many companies still deal little with the possibilities of voice
marketing. However, due to the increasing spread of digital assistants and smart
speakers, a significant increase in competition can be expected in the coming years.
Companies that are already intensively dealing with voice marketing today, build
up specific know-how and keep an eye on the relevant criteria for the design of
online content, have a good chance of achieving the Position Zero in the voice
search. This Position Zero refers to the search result that is displayed above the
regular list of organic search results. These contents are specifically designed to be
reproduced in a voice search. So in the future, it will no longer be enough to just
land on the first page of the organic search results. In voice search, it is crucial to
provide the best possible answer in order to be played at all.

In the future, the rule will be:
Only those who achieve Position Zero will be presented to the potential customer who makes inquiries via voice.
At position zero, excerpts from various sources are displayed, including so-called featured snippets, listicles, videos, or tables. A snippet is a short preview of the content of a website in the search results, for example, on Google. Listicle is a combination of the terms “list” and “article” and refers to a journalistic contribution in list form. These various pieces of informa-
tion are provided in the Google Knowledge Panel. Google uses this knowledge
database to enrich the search results with information from various sources. The
searcher receives this information in a so-called Info-Box, which is also referred to
as a Direct Answer Box and is placed above or next to the search results—precisely
at position zero. The Info-Box already summarizes the information that is to be
output for voice search in the future.
The example in Fig. 4.16 shows an Info-Box, which was created by Google in
response to the question: “Who is Ralf T. Kreutzer?” As a result, the (dashed high-
lighted) Info-Box with central content is displayed to the right of the organic hit list.
The information presented here is retrieved by Google from various sources. These
primarily include Wikidata and Wikipedia.
If the question is asked, “How many books has Ralf T. Kreutzer already pub-
lished?” the contents are presented verbatim in both a text and a voice search, which
are marked as Speakable Content on the author’s website (see Fig. 4.17). The
contents marked as Speakable Content can be seen in Fig. 4.17 at the bottom right.
In a voice search, these contents are presented by voice.

Fig. 4.16 Search results with Info-Box for the person in the search query (source: Google)
Fig. 4.17 Results of a text and voice search—based on Speakable Content of the author’s website (source: Google)

The Search Engine Result Pages (SERPs) have evolved: they no longer consist only of the traditional categories of the organic hit list plus ads and shopping offers. They are supplemented by the Info-Box and the text for voice output.
The contents that appear at position zero do not necessarily correspond to the first
classic search results, but often provide more precise answers to asked questions. To
have the chance to reach position zero with your own content, the information about
a company, a brand, or offers must be available online on various platforms, consis-
tent, and up-to-date.
The special feature of the described Info-Box and the Featured Snippets is that
users already receive a precise answer to a question without having to open another
website. This leads to a relatively new phenomenon, the Zero Click Search. Zero
Click Searches are search queries that, after a glance at the info box, do not trigger
any further clicks, as the desired information can already be found there and no
further research is required. As a result, no other search results are clicked on, nei-
ther in the organic hit list nor in the ads. Therefore, we speak of Zero Click Search
or Zero-Click Searches.
For users, Zero Click Searches primarily offer speed and convenience, as
Google compresses the search results and often no further research is required. The
number of such Zero Click Searches is increasing significantly. Already 50% of all
search queries do not trigger any further clicks (see [41]). While this increases the
search comfort for users, it also leads to a significant reduction in traffic for com-
panies in the organic hit list. Companies are thus forced to view their websites as
answer providers and adjust them accordingly to appear with their own content on
Position Zero.
When providing content for voice search, it should be noted that voice searches
fundamentally differ from traditional keyboard-based search queries. Voice
searches are usually three to four words longer than typed queries and are often formulated in the form of questions. In spoken language, more filler words are also used. In addition, the
choice of words by users differs between text-based and voice-based inputs. Voice
searches are characterized by the frequent use of W-questions :

• Where is...?
• Where can I... buy?
• What prices...?
• Who else bought... XY?
• What alternative solutions are there to...?
• What reviews are there for...?
• How can I... start up?
• Why doesn’t... work as planned?

This illustrates that people speak differently than they write. Voice queries are usu-
ally made in complete sentences. In the future, when presenting online content,
the different target groups need to be considered more differentiated. So far, search
engine optimization (SEO) has focused on two target groups: the crawlers of search
engines and human users (see [42]). In the future, it is also important to distinguish
the two types of human users as follows:

• Typer: Users who initiate searches through written inputs
• Talker: Users who initiate searches through spoken language

The Typer is often satisfied with a more extensive information offer, while the
Talker wishes for precise information. Companies must meet both expectations.
In this context, online content for voice requests should no longer be optimized
only for individual keywords. Instead, the information presented should be aligned
with the above-mentioned questions and optimized through the use of longtail
keywords.

Longtail Keywords
Longtail Keywords are key terms that appear in niches and are specifically
tailored to certain target groups. They allow a more precise definition of the information goal and thus significantly narrow the number of hits. Search engines can thus provide almost perfectly matching information. The term “long tail” was made known in 2004 by the book The Long Tail by Chris Anderson. In the book, the author describes the
development from mass to niche products and the need to adapt to ever more
narrowly defined user groups. Companies with digital products generate a
large part of their sales with a variety of niche products (also referred to as
C-products) and increasingly less with a few bestsellers (A-products). This
goes hand in hand with a more diverse range that can also appeal to smaller
target groups, which are large enough worldwide to be of interest to compa-
nies in the digital sector [43].

To find out which brand-specific questions are being asked, you can use free
question tools like answerthepublic.com. An example for the brand Montblanc is
shown in Fig. 4.18. The results list the questions that have already been asked online
with the keyword Montblanc. The content provided online for voice presentation
should focus on the Pareto questions. These are the questions that make up about
80% of the total volume of questions.
Figure 4.18 illustrates the importance of integrating Conversational
Keywords into online content. These keywords should be presented in a con-
versational tone, similar to a normal conversation between people. In this way,
Fig. 4.18 Most common questions about the brand “Montblanc”—analyzed by answerthepublic.com

relevant content can be more easily identified in voice searches. The challenge
for every company is to anticipate the questions that users would like to have
answered. Therefore, it is important to prepare online content with FAQs. To
increase visibility in voice searches, the corresponding W-questions should pre-
cede the answers. Online content that is supposed to answer questions can already
include frequently asked questions in their headlines. Alternatively, appropriate
questions can be integrated into the text to quickly guide search engine robots to
the appropriate answers for voice queries.
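
One common way to make such question-and-answer pairs machine-readable is the FAQPage structured data from schema.org. The following sketch simply prints the corresponding JSON-LD from Python; the questions, answers and address details are hypothetical examples and would have to be replaced with your own FAQ content.

```python
# Minimal sketch: generating FAQPage structured data (schema.org) for W-questions.
# The question and answer texts are hypothetical examples.
import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Where can I buy Montblanc products in Berlin?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "The Montblanc Boutique Berlin is located at Friedrichstrasse 80.",
            },
        },
        {
            "@type": "Question",
            "name": "What are the opening hours?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "The store opens at 10:30 and closes at 19:00 (hypothetical example).",
            },
        },
    ],
}

# Embed the output in the page inside <script type="application/ld+json">...</script>.
print(json.dumps(faq, indent=2))
```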
When creating speech-oriented content, two types of responses need to be distinguished. On the one hand, there are concise answers to questions that the
user can immediately put into action. Such answers have a high probability of being
displayed as a Featured Snippet in the info box. On the other hand, there are more
detailed answers to recurring questions that may stimulate additional thought pro-
cesses in the listener and contribute to problem-solving. Here, keywords relevant to
traditional search engine optimization can also be integrated. However, it is often
observed that only short answers are given in voice searches.
It is recommended to make relevant online content available for the local voice
search. For this, the use of Google My Business is required. In addition, it is advis-
able to register your own company and/or own brands on platforms such as Yelp or
Tripadvisor. Much of the information that Google lists in a local search comes from these data and is used in voice queries (see [42]). Such Voice Engine Optimization
is particularly important for locally operating companies and brands, as regional
topics are still among the most common voice search entries. Therefore, local com-
panies should intensively deal with the optimization for voice search.
A significant challenge for companies is to increase the relevance of possible
answers through individualization. Depending on access rights, information about
previous search processes, purchases, reviews, recommendations, location, chat his-
tories, and private user data can be used. Marketing automation plays a central
role in connecting various touchpoints in personalized trigger chains. A comprehen-
sive and well maintained CRM database and the capture and interpretation of the
respective user context form the basis for this (see [2]).

To enable context-related answers, the output data must be designed differently compared to a conventional search query. When someone asks a “where?” question,
they expect an exact answer to this question. Therefore, it is important to understand
these changed usage habits as well as the specific expectations of the users in order
to conduct effective voice marketing. A key aspect of this is that in voice search, the
searcher often expects exactly one answer.
A user who asks for the nearest cinema does not want to read treatises on the loss
of significance of cinemas in the age of streaming or information about ticket solu-
tions for cinemas and theaters. When a user in Berlin asks: “Where can I buy
Montblanc products here?”, the answer must be based on the current location,
which—with the appropriate permission—was determined by the geo-location. A
reference to the Montblanc website or to online shops of various providers does not
help the questioner. Instead, the answer should be, for example: “The Montblanc
Boutique Berlin is located at Friedrichstraße 80. The store opens today, Tuesday, at
10:30 and closes at 19:00. You can reach it on foot in 13 minutes. Shall I guide you
there?” This example shows how the consideration of the regional context signifi-
cantly increases the relevance of the answer.
The User Context can also be defined by the content of a chat history. If
two people are discussing a specific restaurant, the command to the voice assis-
tant, “Show me the menu!”, could be related to the discussed restaurant.
However, for such an interpretation, permissions must again be in place to
examine one’s own chat content for relevant keywords. For example, if a user is
reading the Wikipedia entry for “Ralf T. Kreutzer”, the command “Show me
videos of Ralf” could be narrowed down to this person when searching for cor-
responding videos. In this case, the on-screen context would be taken into
account when answering the question.
Finally, the personal context of the user could also be incorporated into the
answers. For instance, the author Klaus-Peter Wiedmann could ask the question:
“On what topic have I published multiple times in recent months?” Here too, the
core of the question could clearly be assigned to a specific person. The command
“Order me the same pizza as last week” could clearly be associated with Domino’s.
This could result in the Pizza Napoletana being ordered again (see also [44]). To be
able to answer precisely here, a real-time data analysis is required (see also [45]).
The main task—both in voice marketing and in other areas—is to provide high-
quality and appropriate content for all four phases of the conversion funnel: aware-
ness, consideration, purchase, and loyalty. The Conversion Funnel describes the
process by which a person becomes a regular customer and possibly a recommender
through various stages (conversions). It should be noted that potential buyers have
different questions in the various phases of the conversion funnel. It is important to
answer these questions with appropriate Speakable Content (see Fig. 4.19 ; see [2]
for further details).
Every company should also check which brands are played out in search pro-
cesses based on the large language models like ChatGPT. A study provides exciting
Fig. 4.19 Conversion funnel: typical touchpoints per stage are awareness (e.g., flyers, catalogs, advertisements, online banners, TV/radio spots, coupons, influencers, e-mails/e-newsletters, website/online shop, white papers, how-to videos, webinars, blog posts), consideration (e.g., customer reviews, demo videos, virtual/real store visits, price comparison sites, customer forums, chats, conversations with friends, FAQ pages, social media appearances, samples), purchase (e.g., coupons and other response elements, test offers, stationary stores, online stores, telephone sales, chats, websites, social media presence) and bonding (e.g., opt-ins for telephone/e-mail, recommendations, customer ratings/customer reviews, social media posts, online/offline dialogue, e-mails, e-newsletters)

insights on this. It was investigated what results the search engine Bing (Microsoft)
in cooperation with OpenAI—the developer of ChatGPT—comes up with when
recommending brands. For this, brand recommendations were requested for
10,000 different product categories (see [46]):

• For almost all inquiries, ten brands were recommended each time.
• Only 173 topics received no response, primarily topics from the fields of erotica,
medicine, or politics (typical for tools developed in the USA).
• In total, 27,466 different brands were recommended—on average each brand
3.6 times.
• However, the brand recommendations were not evenly distributed: 16,720 were
recommended only once—739 brands, on the other hand, at least 20 times.
• The ten most frequently recommended brands were Bosch (602), IKEA (395),
Philips (370), Nike (324), Adidas (324), Samsung (304), Sony (284), Makita
(265), Siemens (226) and Puma (224).

Note Box
It is recommended to regularly check when and how often your own company
or your own brands are mentioned in inquiries via the large language mod-
els. After all, mentions here are a perfect entry into the conversion funnel.

Currently, the voice interface is just one special form of touchpoint among many. However, in the future it is expected that more and more touchpoints can be con-
trolled via voice. In perspective, the entire digital customer journey can be managed
through voice navigation. Therefore, companies should already take measures
today to prepare for the foreseeable challenges. It should be considered what
contributions Amazon and Google make within the conversion funnel. Google will
focus in the future on supporting the search process at the beginning of the conver-
sion funnel as a voice-controlled search engine. Amazon, on the other hand, increasingly tries to lead voice-controlled search queries to a purchase com-
pletion in its own online shop.
For the entire process to work optimally from the user’s perspective, special
attention should be paid to the Voice User Interfaces (VUI). This is about the user
interfaces for voice input. These interfaces must not only be functional, but also
easy to navigate. In this way, the interest and willingness of voice users for interac-
tion via voice can be aroused. The digital assistants should be able to present the
desired content in an appealing way.
For an optimized voice search there are already various technical solutions (see
[33], p. 263). A simple and often sufficient method is the exact marking of the text
section, which should be reproduced by the digital assistant. “Text-to-Speech” con-
versions are the foundation of any successful voice marketing strategy. To mark text
passages accordingly, Google, Microsoft, Pinterest and Yandex have developed the
language markup named schema.org. These agreed markings are set as an open
web standard and indicate to a search engine crawler that the content is “speakable”.
The markup for speakability is defined on schema.org (see [47]).
The most important element of schema.org for voice engine optimization is the
Speakable markup. With Speakable, website operators can mark certain parts of
their content as suitable for the conversion of text into speech (see [48]). An exam-
ple of the impact of this marking on the search result is shown in Fig. 4.16. The
appropriate marking of relevant text passages facilitates the speech output.
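
A minimal sketch of such a Speakable markup, again rendered as JSON-LD from Python, is shown below. The URL, the headline and the CSS selectors are placeholder assumptions for your own website.

```python
# Minimal sketch: schema.org Speakable markup as JSON-LD, rendered from Python.
# URL, headline and CSS selectors are placeholders for your own website.
import json

speakable_markup = {
    "@context": "https://schema.org/",
    "@type": "WebPage",
    "name": "Opening hours and directions",         # placeholder headline
    "url": "https://www.example.com/store-berlin",  # placeholder URL
    "speakable": {
        "@type": "SpeakableSpecification",
        # CSS selectors of the text passages that may be read aloud:
        "cssSelector": ["#opening-hours", "#directions"],
    },
}

# Place the output in the page head inside <script type="application/ld+json">.
print(json.dumps(speakable_markup, indent=2))
```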
The Speakable markup allows companies to support users in voice-controlled
search. This is especially important when it comes to conveying precise information
about the company. Speech outputs can, for example, refer to the journey, available
parking facilities or opening hours. It is recommended to experiment with the Speakable markup already today. The insights gained can be used to optimize your own online content and to gradually improve the results of voice queries. The goal should be to provide voice users with relevant information via speech output and to offer further services.

Food for Thought


You should check which content is output by voice when you search for your
company or your brands. The test should also include relevant search terms of
your industry. Often the results are very sobering. Then it says “no answer” or
“Sorry, I don’t know that.”
Good for you: After all, you can be ahead of the competition with small
steps when it comes to voice-based search. You should take this opportunity!

How can companies develop their own skills for Alexa or actions for Google? For
companies that also want to implement transaction-specific voice searches, this
requires more than just the use of a markup language. It is specifically about the
development of Voice Apps, which enable dialogues and other functions with voice
assistants. Voice Apps are the equivalent of smartphone and tablet apps. Their main
goal is to have a direct conversation with (potential) customers and support them in
the various stages of the conversion funnel.
Creating a user interface in the form of a Voice App is a comprehensive project.
The success depends crucially on the Voice App being advertised after development
and as many (paying) people as possible are motivated to use it regularly. At Amazon, Voice Apps are called Skills, while Google speaks of Actions. The finished Voice Apps are provided via special Voice App Stores. Unlike traditional apps, no down-
load is required for Voice Apps. Instead, it is sufficient to activate the desired skill
on Alexa. With the Google Assistant, even the one-time activation is not necessary,
it is enough to directly address one of the existing Actions.
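To make the division of labor inside such a Voice App tangible, the following simplified sketch maps spoken intents to handler functions; it is deliberately independent of the Amazon and Google SDKs, and all intent names, functions and responses are hypothetical.

  # Minimal, SDK-independent sketch of a Voice App dialogue turn.
  # All intent names and responses are hypothetical examples.

  def handle_deal_of_the_day(slots):
      # In a real Skill or Action, product data would come from a shop backend.
      return "Today's deal is a rain jacket for 79.99 euros. Would you like to order it?"

  def handle_order_status(slots):
      order_id = slots.get("order_id", "your last order")
      return f"The parcel for {order_id} is on its way and should arrive tomorrow."

  INTENT_HANDLERS = {
      "DealOfTheDayIntent": handle_deal_of_the_day,
      "OrderStatusIntent": handle_order_status,
  }

  def handle_request(intent_name, slots):
      handler = INTENT_HANDLERS.get(intent_name)
      if handler is None:
          return "Sorry, I did not understand that. You can ask for the deal of the day."
      return handler(slots)

  # Example dialogue turn:
  print(handle_request("DealOfTheDayIntent", {}))

The real work in such projects lies less in this dispatching logic than in connecting the handlers to product, order and customer data.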

4.4.4 Voice Commerce

In addition to communication with digital assistants (Voice Communication), there is the possibility to promote the sale of products and services (Voice Commerce).
This is also referred to as Conversational Commerce or V-Commerce. Voice
Commerce is about shopping processes that run through system-based dialogues.
Already today there is a growing trend towards interaction between customers and
companies via messaging and chat applications like Facebook Messenger, WhatsApp
and WeChat. Digital assistants also play an important role in voice-based shopping.
In addition, digital assistants can enable the distribution of digital products and
services (Voice Distribution), such as streaming content from platforms like
Spotify & Co.
In which fields can Conversational Commerce be supported by chatbots and
digital assistants?

• Answering specific questions (such as payment and delivery conditions, delivery times, product features, prices, origin characteristics)
• Online customer support
• Providing personalized recommendations
• Making relevant reviews visible
• Managing wish lists
• Conducting the purchase itself, including handling of payment and sending of
the order confirmation
• Notifications about shipping
• Offers of complementary services

Conversational Commerce is driven significantly by convenience. With digital assistants, the user no longer has to switch between different media to get relevant
information. Instead, he can directly request support, make a purchase and pay via
the digital assistant—all on one device and all via voice. The support can be as
comprehensive as a convincing conversation with a qualified salesperson in a
brick-and-mortar store. Digital personal assistants fulfill three central and increasingly important customer expectations: Convenience, Speed and Individualization.

Example: E-Commerce Actions


The online retailer Otto has already developed several e-commerce actions
for Google Home and for Google on the smartphone for the German-speaking
market. With “Ok Google, talk to OTTO”, the system is activated. Those who
do not own a smart speaker or are on the go can also use the OTTO Action on
their smartphone or tablet. For this, the Google Assistant must be downloaded
from the App Store or Play Store. The activation command is then: “Talk to
OTTO”. Users can ask the OTTO Action for the deal of the day and order
directly. Of course, you can also ask about the status of an already placed
order. This way, the user receives a current shipment information and possibly
further information about an order. For this, the Google Assistant must con-
nect with the user’s OTTO account. The system is also available for further
questions about OTTO. These can, for example, relate to the returns process
(see [49]). The main focus for Otto remains the creation of a convincing
customer experience in the sense of conversational commerce.
What does such a dialogue look like when you log in with “Ok Google,
talk to OTTO”?

• Welcome to OTTO. Nice to hear you! You can ask me about the deal of the
day or I can answer frequently asked questions. What would you like
to do now?
• Question: What is the deal of the day?
• Answer: KangaROOS long jacket with printed lining. Today for 79.99 €
(plus shipping costs). This item is available in various variants. Would you
like to view these?
• Question: How much are the shipping costs?
• Answer: We charge shipping costs of 5.95 € for a normal delivery regard-
less of the order and delivery address, or 29.95 € for a freight delivery.
What else can I do for you?
• Question: Is this offer actually for men or for women?
• Answer: KangaROOS long jacket with printed lining. Today for 79.99 €
(plus shipping costs). This item is available in various variants. Would you
like to view these?
• Question: That was not my question. I wanted to know whether this offer
is aimed at men or women.
• Answer: OK. Popular questions are e.g.: “How much are the shipping
costs” or “How do I return items”. What would you like to know?
• Question: I would like to know whether the KangaROOS long jacket with
printed lining is aimed at men or women (understood as “ended”).

• Answer: Unfortunately, I cannot understand what you want from me. Please repeat.
• Question: Is the KangaROOS long jacket (understood as “Icarus long
jacket”) for men or women?
• Answer: You can find many items on otto.de or the OTTO App.
• Question: Last attempt. Is the KangaROOS long jacket for women or men?
• Answer: Unfortunately, I can’t understand what you want from me.
Please repeat.
• Question: I would like to know if the KangaROOS long jacket is for
women or men.
• Answer: You can find many items on otto.de or the OTTO App.

To put it mildly: this does not yet inspire enthusiasm.

Example: ChatGPT Integration at Zalando


The online fashion retailer Zalando plans to use a Fashion Assistant based on
ChatGPT. This should make it possible for customers to search for products
in the Zalando assortment using their own words. Questions could be, for
example: “What is the perfect dress for a summer party on the banks of the Spree in July?” or “What does one wear to a wedding in Denmark in August?” The Fashion Assistant could not only consider the occasion in its product recommendations, but also the expected weather. In a further development stage, the customers’ preferred brands as well as the delivery availability of the selected products could be taken into account. Zalando has pointed out that the service contract concluded with OpenAI is compatible with European data protection law. In addition, OpenAI may not use customer data for its own purposes (see [50], p. 20).

Note Box
When integrating ChatGPT applications into offers that process personal data, one thing must be observed: prior consent to the processing must be obtained in accordance with Art. 6 GDPR. If such consent is missing, companies must expect sanctions and liability lawsuits.

How nice it would be if the following example of telephone announcements would soon be a thing of the past due to the use of chatbots:

• For questions about the timetable, press 1.


• If you want to buy tickets, press 2.

A future phone call with a chatbot of the Deutsche Bahn could instead sound
like this:

Hello, my name is Marie. I would like to know whether my train from Munich Hbf will arrive on time.
Hello Marie, please tell me the day and time of your journey.
Today at 1:52 PM.
Have you made a reservation? Then please tell me your reservation number.
The reservation number is 12345.
The switch malfunction has just been fixed. Your train ICE278 is therefore delayed by
15 min. I have notified your connecting train ICE315 to Basel. It will wait for you. Can I do
anything else for you?

Such a conversation takes less than 60 seconds—and ideally, the hold queue can be skipped as well. After all, such an automated service can be available 24/7. At the same time, the achievable individualization delivers relevant information in its purest form. The tedious search for the corresponding information—possibly spread over different apps—becomes unnecessary.

Note Box
It becomes visible here: besides intelligent software, the development of such a customer-relevant service primarily requires comprehensive networking of various data streams to enable the chatbot to provide high-quality answers.

In the future, a dialogue with a digital personal assistant might sound like this:

Alexa, please order the Nike running shoes for me that I looked at two weeks ago in Bonn. You know where. But they should also have the two red stripes that I designed in the individual product configuration.
Ralf, I’d be happy to. Would you like to have the new running shoes for the running meet
with Sabine tomorrow afternoon?
Yes, of course, what else?
Great. I ordered them for you at Run-fit. The shoes will be placed in your DHL parcel
box at 3:00 PM. That’s why I insisted on DHL delivery. Also, I was able to get a price
advantage of 10 € because I also ordered the Nike T-shirt that you put on your shopping list
three days ago. Payment as usual.
OK.
I’m now connecting you with Prof. Wüllner. You wanted to discuss with him the advan-
tages and disadvantages of artificial intelligence. On the screen, you will find a short sum-
mary of what Mr. Wüllner has said about this in the last few weeks online and offline. I have
marked the particularly sensitive points in red …
… and this future will not be long in coming.

Note Box
The driver behind the trend towards Conversational Commerce is once
again user convenience. For this, companies must break up a multitude of
information and process silos to actually achieve a seamless integration. This
is a very time-consuming endeavor in many companies.

In perspective, Voice Ads will become part of voice search. Smart Speakers pro-
vide the necessary technology for this. The latest statistics on voice search show that
in the USA, 38% of consumers who have heard voice ads on smart speakers find
them less “intrusive”. 39% also say that they perceive Voice Ads as more appealing.
Consumers also respond more strongly to offers conveyed via voice. Almost 40% of
people who have heard a voice ad in the USA have also bought the advertised item
(cf. [40]). This positive reception of Voice Ads could have two causes:

• Voice ads offer the opportunity to address consumers on a more emotional and personal level—and thus with greater relevance.
• In addition, a novelty effect can lead to higher acceptance here. This will disappear over time as Voice Ads become more widespread—as was the case with online banners. Here too, the initial enthusiasm for online advertising quickly subsided.

PRESTIGEenterprise is a Digital Signage software that relies on Amazon’s voice assistant and can listen. This innovative software serves as a natural point of contact for retailers and end customers. It answers questions based on information provided by both retailers and end customers. With a focus on Voice Commerce, a skill management function was developed in addition to PRESTIGEenterprise, with which content for the Amazon voice assistant can be quickly and easily adapted. In the private environment, the voice-assistant point of contact can communicate information such as opening hours and upcoming events. At the point of sale, Alexa provides information on special offers and visualizes them and their locations on the screen. The retailer can use PRESTIGEenterprise to retrieve relevant data in order to monitor the status of the screens and the number of open branch apps (cf. [51]).
A major advantage of Conversational Commerce lies in the integration of a
“Buy-it-again” function. Customers can easily reorder already purchased products
by voice input. This is particularly suitable for offers in the FMCG sector. In the
future, AI systems could determine the times for repurchases based on the “reorder
level” and recommend a repurchase or trigger it automatically with permission.
The basic idea of Predictive Maintenance is also applied in the consumer market. In this case, we can speak of Predictive Servicing. The Internet of Things makes it possible for everyday objects to receive and send information online. These developments can not only meet the consumer’s desire for more simplicity, but also serve the needs of companies (see Fig. 4.1). An example of Predictive Servicing is provided by the Nespresso machine Prodigio [52]:

Prodigio is the first Nespresso machine that is directly connected to your smartphone or
tablet. It combines your daily coffee enjoyment with additional benefits. Experience a new,
unique Nespresso experience thanks to this connection.

Such a comprehensive service is possible because not only the machine, but also the customers are digitally recorded through their membership in the Nespresso Club. The purchasing behavior with the different coffee capsules can provide insights into individual preferences in order to make tailored offers. Already today, the Nespresso system informs via the app when the capsule stock is running low. In doing so, the average future consumption, the decision time (one button press to reorder is enough) and the shipping time requirement are taken into account. Such competent and proactive care can simultaneously reduce the customer’s willingness to switch to significantly cheaper capsule providers.
In the future, such a service based on Predictive Analytics could even become proactive. If the provider knows that a customer drinks more coffee in winter and prefers darker roasts, a supply package could be sent to the customer in advance, without the customer having to take action themselves. The intelligent processing of this data forms the basis for so-called Anticipatory Shipping. In the future, such replenishment could be used not only for coffee, but also for batteries, diapers, wine, beer, mineral water and many other products. There are already precursors for this.
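The logic behind such replenishment services can be sketched in a few lines: from the remaining stock, the average consumption and the delivery lead time, a reorder date is derived at which a repurchase is recommended or, with the customer's permission, triggered automatically. All figures and names below are invented for illustration.

  from datetime import date, timedelta

  def reorder_date(stock_units, avg_daily_consumption, delivery_lead_days, safety_days=2):
      # Estimate when a replenishment order should be triggered so the customer never runs out.
      days_until_empty = stock_units / avg_daily_consumption
      trigger_in_days = max(0, days_until_empty - delivery_lead_days - safety_days)
      return date.today() + timedelta(days=round(trigger_in_days))

  # Hypothetical example: 40 capsules left, 3 per day, 4 days shipping time.
  print("Suggest reorder on:", reorder_date(40, 3, 4))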

Note Box
Predictive Servicing still offers many companies a largely untapped growth potential. Tap into it for your company!

Note Box
The driver of user acceptance of voice-based systems—in professional, but especially in private environments—is convenience and speed of use. No written text is required for communication, no menu structures need to be worked through. Communication via voice alone is sufficient.
With the increasing performance of the algorithms used and with a growing data base, the dialogues can become increasingly intelligent and personal. That is why chatbots will quickly develop into very powerful intelligent personal assistants. The technical basis for this is provided by so-called Conversational AI platforms.
Digital personal assistants meet the customer expectations of Convenience, Speed and Individualization. The basis for this are Unified Profiles—individual data profiles about each individual person.

4.4.5 Voice Integration

Voice Integration refers to the integration of voice assistants like Alexa, Google
Assistant & Co. into various products and applications. This allows users to interact
with devices and applications through spoken commands and requests—beyond
service and purchasing processes. Voice Integration enables a convenient and effi-
cient operation of devices and applications without a keyboard.
Amazon Alexa is already integrated into a variety of products today. In addition to Amazon’s own devices like the Echo speaker, products from third-party providers are also equipped with Alexa. These include smart home devices such as thermostats, lighting systems, door locks, and cameras. Consumer electronics such as TVs, soundbars, and streaming devices can also be controlled by voice—for example, through the Amazon Fire TV Stick. Alexa is also already on board vehicles from the automobile manufacturers Audi, BMW, Ford, Mercedes, Seat, Toyota and Volkswagen. This allows access to various vehicle functions and more by voice.
The Mercedes-me-Skill for Amazon Alexa offers the possibility to conveniently access a Mercedes-Benz from home. All that is needed is an active Mercedes-me-account. With this skill, various functions can be used and information about the vehicle can be retrieved. Once the Mercedes-me-account is linked with the skill, a variety of commands are available to the user. You can lock the car, start or stop the auxiliary heating, query the mileage and the location of the car, ask for the fuel level and the remaining range, or retrieve the battery charge status. In addition, the service status of the vehicle can be determined and corresponding reminders can be sent out. The Mercedes-me-Skill facilitates communication with the Mercedes-Benz vehicle (see [53, 54]).
The possibilities of integrating digital assistants into further products and ser-
vices are almost limitless. More and more companies and brands announce partner-
ships with Amazon or Google or offer their own products with access to the digital
assistants. The range extends from the already outlined integration into vehicles to
the control of curtains and other devices in the household. These include vacuum
and lawn mowing robots, bathroom fittings, kitchen appliances and washing
machines.
Especially in the area of Smart Homes, digital assistants can enable comprehensive control. Through voice commands, users can adjust the lighting, regulate the room temperature, play music, and activate security features.
However, it is important to be aware of data protection and IT security issues.
Every connection to the Internet carries potential security risks and presents a pos-
sible target for hackers. Therefore, it is of great importance that companies and
manufacturers take appropriate security measures to protect the privacy and data of
their users.

Food for Thought


Digital personal assistants are increasingly evolving into real digital butlers,
who are at our service in the office, in the car, but especially in our home 24/7.
They answer our search queries, place our orders, regulate light, temperature
and music in the house, play desired music—and get to know us better and
better. Whether we want it or not!
However, we usually do not know who will have access to the gigantic
data shadow we generate through our use, at what time, for what purpose and
with what intention in the future.

Movie Tip
An inspiring thought experiment on how a digital butler could act in the future
is shown in the movie “Her”. This science fiction drama tells the story of
Theodore Twombly, a lonely writer who falls in love with his operating sys-
tem. This contains an AI named Samantha, who develops human-like emo-
tions. During their relationship, Theodore and Samantha explore topics such
as love, identity and human connection in an increasingly digitized world.

4.4.6 Voice Controlling

To measure the success of voice marketing campaigns, measurable criteria are required. The following key performance indicators can be used to determine the results of voice marketing and identify areas for improvement (a small computation sketch follows the list):

• Number of people who have activated a company-owned Voice App (as a pro-
portion of the customer base)
• Proportion of people who have started a Voice Search in the relevant market (as a percentage of the total number of search queries to the company)
• Proportion of people who have started a Voice Search and completed a specific Conversion (as a percentage of the total number of users with the same conversion)
• Number of Voice-Search-Aborts (as a proportion of all Voice Searches)
• Average revenue from purchases triggered by Voice Search (compared to the
average revenue of all online customers)
• Number of questions per Voice Search

• Duration of Voice Search processes (divided by completion with and without desired conversion)
• Number of unanswerable questions by the digital assistants (in relation to the
total number of questions asked)
• Proportion of regular Voice Search users (as a proportion of the total number
of Voice Search users)
• Proportion of Unique Voice Search users (as a proportion of the total number
of own customers)
• Website loading time for Voice Search with various questions
• Proportion of Speakable Content on the own website
• Intensity of Voice Accesses to website content
• Tonality of Voice Dialogues (separated by positive, neutral, and negative, to
recognize the emotional states of the users)
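The following sketch shows how some of these key performance indicators could be computed from a simple interaction log; the log format and field names are purely hypothetical assumptions.

  # Hypothetical voice interaction log: one entry per voice search session.
  voice_log = [
      {"user": "u1", "aborted": False, "converted": True,  "questions": 3, "revenue": 49.90},
      {"user": "u2", "aborted": True,  "converted": False, "questions": 1, "revenue": 0.0},
      {"user": "u3", "aborted": False, "converted": False, "questions": 2, "revenue": 0.0},
  ]

  total = len(voice_log)
  abort_rate = sum(entry["aborted"] for entry in voice_log) / total
  conversion_rate = sum(entry["converted"] for entry in voice_log) / total
  avg_questions = sum(entry["questions"] for entry in voice_log) / total
  converted = [entry for entry in voice_log if entry["converted"]]
  avg_voice_revenue = sum(entry["revenue"] for entry in converted) / max(1, len(converted))

  print(f"Voice search abort rate:      {abort_rate:.0%}")
  print(f"Voice search conversion rate: {conversion_rate:.0%}")
  print(f"Questions per voice search:   {avg_questions:.1f}")
  print(f"Avg revenue per voice order:  {avg_voice_revenue:.2f} EUR")

In practice, such figures would be compared with the corresponding values for text-based search in order to make the added value of voice measurable.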

The insights gained here should be incorporated into a closed feedback loop (Closed
Loop) to ensure a continuous learning process in Voice Marketing. The market for
Voice Search is still developing, but already shows considerable growth potential.
Therefore, companies whose online or offline traffic depends to a high degree on various search queries should definitely start initial exercises in Voice Marketing.
Your company should gain experience now in order to be ready as soon as Voice establishes itself as a communication format online. For development, you can use the Business Model Canvas for Conversational Artificial Intelligence shown in Fig. 4.20 (see [55] for the underlying concept). This can promote the integration of Conversational Commerce with the company’s existing activities.
Companies should intensively deal with the opportunities and risks of Voice
Marketing, as Voice Search will dominate search processes in the future. Especially
for recurring orders of daily needs, Voice Commerce will gain in importance. For
larger purchases, at least the search for providers and the provision of decision-­
relevant information can be done via voice.
• Key partners: social networks, suppliers, sales partners, technology partners, data protection organizations, lawyers, …
• Key activities: utilization of various data sources, development of powerful algorithms, development of relevant content, …
• Key resources: AI specialists, marketing expertise, customer data, content creation, powerful algorithms, budget, …
• Value proposition: “You can access our products and services elegantly via a voice interface. We offer you a variety of new functions for existing products and services that you can easily activate by voice. These include, for example, personalized and individualized recommendations and other content.”
• Customer relationships: customer loyalty through new offers, greater exploitation of potential through an individualized approach, development of an eco-system, …
• Sales channels: achieving omni-channel communication, establishment of a voice sales channel, …
• Customer segments: technology-savvy end customers, technology-savvy sales partners, customers with physical handicaps, …
• Cost structure: development and maintenance costs for the voice applications, costs for the continuous further development of content, possible cost reduction for customer loyalty due to lock-in effects as well as relief in the service center, …
• Revenue structure: greater exploitation of customer value potential through cross-sell and up-sell measures, sale of new, enhanced products/services, tapping into additional sources of income by marketing data (e.g. for advertisers), …

Fig. 4.20 Business model canvas for conversational artificial intelligence

The integration of voice control in Smart Homes is already well advanced. Companies that offer products for the private sector should recognize the possibilities of voice interfaces early on and make them available. In the future, voice control could
the possibilities of Voice interfaces early on. In the future, voice control could
become a central purchasing criterion alongside the brand. Cars are already consid-
ered as hardware platforms on which relevant software like Alexa is executed. It is
conceivable that other household appliances such as refrigerators, hobs, washing
machines and espresso machines can be selected based on available voice inter-
faces. In addition, voice control allows access to new target groups, e.g. for people
with visual impairments or physical limitations. Many aspects of their lives can be
made barrier-free through voice control.
The development of voice applications in B2B markets is expected to proceed
more slowly. Here, the use of voice assistants is often associated with greater risks.
It is therefore crucial that providers of voice assistants develop secure and value-
adding B2B applications.

Note Box
The challenge is: Think Voice!

Today we often still use one app for each specific application: one for the
Deutsche Bahn, one for the urban public transport like BVG, one for Eurowings, one
for Amazon, one for Spotify, one for the weather forecast, one for the news update
from Der Spiegel and of course the app of the Deutschlandfunk, to regularly listen
to important interviews, qualified background reports and the press review there.
In the future, many of these apps will become sleeper apps—downloaded to the
mobile device, but no longer used. More and more applications will—seamlessly—
migrate into the digital assistants and change our entire communication behavior.
We will receive all relevant information through these, start search processes and
place orders, manage appointments, listen to music, book trips, carry out financial
transactions and exchange with friends.
Context marketing will increasingly emerge; indeed, it must come. This means that the information provided will be aligned, for the first time or far more strongly than before, with the individual usage environment. Location-based services will evolve into more comprehensive context-based services. Thus, the slogan “Marketing as a Service” will be comprehensively brought to life.
The long-term trend is therefore no longer voice first, but voice only! We should prepare for this as consumers, but especially as employees and companies. The path to a voice-only economy is mapped out—and voice content will be of paramount importance!

Food for Thought


Digital assistants will take over data sovereignty. The longer and more inten-
sively they are used, the more knowledge they accumulate about the user and
his habits. Based on our preferences, the digital assistant will exchange infor-
mation with certain companies—but not with others.
In addition, more and more dialogues will take place between different digital assistants on the provider and demand side. From a future perspective, personal communication will be partially replaced by machine-to-machine communication. Digital assistants will independently negotiate prices, delivery conditions, cross-sell and up-sell strategies, and more among themselves. This carries the risk that companies will lose direct customer contact.
The development of products and services can increasingly be oriented
towards machine-to-machine dialogues. After all, there is a wealth of exciting
information about past and current purchasing behavior as well as about the
wishes, dreams, and preferences of users.
Digital assistants will become influential sales and advisory agents,
who—largely detached from offering companies—decide on purchase or
non-purchase. The companies behind these assistants gain power with every
new piece of information.
This will lead to not only the GAFAM companies, but also the BATX companies gaining in importance. Here, new winner-takes-all models are emerging, which will further strengthen the already existing market dominance. With all the negative effects on competition!

Note Box
Every company is called upon to deal in depth with the concepts of digital
assistants. The aim is to develop your own competencies for voice design,
because voice first will be the next big challenge—for all companies—sooner
or later! Keyboards and other classic input aids will become obsolete.

The Following Questions You Should Ask Yourself


• Have we already dealt extensively with the potentials of voice marketing?
• Which areas are interesting test fields for voice applications in our
company?
• Who is responsible for optimizing the voice engine in our company?
• What needs to be done to integrate speakable content into our website in a
timely manner?
• Is it sensible to invest in special app developments for voice marketing now?

• How can we use voice marketing to improve our customers’ experience?
• Which specific target groups could particularly benefit from voice market-
ing and how can we target them specifically?
• What innovative voice commerce strategies can we develop to increase our
sales and win new customers?
• Where do our competitors position themselves in terms of voice marketing?
• Who in our company is responsible for the mentioned questions and initia-
tives in the field of voice marketing?

4.5 Summary

• Voice Marketing offers a large number of exciting areas of application.


• The basis of voice marketing is the definition of the Corporate or Brand
Language.
• Voice Branding is about what content and emotional charge can be achieved
through language.
• An indispensable field of action for companies is the preparation for Voice Search; companies must create the conditions to be found in future voice-based searches.
• Voice Commerce enables a voice-controlled purchase—including all accompa-
nying information and service steps.
• With Voice Integration, a voice interface moves into other products.
• Voice Controlling has the task of continuously capturing and evaluating the
results of voice marketing in order to identify optimization approaches.

4.6 Voice Identification and Speech Analysis

There are two areas of application to distinguish in voice identification (speech
recognition). On the one hand, it includes the already mentioned processing of natu-
ral language (NLP; core task “What is being said?”). On the other hand, it is about
speaker recognition (“Who is speaking?”). Here, the identification of a person based
on the characteristics of a voice is central. This process, also known as speaker veri-
fication or speaker authentication, is of great importance for identifying customers
when security-relevant processes or important transactions (e.g., telephone bank-
ing) are controlled by voice. Even with simple orders via voice—for example via a
digital personal assistant—speaker recognition is important.

Food for Thought


If such speaker recognition had been present in Alexa, the following example
would never have occurred. It started quite harmlessly: A girl in the USA had
ordered a dollhouse and kilos of cookies via Alexa. She quickly learned how
mom and dad use Alexa as a shopping assistant. A snowball effect occurred
when a news anchor reported on this case in a US news broadcast and said the
fateful sentence: “I love this little girl, how she says ‘Alexa ordered me a doll-
house’” (see [56]).
Since Alexa is set up in many US households near the (often continuously
running) TV, many Alexa devices heard the sentence, understood it as a com-
mand—and executed it! A large number of complaints were subsequently
received by the broadcaster because Alexa here—only seemingly unasked—
initiated purchase processes. had triggered. Because Alexa had—almost cor-
rectly—from the sentence “Alexa ordered me a dollhouse” heard the activation
term “Alexa” and perceived the command “order dollhouse”. That Alexa—
currently—does not yet master the subtleties of grammar, is forgiven.
To prevent such effects, there exist Alexa -internal solutions: The purchase
option is on Amazon Echo by default activated, but can also be turned off.
Alternatively, a PIN can be stored, with which every purchase has to be con-
firmed. So they do exist, the security guidelines, but who always uses them
consistently?
Let’s go on another thought journey. A truck with large speaker boxes
drives through a street. From these boxes the command repeatedly blares:
“Alexa, open the door.” What will probably happen when in more and more
households Alexa has become a family member and the smart home is con-
trolled via Alexa—including the front door?

To enable speaker recognition for orders, but especially for door-opening commands—who would want to recite security passwords or customer numbers here—so-called Voice Prints can be created. In this process, the corresponding systems take advantage of the phenomenon that every voice is unique. One cause of the distinguishable acoustic patterns of human speech is anatomy (such as the size and shape of throat and mouth). But acquired behavior patterns such as pitch and articulation, often associated with dialects, also contribute to differentiation.
These vocal characteristics are depicted in a sound spectrogram. The spectrogram represents the frequency of a sound on the vertical axis over time on the horizontal axis. This is the core of the voice print—and much more elegant than an iris scanner, which we usually encounter in spy movies, where it also works with eyes that are no longer in their natural place!
Speaker recognition usually takes place in two phases: registration and verification. In the registration phase, the speaker’s voice is recorded to create the voice print. In the verification phase, the new voice print is compared with a previously recorded voice print. It is left to our imagination how well such a voice print works after a visit to the dentist (with an anesthetic injection) or during an acute cold.
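A highly simplified sketch of these two phases: an MFCC-based voice print is computed during registration and later compared with a new recording via cosine similarity. This is an illustrative toy approach, assuming the librosa library and locally available WAV files, and not a production biometric system.

  import numpy as np
  import librosa  # assumed to be installed; the WAV file names are hypothetical

  def voice_print(wav_path):
      # Load the recording and compute MFCC features as a crude "voice print".
      signal, sr = librosa.load(wav_path, sr=16000)
      mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
      return mfcc.mean(axis=1)  # average over time -> fixed-length vector

  def verify(enrolled, candidate, threshold=0.9):
      # Cosine similarity between the enrollment print and the new recording.
      sim = np.dot(enrolled, candidate) / (np.linalg.norm(enrolled) * np.linalg.norm(candidate))
      return sim, sim >= threshold

  # Registration phase: create and store the reference voice print.
  reference = voice_print("customer_enrollment.wav")

  # Verification phase: compare a new call against the stored print.
  similarity, accepted = verify(reference, voice_print("incoming_call.wav"))
  print(f"Similarity: {similarity:.2f} -> {'accepted' if accepted else 'rejected'}")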

Food for Thought


It would be interesting to know whether the following use of Alexa could have been avoided by speaker recognition. What happened? The parrot Rocco from Blewbury in England caused a stir: he ordered various things on Amazon via Alexa by imitating the voice of his owner. Rocco is an African grey parrot; this species possesses roughly the intelligence of a five-year-old child and can mimic words almost exactly. Thus, it quickly became clear to Rocco how to operate Alexa. After activation, the parrot ordered, among other things, broccoli, raisins, watermelons, ice cream, and even light bulbs and a kite. Alexa was also used by Rocco for musical entertainment: the parrot requested music to which he then danced (see [57]).

Example: Amazon Connect


Amazon Connect Voice ID is a feature of Amazon Connect. It uses machine
learning to enable caller authentication as well as fraud risk detection in
real-time. This is intended to make voice interactions not only faster but also
safer. In the past, customer service centers often had to use time-consuming,
knowledge-based processes for authentication. The caller had to answer sev-
eral questions about personal details such as date of birth and mother’s maiden
name or favorite animal.
To avoid such unpopular query procedures, Amazon Connect Voice ID ana-
lyzes the unique voice characteristics of the caller. The prerequisite for using
Amazon Connect Voice ID is the customer’s consent. Then voice attributes
such as rhythm, pitch, and timbre are analyzed to create a digital voice print.
This individual voice print simplifies the authentication of the customer on
their next call. The customer then no longer has to answer any questions, but
can directly express their concern verbally (see [58]).

Based on the results obtained here, telephone agents and digital assistants can be
informed in real-time about the identity of the caller. This allows for faster and
more accurate verification. At the same time, potential fraudsters who appear on individual watch lists are also searched for in real time. To do this, Voice ID compares the incoming audio data with recordings of the fraudulent actors on these lists. Suspicious callers can be automatically marked with a risk label during the call (see [58]).
As users, we should therefore think about various mechanisms that help to ward
off such misuse and attacks by so-called social engineering. Social Engineering is
an interpersonal influence that is intended to trigger certain behaviors in the other
person. This can be the disclosure of confidential information (such as passwords)
or the inducement to carry out transactions that are harmful to the person concerned
(such as purchases or transfers). To do this, so-called Social Engineers spy on the personal environment of the victim in order to then pose as people from this circle, claiming to be person X or Y (keyword “grandparent scam”). Often, Social Engineers—based on stolen information—also infiltrate third-party IT systems. This is referred to as Social Hacking. Such attacks can be prevented, or at least made more difficult, if powerful speaker recognition is used.

Note Box
If we want to make digital personal assistants the interface in our customer communication, we must integrate speaker recognition to avoid abusive use of the interface. “Speech recognition” without “speaker recognition” is no longer sufficient.
What follows?
Voice recognition will be the new face recognition!
However, the same applies here: nowadays, fraudsters also create voice prints through social engineering in order to overcome such security systems. This is particularly successful with people of whom voice recordings exist.

One goal of the company Affectiva is to use artificial intelligence for the analysis of facial and vocal expressions. This should enable AI agents to better understand human emotions and cognitive states. The software offers the possibility to recognize emotions and states like sleepiness and distraction. In addition, human activities can be recorded, including the objects a person is currently using. For this purpose, the human face, the intonation (i.e., the change in pitch and strength of the voice when speaking) as well as body posture are analyzed. The necessary data on the human face are obtained in real time via a webcam or a smartphone camera. The AI algorithms classify the captured facial expressions and assign certain emotions to these expressions. Computer vision is used to recognize the objects used. Further algorithms analyze tone, volume and pace of speech (cf. [59]).
For training, Affectiva [59] has analyzed more than 10 million face videos from
90 countries. In addition, data from 4000 individual people were collected. These
include gender and ethnic affiliation as well as age, beard, glasses etc. These data
are intended to cover a wide range of human appearance to avoid algorithmic and
data-related biases (see Sect. 1.4).
This technology can be used, for example, for analyzing the state of a car
driver in the vehicle. A multimodal solution captures the processes in the vehicle.
This is intended to increase traffic safety and optimize the mobility experience.
Through the cameras installed in the car, the state of the driver, the passengers and
the interior can be recorded in real time. The evaluation of this data can help to make
driving overall safer. Based on the data obtained during driving, driver monitoring
systems can be developed. Dangerous driver behavior, but also increasing sleepi-
ness as well as distraction and anger can be detected. Based on these findings,
appropriate messages can be transmitted to the driver in real time to support safe
driving. In 2021, Smart Eye, a leading provider of driver monitoring systems,
acquired Affectiva. The goal of Smart Eye is to offer Human Insight AI—systems
that understand, support, and predict human behavior in complex environments (see
[59]).
The company Receptiviti [60] has developed the Receptiviti Insights API. This provides quick insights into people’s emotions, personality, mindset, and perceptions—based on their respective language data. Various sources of natural language can be evaluated for this purpose. These include posts from social media, surveys, transcripts, emails, news, and interviews. The resulting psycholinguistic signals can be used for a wide range of business applications, including predictive models and recommendations. To gain such insights, the applying companies do not necessarily have to provide their own training data. Own data can be used for fine-tuning the system after an initial deployment. Receptiviti’s model is based on over 19,000 validated relevant research studies. These studies deal with the linguistic “fingerprints” of emotions and personality traits. They analyze the linguistic signatures of various psychological states and deal with language-based predictors of human behavior. A predictor is a variable used to predict another criterion—here, human behavior. In this way, expected reactions can be anticipated. Companies can relate these predictions to their own activities—for example, to the type and content of their customer approach (see [60]).
The statement “The experience is the product” also applies to the communication between companies and customers in a Customer Service Center. This often determines whether a customer stays and accepts further offers. Such a contact influences whether a complaining customer remains loyal to the company and recommends it—or is lost to the provider forever and shares his negative experiences on many channels. How can the Call Center Agents be trained even better in their soft skills for customer contact? Voice Analytics can be used to learn more about the speaker. The tonality of the speaker can be analyzed to infer the emotional state and thus also the urgency of the concern (“How is something said?”). For critical conversation partners identified by voice analysis, calls can be routed to particularly qualified employees.
The company Vier has developed an AI system for Voice Analytics and offers it under the slogan Psychology as a Service (see [61]). Through innovative speech analysis, various sales and communication activities can be supported. The company’s merging of psychology and artificial intelligence is based on the hypothesis that every person thinks differently and expresses themselves in a very individual way. The system analyzes verbal communication and converts it into a psychological evaluation. The basis for this consists of 38 million text reviews from more than 25,000 participants. This dataset provided the training material for the AI agent. The Emotion Analytics AI used here breaks down language into over 110 million components. Not only words, but also their sequence and structures are represented. Emotion Analytics compares these components with the data of large scientific studies. Patterns in the use of different words are thus recognized. This includes analyzing the following linguistic elements and their combination:

• Positioning and type of words used


• Repeated use of words and their combination
• Structure and grammar of the language used

In this way, Vier Emotion Analytics can derive psychological results from written and spoken language. Thus, the AI-based software can make statements about the psychological impact of language. The results can be, for example, “authoritarian language” or “visionary formulation”. “Linguistic compulsion” or “relativizing language” can also be recognized. This allows individuals, but also leaders, to objectively capture the effect of their own language and work on it. More targeted communication becomes possible, which helps to avoid misunderstandings and misperceptions (cf. [61]).
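The basic principle of such analyses (deriving signals from word categories, word frequencies and sentence structure) can be illustrated with a deliberately simple sketch. The word lists and signals are invented for illustration and have nothing to do with the vendors' actual models.

  import re

  # Toy word categories (invented for illustration only).
  CATEGORIES = {
      "self_reference": {"i", "me", "my", "mine"},
      "certainty":      {"always", "never", "definitely", "must"},
      "tentativeness":  {"maybe", "perhaps", "possibly", "somewhat"},
  }

  def linguistic_profile(text):
      words = re.findall(r"[a-zäöüß']+", text.lower())
      sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
      # Relative frequency of each word category plus average sentence length.
      profile = {category: sum(word in vocabulary for word in words) / max(1, len(words))
                 for category, vocabulary in CATEGORIES.items()}
      profile["avg_sentence_length"] = len(words) / max(1, len(sentences))
      return profile

  sample = "I definitely need this solved today. Maybe you could check my contract?"
  for signal, value in linguistic_profile(sample).items():
      print(f"{signal:20s} {value:.2f}")

Commercial systems work with validated category systems and far larger reference datasets, but the underlying idea of translating language patterns into psychological signals is the same.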
To provide further growth impulses to employees in direct customer contact,
Cogito has developed a coaching system with artificial intelligence. This is
intended to increase the emotional intelligence of telephone employees in particu-
lar. To this end, Cogito measures how well a conversation is going—in real time. In AI-driven real-time coaching for Call Center Agents, not only the behavior of the agent is analyzed, but also that of the conversation partner. AI-supported messages promote an awareness of one’s own speaking behavior and of the customer’s perception. Recommendations and feedback contribute to this. These can be as follows:
“You talk too much”, “frequent overlaps”, “take a deep breath and pause between
sentences”. Feedback could also be: “You seem very tense.” This trains the call
agents and they can ideally conduct future conversations with more empathy, self-
confidence, professionalism and efficiency. At the same time, the agents are sensi-
tized to possible signs of purchase interest or customer frustration (cf. [62]).
These analysis possibilities open up a multitude of use cases. As long as the data
protection framework conditions are taken into account, such solutions can provide
important impulses for the affected parties.
Ewertz et al. investigated an exciting field of application for speech analysis [63].
Their study aimed to improve the prediction of corporate profits. The classic
profit prediction models are primarily based on numerical financial data. The
research team, on the other hand, hypothesized that vocal clues from the language
of managers have a considerable predictive power. Based on the voice analysis of
managers A more reliable prediction of future business results could be achieved
than through models based on detailed financial data and text inputs. Additionally,
it was found that investors who use earnings forecasts based on the language-based
models for their trading strategies can beat the market by an average of 8.8% per
year. Furthermore, financial analysts can improve their earnings forecasts based on
language analysis by more than 40%. Overall, it became apparent that the vocal
hints from managers are important predictors of future earnings. However, such
language signals have mostly been overlooked by investors so far.

Think-Box: Questions You Should Ask Yourself!


• What opportunities exist to improve the Customer Experience through the
integration of voice?
• Have concepts of Voice Identification in our company already been exam-
ined for their relevance—in natural language processing as well as in
speaker recognition?
• Has our company already examined whether the recognition of emotions
in various business processes is important—and what possible use cases
could look like?
• Have we already dealt with AI systems for Voice Analytics and the concept
of Psychology as a Service?
• Where can we use Emotion Analytics AI effectively?
• Have we already dealt with the possibilities of an AI-driven real-time coaching?

4.7 Summary

• The use of Voice Identification and Speech Analysis in customer dialogue begins with the definition of the desired goals.
• Various technologies are available for Voice Identification and Speech Analysis.
• Emotion Analytics AI provides in-depth insights into the emotional states
of people.
• Voice Analytics offer many exciting fields of application.
• Legal aspects, such as data protection, customer consent and compliance with
applicable regulations for the recording and processing of customer data, must
be comprehensively considered.

4.8 With AI to Effective Language

4.8.1 System for Increasing Text Performance

To systematically identify the optimization potential that is indispensable for communication impact, the software TEO Suite from tcl.digital [64] can be used. With it, you can analyze texts for letters, but also for flyers and websites, to see to what extent these—measured against scientific standards—are reader-friendly and, above all, designed to be activating.
The evaluation of your texts is based on the following indices (cf. [64]):

• Comprehensibility Index (measured by word and sentence length)—good comprehensibility is the basis of a reader-oriented text
• Abstractness Index (determined by the number of nouns)—in contrast to
abstract language, a pictorial text can be processed faster and engages the
reader more
• Human Interest Index (analyzed by the use of personal sentences)—personal features in texts build closeness to the reader
• Impulse Index (evaluated by the type and intensity of action appeals)—this is about the activation quality, i.e. the power of the text to prompt the reader to act

An algorithm calculates the TPI from this—the Text Performance Index. This algorithm varies depending on your communication goal. After all, a different language is needed if you only want to convey information (e.g. a fact sheet for a product). If your goal is instead customer care or activation to purchase, the approach needs to be designed differently. It is interesting that through the TEO analysis you not only receive concrete suggestions for optimization—in the sense of constructive feedback. Your own texts are also evaluated against a benchmark to make the “room for improvement” clear.
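The logic behind such indices can be illustrated with a very simple sketch: average word and sentence length serve as a crude comprehensibility signal, the share of nouns as an abstractness signal and exclamation marks as an impulse signal. The heuristics and weights are invented for illustration and do not correspond to the actual TEO algorithm.

  import re

  def text_indices(text):
      words = re.findall(r"[A-Za-zÄÖÜäöüß']+", text)
      sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
      avg_word_len = sum(len(w) for w in words) / max(1, len(words))
      avg_sent_len = len(words) / max(1, len(sentences))
      # Crude noun heuristic for German-style text: capitalized words after the first one.
      nouns = [w for w in words[1:] if w[0].isupper()]
      return {
          "comprehensibility": round(100 - 2 * avg_word_len - 1.5 * avg_sent_len, 1),
          "abstractness": round(len(nouns) / max(1, len(words)), 2),
          "impulse": text.count("!"),  # exclamation marks as a crude signal for action appeals
      }

  print(text_indices("Bestellen Sie jetzt! Unser Angebot überzeugt durch kurze Sätze."))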
Under the name Echobox, an AI-driven newsletter software for publishers is offered. This is supposed to enable the creation of newsletters with significantly higher opening and click rates. The technology used predicts the expected virality of different content. At the same time, the AI helps to determine the best order of content, the most convincing design, and the ideal sending times for each individual target group. The results obtained through automated A/B tests contribute to this. The software can be used in two ways (see [65]):

• The newsletters are automatically curated, optimized, and sent by Echobox.


• The AI-based suggestions are used as a guide when human control is still desired.

Every company is called upon to test value-creating AI-based tools for communication and to use them for its own purposes.

Note Box
Your company should no longer initiate customer-oriented communication
that has not been examined for its comprehensibility—especially, but not
only, for its activation quality.
Finally, we should bear in mind: Hardly any customer is waiting to be
addressed by a company for advertising purposes. This means:

• We write for non-readers—in mailings, emails, websites, advertisements, etc.
• We compose for non-listeners—for example, in TV and radio spots.
• We stage for non-viewers—for example at trade fairs, in retail, with
online banners and TV spots.

4.8.2 Solutions for AI-Supported Translation

A challenge that has already been mastered in many areas is to translate texts into
other languages. However, a good digital translator should not just translate word
for word, but ideally also consider the aesthetics of the language, the melody of the
language, and possibly even the meanings of the second and third semantic (con-
tent) levels. Above all, the German translation program DeepL already delivers
impressive performances—often more convincing than Google Translate. DeepL
supports translations in over 30 languages. With Google Translate it’s even more
than 130. AI-supported translations of written and spoken language have become increasingly better in recent years. For this, extensive background and contextual knowledge is necessary during the translation process so that texts are not only translated “word for word”, but “in accordance with the content”.
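How easily neural machine translation can be tried out today is shown by the following sketch based on the open-source transformers library; the chosen model and the example sentence are arbitrary, and productive use would of course require the kind of quality assurance described in the following.

  from transformers import pipeline  # assumes the transformers library is installed

  # Load a generic English-to-German translation pipeline (the model is downloaded on first use).
  translator = pipeline("translation_en_to_de", model="t5-small")

  result = translator("Voice assistants are changing the way customers search for products.")
  print(result[0]["translation_text"])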
The Berlin-based company Lengoo has developed a language technology to help
companies overcome particular language barriers. This is not about “everyday lan-
guage”, but about the knowledge of designated specialists. Based on its custom-
ers’ data, Lengoo trains company-specific language models for a neural machine
translation (NMT for Neural Machine Translation). The AI model for machine
translation is thus enabled to independently make translations that are tailored to the
characteristic expression of a company or a department. The results achieved in this
way can then be checked and, if necessary, corrected by human specialists. However,
this feedback information is not directly fed into the NMT model but is first cleaned
via the HALOS platform. HALOS stands for Human-Augmented Language
Operating System. As part of this cleaning, metadata is removed. In addition, the
data is de-personalized. The cleaned data flows back into model training. The tech-
nology used here enables a constant flow of high-quality training data for the ongo-
ing retraining of the language models (cf. [66]).
The language models of Lengoo are used by customers in three different ways:

• Texts can first be translated by the AI model and then edited by professional
translators.
• An API allows the AI model to be integrated into company-owned software. This
makes the software available to all users in many languages. Then, for example,
customer service employees no longer need to be multilingual in order to handle
customer inquiries in multiple languages. The relevant content can be automati-
cally translated by the language model.
• The software can also be directly integrated into the workflow for instant
translations.

Lengoo plans to support companies in the future also with sentiment analysis, con-
versational AI, and communication in the metaverse (cf. [66]; for more on the meta-
verse see Chap. 11).

4.8.3 Concepts for AI-Supported Text Generation

A step further are concepts that not only evaluate content, but support text creation
or even create it independently. The progress that has been made in AI-based
Content Creation is discussed in Sect. 4.2.2. In Augmented Writing, artificial
intelligence supports the authors in the creation of texts. A first step towards this is
the support in researching relevant information. For example, relevant trends and
the most important key terms on the topic can be determined. The author is automatically offered a concept for a text—with the relevant terms and possible side topics that should also be covered. Based on this framework, the texts are elaborated by a human.
Much more far-reaching support, which dominates when using large language models like ChatGPT, is provided by Automatic Writing. Here, the texts are created independently by AI agents. This is referred to as Natural Language Generation (NLG): a technology that converts data into clear, human-sounding narratives—oriented towards the respective wishes of the users.

Note Box
Augmented Writing is a technology that serves as a writing coach.
With Automatic Writing, entire texts are created by an AI engine.

The Berlin-based company Retresco enables companies to automatically create
high-quality texts based on data. The tech company can generate content in various
languages, such as German, English, French, and Italian. This can significantly
reduce the effort—for example, for the creation of product descriptions and their
translations. In total, this can easily scale content production. Retresco [67] offers
the following possibilities:

• Efficient text generation
Creation of multilingual and SEO-relevant content for various communication
channels
• Large Scale Content Automation
Development of many thousands of texts that are content-wise, grammatically,
and legally correct
• Individual AI projects
Integration of Generative AI for digitization and automation of various processes
and offers

The company Retresco offers editorial offices the possibility of automated reporting. In addition, smart reports can be created automatically—be it in the context of Business Intelligence or for regulatory areas. This is also referred to as Data-to-Insight processes (see [67]).

The New York-based news and press agency The Associated Press (AP) uses the natural language generation software Wordsmith from Automated Insights. Wordsmith is a self-service platform for generating natural language. This software is used to automatically convert a large amount of raw data into publishable reports. This raw data comes from the listed companies that announce their corporate results quarterly. The challenge for the Associated Press is to quickly and accurately extract the relevant financial figures in order to create informative reports for investors based on them. In the past, there were two challenges to overcome: On the one hand,
based on them. In the past, there were two challenges to overcome: On the one hand,
The Associated Press could only produce 300 such reports per quarter due to limited
human resources. Consequently, many thousands of potentially interesting stories
remained unwritten. On the other hand, the creation of such routine reports tied up
a lot of the reporters’ important time, which was therefore not available for more
demanding tasks (cf. [68]).
To remedy this, the Wordsmith platform uses natural language generation to automatically convert raw data into publishable AP stories. For this purpose, the language generation engine was configured to write in AP style. Thus, the Associated Press can now
produce 4400 quarterly financial reports—an almost 15-fold increase compared to
manual efforts. It is important to ensure that these reports have the same accuracy as
readers expect from any of the articles written by AP. Apart from an explanation at
the end of the stories, there are no indications that they were written by an algorithm
(cf. [68]).
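The basic principle of such data-to-text generation can be illustrated with a deliberately simple, template-based sketch; real systems like the one described above work with far more sophisticated linguistic rules, and all figures below are invented.

  # Toy data-to-text generation: turning structured quarterly figures into a short report.
  quarterly_result = {            # invented example data
      "company": "Example AG",
      "quarter": "Q2 2024",
      "revenue_m": 512.4,
      "revenue_change_pct": 7.8,
      "profit_m": 44.1,
  }

  def earnings_report(r):
      direction = "increased" if r["revenue_change_pct"] >= 0 else "decreased"
      return (
          f"{r['company']} reported revenue of {r['revenue_m']:.1f} million euros "
          f"for {r['quarter']}, which {direction} by {abs(r['revenue_change_pct']):.1f}% "
          f"year over year. Net profit came in at {r['profit_m']:.1f} million euros."
      )

  print(earnings_report(quarterly_result))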
Similar to financial reporters, sports journalists also had to sift through stacks
of game scores and game notes to write previews and summaries for college basket-
ball and minor league baseball games. Before the introduction of Wordsmith, the
journalists only had time to write about the top teams in each league. They were
therefore unable to report on games of lesser importance. This has now changed.
After a comprehensive testing process, it was found that the likelihood of errors
in the automated creation of texts has even decreased compared to manual work. In
total, about 20% of the time spent on creating results reports per quarter could be saved. It is planned to extend robot journalism beyond sports reporting to other areas, for example to reports on the development of unemployment figures (cf. [68]).
Carlson et al. [69] have even developed an AI algorithm that can write descrip-
tions for wine and beer that are similar to those of human authors. Such an AI sommelier can describe beverages without ever having tasted them. Based on natural
language processing, an AI agent can take over the writing of such product descrip-
tions. To train the algorithm for this, over 100,000 wine and beer descriptions were
taken from online sources. These descriptions included information about the origin
of the product, the grape and hop variety, further information and also the price. To
create its own descriptions, the AI engine received data about the wine or beer to be
described. For this, a relatively small amount of product attributes (metadata) is suf-
ficient. These include, for example, the name of the winery or brewery, the pro-
cessed natural products, the alcohol content and the price.

A corresponding test showed that readers could not distinguish between AI-generated and human-written product descriptions. Such an AI sommelier can support people in their work. The AI engine
creates a first description based on metadata, which is then completed by an expert.
In this way, the development of many hundreds or many thousands of product
descriptions can be supported—in the future also for coffee, cars and many other
products (cf. [69]).
AI-supported Content Creation is also important in corporate communication.
Therefore, the developments described will not only affect the fields of work of
journalists, but all people who generate “content” for the most diverse channels or
are responsible for content marketing. Due to the increasing relevance of Content
Marketing, the need for storytelling—telling good stories—is continuously
increasing for most companies (cf. on content marketing [3], p. 153–183). Here it
will show whether AI-supported systems can only prepare data and facts in a user-
oriented way (as in financial reports and sports reporting) or whether they can
also—more or less independently—tell exciting stories that captivate readers and
contribute to building knowledge about companies.
To counter the increasing oversaturation of users with content (keyword Content Shock), content needs to be tailored more and more to specific target groups or even target individuals. An almost inexhaustible source of information that can be used to generate content is the freely available information that people provide about themselves (for example via social media). Retargeting in online marketing and individualized recommendations (e.g. at Amazon) are the first "simple" precursors to this, which operate with some statistics, but largely without artificial intelligence. The leaps in development that content generation has already made are illustrated by the following example.
AI-based text creation is demonstrated below using the example of the Hamburg-based company Neuroflash (cf. [70]). First, it is queried whether a blog article, a
LinkedIn post, a Facebook post, a product description or a paragraph in Bavarian
dialect should be created. In this case, a seminar description was to be created,
which was based on the following input: “Seminar on sustainable corporate man-
agement, green marketing and green branding, consideration of ESG criteria, avoid-
ance of green washing, consideration of the circular economy, avoidance of waste
through designs that are oriented towards sustainability.” This was the input for the
text development. In addition, the tonality “urgent” was defined. The following
alternative texts were created with AI support:

The seminar will focus on sustainable corporate management, green marketing and green
branding. Participants will learn about the ESG criteria (Environmental, Social and
Governance) and find out how they can be taken into account when designing their products
and services. The topic of green washing will also be addressed. Participants will receive
tips and tricks on how to make their company more sustainable. In addition, the circular
economy will be discussed and explained how waste can be avoided through designs that
are oriented towards sustainability.

Do you want to make your company more sustainable and future-proof? Then this seminar
is just right for you! We provide you with comprehensive knowledge about sustainable
corporate management. You will learn how to use green marketing and green branding to sustainably profile your company. ESG criteria (Environmental, Social, and Governance) play a central role in this. In addition, you will learn how to avoid
greenwashing and integrate the circular economy into your company. Avoiding waste also
plays an important role in the sustainable design of products. We will show you how to suc-
cessfully implement these design principles.

The standard use of large language models for this type of task has already begun.
This is demonstrated by the triumph of ChatGPT & Co.
An interesting contribution to the support of human content creation is made
by the company Acrolinx, a spin-off of the German Research Center for Artificial
Intelligence (DFKI). The company has developed a Content Impact Platform to
improve the quality and impact of corporate messages. This can be integrated into
the workflow for content creation and reduces the stress of content creation through
automated checks. This can, for example, block the publication of content if this
content contradicts the business objectives and standards of content creation.
For this purpose, Acrolinx uses an AI engine that analyzes language for style,
tone, and word usage, including brand terms and technical terminology, in multiple
languages against the backdrop of company-specific goals. Most companies address
different target groups that have different information needs. This needs to be taken
into account in communication, without neglecting the brand values. In addition, it
is important to provide the “right” information at the relevant contact points for the
customer journey (see [71]).

Think-Box: Questions You Should Ask Yourself!


• How consistently is our internal and external communication—before dis-
patch—analyzed not only for comprehensibility, but for its performance?
• Are appropriate analysis concepts already integrated into the communica-
tion workflow?
• Do we regularly test our mailings, emails, flyers, websites, etc. in A/B tests
to determine the achievable performance increase?
• Who is responsible for increasing performance in our communication
overall?
• What concepts for content creation are already in use with us—and with
what results?

4.9 Summary

• Concepts like the TEO Suite support advertisers in generating texts that have the
desired effects.
• AI-based translation programs facilitate multilingual communication.
• AI-based translation systems will soon take over many traditional interpreter
tasks—for texts and spoken language alike.
• AI engines for generating texts are being used in an increasing number of busi-
ness areas.

4.10 Conversion of Text into Image and Video Content: AI Creation of Music

The rapid development of Artificial Intelligence has also opened up revolutionary possibilities for creating image and video content. AI models can be trained with extensive datasets of images and videos to generate "new" content that looks deceptively real. From creating realistic images of objects to generating animated videos, AI can work with impressive precision and creativity. The creative sector can benefit in various areas from AI applications.

4.10.1 AI-Supported Development of Images

The possibilities for image creation through text-to-picture generators will be explained here using the example of DALL-E. DALL-E is an AI model developed by OpenAI and introduced in January 2021. It builds on the Transformer architecture and is a 12-billion-parameter version of GPT-3 that is specifically designed to generate images based on text inputs. The name DALL-E is a combination of the names Salvador Dalí and the Pixar robot WALL-E. DALL-E was trained to generate images based on text descriptions. The required text input is called a text prompt. DALL-E is a generative Artificial Intelligence: it creates "new" content and uses existing image material for this purpose, drawing on databases with millions of images available on the internet. These were analyzed and classified by AI agents. In this process, correlations between images and their descriptions were learned. Through the diffusion process, new images are generated from a text prompt. On the website https://labs.openai.com/, DALL-E 2 shows various example images. With "mouse over", the text prompt underlying the respective image becomes visible. DALL-E 2 has been publicly accessible since April 2022 (see [72, 73]).
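How such a text prompt is passed to the model can be sketched with the OpenAI Python client. This is a minimal sketch; it assumes the "openai" package (version 1.x) is installed and an API key is available in the environment, and the chosen model name, prompt, and size are merely examples:

```python
# Minimal sketch: requesting images from DALL-E via the OpenAI Python client.
# Assumes the "openai" package (>= 1.0) is installed and the environment
# variable OPENAI_API_KEY is set; model name, prompt, and size are examples.
from openai import OpenAI

client = OpenAI()

response = client.images.generate(
    model="dall-e-2",
    prompt="A professor sitting on a banana, reading a book",
    n=2,              # number of image variants to generate
    size="512x512",   # requested resolution
)

for image in response.data:
    print(image.url)  # each URL points to one generated image
```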
The main field of application for DALL-E is to generate images based on text
descriptions. For this, simple or complex instructions can be used. Fig. 4.21 (the two creations on the left) shows how DALL-E responds to the instruction "Create a picture, where a professor is sitting on a banana—reading a book". With the instruction "Create a picture, where a professor is sitting on a banana—reading a book—flying through the universe", the result changes, as shown in Fig. 4.21 (right). Here it becomes clear that DALL-E can also generate creative and unusual images that do
not exist in the real world. It can represent both simple objects and complex scenes.
Moreover, it can capture different styles and perspectives in the generated images.
It can produce realistic images, but also stylized or abstract representations. DALL-E
is also capable of considering and varying different textures, colors, shapes, and
details.

Fig. 4.21 Images generated by DALL-E

Food for Thought


It is interesting which stereotypes of a professor become visible in these creations by DALL-E:

• A professor is male (in English, unlike German, there is no distinction between "Professor" and "Professorin"; consequently, the text prompt was gender-neutral).
• A professor is older (few hairs).
• A professor wears (mostly) glasses and has a beard.
• A professor is conventionally dressed—with tie, scarf and of course with
classic footwear (no sneakers).

These prejudices of the AI engine are integrated into the “new” creations.

DALL-E can currently only generate static images, not motion sequences or videos. Moreover, DALL-E still has difficulties controlling specific details in the images, as the process of image generation is based on probabilistic models. It is also prone to errors and can occasionally deliver unexpected or inappropriate results, as is evident in Fig. 4.21 in the design of the hands. Nevertheless,
there are diverse application possibilities for DALL-E in the fields of art, design, and
advertising. It allows users to quickly visualize new ideas and explore creative
concepts.
A higher artistic standard than DALL-E is achieved by the creations of Midjourney, an independent American research lab. In this application, a bot on the communication platform Discord is fed with specific commands, ideally with a very detailed description of the desired motif. Compared to DALL-E, the results of Midjourney are more like real artworks and do not look like loosely coherent collages of individual motifs. Midjourney has been available as an open beta version since July
2022 and can be accessed via: https://docs.midjourney.com/.
Stable Diffusion is the text-image diffusion model from stability.ai. The model
generates photorealistic images from any text input. The underlying deep learning
model Stable Diffusion was developed in cooperation with the Ludwig Maximilian

University of Munich and Runway Research. Applications can be found here: https://
stability.ai/.
Further concepts for AI-supported generation of images are presented by the fol-
lowing providers:

• https://www.jasper.ai/
• https://creator.nightcafe.studio/
• https://starryai.com/
• https://photosonic.writesonic.com/

AI algorithms now also make it possible to generate human bodies that move like
real people and can also speak authentically. An exciting field of application in addi-
tion to the already introduced Digital Humans (see Sect. 4.2.4) are artificial people,
who, for example, present fashion items. The company Botika enables scalable
regionalization of fashion photos through an unlimited number of AI-generated
photo models. Through a large number of different models, the various customers
can more easily recognize themselves in the presentation of offers. For this purpose,
existing photos are converted into an infinite number of hyper-realistic images. This
eliminates the makeshift solution, in which photos without a head or unsuitable
models are used. With a single mouse click, facial expression, skin tone, hairstyle,
background and much more can be changed. Constant photo shoots can thus be avoided (see [74]).
In the visual arts there are numerous experimental approaches to the use of AI-based design. Here, a style transfer is applied: the AI engine determines the style of a template (painting, text, music, etc.) and transfers this style to a "new" work. The results still contain recognizable elements of the original template. In the visual arts, style transfer is about the recomposition of images in the style of other paintings. For example, the AI system can be "fed" the masterpieces of Munch, Picasso, Rembrandt, or Van Gogh. Then a mountain landscape can be painted, from the perspective of an AI system, in the style of Edvard Munch. When looking at the images generated in this way, it becomes very clear how patterns recognized by artificial intelligence (colors, line work, image composition) are used for the "new creation".
An AI application named The next Rembrandt has evaluated 15 terabytes of infor-
mation from the famous painter to learn his painting style. This included 346 origi-
nal works that were transferred to the system in high-resolution 3D scans. With this
knowledge, the system managed to create a painting of a man with a hat and white col-
lar using a 3D printer in 2016, which looks overwhelmingly real. Even the Rembrandt
expert and art historian Gary Schwartz admitted that the developers had succeeded in
identifying those characteristics that would make a Rembrandt a Rembrandt.
Already in 2018, a painting by artificial intelligence was auctioned for the first
time. The work Edmond de Belamy was sold at Christie’s auction house in
New York for $432,000. The picture was developed based on a dataset of 15,000
real portraits. The signature of the work is interesting: min_G max_D E_x[log(D(x))] + E_z[log(1 − D(G(z)))], the objective function of the algorithm used.

It becomes clear that artificial intelligence is penetrating more and more creative
areas. Google has set itself the task in its Arts and Culture Lab to colorize the paint-
ings that Gustav Klimt created for the ceremonial hall of the University of Vienna
between 1900 and 1903, which only exist in black and white photos. The works
were lost in World War II. To colorize the black and white photos, a team of art and
AI experts worked together on the project The Klimt Color Enigma (see [75], p. 15).

Food for Thought


Can the creations of artificial intelligence be distinguished from human
art? A research team led by Prof. Ahmed Elgammal is putting the quality of
AI-created artworks to the test. A jury of humans was asked to evaluate works
from various art directions based on characteristics such as aesthetics and
stylistic quality, without knowing the respective author.
It turned out that the works of AI systems were often rated better than those
created by human hands. How was this possible? It follows from the logic of artificial intelligence: it picks up striking, and therefore seemingly proven, features from various works and recombines them in new forms. That the results come closer to "human ideals" is almost inevitable.

In the future, the creative achievements of AI will become more and more con-
vincing. The so-called Generative Adversarial Networks (GANs) contribute to this. The word "adversarial" means "opposing" or "antagonistic". These networks
can be used for the creation of image and language content. Generative model-
ing is a form of unsupervised learning. When using GANs, two types of models
are used simultaneously, which work “against each other” (cf. [76]):

• Generative model (the "Generator")
At its core, this involves learning regularities and automatically recognizing pat-
terns in the input data. These regularities and patterns can be sought in the paint-
ings of Manet or in the music of Bach. The generative (creative) model develops
new content based on these regularities and patterns identified in the input data.
The “generator” thus applies the knowledge gained to create thousands of new
images in the style of Manet or music in the style of Bach—based on the features
of the existing works previously identified.
• Discriminator model (the “Checker”)
The discriminative (separating/distinguishing) model evaluates the content
determined by the generative model as fake (generated) or real (domain). The
“checker” thus checks whether, from his perspective, a content contained in
the passed data instance (patterns of a painting or sequences of tones) actually
belongs to the training data set or was newly created. It thus checks for each
content whether it is authentic in the sense that it corresponds to the original.
The “checker” communicates the result to the “generator”, who continuously
generates new content and passes it along with the real data to the “checker”. The
“generator” tries to convince the “checker” of a non-existent authenticity of the

self-created content through ever more perfect “forgeries”. The “checker” thus
checks whether the submitted creations correspond to the style of Manet or Bach.
If these do not convince, they are returned to the “generator” with a correspond-
ing evaluation for improvement.

The “generator” and the “checker” are in the same feedback loop. The feedback in
pairs of AI systems allows content to be created faster and tasks to be completed
more quickly than in isolated working systems. After the creations have been
exchanged possibly millions of times, the “generator” gets better and better and
finally produces works of art that look like Manet or sound like Bach. The use of
Generative Adversarial Networks thus allows us to go significantly beyond mere
reproduction when creating new content.
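The interplay of "generator" and "checker" can be made concrete in a few lines of code. The following minimal sketch shows a GAN training step in PyTorch; the network sizes, the random stand-in data, and all hyperparameters are illustrative assumptions rather than a production setup:

```python
# Minimal sketch of a GAN training step in PyTorch, illustrating the
# "generator vs. checker" feedback loop described above. Network sizes,
# data, and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images

# "Generator": turns random noise into synthetic samples
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# "Checker" (discriminator): estimates whether a sample is real or generated
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the "checker": separate real training data from forgeries
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_batch), real_labels) \
           + loss_fn(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the "generator": produce forgeries the checker accepts as real
    noise = torch.randn(batch_size, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Example call with a random batch standing in for real training data
train_step(torch.rand(32, data_dim) * 2 - 1)
```

In a real application, this step would be repeated over many epochs on actual images or audio until the "checker" can no longer reliably tell the forgeries apart from the originals.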

4.10.2 AI-Supported Development of Videos

The AI engines can also be used to automatically create videos from a collection of
clips, images, and music. By analyzing content, moods, and styles, coherent and
appealing videos can be generated. Real-time video adjustment allows video content to be adapted to individual needs and preferences through real-time analysis of user data and environmental variables.
Idomoo [77] offers platforms and services that enable such personalized real-
time video adjustments. These personalized videos provide each viewer with a
unique experience—at every stage of the customer journey. According to the provider's statements, which should be interpreted with caution, personalized videos can sustainably increase not only conversion rates and Net Promoter Score values, but also customer engagement and sales.
Another interesting field of content creation is the automated conversion of text
into video content. Here, videos are generated based on texts, which simultane-
ously allow a multitude of individualizations. An example is the AI-supported solu-
tion from Wave.video. This allows blog posts, articles, and text files to be quickly
converted into videos (see [78]).
Another AI video producer that converts text into videos is the company
Synthesia [79]. The company promises that professional videos can be produced in
15 min. For this, the system uses existing texts in over 120 languages. At the same
time, users can choose from over 100 different AI avatars, who act as moderators in the video. No special equipment or knowledge of video editing is
required. Companies use the solution from Synthesia to create training, how-to, and
marketing videos. Today, over 50,000 companies are using this software. Users
include, among others, Accenture, Amazon, BBC, B/S/H and Reuters. To create a
video, various templates can be selected.
There are now more and more applications of an AI-supported film produc-
tion. For example, an AI application was specifically developed to write a new
script for a sequel to the popular sitcom Friends from the 1990s. For this, the system
was fed with all the old episodes in order to recognize patterns and to tie in with
existing plotlines.

A glimpse into further developments is provided by a manuscript written by an AI algorithm named Benjamin. The seven-minute science fiction film It's No Game with David Hasselhoff was developed by an AI system. For this, the system was given data from Aaron Sorkin, Baywatch, Knight Rider and William Shakespeare. The result appeared in 2017 on the tech portal Ars Technica, where the film can still be seen: https://arstechnica.com/gaming/2017/04/an-ai-wrote-all-of-david-hasselhoffs-lines-in-this-demented-short-film/.
Already in 2018, Lexus, supported by Artificial Intelligence, developed a commer-
cial. The short campaign film Driven by Intuition tells the story of a Lexus devel-
oper, who gives a car the final touch before it leaves the factory to be used in a crash
test. The script for this spot was entirely developed by Artificial Intelligence. For
this, the AI system analyzed campaigns from car and luxury brands of the past 15
years, which were awarded by Cannes Lions for their creative performance. Thus,
the algorithms learned which contents were rated particularly well and how to inte-
grate them into a spot. In addition, “Emotional Intelligence” data from the video
service provider Unruly was used. Based on this data, the AI system learned how to
combine objects and locations to trigger certain emotions in the viewers. To ensure
that the AI-created spot matched the corporate design of Lexus, the AI system was
also given the corresponding framework conditions. The evaluation of audio, text,
and visual data from the Cannes films was carried out by IBM Watson. Based on the
insights gained, the British director and Oscar winner Kevin Macdonald was hired
to implement the script in film. The result can be seen here: https://www.youtube.com/watch?v=-iaBJ5rqOdg
To answer the question why the AI system chose this script (keyword Explainable
Artificial Intelligence), Lexus also made a making-of video, which is supposed to
show the “thought processes” of the AI application. The choice of a Japanese devel-
oper for the TV spot was intended to convey the origin of the brand Lexus. In addi-
tion, IBM Watson had recognized a pattern that the use of drone shots in car spots is
very well received—especially when the landscape is hilly and at the same time the sea is in sight. Here, the basic features of Artificial Intelligence become very clear again:
recognition of patterns that lead to success (here Cannes awards). You can find the
making-of film here: https://www.youtube.com/watch?v=l91ehyqFca8.
Artificial Intelligence can be used in the film industry in various ways. Even
today, the creation of original content and scripts can be based on the analysis of
already shown films. For this, viewer reactions are particularly important, which are
collected in real time by streaming service providers. Which scenes are watched
again? When are films stopped? After which scenes is the viewing completely
stopped? Based on this data, films can be structured in such a way that they meet the
viewing expectations of certain viewer groups. In this way, scripts can be written for
films that can no longer fail—unless user expectations change.
AI can also assist, for example, when selecting clips for a trailer. Such an appli-
cation was first used through a collaboration between IBM Research and 20th
Century Fox. The goal was to create a trailer for the film Morgan. The Watson AI platform from IBM analyzed hundreds of horror movie trail-
ers for this purpose. The aim was to determine which contents trigger fear reactions
in people. Based on these findings, the AI selected ten clips for the trailer. Based on

these specifications, a human editor created the final version. This AI-supported
process took only 24 h. For a manual editing, ten to 30 days are often estimated. The
result and explanations of the procedure can be found in this video: https://www.
youtube.com/watch?v=gJEzuYynaiw
Other projects have gone a step further and used artificial intelligence to co-write
complete films. Such projects also require a comprehensive training phase. For this,
the AI agent is trained with thousands of plot summaries. In addition, the films are
linked with the respective box office results. The goal is to recognize a convincing
plot pattern that can also lead to high box office results. Speech recognition is
supposed to help decipher film scenes in terms of content. This way, actions that are
expected to be particularly successful with the audience can be identified.
An example is the short science fiction film Sunspring, which was completely
written by an AI and was already presented in June 2016. The director Oscar Sharp
made the film for an annual film festival. This also included a 48-hour film chal-
lenge. The participants were given a series of specifications (usually props and lines
of dialogue) that had to appear in a film. The participants had 2 days to create it. An
AI researcher provided the script for the film. However, the script consisted entirely
of almost impossible directions. For example, it said: “He stands in the stars and sits
on the ground.” The actors interpreted the text while reading and added tone and
body language. From the slightly mutilated sequence of sentences, a story about
romance and murder emerged, set in a bleak future world (see [80]).

Note Box
As of today, an AI system is not capable of being truly “creative” in the human
sense by creating something never seen before. What is achievable are conge-
nial recreations in the sense of high-quality translations (see [81], p. N4).
Therefore, “new” is mostly put in quotation marks here.

Food for Thought


What characterizes true artistic creativity is illustrated here using the example of Pablo Picasso. He went through various artistic phases in his life and contributed to the development of various art styles. In his so-called Blue Period, Picasso painted mainly in shades of blue. In his Rose Period, his style shifted to warmer pink and red tones. Later, Picasso turned to
Cubism, which broke down objects into geometric shapes. In his Neoclassical
Phase, Picasso returned to more classical forms and themes. In his last artis-
tic phase, the artist turned to Surrealism. Each of these phases represents a
different style in Picasso’s work and each phase has made a unique contribu-
tion to art history.
Such artistic creativity is still not achieved by any AI system today. After
all, this was not about modifying already known patterns, but creating some-
thing completely new—sometimes never before seen.

However, AI systems are already capable today of simulating creativity so well that we often can no longer perceive the difference between human creativity and AI creations. The AI-supported "new creations" are based on a very specific mode of operation. Here too, AI algorithms try to recognize certain patterns by evaluating dozens, hundreds, or thousands of musical compositions, images, and/or texts. These patterns describe the specific approach to composing, painting, or writing poetry that is often referred to as an artist's "handwriting". This "handwriting" can be recognized by algorithms and used to create "new" works based on it. The algorithms thus remain anchored in the identified patterns.

Note Box
Even with the AI-based creation of image and video content, ethical and
legal aspects must be considered. These include questions of data protection
and intellectual property, as AI in the training phase is based on large
amounts of data and may use copyrighted content.
In addition, there is the risk of manipulation and misuse of AI-generated
content, which can lead to misinformation. This is always the case when AI
agents are used for fraudulent activities.

4.10.3 AI-Supported Development of Music

An exciting project of the company OpenAI is Jukebox. Jukebox is an AI system for the development of music (including vocals). To start composing, Jukebox only requires the specification of the desired genre (e.g., hip hop, rock, pop, blues, country music, jazz, heavy metal, rhythm and blues, or reggae), lyrics, or the mention of the desired artists. Thus, one can submit to the AI system a text not used in training, together with the mention of the desired artist and the preferred genre. Jukebox then generates, within a few minutes, some short example samples of a new piece of music that is oriented towards these specifications. When the user has selected a sample that meets his expectations, it is completed by Jukebox. From an already known piece of music (including lyrics), Jukebox can also create a completely new version. In addition, Jukebox can complete pieces of music that have already been started. For this, a few seconds of the piece of music are sufficient to continue it in the recognized style. Jukebox is generally available to everyone for use (cf. [82]).
For this purpose, Jukebox was trained with over a million pieces of music. In
addition, the AI system was provided with the names of the artists, the respective
genre as well as the release dates. Thus, Jukebox was conditioned to the different
artists and the various genres during the training phase. In addition, a conditioning
of the AI model to texts was carried out. The development of pieces of music with
Jukebox is, however, very time-consuming: the creation of one minute of music takes about nine hours of computing time (cf. [83]). Examples of pieces of music
that were generated by this AI can be found here: https://openai.com/blog/jukebox/

In the examples available on openai.com it is easy to see that the pieces of music
created by AI do have a musical coherence and typical chord patterns. However,
what is still missing today are overarching structures in the songs, such as choruses.
Other companies have set themselves more complex tasks and tried to use AI
applications for the composition of classical music—for example, based on the
evaluation of the masterpieces of Bach, Beethoven or Chopin. In this context, the
already mentioned style transfer is used. This involves the composition of music in
the style of the previously analyzed music. In the course of this style transfer, the
algorithms are fed with the respective works of the artists in order to recognize their
respective style and to use it as a basis for the composition of “new” pieces or entire
“new” symphonies. The developed “new” works represent nothing more than a
variation and/or new combination of the patterns already recognized from the exist-
ing works.
On the occasion of the 250th birthday of Ludwig van Beethoven, the Deutsche
Telekom in collaboration with a team of international music experts, experts in arti-
ficial intelligence and scientists of the Beethoven-Haus Bonn dared a special experi-
ment: Beethoven X—the completion of the 10th Symphony, which only exists in
sketches. Using AI methods, the existing musical drafts of Beethoven’s “Unfinished”
were completed. The collaboration between human and Artificial Intelligence was
intended to stimulate the imagination and push the boundaries of what is possible.
This should also answer the question of whether AI algorithms can truly be creative.
An impression of the underlying process can be found here: https://www.youtube.
com/watch?v=kS6h1TkuOrw. The final result can be heard here: https://www.
magenta-­musik-­360.de/beethoven-­10-­sinfonie.
To initiate this AI process, the AI system was first fed with Beethoven’s compo-
sitional work, which includes nine symphonies, 32 piano sonatas, many string quar-
tets, and five piano concertos. In addition, the contemporary music of other
composers was fed into the system, which is known to have been studied by
Beethoven, for example, by Bach, Mozart and Haydn. Also, the note sequences that
Beethoven had already written for the Tenth were transmitted. This created a dif-
ferentiated horizon of knowledge for the AI, which included the following
elements:

• General knowledge of the time
• Specific knowledge of Beethoven
• Fragments of the existing future knowledge (here the drafts for the Tenth Symphony).

Following this, the AI was encouraged to be creative. In the analyzed music, char-
acteristic patterns were identified. These patterns gave the AI an idea of how the
musical sketches for the tenth Symphony should be processed. Overnight, often 300
different versions of a musical motif were created—with the same beginning, but a
slightly different development. From this material, a selection was made by humans
and re-entered into the AI system (see [84], p. 10).

However, it turned out that the first draft was not playable. A whole year of fur-
ther work was required—with many human interventions—to finally complete the
“Unfinished”. The result was created through a dialogic collaboration between
human and machine. The AI-based creativity was guided by humans. They were
also responsible for the final score, to create a lively work from musically sound
notes. The resulting work, which premiered on 9 October 2021 at the Telekom Forum in Bonn, is however only one possible version of the "Unfinished". No one knows whether Beethoven himself would have composed it in a similar way. But at least we now have an idea of where the compositional journey might have gone.
One thing became clear in this process: AI cannot yet be creative on its own—the
limits of creative energy became visible. Crucial arcs and important impulses were
repeatedly provided by humans to give the development a playable direction. This
also applies to the use of the organ in the fourth movement of the symphony—with
which Beethoven would have once again broken with traditions (cf. [84], p. 4 f.).
Moreover, especially in the works of Beethoven one thing must be taken into
account: Beethoven expressed inner soul torment in his music. He responded musi-
cally to world events and wanted to influence them and convey his own messages.
This “ingredient” of a composition—specifically an “own emotional state”—is still
missing in every AI system today (cf. [85], p. 108).

Note Box
Artificial Intelligence still does not master complex creative processes today.
Its “creations” are based on imitation and modifications, which primarily
result in semi-original results.
Therefore, the question of the rights of the authors of the content used for training the AI engines, which is usually used without asking for permission, must be raised again and again.

Example: Universal Music Group Prevents AI Systems from Accessing Its Own Songs
The Universal Music Group has demanded measures against AI companies
due to copyright infringements by artificially generated music pieces.
Universal Music requires streaming providers like Spotify and Apple Music to
restrict the access of AI companies to copyrighted works. Universal Music
bears the moral and commercial responsibility to prevent unauthorized uses
of their music and to block content that violates the rights of artists and other
rights holders. The company is currently busy submitting numerous deletion
requests for music pieces on streaming platforms. Universal Music claims
that AI music programs were trained with works to which the operators had no rights, so that the output users obtain from these programs is merely derived from already existing works (cf. [86]).

Food for Thought


We should examine whether Artificial Intelligence could and should become
a powerful partner of the creative industry. AI systems can recognize which
type of scenes people particularly like in films, which type of music is espe-
cially appreciated and what types of images we like. The result—whether loved or hated—would be Art on Demand.
And perhaps in the future we will only tell an AI system how many square
meters and rooms our house has, how many people of what age and gender it
should accommodate and what hobbies they have. In addition, we could
define what it may cost, and a ready-to-implement, fully costed creative plan for the interior decoration is created based on these key data.
The necessary artworks and utility furniture can—if liked—be produced by
different 3D printers. The result?
Individualization to perfection!

Note Box
The EU Commission demands a Label for AI-generated content. Normal
users should be able to recognize: “Here speaks, writes or paints a robot.”
Initially, such information from the companies should still be voluntary... For
this, the existing Code of Conduct against disinformation is to be supplemented accordingly.

Think-Box: Questions You Should Ask Yourself!


• Have we already intensively dealt with AI systems for generating photos
and videos?
• Where would we find exciting fields of application?
• Where is our company still spending a lot of money today for the develop-
ment of photos and videos?
• Where could we use AI-supported music generation in a value-adding way?
• Which department should focus on dealing with this topic—and inform all
other departments about results and tools?

4.11 Summary

• The AI-supported image and video generation offers a large number of fields
of application, including the creation of product images, the generation of adver-
tising materials or complete advertising videos.
• Many solution concepts are easy to use even for non-professionals and without an own technical infrastructure.

• With this approach, creative processes can be supported by artificial intelligence. This is the case with films, books, pieces of music and paintings.
• However, AI systems are still not capable today of being creative on their own. In AI-supported "new creations", the AI algorithms still rely on recognized patterns, which they then vary and/or combine.
• The relevant legal framework conditions regarding copyright, data protection
or ethical guidelines for the use of AI to generate images, videos and music must
be taken into account.

4.12 AI-Supported Customer Acquisition and Advertising Design

In marketing, text, image and video content play a crucial role in attracting attention
and communicating brand messages. The use of AI offers marketers a variety of
opportunities to create and use high-quality content efficiently. AI can assist in the
automatic adaptation of text, image and video content to different platforms and
audiences. In addition, AI can assist in the development of personalized advertis-
ing content that is tailored to individual preferences and behaviors. By analyzing
data on customer preferences, behaviors and demographic information, customized
textual and visual content can be created.

4.12.1 AI-Supported Customer Acquisition

Artificial intelligence has the potential to fundamentally change the way customer
acquisition is designed in the future. The integration of AI technologies into adver-
tising strategies offers companies new opportunities to improve brand building and
customer engagement. To this end, data-driven insights and predictions are devel-
oped that can lead to processes of marketing automation. Whether these develop-
ments will lead to substitution or to support of creative professions remains to
be seen.
AI algorithms can create personalized content for advertising that is tailored to
the individual preferences and interests of each customer. AI engines identify pat-
terns and trends from customer data that contribute to the development of appropri-
ate advertising content. This can make customer acquisition overall much more
efficient. AI engines can help companies identify potential customers by analyzing
behavioral data and making predictions about the likelihood that a particular person
will buy a product or service. Conversion rates can be further increased through
personalized advertising messages.
Another important application of artificial intelligence is lead prediction and
lead profiling. Companies like the traditional—and now partly disappeared—mail
order companies have already started many decades ago to infer their customers’
future buying behavior and achievable customer values based on comprehensive
data sets. For this purpose, scoring models of varying complexity were used (see in
depth [2]).

Concepts of artificial intelligence can now significantly improve this approach, because far more data, and at the same time more varied types of data, can be used for pattern recognition. By comparing current top customers with potential new customers, so-called look-alike audiences can be identified that have the highest potential to develop into top customers. For this purpose, thousands of data points can be condensed into a digital signature, which scans existing databases for suitable high-potential prospects like a search grid (also called a prey grid). This is also referred to as predictive analytics, because the analyses serve to predict purchase probabilities and customer values (cf. [2]). It is necessary to answer the following questions (in real time) based on the continuously incoming data:

• Which offers lead to the highest conversion rate (be it the subscription to a news-
letter, the request for information material, the request for a quote, the agreement
to a meeting, a donation and/or a purchase)?
• Which dispatch times are optimal in terms of the desired conversions?
• Which follow-up rhythms promote conversions?
• Which communication channel supports conversions best for which target
persons?
• Which degree of individualization of offers promotes conversions?
• Which degree of personalization of the approach supports conversions best?

The information gained here can flow into a dynamic profiling. This means that the
profiles are improved every day, every hour, every minute to support the company’s
goal achievement. The reinforcement learning approach already introduced can be used here. After all, each achieved conversion corresponds to a reward that tells the algorithm that it has worked well. And each conversion that is not achieved shows the AI engine that the algorithm has not yet worked optimally.
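In a strongly simplified form, this continuous feedback can be sketched with an incrementally updated scoring model. The sketch below assumes scikit-learn and NumPy are available; the features, the simulated outcomes, and the toy ground truth are invented, and the approach shown is incremental supervised learning rather than reinforcement learning in the strict sense:

```python
# Sketch of continuously updated conversion scoring: every observed outcome
# (conversion = 1, no conversion = 0) refines the scoring profile, similar in
# spirit to the reward signal described above. Features and data are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Illustrative features per contact: [pages_viewed, emails_opened, recency_score]
model = SGDClassifier(loss="log_loss", random_state=0)
classes = np.array([0, 1])  # 0 = no conversion, 1 = conversion

def update_profile(features: np.ndarray, converted: int) -> None:
    """Feed one observed outcome back into the scoring model."""
    model.partial_fit(features.reshape(1, -1), [converted], classes=classes)

def conversion_score(features: np.ndarray) -> float:
    """Predicted conversion probability for a prospect."""
    return float(model.predict_proba(features.reshape(1, -1))[0, 1])

# Simulated stream of incoming outcomes
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.uniform(0, 10, size=3)
    y = int(x[0] + x[1] > 10)          # toy "ground truth" for the simulation
    update_profile(x, y)

print(round(conversion_score(np.array([8.0, 7.0, 1.0])), 2))
```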
This type of customer approach also changes marketing planning. While so far
campaigns for specific target groups were developed, now—in addition—indi-
vidually usable dialogue programs need to be developed, which are based on vari-
ous triggers of the individual target persons. The used triggers—e.g. purchase of
products A and B within a two-week interval—can start a dialogue program at different times for different customers. Since corresponding triggers are not available for all prospect or customer constellations, broader campaigns will continue to be used (cf. in depth on dialogue programs [2]).
Another important task is individualized recommendations to win customers
or to exploit further customer potentials. The complex recommendation algo-
rithms (keyword recommendation engine) of Amazon are responsible there for
36% of the turnover. 90% of customer support at Amazon is also automated (cf.
[87], p. 73 f.). At Netflix several hundred developers are employed to create algo-
rithms for the recommendation management and for the generation of personal-
ized content.
The recommendation engine—in the sense of an automated recommendation—can significantly contribute to increasing customer satisfaction and, by
improving the conversion rate, simultaneously increase the company’s revenue and

especially profit. In this "machine", a multitude of information can be processed—supported by AI—which is available about the user, his previous behavior and
expressed preferences as well as—prospectively—about his environment (context)
(keyword Context Marketing). Therefore, the aim is to align the Next-Product-to-
Buy or the Next-best Action not only with the user, but ideally also with the time
and place of use, and to select the most relevant usage channel for provision.
An undifferentiated "dumb" promotion, which plays the same offers to a large
target group, is thus dispensed with. The Next-best Action does not always have to
represent a very specific offer, but can also include the provision of sales and/or
reputation-promoting information (keyword Content Marketing; cf. further [3],
p. 153–182). In sum, the core goal of value-oriented customer management is
pursued: a sustainable increase in customer value (cf. in depth [2]).
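A recommendation engine of this kind can be illustrated, in a heavily reduced form, with item-based collaborative filtering. The interaction matrix below is invented, and real systems such as those of Amazon or Netflix are far more complex:

```python
# Minimal sketch of an item-based recommendation step using cosine similarity
# on a user-item interaction matrix. Real recommendation engines are far more
# complex; the data here is purely illustrative.
import numpy as np

# Rows = users, columns = products; 1 = purchased/viewed, 0 = not
interactions = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
], dtype=float)

def recommend(user_index: int, top_n: int = 2) -> list[int]:
    # Item-item cosine similarity
    norms = np.linalg.norm(interactions, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    normalized = interactions / norms
    item_sim = normalized.T @ normalized

    user_vector = interactions[user_index]
    scores = item_sim @ user_vector        # aggregate similarity to owned items
    scores[user_vector > 0] = -np.inf      # do not recommend items already owned
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(0))  # product indices suggested for the first user
```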
The AI-supported solution Sales Cloud Einstein from Salesforce helps compa-
nies increase their sales success. It analyzes the data of the Customer Relationship
Management (CRM) to identify patterns that distinguish successful business deals
from missed opportunities. Based on the insights gained in this way, recommenda-
tions for promising leads and strategic measures are derived. The core functions of
Sales Cloud Einstein include (cf. [88]):

• Einstein Lead Scoring
This scoring approach prioritizes leads based on their similarity to previous suc-
cessful deals to accelerate the building of the sales pipeline.
• Einstein Opportunity Insights
This function provides immediate answers to important sales questions. These
include the probability of a business deal, customer sentiment, competitor
involvement, and overall customer engagement.
• Einstein Activity Capture
By connecting the sales staff’s inbox and calendar with Salesforce, relevant
activities such as emails and meetings can be automatically captured and linked
with the corresponding data record in Salesforce. This reduces the time spent on
data entry and allows employees to focus more on sales.
• Einstein Account Insights
This function provides comprehensive insights into all business developments
that affect customers. This information is consolidated in a sales console.

This solution provides comprehensive support to make CRM even more successful.
In addition, Einstein GPT, one of the world’s first generative AI engines for CRM,
supports the drafting of customer communication as desired (see [88]).
The Otto Group uses an innovative attribution modeling to optimize the com-
munication channels used (including touchpoint management and marketing plan-
ning). This is based on a Customer Touchpoint Tracking, where search engines,
social media, and online ads are analyzed. All efforts supported by artificial intelli-
gence should have a customer-centric perspective, because only when customers can recognize an individual added value for themselves will they use corresponding offers.
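What attribution modeling across touchpoints means can be illustrated with a toy comparison of two simple attribution rules. The Otto Group's actual model is not public; the journey and the conversion value below are invented:

```python
# Toy comparison of two simple attribution rules over a customer journey.
# Touchpoints and conversion value are invented for illustration only.
from collections import defaultdict

journey = ["search engine", "social media", "online ad", "search engine"]
conversion_value = 100.0

def last_touch(journey: list[str], value: float) -> dict[str, float]:
    return {journey[-1]: value}

def linear(journey: list[str], value: float) -> dict[str, float]:
    credit: dict[str, float] = defaultdict(float)
    for touchpoint in journey:
        credit[touchpoint] += value / len(journey)
    return dict(credit)

print(last_touch(journey, conversion_value))  # all credit to the last channel
print(linear(journey, conversion_value))      # credit spread over all contacts
```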

Tomi.ai is an AI-supported platform that is specifically designed for companies with a lead-to-sale cycle of two weeks or longer. It analyzes user behavior on the
company’s website and correlates it with the sales data and the customer values of
the company’s own CRM system. This makes it possible to determine the prospec-
tive customer value in real-time from the behavior patterns of new and even anony-
mous website visitors. This allows companies to identify particularly value-based
target groups and address them specifically. This can increase the Return on
Advertising Spend (ROAS). By combining first-party data, behavioral data, and
CRM data, Tomi.ai supports value-based evaluations for optimizing advertising
campaigns and predicting future successes (see [89]).
The company Intellimize offers an AI-driven platform for website optimiza-
tion. By using machine learning, Intellimize is able to automatically adjust the con-
tent for each individual website visitor in real-time. This is intended to increase the
likelihood that the visitor will perform a desired action. The platform constantly
experiments with different website designs and learns from user interactions to con-
tinuously improve performance. In this concept, which can learn from hundreds or
thousands of website visits and the activities carried out there every day, no A/B
testing is needed anymore (see [90]).
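How variants can be optimized continuously without a classical A/B test can be illustrated with a bandit-style approach such as Thompson sampling. Intellimize does not disclose its algorithms, so the following sketch only illustrates the general idea; the variants and conversion rates are invented:

```python
# Sketch of bandit-style optimization (Thompson sampling) as an alternative to
# classical A/B testing: traffic shifts automatically towards better-performing
# page variants. This is an illustration, not a vendor's actual algorithm.
import random

variants = ["headline_A", "headline_B", "headline_C"]
successes = {v: 1 for v in variants}   # Beta prior: alpha = 1
failures = {v: 1 for v in variants}    # Beta prior: beta = 1

def choose_variant() -> str:
    """Sample an expected conversion rate per variant and pick the best draw."""
    draws = {v: random.betavariate(successes[v], failures[v]) for v in variants}
    return max(draws, key=draws.get)

def record_outcome(variant: str, converted: bool) -> None:
    if converted:
        successes[variant] += 1
    else:
        failures[variant] += 1

# Simulated visits with assumed "true" conversion rates per variant
true_rates = {"headline_A": 0.02, "headline_B": 0.05, "headline_C": 0.03}
for _ in range(5000):
    v = choose_variant()
    record_outcome(v, random.random() < true_rates[v])

print({v: successes[v] / (successes[v] + failures[v]) for v in variants})
```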

Food for Thought


In A/B testing, the aim is to find the best one-fits-all solution for all addressed
individuals. Here we can think of the average “right” shoe size for all custom-
ers. We probably won’t be able to delight all buyers with the same shoe size!
How much more elegant is a solution that creates a completely individual
customer experience for each website visitor. For consumers, the content can be adapted to the visitor's country of origin, city, device used, and time of day. Depending on the
learning experience of the AI system, various other content and calls-to-action
can be integrated. If website visitors from companies can be identified, their
relevant competitors and relevant case studies can be presented.
An individual website experience becomes possible!

Users can choose their testing method by using AI Optimize for an A/B test
approach or applying flexible rules. With the use of a WYSIWYG visual editor,
experiments can be set up easily and without programming knowledge; each variation on each page can be personalized and tested.
The platform allows users to set an optimization goal as well as page goals to
promote valuable page interactions and ensure the alignment of each goal with the
business goals. Tests can be started in a few minutes, unlimited variations run simul-
taneously and tests can be paused or restarted at any time without affecting the real-
time learning or the results.
With Intellimize, users can quickly gain insights into specific target groups to
inform about other marketing channels and offer a truly personalized and relevant

experience for known and anonymous visitors on each page. The platform also
offers AI-supported text suggestions to overcome writer’s block and allows users to
create and launch meaningful content variations in a few minutes.
In addition, Intellimize provides a comprehensive overview of campaign results
in a clear dashboard and a 1-click integration into GA4 to present the conversion
results in the Google Analytics dashboard.

4.12.2 AI-Supported Analysis of Advertising Messages

Artificial intelligence also plays an increasing role in the optimization of creations. AI engines can support creative processes. They can find out which components of
a motif or which text modules are relevant for a specific target group. This allows
creatives to optimize their works based on these analyses or create new ones. It is
important to reiterate that true creativity cannot yet be represented by AI models.
Today, it is still the human mind that is required for the development of origi-
nal ideas.
An interesting concept for the analysis and optimization of different content
formats is provided by EyeQuant [91]. The company originated from the Institute
for Cognitive Sciences at the University of Osnabrück. The AI solution used here
enables the evaluation of the visual impact of different designs, without the need for
a tracking code or a user test. EyeQuant uses the findings of neuroscientific research
and applies them in real time with AI algorithms to existing designs. To conduct
pretests or live tests, only an image file or a URL is needed as input.
With the EyeQuant analysis, the visual attention is determined. Based on design
metrics, designs can be evaluated to create user experiences that lead to the desired
conversions. How does EyeQuant before? The basis for the evaluations are data
obtained through extensive user studies in eye-tracking labs and through online
panels. The predictive models used here analyze over 1.6 million data points from
20,000 experiments with hundreds of participants. The goal is to identify which
design features (such as luminance, edge density, saturation, structural arrange-
ment, and much more) attract human attention within the first three to five seconds.
In laboratory studies, the gaze patterns and eye movements of people were quanti-
fied. These include fixations (short-term dwelling of the eyes at one point), sac-
cades (rapid eye movements or jumps from fixation to fixation) and smooth pursuit
(tracking of a moving object). Based on the determined values, the relative impor-
tance of visual features can be determined.
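EyeQuant's attention models are proprietary. As a toy illustration, two of the low-level design features mentioned above, average luminance and edge density, can be computed from a screenshot like this (Pillow and NumPy are assumed to be installed; the file path is hypothetical):

```python
# Toy computation of two low-level design features mentioned above: average
# luminance and edge density. The actual attention models are proprietary;
# this only illustrates the kind of input features involved.
import numpy as np
from PIL import Image

def design_features(path: str) -> dict[str, float]:
    gray = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0

    # Average luminance of the design
    luminance = float(gray.mean())

    # Edge density: share of pixels with a strong local gradient
    gy, gx = np.gradient(gray)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    edge_density = float((magnitude > 0.1).mean())

    return {"luminance": luminance, "edge_density": edge_density}

# Example usage (assumes a screenshot file exists at this hypothetical path):
# print(design_features("landing_page.png"))
```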
EyeQuant uses this data to understand the unconscious priorities of the viewer
when searching for information. Artificial neural networks are used to predict how
users will perceive a design in the first three to five seconds of viewing it. The simu-
lations used here are validated using empirical eye-tracking studies. These show
that the predictions of EyeQuant are 90% as accurate as the predictions based on
physical studies with hundreds of participants (cf. [91]).
Based on this data, perception patterns are identified and design features are
worked out that significantly affect the success or failure of a website (e.g. typo,

color contrast, position of a call-to-action). Based on the insights gained from these
studies, predictive results are derived that can be implemented immediately. With the Clarity Score from EyeQuant, the persuasiveness of your
own designs can be objectively measured. Alternative design drafts can be com-
pared and a comparison with competitors can be carried out. In addition, it becomes
visible how users can be motivated to engage with one's own core messages and calls-
to-action. In many studies it has been shown: The higher the Clarity Score, the lower
the bounce rate.
The Perception Map analyzes the content that is seen in the first few seconds
after calling up a website, a mobile app, an email, a social media post or an advertis-
ing medium. This determines whether the design used effectively directs the atten-
tion of the users to the most important content or not. With the Excitingness Score
it can be determined whether a design has a stimulating or calming effect on the
user. The tool Regions of Interest measures the specific visibility of different areas
of the online presence. With this tool it is recorded how individual areas of different
design drafts are perceived. This knowledge can eliminate subjective assessments
from the release process and replace them with more objective data. To this end,
EyeQuant visualizes the differences between different versions with quantitative
data. In this way, creative changes can be easily justified (cf. [91]).
The company Affectiva, already introduced in Sect. 4.4, aims to capture the
emotional impact of advertising with its AI technology. For this purpose, the
software can measure the unfiltered and unbiased emotional reactions of
consumers when watching videos, ads, movies, and TV shows. The Affectiva
technology can do this unobtrusively and on a large scale if the users agree to
such a collection. An interesting field of application is advertising tests. Here it
can be determined how the target audience reacts to advertising content. The emo-
tional reactions of the participants can be determined both in online and offline
environments (cf. [59]).
Also, an analysis of entertainment content itself becomes possible. This can
determine the emotional attachment of the viewer to content from trailers or movies.
This knowledge is crucial for the creation of entertainment experiences that are
intended to motivate the viewer to bond and consume more intensively. Such knowledge
is indispensable for streaming providers like Netflix & Co. (cf. [59]).

4.12.3 AI-Supported Advertising Planning

Advertising planning can also be improved by AI. With the help of predictive
analysis, companies can predict future market trends and adjust their advertising
strategies accordingly. AI can also help optimize media buying behavior by
determining the best time, place, and context for the placement of ads, spots, ban-
ners, posts, or other advertising formats. In addition, AI tools can analyze data about
the effectiveness of different advertising campaigns and provide recommenda-
tions for future campaigns to increase profitability.
AI platforms for media planning take over tasks that were previously done by
media agencies. In the first step, the AI platforms can help media planners as expert
systems to analyze the multitude of existing data to identify advertising-relevant
patterns even better or faster. By using historical data, AI-supported systems can
generate optimized media plans. Human planners can check the generated plans,
respond to deviations and new trends, and use the AI engine as a tool to improve
planning. It should be noted that AI systems project existing patterns into the future,
but cannot develop original ideas. However, their efficiency and speed give users
more time for creative tasks and strategic decisions.
With the advent of Big Data, the use of intelligently linked and modeled data for
the media placement has increased significantly. In the digital area, data can con-
tribute at various levels to optimizing the media use. The inclusion of data such as
weather or sociodemographic factors is just the first step. Programmatic
Advertising, originally introduced in online marketing, is also gaining importance
in more traditional media. Here, models are in use that check the relevance of a
campaign before it is released for the auction process.

Note Box
In the field of content generation and content distribution, a decrease in the
number of employees is expected in the coming years due to increasingly
intelligent creation and distribution.

4.12.4 AI Solutions for the Post-Cookie Era

The foreseeable end of third-party cookies poses a significant challenge for mar-
keting. After all, cookies will no longer be allowed to be used for targeted advertis-
ing. Apple already allows its users to opt out of setting third-party cookies. Google
will prohibit third-party cookies from 2023. As a result, digital advertisers must
abandon the important targeting concept of third-party cookies. The cookie tech-
nology has so far allowed users to be uniquely identified across many websites and
to create extensive profiles about them. These profiles could be used for targeting.
What should targeting look like in the future if third-party cookies can no longer
be used for this purpose? What alternative solutions are available? With the help of
artificial intelligence, various approaches have been developed to continue to enable
personalized advertising through targeting in the future.
Advertising companies are developing concepts based on first-party data.
This refers to the data that the advertising company collects itself. Such data, like
email address, first and last name, postal address, phone number, date of birth, etc.,
allow the collecting company to uniquely identify individual users. However, users
must first be motivated to give their data to the advertisers—i.e., the advertising
companies. Often, so-called brand ecosystems are built that require a login to
access a variety of services. This is the case, for example, with Apple and also with
Google. In addition, communities can be built, as Procter & Gamble has success-
fully done with its For me program. The world of benefits is only accessible to the
registered and thus identified customer. Through these first-party data, stable iden-
tities can be assigned to online users. Here, artificial intelligence can support data
analysis to gain a deeper understanding of customer preferences and behaviors.
This can form the basis for more effective and targeted advertising based on the
actual interests of the users. Through predictive analyses, predictions about future
user behavior can also be developed based on first-party data.
Publishers often focus on so-called context targeting, also known as contex-
tual targeting or semantic targeting. AI engines analyze the content of a currently
visited website in real time to then play out matching advertising. The advertising
messages are therefore oriented towards the content of the visited page and not
towards the individual online behavior of the respective user. Based on this
context targeting, the advertising carriers offer classic environment bookings. For
the advertising display, offer-affine environments are sought—as has been the case
with print and TV campaigns for a long time. These solutions for the online area do not require
identification of the website visitors. Artificial intelligence enables visitors—tai-
lored to the respective content of the visited website—to be presented with suitable
offers. The identification of the offer-affine environments can now be automated by
AI and be much more precise than in the past. The advertising offers presented in
this way primarily contribute to the upper funnel—the upper area of the conversion
funnel (see Fig. 4.19).
With CONATIVE, the company Define Media [92] uses an AI system for advertising
planning. The artificial intelligence optimizes the delivery of campaigns based
on machine learning. To achieve a perfect match between advertising and target
persons, among other things, the contents of the online advertising material to be
placed (teaser texts, headlines, images, and the landing page) are analyzed with
AI. The potential target pages of online advertising are also analyzed based on
headlines, article texts, images, etc. with AI support—without human assistance. The
semantic information obtained in this way allows the fit between the advertising
material (e.g., an online banner) and the advertising carrier (the visited website) to be recog-
nized. In addition, other context data (such as weather, relevant state) are included in
the decision. In total, up to 1500 dimensions are included in this analysis. The deci-
sion about the delivery of advertising is based on the prediction of the achievable
click-through rate. Through a feedback loop, the neural networks used here can
continuously learn because the engagement of the users is fed back. This way, the
performance of the advertising placement can be continuously improved.
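The principle described above can be sketched in a few lines. The following Python fragment is a heavily simplified illustration, not Define Media's actual CONATIVE system: a logistic model stands in for the neural networks, predicts the click-through rate from a (here tiny) feature vector of ad, page, and context signals, decides on delivery, and is updated with the observed engagement. All feature names, thresholds, and numbers are invented assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_features = 6            # illustrative only; the text mentions up to 1500 dimensions
weights = np.zeros(n_features)
bias = 0.0
learning_rate = 0.05
delivery_threshold = 0.1  # deliver only if the predicted CTR exceeds this value (assumption)

def predict_ctr(features):
    # Predicted click probability for one ad/page/context combination.
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

def feedback_update(features, clicked):
    # Feedback loop: adjust the model with the observed engagement (one SGD step).
    global weights, bias
    error = predict_ctr(features) - clicked
    weights = weights - learning_rate * error * features
    bias = bias - learning_rate * error

for step in range(1000):
    x = rng.random(n_features)                       # e.g. topic match, image fit, weather, ...
    if predict_ctr(x) > delivery_threshold:          # decision about delivering the ad
        clicked = float(rng.random() < 0.2 * x[0])   # simulated user reaction
        feedback_update(x, clicked)

print("learned weights:", np.round(weights, 2))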
The company Sqreem Technologies is a B2B marketplace for digital media
buying. Here too, a proprietary AI solution supports cookie-free, precise ad target-
ing on a large scale (see [93]). The technology offered enables the identification of
customer groups and their behaviors—but not of individuals. Thus, as required,
privacy is respected. By using an AI agent, the observed online behaviors are
matched with the behaviors desired by brands and publishers. The integration into
an advertising ecosystem allows the delivery of advertising to be monitored and
controlled throughout the entire process. The ONE Market platform is an end-to-
end solution that seamlessly contributes to addressing the target groups relevant to
achieving objectives—based on the observed online behavior. To achieve this, the
media buying workflow has been fully automated and is controlled by AI.
Other providers are trying to develop a cross-publisher context targeting. This
enables advertising campaigns that connect media from different publishers and
make them bookable in the same way. For this, the company Seedtag has developed
AI software for website content analysis. This allows for the selection of
websites that best match the planned advertising. The AI technology used for this
not only analyzes individual keywords, but also classifies entire texts as well as
images and videos (cf. [94]).
Further solutions for advertising without third party cookies are being dis-
cussed. After all, contextual targeting alone can hardly provide the appropriate envi-
ronments for all products and services. For example, how does a chocolate or
toothpaste provider find its target group? After all, there are not that many content
environments for these offers to achieve a broad impact. In addition, content-oriented
campaigns tend to target the upper funnel. This is more about the creation of aware-
ness. Performance campaigns often cannot convince here, because while there is a
thematic interest—there is not necessarily a purchase interest.

Think-Box: Questions You Should Ask Yourself!


• To what extent is AI-supported customer acquisition already taking place
in our company today?
• What results have been achieved and what have we learned so far in
this field?
• Can we optimize our advertising activities through AI-based analysis?
• Have any insights gained so far been widely communicated within the
company?
• Have we already dealt with AI-supported advertising planning—or are our
service providers doing this?
• Where can we easily identify areas for optimization?
• What solutions do we see for the post-cookie era?
• What ideas do our advertising partners have for the time after the elimina-
tion of third party cookies?
• Who is ultimately responsible for these questions?

4.13 Summary

• Artificial Intelligence offers a variety of possibilities for optimizing customer
acquisition.
• AI tools support the analysis of advertising messages to enhance their impact.
• Various concepts already exist to support advertising planning through AI
engines.
• AI-based solutions enable target group-specific advertising in the post-cookie
era, for example through Context Targeting.
4.14 Dynamic Pricing

AI algorithms are increasingly being used in Dynamic Pricing as well. Pricing
remains one of the decisive factors influencing a company’s profitability.
Therefore, it is not surprising that AI applications have also penetrated this domain
of marketing and sales.
With Wise Athena [95], there is an intelligent AI agent that supports providers
of Consumer Packaged Goods in defining pricing decisions and retailer promo-
tions. To achieve this, Wise Athena automatically selects the data features that best
describe the behavior of the respective products in relation to each other. For this
purpose, a model is calculated which also takes into account cannibalization
effects in its own offer and the cross-price elasticity of the company’s products. The
cross-price elasticity (of demand) determines the percentage change in the quantity
demanded of a good as a result of the change in the price of another good. It is
important to note that cross-price elasticity, unlike price elasticity, involves two dif-
ferent goods.
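Expressed as a formula (a standard textbook definition, not anything specific to Wise Athena), the cross-price elasticity of the demand for good A with respect to the price of good B is

\varepsilon_{AB} = \frac{\Delta q_A / q_A}{\Delta p_B / p_B}

A positive value indicates substitutes, which is also how cannibalization within a company's own assortment shows up, while a negative value indicates complementary goods.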
Regular training has significantly increased the accuracy of sales forecasts.
Despite an extremely large number of possible price combinations, the AI system
can identify those price combinations that optimize revenue or margins. Through
such an optimized pricing strategy, users of Wise Athena were able to significantly
increase their margins. Customers of Wise Athena include companies such as BAT,
Beiersdorf, and Reckitt Benckiser (cf. [95]).
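How such a system can search the space of price combinations can be illustrated with a deliberately tiny example. The following Python sketch uses a constant-elasticity demand model for two products and a brute-force grid search for the margin-maximizing price pair. It is not Wise Athena's model; all baseline figures, elasticities, costs, and price ranges are invented assumptions, and real systems handle far more products with far more efficient optimization.

import itertools

# Baseline demand and prices, unit costs, and elasticities: all invented for illustration.
BASE  = {"A": {"q0": 1000, "p0": 2.0}, "B": {"q0": 800, "p0": 3.0}}
COST  = {"A": 1.2, "B": 1.8}
OWN   = {"A": -2.5, "B": -3.0}                       # own-price elasticities
CROSS = {("A", "B"): 0.4, ("B", "A"): 0.3}           # > 0: substitutes (cannibalization)

def demand(product, prices):
    # Constant-elasticity demand: own price and the other product's price both matter.
    other = "B" if product == "A" else "A"
    q0, p0 = BASE[product]["q0"], BASE[product]["p0"]
    own_effect = (prices[product] / p0) ** OWN[product]
    cross_effect = (prices[other] / BASE[other]["p0"]) ** CROSS[(product, other)]
    return q0 * own_effect * cross_effect

def total_margin(prices):
    return sum((prices[p] - COST[p]) * demand(p, prices) for p in BASE)

grid_a = [round(1.6 + 0.1 * i, 2) for i in range(10)]   # candidate prices for product A
grid_b = [round(2.3 + 0.1 * i, 2) for i in range(10)]   # candidate prices for product B
best = max(({"A": a, "B": b} for a, b in itertools.product(grid_a, grid_b)), key=total_margin)
print("best price combination:", best, "margin:", round(total_margin(best), 2))

Even this toy example hints at why AI support is needed: with only ten candidate prices per product, the number of combinations grows exponentially with the number of products in the assortment.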
From a corporate perspective, dynamic pricing is associated with various
advantages. Companies can adjust their prices in real time to maximize profit.
Marketers can quickly adjust prices to changing market conditions, which occur
due to fluctuations in supply and demand, changes in competition, or seasonal
trends. In addition, an efficient use of resources is supported. For physical products
or services with limited capacities—such as airline tickets and hotel rooms—
dynamic pricing allows for good utilization of resources. By raising prices during
times of high demand and lowering them during times of low demand, companies
can ensure that their capacities are optimally utilized. Due to the high complexity
of dynamic pricing, companies need specialized (AI) tools and skills to analyze the
large amounts of data required for dynamic pricing.
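The basic logic of demand-dependent price adjustment can be illustrated with a deliberately simple rule. The following Python sketch is purely illustrative and not a vendor solution: the price of a capacity-constrained offer (here a hotel room) rises with utilization and is discounted shortly before arrival if demand is weak. All numbers are invented assumptions; AI-based systems replace such fixed rules with learned demand forecasts.

def dynamic_price(base_price, booked, capacity, days_to_arrival):
    # Price rises with utilization; weak last-minute demand triggers a discount.
    load_factor = booked / capacity
    price = base_price * (0.8 + 0.6 * load_factor)
    if days_to_arrival <= 3 and load_factor < 0.5:
        price *= 0.85
    return round(price, 2)

# High demand well in advance vs. weak demand shortly before arrival
print(dynamic_price(120.0, booked=85, capacity=100, days_to_arrival=10))  # 157.2
print(dynamic_price(120.0, booked=20, capacity=100, days_to_arrival=2))   # 93.84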
How is dynamic pricing evaluated from a customer perspective? Customers
benefit from dynamic pricing if they are flexible and buy during times of low
demand. However, customers may find it negative if prices fluctuate constantly
because the prices for customers become unpredictable. This makes it difficult to
compare prices. In addition, the perceived price fairness can be lost. Customers
may feel that dynamic pricing is unfair. The impression may arise that they have to
pay more than other customers. This impression can arise when pricing is based on
personal data or individual purchase histories. Therefore, dynamic pricing can lead
to customer dissatisfaction.
Insofar as dynamic pricing is based on the analysis of personal data, customers
may additionally have privacy concerns.
Food for Thought


One thing becomes clear: The advantages of dynamic pricing primarily lie on
the side of the company. Customers, on the other hand, lose their sense of price
levels. In addition, the impression may arise that they have not been
treated fairly.
So that price research—associated with repeatedly visiting the same pro-
vider websites—does not lead to higher prices, customers sometimes feel
compelled to use different devices for research. This means that dynamic pric-
ing leads to higher effort for customers.
Companies should carefully consider whether they should use dynamic
pricing!

AI-based pricing raises an exciting question: Can it happen that through price-
setting algorithms, which take into account all relevant competitor prices (possibly
also determined by AI systems), customer demand behavior and other context vari-
ables, AI-driven price agreements arise that would be equivalent to a cartel?
As a result—as in a cartel—the profits of the companies would increase and
customers would pay a higher price for the same services than before. Consequently,
there would be collusive behavior. The term collusion means “secret agreement”
and describes an unauthorized cooperation of several participants to the detriment
of third parties.
Another question is: Can such AI-driven price agreements also come about when
the algorithms used were not designed for such an approach? Would the algorithms
independently—oriented towards the goal of corporate profit optimization—
develop in such a way that collusive behavior inevitably occurs? Who would be
guilty in this case in the sense of the Act against Restraints of Competition? The
original programmers or an AI system that cannot be guilty in the legal sense? These
questions are raised by the Monopolies Commission [96] in their report “Algorithms
and Collusion”. At the same time, a recommendation is also made on how to deal
with these developments.
The Monopolies Commission recommends strengthening market observation
through cartel authority sector investigations. Information about possibly collu-
sively inflated prices is initially brought to the consumer protection associations.
Therefore, it is recommended to grant these associations the right to initiate cartel
authority sector investigations. If there are concrete indications of the use of price
algorithms for collusive behavior, further steps could be decided (see [96]).
Scientists see the risk of collusive behavior by AI systems as low. Reasons for
this are that the environment changes dynamically because new actors enter the
market, new rules of the game are introduced, actors pursue different goals (besides
profit optimization, for example, gaining new customers, warding off competitors),
so that the algorithms may no longer fit a new situation. Consequently, it would be
difficult to achieve a stable equilibrium for all actors. Therefore, algorithmic car-
tels are currently not foreseeable (cf. further [97], p. 18).
Food for Thought


If there were indeed algorithmic cartels, the leniency policy currently used to
uncover cartels unfortunately could not be applied. Unless we manage to
convey values to the algorithms so that they report themselves when collusive
behavior occurs, because such behavior would violate predefined values!

Think-Box: Questions You Should Ask Yourself!


• Who in our company is dealing with the topic of Dynamic Pricing?
• Are our competitors already using Dynamic Pricing—and if yes, with what
results?
• How do we estimate the acceptance of Dynamic Pricing among our
customers?
• Should we rely on the use of Dynamic Pricing—or rather consciously
refrain from it?

4.15 Summary

• Dynamic Pricing is based on the analysis of internal and external data, which
are important for pricing (e.g., historical sales data, competitive prices, seasonal
fluctuations in customer demand).
• It is important to carefully examine which products or services Dynamic Pricing
could be applied to.
• The corporate goals of Dynamic Pricing need to be clearly defined—and pos-
sible negative effects on the customer side need to be identified early on.
• In this context, relevant legal regulations must be taken into account, for exam-
ple with regard to price agreements, misleading advertising or discrimination.
• The use of Dynamic Pricing must be continuously monitored in order to make
necessary adjustments to the Dynamic Pricing strategy if necessary.

4.16 Optimization of Product Development and Customer Experience

Artificial Intelligence has contributed to concrete advancements in product develop-
ment in various application areas in recent years. Companies like Insilico Medicine
use AI for drug development. They use AI engines to analyze molecular structures
and identify new candidates for drugs. This can significantly accelerate the
traditionally time-consuming and costly process of drug development (cf. [98];
cf. Sect. 5.1.4). In the automotive industry, artificial intelligence is used for the
development of autonomous driving. By analyzing data collected over many
millions of kilometers traveled, manufacturers continuously improve their
autopilot functions and their vehicles.

Fig. 4.22 Augmented reality in the cosmetics industry—Try-on of hair colors and lipstick (source:
L’Oréal; own photo)
In the consumer goods industry, too, AI technologies are used to develop and
improve products. Procter & Gamble has developed an AI-driven skincare app
named Olay Skin Advisor to give consumers personalized product recommendations
(cf. [99]). L’Oréal relies on AI to improve the customer experience and develop
innovative products. For this, 3D printing, virtual and augmented reality as well as
agile manufacturing processes are used. Smart factories allow dozens of different
product formats to be processed in the shortest possible time. The company speaks
here of “Industry 4.0 by L’Oréal”. Through the use of AI, virtual reality and state-
of-the-art cloud technologies, customized and interactive shopping experiences are
created for the customers (cf. [100]).
This makes it possible to test products virtually. For this, a virtual try-on tool
is provided. With its help, the hair color can be changed or a new makeup can be
tried out. Here you can choose from over 400 shades from L’Oréal Paris. The applica-
tion can either be done live, or a photo of yourself is uploaded. This way everyone
can find their personal shade and experiment with a new look (cf. L’Oréal [101]).
The results that the try-on tool can lead to are shown in the not entirely serious
Fig. 4.22 using the author as an example.

Note Box
The fields of application of artificial intelligence to support product develop-
ment and improve the customer experience are virtually limitless.
Think-Box: Questions You Should Ask Yourself!


• In which fields can product development be supported by AI?
• What experiences have already been gained here—and how were they
communicated within the company?
• How active and successful are our competitors in this field?
• What starting points exist to optimize the customer experience with AI
support?
• What would be exciting fields of application for us?
• Which providers support this?

4.17 Summary

• Artificial Intelligence offers a multitude of starting points to support product
development.
• This can save time and/or costs in many areas.
• It supports more agile action.
• In addition, the customer experience can be improved in many fields—but
always oriented towards the wishes and expectations of the customers.

4.18 Retail

Retail has faced a multitude of challenges in recent years. These include not only
the lockdowns, but also the multitude of communication and sales channels
available today. These channels have allowed and facilitated access to customers
for many more competitors. This applies not least to the manufacturers themselves,
who are turning directly to customers via online platforms and their own webshops.
At the same time, many customers expect a high level of service quality, which can
be reflected in personalized and individualized offers, but also in the expectation of
a fast delivery. In addition, more and more providers are relying on dynamic pric-
ing to optimize their own value creation (see Sect. 4.14).
Many retailers therefore need to massively expand their digital capabilities in
order to remain competitive in the future. In order to continue to operate success-
fully, both internal and external processes need to be optimized. In many cases, AI
solutions can make an important contribution to managing the following tasks
(see [102]):

• Strategic demand management
A major challenge for retailers is to find a balance between potential revenue
losses on the one hand and inventory surpluses on the other. For this, it is
necessary to detect possible changes in demand as early as possible in order to
increase responsiveness. For this, sales forecasts need to be improved, not only
to optimize inventory management, but also to increase the success of product
launches and promotional campaigns. The solution lies in an Omni-
channel demand plan. This should first forecast the demand for each channel
independently, before the individual values flow into an aggregated plan. For
this, it is necessary to automate the analysis and forecasting tasks as comprehen-
sively as possible.
AI algorithms can determine how the weather will affect the demand for a
product—on a very specific day and in a specific store. In this forecast it is also
necessary to consider how the price, the respective day of the week, possible
promotions etc. will affect the demand. At the same time, possible carry-along
or cannibalization effects can be simulated. This can make it possible to accu-
rately forecast the expected demand even for very extensive assortments. AI
solutions support the evaluation of these often real-time data and can take a
large number of demand-determining variables into account to create a reliable
demand forecast (a minimal illustrative sketch follows at the end of this list). In
addition, the effects on sales and profit can be determined. Humans as well as
classic statistical methods quickly reach their limits with this amount of data.
• Location-optimized assortment planning
An important objective of retailers is to increase sales and profit by aligning the
assortments with the needs of the customers at each location. After all, con-
vincing customer experiences and the building of location loyalty usually only
succeed with the right assortment. The assortment management is based on
detailed analyses of consumer buying preferences and connects the data obtained
here with the procurement processes. This can avoid stockouts on the one hand
and profit-reducing write-offs on the other hand.
• Lifecycle and channel-oriented pricing
Margin-oriented pricing decisions need to be made across the entire product
lifecycle and are channel-specific. Classic statistical methods and human capa-
bilities often no longer suffice to meet the complexity of pricing in retail today.
For this, increasingly AI-based concepts are being used. These allow for continu-
ous real-time optimization of pricing. Ideally, this is achieved across multiple
dimensions such as location of the store or online shop as well as across the vari-
ous products—sometimes including the activities of competitors and other con-
text data. This context data includes the weather or significant news that influence
buying behavior.
Here, location or channel-specific differentiated price elasticities can be
taken into account. These price elasticities indicate how demand reacts to price
changes (cf. in depth [103], p. 315–323). If these data are evaluated in real time,
corresponding price adjustments—channel-specific—can also be made in real
time. In addition, the effects of price changes on sales and profit can be deter-
mined (cf. on dynamic pricing Sect. 4.14).
Concepts that allow complete automation of pricing decisions over the entire
product lifecycle are particularly exciting. This eliminates the need for frequently
error-prone, time-consuming manual price adjustment processes. In their place,
agile processes contribute to data-based decision optimization, while simulta-
neously reducing the time and effort required for price adjustments. This
approach is facilitated or even made possible when digital price tags are available
on the shelves, which can potentially be changed every second—differentiated
from store to store.
• Inventory planning
A central determinant of success in retail is the inventory turnover rate. It indi-
cates how often—for example, within a month or a year—the goods in the ware-
house need to be replaced on average. The rule here is: The higher the inventory
turnover rate, the greater the economic efficiency tends to be. The condition here
is that the goods are sold at a profit. Retailers who can convert their products into
liquid assets more quickly, therefore, tend to increase their profitability. It is
important here that the inventory turnover rate is determined not only for the total
stock but for the different product groups or even for individual products.
When optimizing inventory levels, there is a trade-off between availability
and profitability. High product availability reduces the inventory turnover rate
and thus profitability. If only a few products are kept in stock, sales opportunities
are lost when products are missing on the shelf—and thus also potential profits.
If necessary, products may then have to be exchanged between different loca-
tions and/or channels so that the products with the highest value creation can
be sold.
AI-supported systems can help to recognize changes in supply and demand
in real time. The transfer of products between different locations and/or channels
can then be made autonomously—without human intervention. The necessary
data for this can be provided by IoT sensors that are used along the purchasing
and sales process. In addition, point-of-sale data and local market data on com-
petitors’ measures as well as weather and other significant events can be taken
into account—ideally in real time.
• Continuous review of inventory
An important prerequisite for inventory planning is data on the stock on the
retailer’s shelves—on the sales floor and in the warehouse itself. An intelligent
data capture and analysis platform like, for example, ShelfView monitors the
shelves in the store via mobile devices or autonomous robots. This shelf man-
agement solution integrates AI-supported image recognition to capture goods
and product labels (product descriptions, prices, promotion actions). If errors in
price labeling, stock shortages, or misplaced products are detected in real time,
immediate action can be taken. Through augmented reality, the necessary correc-
tion information can be provided directly on site to the sales staff, for example,
to correct incorrect price information. For this purpose, the solution ShelfView
works with the AI platform Brain OS from Brain Corporation. This interaction
enables scalable data collection and helps to limit the costs of data collection
(see [104]).
• Omni-Channel Management
For omni-channel retailers, it is a particular challenge to ensure a consistent
shopping experience across the different channels—especially with regard to
services such as pickup, delivery, and return. For example, it is necessary to align
the planning of staff deployment to the expected demand in order to master the
trade-off between service quality and staff costs.
To manage this balancing act, AI-supported forecasts can be used to auto-
mate order processes as well as processes for replenishing stock and sales space.
The results will vary for each store and for each product—every day anew. In
addition, it is necessary to predict to what extent products need to be delivered to
customers—after all, the necessary staff must also be available here. Many retail-
ers have to offer their customers a Ship-to-Home from central warehouses, a
Ship-from-Store and Click-and-Collect in parallel. For the latter, a Ship-to-
Store may become necessary if the goods are only available in another store. For
the buyers themselves, it is irrelevant where the desired products are physically
stored. It is understandable that such last-mile services can only be provided
quickly if a comprehensive real-time inventory overview is ensured.
A correct data set is also a prerequisite for AI-supported real-time personal-
ized offers leading to the desired purchases. These so-called recommendation
engines can play out products across various online and offline channels based
on certain criteria or on the basis of customer profiles. Then it’s not just
“Customers who bought this product also bought...”, but much more often “our
offer XY fits particularly well with your last purchase or with your purchase
three months ago”. The transfer into the offline world is achieved through the use of digi-
tal signage in brick-and-mortar business.
Digital Signage stands for digital signage in the offline world. With analog
signage in retail, in contrast, extensive product descriptions, large price tags, and
promotional offers often have to be centrally developed, printed, shipped, and
attached at the appropriate places—possibly distributed over many loca-
tions. The digitization of signage through digital signage allows not only the
creation, but also the “delivery” to be centralized. For example, promotional
actions can be developed in the headquarters and presented within seconds in all
branches—both indoors and outdoors. Compared to analog signage, digital sig-
nage can also incorporate sound and video and can be changed with very short
lead times.
This digital signage can also be designed interactively. In this form of inter-
active digital signage, users can, for example, interact with the screens via ges-
ture control. When digital signage systems are combined with face recognition,
different content can be presented depending on the respective viewer—for chil-
dren, teenagers as well as for men and women of different age groups. Interaction
with individual people is also possible: if they connect to the display via
Bluetooth, they can be recognized as individuals and addressed with individual offers.
Here it can then be said: “For you, dear Marie, our offer ‘tuxedos for women’
fits perfectly with your last purchases from us”. AI applications can play out such
personalized advertising campaigns that are oriented towards the individual
customer profile. In-store beacons can individually recognize people in the store
and transmit tailored offers—all based on permission, of course (see Fig. 4.23).
Such solutions can lead to higher conversion rates and ideally also to higher
profits, because good performance by retailers increases the number of repeat
4.18 Retail 309

See you soon!


Here's a little
reward - you can
Your
use it on your
personal
next visit!
advantage
5 coupon:
Hello, take a look at our 4 - 30%
exciting offers. Only
today and only for you! 8
2
7
Take a look at the new
3 6
products - they go
perfectly with your
previous purchases. Thank you, Anna, for
1 your purchase! You
now have 250 points in
your points account!
Visit us -
now!

Fig. 4.23 Possible applications of in-store beacon technology

purchases. This is best achieved by keeping the promises made regarding avail-
ability and service provision. An optimization of fulfillment tasks can lead to a
reduction in the associated costs, such as warehousing and transport.
• Real-time visibility and management of changes in supply and demand
The last few years have shown that demand can plummet overnight due to pan-
demic developments—or even come to a complete standstill. At the same time,
supply chains that have been developed and proven over decades can break and
lead to dramatic supply shortages. AI engines can also support here by calculat-
ing the effects of strategic disruptions on one’s own value chain—including the
value chains of upstream and downstream partners. The Internet of Things can
significantly facilitate the necessary exchange of information for this.
Triggers for such analyses can either be supplier problems, interruptions in
the logistics chain, or downturns on the part of direct or indirect demanders. AI
systems can not only simulate the further effects, but also set specific impulses
on how best to master the challenges.
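As announced in the first bullet point (“Strategic demand management”), the following Python sketch illustrates the basic idea of a store-level demand forecast in a deliberately simplified form. A linear model relates daily unit sales to price, promotion, temperature, and day of week; all data are simulated and all coefficients invented. Real AI systems use far richer models, many more of the variables mentioned above, and real historical data.

import numpy as np

rng = np.random.default_rng(1)
n_days = 365

# Simulated sales history for one store and one product (all figures invented).
temperature = rng.normal(15, 8, n_days)          # context data: daily temperature
promo = rng.integers(0, 2, n_days)               # promotion yes/no
price = 1.99 - 0.30 * promo                      # promotion price
weekday = np.arange(n_days) % 7
weekday_onehot = np.eye(7)[weekday]

# "True" demand process, used here only to generate training data.
units = (120 - 25 * price + 40 * promo + 1.5 * temperature
         + 15 * (weekday >= 5) + rng.normal(0, 10, n_days))

# Fit a simple linear demand model via least squares.
X = np.column_stack([np.ones(n_days), price, promo, temperature, weekday_onehot])
coef, *_ = np.linalg.lstsq(X, units, rcond=None)

# Forecast for a specific day: a Saturday with 22 degrees and an active promotion.
x_new = np.concatenate([[1.0, 1.69, 1.0, 22.0], np.eye(7)[5]])
print("expected units:", round(float(x_new @ coef), 1))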

A company that offers convincing solutions for many of the challenges mentioned
is BlueYonder. This company provides the so-called Luminate platform, a solution
that allows companies to respond faster and more profit-oriented to fluctuations in
demand. This solution supports the expansion of supply and supply chains into a
transparent ecosystem that can be coordinated from a central platform. Various AI
engines are integrated for this purpose, which convert real-time data into autono-
mous decisions or formulate concrete recommendations for action (see [105]).
Smart shelf solutions are offered, for example, by Bizerba. These AI-supported
solutions enable automation of the ordering processes. For this purpose, low-in-
stock situations can be identified in good time and eliminated by re-production or
replenishment. Through dynamic price adjustments in connection with focused
advertising measures, food waste can be avoided. In addition, the cloud solution
offered here can help align the range and post-production to the foreseeable demand.
The optimal production quantities are determined by AI agents. For this, in addition
to past data, the weather, the time, and the stocks of alternative products with their
respective prices can be taken into account (cf. [106]).
Fig. 4.24 Impressions of cashier-less shopping at Amazon Go in San Francisco

With Amazon Go, cashier-less shopping experiences are possible. With the
Amazon App, the customer simply checks in at the entrance, takes the products, and
leaves the store. Payment is made via the Amazon App. To enable this, the items
selected by the customer in the store are identified using image recognition. A mul-
titude of cameras and sensors determine which products have ended up in the shop-
ping cart and which have possibly been put back. With direct access to the customer’s
digital account for automatic payment, the customer can leave the store without
stopping at the checkout. In 2023, 23 such shops were operated by Amazon.
Figure 4.24 shows impressions of cashier-less shopping at Amazon Go—including
a view of the technology installed in the ceiling.
Rewe has launched a comparable concept with Rewe Pick & Go in the cities of
Berlin, Cologne, and Munich. With Aldi Shop & Go, this concept is also being
tested by discounters. It remains to be seen how successful these retail variants will
be in the long run.

Food for Thought


The big question is whether Artificial Intelligence can help traditional, non-
digital retailers regain the ground lost to digital players—or whether the gap
between the two groups will continue to widen. The answer also depends on
who has more data, the more powerful algorithms, and the more qualified
personnel—supported by the corresponding budgets. It does not take much
imagination to realize, in light of the developments described, that especially
smaller, traditional retailers will not only have a hard time winning this AI
race, but also even participating in this race at all.
Think-Box: Questions You Should Ask Yourself!


• Do AI solutions pose more opportunities or risks for us?
• Where can we find particularly easy to tap into AI application areas?
• Where would the greatest leverage effect be achieved?
• Which service providers support this—or do we need to build AI capacities
ourselves?
• Which department deals comprehensively and cross-departmentally with
the use of AI in our company?

4.19 Summary

• In retail, it is worth conducting an AI-related analysis of the entire value chain
to recognize the potential of AI applications—including the personalization of
the customer experience, inventory management, and demand forecasting.
• AI-based shelf monitoring will ideally lead to “out-of-stock” situations
becoming increasingly rare.
• Cashier-less shopping experiences are already offered in selected stores today.
• Every retailer has to check to what extent relevant data for AI use is already
available and/or can be accessed through additional data sources.
• For this, the necessary personnel and technical infrastructure must be built.
• It is necessary to define evaluation criteria and metrics to measure the success
of AI use (e.g., sales increase, customer retention, efficiency increase) and iden-
tify continuous optimization opportunities.
• In addition, the legal and ethical aspects of AI use must be considered (such as
data protection, data security, consumer protection, and transparency in AI use).

4.20 Hotel and Hospitality Industry

Artificial Intelligence offers numerous opportunities for optimizing and improving
processes in the hotel and hospitality industry, ranging from improved customer
service to more efficient operational processes to personalized experiences for
guests. A specific application area of AI in the hotel industry is the use of chatbots
for guest communication. AI-based chatbots can efficiently handle guest inquiries
by accepting reservations, offering room upgrade options, and answering questions
about the stay. Ideally, this improves customer service while saving personnel
resources (see in-depth on service management [107]).
Hotel Tech Report [108] has identified the top ten hotel chatbots in 2023. For
this, 1214 hoteliers in more than 70 countries were surveyed. Which chatbot is
best suited for a hotel depends on several factors. These include, among others,
the size, the internationality, and also the available budget. Here are the top five
in the ranking:
• Asksuite
• Quicktext
• HiJiffy
• Easyway
• Book Me Bob

AI-driven process optimization in the hotel and hospitality industry also offers
various fields of application. Artificial intelligence can be used to
predict occupancy and demand fluctuations, enabling more efficient room and staff
planning. In addition, AI can help proactively manage the maintenance and upkeep
of hotel facilities by recognizing anomalies or patterns that indicate potential prob-
lems (keyword Predictive Maintenance, cf. Sect. 3.1.3.4). Robots can also be used
for various services.
Where the journey can lead was shown by the Japanese Henn na Hotel, which
opened in 2016. This name can be translated as “strange hotel”. The entire hotel
near Nagasaki was run by robots. Initially, guests were greeted by Nao, a small
robot, and informed about the hotel and its “servants”. The check-in was done via
reception robots—in the form of dinosaurs and humanoid robots. After entering
the name, a camera recorded the face. This recording served as a key via a facial
recognition system at the room door. The luggage was transported by a mobile
robot, which also provided the “necessary” musical accompaniment. The control of
light, temperature, alarm clock, etc. in the room was taken over by the personal
assistant Chu-ri-chan—via voice control. Snacks could be ordered by the guests via
tablet. Humans were mainly working in the back office. After just 3 years, the
world’s first robot hotel switched back to humans and sent more than 200 robots
into early retirement. What happened?
Many guests were simply annoyed by the robots. The exciting promise “Talk to
robots that radiate human warmth and work efficiently” was not fulfilled in reality.
Often, the robots did not find suitable answers to the guests’ concerns—or they were
simply too slow. The digital assistant Churi in the room was activated by the guests’
snoring and then liked to disturb several times during the night with the obvious
question: “Excuse me, I didn’t understand that. Can you repeat your question?”
However, this helpful assistant could not be deactivated (cf. [109], p. 55). So the
next snoring started a new dialogue—or maybe just a monologue!
The use of service robots, which can take over a variety of supply tasks, is more
successful. An example of this is Jeeves, offered by the company Robotise in
Munich. This robot can not only take over room service tasks, but also collect feed-
back from guests. With its fast service, this robot can replace the minibars installed
in the rooms. Here’s how an order process with Jeeves works (see [110]):

• The guest can request Jeeves via the room phone, an app, or a QR code.
• As soon as Jeeves has arrived at the room door, the guest is notified via call,
app, or SMS.
• Desired drinks or snacks are selected on the touch display and taken from the
drawers. The removals are registered by sensor.
• The selected items are automatically reported to the front desk by email or web
app. Alternatively, contactless payment is made directly on Jeeves.
• Afterwards, Jeeves autonomously drives to the next room or back to the charging
station.

This service is already being used by NYX Hotels (Leonardo Hotels). Other inter-
esting fields of application for Jeeves are hospitals and companies, where patients
or employees can be served on site. Wherever direct human interaction is to be reduced
today and in the future—be it for cost or hygiene reasons or due to staff shortages—
such service robots can be used sensibly.
However, every hotel should ask itself what kind of service fits its positioning.
Some providers will continue to value high-quality face-to-face care and differenti-
ate themselves from the competition in this way. Other companies will opt for low
touch. The decisive question is: What does the customer want? Appreciative and
personal hospitality or rather a fast-sleep experience—as touchless and human-
less as possible? There are large customer segments for both extremes and many
variations in between that need to be identified and won.
The service robot Adam from the company BotbarCoffee also opens up ver-
satile fields of application. This is an interactive robot that works as a bartender, barista,
cook, and more! The robot fits well into very dynamic environments where ser-
vice should be available 24/7. It also guarantees very high precision in the por-
tioning of food and drinks (see [111]). By the way: Adam doesn’t need a tip to
lift his mood. You can find a video of Adam here: https://www.youtube.com/
watch?v=syv4S52g6zE.

Note Box
The use of service robots in more and more areas of human life is already
visible today. Technical limits are often overcome more easily than cultural
limits. While in the USA, but especially in China, Japan and South Korea, the
general public is very open to corresponding innovations, their use in Europe
and Germany often meets with great reservations and fears. These must be
taken into account when developing robot-based service strategies.

To enable an AI-driven personalization of the guest experience, the behavior
and preferences of the guests are analyzed. Then, based on the previous behavior,
very personal restaurant recommendations can be made or special offers can be
provided. This level of personalization contributes to creating a unique guest
experience and increasing the loyalty of the guests. Creativity is hardly lim-
ited here.
Think-Box: Questions You Should Ask Yourself!


• Are chatbots already being used by us?
• What would be the most exciting areas of application?
• Which processes can be supported by the use of AI?
• Where are there worthwhile areas of application for predictive maintenance?
• Where can robots be integrated into the service delivery?
• What are the best cases from our industry—and who analyzes these
systematically?

4.21 Summary

• In the hotel and restaurant industry, customer service offers a wide field of
application for artificial intelligence.
• Chatbots are of great importance here.
• Text-based chatbots will serve as simple communication interfaces.
• The great challenge comes from voice-based chatbots, which are developing
into virtual digital butlers.
• In addition, the use of service robots is recommended—also to compensate for
a shortage of skilled workers.

References
1. Price, B., & Jaffe, D. (2008). The best service is no service: How to liberate your customers
from customer service, keep them happy, and control costs. Jossey-Bass.
2. Kreutzer, R. T. (2021). Customer dialogue online and offline. The big 1x1 of customer acqui-
sition, customer retention and customer recovery. Springer Gabler.
3. Kilian, K., & Kreutzer, R. T. (2022). Digital brand management. Digital branding in times
of diverging markets.
4. Fink, V. (2020). AI projects—just do it. Successfully introducing artificial intelligence in ser-
vice, marketing and sales. Springer Gabler.
5. Hundertmark, S. (2021). Digital friends. How companies can successfully use chatbots. Wiley.
6. Vodafone. (2023). Chatbot TOBi—Your digital assistant. Retrieved June 12, 2023, from
https://www.vodafone.de/hilfe/unser-­chatbot-­tobi.html#welche-­vorteile-­habe-­ich-­mit-­tobi.
7. Assono. (2023). For companies: Chatbot from Germany with artificial intelligence, multilin-
gualism and individual design. Retrieved May 8, 2023, from https://www.assono.de/chatbot.
8. KLM. (2023). Find KLM on messenger. Retrieved April 7, 2023, from https://social.klm.
com/messenger.
9. Wetteronline. (2023). Informed best every day. Retrieved April 7, 2023, from https://
wo.wetteronline.de/produktwelt/messenger-­chatbot/.
10. Beuth, P. (2016). Twitter users turn chatbot into a racist. Retrieved January 3, 2019, from
https://www.zeit.de/digital/internet/2016-­03/microsoft-­tay-­chatbot-­twitter-­rassistisch.
11. Wilde, T. (2019). Customer engagement with chatbots and collaborative bots: Approach,
opportunities and risks for the use of bots in service and marketing. In P. Gentsch (Ed.),
Artificial intelligence for sales, marketing and service. With AI and bots to an algorithmic
business—Concepts, technologies and best practices (2nd ed., pp. 173–186). Springer Gabler.
12. Kreutzer, R. T. (2018). Toolbox for marketing and management. Springer Gabler.
13. IBM. (2023). What is a chatbot. Retrieved May 11, 2023, from https://www.ibm.com/de-­de/
campaign/was-­ist-­ein-­chatbot.
14. aiaibot. (2021). Chatbot Study ‘21. Retrieved April 8, 2022, from https://page.aiaibot.com/
de/chatbot-­studie.
15. Gentsch, P., & Wall, F. (2022). Smart speaker study 2022.
16. Luber, S., & Litzel, N. (2021). What is the Generative Pretrained Transformer 3
(GPT-3)? Retrieved November 22, 2021, from https://www.bigdata-­insider.de/
was-­ist-­der-­generative-­pretrained-­transformer-­3-­gpt-­3-­a-­1011085/.
17. Github. (2023). Your AI pair programmer. Retrieved May 17, 2023, from https://github.com/
features/copilot.
18. Vollmer, A. (2023, May 20). AI at the stove. Frankfurter Allgemeine Zeitung, 14.
19. OpenAI. (2023). GPT-4 is OpenAI’s most advanced system, producing safer and more useful
responses. Retrieved March 29, 2023, from https://openai.com/product/gpt-­4.
20. Gartner. (2023). Your 7 biggest questions about ChatGPT, answered. Retrieved May
8, 2023, from https://www.gartner.com/en/articles/your-­7-­biggest-­chatgpt-­questions-­
answered?utm_campaign=RM_NA_2023_SWG_NL_NL04_Conferences_DW&utm_
medium=email&utm_source=Eloqua&cm_mmc=Eloqua-­_-­Email-­_-­LM_RM_NA_2023_
SWG_NL_NL04_Conferences_DW-­_-­0000.
21. QNOVA. (2023). Digitize quality management easily. Retrieved June 26, 2023, from https://
qnova.ai/.
22. Bitkom. (2023). ChatGPT & Co.: Every sixth company plans AI use for text generation, press
release. Retrieved May 8, 2023, from https://www.bitkom.org/Presse/Presseinformation/
ChatGPT-Jedes-sechste-Unternehmen-plant-KI-Einsatz-Textgenerierung.
23. Arya, N. (2023). Baby AGI: The birth of a fully autonomous AI. Retrieved May 12, 2023,
from https://www.kdnuggets.com/2023/04/baby-­agi-­birth-­fully-­autonomous-­ai.html.
24. Heller, P. (2023, May 14). Artists in the AI trap. Frankfurter Allgemeine Sunday news-
paper, 53.
25. SentiOne. (2023). AI-based social listening. Retrieved May 15, 2023, from https://senti-
one.com/de.
26. Emplifi. (2023). Social listening and intelligence tools for modern brands. Retrieved
May 15, 2023, from https://emplifi.io/products/social-­media-­marketing/listening?utm_
source=socialbakers.com.
27. Mention. (2023). Brand insights at your fingertips. Retrieved May 15, 2023, from https://
mention.com/en/.
28. Levity. (2023). Automate everyday tasks with AI. Retrieved May 15, 2023, from https://
levity.ai/.
29. Keyhole. (2023). Enterprise-grade social listening & analytics simplified. Retrieved May 15,
2023, from https://keyhole.co/social-­listening/.
30. Summetix. (2023). Better data. Deeper insights. Smarter decisions. Retrieved May 15, 2023,
from https://www.summetix.com/.
31. Deloitte. (2022). Digital human. Retrieved April 12, 2022, from https://www2.deloitte.com/
nl/nl/pages/customer-­and-­marketing/articles/digital-­human.html.
32. Uneeq. (2021). Revolutionize customer experiences with scalable human connections.
Retrieved April 12, 2022, from https://digitalhumans.com.
33. Hörner, T. (2019). Marketing with voice assistants: How to strategically succeed with Alexa,
Google Assistant & Co. Springer Gabler.
34. Kahle, T., & Meißner, D. (2020). All about voice: Concept, design and marketing of applica-
tions for digital voice assistants. Haufe.
35. Kahle, T. (2020). Voice marketing—Products and services become dialog capa-
ble. In M. Stumpf (Ed.), The 10 most important future topics in marketing (2nd ed.,
pp. 107–129). Haufe.
36. Statista. (2023). Smart speaker. Retrieved May 2, 2023, from https://www-­statista-­com.
ezproxy.hwr-­berlin.de/study/53329/smart-­speakers/.
37. Statista. (2022). Smart speaker. Retrieved May 12, 2023, from https://de-­statista-­com.
ezproxy.hwr-­berlin.de/statistik/studie/id/52476/dokument/smart-­speakers/.
38. Gender2Technik. (2021). Gender-neutral voice. Retrieved May 12, 2023, from https://gen-
der2technik.de/genderneutrale-­stimme/.
39. Microsoft. (2023). Welcome to the portal of the custom neural voice. Retrieved May 12,
2023, from https://speech.microsoft.com/portal/customvoice.
40. Lin, Y. (2022). Voice search statistics. Retrieved April 6, 2022, from https://www.oberlo.com/
blog/voice-­search-­statistics.
41. Sistrix. (2023). What are zero click searches? Retrieved May 12, 2023, from https://www.
sistrix.de/frag-­sistrix/seo-­grundlagen/google/zero-­click-­search.
42. Kreutzer, R. T. (2021). Practice-oriented online marketing. Concepts—Instruments—
Checklists (4th ed.). Springer Gabler.
43. Anderson, C. (2004). The long tail. dtv.
44. Barysevich, A. (2020). How to optimize for voice search: 6 SEO strategies for
success. Retrieved April 6, 2022, from https://www.searchenginejournal.com/
voice-­search-­optimization-­strategy/379946/#close.
45. Kreutzer, R. T. (2021). Toolbox for digital business. Leadership, business models, technolo-
gies and change management for the digital age. Springer Gabler.
46. Beus, J. (2023). Visibility in the AI future: Which brands does OpenAI recommend? Retrieved
June 10, 2023, from https://shorturl.at/aBFY3.
47. Schema.org. (2023). Welcome to Schema.org. Retrieved May 15, 2023, from https://
schema.org.
48. Schema.org. (2023). Speakable. Retrieved May 15, 2023, from https://schema.org/speakable.
49. Otto. (2023). OTTO action in Google Assistant. Retrieved May 15, 2023, from https://www.
otto.de/shoppages/otto_action.
50. Sachse, M. (2023, April 20). Zalando integrates ChatGPT into its sales platform. Frankfurter
Allgemeine Zeitung, 20.
51. Software AG. (2023). Voice commerce software—Harnessing the power of speech. Retrieved
May 15, 2023, from https://online-­software-­ag.com/de/loesungen/voice/.
52. Nespresso. (2023). Prodigio. Retrieved May 15, 2023, from https://www.nespresso.com/de/
de/prodigio-­machines-­range.
53. Amazon. (2023). Mercedes me. Retrieved May 15, 2023, from https://www.amazon.de/
Mercedes-­Benz-­Mercedes-­me/dp/B075435C1Y.
54. Mercedes. (2023). Get more out of your Mercedes-Benz. Retrieved May 15, 2023, from
https://www.mercedes-­benz.de/passengercars/services/mercedes-­me.html.
55. Osterwalder, A., & Pigneur, Y. (2010). Business model generation. Wiley.
56. Kaltschmidt, T. (2017). Amazon Echo: Newsreader triggers mass order. Retrieved August 29,
2018, from https://www.heise.de/newsticker/meldung/Amazon-­Echo-­Nachrichtensprecher-­
loest-­Massenbestellung-­aus-­3591039.html.
57. Hesterberg, C. (2018). Online shopping on Amazon thanks to Alexa, Cheeky parrot orders
fruit via voice assistant. Retrieved January 4, 2019, from https://www.general-­anzeiger-­bonn.
de/news/panorama/Frecher-­Papagei-­bestellt-­sich-­Obst-­per-­Sprachassistent-­article4003734-­
amp.html.
58. Amazon. (2023). Amazon connect voice ID. Retrieved June 8, 2023, from https://aws.ama-
zon.com/de/connect/voice-­id/.
59. Affectiva. (2023). Humanizing technology to bridge the gap between humans and machines.
Retrieved May 16, 2023, from https://www.affectiva.com/.
60. Receptiviti. (2023). Analyze people’s language and gain insights into their psychology, per-
sonality, and emotions. Retrieved May 16, 2023, from https://www.receptiviti.com/.
61. Vier. (2023). Emotion analysis through AI. Retrieved May 19, 2023, from https://www.vier.
ai/emotion-­analytics-­technologie/.
62. Cogito. (2023). Powering the teams behind great CX & EX. Retrieved May 16, 2023, from
https://cogitocorp.com/.
63. Ewertz, J., Knickrehm, C., Nienhaus, M., & Reichmann, D. (2022). Listen closely: Using
vocal cues to predict future earnings. Retrieved June 26, 2023, from https://papers.ssrn.com/
sol3/papers.cfm?abstract_id=4307178.
64. tcl.digital. (2023). With the TEO Suite, you can easily optimize your texts for your commu-
nication goal. Retrieved May 19, 2023, from https://tcl.digital.
65. Echobox. (2023). The new standard in automation for publishers. Retrieved June 1, 2023,
from https://www.echobox.com/newsletters-­overview.
66. Lengoo. (2023). Rethink language. Retrieved May 19, 2023, from https://www.lengoo.com/.
67. Retresco. (2023). AI-powered text generation. Retrieved May 19, 2023, from https://www.
retresco.de/.
68. Automated Insights. (2023). Turn your data into clear natural language. Retrieved May 19,
2023, from https://automatedinsights.com/.
69. Carlson, K., Kopalle, P. K., Riddell, A., Rockmore, D., & Vana, P. (2022). Complementing
human effort in online reviews: A deep learning approach to automatic content generation and
review synthesis. International Journal of Research in Marketing, 40(1), 54–74. Retrieved
June 20, 2022, from https://doi.org/10.1016/j.ijresmar.2022.02.004.
70. Neuroflash. (2023). Create high-quality content with AI. Retrieved May 19, 2023, from
https://neuroflash.com/.
71. Acrolinx. (2023). Maximize your content’s performance. Retrieved May 19, 2023, from
https://www.acrolinx.com/product/.
72. OpenAI. (2023). DALL·E: Creating images from text. Retrieved May 19, 2023, from https://
openai.com/blog/dall-­e/.
73. OpenAI. (2023). CLIP: Connecting text and images. Retrieved May 19, 2023, from https://
openai.com/blog/clip/.
74. Botika. (2023). Diversify your fashion models. Retrieved May 22, 2023, from https://
botika.io/.
75. Scheer, U. (2021, October 8). Did Klimt’s picture shock look like this? Frankfurter Allgemeine
Zeitung, 15.
76. Data Science. (2023). Generative adversarial networks. Retrieved June 8, 2023, from https://
datascience.eu/de/maschinelles-­lernen/generative-­adversarische-­netze/.
77. Idomoo. (2023). Personalized video for sales. Retrieved May 19, 2023, from https://www.
idomoo.com/.
78. Wave.Video. (2023). Convert text to video online. Retrieved May 22, 2023, from https://
wave.video/tools/video-­marketing/turn-­text-­into-­video.
79. Synthesia. (2023). Create professional videos in 15 minutes. Retrieved May 22, 2023, from
https://www.synthesia.io/.
80. Newitz, A. (2021). Movie written by algorithm turns out to be hilarious and
intense. Retrieved January 5, 2022, from https://arstechnica.com/gaming/2021/05/
an-­ai-­wrote-­this-­movie-­and-­its-­strangely-­moving/.
81. Hofstadter, D. R. (2018, June 27). The last refuge of human intelligence. Frankfurter
Allgemeine Zeitung, N4.
82. OpenAI. (2023). Jukebox. Retrieved June 8, 2023, from https://openai.com/research/jukebox.
83. Luber, S., & Litzel, N. (2021). What is Jukebox OpenAI? Retrieved November 22, 2021,
from https://www.bigdata-­insider.de/was-­ist-­jukebox-­openai-­a-­1026594/.
84. Deutsche Telekom. (2021). Beethoven X—The AI project.
85. Grolle, J. (2021, October 9). In Beethoven’s brain. Der Spiegel, 106–108.
86. Fischer, B. (2023). When AI writes songs. Retrieved June 26, 2023, from https://www.
faz.net/aktuell/wirtschaft/digitec/kuenstliche-­intelligenz-­und-­musik-­wenn-­die-­ki-­songs-­
schreibt-­18815877.html.
318 4 Marketing, Sales, Customer Service

87. Gentsch, P. (2019). Artificial intelligence for sales, marketing and service, with AI and
bots to an algorithmic business—Concepts, technologies and best practices (2nd ed.).
Springer Gabler.
88. Salesforce. (2023). Introducing Einstein GPT. Retrieved June 7, 2023, from https://www.
salesforce.com/de/products/einstein/overview/.
89. Tomi.AI. (2023). Profitable digital acquisition at scale. Retrieved June 26, 2023, from https://
tomi.ai/.
90. Intellimize. (2023). Dynamically personalize your website to convert more traffic. Retrieved
June 26, 2023, from https://www.intellimize.com/how-­it-­works.
91. EyeQuant. (2023). Powered by AI. Retrieved May 22, 2023, from https://www.eyequant.
com/what/.
92. Define Media. (2023). How does the Artificial Intelligence of CONATIVE actually work?
Retrieved May 22, 2023, from https://definemedia.de/de/ai.
93. Sqreem. (2023). The world’s largest digital behaviour aggregator. Retrieved May 22, 2023,
from https://sqreemtech.com/.
94. Seedtag. (2023). We understand consumers’ interests through Liz, our AI technology.
Retrieved May 22, 2023, from https://www.seedtag.com/.
95. Wise Athena. (2023). Where CPG companies come to maximize their revenue. Retrieved
May 22, 2023, from https://wiseathena.com/.
96. Monopolkommission. (2018). Algorithms and collusion. Retrieved April 26, 2022, from
https://www.monopolkommission.de/index.php/de/beitraege/216-­xxii-­algorithmen.
97. Hennes, D., & Schwalbe, U. (2018, July 13). Cartel formation through learning algorithms.
Frankfurter Allgemeine Zeitung, 18.
98. Insilico Medicine. (2023). Artificial intelligence for every step of pharmaceutical research
and development. Retrieved May 22, 2023, from https://insilico.com/.
99. Olay. (2023). Personalize my routine. Retrieved May 22, 2023, from https://www.olay.com/
skin-­advisor.
100. L’Oréal. (2023). Innovation through industry. Retrieved May 22, 2023, from https://www.
loreal.com/en/beauty-­science-­and-­technology/beauty-­tech/innovating-­through-­industry/.
101. L’Oréal. (2023). Now test products virtually. Retrieved May 22, 2023, from https://www.
lorealparis.ch/de-­ch/virtuell-­testen.
102. BlueYonder. (2021). Future-proofing your retail strategy.
103. Kreutzer, R. T. (2022). Practice-oriented Marketing. Basics, tools, case studies (6th ed.).
Springer Gabler.
104. Scandit. (2023). ShelfView: Keep track of your shelves. Retrieved May 23, 2023, from
https://www.scandit.com/products/shelfview/.
105. BlueYonder. (2023). The No. 1 customer-oriented supply chain platforms. Retrieved May 23,
2023, from https://blueyonder.com/de/de/.
106. Bizerba. (2023). Customers to power—Bizerba presents retail innovations at
EuroShop 2023. Retrieved May 23, 2023, from https://www.bizerba.com/de/de/
press-­and-­news/2023-­04-­24_Cooperation_Captron.
107. Bruhn, M., & Hadwich, K. (Eds.). (2021). Artificial intelligence in service management,
Volume 2: Fields of application—Acceptance—Customer interactions. Springer Gabler.
108. Hotel Tech Report. (2023). The 10 best hotel chatbots in 2023. Retrieved May 23, 2023, from
https://hoteltechreport.com/de/marketing/hotel-­chatbots.
109. Finkenzeller, K. (2021, June 27). Playing it safe. Frankfurter Allgemeine Sunday
Newspaper, 55.
110. Robotise. (2023). Rethink minibar and room service. Retrieved May 23, 2023, from https://
robotise.eu/de/hospitality.
111. BotbarCoffee. (2023). Adam. Retrieved June 12, 2023, from https://botbarcoffee.com/adam/.
5 Health and Safety

Summary One of the most significant changes brought about by the use of AI in the health sector is the transition from disease treatment (reactive care) to care that focuses more strongly on diagnosing diseases at an early stage or even before they occur (proactive care). The increasing performance of AI systems promotes, for example, pattern recognition in digital data and images. This can not only increase the accuracy of diagnoses but also help to find the most promising therapy. Various AI applications are also used for the analysis and enhancement of safety.

5.1 Health Sector

The following sections will highlight areas of application where AI can be used in
the health sector. It is important to note that the use here requires careful validation,
ethical considerations, data protection and the collaboration between AI systems
and medical professionals to ensure responsible and effective applications.

5.1.1 Diagnosis and Medical Imaging

Through digitalization, a wide variety of medical data can be prepared for AI applications. The range extends from patient records and the (partially already digitized) results of examinations to the personal health data generated by wearables and apps. Especially in the medical sector, artificial intelligence can play out many of its advantages:


• Simultaneous access to hundreds of thousands or millions of relevant historical image and text documents (including evaluations) as a basis for the creation of diagnoses
• Access to new insights in real time, which are gained by researchers and/or colleagues in daily use, for example in the course of clinical studies
• Evaluation of the complete medical record of the respective patient, provided the corresponding data is linked and digitally available
• AI-supported questioning of patients, to supplement possibly missing information and/or to verify hypotheses
• Derivation of therapy recommendations, based on a large number of third-party therapy recommendations and the results achieved with them

There are still many challenges to overcome on the way there. Problems remain with the interpretation of doctors' medical notes and with the interdisciplinary transfer of results (also for reasons of data protection). As long as a decentralized healthcare system dominates in countries like Germany and an integrated patient file (with complete digital documentation of diagnoses, therapies and achieved therapy outcomes) exists only as a concept, the evaluation possibilities remain very limited. Moreover, as with all AI systems, a comprehensive and intensive training phase is required to equip AI applications with the necessary data. Such a consolidated database, which brings together anonymized medical records from the most diverse sources, is still not available in Germany, and unfortunately is not foreseeable either.
In diagnostics, AI can play an important role in medical imaging. AI models can analyze large numbers of medical images and recognize patterns and anomalies. This can assist doctors in detecting diseases such as cancer, cardiovascular diseases, neurological disorders and other conditions. For example, AI can identify tumors in MRI and CT scans. Through rapid and precise image analysis, AI can improve diagnostic accuracy and enable earlier treatment.
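To make the pattern-recognition step more concrete, the following minimal Python sketch shows how such an image classifier is typically structured (assuming PyTorch is installed; the network size, the two illustrative classes "normal"/"suspicious" and the 128 x 128 input are invented for illustration and are not taken from any of the systems mentioned in this chapter):

import torch
import torch.nn as nn

class TinyScanClassifier(nn.Module):
    """Minimal CNN that triages a 128x128 grayscale scan slice into two classes."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 2),  # two illustrative classes: normal / suspicious
        )

    def forward(self, x):
        return self.head(self.features(x))

model = TinyScanClassifier()
dummy_slice = torch.randn(1, 1, 128, 128)         # one synthetic scan slice
probs = torch.softmax(model(dummy_slice), dim=1)   # class probabilities
print(probs)

In practice, such a network would of course be trained on large sets of annotated images before its output could support any diagnostic decision.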
Institutions and companies such as the Fraunhofer Society, the Helmholtz
Association of German Research Centers as well as Siemens Healthineers and
Philips are working on digital twins of humans and organs (such as heart, kidney,
liver) to use them for diagnosis and therapy. The health status and the effects of a
therapy can be simulated on these digitally created mirror images. The goal is to try
out the right treatment methods—without making the real human a guinea pig. It is
not yet foreseeable when the corresponding breakthroughs will be achieved. The
vision is to perform an integrated evaluation of all relevant patient data (such as
laboratory values and data from CT and MRI examinations) using this digital twin.
In a digital twin, the entire process from prevention through diagnosis and therapy to aftercare should also be simulated, in order to build optimal patient care on this basis.

Note Box
In the past, scientists gained their knowledge "in vivo", i.e. through observation of or experiments on living organisms. Later, such experiments could be carried out "in vitro", i.e. in a test tube. Now the step to "in silico" has been taken, because such experiments now take place in the computer, with chips based on the chemical element silicon.

The use of digital twins is particularly important in the health sector, because the medical devices used per patient generate large amounts of data. The evaluation of these data has so far reached the limits of what can be processed, especially with regard to the necessary processing speed and the multitude of different data sources under the prevailing cost pressure. At the same time, the need for individualized patient management is increasing. Digital twins can provide important support in diagnosis as well as in the simulation of therapies and the prognosis of disease courses. For this purpose, the digital twin is created as a digital image of the affected person. The digital twin then makes it possible to simulate the personal medical history as well as the individual health status. In this way, for example, the individual drug tolerance of patients can be tested in advance of the actual use and consequently without risk (cf. [1]).
An important goal for patients with multiple sclerosis is individual, innovative disease management, which would help to better cope with the complexity of this chronic, neurological condition. This autoimmune disease is characterized by very heterogeneous disease courses and a high complexity. Consequently, there is also a multitude of possible therapeutic approaches. The use of AI allows a digital twin to be created, which is based on a multitude of disease parameters. These parameters include, among others, clinical and paraclinical data, information about specific characteristics, life circumstances and behaviors of the patient as well as already implemented therapeutic measures (cf. [1]).
AI algorithms found an interesting application during the Corona pandemic in Greece. During this period, countries used different concepts to reduce the spread of the Corona virus by travelers. A particular goal was not to hinder the much-desired tourism through harsh measures. In the summer of 2020, the AI system Eva was used at all Greek borders to limit the influx of asymptomatic travelers who were infected with the virus. In contrast to nationwide protocols, Eva allocated Greece's limited testing resources based on the demographic data of the incoming travelers (age, gender), their country and region of origin, and the test results of previous travelers. Initially, the data was used to divide the travelers into groups with an infection risk as uniform as possible. Within these groups, the need for testing of the respective travelers was then determined by AI. At the same time, Eva identified groups whose prevalence (number of disease cases) appeared uncertain due to missing data. For these groups, additional tests were ordered to reduce the uncertainty (cf. [2]).
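The allocation logic described for Eva resembles a multi-armed bandit problem: every traveler group is an "arm", each test result updates the estimated prevalence of that group, and scarce tests are steered towards groups that are either high-risk or still highly uncertain. A minimal Thompson-sampling sketch in Python may illustrate the idea (the group names, counts and test budget are invented; this is not the actual Eva implementation):

import random

# Per group: [positives observed, tests performed] -- illustrative numbers only
groups = {"A_18-30_countryX": [4, 200], "B_30-60_countryY": [1, 150], "C_unknown": [0, 5]}
TEST_BUDGET = 100

def sample_prevalence(positives, tests):
    # Draw from a Beta posterior; rarely tested groups yield very uncertain draws
    return random.betavariate(1 + positives, 1 + (tests - positives))

allocation = {g: 0 for g in groups}
for _ in range(TEST_BUDGET):
    draws = {g: sample_prevalence(pos, n) for g, (pos, n) in groups.items()}
    allocation[max(draws, key=draws.get)] += 1  # test where the drawn risk is highest

print(allocation)

The sketch shows why under-sampled groups (such as "C_unknown") still receive tests: their posterior is wide, so they occasionally produce high draws, which reduces the uncertainty over time.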
By comparing the performance of Eva with modeled scenarios, it was shown that
Eva was able to identify 1.85 times as many asymptomatic infected travelers as
were identified on average in random surveillance tests. During the main travel
season, the values were even two to four times higher. Compared to testing methods
that only used epidemiological measures, Eva identified 1.25 to 1.45 times as many
asymptomatic infected travelers. This latter result is at least partly due to the fact
that epidemiological measures only had a limited predictive value for the actual
prevalence of the Corona virus among asymptomatic travelers.
The authors placed particular emphasis on consistently coordinating their approach with lawyers, epidemiologists, and politicians. At the same time, as little personal data as possible was to be collected; meaningful information about the respective profession, for example, was deliberately omitted. The research team was also committed to high transparency about its own approach. In sum, this study by Bastani et al. [2] presents an interesting field of application for reinforcement learning based on real-time data for the protection of public health, and one with a high degree of data economy.
An important contribution in the health sector can come from the development of a Brain Computer Interface (BCI), also referred to as a Brain Machine Interface (BMI). At its core, it is a human-to-machine interface that allows a direct connection between the brain and a computer without activating the peripheral nervous system. For this, the electrical brain activities are recorded. This can be done non-invasively via EEG (electroencephalography), for which the subject has to wear a cap with a multitude of cables, which greatly complicates the use. Optimized headbands could possibly help here in the future. fMRI (functional magnetic resonance imaging) also records brain activities non-invasively; for this, the person has to be moved into a corresponding device. The so-called invasive methods work without this high mechanical effort: here, the test subjects have electrodes implanted to measure brain currents directly in the brain. However, this requires an intervention in the body.
The basis of these developments is the realization that even the mere imagination of a certain action triggers measurable changes in electrical brain activity. A Brain Computer Interface can thus determine which changes in brain activity correlate with which type of imagination (see further [3]). The insights gained about these relationships can be used as control signals for various applications. To date, this communication only works in one direction ("one-way use"): humans can communicate something to the machine through their thoughts, but the computer cannot yet return corresponding thoughts directly into the brain ("two-way use"). So far, humans (still) rely on their proven sensory organs to recognize reactions of the system. Whether this will always be the case, and whether we even want direct feedback into the brain, remains an open question.
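A strongly simplified Python sketch can illustrate how such a one-way control signal might be derived from recorded brain activity: a frequency-band feature is extracted from an EEG window and mapped to a command (sampling rate, frequency band, threshold and the simulated signal are purely illustrative assumptions, not part of any concrete BCI product):

import numpy as np

FS = 250  # assumed EEG sampling rate in Hz

def band_power(window, low, high):
    """Mean spectral power of one EEG channel in the given frequency band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def decode_command(window):
    # Imagined movement typically attenuates the mu rhythm (8-12 Hz) over the motor cortex
    mu = band_power(window, 8, 12)
    return "MOVE_CURSOR" if mu < 50.0 else "IDLE"   # threshold is purely illustrative

simulated_window = np.random.randn(FS * 2)  # two seconds of synthetic single-channel EEG
print(decode_command(simulated_window))

Real BCIs replace the single threshold with trained classifiers over many channels, which is one reason why processing is still slow and error-prone, as noted above.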
Current developments suggest that at least “one-way” Brain Computer
Interfaces could conquer the market in a few years. The origin of these applications
was, among other things, in the possibility of giving people with physical impair-
ments access to interaction through computers. The control by thoughts replaces
mouse, keyboard and touchscreen, which require a physical movement. However,
the processing is still slow and the error rate is very high. But everyday solutions are
already being intensively researched (see [4]).
Elon Musk plans to start the first trials with Brain Computer Interfaces on humans with his company Neuralink in 2023. The prototypes of Neuralink are
to be further developed into safe devices, which can be used in any brain area and
are easily upgradeable. Musk even intends to have a BCI implanted in himself. The
US Food and Drug Administration has now approved clinical trials for brain
implants in humans. Neuralink has also expanded its surgery department to perform
precise brain operations with thin, electrode-laden wires. Whether the BCI implantations will actually begin in 2023, however, is uncertain (see [5]).
The vision of Neuralink and similar companies is to measure the activity of
certain brain areas to control computers. With BCIs, the blind could see again and
paraplegics could learn to walk again. In experiments with monkeys, it has already
been shown that they can control a cursor with their thoughts and select keys. Plans
for a visual prosthesis have also been presented, which is supposed to work through
electrical stimulation. The team of Neuralink wants to enable people who were born
blind to see. It remains to be seen whether Neuralink can actually implement its
ambitious plans this time. The advances in the field of BCI technology are overall
promising and could in the future enable groundbreaking applications in the field of
brain-computer communication (see [6]).

5.1.2 Precision Medicine and Personalized Therapy

Another area of AI application in the health sector is precision medicine. AI models can analyze large amounts of patient data, including genome sequencing results,
clinical course data, laboratory results, and biomarker information. AI can thus help
in the identification of genetic risk factors for certain diseases and support patients
in taking preventive measures. Based on these data, AI algorithms can also make
predictions about the individual response of a patient to certain drugs or treatments.
This allows doctors to develop personalized therapy plans and improve the effec-
tiveness of treatments. Such personalized therapy approaches take into account a
variety of factors. These include genetic information, medical history, symptoms,
and response to previous treatments. Through continuous monitoring and adjust-
ment of the therapy plan, the AI-based personalized medicine can contribute to
achieving better treatment outcomes and improving patient care.
Artificial intelligence is therefore increasingly used for mapping human DNA in order to enable such precision medicine. Only improved insights into a person's genetic makeup enable personalized health care for each person, oriented towards their respective biological predisposition. Companies in the pharmaceutical industry aim to individualize medicine in this way. Selected types of precision medicine have been around for some time; these include therapies for treating allergies and blood transfusions. The now seemingly achievable goal is to develop therapies individually for each patient. Various AI technologies are used to evaluate large amounts of data, which can be obtained through wearables, sensors (e.g. for recording speech) and other networked devices. In this way, patterns of mutations and further correlations of diseases are to be recognized. For example, the genetic characteristics of a tumor should be reliably identified. However, such an individualized therapy would not only have to be quickly available, but also inexpensive to produce.

Deep Genomics provides a convincing example of personalized treatments. The company uses large amounts of data about the composition of human cells. This allows AI agents to predict the consequences of changes in the genome. The future of drug development is therefore largely based on AI-supported discoveries (see [7]).
The company Flow Health aims to improve the outcomes of therapies through the targeted use of artificial intelligence. To this end, an AI platform is used to enable
proactive, personalized healthcare. Today, about 70% of medical decisions are based
on laboratory results. However, the improper use of lab tests is a major problem.
Flow Health wants to contribute to ensuring that there is neither overuse (ordered, but
not indicated) nor underuse (indicated, but not ordered) in laboratory examinations.
This is done by a personalized interpretation that puts the laboratory results in context.
Laboratory tests often find abnormal test results, especially in older adults. What is the reason for this? The normal values defined as a benchmark are based on average values of the total population. What is defined as "normal" there cannot and must not serve as a standard for every individual. Therefore, it is important to put
the laboratory results in context with the entire medical history. To this end, the
entire patient’s medical record as well as the identified symptoms, diagnoses and
medication lists are evaluated by AI agents. Only such a holistic view of the patient,
which includes the relevant context, allows for more precise insights—and a selec-
tion of the relevant diagnostic tests. In this sense, Flow Health sees itself as a diag-
nostic partner in the development of therapy concepts (see [8]).
The integration of artificial intelligence into personalized therapy plans also
offers the opportunity to support complex medical decisions. AI models can assist
medical professionals in selecting the optimal treatment methods, dosing medica-
tions, and assessing risk factors. By automating certain tasks and using algorithms,
time and resource savings can be achieved. This allows medical professionals to
focus more on patient care and ensure individual attention. In sum, this can potentially increase the efficiency of healthcare.
However, it is important to note that the integration of AI into personalized
therapy plans also raises ethical and legal questions. Data protection, transpar-
ency of algorithms, and ensuring accountability remain important aspects that must
be considered in the development and implementation of these systems. Nevertheless,
the advances in AI-supported personalized medicine show promising results and
could represent a paradigm shift in patient care.

5.1.3 Use of Robots in the Medical Sector

Precision medicine can also be supported by robots in the medical sector. The fol-
lowing areas of application can be distinguished:

• Robots for surgical assistance or for the complete performance of operations

Robotics in healthcare, with a focus on surgical assistance, has received the most
attention in recent years. One of the first companies to establish itself in the market
was Intuitive with the da Vinci operating system. The pioneering work done here
over two decades has led to the development of a minimally invasive, robot-assisted
system portfolio. The da Vinci operating system was one of the first minimally invasive surgery systems to receive the CE mark and was also approved by the US FDA for certain operations. Worldwide, many millions of da Vinci procedures have been performed and many thousands of surgeons have been trained on this system (see [9]).
The following video shows what an AI-supported operation can look like—on a
banana: https://www.youtube.com/watch?v=5jn8fjLXTiI.

• Robots for direct patient care

Patients suffering from neuromuscular diseases or injuries (such as after a stroke) can benefit from robot-assisted sensorimotor care. Here, exoskeletons can also be
used, which support the patient’s movements in the form of mechanical suits. This
includes “intelligent” prostheses, for example as a replacement for lost limbs (see in
depth Sect. 5.1.7).

• Robots for indirect patient care

Disinfection robots can be used to facilitate the handling of patients with contagious diseases. This category also includes delivery robots that autonomously supply patients with medication. An example is the robot Jeeves presented in Sect. 4.11.

• Robots as virtual caregivers

From interacting with patients to admitting them to the appropriate departments, virtual nursing assistants can provide support in healthcare. Since virtual caregivers are available 24/7, they can continuously monitor the condition of patients and answer questions at any time. Applications of virtual nursing assistants often still focus on
the regular communication between patients and service providers. A connection
with health apps offers the possibility to connect the phases of illness with the—
hopefully dominant—phases of health in a closed cycle. Thus, the virtual caregiver
becomes a virtual health agent, who points out necessary wellness checks, moni-
tors weight and physical activities, and possibly also suggests meals and gives
impulses when it’s time to go to sleep.

• Robots for home care

This category includes robots that support elderly or chronically ill people in the
home environment.
Robots in the home environment already provide company to elderly people
living alone today. As a 77-year-old test user of a robot put it so nicely:
…I thought that such a robot could be a nice change. On the first night, I was really uneasy.
I know that computers can crash. So what if the robot also crashes and goes crazy?… But
even for me, the robot means more quality of life. In no time at all, I felt responsible. And
it’s something special when someone greets you with ‘Welcome back, Dietlind. Nice to see
you again’ when you come in the apartment door. ([10], p. 119)

It is to be expected that artificial intelligence will combine its strength in evaluating large amounts of data with the use of robots. Thus, robots can analyze data from
the medical pre-examination to subsequently guide a surgeon’s instruments during
surgery. Through artificial intelligence, robots can additionally evaluate data from
previous surgeries to develop new surgical techniques. These possibilities will con-
tribute to reducing surgical inefficiencies and thus poor results from surgeries. In
addition, insights gained here can be linked with a patient’s postoperative and long-
term health outcomes. This again requires a closed patient data cycle.

5.1.4 Patient Monitoring and Early Detection of Complications

AI can monitor patient data in real time and support doctors in the early detection of complications. By continuously analyzing vital data such as heart rate,
blood pressure, oxygen saturation, and other parameters, artificial intelligence can
detect anomalies and trigger alarms to alert medical staff to potential risks. This
allows for early intervention and improved patient care. AI-supported monitoring
systems can be used in intensive care units to assist doctors in assessing the patient’s
condition and in decision-making.
Real-time monitoring with AI offers the advantage that medical professionals are immediately informed about changes in the patient's condition. This allows them to act quickly, detect potential complications early, and intervene proactively in order to reduce the risk of serious health problems.
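A minimal sketch can illustrate the underlying principle of such AI-supported monitoring: a new vital-sign reading is compared with the recent baseline and an alarm is raised if it deviates strongly (window size, threshold and the example heart-rate stream are invented for illustration; real systems combine many vital parameters and far more sophisticated models):

from collections import deque

class VitalSignMonitor:
    """Flags heart-rate readings that deviate strongly from the recent baseline."""
    def __init__(self, window=60, threshold=3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def check(self, value):
        alarm = False
        if len(self.readings) >= 10:
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = var ** 0.5 or 1.0
            alarm = abs(value - mean) / std > self.threshold
        self.readings.append(value)
        return alarm

monitor = VitalSignMonitor()
stream = [72, 74, 71, 73, 75, 72, 70, 74, 73, 72, 71, 73, 118]  # sudden spike at the end
print([monitor.check(v) for v in stream])

Only the final reading triggers an alarm, mirroring how such systems stay quiet as long as values remain within the patient's individual baseline.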
By analyzing patient data, including medical records, lab results, imaging proce-
dures, and other clinical information, artificial intelligence can identify patterns and cor-
relations that could indicate the occurrence of complications. AI algorithms can
process complex data and recognize patterns that are difficult for the human eye to cap-
ture. This allows doctors to identify risk factors early and plan targeted interventions.
In addition, so-called bio-chips offer the possibility to detect diseases from can-
cer to smallpox before the patient even develops symptoms. These chips consist of
a series of molecular sensors on the chip surface that can analyze biological ele-
ments and chemicals.

5.1.5 Support in Drug Research

AI systems can significantly reduce the time and cost of developing new drugs. The development of a new drug often takes more than 10 years. In the phase of basic research and preclinical studies, several thousand substances are often examined, of which in the end perhaps one or two receive approval from the drug authorities. The average costs for the development of a drug up to market approval are estimated at 1 to 3 billion €. This high expenditure of time and money means that companies and their scientists can focus on only a few developments. As a result, research capacities are lacking for rare diseases, which promise no return on investment.
In this time-consuming and costly process of drug research and development,
artificial intelligence can make a significant contribution. By analyzing extensive
biomedical databases, scientific literature, clinical trials, and genome data, AI can
help predict drug-target interactions. AI models can identify potential candidates for
drug tests and assist in optimizing molecule designs. This accelerates the process
of identifying and developing new drugs and contributes to personalized medicine.
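The prediction of drug-target interactions described here is, at its core, a supervised learning task: molecules with known activity are used to train a model that then ranks new candidates. The following Python sketch illustrates this with scikit-learn (the random bit vectors stand in for molecular fingerprints and the labels are invented; real pipelines use curated assay data and chemistry-specific features):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Illustrative stand-in data: 2048-bit molecular fingerprints with known activity labels
X_known = rng.integers(0, 2, size=(500, 2048))
y_known = rng.integers(0, 2, size=500)          # 1 = binds the target, 0 = does not

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_known, y_known)

# Rank a small virtual library of new candidate molecules by predicted binding probability
X_candidates = rng.integers(0, 2, size=(10, 2048))
scores = model.predict_proba(X_candidates)[:, 1]
ranking = np.argsort(scores)[::-1]
print("Best candidates (indices):", ranking[:3], "scores:", scores[ranking[:3]])

The value of such screening lies in prioritization: only the highest-ranked candidates need to be synthesized and tested in the laboratory.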
In addition, AI also supports the optimization of drug development and design.
Through simulation-based approaches and machine learning, AI systems can pre-
dict possible impacts of chemical modifications and accelerate the development of
new compounds. This contributes to the efficiency of drug development and allows
researchers to make faster progress.
The conduct of clinical trials and the validation of new drugs are a crucial step in pharmaceutical research. For this purpose, AI systems can analyze large
amounts of patient data and monitor them in real time. This allows for a more accu-
rate assessment of the effectiveness and safety of drugs during clinical trials. By
continuously monitoring data, AI algorithms can detect potential side effects or
adverse reactions early on and notify researchers and medical professionals.
Another area where AI contributes is the identification of patient populations,
who are most likely to benefit from a particular drug. By analyzing genetic informa-
tion, disease characteristics, and clinical data, AI can develop personalized
approaches to drug treatment. This allows for a more targeted use of drugs and can
lead to better treatment outcomes.
Additionally, AI can assist in the analysis and prediction of drug interactions. By integrating extensive drug data, AI systems can identify potential interactions between drugs and minimize the risk of side effects. The use of
AI in drug research and development has the potential to accelerate processes, increase
efficiency, and ultimately improve the availability of effective drugs for patients.
The use of (digital) virtual twins in the life science industry also makes it possible to simulate the effects of new drugs and thus understand them better. Medical interventions and their effects on the entire human body can be simulated by a digital twin. To create such a twin of oneself, a full-body photo must be taken with a special camera, which is then converted, computer-aided, into a semi-transparent model. Various diseases can be represented by colors and patterns. Organs can be printed three-dimensionally using a 3D printer. The replicas can be equipped with sensors and other technology and can then reflect the interactions in the digital twin (see [11]).
The company Atomwise uses artificial intelligence for the development of
drugs. For this purpose, the company has developed an AI-based Discovery Engine
that combines the power of Convolutional Neural Networks with extensive chemi-
cal libraries to discover new low molecular weight drugs. For molecular recogni-
tion, the company relies on a technology that corresponds to image recognition. The
AtomNet platform is equipped with AI tools to achieve easy scaling and high speed
of drug development processes. As a result, Atomwise has been able to unlock more previously untreatable target molecules than comparable AI platforms for drug discovery. This
Discovery Engine has already been validated at over 600 unique disease targets in
775 collaborations with more than 250 partners worldwide (see [12]).
The goal of Insilico Medicine is to accelerate the discovery and development of drugs. For this purpose, the company has been designed as an AI-supported end-to-end pharma technology company. The proprietary Pharma.AI platform has the potential to develop novel drugs faster, while simultaneously reducing costs and increasing the likelihood of success for new therapeutic concepts. To this end, the AI develops novel molecules. In 2021, the first antifibrotic drug discovered by AI was used in humans for the first time (see [13]).

5.1.6 Improvement of Health Systems and Medical Administration

AI can also contribute to improving the efficiency and quality of health systems.
Through the automation of routine tasks such as patient management, appoint-
ment scheduling, and medical coding, AI can relieve staff and save time. In addi-
tion, AI systems can also support the handling of further administrative tasks in healthcare. Chatbots and virtual assistants can simplify communication between service providers and create written documentation if needed. Therapy plans, medication orders, etc. can also be supported by AI systems. Perhaps, relieved of such routine tasks, doctors and nursing staff in healthcare can once again focus on the task where they are (still?) indispensable: the appreciative and empathetic conversation with the patient.

Food for Thought


Despite all the euphoria about the possibilities that AI applications may offer in healthcare in the future, the relevance of the doctor-patient conversation should not be neglected. What great successes have already been achieved by administering placebo preparations, simply because the compassionate words of the doctor and/or the sheer belief in healing brought about the effect? With the advance of artificial intelligence, pure expertise will lose importance. The "good" doctor of tomorrow will be characterized more by empathy and a high level of communication skills.
At the same time, there is the danger of overcare due to overdiagnosis and overtreatment, because for all kinds of symptoms that one would have successfully "treated" with a good night's sleep in the past, the ever-accessible mobile AI doctor is now consulted, delivering alarming information about everything it could possibly be!

5.1.7 Bio-Hack and Self-Optimization

Are we at the beginning of a "transhuman" age, in which hacking biology and "enhancing" humans are gaining popularity and availability? The range of do-it-yourself bio-hacking or body-hacking extends from simple diagnostics to neural implants and raises legal and societal questions about ethics and humanity. These are developments of Human Augmentation or Human Enhancement. This AI application field is about the extension and enhancement of human performance. At its core, as strange as it may sound, it is about the "optimization of
humans” through artificial systems. This can occur in sick people through medical
interventions with active substances, aids, and body parts. Healthy people can also
be “optimized” through appropriate applications and the integration and/or net-
working with technologies. This is where the topic of Transhumanism comes into
play—the continuation of human development through the use of scientific and
technical means. On the one hand, this research is based on the tradition of human-
ism. On the other hand, it attempts to overcome this state of the natural and to
advance the artificial.
In Bio-Hacking or Body-Hacking, the idea of IT hacks is transferred to biological systems and especially to the human body (usually by the person affected themselves), but also to the entire biosphere. IT hacking is understood as unauthorized intrusion into a computer or network. The people involved in such hacking activities are referred to as hackers. These hackers can change system or security features to achieve a goal that deviates from the original purpose of the system. Accordingly, bio-hacking aims at body modifications. Here, people experiment with implants and other methods that intervene in a person's bodily processes. An entry point for this can be so-called self-medical hacks, e.g. independently conducted DNA tests. Based on a variety of data, different degrees of physical self-optimization can be undertaken.

Note Box
In Bio-Hacking, individuals and organizations utilize the achievements of
biology and environmental sciences for their own purposes. Bio-Hackers try
to optimize themselves and thereby achieve a better "I", an "I 2.0".
Bio-hacking can also be performed by third parties. This makes bio-
hacking increasingly similar to IT hacking. For example, British scientists
have managed to extract and reveal secret numbers (e.g., for a credit card)
from brain waves. Therefore, there is a risk that data extracted from brain
waves will move from the clinical to the commercial sector and be mis-
used there.

Three major fields of application for bio-hacking can be distinguished.


• Bio-hacking of body and mind through nutrition and behavior
Self-optimization in the area of one’s own body and mind through changes in nutri-
tion and behavior represents the best-known aspect of bio-hacking. For this, one
must understand the organism at the biochemical level and analyze the underlying
processes. This area has a strong connection to the Quantified Self. Sleep and fit-
ness trackers and special health apps are used for this.
The company Nuralogix [14] allows users to determine health indicators with their own smartphone via the Anura app, which relies on an affective AI technology. The application Anura can determine the following values:

• Heart rate
• Irregular heartbeats
• Respiratory rate
• Blood pressure
• Heart rate variability
• Stress level
• Risks for cardiovascular diseases

To determine these values via an app, Anura uses transdermal optical imaging, in which the blood flow in the face is observed. The evaluation of the data obtained in this way is AI-supported.
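Transdermal optical capture is closely related to remote photoplethysmography: tiny periodic color changes in the facial skin follow the pulse. The following Python sketch illustrates the basic idea of estimating the heart rate from the mean green-channel value of a face region per video frame (frame rate, frequency band and the simulated signal are assumptions for illustration and do not reflect Nuralogix's actual implementation):

import numpy as np

FPS = 30  # assumed camera frame rate

def estimate_heart_rate(green_means, fps=FPS):
    """Estimate the pulse (bpm) from the mean green-channel value of the face per frame."""
    signal = np.asarray(green_means, dtype=float)
    signal -= signal.mean()                      # remove the constant skin tone
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)       # 42-240 bpm: plausible pulse range
    return 60.0 * freqs[band][np.argmax(power[band])]

# Simulated 10-second recording with a 1.2 Hz (72 bpm) pulse plus noise
t = np.arange(0, 10, 1.0 / FPS)
fake_signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.1, t.size)
print(round(estimate_heart_rate(fake_signal)), "bpm")

Estimating blood pressure or cardiovascular risks from the same facial signal requires considerably more elaborate, AI-trained models than this frequency analysis.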
Dietary supplements and special medications are also consumed to optimize one's own organism. Nootropics or adaptogens are used to increase concentration
and better manage stress. A particularly curious example of self-optimization is the
administration of eye drops to help humans achieve night vision. For this pur-
pose, the test subjects were administered a substance called Chlorin e6 (Ce6).
Subsequent tests actually showed that the people treated in this way could perceive
objects in the dark much better than comparison persons. In everyday life, however,
the subjects have to protect their eyes with black contact lenses due to the increased
light perception (cf. [15]). Imitation not recommended!
• Bio-hacking of one's own DNA
DNA hackers dedicate themselves to the investigation of their own genetic
information with the aim of looking for mutations in their genetic material. They
deal intensively with the topic of epigenetics and use DNA tests to decipher their
ancestry. In doing so, they take advantage of the latest scientific findings. An impor-
tant concept for this is the CRISPR technology (Clustered Regularly Interspaced
Short Palindromic Repeats). This allows genes to be changed and edited. Here, a defense system of bacteria against viruses serves as a versatile tool for the targeted modification of genes.

Food for Thought


Do we really want to achieve comprehensive transparency about every single individual (including their own genome), also about ourselves, in order to be able to live "optimally"? Should such an analysis possibly even be carried out prenatally, with various decision alternatives?
Or do we want to preserve some of life's uncertainty, its unpredictability, its incalculability and its surprises, good and bad? Perhaps it is precisely this tension that makes life worth living, since we do not and cannot know everything. Can we even lead a real life if we know early on what will cause our death, when, and how? Would we, out of sheer caution, become blind to the positive things in life, just because we have a 94% probability of dying from kidney failure at the age of 62?
It becomes clear: both over-information and under-information will have negative effects. Therefore, these questions need to be raised and answered early on.

Fictional Reading Tip


If you want to see in a well-written fictional novel which developments are foreseeable here, the book "Helix" by Marc Elsberg is recommended.

• Bio-hacking of body and mind through the use of technology


For this, bio-hackers use modern devices and implants to better control their
environment, optimize their organism, and make their life easier. Bio-hackers also
have implants and RFID chips inserted to store information. In addition, neuro-
feedback devices and devices for stimulating brain activity are used.
This form of bio-hacking also includes the development of exoskeletons (also
external skeletons). “Exo” stands for “outside” and “skeletos” for a “dried-up
body”. An exoskeleton refers to the external support structure for an organism.
These are AI-controlled robots worn on the body that can support or enhance the
wearer’s movements. Various types of motors are used in these externally worn
auxiliary skeletons.
Exoskeletons are used in medicine to enable paraplegics or people with mobility
restrictions after a stroke to walk. The Walk-Again-Project is about the develop-
ment of an exoskeleton robot, which has a brain-machine interface. This allows
paraplegic patients to receive tactile sensory feedback. As early as 2014, the kick-off to the World Cup was performed by a paraplegic person with the help of this exoskeleton robot (see [16]). This example shows how neuroscience, AI, and robotics can work together to improve people's quality of life.
In addition, exoskeletons are also used in production, in maintenance and in
the logistics sector to facilitate overhead work or the lifting of heavy objects. These
systems are already being used today by companies such as Airbus, Toyota and
Volkswagen. Corresponding solutions are offered by German manufacturers German
Bionic and Ottobock. In this market, a long-term increase in demand is expected.
After all, such exoskeletons enable people to work longer and in a more health-
preserving manner. In the future, these exoskeletons are to be controlled by thoughts.
For this purpose, implants or receivers for brain signals are used on the cere-
bral cortex.
An interesting manifestation of bio-hacking is in the form of the cyborg—a
fusion of a human (or also an animal) with a machine. This term is derived from
Cybernetic Organism (see [17]). Injected chips already replace identification
papers, boarding passes and keys in these “optimized” humans, provided the inter-
action partners have installed a corresponding radio interface. Access control in the
office and computer login can also be accelerated via an injected chip. In combina-
tion with data from fitness trackers, blood sugar levels and other biometric data,
heart attack warnings can be determined and push messages can be sent when it is
time to take a break during a long car journey or during demanding work.
Futurologists assume that injected chips can revolutionize everyday life, especially for highly active managers, in a similar way as the smartphone has done so far. Corresponding chips are offered by VivoKey [18]. The disadvantage is that no
dominant chip system has yet emerged, so that the cyborg pioneers sometimes
have two, three or more injected chips to take advantage of the possibilities that
already exist today. In my opinion, human intelligence is required to check how far
one should actually use the technological possibilities.

Note Box
Small food for thought for all cyborg fans: during an MRI examination, the strong magnetic field used can erase all data on the chip. You should therefore have a recovery set and an implant ID card that certifies that the chip is safe for such an examination. If the chip is made of precious metal, it will also heat up to a greater or lesser extent…

Fictional Reading Tip


Where such attempts at self-optimization can lead can be read in another very good fictional novel by Marc Elsberg: "Zero – They know what you're doing".

In the field of bio-hacking and self-optimization, we will encounter many fascinating and shocking developments in the future.

Think-Box: Questions You Should Ask Yourself!


• How can Artificial Intelligence contribute to improving our patient care
and safety?
• How can we use AI to support our diagnostic and therapeutic decisions
and increase accuracy?
• How can we utilize the potential of AI for predicting and preventing
diseases?
• What kind of training or educational programs does our staff need to
effectively use and understand AI technologies?
• How could AI-supported telemedicine services expand our range of ser-
vices and improve the accessibility of patient care?
• What impact could the introduction of AI have on the workload and job
satisfaction of our employees?
• How can we integrate AI to automate administrative processes and
increase the efficiency of our services?
• By which metrics can we measure and evaluate the success of introducing
AI into our company?
• What measures should we take to ensure data security and data protec-
tion when integrating AI into our system?
• What ethical considerations should we take into account when integrat-
ing AI into our practice?

5.2 Summary

• In healthcare, there are diverse fields of application for AI.
• The comprehensive evaluation of health data is of particular importance.
• The evaluations can be based on anonymized data sets, for example, to increase the quality of diagnosis.
• For personalized diagnosis and therapy, a networking of personal health data is indispensable.
• This raises important fields of action in data protection.
• AI systems can also support operations.
• The use of artificial intelligence can lead to a relief from routine tasks in healthcare, so that doctors and nursing staff can spend more time with patients.

5.3 Security Sector—Social Scoring

5.3.1 AI Use in the Security Sector—Face and Object Recognition

The integration of AI into security services makes it possible to process and analyze huge amounts of data in order to improve the efficiency of security measures. Whether it is about detecting cyber threats, improving physical security measures or collecting and analyzing intelligence information: AI has taken on a crucial role in the security industry.
A main application area for AI in the security sector is cyber security. AI is
used here, for example, to detect anomalies in network data and identify potential
threats in real time. This allows for faster and more accurate responses to cyber
attacks. Different forms of malware can be detected early to avoid damage.
For this purpose, Deep Instinct has developed a special deep learning framework for cyber security, in which the focus is on prevention: ransomware and other malware are to be stopped before damage occurs. However, protection against existing cyber security threats is not the only concern; it is also necessary to detect new, previously unknown threats early. To achieve this goal, a variety of "suspicious" files are collected and subjected to various tests in order to determine whether they are to be classified as "malicious" or "benign". These data are then fed into the AI engine to make predictions about future cases. After all, the goal is to discover unknown and zero-day malware before it penetrates the IT environment (cf. [19]).
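The "malicious versus benign" classification described here is, in essence, a supervised learning problem over file features. A minimal Python sketch with scikit-learn may illustrate the principle (the four static features and the toy labeling rule are invented; Deep Instinct's actual framework works on raw files with deep learning):

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Illustrative static file features: [size_kb, entropy, imported_apis, packed_flag]
X = rng.random((1000, 4)) * [2000, 8, 300, 1]
y = (X[:, 1] > 6.5).astype(int)  # toy rule: very high entropy counts as "malicious"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("Accuracy on held-out files:", round(clf.score(X_test, y_test), 2))

Detecting genuinely new, zero-day malware is harder than this sketch suggests, because by definition such samples do not resemble the labeled training data closely.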
AI is also used in surveillance technology to improve the detection of threats in public spaces. Technologies for face and object recognition make it possible to quickly identify suspicious activities or persons. These technologies can identify and verify faces in existing images, in videos or in real time. This is intended to increase the security and protection of persons and property in various environments, from public places to high-security facilities. Criminal activities are to be detected, tracked and ideally even prevented. Despite their promising applications,
however, there are numerous ethical and legal concerns, especially with regard to
data protection and privacy. Moreover, especially with facial recognition systems,
there is a high risk of misuse.
Therefore, facial recognition is now viewed very critically in many areas. One
reason for this is that facial recognition is used in some countries to monitor their
own population or political opponents. Another reason is that facial recognition
often still delivers questionable results, often at the expense of ethnic minorities.
Against this backdrop, Facebook stopped using facial recognition in 2021. Other companies, such as Amazon, IBM and Microsoft, have also scaled back their activities with facial recognition systems.
One company whose activities have been and continue to be critically analyzed
is Clearview AI. It has set itself the task of providing the most comprehensive solu-
tion for image search worldwide. To this end, the US company has built a database
of more than 30 billion photos of faces. These photos were obtained from public
online sources. These include news media, wanted websites, social media (includ-
ing Facebook, YouTube) and other publicly accessible sources. Algorithms convert
all images into mathematical formulas or vectors. These are based on facial geom-
etry, for example, how far apart a person’s eyes are or how large the distance between
the nose and mouth is. Based on these data, Clearview creates a comprehensive
biometric database, which groups all photos with similar vectors. If there is a suf-
ficiently large match of these values between two photos, it is probably the same
person (see [20]).
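The matching step can be illustrated with a small Python sketch: each photo is represented by an embedding vector, and photos whose vectors are sufficiently similar are treated as showing the same person (the random 128-dimensional vectors and the threshold are stand-ins for illustration; in practice the embeddings come from a trained face-recognition model):

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative stand-in: embeddings would normally come from a face-recognition model
rng = np.random.default_rng(1)
database = {f"photo_{i}": rng.normal(size=128) for i in range(1000)}

def find_matches(query_vector, database, threshold=0.8):
    """Return all stored photos whose embedding is close enough to the query."""
    hits = [(name, cosine_similarity(query_vector, vec)) for name, vec in database.items()]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])

query = database["photo_42"] + rng.normal(scale=0.05, size=128)  # same face, new photo
print(find_matches(query, database)[:3])

The sketch also hints at a weakness discussed below: the larger the database and the looser the threshold, the more near-duplicates and thus potential false matches appear.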
By using this web-based facial recognition intelligence platform, criminals, of
whom only photos are available, can be located. These photos can come from sur-
veillance cameras and show unidentified suspects. When a user uploads a photo of
a face to the Clearview system, the system converts this face into a vector. Then all
photos stored in the vicinity of this vector are displayed. In addition, the links to the
websites from which these images originate are mentioned. Law enforcement agen-
cies and national organizations are supported by this service in identifying victims
and perpetrators in shoplifting, identity theft, credit card fraud, murder, and sexual
exploitation of children. More than 2000 law enforcement agencies have already
used Clearview AI in the USA—and others around the world. Clearview has also
licensed the app for other security industry companies (see [20]).
The computer code used by Clearview uses a programming language that is
compatible with augmented reality glasses. Users of such glasses would therefore
be able to identify anyone they encounter—wherever they are! This could identify
activists at demonstrations, competitors in unfavorable situations, or attractive
strangers on the bus. Then the names, place of residence, activities, and acquain-
tances of these people would be revealed to the users of the AR glasses (see [21]).
Not only law enforcement agencies could theoretically have this access, but anyone
who has access to the Clearview app…
And how innocently does the mission of Clearview read [20]?
Clearview AI’s mission is to deliver the most comprehensive image-search solutions in the
world. We support law enforcement and national organizations in their mission to identify
victims and perpetrators in order to safeguard their communities and secure industry and
commerce.

The question of who is the victim and who is the perpetrator is left to the respective
authorities when using this system. Therefore, some questions arise with Clearview:

• Is it allowed for a private company to “collect” the photos available online and
thus build the largest database of face photos? Certainly not—but this is exactly
what makes Clearview unique! Facebook and other social media websites explic-
itly prohibit the use of images shown there in this way. This is referred to as
“scraping”, which stands for “scraping together” and is prohibited.
• According to the company, Clearview finds matches in up to 75% of search que-
ries. Should it remain unclear how often “false matches” are delivered because
no independent party has tested the system yet? After all, the larger a database is,
the more potential doppelgangers can be found. And with an increasing number
of doppelgangers, the number of false matches inevitably increases!
• Is it appropriate for law enforcement agencies to use software that has not yet
been reviewed by independent experts?
• What happens when this technology falls into the hands of states that do not
stand for democracy and freedom? Or into the hands of organizations that want
to harm democracy in general or certain companies or their leaders?
• Do we want to allow such mass surveillance in the hands of a private company?

Clearview has already been fined millions by authorities for its illegal photo
collection.

Food for Thought


It is questionable whether we can still reap the benefits of facial recognition technology while at the same time preventing its abuse for comprehensive surveillance and manipulation.
One way to do this would be a comprehensive ban.
But which countries and which organizations would adhere to such a ban?
And who could monitor and enforce it—with penalties?

Other companies are also engaged in various forms of image recognition. On the one hand, large datasets with photos for training AI systems are available for download. On the other hand, many other companies focus on various aspects of this field. These include, among others, the following companies:

• aws.amazon.com/de/rekognition: Focus “Image recognition and video analysis”


• berify.com: Focus “Reverse image search for finding stolen images and videos”
• images.google.com: Focus "Image search in Google"
• pimeyes.com: Focus “Face search engine and reverse image search”

• tineye.com: Focus “Reverse image search”


• yandex.com/images: Focus “Identification of objects”

To avoid misuse of our own photos, at least the images we upload online our-
selves can be made unreadable for AI. Such a solution is offered by the software
from lowkey.umiacs.umd.edu. LowKey has developed a perturbation filter for images to hinder their use in face recognition. The web tool published by LowKey
significantly reduces the accuracy of Amazon Rekognition and Microsoft Azure
Face Recognition (see [22]). However, it should be noted: This software aimed at
making face recognition more difficult only works with photos that we upload
ourselves.
Artificial intelligence is also being used in another area of security. This is
Predictive Policing or Predictive Analytics for police work. The use of algorithms
in expert systems is intended to predict when and where which type of crime is
likely to occur (see Fig. 5.1).
As part of Predictive Policing, data from past crimes are evaluated because crim-
inals often follow predictable patterns in their actions (see [23]). These patterns relate to the crime scene, the time of the crime, the type of crime, and the routes to and from the crime scene. On the one hand, predictive policing is based on the observation of habitual offenders.
On the other hand, it can often be observed that similar crimes are committed at
similar times at similar crime scenes. This is referred to as the Near-Repeat Theory: burglars often return to an area they have previously visited after a short time and strike again. Properties near the first crime scene therefore also carry an increased risk. However, this risk decreases over time and tends to be lower the further a property is from the first crime scene.
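The near-repeat idea can be expressed as a simple risk function that decays with the time since the first burglary and with the distance from the first crime scene. The exponential form and the parameter values below are purely illustrative assumptions, not the model of any specific predictive-policing product.

```python
import math

def near_repeat_risk(days_since_crime: float, distance_km: float,
                     base_risk: float = 1.0,
                     time_scale_days: float = 14.0,
                     distance_scale_km: float = 0.5) -> float:
    """Illustrative near-repeat risk: highest shortly after the first burglary
    and close to the original crime scene, decaying over time and distance."""
    return (base_risk
            * math.exp(-days_since_crime / time_scale_days)
            * math.exp(-distance_km / distance_scale_km))

# A property 200 m away, three days after the burglary ...
print(round(near_repeat_risk(days_since_crime=3, distance_km=0.2), 3))
# ... versus a property 2 km away, four weeks later.
print(round(near_repeat_risk(days_since_crime=28, distance_km=2.0), 3))
```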

Fig. 5.1 Functioning of predictive policing: data on burglaries from the past (address of the crime scene, time of the crime, course of the crime, loot) and socio-demographic data (population structure, e.g. proportion of families, building development) feed into the predictive policing software; the algorithm processes the input data and raises an alarm for areas with a high probability of a new crime, so that the police can increase patrols, check areas undercover, or inform residents



In addition, the police often assume the Rational-Choice Approach. According to this, a person's willingness to act—here, to commit a crime—is based on the balance between
the expected benefit (“loot”) and the possible costs (“risk of detection” and “punish-
ment”) of a crime. It is assumed that a potential criminal asks the following ques-
tions before the crime as part of a Cost-Benefit-Calculation:

• How many police officers are in a certain area at a certain time?


• How much resistance would I have to expect?
• What is my risk of getting caught there?
• How high could my loot be?

Based on these answers, police forces can be more present at the identified locations at times with a high predicted probability of crime. Their presence can either act as a deterrent or allow criminals to be caught in the act. It remains to be seen to what extent such
concepts will affect crime rates. Many cities—also in Germany—are already exper-
imenting with this solution.
Additional support can be provided to the police by the four-legged robot Spot, developed by Boston Dynamics [24]. The robot dog has already proven itself not only with the police but also in firefighting operations in burned-out buildings and even in agriculture. The robot dog can move at up to 5.57 kilometers per hour, open doors, and climb stairs. Equipped with an intelligent gripping arm, 3D obstacle detection, and multiple cameras, Spot can be deployed in places that are either too dangerous or inaccessible for humans. Moreover, such a robot dog also disturbs fewer traces at crime scenes than conventional wheeled or tracked devices. Spot also has
the ability to open interior doors that may be booby-trapped. In addition, Spot can
also survey terrain and scan buildings from the inside to detect possible damage.
The robot dog has been used not only in Baden-Württemberg but also in North
Rhine-Westphalia and Hamburg.

5.3.2 Social Credit System in China

The Social Credit System in China is an initiative of the Chinese government to monitor, evaluate, and regulate the financial, social, moral, and political behavior of Chinese citizens as well as the country's companies through a system of penalties and rewards. The basis for this is the "Planning Outline for the Construction of a Social Credit System (2014–2020)", which the State Council of the People's Republic of China issued as early as June 2014 and which has since been implemented ever more comprehensively. The declared goal is to give advantages to trustworthy people and to discipline unreliable people. Overall, behavior that is socially acceptable—as defined by the rulers—is to be achieved across the whole of Chinese society (cf. [25]).
This system is based on an integrated use of various AI solutions. Big data
analysis techniques evaluate a multitude of data points from different sources. These
include not only data traces from the internet and social media (mainly WeChat), but
also the data from the comprehensive surveillance of cameras installed in public
spaces. The footage obtained there can be clearly assigned to specific individuals through facial recognition.
The rules and the point allocation of the Social Credit System in China are not
completely transparent and can vary from region to region. Nevertheless, various
behavior patterns are known that affect the point account. The decisions about
which behavior earns plus and minus points are made by a small group—with indi-
rect and direct effects on a population of 1.4 billion people. One wonders with
which value system the algorithms of the AI applications work. Overall, the system results in state-ordered pressure to conform, which runs contrary to the diversity championed in (most) Western democracies. This gives an old Chinese proverb new
relevance:
The nail that sticks out gets hammered down.
Today, every single Chinese citizen is on the leash of the state and is only "rewarded" for following its respective norms—no matter how sensible these may seem in the eyes of the individual. One does not want to imagine what would happen if such a comprehensive screening system of a country's own population were ever hacked. And one does not want to imagine what manipulation possibilities this system already offers to those in power today.
How exactly are rewards and punishments applied to the population (cf. [26])? The following behavior patterns are rewarded with social credit points:

• Voluntary engagement in the community


• Help for the poor
• Regular and punctual payment of bills and taxes
• Blood donation
• Care for older family members
• Participation in charity events
• Praise for the government in social media
• Study of the writings of Xi Jinping
• …

The following behavior patterns lead to social credit deductions:

• Violations of traffic rules, e.g. unauthorized crossing of the street at a red light or
drunk driving
• Non-repayment of loans
• Spreading rumors on the internet
• “Insincere” apologies for committed crimes
• Participation in an organization classified as a cult
• Publication of anti-government messages on social media
• No regular visits to elderly parents
• “Illegal” protest against the authorities
• …

A low social credit score can lead to people being put on a blacklist. These people
are then denied access to certain services, e.g.:

• Denial of access to certain schools for the children of the affected person
• No access to licenses, permits, and some social services
• Exclusion from booking flights or tickets for high-speed trains
• Less access to credit
• Restricted access to public services (e.g. the issuance of a passport)
• Public shaming: publication of the names, photos, and ID numbers of citizens
who are on the blacklist, either online or on television screens in public places
• Officially ordered dial tones indicating that one is calling a “dishonest debtor”

In return, people with a high social credit score can gain privileged access to cer-
tain services. These include, for example:

• Priority for admission to high-demand schools and for employment in preferred companies
• Faster promotion at work
• Shorter waiting times in hospitals
• Free access to fitness facilities
• Discounts on public transport
• Deposit-free bicycle and car rental
• Easier access to bank loans and consumer credit
• Tax relief
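Purely to illustrate the mechanics described above, the following toy sketch shows how a points-based score with blacklist and privilege thresholds might be structured. All point values and thresholds are invented for illustration; the actual rules are not transparent and vary by region.

```python
# Invented point values - the real rules are not public and differ regionally.
POINT_RULES = {
    "blood_donation": +5,
    "bill_paid_on_time": +2,
    "jaywalking": -3,
    "loan_not_repaid": -20,
}

BLACKLIST_THRESHOLD = 0
PRIVILEGE_THRESHOLD = 50

def update_score(score: int, events: list) -> int:
    """Accumulate plus and minus points for observed behavior patterns."""
    for event in events:
        score += POINT_RULES.get(event, 0)
    return score

def status(score: int) -> str:
    if score < BLACKLIST_THRESHOLD:
        return "blacklisted (restricted access to services)"
    if score >= PRIVILEGE_THRESHOLD:
        return "privileged (preferential access to services)"
    return "neutral"

score = update_score(40, ["blood_donation", "bill_paid_on_time", "jaywalking"])
print(score, "->", status(score))
```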

Food for Thought


One could cynically remark: The social credit system in China is gamifica-
tion at the highest level. However, this is not a game, but a matter of grave
seriousness!

The proclaimed goal of the social credit system is to promote trust in society. In
fact, it primarily serves the surveillance of its own population, to detect possible
protests early on, to sanction or completely suppress them. For this, any right to
privacy is lost. Criticism is also directed at the system's lack of transparency and at the insufficient opportunities to appeal against unfair ratings.
This also applies to the corporate social credit system, which targets companies
operating in China. Through this concept, Chinese state capitalism is evolving into
surveillance state capitalism. This is about a data-driven project to assess the
trustworthiness of companies. The study by Lin and Milhaupt [27] found the following: ostensibly, the corporate social credit system is supposed to check
compliance with laws. However, it turned out that companies with “political con-
nections” received a higher score. This result is due to a category “social responsi-
bility” in the rating system. Companies that received government awards or
contributed to purposes sanctioned by the Communist Party of China were rated
better. Thus, the corporate social credit system becomes a tool to strengthen the
loyalty of companies to the party state’s politics.

Food for Thought


The long-term effects of the social credit system on people and companies
in China are still unclear. The awarding of points for various activities could
lead to people and companies changing their behavior—almost through self-
censorship—in order to achieve a higher score. Whether this can build trust in
the all-seeing state and suppress criticism in the long term remains to be seen.
The system could also serve as a model for other countries with dictato-
rial ambitions to completely monitor and possibly suppress their own popula-
tion and economy. Here too, a systemic competition between different country
blocs is emerging.
The AI Act of the European Union—the Law on Artificial Intelligence—
explicitly prohibits the use of AI for social scoring in Europe.

Think-Box: Questions You Should Ask Yourself!


• Does AI contribute to improving the cyber resilience, i.e., the company’s
resistance to cyber attacks, in our company?
• Can AI be used in our company to detect security threats such as malware or anomalies in network traffic?
• Can AI-assisted predictive analyses be conducted to identify and prevent
potential future security threats?
• Will AI contribute to the improvement of physical security in critical
areas, such as through facial recognition systems or motion detection?
• Can we incorporate the concept of predictive policing as a provider or
user—and possibly transfer it to other areas of application?
• How can we deal with the Chinese social credit system?
• What relevance does the corporate social credit system have for us if we
operate as a company in China?

5.4 Summary

• Facial recognition plays a central role in various security applications.


• The use of facial recognition must be particularly critically observed because it
is associated with many dangers.
• With predictive policing, attempts are made to anticipate crimes in order to
avoid them or to catch the perpetrators in the act.
• In China, we can experience first-hand what facial recognition in conjunction
with AI-driven big data analyses can “achieve” in the social credit system.

• The corporate social credit system has a significant influence on the “accepted”
behavior of companies in China.

References
1. Ziemssen, T. (2021, November 3). Multiple sclerosis: The potential of the digital twin.
Frankfurter Allgemeine Zeitung, V4.
2. Bastani, H., Drakopoulos, K., Gupta, V., Vlachogianni, I., Hadjicristodoulou, C., Lagiou, P.,
Magiorkinis, G., Paraskevis, D., & Tsiodras, S. (2021). Efficient and targeted COVID-19 bor-
der testing via reinforcement learning. Nature, 599, 108–113. Accessed May 26, 2023, from.
https://doi.org/10.1038/s41586-­021-­04014-­z
3. Bauer, W., & Vukelic, M. (2018). Research project EMOIO, interface to the world of com-
puters. In R. Neugebauer (Ed.), Digitalization, key technologies for economy and society
(pp. 135–151).
4. Saha, S., Mamun, K. A., Ahmed, K., Mostafa, R., Naik, G. R., Darvishi, S., Khandoker, A. H.,
& Baumert, M. (2021). Progress in brain computer interface: Challenges and opportunities.
Frontiers in Systems Neuroscience, 15, 578875. Accessed May 25, 2023, from. https://doi.
org/10.3389/fnsys.2021.578875
5. Neuralink. (2023). Breakthrough technology for the brain. Accessed May 25, 2023, from
https://www.neuralink.com/
6. Koch, M.-C. (2022). Brain-computer-interfaces: Musk wants to start human trials in 6 months.
Accessed May 25, 2023, from https://www.heise.de/news/Brain-­Computer-­Interfaces-­Musk-­
will-­Versuche-­an-­Menschen-­in-­6-­Monaten-­starten-­7363027.html
7. Deep Genomics. (2023). Programming RNA therapies any gene, any genetic condition.
Accessed May 25, 2023, from https://www.deepgenomics.com/
8. Flow Health. (2023). Diagnostic medicine. Reimagined. Accessed May 25, 2023, from https://
www.flowhealth.com/
9. Intuitive. (2023). Improving treatment quality. Accessed May 26, 2023, from https://www.
intuitive.com/de-­de
10. Backes, L. (2018, June 30). Do you know this one? (p. 119). Der Spiegel.
11. Dassault Systemes. (2023). Digital twin. Accessed May 25, 2023, from https://discover.3ds.
com/digital-­twin
12. Atomwise. (2023). Artificial Intelligence for drug discovery. Accessed May 25, 2023, from
https://www.atomwise.com/
13. Insilico Medicine. (2023). Artificial Intelligence for every step of pharmaceutical research and
development. Accessed May 22, 2023, from https://insilico.com/
14. Nuralogix. (2023). Anura, Take a selfie, know your healthie! Accessed May 25, 2023, from
https://www.nuralogix.ai/
15. WinFuture. (2019). Curious biohacking attempt: Eye drops enable night vision. Accessed May
25, 2023, from https://winfuture.de/news,86433.html
16. Donati, A. R. C., & Shokur, S. (2016). Long-term training with a brain-machine interface-
based gait protocol induces partial neurological recovery in paraplegic patients. Scientific
Reports, 6, 30383. https://doi.org/10.1038/srep30383
17. Bendel, O. (2023). Cyborg. Accessed May 25, 2023, from. https://wirtschaftslexikon.gabler.
de/definition/cyborg-­54197
18. VivoKey. (2023). Apex Flex. Accessed May 25, 2023, from https://vivokey.com/flex/
19. Deep Instinct. (2023). EDR is not enough. Accessed May 30, 2023, from https://www.deep-
instinct.com/de
20. Clearview AI. (2023). We are Clearview AI. Accessed May 26, 2023, from https://www.clear-
view.ai/overview
21. Hill, K. (2021). The secretive company that might end privacy as we know it. Accessed
November 23, 2021, from https://www.nytimes.com/2020/01/18/technology/clearview-­
privacy-­facial-­recognition.html
22. Lowkey. (2023). LowKey—Prevent your images from being used to track you. Accessed May
30, 2023, from https://lowkey.umiacs.umd.edu/
23. Association of German Criminal Officers. (2023). Predictive policing is effective support for
police work in NRW. Accessed June 08, 2023, from https://www.bdk.de
24. Boston Dynamics. (2023). Spot, automate sensing and inspection, capture limitless data, and
explore without boundaries. Accessed May 27, 2023, from https://www.bostondynamics.com/
products/spot
25. Bünte, C. (2020). The Chinese AI revolution. Springer Gabler.
26. Bertelsmann Foundation. (2023). China’s social credit system. Accessed May 30, 2023, from
www.bertelsmann-­stiftung.de
27. Lin, L., & Milhaupt, C. J. (2021). China’s corporate social credit system: The dawn of surveil-
lance state capitalism? European Corporate Governance Institute—Law working paper no.
610/2021, City University of Hong Kong School of Law Legal Studies research paper no.
Forthcoming, Stanford Law and Economics Olin working paper no. 560. Accessed May 30,
2023, from https://ssrn.com/abstract=3933134
6 Energy Sector and Smart Home

Summary In the energy sector, AI systems can make significant contributions to increasing energy efficiency in energy generation, energy transmission, and energy
consumption. Prerequisites for this are, among other things, Smart Grids and Smart
Meters. In the Smart Home, AI-supported products and processes can be used.
These offer a high level of convenience but also additional security risks.

6.1 AI-Supported Applications in the Energy Sector

One of the main areas of application of artificial intelligence in the energy sector is
energy management. Here, AI applications make an important contribution at every
stage of the value chain: from electricity generation to electricity transmission and
use by end users. A particularly important area is the forecasting of supply and demand in order to anticipate supply and demand peaks. This allows an optimal energy mix to be generated from various power sources. Achieving an optimal balance has become increasingly difficult with the growing use of decentralized renewable energy sources. After all, as the number and variance of independently acting power sources increase, so does the volatility of power generation. This creates great challenges for grid management if network failures are to be avoided.
This is supported by the development of intelligent power grids. These so-
called Smart Grids enable the analysis of data from the entire power grid. This
makes it easier to maintain the balance between energy production and consump-
tion and to avoid bottlenecks and overloads. AI helps to improve the power grid’s
response to failures and to shorten recovery times. In addition, AI can help facilitate
the integration of renewable energies into the power grid and minimize the effects
of fluctuations in energy production. Such intelligent power grids increase supply
security and ideally also lead to a cost reduction.
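As a simple illustration of the forecasting task sketched here, the following example predicts the next hour's electricity demand with a naive seasonal average and compares it with an assumed renewables forecast to flag a potential shortfall. The method and all figures are illustrative assumptions, not a real grid-management algorithm.

```python
import numpy as np

def forecast_next_hour(load_history_mw: np.ndarray, season: int = 24) -> float:
    """Naive seasonal forecast: the expected load for the next hour is the
    average load observed at the same hour of day in the history."""
    same_hour = load_history_mw[len(load_history_mw) - season::-season]
    return float(same_hour.mean())

rng = np.random.default_rng(42)
hours = np.arange(7 * 24)  # one week of hourly load data
load = 500 + 150 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 20, hours.size)

expected_demand = forecast_next_hour(load)   # MW, illustrative
renewables_forecast = 430.0                  # MW, e.g. forecast wind + solar output
shortfall = expected_demand - renewables_forecast

print(f"Expected demand next hour: {expected_demand:.0f} MW")
if shortfall > 0:
    print(f"Dispatch about {shortfall:.0f} MW from other sources or storage.")
```

Real grid operators would of course use much richer models (weather, calendar effects, market signals); the point here is only the basic supply-demand balancing logic described in the text.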


In most countries, the modernization of the network and the use of intelligent
meters (Smart Meters) are already underway to achieve a more dynamic coordina-
tion of supply and demand. Smart Grid initiatives allow small, private energy pro-
ducers (including individual households) to sell excess capacities to regional energy
suppliers. In this case, AI-based systems can also help supply companies assess the
reliability of small providers (e.g., private households with solar systems). To this end, the lifespan and integration capability of the systems installed there are predicted.
This could further develop the power grid into a marketplace where, in addition to
traditional power generators, a multitude of small actors offer power from various
sources (including car batteries, rooftop solar cells). An intelligent grid manage-
ment would help to integrate this large number of mini-­suppliers in such a way that
supply and demand peaks can be better managed through intelligent storage concepts.
Additionally, AI can assist in the use of renewable energies to optimize energy
production and reduce costs. For wind turbines, AI systems can predict optimal oper-
ating conditions and thus improve performance. This involves capturing a multitude
of data via sensors and evaluating it through AI algorithms. The insights gained allow
for an increase in the efficiency of power generation, for example by optimally
adjusting the generation units to the respective wind and sun conditions. Artificial
intelligence can also assist in selecting the best locations for new wind and solar
installations. To this end, data on weather, topography, and other factors are analyzed.
Predictive maintenance also contributes to supply security (see Sect. 3.1.3.4 for an in-depth discussion). By analyzing data from a variety of sensors on energy generation and
distribution systems, potential problems can be identified early on before they lead
to failures. This avoids costly repairs and downtime and extends the lifespan of the
systems. AI can also help determine the optimal maintenance times to minimize
operating costs. The energy suppliers’ field staff then receive real-time updates on
technical problems with the systems. This can shorten response times and prevent
or reduce network failures. In the future, drones and insect-sized robots could
inspect systems without interrupting the production or transmission of energy. Overall, fewer technicians are needed; they can focus more on anticipatory problem solving and less on manually recording the status quo.
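A minimal sketch of the early-warning idea behind such predictive maintenance: compare the latest reading of, say, a transformer temperature sensor with its historical distribution and raise a maintenance flag when the deviation becomes unusual. The z-score rule and the threshold are simplifying assumptions; production systems use far richer models.

```python
import statistics

def needs_maintenance(history: list, latest: float, z_threshold: float = 3.0) -> bool:
    """Flag a sensor reading that deviates strongly from its normal behavior."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    z_score = (latest - mean) / stdev
    return abs(z_score) > z_threshold

temperatures = [61.8, 62.1, 61.5, 62.4, 61.9, 62.0, 61.7, 62.2]  # degrees Celsius
print(needs_maintenance(temperatures, latest=62.3))  # False: within the normal range
print(needs_maintenance(temperatures, latest=71.0))  # True: schedule an inspection
```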
AI systems also assist in monitoring energy consumption in real time. Thus,
AI-controlled systems in private households, but especially in office buildings and fac-
tories, can help optimize the use of lighting, air conditioning, and other energy-inten-
sive systems to reduce energy consumption and overall increase energy efficiency.

Food for Thought


These positive effects of Smart Solutions will only occur if the networks are
comprehensively protected against hacker attacks. Because the “smarter” the
networks become, the more units are connected through them, the more (also
uncontrollable) entry points for computer viruses there will be. Here, Cyber-
Security must act more proactively than the hacker armies, so that entire
regions or countries do not plunge into darkness and standstill.

Fiction Reading Tip


If you want to get an idea of what happens when a Smart Grid is hacked, you
should read the work “Blackout” by Marc Elsberg. Stimulating and exciting
entertainment at the highest level—and perhaps even with consequences for
your own household!

6.2 AI-Supported Solutions for Smart Homes

Smart Homes are a specific manifestation of the Internet of Things. These are
houses that are often equipped with a large number of networked devices for various
household functions. At its core, a Smart Home is a comprehensively networked
living environment, both internally and externally. Externally, there is definitely
a connection to the Internet, possibly also to a Smart Grid. Internally, depending on
the technological enthusiasm of the residents, all sorts of devices and processes can
be connected to the Internet and thus become “smart” (see Fig. 6.1; also [1]).
The use of AI in the Smart Home supports the automation of household appli-
ances and of processes in the household. AI systems can recognize patterns in the
behavior of the residents and carry out actions based on this. An AI-controlled heat-
ing system can learn the preferences of the residents over time and adjust the tem-
perature accordingly to optimize energy consumption. AI-controlled lighting
systems can adjust the light to the time of day, the weather, and the presence and
preferences of different people. The most important driver for the acceptance of

Fig. 6.1 Functions of a smart home (e.g. music systems, kitchen appliances, power outlets, motion detectors, garage)



Smart Homes is the convenience of users. After all, many smart applications can be
operated via the Internet—usually via apps on mobile devices. Thus the following
functions within an apartment can be controlled from any location in the world
connected to the Internet:

• Brightness of the apartment


• Temperature, possibly even more comprehensively the climate of the apartment
• Volume of radio, TV and other sound systems
• Monitoring of the children’s room (through web-cams)
• Cooking processes in the kitchen (e.g. turning on the oven, coffee machine and
similar)
• Insight into the refrigerator (through a camera installed there, which transmits
images directly to the smartphone)
• Roller shutters, awnings
• Entrance doors and garages
• Washing machine

An additional level of convenience is achieved when certain functions no longer require the resident's initiative but are independently recognized as necessary by (experience-based) AI algorithms (a minimal sketch of such a learned rule follows the list below):

• The garage opens automatically when the user’s vehicle is within 50 m and the
parking space in the garage is still available.
• Already 30 to 60 min before the expected arrival of the residents, the preferred
room climate is controlled in the respective used rooms. This can vary individu-
ally for each resident—depending on the time of day.
• With the opening of the entrance door, the preferred illumination of the apart-
ment is set—again depending on the person.
• Occasionally, possibly even matched to the mood of the resident (derived from his or her latest Facebook or Twitter post), the "appropriate" music can be played (based on the preferences of each individual person stored on Spotify).
• Based on individual habits, the—correctly tempered—bath water can already
be filled.
• Depending on individual morning routines, the coffee machine can be started
automatically (however, it should be noted that with coffee machines, a rinsing
process precedes the coffee preparation and possibly both end up in the already
placed coffee cup!).
• The fill level of refrigerator, pantry and wine cellar can be automatically mon-
itored and lead to autonomous purchasing processes (for example via Alexa).
• Depending on past or planned activities, cleaning services can be booked independently and the corresponding staff can be let into the apartment within a predefined time window (password protected or via face recognition).
• Washing machines can start automatically when they cause the least annoyance to the residents, when the best time to hang up the laundry is reached, or when electricity costs are lowest.

• From the baby’s sleep patterns, it can be recognized when the next night hunger
is due and the previously boiled water can be heated in time to the ideal tem-
perature for bottle preparation.
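To give an idea of how such experience-based rules could look in code, here is a minimal sketch of an assumed heating controller that learns each resident's preferred temperature per hour of day from past manual settings and pre-heats before the predicted arrival. All class names, hours, and temperatures are illustrative.

```python
from collections import defaultdict
from statistics import fmean

class LearnedHeatingProfile:
    """Learns a resident's preferred temperature per hour of day from past
    manual settings and suggests a set-point before the predicted arrival."""

    def __init__(self):
        self._observations = defaultdict(list)  # hour of day -> chosen temperatures

    def record(self, hour: int, chosen_temp: float) -> None:
        self._observations[hour].append(chosen_temp)

    def setpoint_for(self, hour: int, default: float = 20.0) -> float:
        temps = self._observations.get(hour)
        return fmean(temps) if temps else default

profile = LearnedHeatingProfile()
for temp in (21.0, 21.5, 21.0):  # past evenings, set manually by the resident
    profile.record(hour=18, chosen_temp=temp)

# 30 to 60 minutes before the predicted arrival at 18:00, pre-heat to the learned value.
print(f"Pre-heating to {profile.setpoint_for(18):.1f} degrees Celsius")
```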

For an invitation to a 30th birthday, the following questions can be answered with
AI support by evaluating the respective information flows:

• Which individuals have accepted the invitation to the party? (Evaluation of Facebook, email or WhatsApp messages)
• What music preferences do the guests have? (Analysis of the Facebook, Spotify
and YouTube profiles of the guests)
• What food suggestions for the party are agreeable—vegan, vegetarian, organic
food etc.? (Analysis of the Facebook and Instagram profiles of the guests or from
posts in WhatsApp groups)
• What drinking habits do the guests have? (based on the Facebook and Instagram
profiles of the guests)
• What food purchases need to be made? (Evaluation of the refrigerator’s fill level)
• Which drinks need to be bought? (depending on the status of the drink stock)
• Are the preferred alcoholic and non-alcoholic drinks on special offer by the
time of the event? (current offers and delivery conditions of various offline and
online providers)
• Which shopping channels are used? (Based on the host’s previous buying
behavior)
• What would be suitable gift suggestions for the host? (based on the Facebook
and Instagram profile of the host)
• How extensive must the cleaning be on the “day after”? (based on photos and
videos of previous parties as well as on current photos of the rooms where the
invited people have already partied)
• Where should the cleaning staff be booked? (Based on possible positive reviews
from Helpling and/or existing contracts, also depending on the respective avail-
abilities at the host’s place of residence)

Note Box
You are called upon to examine the relevance of such developments for your
own business model. Where are there opportunities to be involved in the
smart home development with your own products and services? How can
your own business models be adapted or further developed?

Today, the success of various service robots is already evident—also for private
households. These include mowing and vacuuming robots, which are becoming
increasingly popular. The success of window cleaning robots, on the other hand, is
still limited. Vacuum robots in the past often had the problem that dog feces, for example, were not reliably recognized and were then spread over the entire floor area. To prevent this, the vacuum robot from iRobot was specifically
trained in object recognition. For this purpose, hundreds of artificial excrements
were formed and painted accordingly. The vacuum robot was supposed to learn to
recognize these objects. However, this is much more difficult with liquid excretions
(cf. [2], p. 21). The robots from iRobot [3] can be activated via voice control
using Alexa.
AI is also supposed to improve safety in Smart Homes. By using face and motion recognition, AI-supported surveillance systems can detect unauthorized persons and unusual activities—and, if necessary, send appropriate notifications to the rightful users. Security systems can also recognize when the residents leave the house and then activate the available alarm systems.
One possible development must not be overlooked in the development of smart
homes: Smart Terror. In the USA, the technologies used in the smart home envi-
ronment are already being specifically used to control and terrorize one’s own ex-
partner. For example, the ex-partner changed the security code of the front door so
that the remaining partner could no longer enter the apartment. In addition, the air conditioning cooled the apartment down drastically in winter. Also, loud music from
the smart music system could sound again and again at night without the remaining
partner being able to do anything about it. Smart objects—equipped with camera
and microphone—could additionally recognize and record all movements in the
apartment. Apps, digital personal assistants & Co. made and make it possible (cf.
[4, 5]).
A particularly interesting use of AI in the smart home for an aging society is
personal assistance systems. These systems can take over a variety of tasks. The
range extends from managing the calendar and reminding of appointments to con-
trolling household appliances by voice command. In addition, by learning the pref-
erences and habits of the users, they can give tailor-made recommendations to
improve the quality of life. This can be particularly beneficial for older people or
people with disabilities, as it helps them to live independently in their own homes
for longer.
To address this task, apartments are equipped with IoT sensors and cognitive
computing to keep older people safe at home. If the sensors detect irregularities,
relatives can be notified via a mobile app or emergency services can be called.
Appropriate triggers can be water running for a long time, intense smoke develop-
ment or stove plates that have not been switched off. Conspicuous behavior patterns can also be defined as triggers, for example if the front door is left open during the normal sleep phase. If the toilet is visited four times a night—deviating
from normal behavior—an alarm can also be triggered. This can also be the case if
the motion sensors do not report any activities within 6 h during the “awake phase”.
A service offering for elderly care would bring several advantages for the elderly:

• Senior care as a central module of Smart Home

The networked use of sensors in smart home solutions allows older people—who
actually need supervision—to stay in their own homes longer. This can save care costs and create additional quality of life. Through the 24/7 monitoring of
the residents, they can be helped quickly in emergencies. In acute emergencies,
telemedicine can be used—as part of home health care. This can relieve caring fam-
ily members and reduce possible absences from work.
For housing companies, equipping the apartments with networked sensors offers
the opportunity to further develop their business model towards “health
management”.

• Use of smart systems

A senior care system should not only function technically flawlessly, but also
meet the ergonomic requirements of the users. This includes mobile warning mes-
sages that attract attention without causing panic. A simple traffic light system can
be used for this:

• Green: “Everything is fine”


• Yellow: “Unusual behavior detected - there may be a problem”
• Red: “Emergency requiring immediate attention”

To calibrate the system for the meaning of the colors red and yellow, user tests can
be carried out over several months.
An accompanying app can enable family members to exchange information about
the condition of their parents through one-touch communication. Here is an exam-
ple: “I received a yellow notification about mom.” “Don’t worry, I just checked on
her. Everything is fine.”
Through AI-based evaluation of the daily routines of each individual, the soft-
ware can independently recognize abnormal and potentially dangerous behavior:

• What length of stay in bed can justify an alarm?


• How long can the refrigerator remain unopened without indicating a problem?
• What movements at night are “normal”?
• Which nocturnal behavior patterns can be a sign of a medical emergency?

There is a demand for Elderly-Care solutions that can be integrated into a specific
ecosystem. Relevant partners include real estate developers, emergency services,
other health service providers, insurance companies, and hardware and software
providers. It is crucial that from the outset, there is a focus on End-to-End user-
friendliness. Then it should also be possible, for example, to operate the gas or
water shut-off valve from a distance and to switch off the stove. In addition, specific
devices such as emergency call systems, bed sensors, medication boxes, blood pres-
sure and blood glucose monitors should be integrated into the monitoring. This
information would have to be evaluated by intelligent software to initiate individual
escalation processes. For this purpose, the AI system learns the movement profile
of the resident in order to generate automatic notifications or alarm messages in case
of deviations.
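The following sketch illustrates how such a learned movement profile might be checked against a few triggers and mapped onto the green/yellow/red traffic-light logic described above. The thresholds and sensor signals are illustrative assumptions that would in practice be learned per resident.

```python
from datetime import datetime, timedelta

# Illustrative thresholds; in practice they would be learned from each resident's routines.
MAX_INACTIVITY_WHILE_AWAKE = timedelta(hours=6)
MAX_NIGHTLY_TOILET_VISITS = 3

def traffic_light(last_motion: datetime, now: datetime,
                  nightly_toilet_visits: int, front_door_open_at_night: bool) -> str:
    """Map sensor observations onto the green/yellow/red alert levels."""
    if front_door_open_at_night or now - last_motion > MAX_INACTIVITY_WHILE_AWAKE:
        return "red"     # emergency requiring immediate attention
    if nightly_toilet_visits > MAX_NIGHTLY_TOILET_VISITS:
        return "yellow"  # unusual behavior detected - there may be a problem
    return "green"       # everything is fine

now = datetime(2024, 5, 6, 14, 0)
print(traffic_light(last_motion=now - timedelta(hours=7), now=now,
                    nightly_toilet_visits=1, front_door_open_at_night=False))  # "red"
```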

Food for Thought


Smart Home offers a wonderful projection surface for different groups. Tech
freaks see in the Smart Home the fulfillment of their dreams, because their
own living environment always knows or anticipates what is desired when and
where—to ideally deliver exactly that. This also includes the possibility of
supporting older family members in their desire to live in their familiar envi-
ronment for as long as possible.
Doomsday prophets, on the other hand, fear unauthorized intrusion by
hackers into their own home through insufficiently protected internet inter-
faces. Not only can processes be manipulated, but possibly the entire habits of
a community—like a Stasi 2.0—can be spied on. In addition, the doomsday
prophets fear that the different software and hardware solutions—often with
diverging update rhythms—can diverge and thus stand in the way of trouble-
free use. After all, uniform standards for such applications are often still
lacking.
It is up to you as a provider as well as a user of Smart Home solutions to
decide where you see yourself more.

The development of Smart Homes can be taken further in the direction of the future concept of the Smart City. In this approach, urban living develops into a network
of communication, logistics, and information systems. At the same time, it aims for
sustainable growth and high quality of life. In addition, it should strengthen the
willingness to participate constructively in politics. The ideas for Smart Home
can seamlessly integrate into these concepts.

Note Box
With Smart City, the core issue is nothing less than a comprehensive net-
working of the city’s technical, financial, and operational infrastructures with
each other and—depending on the application—with the infrastructure of the
citizens (Smart Home, Smart Car…) and the companies (Smart Factory,
Smart Logistics).

A sub-area of Smart City is Smart Mobility or Smart Traffic. This is about an intelligent interplay of various modes of transport to improve passenger and freight transport. Car-to-car communication between the vehicles used can achieve a higher level of safety. Additionally, robots in logistics
can be integrated to transport people and goods. A multitude of legal questions
still need to be clarified in this regard. Finally, the increased use of sensors leads
to an ever more comprehensive intrusion into the privacy of individuals, who, for
example, are recorded during their everyday activities while driving autono-
mously (cf. [6]).
The application of the US start-up ZenCity goes far beyond this. The goal is to incorporate citizens' expressions of opinion more comprehensively into urban decision-making processes. To this end, a multitude of data points from interactions
with residents are collected and converted into decision-relevant insights in real
time by AI systems. The following service areas are covered (cf. [7]):

• Identification of relevant topics

Through automated recording of ongoing communication in social media, chat


rooms, and city hotlines, the—from a citizen’s perspective—relevant topics for city
administrations can be made visible. The goal is to move beyond the STP corner
(Same Ten People) and build a connection to the various areas of a community.

• In-depth analyses

A further evaluation shows where which topics are discussed via which chan-
nels with which keywords. If necessary, the analysis can go down to the individual
post. For this purpose, the online available feedback data of the residents are identi-
fied, collected and automatically evaluated. The data found in hundreds of external
and internal channels and feedback sources are converted into actionable insights
and customized reports.

• Alert messages

Based on the insights gained, Push Notifications can be sent to the respective
responsible persons to react promptly. These can be sudden deviations or slowly
emerging trend changes. Here it becomes visible in real time when, where and how
resources need to be deployed.

• Sentiment Analysis

To recognize the mood in the population, posts can be classified as positive, neutral, and negative. This allows faster identification of problems in the various
areas of city management (cf. Sect. 1.3.1).
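As a deliberately simplified stand-in for the sentiment analysis described here, the sketch below classifies citizen posts with a tiny word list. Real systems such as ZenCity rely on trained language models; the lexicon and the example posts are invented for illustration.

```python
POSITIVE_WORDS = {"great", "thanks", "clean", "safe", "improved"}
NEGATIVE_WORDS = {"broken", "dirty", "unsafe", "noise", "complaint"}

def classify(post: str) -> str:
    """Very rough lexicon-based sentiment: positive, negative, or neutral."""
    words = set(post.lower().split())
    score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

posts = [
    "Thanks for the improved bus schedule",
    "The playground is dirty and the lighting is broken",
    "Town hall meeting next Tuesday",
]
for post in posts:
    print(classify(post), "-", post)
```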
Such systems enable city managers to keep their finger on the pulse of their own
citizens. The exciting question is how quickly the insights gained here can be con-
verted into active action in the face of the often dominant bureaucracy in city
management.
Where the larger cities in Germany stand in terms of digitization and the use of
AI solutions is shown by the Smart City Index of Bitkom [8]. The Smart City Index
is the digital ranking of German cities with over 100,000 inhabitants. To deter-
mine this, the degree of digitization in the areas of administration, IT and commu-
nication, energy and environment, mobility and society is determined. Here, 36
indicators are analyzed, which are formed from 133 parameters. A total of 10,773
data points were determined. Which cities lead the ranking in 2022 (see [8], p. 9)?

• Hamburg lands in first place with 86.1 out of 100 possible points. Hamburg is
now the smartest city in Germany for the fourth time in a row.

• Munich lands in 2nd place with 85.3 points.


• Dresden has won the bronze medal for the first time with 81.6 points.
• Cologne, Stuttgart, Nuremberg, Aachen, Bochum, Düsseldorf, and Darmstadt
land in places 4 to 10.
• Berlin is in 11th place.
• Overall, cities from 9 of the 16 federal states can be found among the top 20 cities.

What conclusions can be drawn from the overall results (see [8], p. 21)? The Smart
City Index 2022 shows a strong dynamic in the advancing digitization of German
cities. There are clear differences here:

• A group of cities is advancing steadily and has already made impressive progress.
• Other municipalities are still in the initial phase, working on the introduction
of digital services or are even only in the planning phase.

It shows that cities that already have extensive digital infrastructure, long-term
financing plans, and comprehensive experience from previous digital projects can
drive the digital transformation more quickly and effectively.
Germany must not stand still and wait for the positive conditions that prevail in the leading cities to spread of their own accord to the remaining, usually smaller and financially less well-equipped municipalities. To implement the digital
transformation more broadly and quickly across the board and to reach all munici-
palities and rural regions, new structures and greater determination are needed on
the part of the federal government, the states, and the municipalities themselves.
The following measures are suggested for this purpose (cf. [8], p. 5):

• Coordination of funding strategies

Currently, there is a fragmented landscape in the Smart City funding. The funding strategies of the federal government and the states are not coordinated.

• Introduction of a procurement platform

The federal government and the states should provide municipalities with a cen-
tral procurement platform for Smart City and Smart Region solutions. This plat-
form should offer solutions that meet the requirements of procurement and contract
law. In addition, they should meet the requirements for data protection and data
security.

• Redesign of financing for digitization

Digitization is often not yet firmly anchored in municipal budgets. To change this, digitization projects need to be included as a fixed component in the budget-
ing process.

• Germany’s participation in international standardization initiatives

Germany should take a more active role in the international standard-setting procedures in the Smart City sector. This ensures that technological innovations are
interoperable and scalable.
The digitization of German cities and municipalities is an ongoing process.
This process needs to be actively managed. In addition, the necessary structures and
resources must be provided at all levels to give all municipalities—regardless of
their size or financial situation—the opportunity to take advantage of the benefits of
digitization for the benefit of employees, citizens, and businesses.

Think-Box: Questions you Should Ask yourself!


• How can artificial intelligence in our company contribute to improving
energy efficiency and reducing energy consumption?
• What data is necessary for training AI models and how can this be col-
lected and managed safely and responsibly?
• How can privacy and security of users be ensured when using AI applica-
tions in the smart home?
• What regulations and standards apply to the use of AI in the energy sector
and in the smart home?
• How can we design AI systems so that they can be easily operated and
understood by users?
• How can smart city and smart mobility solutions contribute to improving
the quality of life of citizens and increasing the sustainability of cities?
• What infrastructure and data are necessary for the implementation of smart
city and smart mobility solutions and how can these be effectively built and
managed?
• How can the privacy and security of citizens be ensured with such services?
• How can the impacts and benefits of smart city and smart mobility solutions be measured and evaluated?
• How can citizens and other stakeholders be involved in the planning and
implementation of smart city and smart mobility projects?
• How can the acceptance of smart city and smart mobility solutions be pro-
moted among citizens?

Food for Thought


It is interesting to consider what the Smart Cities will look like in the future
when more and more jobs are occupied by robots and stationary retail stores
are replaced by e-commerce with robot delivery (ground or air-based).

• What kind of work will there be in the cities of the future?


• How will we transport ourselves to which workplaces (or be transported)?

• It should be noted: The question of the future of the city is inseparably linked to the question of the work of the future. "If the city, its spaces, its rhythm, its collective rituals (lunch break, after-office drinks, window shopping) were built around the organization of work and shaped by the idea and requirements of wage work—what would it look like if this form of work disappeared? What would public space be if it were no longer primarily about transporting people to work?" ([9], p. 45):
• What would a city look like where most people either work in small units
or from home?
• What consequences would this have for the division of the city into a pub-
lic area (offices, shops, streets, transport, etc.) and the private area
(apartments)?
• What would it mean for urban planning if work and leisure increas-
ingly mixed?
• What would be the consequences if a multitude of work processes were
performed mobile—possibly not only location-independent, but also
time-independent?
• How would a city have to be designed if only ten hours of work were to be
performed per week?
• What would happen to the office buildings, their canteens, and park-
ing lots?
• What would a city look like if a universal basic income were widely
accepted?
• What needs would such a city then primarily have to satisfy?
• What would a city look like if the proportion of online shopping was not
just 13% as it is today in Germany, but 70 or 80%?
• What would happen to the shopping malls and the many small and large
shops within and in the catchment area of the cities if they were no longer
visited?
• …

6.3 Summary

• An important area of application for AI is energy management.


• AI solutions can support the entire energy value chain.
• The success of Smart Grids and the advancement of Smart Meters stands and
falls with the security of these networks.
• The development of Smart Homes is slowly gaining momentum. It remains to
be seen whether the tech enthusiasts or the skeptics will dominate; there are
strong arguments for both parties.
• The development of Smart Cities is only slowly progressing.
• In terms of digitizing administration, cities in Germany still have a lot of room
for improvement.

References
1. Bendel, O. (2023). Smart home. Accessed May 25, 2023, from https://wirtschaftslexikon.
gabler.de/definition/smart-­home-­54137
2. o. V. (2021, September 10). Household robot with arms. Frankfurter Allgemeine Zeitung, 21.
3. iRobot. (2023). Which product suits you? Accessed May 30, 2023, from https://www.irobot.de/
4. Patrick, W. L. (2018). Remote controlled: Domestic abuse through technology, modern
inventions make you vulnerable to predators outside and inside your home. Accessed May
30, 2023, from https://www.psychologytoday.com/intl/blog/why-­bad-­looks-­good/201807/
remote-­controlled-­domestic-­abuse-­through-­technology
5. Woodlock, D. (2016). The abuse of technology in domestic violence and stalking, violence
against women. Psychology Today, 23(5), 584–602.
6. Flügge, B. (Ed.). (2020). Smart mobility: Trends, concepts, best practices for intelligent mobil-
ity (2nd ed.). Springer Vieweg.
7. ZenCity. (2023). Community-based insights for better city management. Accessed May 30,
2023, from https://zencity.io/
8. Bitkom. (2022). Smart city index 2022, study report on the degree of digitalization of the
81 German major cities. Accessed May 30, 2023, from https://www.bitkom.org/Bitkom/
Publikationen/Smart-­City-­Index-­2022
9. Maak, N. (2018, August 24). Move into the mall! Frankfurter Allgemeine Sonntagszeitung, 45.
7 Education and Knowledge Transfer

Summary Artificial Intelligence can assist in overcoming many challenges in the education sector. These include the development of adaptive learning systems and
the possibility of incorporating AI engines into the evaluation of written perfor-
mances. AI can also assist with administrative tasks in the education sector and in
communication with students. AR and VR applications as well as immersive learn-
ing environments contribute significantly to knowledge acquisition.

7.1 Challenges in the Education Sector

For decades, there has been discussion about how to revolutionize education through
technology. AI-supported analyses and forecasts can assist those responsible for
education systems in an important task: the alignment between the needs of people
(in terms of qualifications and job desires) and the requirements of future employ-
ers. This makes it—theoretically—possible for the curricula to be aligned early on
with the requirements of future working life. Based on this, the qualifications that
will be demanded not yesterday or today, but tomorrow, could be taught.
Unfortunately, this will remain fiction in most countries.
Current trends are focusing more on digital learning platforms, blended learning,
MOOCs (Massive Open Online Courses) as well as paid online learning platforms.
It can be observed that the way content is presented is increasingly shifting from
text to visual information—one could also say: from Google to YouTube. In aca-
demic teaching, digital learning platforms are already increasingly being used
(e.g. Moodle). There, lecturers can make learning content available online, announce
dates, inform courses about changes, and create glossaries. Learners can download
this information, view timetables, course participants and announcements, and

possibly upload their own contributions. This makes the university mobile and the
content can be accessed anywhere.
In school education in Germany, a School-Cloud is already being used in some
cases. For this purpose, the Hasso Plattner Institute (HPI) developed the HPI
School-Cloud from 2017 to 2021 in a pilot project. This is a secure, data protection-
compliant IT infrastructure for schools. This digital education platform supports
teachers and students in the flexible use of various end devices. Digital teaching and learning content and other tools necessary for teaching can be accessed and used cooperatively. By the end of the project in 2021, the HPI School-Cloud
had created a secure digital learning space for about 4000 schools in Germany with
1.4 million users. However, these systems are not yet truly “intelligent”. They ini-
tially only enable a networking of teachers, students and learning material–but they
offer the springboard for more!
It is important that the school and university education no longer focuses on
rigid rote learning in fixed structures, but promotes independent learning in
unstructured learning environments–to develop creativity and initiative. It makes
little sense to teach students in a digital world to memorize information that can be
accessed at any time via mobile device. Nevertheless, it is indispensable to build up
a personal knowledge base–this is the basis for the development of personal values
and the prerequisite for informed decisions and personal creativity. Also, remember:
If you know nothing, you have to believe everything!
Beyond factual knowledge, it is especially important to build up one’s own
media competence in order to be able to work competently with various sources.
People who want to explain complex relationships monocausally should be met
with skepticism. Motto: X is solely to blame for Y. This is how populists in particu-
lar “explain” the world. Against this, it is necessary to independently investigate the
complexity of facts in order to recognize that there are often large differences
between correlations and causalities (cf. basic [1], pp. 1–33).

Note Box
For preparation for the future world of work, forms of knowledge transfer
and competence acquisition are needed that promote creativity, problem-­
solving ability, self-organization, initiative, appreciative and problem-solving
communication, and thinking in contexts.

7.2 Development of Adaptive Learning Systems–Incl. Automated Evaluation

Over many centuries, the universal educational approach has changed little. A lec-
turer imparts various contents to a larger group of more or less interested individuals,
which they are supposed to learn. In this process, the individual learning progress
achieved could often not be given much attention. The goal of creating individual,
tailor-made, time- and space-independent learning programs for a wide variety
of participants is therefore a major driver for the use of AI in the education sector.
Today, AI-supported adaptive learning systems are already being used. AI tech-
nology enables educational platforms to individualize and optimize the learning
process. The system alta from Knewton supports adaptive learning through
dynamic, ongoing tutoring. To this end, alta adapts to the performance level of the
students with each interaction. The students do not have to perform a formal assess-
ment or diagnosis to receive the necessary instructions and exercises. These are
provided just-in-time while the students are working on the tasks. The system can
also automatically identify areas where a student is having difficulty. Then, addi-
tional resources or exercises are provided to address this specific deficiency. By
quickly identifying students’ knowledge gaps and providing timely corrective mea-
sures, alta supports the learning process. Here, the presentation of the learning
material is based on individual learning styles, speeds, and preferences. This makes
the teaching more personal and efficient (see [2]).
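The basic mechanism behind such adaptive systems can be illustrated with a small sketch. The following Python example is purely illustrative and does not reflect Knewton's actual implementation; it assumes a simple running mastery estimate per topic and recommends the next exercise where the estimated mastery is weakest.

# Illustrative sketch of an adaptive exercise selector (not Knewton's actual algorithm).
# Assumption: each answer updates a simple mastery estimate per topic;
# the next exercise is chosen where estimated mastery is lowest.
class AdaptiveTutor:
    def __init__(self, topics):
        # Start with a neutral mastery estimate of 0.5 for every topic.
        self.mastery = {topic: 0.5 for topic in topics}

    def record_answer(self, topic, correct, learning_rate=0.2):
        """Move the mastery estimate towards 1 (correct) or 0 (incorrect)."""
        target = 1.0 if correct else 0.0
        self.mastery[topic] += learning_rate * (target - self.mastery[topic])

    def next_topic(self):
        """Recommend the topic with the weakest estimated mastery."""
        return min(self.mastery, key=self.mastery.get)

tutor = AdaptiveTutor(["fractions", "percentages", "equations"])
tutor.record_answer("fractions", correct=True)
tutor.record_answer("equations", correct=False)
print(tutor.next_topic())   # -> "equations": additional exercises are offered here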
In some areas of traditional learning, AI systems can also replace the lecturer.
This applies, for example, to the evaluation of written work, where AI-driven machines are making rapid progress. Here, companies rely on
image recognition to decipher handwriting–something that even a lecturer often
fails at today. Based on defined learning content, an automatic evaluation of the
exam performances can take place. Thus, artificial intelligence can relieve the lec-
turers through an automated evaluation. The EdTech company Gradescope, which
belongs to Turnitin, uses AI tools for the assessment not only of standardized tests
but also of free-text answers. By using AI for assessment, lecturers can spend more
time on interactive and personal aspects of teaching (see [3]). However, a value-
based interpretation, for example of historical events, eludes automated evaluation–
and this will probably always remain the case. After all, history is known to be
written by the winners. The losers usually have a completely different view of things!
AI can also support students with special needs in learning by making learn-
ing materials more accessible. Microsoft’s Seeing AI is an app that can recognize
texts, people, and objects and read information aloud. This can help blind and visu-
ally impaired people. For example, a study text can be read aloud as soon as it
appears in front of the camera. The app also offers an audio guide for capturing a
printed page. Even handwritten text can be read aloud. In addition, friends can be
recognized and people in the respective environment can be described–including
their respective emotions. When paying, banknotes can be recognized (see [4]).
The Google App Live Transcribe & Notification can convert spoken words into
text in real time. This makes everyday conversations and ambient noises more
accessible for deaf and hard-of-hearing people. This app can also provide important
support for the learning process (see [5]).
Automated image recognition using webcams can detect signs of boredom, engagement, over- or under-challenge, and/or a possible discontinuation of the learning process in the gestures and facial expressions of learners. This allows learning difficulties and learning preferences of students and pupils to be identified. To this end, previously unusual data and data sources are
used, for example, the activities of learners on social networks. Based on such
insights, artificial intelligence can improve learning and teaching through a stronger
individualization. As a result, further indicators of learning success can be identi-
fied for each participant, which were not previously known.
An interesting AI application is to enable learners to perform a more comprehen-
sive self-control over the learning process. Based on a deep analysis of the learn-
ing process, individualized recommendations for increasing learning success could
be conveyed. Thus, individual cognitive and behavior-related preferences could be
taken into account. Through such a—permission-based—evaluation, the individual
would be specifically supported. At the same time, anonymized training data for the
further development of the AI algorithms could be obtained.
In addition, a continuous monitoring of the individual learning process can
take place. This does not only refer to the number of breaks a learner takes during a
lesson, but also records the time it takes to answer a question. The number of
attempts to answer a question before it was answered correctly can also be evalu-
ated. Image recognition of the learner, eye tracking, the analysis of mouse movements, and emotion analysis can provide deeper insights into performance, mindset, and cognitive abilities—provided permission has been obtained for each of these. A more individual supervision of the learning process becomes possible (cf. [6, 7] for corresponding model approaches and experiments).
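To make this more concrete, the following minimal Python sketch shows how such interaction signals could be aggregated per learner; the metrics and field names are hypothetical and only illustrate the kind of data such monitoring would collect.

# Illustrative aggregation of learning-process signals (hypothetical field names).
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class LearningSession:
    answer_times: list = field(default_factory=list)   # seconds per question
    attempts: list = field(default_factory=list)       # attempts until correct
    breaks: int = 0                                     # number of breaks taken

    def log_question(self, seconds, attempts_needed):
        self.answer_times.append(seconds)
        self.attempts.append(attempts_needed)

    def indicators(self):
        """Condense the raw signals into simple indicators for tutor or lecturer."""
        return {
            "avg_answer_time": mean(self.answer_times) if self.answer_times else None,
            "avg_attempts": mean(self.attempts) if self.attempts else None,
            "breaks": self.breaks,
        }

session = LearningSession()
session.log_question(seconds=42, attempts_needed=3)
session.log_question(seconds=18, attempts_needed=1)
session.breaks += 1
print(session.indicators())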
In addition, a virtual supervisor could further support the work and behavior of the
lecturers. These virtual supervisors could send alert messages to the human lecturers
when many students fail certain tasks or tend to drop a course. In the case of particularly poor or outstanding performance, the responsible supervisor can be informed proactively. Thus, important feedback discussions can be arranged to sustainably support learning success.
The question is whether we can or want to see such a development on a broader
basis. On the one hand, there is the ethical question of comprehensive surveil-
lance of learners, which would deeply invade their privacy. On the other hand,
there is the question of the necessary investments, which would be associated
with such a comprehensive approach. Presumably, such concepts will initially only
be used in particularly critical areas (such as severe learning disorders). In addition,
they can be used where there are many financial resources available, for example in
privately funded educational institutions and in the military sector.
The use of the insights gained here about optimal learning support does not have
to stop at the school or university boundary. The individual learning profile, which
can change over time, would accompany the learner throughout their life. They could
be repeatedly pointed to additional relevant learning content to prevent the increasingly
rapid obsolescence of knowledge. At the same time, the relevant learning content could
be prepared in a form that corresponds to the respective learning preferences.
The necessity for this derives from the challenge for everyone to engage in lifelong learning. Education and training in every country and in every company require a strategic realignment and further development to meet the challenges of the labor market in the face of the digital and ecological transformation.
Figure 7.1 shows the strategic qualification gap that needs to be closed. The focus
of today’s (state) educational efforts is on early childhood education, school educa-
tion, vocational training at the entry into professional life as well as university
education (keyword Qualifying or Skilling).

Fig. 7.1 Strategic qualification gap

It is largely overlooked that people
spend their longest time—often over 40 years—in professional activities, the
requirements of which are changing at an increasingly rapid pace. This results in the
need for a Re-Qualifying or a Re-Skilling and Up-Skilling. Even the baby boomer generation, which will be leaving the labor market in the coming years, had to prove itself in a work environment for which neither school, nor vocational training, nor universities could adequately prepare it. The possibilities of the internet,
the challenges of digitization and artificial intelligence were not part of the study
content at that time, because these developments were not yet foreseeable. In addi-
tion, typewriters rather than computers were part of the standard equipment of a
student, who in the 1970s and 1980s also had to get by without a smartphone, iPad,
internet, Facebook and Amazon—and still survived!
The dynamics of changes in professional life are accelerating for subsequent
generations. The following developments illustrate this:
• Hundreds of thousands of employees today perform functions that did not exist 20 years ago: app developers, community managers, UX designers (UX stands for User Experience), SEO specialists (SEO means Search Engine Optimization), social media managers, big data analysts, cloud service managers, CDOs (Chief Digital Officers), CSOs (Chief Sustainability Officers), AI developers etc.
• Accordingly, it is expected that 70% of today's students will work in jobs that do not yet exist.
• In 10 years, employees will work with technologies that are not yet operationally usable today, such as Composite AI and Artificial General Intelligence (see the Gartner Hype Cycle for Artificial Intelligence in Fig. 2.6).
• The future employees will have to solve problems that are not yet known today.
To successfully shape this change as a company and as a society, the strategic qualification gap needs to be systematically closed through a Re-Qualifying. It is essential that no employee waits for the company to do something for them. Instead, a high degree of self-initiative to close the qualification gap is required if one's own employer has not recognized the signs of the times or is not acting appropriately.

Note Box
Due to the dynamic changes in markets and technologies, the previously dom-
inant institutionalized education through schools and universities is losing
importance.
In the future, the non-institutionalized education—initiated by compa-
nies and the learners themselves—will determine which nations win in the
knowledge competition.

Volkswagen has installed a Chief Learning Officer (CLO) for these challenges.
The CLO leads the VW Group Academy—the umbrella organization for all training
and further education institutions of the group. His task is to requalify or retrain 30
to 40% of the existing workforce. This is more than 30,000 people at the German
Volkswagen locations alone. The challenges to be mastered in this context are illus-
trated by an example: To qualify a “metal worker” as a “vehicle electronics techni-
cian”, 378 qualification days are planned (see [8], p. 20).

Food for Thought


The development of individual learning and teaching profiles and education offers
oriented towards them represents an exciting field of work for schools and univer-
sities, but also for companies and governments. Here, a significant contribution
could be made to closing the strategic qualification gap—at the national level.
The principle is: Everyone is responsible for maintaining their own employability!

7.3 AI-Based Administration and Student Support

AI solutions can also take over administrative tasks and provide support in student care.
Routine questions from students can be answered by chatbots. In addition, AI can
provide assistance in navigating through administrative processes such as enrolling
in courses.
Virtual teachers and immersive learning environments open up completely new
possibilities for knowledge transfer. This not only makes an individualized teaching
experience possible, but also availability around the clock. Such AI-supported vir-
tual teachers are also referred to as chatbots or virtual tutors. Based on natural
language processing, intelligent tutoring systems can interact with students, answer
their questions, and provide learning material. At its core, these are digital advisors
or supervisors to support learners in the process.
The heart of an AI-supported tutoring system is the ability to merge detailed
data at an individual level. This allows individual learning progress to be deter-
mined and individual feedback to be given. This feedback provides information–often context-sensitive–about the correctness or incorrectness of the respective steps. In addition,
digital tutors can also provide context-sensitive hints and instructions to guide the
learners from learning step to learning step. For this, the digital tutor continuously
monitors the achieved level of knowledge and adjusts the course material accord-
ingly. At the same time, the human lecturers as mentors and coaches could
accompany the qualification process more individually. This requires specific skills
such as emotional intelligence, creativity and appreciative communication.
Machines will probably not be able to learn these skills in the next few years.
Companies like AI-Aloe use AI to create virtual characters capable of interacting
with students and teaching. At the center is the development of AI teaching and
learning assistants. These are intended to improve the cognitive, lecturer-related
and social presence in online adult education, to make it more efficient and effective.
The efficiency of teaching is increased when a lecturer can reach a larger number of
students without affecting individual learning success. The effectiveness of teaching
is determined by the time students need to master a lesson or a skill. In parallel, data
about the learner and the learning should be used to personalize learning (cf. [9]).
Jill Watson of the Georgia Institute of Technology is one of the most famous
digital teaching assistants in the world. At the start in 2015, Jill initially only
answered routine, frequently asked questions in the forum–but with more than 90%
accuracy. Her authenticity was so convincing that the students did not find out that
Jill was an AI agent. Now several avatars of Jill Watson are used under various
pseudonyms in both online and face-to-face events. The current version of Jill auto-
matically answers about three fifths of the student questions in the first week of the
course as well as about a third of all administrative questions about the course evalu-
ations, such as tasks, projects, exams (cf. [10]).
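How a digital teaching assistant of this kind can decide whether to answer a routine question itself or hand it over to the human teaching staff can be sketched in a few lines. The following Python example is not the actual Jill Watson system; it only illustrates the principle of answering high-confidence matches and escalating everything else.

# Minimal sketch of a routine-question assistant (not the actual Jill Watson system).
# Questions are matched against known FAQs by word overlap; low-confidence
# questions are escalated to the human teaching staff.
def similarity(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

FAQ = {
    "when is the exam": "The exam takes place in the last week of the course.",
    "how is the project graded": "The project counts for 40 percent of the final grade.",
}

def answer(question, threshold=0.5):
    best_q = max(FAQ, key=lambda q: similarity(q, question))
    if similarity(best_q, question) >= threshold:
        return FAQ[best_q]
    return "Forwarded to a human teaching assistant."

print(answer("When is the exam scheduled?"))
print(answer("Can I get an extension for personal reasons?"))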
The education platform Udacity uses GPT-4 to answer student questions–instead
of human mentors. This way, up to 20,000 questions per day can be much more eas-
ily processed. GPT-4 is also used for the development of learning content and video
productions. Finally, GPT-4 is also used for individual career counseling–to support
students on their way to their desired profession. However, the founder of Udacity,
Sebastian Thrun, also makes it clear: AI cannot support students in developing
character, nor can it replace the–also important for learning success–interpersonal
interactions (cf. [11], p. 51).

7.4 Development of Immersive Learning Environments—VR and AR Solutions

In addition, immersive learning environments that combine AI and Virtual Reality (VR) or Augmented Reality (AR) can enable deeper and more engaging learning. The company Labster provides science students with virtual
laboratory experiences. These virtual labs simulate real scientific experiments.

This allows students to conduct experiments, collect data, and analyze their results—
all in a virtual environment. The AI technology allows the labs to respond to the
actions of the students, give them feedback, and assess their performance.
These immersive learning environments allow students to learn in a practice-
oriented, interactive context. They also provide access to experiences that would
otherwise be difficult to achieve due to costs, safety concerns, or logistical chal-
lenges. These include, for example, highly specialized laboratory work or the explo-
ration of hard-to-reach places like space or the ocean floor. These virtual labs have
the same capabilities that science teachers can offer their students. In addition, they open up further fields of action even under the conditions of tight budgets, unmotivated students, and growing knowledge gaps (see [12]).
In the vocational education sector, a mix of technologies is currently being
used more intensively. Approaches of Augmented Reality (AR) can specifically
and individually support employees in their work. Special data glasses (Smart
Glasses) or tablets are linked with the data from facilities and/or machines. This
allows the wearer of the glasses to directly receive instructions for the next work
steps. Such Augmented-Reality glasses can also support apprentices, career start-
ers, and unskilled personnel in training as well as in less familiar procedures and
work processes. If operated incorrectly, a red signal can appear in an AR glass or on
a tablet, prompting the operator to correct immediately. Specialists and skilled
workers can be informed about the current process progress at any time via appro-
priate devices. The transmitted information and instructions are based on real-time
measurements that are incorporated into IoT technology.
In addition, the use of Virtual Reality (VR) glasses can convey educational
content more realistically. Drawing on the gaming industry, virtual worlds are cre-
ated in which the learner is trained on practically relevant situations:

• An aspiring steward can simulate the behavior during an emergency landing of the aircraft.
• A midwife can learn in a training unit what special challenges are associated with a twin birth and how to master them.
• A machine builder recognizes what consequences an incorrectly installed valve would have on the production process.
Based on AI algorithms, specific weaknesses of the learner can also be identified
here and appropriate responses can be made. At the same time, the consequences of
wrong decisions can be simulated without anyone getting hurt.

Note Box
AI-supported virtual teachers and immersive learning environments have
the potential to radically change and improve the way knowledge is conveyed,
by enabling individualized, interactive, and engaged learning.

It turns out that AI approaches are to be understood as an exciting extension and


less as a replacement for current training concepts. The challenge is called Blended
Learning—a mix of different learning concepts, e.g. online courses and face-to-face events, virtual supervisors, AR and VR applications, immersive learning environments and more. Here, the Metaverse offers an exciting "playground" to try out these possibilities (see Chap. 11).

Food for Thought


Even in the education sector, it is important to integrate the possibilities of
artificial intelligence responsibly. A ban on use, as various educational insti-
tutions have tried with the use of ChatGPT, is however the worst of all options.
When have bans—such as on cigarettes, alcohol or excessive social media
consumption—ever worked?
The AI genie is out of the bottle.
Now it’s about learning and teaching responsible handling of it—and
aligning the forms of knowledge transfer and also examination forms to it!

Think-Box: Questions You Should Ask Yourself!


• Which aspects of the learning process in our company could benefit from
AI technologies?
• How effective and reliable is the AI solution in providing educational
resources today?
• Can AI-based systems cater to the specific needs of learners and support
individual learning paths?
• Are all learners treated equally? Could the AI technology possibly deliver
biased or discriminatory results?
• Is our existing technical infrastructure capable of supporting the imple-
mentation and operation of the AI solution?
• Does our staff have the necessary skills and knowledge to effectively use
and manage the AI technology to support teaching and learning processes?
• Is there still a need for training or professional development in our com-
pany in this regard?
• What are the costs for the implementation and maintenance of the AI
technology?
• How are such costs justified in comparison to the expected benefits and
improvements?
• How are the performances of AI technology monitored and evaluated
over time?
• Do we already have established mechanisms for regular feedback and
adjustments?
• How are the data of learners protected and used? What security measures
are in place to prevent misuse and data breaches?
• What other legal or regulatory aspects need to be considered when imple-
menting AI in the education sector?

7.5 Summary

• AI solutions enable personalized learning experiences. Individual learning


plans are created based on the strengths, weaknesses, and interests of the students.
• AI can provide real-time feedback on the work of learners. It can identify errors
and suggest corrections, contributing to the improvement of learning outcomes.
• AI-based solutions enable a prediction of learning difficulties; then, lecturers
can intervene early and provide support.
• AI-supported tutor systems can act as virtual tutors that are available around
the clock.
• Lecturers can be relieved by an AI-supported correction of assignments.
• AI engines support the automation of administrative tasks and the creation of
teaching content.
• Immersive learning environments (including the use of AR and VR technologies)
enable an improvement of online learning platforms—an exciting field of
application in the metaverse.
• AI enables easier accessibility to educational content through text-to-speech
technologies or visual aids for learners with special needs.

References
1. Kreutzer, R. T. (2018). Toolbox for marketing and management. Springer Gabler.
2. Knewton. (2023). Alta's adaptive learning technology. Accessed May 31, 2023, from https://www.knewton.com/the-power-of-altas-adaptive-technology/
3. Gradescope. (2023). Deliver and grade your assessments anywhere. Accessed May 31, 2023, from https://www.gradescope.com/
4. Microsoft. (2023). Seeing AI in new languages. Accessed May 31, 2023, from https://www.microsoft.com/en-us/ai/seeing-ai
5. Google. (2023). Live transcribe & notification. Accessed May 31, 2023, from https://play.google.com/store/apps/details?id=com.google.audio.hearing.visualization.accessibility.scribe&hl=gsw&gl=US
6. Behera, A., Matthew, P., Keidel, A., Vangorp, P., Fang, H., & Canning, S. (2020). Associating facial expressions and upper-body gestures with learning tasks for enhancing intelligent tutoring systems. International Journal of Artificial Intelligence in Education, 30, 236–270. Accessed June 01, 2023, from https://doi.org/10.1007/s40593-020-00195-2
7. Khenkar, S. G., Jarraya, S. K., Allinjawi, A., Alkhuraiji, S., Abuzinadah, N., & Kateb, F. A. (2023). Deep analysis of student body activities to detect engagement state in E-learning sessions. Accessed June 01, 2023, from https://www.mdpi.com/2076-3417/13/4/2591
8. Theurer, M. (2023, May 21). Wolfsburg goes online. Frankfurter Allgemeine Sunday Newspaper, 20.
9. AI-Aloe. (2023). About AI-ALOE. Accessed May 31, 2023, from https://aialoe.org/
10. GTV. (2023). Virtual teaching assistant: Jill Watson. Accessed May 31, 2023, from https://gvu.gatech.edu/research/projects/virtual-teaching-assistant-jill-watson
11. Thrun, S. (2023, May 28). It's just getting started. Frankfurter Allgemeine Sunday Newspaper, 51.
12. Labster. (2023). Empower the next generation of scientists. Accessed May 31, 2023, from https://www.labster.com/
8 Human Resource Management (HRM)
Summary Artificial Intelligence can support HRM along the entire Talent Journey.
This begins with Augmented Writing and continues with an AI-based analysis and
evaluation of applicants. Here, CV-Parsing plays a significant role. During onboard-
ing as well as in the internal identification and development of talents, AI engines
can provide important impulses.

An exciting field for the use of Artificial Intelligence is HR Management (HRM). How AI can be used here is demonstrated in its central areas of action. These include strategic HR planning, recruitment, training and development, performance management, and the incentivization of staff. To achieve a holistic approach involving AI technologies, you should establish a Talent Experience Management (TXM). This is intended to create a holistic talent experience.

Note Box
In the HR area, the marketing principle "The experience is the product" applies analogously:
“The experience is the job (offer)!”

To create a convincing talent experience, the responsible HR managers must


leave existing information and process silos. Because only then can a holistic, con-
sistent experience for existing and future talents be created. If expectations and
experiences—from the perspective of the talents—diverge within the Talent Journey
because there is no clean integration of the individual steps, application and employ-
ment processes and a mutually beneficial collaboration cannot succeed.

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/978-3-658-46131-7_8.


Fig. 8.1 Comprehensive design of the Talent Journey—with relevant questions (phases: sought-after candidate, applicant, candidate, newcomer, employee/potential leader, exit/retirement)

In what way can the Talent Experience Management remedy this? Talent
Experience Management represents a holistic approach. The goal is to create con-
vincing experiences from the application process to the departure of an employee or
a manager from the own organization. This is intended to promote the engagement
of employees and managers and to develop them into enthusiastic ambassadors of
their own company and their own brands. For this, four “experience areas” need to
be addressed (see [1], p. 9):

• The applicants find the right job—quickly and efficiently.


• The recruiters become more productive and increase the quality of acquisitions.
• The dedicated employees and executives can contribute and develop their poten-
tial in the long term.
• The executives increase their productivity and gain the insights that are indis-
pensable for their leadership tasks.

How can you achieve this? The Talent Experience Management tries to design
each point on the Talent Journey with a goal-oriented approach. Which questions
within the Talent Journey need to be answered by the company is shown in Fig. 8.1.
In each phase, it is necessary to check what contribution artificial intelligence can
make to improve the experiences. For this purpose, individual solutions are pre-
sented as examples to give a concrete idea of the concepts already available today.

8.1 Strategic HR Planning

Strategic HR planning is the starting point of Talent Experience Management. At the


center of activities here is the prediction of future personnel needs and especially the
necessary qualifications. For this, a Human Resource Information System (HRIS)
needs to be set up—as an important basis for the use of AI (see also [2, 3]).
The central task of the HR Information System is the compilation and evaluation
of data about employees and executives as well as about current and future
qualification requirements. This can additionally substantiate personnel decisions and increase the efficiency of HR management. More and more companies are moving towards an AI-supported Human Resource Information System—to explore rewarding areas of interaction between artificial and human intelligence. How the Talent Journey can be supported in the early phases by AI tools is shown in Fig. 8.2. These tools will be presented in detail in the following sections.

Fig. 8.2 AI tools for the early phases of the Talent Journey (tools: optimization of job advertisements and correspondence/augmented writing, chatbots as a point of contact, analysis of audio and video recordings, analysis of CVs/CV parsing, matching of profiles, creation of a ranking—for the phases sought-after candidate, applicant, and candidate)

8.2 AI-Supported Recruitment

Oriented towards the requirements of strategic HR planning, the recruitment of the


required employees and executives takes place. Already today, a shortage of quali-
fied (specialist) staff is becoming apparent in more and more areas. Therefore, the
recruitment activities need to be intensified. A central task of the recruitment pro-
cess is initially to generate a sufficient number of qualified applicants. Here, the automated optimization of job advertisements can make a significant contribution. The technical term for this is Augmented Writing. In Augmented Writing,
(AI-supported) systems design the titles and texts of a job advertisement in such a
way that as many qualified applicants as possible feel addressed. In addition, the
advertisement can be played out via the channels considered particularly relevant.
To illustrate the process of Augmented Writing, the concept of the US company
Textio is presented. To train the AI engine, Textio evaluates millions of job offers
and recruitment emails provided worldwide by companies from various industries.
A Prediction Engine uses this data to uncover meaningful patterns in language that
lead to more powerful communication and thus better business results. The results
obtained in each case generate further data to advance the learning process through
a learning loop (cf. [4]).
The insights gained from this flow into Augmented Writing. In the application
process, Augmented Job Posts can initially be used, which are intended to trigger
particularly qualified applications. Through Augmented Recruiting Mails, the


most interesting candidates can then be addressed. An activating approach is indis-
pensable here, as particularly qualified individuals certainly receive many corre-
sponding emails. Therefore, it is important that the “spark jumps over” already at
this contact. Textio helps to write personal and effective cover letters much faster
(cf. [4]).
For this purpose, the visual editor of Textio provides formulations during the
writing process that are particularly effective. Suggestions are provided for the for-
mulation of the subject line to arouse the recipient's interest. The Textio Guide is specifically tailored to recruitment emails. The Textio Score indicates to what extent the Textio guidelines have been adhered to. Evaluation and highlights are displayed during input. For terms that could put off recipients, the tool suggests synonyms. This can continuously improve the language. Through the Textio browser extension, for example, Johnson & Johnson increased the response rate of its applicants through emails with high Textio Scores by 25% (cf. [4]).
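The underlying idea of such a writing assistant can be illustrated with a strongly simplified sketch. The following Python example is not Textio's proprietary engine, and the flagged terms and suggested alternatives are hypothetical; it only shows how a draft could be scored and alternative formulations proposed during writing.

# Illustrative scoring of a job-ad draft (not Textio's actual engine or word lists).
# Terms assumed to perform poorly get flagged and a synonym is suggested;
# the score reflects the share of flagged terms.
SUGGESTIONS = {           # hypothetical examples of weak terms and alternatives
    "rockstar": "experienced specialist",
    "aggressive": "ambitious",
    "manpower": "staff",
}

def review(text):
    words = text.lower().split()
    flags = [(w, SUGGESTIONS[w]) for w in words if w in SUGGESTIONS]
    score = round(100 * (1 - len(flags) / max(len(words), 1)))
    return score, flags

score, flags = review("We need an aggressive rockstar developer for our team")
print(score)    # quality score of the draft
print(flags)    # [('aggressive', 'ambitious'), ('rockstar', 'experienced specialist')]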

Note Box
What is also important here: In the end, the HR manager decides which job
advertisements are published and which texts are used. However, augmented
writing can help to reach as many qualified applicants as possible and speed
up the search for suitable candidates.

Jobspreader [5] supports the recruiting process with data, technology, and
know-how. For this purpose, among other things, a labor market screening, a
competitive radar and a job title search engine optimization are carried out. In
this way, job offers become comprehensively visible. In addition, a dashboard is
offered to anchor data-based recruiting processually in the organization.
When people are interested in a company and its job offers, so-called Robo
Recruiters can be used. What can they do today? In sum, it must be stated that cor-
responding concepts are not yet able to independently design the entire recruiting
process. Even in this area, the primary goal is to support HR managers in their work.
For simple dialogues in initial contact with applicants, chatbots can be used.
Chatbots can answer simple and frequently asked HR questions. These include
questions about working hours, vacation, or parental leave. Chatbots can also sup-
port the onboarding process.
The digital assistants presented in Sect. 4.2 can conduct more extensive dia-
logues. One area is the determination of job-relevant qualifications through corre-
sponding dialogues. This includes, for example, the existence of certain degrees
from vocational schools or universities. The level of foreign language proficiency
can also be determined. Such a solution is offered by IBM RPA (Robotic Process
Automation). This is a central tool for creating and managing bots with a native,
AI-capable digital assistant (see [6]). These applications can chat with people to
provide effective support in the recruiting process. If questions become more com-
plex, immediate forwarding to an HR manager is indispensable.
Furthermore, with permission, automated analyses of audio and video record-
ings can be carried out. The technologies used here evaluate sound and/or image
recordings of candidates in order to draw conclusions about the personality traits
and competencies of the applicants.
The Video AI of the German company Retorio [7] is based on scientifically vali-
dated personality and culture models that allow statements regarding behavior and
performance. The basis for this is the Five-Factor Model, also called the Big-5 Model (see [8] for fundamentals). This model of personality is a hierarchical organiza-
tion of personality traits in the form of five basic dimensions:

• Openness: Tendency towards openness to new experiences
• Conscientiousness: Tendency towards long-term planning, self-discipline, and diligence
• Extraversion: Tendency towards dominance and sociability
• Agreeableness: Tendency towards friendliness and harmony
• Neuroticism: Tendency towards anxiety and nervousness

Together, the initial letters of the English terms form the acronym OCEAN. This
model can be used to describe personality differences in all cultures. Each trait has
several sub-dimensions (see [9]). The AI model based on this personality concept
was trained with a demographically diverse dataset of more than 100,000 people
from around the world and is therefore globally applicable. Retorio places particular
emphasis on AI fairness with regard to age, gender, and skin color (see Sect. 1.4).
The models used are intended to exclude human prejudices and the associated risks
of discriminatory behavior. Supervised learning and deep learning models were
used for training.
Through automated video interviews, Retorio can create personality profiles
based on the analysis of facial expression, body language, and voice. In addition, the
spoken words are transcribed and the language usage is analyzed. It is important
here that the model from Retorio is based on the actual behavior—and not on a
self-assessment of the candidate. This avoids the risk of participants overestimating
their characteristics to increase the likelihood of being hired or promoted. This risk
can be avoided by trained personnel during observations.
The results that can be achieved with this AI tool were determined by a self-test
by the author. First, the potential applicant was asked to introduce himself briefly.
Then he had to describe a professional situation. A few minutes are sufficient for
each. A few days later, the results were sent by email. The author was lucky—a
recommendation to invite for an interview was made (see Fig. 8.3).
However, the further results achieved through the AI-supported video analysis
are much more exciting. The Big-5 personality profile is shown in Fig. 8.4.
The further information gained from this video analysis is shown in Fig. 8.5. It
becomes clear here: The perspective of human recruiters is enriched by further
exciting content.

Fig. 8.3 Video recording by Retorio. (Source: Retorio)

However, it also applies: The profile of a candidate must fit the culture of the
company. The corporate culture reflects behavioral patterns and attitudes that are
lived by the various members and units of an organization. To capture these as well,
Retorio uses one of the most important taxonomies in cultural research. The fol-
lowing factors of corporate culture can be distinguished according to O‘Reilly III
et al. ([10], p. 502):

• Innovation and risk-taking


• Attention to detail
• Results orientation
• Aggressiveness and competitiveness
• Support
• Emphasis on growth and reward
• Collaboration and team orientation
• Decisiveness

These factors of corporate culture can be matched with individual characteristics


and properties of people. In this way, personality traits at the individual level can
be combined into an overarching culture model. This determines how well certain
people fit into a corporate culture. The length of stay in the company tends to
increase with a good fit.
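How such a person-culture matching could work in principle can be sketched with a simple similarity measure. The following Python example is purely illustrative and not Retorio's actual model; the dimensions and values are hypothetical.

# Illustrative person-culture matching via cosine similarity (not Retorio's model).
# Both profiles are expressed on the same hypothetical dimensions (0..1).
from math import sqrt

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

company_culture = {"innovation": 0.9, "detail": 0.4, "team_orientation": 0.8}
candidate = {"innovation": 0.7, "detail": 0.5, "team_orientation": 0.9}

fit = cosine(candidate, company_culture)
print(f"Culture fit: {fit:.2f}")   # values close to 1 indicate a good fit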
Fig. 8.4 Big-5 personality profile of the AI-supported video analysis by Retorio. (Source: Retorio)

Fig. 8.5 Detailed evaluation of the AI-supported video analysis by Retorio. (Source: Retorio)

Further analysis possibilities arise when the applicant has agreed to a compre-
hensive video analysis. Then, artificial intelligence can, for example, convert a
15-min video interview into a set of 20,000 data points for facial movements, into-
nation, and word choice (cf. [11], p. 110). The insights gained can also be used for
the ongoing evaluation of employees and executives who are already on board.
This can identify development potentials and suitability for advertised internal posi-
tions, and learning agendas and career paths can be developed. At a further stage, it
could also be determined how harmoniously and successfully employees could
work with certain executives. In this way, the team structure can also be optimized
with AI support.

The language analysis VIER Emotion Analytics already introduced in Sect. 4.4
can also be used in the HR area. Here, the application serves to recognize strengths
and areas for development of applicants as well as employees. After all, the way
people communicate with each other affects their relationships and thus also the
achievable work result. For this purpose, the system measures the way people inter-
act with each other. Recruiters can thus more easily and quickly find employees
who fit into a team and generally into the communication and interaction culture of
a company. For this purpose, video interviews can be evaluated. In addition, execu-
tives can receive impulses for talent management to motivate, develop and retain
employees (cf. [12]).
An innovative HR approach is also offered by HireVue. Through an End-to-End
Hiring Experience Platform, a faster and fairer hiring process is supported. The
offered software automates the necessary workflows at the company level. The
phases of addressing, checking, evaluating, interviewing, and hiring of the talent
journey are seamlessly linked for this purpose. Automated workflows via text, web,
and WhatsApp interfaces enable a 24/7 applicant interaction. This facilitates the
scaling of hiring processes. For this purpose, a link is made with the leading calen-
dar systems to automate scheduling. For the live and on-demand video interviews,
conversational AI is also used. Recorded interviews can be evaluated more consis-
tently by HR decision-makers. In addition, more candidates can be assessed in a
shorter time with system support (cf. [13]).
The process is supported by a HireVue Builder. This avoids ad hoc developed
and thus often also inconsistent interview processes. Through this system, HR man-
agers can access more than 1000 job-specific, occupational psychologist-tested
interview guides. The HireVue Builder automatically generates questions and
selects the relevant evaluation criteria based on job-related attributes and skills. The
standardized interviews thus ensure a higher degree of consistency and fairness in
the interview process (cf. [13]).

8.3 Capture and Analysis of Resumes

The next important step in the Talent Journey is the capture and analysis of
resumes of applicants. Most of the time, these are unstructured or semi-structured
data that need to be converted into a structured resume for further processing.
Here, language and image recognition methods are used. If facial recognition tech-
niques are used, a match of the candidates with the submitted photo documents can
be checked. Once this comparison has been made and a structured dataset is avail-
able, classic plausibilities can be checked and interesting patterns in the resumes
can be identified. These identified patterns can then be matched with the job require-
ments. Such a pre-analysis and pre-selection of potential candidates can signifi-
cantly speed up the recruitment process—especially in companies with a high
volume of applicants. However, such qualified support for personnel acquisition can
only succeed if the dataset used for training does not have any bias and does not
allow any prejudices generated in the past to influence the evaluation (see in depth
Sect. 1.4). If these conditions are met, the qualifications of the candidates can be
evaluated more impartially.
To support these processes, AI tools for automated analysis of resumes are
used. This is referred to as CV parsing or résumé parsing (see basic parsing). In
CV parsing (CV for curriculum vitae or resume), the relevant data from a digitally
available, unstructured resume are systematically filtered out, structured, and
imported into an applicant management software. AI algorithms recognize the rel-
evant text elements of a resume. Image recognition captures photos of the appli-
cants, so that these can then be extracted and processed separately. CV parsing
contributes to the acceleration of the recruiting process because not every resume
has to be completely reviewed to find the relevant data in the most diverse places. In
addition, a structured preparation of the CV data allows for targeted searches for
certain keywords (e.g., for specific qualifications or degrees). With a CV parsing
software, a semantic search, and a semantic matching solution between the appli-
cant’s qualification profile and the job requirements, employers and recruiters are
supported in making their personnel and recruitment processes more efficient.
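The principle of CV parsing can be illustrated with a strongly simplified sketch. Commercial parsers rely on trained NLP models; the following Python example only extracts a few hypothetical fields from unstructured resume text using regular expressions.

# Strongly simplified CV-parsing sketch (commercial parsers rely on trained NLP models).
import re

def parse_cv(text):
    """Extract a few structured fields from unstructured resume text."""
    email = re.search(r"[\w.+-]+@[\w.-]+\.\w+", text)
    phone = re.search(r"\+?\d[\d\s/-]{7,}\d", text)
    degrees = re.findall(r"\b(B\.?Sc\.?|M\.?Sc\.?|MBA|PhD)\b", text)
    languages = re.findall(r"\b(English|German|French|Spanish)\b", text, re.IGNORECASE)
    return {
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
        "degrees": degrees,
        "languages": sorted({lang.title() for lang in languages}),
    }

cv_text = """Jane Doe, M.Sc. Computer Science
Languages: English, German
Contact: [email protected], +49 30 1234567"""
print(parse_cv(cv_text))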
A corresponding solution is offered, for example, by the German company
Talention [14]. Their CV parsing tools also enable the analysis of one-click appli-
cations. With these, you can apply directly with your Xing or LinkedIn profile. The
data found there is analyzed by the CV parser and structured into an applicant data-
base. Such an approach not only facilitates the work of HR managers, but also
improves the talent experience. This also includes ensuring that the forms for an online application also function on mobile devices, because more and more often, candidates apply via tablet or smartphone. These aspects should not be lost
sight of, especially in the War for Talents.
Textkernel has also developed a CV parsing algorithm with Extract! 4.0. This
allows resumes to be analyzed in detail (cf. [15]).
Another tool is the Talent Acquisition Suite of the company Softgarden, which
is also active in Germany. Since the majority of applicants first search for potential
employers on Google, your own offers must be optimally found there—and then
convince. In addition, employees can be involved in the recruitment process them-
selves. After all, the saying goes—analogous to friend referral—“good employees
know other good (potential) employees”. Recommendations often lead to particu-
larly successful hires. Therefore, it is important to involve the networks of your own
employees in the recruiting process. As part of the applicant management, CV pars-
ing is also used here. In addition, the employer ratings are incorporated into the
process. This supports a holistic HR management (cf. [16]).

8.4 Matching of Candidates and Job Requirements

The structured preparation of resumes is the prerequisite for an automated match-


ing of candidates and job requirements. Here, the profiles of internal or external
candidates are systematically compared with the job requirement profile. This can
quickly find a possible match—or it becomes visible that there is still a lack of
qualified applicants. The automated matching also supports internal talent manage-
ment and upcoming succession planning. Sometimes, as part of the matching pro-
cess, a ranking of applicants is automatically created based on the predefined
goals and requirements. Then the people are ranked according to their suitability for
a vacant position, a training, a coaching or even for a release.
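A minimal sketch shows how such a match score and the resulting ranking could be computed. The following Python example is purely illustrative; real systems use far richer semantic matching, and the skills and candidates shown are hypothetical.

# Illustrative skills-based matching and ranking (real systems use semantic matching).
def match_score(candidate_skills, required_skills):
    """Share of required skills that the candidate covers (0..1)."""
    required = {s.lower() for s in required_skills}
    covered = required & {s.lower() for s in candidate_skills}
    return len(covered) / len(required) if required else 0.0

job_requirements = ["Python", "SQL", "Machine Learning"]
candidates = {
    "A": ["python", "sql", "excel"],
    "B": ["python", "machine learning", "sql"],
    "C": ["java", "project management"],
}

ranking = sorted(candidates, key=lambda c: match_score(candidates[c], job_requirements),
                 reverse=True)
for c in ranking:
    print(c, round(match_score(candidates[c], job_requirements), 2))
# B 1.0, A 0.67, C 0.0 -> the ranking supports, but does not replace, the HR decision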
Such support is offered, for example, by the Ireland-based company ZipRecruiter
[17]. The job marketplace it has developed, which is accessible via an app, supports people in
their job search. According to the provider, this platform-style marketplace has over
7 million active job seekers per month. Over 1 million companies use ZipRecruiter
for their application processes—and according to the company, more than 10,000
users are added each month.
The US company Entelo [18] combines talent sourcing and talent engage-
ment in an end-to-end recruiting solution. A special search engine is used to find
the sought-after talents faster. Hundreds of variables are analyzed to quickly find
candidates that match the search criteria. This is an application of predictive
analytics.
For the hiring of a large number of employees (e.g. waiters, cashiers, customer
service staff), the Dutch company Harver [19] offers software for supporting hiring processes. This involves combining data-driven comparisons, end-to-end automation, and enterprise-level business intelligence. A matching
technology is used for this purpose, which allows decisions to be made in seconds
on a large scale. The matching data is used to trigger automation rules. This guides
candidates through the recruiting funnel without human intervention. At the same
time, a fast and entertaining applicant experience is to be made possible. An
enterprise-­level Business Intelligence Suite includes all relevant metrics and KPIs
to monitor the overall process.
The recruiting software from the German provider MoBerries also supports
the recruiting process with data-based insights. In the first step, the software helps
to attract applications via various recruiting channels. These include job portals,
recruiters, and talent networks. It is important here that pre-filtered applicants
from all channels are addressed. An intelligent applicant filter based on neural
networks sorts the applicants according to the relevance of their profiles. This
applicant filter continuously learns from the interactions of the candidates and
helps to prioritize the most suitable applicants and quickly sort out irrelevant
applications (cf. [20]).

For the second step, a matching process, a special AI application is used. This
evaluates the data provided by applicants regarding skills and preferences. The respec-
tive profile is compared with the requirements of a position to be filled. If there is a
high match between the candidate’s profile and the job requirements, the candidate
can view the job offer and apply directly with a click. This application is immediately
visible in the recruiting software or in a corresponding dashboard. The application can
be accepted or rejected directly after a review. This result—combined with a brief
explanation—trains the matching AI. The same applies to the applicants themselves.
If they reject a position, feedback is also requested here. In this way, the algorithm can
learn from each feedback and improve. At the same time, companies can also see why
potential candidates do not apply to their own companies. These insights are impor-
tant for further optimizing the job advertisement (cf. [20]).
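The feedback loop described here can be sketched in a simplified form. The following Python example is not MoBerries' actual implementation; it only illustrates how accept/reject feedback could nudge the weighting of matching criteria, with hypothetical criteria and values.

# Illustrative feedback loop for a matching model (not MoBerries' actual implementation).
# Each accept/reject decision nudges the weight of the criteria that drove the match.
weights = {"skills": 0.5, "experience": 0.3, "location": 0.2}

def update_weights(decision_accepted, criteria_present, learning_rate=0.05):
    """Increase weights of criteria in accepted matches, decrease them in rejected ones."""
    direction = 1 if decision_accepted else -1
    for criterion in criteria_present:
        weights[criterion] = max(0.0, weights[criterion] + direction * learning_rate)
    total = sum(weights.values())          # re-normalize so the weights sum to 1
    for criterion in weights:
        weights[criterion] /= total

update_weights(decision_accepted=True, criteria_present=["skills", "experience"])
update_weights(decision_accepted=False, criteria_present=["location"])
print(weights)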
If an AI-supported expert system is used in the application process, relevant
areas of questions or very specific questions can be suggested by comparing the
qualification profile of external or internal applicants on the one hand and the
requirements of the position on the other hand. These can be asked either by a chat-
bot or a human recruiter. If these answers are digitally recorded, they can be checked
for plausibility with the CV and for correctness in general—faster than humans can
do. At the same time, the efficiency of the application process can be increased if the
language of the applicant and interviewer is automatically digitized. This also
makes later, possibly in-depth evaluations possible—as long as the necessary per-
missions of the candidates are available.

8.5 Onboarding New Employees

How the talent journey can be supported by AI tools in the further phases is shown
in Fig. 8.6. It is interesting that there are hardly any exit management offers avail-
able so far. Here, the HR department can be inspired by marketing concepts for
churn management to develop corresponding applications.
The company Enboarder promises the development of Human-Centric
Journeys, by delivering personalized and “bite-sized” content to new employees in
a timely manner—each via the preferred communication channel. For this purpose,
virtual spaces are offered to build a bond between new employees and the team.
Introduction videos are also used and (virtual) coffee chats are organized. In addi-
tion, managers can be trained to master a virtual onboarding. In parallel, technical
requests, but also the ordering of work clothes, badges, business cards etc. can be
facilitated. Regular check-in surveys check how smoothly the supported processes
are running (cf. [21]).

Note Box
AI solutions can effectively support the recruitment process and onboarding
through many exciting applications.

Fig. 8.6 AI tools for the further phases of the Talent Journey (tools: chatbots as contact persons and desktop assistants, identification of internal high potentials, proposals for development measures, predicting the intention to terminate, exit management—for the phases newcomer, employee/potential leader, and exit/retirement)

8.6 Talent Management—Training and Development

A—sometimes still neglected—core area of HR management is the training and


development of employees and managers. You should pay special attention to this
task in view of the dynamic changes of the digital age and the challenges of an eco-
logical transformation. It is a learning organizational culture to be developed,
which does not just aim to fill identified gaps in the qualification profiles of the staff.
In addition, it is important to identify potential carriers in your own company early
on—to keep them!
A major challenge for many companies is to improve the mobility of performers
within the company. This requires the establishment of an easily searchable data-
base with the skills and competences of employees and managers. They should
be able to easily upload their application documents to find internal offers that
match their profile data. This makes it easier for the company’s performers to learn
about new tasks, roles and projects—and to submit their application with one click.
This promotes talent development in the company—indispensable in the face of the
often described shortage of qualified professionals. In addition, it can be determined
early on whether external sourcing is really necessary. Here, a multilingual résumé
parsing can be used to make applications from different language areas comparable. A corresponding AI-supported solution is offered by Textkernel (cf. [22]).
The HR technology company Phenom supports the previously mentioned Talent
Experience Management through an AI-supported, scalable platform. This is con-
tributed to by digital talent marketplaces for companies to support them employee-­
centered in their digital transformation. A matching algorithm as a SaaS solution
matches existing data about the qualifications of the already employed employees
and executives with internal work and learning offers of the company. This is
intended to exploit the talent potential already present in the company. In addition,
the matching process is intended to promote internal networking within the
company in order to strengthen the companies from within. For this purpose, indi-
vidual marketplaces are set up—with use cases and topics of the respective com-
pany. These include projects, job sharing offers, finding internal experts, short
assignments, but also lunch dates and mentoring offers (cf. [1]).
For companies, such an approach comes with the following advantages:

• The AI-supported matching supports the individual career paths—oriented


towards the individual goals, skills and interests of the candidates.
• Through skill mapping, qualified applicants and open positions in the company
are quickly brought together. This can relieve HR managers of routine tasks.

AI-supported systems can assist in the qualification-based “gap analysis” and at


the same time develop tailor-made offers or identify them in online and offline offers.
AI applications can already automatically suggest training and development
measures. Here, measures for personnel development are suggested or recommended—system-supported—to the employees or the responsible HR managers. These
measures are derived by an AI algorithm based on the data available in the HR system.
These steps are supported by the already introduced concept of Phenom (cf. [1]).
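Such a qualification-based gap analysis with automatically suggested measures can be illustrated as follows. The sketch is purely illustrative; the target profile, skills, and training catalog are hypothetical.

# Illustrative skill-gap analysis with automatic training suggestions (hypothetical catalog).
target_profile = {"data analysis", "cloud architecture", "agile methods"}
employee_skills = {"data analysis", "project management"}

training_catalog = {           # hypothetical mapping of skills to internal courses
    "cloud architecture": "Cloud Fundamentals (online, 3 weeks)",
    "agile methods": "Agile Practitioner Workshop (2 days)",
}

skill_gap = target_profile - employee_skills
suggestions = [training_catalog[s] for s in sorted(skill_gap) if s in training_catalog]

print("Skill gap:", sorted(skill_gap))
print("Suggested measures:", suggestions)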
In this context, the system Qualtrics Employee XM also provides support. For this, the
voices of the employees are captured by a listening engine to initiate the respective
appropriate steps towards increasing engagement, talent planning, productivity and
innovation. To determine these relevant data, internal company surveys can be con-
ducted (cf. [23]).
In the training process itself, training robots can be used, which were already
discussed in Chap. 7. Through visual scanning, the daily learning status of the
participants can be observed and the respective attention can be determined. In
addition, the learning progress—for each individual person—can be precisely ana-
lyzed. This makes it possible to adapt not only the difficulty levels of the content, but also the type of didactic preparation and its temporal presentation in terms of the teaching rhythm to each learner. Through direct interaction with
the training robot, further insights into the learning progress and necessary further
assistance can be gained. The basis for this are concepts that also analyze the elab-
orations of the learners for correctness. This is now possible in more and more
areas. While, for example, mathematical elaborations can usually be clearly evalu-
ated as correct or incorrect, this is rather difficult with questions that rely on cre-
ativity (e.g. in marketing, product developments, etc.). In sum, there is the
possibility to improve the quality of learning and learning experiences as well as
the learning efficiency itself.

Note Box
Artificial Intelligence becomes a supporter or an independent developer of an
individual training agenda—for each individual person.
Tailored learning plans for every employee and every manager become
possible—oriented towards the current or future requirements of the company
and the respective personal goals, qualifications and tasks of the candidates.

A further concept for increasing the satisfaction and productivity of employees is offered by the Berlin company Humanoo [24]. Here the focus is on corporate health management. Through integrated health training, the individual physical
and mental health should be determined and promoted. For a holistic health offer
more than 3000 programs for health promotion are available. In addition, access to
psychologists and coaches is made possible through personal video coaching. For
this purpose, an app is used that can motivate employees and managers to a healthier
life. This is not only intended to increase the satisfaction of the staff, but also to
reduce absences. By bringing together people from different company areas, the sense of community can also be promoted. For this purpose, the app calls for various challenges. In sum, this concept is intended to lead—via corporate health promotion—to measurable improvements in company performance.

Note Box
A strong “we” only arises from many motivated, committed, highly qualified
and healthy “I’s”!

8.7 Performance Management

A central question is what AI-supported Performance Management can look like. A crucial prerequisite for this is that the performance-oriented data at the person or group level can be evaluated at all. For this, the legal framework conditions must be considered. If such data is available—for example from production or from sales—then the performance achieved can be specifically evaluated. Here, real-time performance monitoring becomes possible. To this end, the AI systems continuously collect and analyze performance data to provide real-time feedback. This allows managers and employees to quickly respond to problems and achieve continuous improvements.
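What such continuous monitoring can look like in its simplest form is sketched below: incoming performance values are compared against a rolling baseline, and a strong negative deviation triggers an alert. Window size and threshold are illustrative assumptions, not parameters of Visier, SAP, or any other product discussed here.

# Sketch: flag unusually weak performance values against a rolling baseline.
from collections import deque
import statistics

window = deque(maxlen=20)            # most recent observations for one person or team

def check(value: float, threshold: float = 2.0) -> bool:
    """Return True if the new value lies far below the recent average."""
    alert = False
    if len(window) >= 5:
        mean = statistics.mean(window)
        stdev = statistics.pstdev(window) or 1.0
        alert = (mean - value) / stdev > threshold
    window.append(value)
    return alert

for value in [102, 98, 101, 99, 100, 97, 60]:
    print(value, "alert" if check(value) else "ok")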
The program Visier People from the company Visier [25] provides appropriate
support. By combining survey data, personnel data, and business data, the current
level of employee engagement becomes visible. This allows the risks of burnouts,
but also areas of disengagement to be identified. It makes it easier for HR managers
to develop plans to address these problems. To this end, Visier People establishes a
connection between business results and employee morale and relates this morale to
organizational changes. It becomes visible whether the engagement of employees
and managers can be increased through higher compensation, more learning oppor-
tunities, or changes in policies. An interesting solution is also offered by SAP [26]:
the SAP SuccessFactors Human Experience Management Suite. A powerful
cloud HR software allows each individual to fully exploit their potential at work. To
this end, the expectations and needs of employees are recorded in order to take them
into account when designing processes. At the same time, the mobility of high per-
formers is to be promoted—based on a holistic view of each individual person. The
382 8 Human Resource Management (HRM)

US company Nice provides employees in the front and back office with NEVA (Nice Employee Virtual Attendant), a personal desktop assistant (see [27]). NEVA
has the intelligence and integration capabilities to be used in dynamic and complex
desktop environments of employees. AI-supported, NEVA recognizes what the
employee wants to achieve. To this end, the employee’s desktop activities on the
screen are observed and interpreted:
• Which applications are selected?
• Which buttons are clicked?
• In which fields are entries made?
This bot presents real-time and context-related prompts for the next actions and
work steps of the employees. AI-based real-time instructions, triggered by desk-
top actions, can provide employees with important impulses to increase their per-
sonal performance. These impulses can also relate to live interactions with
customers. Here, proven sales practices can be recommended and important product
details for the conversation can be displayed. A comprehensive view of the cus-
tomer—based on various data sources—enables the employee to present the right
arguments. In addition, the intelligent bot NEVA can also remind users of upcoming tasks.
Based on the desktop data obtained during the use of NEVA, the processes can be
continuously optimized. In addition, further possibilities for automating desktop
processes can be identified (see [27]). NEVA in action can be seen in this video:
https://www.youtube.com/watch?v=x4jVeKIAKS4

Food for Thought


The example of NEVA shows how comprehensive AI-based solutions can
already intervene in everyday work. And this is just the beginning! On the one
hand, this can lead to a disempowerment of employees who merely follow
instructions from AI systems that they no longer understand—and whose rec-
ommendations they may not be able to question in the future or do not want
to question. On the other hand, such systems—as far as they are legally per-
missible—allow the use of less qualified employees, because their work
steps are continuously monitored. This could help overcome a lack of corresponding employees with higher qualifications.
However, one thing also becomes apparent: NEVA and comparable sys-
tems are only an intermediate step to automate more and more processes com-
prehensively. The insights gained from the use of such concepts train the AI
applications—so that the automation of processes can be further advanced, until the AI agents then take over the corresponding tasks completely…

Many other companies offer similar concepts. However, it is important that there
is a consensus among all participants about such evaluations. Before you decide on
a solution, you should identify the specific needs and goals of your company. It can
also be helpful to try out demo versions or trial versions of the products to get a bet-
ter idea of which solution suits you best.
8.8 Incentivization

Measuring and monitoring the performance of employees and managers can be a complex task. AI-based performance monitoring can simplify this task by help-
ing to collect, analyze and interpret large amounts of performance data. AI tech-
nologies can help define individual performance indicators and goals and
continuously monitor them. This allows companies to adjust compensation to indi-
vidual performance and in this way create fairer and more motivating incentive
systems. In addition, AI models can be used to simulate and predict the effects of
different incentive systems. In this way, companies can make informed decisions
about which incentive systems can most effectively contribute to improving perfor-
mance. Another advantage of AI is its ability to create personalized incentives. By
analyzing the preferences and needs of each employee, customized incentive sys-
tems can be developed that motivate each employee individually.
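How the effect of alternative incentive systems could be simulated before their introduction is indicated by the following minimal sketch: a small Monte Carlo comparison of two hypothetical bonus schemes across a simulated workforce. The response assumptions are invented for illustration only and are not based on any of the tools discussed in this chapter.

# Sketch: Monte Carlo comparison of two hypothetical bonus schemes.
import random

random.seed(0)

def simulate(scheme: str, employees: int = 1000) -> float:
    """Return the average simulated performance uplift for a bonus scheme."""
    total = 0.0
    for _ in range(employees):
        responsiveness = random.gauss(1.0, 0.2)      # how strongly a person reacts to incentives
        if scheme == "flat_bonus":
            uplift = 0.03 * responsiveness           # same bonus for everyone
        else:                                        # "performance_bonus"
            uplift = 0.05 * responsiveness - 0.01    # stronger effect, but with a threshold
        total += max(uplift, 0.0)
    return total / employees

for scheme in ("flat_bonus", "performance_bonus"):
    print(scheme, round(simulate(scheme), 4))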
AI applications can also help support the fairness of the compensation system.
For this purpose, performance parameters can be evaluated—regardless of gender,
age, education, skin color, ethnicity, nationality, etc.—to provide a basis for appro-
priate compensation. This separation of performance evaluation from criteria that
are not actually relevant could lead to higher fairness in compensation.
Such an approach would correspond to the “blind audition” of new musicians for an orchestra. Here, the auditioning musician with all his or her perceptible characteristics (gender, age, skin color, ethnicity) remains invisible; the performer can only be heard. In many orchestras, this has led to an increase
in the number of female musicians.

Food for Thought


The central questions in companies are:
• Do we want to achieve greater fairness in compensation at all?
• Don’t executives still desire a high degree of opacity to favor their own
protégés?
Not fair—but often still reality!

8.9 Employee Retention Management

Another exciting field of AI application is the automated prediction of the intention to resign. This concept is known from customer relationship management (cf.
in depth [28], pp. 350–363). Similarly, in Employee Retention Management the
task is to determine the probability of termination of employment by the
employee. Based on a so-called Retention Score, a “retention strategy” can be
developed for important and particularly value-creating employees. This Retention
Score is calculated by an algorithm for each employee. At the same time, it is also
important to gain valuable information for Exit Management—to determine which
employees the company should prospectively separate from.
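A minimal sketch of how such a Retention Score could be derived is given below: a logistic regression model trained on a few historical cases returns a resignation probability for a current employee. The features and data are purely hypothetical and are not taken from Qualtrics, Oracle, or any other vendor mentioned in this section.

# Sketch: resignation probability ("retention score") via logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: tenure in years, engagement score, salary percentile
X = np.array([[1, 0.3, 0.2], [4, 0.8, 0.7], [2, 0.4, 0.5],
              [7, 0.9, 0.9], [1, 0.2, 0.3], [5, 0.7, 0.6]])
y = np.array([1, 0, 1, 0, 1, 0])        # 1 = employee has resigned

model = LogisticRegression().fit(X, y)

current_employee = np.array([[2, 0.35, 0.4]])
risk = model.predict_proba(current_employee)[0, 1]
print(f"Predicted probability of resignation: {risk:.2f}")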
Such a concept is offered by the already quoted company Qualtrics as well as by Oracle with the concept Oracle Fusion HCM Analytics. HCM stands for Human
Capital Management. This concept is a pre-packaged, cloud-native solution for HR
management. This gives decision-makers quick, valuable insights into their own
workforce. This can improve decisions regarding employee diversity, employee
turnover and retention, talent acquisition, compensation and much more. Thus,
companies are supported in conducting company-wide, departmental and cross-­
functional personnel analyses according to uniform standards. Additionally, their
own data sources can be enriched with additional information (cf. [29]).

Note Box
The use of artificial intelligence must be aligned with the expected legal
framework conditions, which will be adopted at EU level with the AI Act.
Especially for decisions concerning individuals, autonomous AI solutions will be closely restricted (cf. Sect. 1.6).

8.10 Status of the Use of (AI-Based) HR Technologies in Germany

An intriguing question now is: How open are HR managers to the use of (AI-based)
HR technologies in Germany? A study by the Ethics Board HR Tech surveyed 330
people on this. The central findings of this study are (cf. [30], pp. 3, 5):

• HR managers are fundamentally open to the use of technologies.
• Almost 30% of respondents already use new HR technologies or are piloting or
planning their use in HR management. This includes technologies for analyzing
resumes, creating applicant rankings, optimizing job advertisements, using chat-
bots as interlocutors, developing suggestions for development measures, predict-
ing the intention to resign, analyzing audio and video recordings, and matching
profiles.
• Conversely, this also means that over 70% of companies do not have AI use in the HR area on the agenda.
• However, regulations on the use of technology in the HR area are largely
unknown and therefore hardly considered.
• 80 to 96% of respondents recognize the relevance of an HR policy.
• The implementation of data protection is already increasingly anchored. There
is still a need for action with regard to the technological issues.

Which HR technologies are already being used in Germany and to what extent is
shown in Fig. 8.7 (cf. [30], p. 20). The most widespread—albeit still at a low level—
are technologies for analyzing resumes, for optimizing job advertisements and
for matching profiles. Also, chatbots as interlocutors are already being used. An
analysis of audio and video recordings and the prediction of the intention to
resign are not yet taking place.
Fig. 8.7 Use of HR technologies in Germany (Data source: [30], p. 20)
Survey question: “Do you already use ‘…’ in your organization or are you planning to use it?”
Shares per answer category (currently not planned / in planning / pilot / full coverage):
• Analysis of CVs: 72% / 11% / 6% / 11% (n = 253)
• Optimization of job advertisements: 71% / 19% / 8% / 2% (n = 289)
• Chatbots as contact persons: 69% / 19% / 9% / 5% (n = 260)
• Matching of profiles: 69% / 22% / 5% / 5% (n = 273)
• Creation of a ranking: 80% / 15% / 3% / 2% (n = 254)
• Suggestions for development measures: 84% / 13% / 2% / 2% (n = 248)
• Analysis of audio & video recordings: 97% / 2% / – / – (n = 245)
• Predicting the intention to terminate: 93% / 6% / 1% / – (n = 250)

Fig. 8.8 Assessment of the usefulness of the use of various HR technologies (Data source: [30], p. 21)
Survey question: “To what extent do you consider ‘…’ to be useful for your organization (in terms of improving the quality of HR work)?”—rated on a five-point scale from “not useful at all” to “fully and completely useful”.
Technologies assessed: analysis of CVs (n = 68), optimization of job advertisements (n = 68), chatbots as contact persons (n = 79), matching of profiles (n = 77), creation of a ranking (n = 52), proposals for development measures (n = 41).

How respondents who already use or are implementing the technologies shown assess the meaningfulness of the various HR technologies is shown in Fig. 8.8 (cf. [30], p. 21). The respondents rate the use of HR technologies predominantly as “useful” or even “completely useful”. The
optimization of job advertisements and the use of chatbots are rated as “com-
pletely useful”. The use of technology for matching profiles and for analyzing
resumes is considered somewhat less useful. Interestingly, a technology-based
analysis of resumes is rated highest. Given this extremely positive assessment of
the use of artificial intelligence in the HR area, it is all the more surprising how little
it is actually used today.
In HR management, intensive collaboration between humans and machines will also be necessary in the future. However, it is already foreseeable today that
many standard tasks in personnel management can be taken over by AI-supported
systems in the future. An important prerequisite for this is a comprehensive data-
base to build an HR-oriented expert system.
What obstacles need to be considered when using artificial intelligence in HR
management? First of all, the lack of AI specialists in general, but especially for
use in human resources, represents a serious bottleneck. In addition, there is the
overarching question of which fields of application data protection law permits in the HR area. When it comes to personal data, particular sensitivity is required when
collecting, processing and evaluating it. It is essential that there is voluntary consent
to the use of the data. For this, AI skeptics and candidates must be convinced of the
benefits of AI application.
The quality assurance of AI applications in the HR area is particularly diffi-
cult. Information about how a rejected applicant would have developed if hired by
the company is not available and cannot be obtained. Whether a new employee or a
new manager proves successful, only becomes apparent—depending on the field of
application—after 6 or 12 months. For the corresponding performance evaluation, a
large number of personal indicators are required, the collection and evaluation of
which is legally regulated.
The necessity of correctness and freedom from bias in the datasets as an indis-
pensable prerequisite for the use of AI in general, especially in the HR area, has
already been pointed out. To ensure these as well as a correct use of AI systems, all
users of these technologies need comprehensive training and need to be sensitized
to possible problem cases. For this, appropriate governance policies for the use of AI in HR management need to be established (see Sect. 12.3.3). These policies must not
only concern the technical processes and data management, but also take into
account the relevant legal requirements.
In the already cited study of the HR Tech Ethics Council it was also determined
how the respondents feel about regulations of the use of HR technologies (see
Fig. 8.9; see [30], p. 29). The clear majority of respondents support the use of guide-
lines on the topic of artificial intelligence and automation (83% chose “strongly
agree” or “agree”). Seventy-four percent chose “strongly agree” or “agree”, indicating that they would support organizations defining guidelines for themselves and making these internally binding. Seventy-two percent advocate for binding legal regulations.
The HR Tech Ethics Council [30] has, in light of these results, defined the fol-
lowing guidelines for the responsible use of AI and other digital technologies in
HR work:

1. Transparent goal-setting process and involvement of key stakeholders


Before introducing an AI solution, the goal for its use must be defined. In this
process, all relevant interest groups should be identified and involved.
2. Use of empirically and theoretically based solutions
Those who offer or use AI solutions must ensure that these are empirically evalu-
ated and have a solid theoretical basis.
Fig. 8.9 Need for regulations for the use of HR technologies in Germany (Data source: [30], p. 21)
Survey question: “Please indicate to what extent you agree with the following statements”—rated on a five-point scale from “do not agree at all” to “agree completely”.
Statements assessed:
• I think that guidelines are needed for the use of AI/automation in HR work. (n = 243)
• I am of the opinion that binding legal regulations are needed with regard to AI/automation in HR work. (n = 242)
• I welcome binding legal regulations with regard to AI/automation in HR work. (n = 243)
• I think organizations should define their own guidelines for dealing with AI/automation in HR work and make them binding (e.g. through company agreements). (n = 244)
• Possible legal regulations leave enough leeway so that potential progress in the area of AI/automation in HR work is not slowed down. (n = 241)

3. Humans retain the final decision-making power in personnel matters


Those who use AI solutions must ensure that in important personnel decisions,
the final decision-making authority lies with a natural person.
4. HR Management drives the use of AI solutions
Successful use of AI solutions by HR requires the combination of technological,
analytical, and HR competencies. Applications are goal-oriented—not
technology-driven.
5. Liability and responsibility lie with the implementing organizations
Organizations that use AI solutions are responsible for the results of their use.
6. Purpose limitation and data minimization of AI applications
Anyone who uses personal data for AI solutions must define in advance for what
purposes these will be used. In addition, it must be ensured that this data is only
collected, stored and used for the intended purpose.
7. Duty to inform the affected individuals
Before or when using AI solutions, the people affected by it must be informed
about the use, the purpose, the logic of the AI application as well as about the types
of data collected and used for it.
8. Respect for subject quality
For use in AI solutions, no data that is beyond the voluntary control of the affected persons may be collected without their timely involvement and individual consent.
9. Ensuring high data quality and avoiding any discrimination
Anyone who develops or uses AI solutions must ensure that the underlying data
is of high quality. System-related discriminations must be prevented.
10. Continuous review and consideration of relevant guidelines
Anyone who introduces AI solutions according to these guidelines should trans-
parently ensure that the guidelines are also observed in the operational implementa-
tion and further development.
These rules define the framework within which AI applications in the HR area
should take place.

Food for Thought


One thing should be emphasized in the case of HR support through Artificial
Intelligence: The final decision, whether a candidate is hired or not, should
still be in the hands of the involved managers. Since there is still a lack of
Artificial General Intelligence, humans are much better at assessing the suit-
ability of a candidate for a position. After all, their intelligence is much more
generic and can combine words, gestures, facial expressions, pauses in the
flow of speech etc. with the sympathy or antipathy decisive for successful
cooperation. No AI system can do this at the moment.
Therefore, pure Data-Driven Recruiting or Robot Recruiting is (still) a
thing of the future, unless you want to hire human robots yourself!
Nevertheless, the search for potential candidates should not forego Artificial Intelligence. After all, AI-based technology can generate important
additional insights to substantiate the decision-making process. Moreover,
various routine processes along the Talent Journey can be automated.

Think-Box: Questions You Should Ask Yourself!


• In which areas of our Talent Journey are we currently losing the most
applicants?
• Where within the Talent Journey are human resources currently tied up for
tasks that AI can also take over?
• Are we already using systems to improve the targeted approach to
applicants?
• Can chatbots support applicant care and onboarding?
• Are audio and video analyses used to generate further insights from
applicants?
• Do we use CV parsing to process unstructured resumes faster?
• Do we use matching tools to determine a fit between internal/external
applicants and job requirements—even across national borders?
• How systematically do we analyze existing staff in terms of training needs
and promotion opportunities?
• Is it ensured that the personal information of employees is protected and
treated in compliance with data protection?
• How can we ensure that AI does not make biased or discriminatory
decisions?
• Are our AI systems and processes transparent and understandable for
employees?
• Do we have the necessary know-how in the HR team to effectively use and
manage AI tools? Do we need additional training or staff for this?
• How can we ensure that our employees and managers accept and support
the implementation of AI?
• What potential risks are associated with the implementation of AI in the
HR area—and how can we minimize these risks?
• How do we measure the success of our AI implementation? Which metrics
are most meaningful for this?
• In which applications could we achieve the greatest leverage effect?
• Who is responsible for these questions in our organization?

8.11 Summary

• AI applications can make a significant contribution to the professionalization of HR work and support along the entire Talent Journey.
• This requires not only a familiarity with the AI-based solutions but also a cor-
responding Qualification of HR Managers themselves.
• Every HR team is called upon to examine the existing AI tools for their suitabil-
ity for their own personnel tasks.
• It is indispensable to comprehensively consider the respective legal framework
conditions and to inform internally about the use of AI.

References
1. Phenom. (2023). The definitive guide to talent management. Accessed June 01, 2023, from
https://www.phenom.com/resource/the-­definitive-­guide-­to-­talent-­experience-­management
2. Fink, V. (2021). Artificial intelligence in HR work. Schäffer Poeschel.
3. Rudel, S. (2021). People analytics: Methods and tools for working with data in human
resource management: Methods & tools for working with data in human resource manage-
ment (HRM). Vahlen.
4. Textio. (2023). End hidden bias, literally. Accessed June 01, 2023, from https://textio.com/
5. Jobspreader. (2023). We take the chance out of recruiting. Accessed June 01, 2023, from
https://wollmilchsau.de/
6. IBM. (2023). IBM robotic process automation. Accessed June 01, 2023, from https://www.
ibm.com/de-­de/products/robotic-­process-­automation
7. Retorio. (2023). Recognize & train successful behaviors for service. Accessed June 01, 2023,
from https://www.retorio.com/de/
8. McCrae, R. R., & John, O. P. (1992). An introduction to the five-factor model and its appli-
cations. Accessed March 13, 2022, from https://doi.org/10.1111/j.1467-­6494.1992.tb00970.x
9. Dorsch. (2023). Five-factor model. Accessed June 01, 2023, from https://dorsch.hogrefe.com/
stichwort/fuenf-­faktoren-­modell
10. O’Reilly, C. A., III, Chatman, J., & Caldwell, D. F. (1991). People and organizational culture:
A profile comparison approach to assessing person organization fit source. The Academy of
Management Journal, 34(3), 487–516. Accessed June 01, 2023, from https://www.jstor.org/
stable/256404
11. Jia, Q., Guo, Y., Li, R., Li, Y. R., & Chen, Y. W. (2018). A conceptual artificial intelligence
application framework in human resource management. ICEB, proceedings of the 18th inter-
national conference on electronic business, pp. 106–114.
12. Vier. (2023). Emotion analysis through AI. Accessed May 19, 2023, from https://www.vier.ai/
emotion-­analytics-­technologie/
13. HireVue. (2023). No matter your hiring needs, we have the solution. Accessed June 02, 2023,
from https://www.hirevue.com/
14. Talention. (2023). More quality applicants. Accessed June 02, 2023, from https://www.tal-
ention.de/
15. Textkernel. (2023a). Extract! 4.0—Textkernel launches the first fully deep learning powered
CV parsing solution. Accessed June 02, 2023, from https://www.textkernel.com/newsroom/
extract-­4-­0-­textkernel-­launches-­the-­first-­fully-­deep-­learning-­powered-­cv-­parsing-­solution/
16. Softgarden. (2023). We make your candidates apply. Accessed June 02, 2023, from https://
softgarden.com/de/
17. ZipRecruiter. (2023). We lead job seekers to their dream job, and employers to success.
Accessed June 02, 2023, from https://www.ziprecruiter.de/uber-­uns
18. Entelo. (2023). Get more applications from qualified candidates. Accessed June 02, 2023,
from https://www.entelo.com/
19. Harver. (2023). Better talent decisions. Faster. Accessed June 02, 2023, from https://harver.com/
20. MoBerries. (2023). Experience modern recruiting. Accessed June 02, 2023, from https://www.
moberries.com/de/
21. Enboarder. (2023). Creating better human connections. Accessed June 02, 2023, from https://
enboarder.com/
22. Textkernel. (2023b). How AI enables an “Internal skills economy”. Accessed June 02, 2023,
from https://www.textkernel.com/de/internal-­mobility/
23. Qualtrics. (2023). Make employees brand ambassadors. Accessed June 02, 2023, from https://
www.qualtrics.com/de/employee-­experience/
24. Humanoo. (2023). Strengthen the bond with your employees. Accessed June 02, 2023, from
https://www.humanoo.com/de/fuer-­arbeitgeber/
25. Visier. (2023). Simply the world’s best people analytics platform.
26. SAP. (2023). Human capital management. Accessed June 02, 2023, from https://www.sap.
com/products/hcm.html
27. NICE. (2023). Every employee’s personal attended bots. Accessed June 02, 2023, from https://
www.nicerpa.com/neva-­attended-­automation/
28. Kreutzer, R. T. (2021). Customer dialogue online and offline. The big 1x1 of customer acquisi-
tion, customer retention and customer recovery. Springer Gabler.
29. Oracle. (2023). Oracle fusion analytics for HCM. Accessed June 02, 2023, from https://www.
oracle.com/de/business-­analytics/fusion-­hcm-­analytics/
30. Ethics Board HR Tech. (2021). Survey on automation in HR work. Accessed June 03, 2023,
from https://www.ethikbeirat-­hrtech.de/
9 Financial Services Sector

Summary Artificial Intelligence will revolutionize the financial sector by increasing efficiency and enabling tailored financial solutions. Through advanced data
analysis, AI can predict market trends and provide investment recommendations,
contributing to the improvement of portfolio management. In risk management, AI
can help detect and prevent fraud, while simultaneously supporting compliance
with regulatory requirements. Chatbots and other AI-driven customer interaction
tools can improve customer service by offering personalized support around the
clock. Finally, AI can contribute to significant cost savings through the automation
of routine processes, thus increasing profitability.

Due to its strong dependence on large amounts of data, the financial services
sector is predestined for various AI applications. The challenge here is to merge
large amounts of historical and current data about economic actors—especially
companies and their customers—and the associated transactions, as well as the
political, societal and economic environment as a whole, in order to identify rele-
vant patterns. AI applications can be used in various areas.
McKinsey predicts that the financial industry could generate additional reve-
nues between 200 and 340 billion US dollars through the use of Artificial
Intelligence. This increase, which corresponds to 3 to 5% of the industry’s turnover,
is based on the assumption that AI brings particular advantages in the knowledge-
and technology-based financial sector. Currently, AI applications are already being
used in marketing and customer care. However, the study sees the greatest added
value in the automation of administrative tasks. Employees can thus focus on
higher-value tasks (such as consultations). In addition, generative AI applications
could help with the migration of outdated IT structures, a currently complex and
error-prone problem of many banks. Generative AI can also facilitate the writing of
emails and the preparation of presentations (cf. [1]).

9.1 Wealth Management—Asset Management—Robo Advisor

Wealth Management—also referred to as asset management—refers to a highly individualized form of financial planning. It is usually aimed at wealthy individuals or families and includes a wide range of services: wealth planning, investment advice, estate planning, tax planning, and possibly legal advice. Wealth Management in the sense of private asset management was previously often only accessible to an elite group. Here, financial managers were commissioned to increase wealth often encompassing many millions in the form of an actively managed portfolio.
AI agents now create the prerequisite to offer asset management with signifi-
cantly lower fees. Here, a Robo Advisor decides which changes to make in the
investment portfolio. In the first step, Robo Advisors use rule-based algorithms to select, for example, exchange-traded funds (ETFs), which are chosen based on information such as age, risk tolerance, and income. ETFs are a special form of traditional investment funds that are traded on
the stock exchange. In the second step, AI-supported asset management allows for
independent investment decisions to be made, based on reinforcement learning, that
are oriented towards the specific situation of each investor.
A Robo-advisor, therefore, is an algorithm-based, automated asset management
system. Due to the use of algorithms, it is also referred to as Algo Trading. The
management of the investment portfolio is individually tailored for each investor
based on their individual goals and personal risk tolerance.
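The rule-based first step can be illustrated with a deliberately simplified sketch: an equity/bond ETF split derived from age and risk tolerance. The “100 minus age” rule and the adjustments are common rules of thumb used here purely for illustration; they do not represent the allocation logic of ROBIN or any other robo advisor mentioned below.

# Sketch: rule-based first step of a robo advisor (illustrative rule of thumb only).
def allocate(age: int, risk_tolerance: str) -> dict:
    """Return a simple equity/bond ETF split based on age and risk tolerance."""
    base_equity = max(0, min(100, 100 - age))             # "100 minus age" heuristic
    adjustment = {"low": -15, "medium": 0, "high": 15}[risk_tolerance]
    equity = max(0, min(100, base_equity + adjustment))
    return {"equity_etf_percent": equity, "bond_etf_percent": 100 - equity}

print(allocate(age=35, risk_tolerance="medium"))   # {'equity_etf_percent': 65, 'bond_etf_percent': 35}
print(allocate(age=60, risk_tolerance="low"))      # {'equity_etf_percent': 25, 'bond_etf_percent': 75}

The reinforcement-learning second step would then replace such fixed rules with decisions learned from market and customer data.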
Such digital wealth advice is offered by the Deutsche Bank under the name ROBIN
(see [2]). Investors here have the opportunity to invest their wealth in the medium to
long term without having to regularly follow the development on the stock exchange.
The term ROBIN stands for Robo-Invest and combines AI technologies with the
knowledge of experienced portfolio managers and advanced risk management. An
appropriate commitment can be started from 500 €. With ROBIN, the investor can
now invest in widely diversified ETFs. Through ROBIN, professional asset management becomes available to a broader group of market participants. Other Robo Advisors include, for
example, Oskar, Growney, Quirion, Ginmon, Whitebox and Solidvest. More and
more companies are continuously entering the market with corresponding offers
because it is possible to win new target groups for financial investments.
Robo Advisors are now taking over a large part of the bond trading—especially
with government and corporate bonds. Algo Trading is also penetrating less liquid
parts of the bond market. These are bonds of lower quality (so-called high-yield
bonds) as well as bonds from emerging countries. The increasing performance of
Robo Advisors makes it possible to cover these parts of the bond market as well.
The sometimes dramatic price developments in the pandemic have provided the AI
systems with valuable training data to refine the algorithms. Especially large price
fluctuations allow the developers to better understand the behavior of securities
even in difficult market conditions.
The increasing use of Robo Advisors brings two positive effects. On the one
hand, traders can focus on the more complicated transactions because standard
transactions are automated. On the other hand, the AI systems can handle signifi-
cantly more price inquiries and transactions independently—even if these each only
involve a small volume. This grows the market. It remains to be seen what possible
negative effects will occur when more and more Robo Advisors communicate with
each other in the future. This can amplify both negative and positive developments and thus increasingly decouple market movements from the real economy.
BlackRock, the world’s largest asset manager, uses Aladdin in wealth manage-
ment. Aladdin is an acronym for Asset, Liability, and Debt and Derivative Investment
Network. This describes a data analysis system and a risk and investment plat-
form that combines scalable portfolio analysis and risk management with a com-
plete trading platform. For this purpose, comprehensive risk analyses are combined
with diverse front-, middle- and back-office functions. Advanced and flexible tools
measure the risk positions in the portfolios and automate the portfolio modeling to
achieve the goals defined by the manager within the customer guidelines. This
applies to both active and passive investment styles.
A real-time dashboard creates a very high transparency throughout the entire
company BlackRock. Here, all departments have access to consistent information
across all portfolios. With this support, investment managers can focus on interpret-
ing the results, making investment decisions, and serving customers. Automated
workflows can additionally increase efficiency and reduce operational risk (see [3]).
Aladdin Wealth is a wealth management platform for professional investors,
developed by BlackRock. It creates transparency about the risks in business and cus-
tomer portfolios, enabling more informed investment decisions. The platform is
designed to strengthen the confidence of customers and enable them to act confidently
across all market cycles. The platform offers various core competencies, including deep
insights into the customer portfolio, support in rebalancing customer portfolios based
on changing market conditions and customer requirements. In addition, Aladdin Wealth
generates systematic notifications. These support advisors in acting on a data-driven basis and in recognizing when customers and portfolios need their attention (see [3]).

9.2 High-Frequency Trading

Artificial intelligence has also significantly influenced high-frequency trading by improving the speed, accuracy, and efficiency of trad-
ing. This is also referred to as automated or algorithmic trading. In high-­frequency
trading, thousands of transactions are executed within micro- or even nanoseconds.
At its core, it’s about the algorithm-supported automated buying and selling of secu-
rities. AI algorithms can process large amounts of market data in real time and react
to patterns, trends, and anomalies that human traders may overlook. This way, global
price and knowledge differences can be exploited to optimize financial investments.
In addition, AI systems can continuously learn from this data and adjust their trading
strategies accordingly. This contributes to improving trading performance.
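A strongly simplified sketch of the kind of rule such systems automate is shown below: a moving-average crossover generates buy and sell signals from a stream of prices. This toy rule is purely illustrative and has nothing to do with the proprietary strategies of real high-frequency traders.

# Sketch: moving-average crossover signals on a stream of prices (toy strategy).
def moving_average(prices: list[float], window: int) -> float:
    return sum(prices[-window:]) / min(window, len(prices))

def signal(prices: list[float], short: int = 3, long: int = 8) -> str:
    """Buy when the short average rises above the long one, sell when it falls below."""
    if len(prices) < long:
        return "hold"
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    if short_ma > long_ma * 1.001:
        return "buy"
    if short_ma < long_ma * 0.999:
        return "sell"
    return "hold"

prices = []
for tick in [100.0, 100.1, 100.2, 100.1, 100.3, 100.4, 100.6, 100.8, 101.2, 101.5]:
    prices.append(tick)
    print(tick, signal(prices))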
However, there are also risks of AI-supported high-frequency trading. Some
concerns relate to the transparency and traceability of decision-making by AI,
especially in complex, self-learning algorithms. Here, Explainable AI offers various approaches to improve the transparency and under-
standing of AI decisions. In addition, strict risk management is necessary to mini-
mize potential negative impacts of errors or unexpected market conditions. This
includes monitoring systems and emergency stop switches that can automatically
stop trading when certain risk limits are exceeded. Finally, appropriate regulatory
frameworks are important to ensure fairness and stability in the markets.
Here, as in all other fields of application of artificial intelligence, the quality of
the decisions made depends on the quality and reliability of the available data. If
incorrect or outdated data is used for investment decisions or for credit rating, seri-
ous misjudgments can be the result. In addition, due to the sensitivity of the pro-
cessed data, the highest value must be placed on data security itself.

9.3 Fraud Detection and Fraud Prevention

The early detection or prevention of fraud cases is of enormous importance in the financial industry. It's about Fake Detection and Fraud Detection. AI algorithms
support the quick and well-founded analysis, even if huge amounts of data need to
be evaluated. Why has the need for such solutions increased? The comprehensive
digitization of (almost) all areas of life and the associated increasing networking—
in private as well as in professional areas alike—increase the risk of fraud, hacking
attacks, data compromise and other cyber attacks. The costs of cybercrime in the
global economy amount to many hundreds of billions of US dollars each year.
Systems like GiniMachine [4] provide support in this area. The GiniMachine is
a user-friendly No-Code AI platform, based on Software as a Service and designed
to support decision-making. By uploading the data, the platform enables the cre-
ation of powerful, AI-supported models. This opens up a variety of applications,
such as reducing risks by up to 45%, improving the credit portfolio, predicting
customer churn and automated decision-making. GiniMachine stands out through
its ability to process historical data and create and validate risk models in a few
minutes, without requiring specialized machine learning knowledge. The platform
can work with raw or structured data, even if some data is missing.
In specific application areas, such as debt collection scoring, GiniMachine sup-
ports prioritizing debtors with a high probability of repayment and recommends debt
collection tools based on scoring parameters. In addition, the software can help identify
creditworthy borrowers in various industries and thus simplify the application scoring
process. Banks and fintech companies can use the platform to improve traditional cred-
itworthiness checks and achieve an optimal balance between risk and return.
A major risk of damage for customers and companies alike is identity theft or identity fraud. This refers to a situation where a fraudster uses a person’s personal
data to gain access to financial accounts or to open new accounts. To counter such
fraud attempts, various types of AI agents are used. These help financial institutions
to examine the purchasing behavior of each customer. The individual behavior is
compared with further indicators to create a complete picture of a transaction. If
deviations are detected within the expected developments, e.g. unusual transaction
patterns or accesses from unusual locations, an alarm is triggered. These triggers also
include suspicious email activities or unusual login attempts. For this purpose, AI
engines examine emails and websites for suspicious signs and patterns.
In addition, AI systems can perform behavioral biometric analyses. Here, the indi-
vidual behavior of a user is analyzed to identify fraudsters trying to access an
account. Behavioral patterns include typing speed, mouse movements, or interac-
tion patterns with the user interface.
If deviating or unusual phenomena are detected, either a human intervention is
carried out or automated processes are triggered. In addition, AI systems can also
support biometric technologies for authentication. Through facial, voice, or fin-
gerprint recognition, the identity of the user can be more accurately verified.
Another field of AI application in finance is the prevention of credit card fraud.
This involves fraudulent activities carried out with stolen or counterfeit credit card
data. AI can assist in detecting credit card fraud by analyzing transaction data in real
time and identifying suspicious activities. These include, for example, multiple pur-
chases—quickly in succession—at different physical locations. In addition, AI sys-
tems can learn the behavior of a cardholder and thus identify noticeable deviations.
Here too, warning signals are triggered, which initiate various process chains.
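A minimal sketch of such an anomaly-based check of card transactions is shown below, using an Isolation Forest on a few hand-made features. The features, values, and the contamination rate are illustrative assumptions, not the actual models of any bank or payment provider.

# Sketch: flag anomalous card transactions with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per transaction: amount (EUR), hour of day, distance from home (km)
history = np.array([
    [25, 12, 3], [40, 18, 5], [12, 9, 2], [60, 20, 8],
    [30, 13, 4], [22, 11, 3], [55, 19, 6], [35, 17, 5],
])
model = IsolationForest(contamination=0.1, random_state=0).fit(history)

new_transactions = np.array([[45, 16, 4], [2500, 3, 900]])
for transaction, label in zip(new_transactions, model.predict(new_transactions)):
    print(transaction, "suspicious" if label == -1 else "ok")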
Increasingly, investment fraud is also being detected. This involves fraudulent
activities where individuals or companies are enticed to invest in non-existent or
worthless assets. AI can help detect such frauds by recognizing patterns in commu-
nication data that indicate fraud. This can be the use of certain phrases or speech
patterns. AI systems can also verify the legitimacy of investment firms by analyzing
and evaluating publicly available information from various sources.
To illuminate this environment, a study was initiated by PayPal. For this purpose,
632 people were surveyed who were familiar with their companies’ fraud preven-
tion measures. These people dealt with fraud detection and prevention and/or cyber
security measures. The aim of this study was to understand the current fraud land-
scape as well as the challenges of fraud detection and fraud prevention. The
results show that the biggest challenge is the increasing sophistication of fraud-
sters, followed by the lack of the right technologies to curb online financial fraud.
The following results were determined in detail (see [5]):

• 61% of respondents indicate that their companies do not have the right technolo-
gies to curb online financial fraud.
• Less than half of the respondents (45%) say that their company has the neces-
sary internal expertise to prevent and curb online fraud.
• 60% of respondents indicated that AI technologies for the detection of online
fraud are essential.
• 51% of respondents indicated that their companies are already using automa-
tion, machine learning and/or behavioral analysis for fraud prevention.

The necessity for the use of AI-based concepts for fraud prevention also arises
from the fact that digital transformation will further increase the risk of online fraud.
9.4 AI-Supported Process Automation in the Financial Sector

A first important field of application for AI at financial service providers is process automation or workflow automation. Many banks and insurance compa-
nies are already using natural language processing to automate business processes.
Ideally, this should not only reduce costs, but also increase customer satisfaction.
The solutions presented in Chap. 4 can be used for this. However, there are many
more far-reaching fields of application.
In the banking sector, the process automation is used for many analysis tasks
that were previously performed by employees. Process automation reduces the pro-
cessing time for transactions and increases productivity. At the same time, sources
of error can be eliminated. So far, rule-based systems have dominated, which car-
ried out processes according to predefined rules. Now there is an increased attempt
to integrate AI techniques into these processes. This mainly involves the use of
speech and image recognition. This opens up automation potential in the areas of
lending and card business. Risk management and compliance monitoring can also
be supported by this.
To support such processes, for example, the RiskOps platform from Feedzai
can be used. This includes the areas of fraud, money laundering, compliance and
risk policy. Each phase of the risk process is managed on this platform: Initiate,
Prevent, Detect, Remedy, Comply and Adapt. The platform is designed so that it can
be individually adapted by the users to support the achievement of their respective
goals and to take into account their respective risk tolerance. This integrative
approach breaks down existing data silos between the fraud, compliance and risk
departments. This creates a central risk view. In the course of analyses, the infor-
mation is automatically contextualized to increase the meaningfulness (see [6]).
An innovative concept is being used by the US company Upstart. Upstart is a
marketplace for loans—based on Artificial Intelligence. The company’s goal is to
improve users’ access to affordable loans—based on the prediction of the actual
default risk. This is intended to reduce both the risks and the costs of lending for the
bank partners who cooperate with Upstart (see [7]).
The background to this initiative is that four out of five Americans have never
defaulted on a loan product. However, only half of them have access to a prime loan.
As a result, lucrative market potentials are not being exploited. With a smarter
model for creditworthiness checks, banks could approve significantly more
loans—and with fewer defaults. To achieve this, Upstart relies on AI-based evalua-
tion of comprehensive data streams. For this purpose, an income and default pre-
diction model has been developed to determine the creditworthiness of potential
borrowers. In addition to classic criteria (such as credit report, earned income),
educational variables (such as the attended university and the field of study) as well
as the professional career are taken into account. Based on about 1600 data points,
a statistical model of the borrower's financial capability and personal willingness to repay is developed. In this way, Upstart goes far beyond classic scoring
approaches. The model used is continuously improved—through reinforcement
learning. For this purpose, the daily repayment and default data of the borrowers are
included. In this way, the Upstart model can determine creditworthiness much more
accurately than traditional lending models. To check the creditworthiness, the so-
called dark processing is used (see [7]).

Note Box
In the case of dark processing, also known as Straight-Through Processing, an automated process runs from start to finish without human supervision or manual interventions—hence “in the dark”. In many cases, this improves the efficiency and accuracy of processes by reducing the likelihood of human errors and increasing the speed of the processes.
Typical tasks of dark processing are primarily changes to inventory data,
the creation and sending of invoices, the checking of insurance claims and
loan applications. Such systems have another major advantage: high scalabil-
ity. After all, such systems can also process large amounts of data quickly.

This type of AI-supported lending has expanded access to affordable loans. This was achieved by identifying new ways to recognize qualified borrowers. A prerequisite for this is that the AI-based model avoids unlawful disparate effects and statistical biases that could be harmful to disadvantaged groups. So far, Upstart
has managed to keep the platform free from bias in the credit decision process. In
addition, it has developed a reporting procedure to ensure fairness in future versions
of the model. Moreover, Upstart has worked with regulators from the outset to
ensure compliance with the law (cf. [7]).
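A strongly reduced sketch of such a default prediction is shown below: a gradient-boosting classifier trained on a handful of invented applicant features returns a default probability. Models of the kind Upstart describes use on the order of 1600 data points and continuous retraining; the features and data here are purely hypothetical.

# Sketch: default-probability estimate from a few hypothetical applicant features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical features: income (kEUR), years of education, existing debt (kEUR)
X = np.array([[30, 10, 20], [55, 16, 5], [42, 12, 15], [70, 18, 2],
              [25, 9, 25], [60, 17, 8], [35, 11, 18], [80, 19, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1 = loan defaulted

model = GradientBoostingClassifier(random_state=0).fit(X, y)

applicant = np.array([[48, 14, 10]])
default_probability = model.predict_proba(applicant)[0, 1]
print(f"Estimated default probability: {default_probability:.2f}")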
Also, insurance companies are increasingly relying on dark processing. The
companies and their customers often have the same goal. The claims settlement
should be quick, efficient, as accurate and fair as possible. To achieve this goal,
insurers are increasingly using AI systems for quick evaluation of insurance
claims. It is also important to detect anomalies in claims that indicate potential
insurance fraud—and to quickly settle all other claims.
The company Tractable supports these processes in the automotive industry
with an AI solution that quickly and efficiently checks cost estimates. Necessary
repairs or total losses of vehicles can be quickly identified and calculated. For this
purpose, the AI engine uses photos to assess the damage at the time of the first claim
report. To start this process, the policyholder photographs the damage and uploads
the photos—usually via a smartphone app. This can happen immediately after an
accident. The photos are checked in real time using computer vision and the repair
costs are forecasted. These can be aligned with the insurers’ standards. When calcu-
lating the costs, the AI system draws on several million photos of car accidents. This
process allows a cost estimate to be created within minutes (cf. [8]).
The company Lemonade also relies on Artificial Intelligence in its business
model. Currently, the company offers household and personal liability insurance
in Germany and the Netherlands. Unlike many traditional insurance companies, Lemonade only keeps a fixed share of the premiums for itself. The remaining reve-
nues are used to cover further costs, such as for reinsurance and for payment in the
event of claims. The unused remaining amount is referred to as a “giveback”. This
remaining amount is donated each year for good causes. Here, Lemonade considers
the policyholders who want to support the same good cause as “peers”. The contri-
butions collected per peer group are used to pay the claims of the respective group
and to donate the remaining money for the common good cause. If the claims to be
settled exceed the amount of money in the respective group pool, the reinsurance is
used to cover these damages (cf. [9]).
Where is AI used at Lemonade? The (potential) customer is guided through the process by the chatbot Maya. The process is very playfully designed—
and after entering personal data, it takes less than a minute to create a customized
offer. The payout in the event of a claim is also AI-supported—and often only takes
minutes (cf. [9]).

Think-Box: Questions You Should Ask Yourself!


• What specific use cases exist in our company for the use of AI—and what
goals could we achieve (increase in efficiency and speed, cost reduction,
improvement of the customer experience)?
• Do we have the necessary data for training the AI systems—or are there
corresponding data pools or pre-trained AI engines available?
• How can we further develop our offers in wealth management?
• What significance do robo advisors have for our offer?
• What expectations do customers have for these services?
• How can we further develop our solutions in high-frequency trading?
• What contribution can AI make to fraud detection and fraud prevention?
• What areas of application do we have to implement AI-supported process
optimization?
• Which fields can be included in the dark processing?
• How can it be ensured that (automated) processes in lending or risk assess-
ment are transparent, fair and traceable?
• Do we have the necessary skills, technologies and budgets to achieve sig-
nificant successes here?

9.5 Summary

• The market for financial services represents a particularly exciting field of application for AI because here consequences of right and wrong decisions can
often be determined very quickly.
• Robo-advisors enable many more target groups to be offered individualized
wealth management advice.
• Another interesting field of application is high-frequency trading.
• AI systems are used for fake detection and fraud detection.
• As in many other industries, artificial intelligence can support workflow
automation.
• Another important field of application is the creditworthiness check (Credit
Scoring) to achieve more reliable results.
• AI engines can support further forms of Dark Processing.

References
1. Preuschat, A., & Hock, M. (2023). Analysts, consultants and detectives from the machine.
Accessed June 26, 2023, from https://www.faz.net/aktuell/finanzen/ki-­in-­banken-­analysten-­
berater-­detektive-­als-­maschinen-­18960970.html
2. Deutsche Bank. (2023). ROBIN, the digital asset management – So that everyone can invest
like a professional. Accessed June 03, 2023, from https://www.deutsche-­bank.de/pk/sparen-­
und-­anlegen/geldanlage-­online/robin.html
3. BlackRock. (2023). Uncertain times require more clarity. Accessed June 03, 2023, from https://
www.blackrock.com/de/professionelle-­anleger/aladdin?switchLocale=y&siteEntryPassth
rough=true
4. GiniMachine. (2023). No-code AI decision making platform. Accessed June 05, 2023, from
https://ginimachine.com/
5. Ponemon Institute. (2021). The real cost of online fraud. Accessed June 03, 2023, from https://
www.paypal.com/us/brc/article/enterprise-­solutions-­real-­cost-­online-­fraud
6. Feedzai. (2023). Introducing the world’s First RiskOps Platform. Accessed June 03, 2023, from
https://feedzai.com/blog/introducing-­the-­worlds-­first-­riskops-­platform/
7. Upstart. (2023). This is upstart. Accessed June 05, 2023, from https://www.upstart.com/i/about
8. Tractable. (2023). The speed and accuracy of AI. Accessed June 05, 2023, from https://trac-
table.ai/
9. Lemonade. (2023). Forget everything you know about insurance. Accessed June 05, 2023,
from https://www.lemonade.com/de/?f=1
10 Military Sector

Summary In the military sector, AI is used for surveillance and reconnaissance, where it analyzes large amounts of data to identify relevant information and pat-
terns. AI also drives the development of autonomous systems that can be used for
surveillance, reconnaissance, transport, and combat tasks. Furthermore, AI is
applied in cybersecurity to detect and respond to network anomalies and network
attacks. Even in military logistics and maintenance, significant efficiency gains can
be achieved through AI solutions.
To advance the use of artificial intelligence in weapon technology, countries
like China, Great Britain, Israel, Russia, South Korea, and the USA invest billions
every year. If serious threats to humanity from artificial intelligence are to be
expected, this is probably most likely in this area. The use of AI in the military sec-
tor has significant potential to revolutionize the way military operations are con-
ducted today and in the future.

10.1 AI Application Areas in the Military Sector

AI can be used in the military field in various areas. The following are particularly
important AI application areas in the military sector (see also Krause, [1]):

• Surveillance and Reconnaissance

AI engines are used to analyze large amounts of surveillance data to identify impor-
tant information. This includes image and video analysis from drones or satellites as
well as the evaluation of communication data to identify activities or threats.

• Cybersecurity

AI is also used in cybersecurity to detect anomalies in networks or attacks on networks and to respond to them. AI-driven systems can continuously monitor net-
works and often respond to threats faster than humans.
• Logistics and Maintenance

AI can be used to make logistics and maintenance tasks more efficient. This includes
predictions about needed resources, identification of maintenance needs for military
equipment, and optimization of supply chains.

• Autonomous Systems

AI enables the development of autonomous military systems, such as unmanned air,


land, and water vehicles. These systems can be used for a variety of tasks, including
surveillance, reconnaissance, transport, and direct combat actions.

10.2 Autonomous Weapon Systems

Autonomous systems are of particular importance in the military sector. Therefore,


various autonomous systems for the military are presented below; some of the
concepts presented overlap:

• Flying Drones

Drones can be used for automated data and image analysis. A large amount of
data can be processed in real time and used for decision support. If the drone makes
independent decisions about attacks, it becomes a combat robot.
A particular risk is associated with so-called micro-drones. These can be
designed as combat robots to attack predefined targets in drone swarms. The drones
coordinate autonomously among themselves—and can hardly be countered by classic
air defense systems. If they are equipped with facial recognition software, individ-
ual persons can be targeted. The following video shows what such attacks can look
like: https://www.youtube.com/watch?v=HipTO_7mUOw

• Unmanned Submarines

Submarine drones pose a particular threat to the “balance of terror” with regard to
the submarine fleets of the military superpowers. The traditional submarine fleets
can be spied on by cheaper autonomously operating submarines, so that the fleets
lose their deterrent effect. With appropriate armament, the submarine drones also
become combat robots, if the use of weapons is decided autonomously and no
human is involved in the decision-making process.

• Combat Robots (also known as Lethal Autonomous Weapon Systems)

These are, on the one hand, fixed systems that act autonomously to fend off attacks.
They are already used today to protect military and civilian facilities (e.g., dams,
nuclear facilities), borders (such as between South and North Korea) and on

warships. On the other hand, combat robots—as already shown—are also mobile in
the form of flying and diving drones. In addition, such combat robots can move
on land, using wheels, tracks or legs for locomotion. The following videos show the
developments that are emerging here: https://www.youtube.com/
watch?v=Wu1kpnCylKQ—https://www.youtube.com/watch?v=L9U3B8wnM7w

• (Partially) Autonomous Assistants

(Partially) autonomous assistants are used in mine clearance as well as in the


defusing of bombs. They can also help evacuate injured people from combat
zones, deliver supplies there or explore caves. The development of such (partially)
autonomous assistants took a major leap forward due to the nuclear disaster in
Fukushima in 2011. The area is still heavily contaminated and inaccessible to
humans in many areas. Therefore, certain cleanup tasks can still only be performed
by robots. However, radiation is also dangerous for robots. Both the lens of the
camera and the read-write memory can be damaged by radiation. The sturdy and
especially durable robots needed for this operation were initially not available in
Japan. In contrast, the USA—also driven by the dangerous deployment in
Afghanistan and Iraq—had already invested many millions of US dollars in the
development of such robots earlier.
The advance of artificial intelligence is currently accelerating the global arms race.
This race is largely conducted in secret, as the “usual suspects” do not like to show their cards.
From the proponents’ point of view, the advantages of using AI in armaments are very convincing:

• AI-supported weapon systems can evaluate large amounts of data in real time, identify attack targets, prioritize them and possibly attack autonomously. In conjunction with facial recognition, attacks can be personalized.
• The use of autonomous weapons can reduce human losses on the attacker’s side and, through more precise action, possibly also on the side of the attacked.
• AI-based decision systems are independent of fatigue, emotional states and the attention of human actors.
• If human soldiers are replaced by machines, the number of casualties on the deploying side can be reduced; however, this can also lower the threshold for combat.

Food for Thought


The interplay of artificial intelligence with robotics will produce ever more
sophisticated combat machines. An intriguing question is: If it is possible to
make road traffic safer from human error through autonomous driving, can
the use of autonomous weapon systems make wars—a semantic contradic-
tion—more humane?

The risks of AI use in armaments are also severe:

• A central question is whether combat robots can have the principles of international law and the fundamental human rights “firmly” programmed into them. This includes the legally binding protection of civilians and wounded soldiers. Will image recognition be good enough to correctly identify these particularly protected persons in combat operations?
• Autonomous weapon systems can operate without human intervention and consequently also without human control and make faulty decisions—just like humans. Armed quadcopters (helicopters with four rotors) make it possible to search for individuals who meet certain criteria in order to eliminate them—without any human intervention. The question arises as to which decision parameters and/or ethical principles are programmed into the robots and whether they can independently develop further through machine learning—in whichever direction.
• It is unclear whether these developments will trigger a global AI arms race or rather prevent one. Since the major military powers are driving the development of AI weapons with large budgets, a global arms race seems practically inevitable. Today’s Kalashnikovs will be replaced by autonomous weapons tomorrow. This development is almost inevitable because neither expensive nor hard-to-obtain raw materials are needed for the development and production of autonomous weapons. Mass production is also relatively easy to achieve.
• As with all non-military applications, the algorithms can also be manipulated here, often in ways that are difficult to trace. In the worst case, such manipulations are only detected after the attacks they have triggered.
• Particularly critical is autonomous use when, in real deployment, other data have to be taken into account than in the training phase. It is not foreseeable how artificial intelligence will react autonomously when a so-called “semantic gap” occurs.
• The dehumanization of military conflicts can lower the threshold for corresponding actions because governments no longer have to justify any, or at least fewer, human losses on their own side to their voters.
• The use of killer robots can escalate armed conflicts into endless wars because the technical systems do not tire—apart from wear and tear and the “shooting down” of the killer robots.
• It becomes particularly dangerous when the political or military leadership loses control over autonomous weapon systems and they become “independent”—with unpredictable consequences.
• The question of how war crimes should be prosecuted if they are committed by autonomous weapon systems is also unresolved.
• In the hands of terrorist organizations, easily deployable AI weapon systems can also cause devastating damage.

Film Recommendation
The ethical dilemma of military drone use is well illustrated in the thriller “Eye in the Sky”.

Organizations, politicians, scientists, and engineers repeatedly call for non-­


participation in the development, production, trade, and use of lethal autonomous
weapons. However, a convincing global solution is not in sight. On the one hand,
10.3 Summary 405

many countries demand a ban on killer robots (see [2]). But as long as states like
China, Israel, Russia, and the USA are against such a ban, the AI arms race will
continue. And this is not for the benefit of humanity. The Future of Life Institute [3]
also takes a critical look at AI developments in this context. According to this insti-
tute, AI systems could become the most effective mechanism in human history—
with potential for good as well as for bad!
A particular challenge is to avoid a destructive race with lethal autonomous
weapons. The Future of Life Institute repeatedly points out these dangers in open
letters—especially the risk of a global military AI arms race. Therefore, this insti-
tute demands a ban on offensive autonomous weapons without human control.
But such weapons already exist today: AI-controlled drones that not only fly autono-
mously but also independently identify and engage targets (people and objects). And
the past does not give much hope: so far, almost all technological possibilities have
been extensively used for military purposes—up to the atomic bomb.
Why such a ban would be of great importance to humanity is shown by the fol-
lowing virtual duel, which took place as part of the experimental tournament Alpha
Dogfight. First the result: The AI pilot defeats humans in the final of the Alpha
Dogfight with a score of 5 to 0 kills. The AI system Falco from Heron Systems
defeated a human F-16 pilot in five scenarios. The human F-16 pilot stood no chance
against the AI-controlled aircraft. The AI opponent was completely fearless, flew
head-on towards the human pilot, and safely maneuvered the virtual aircraft even in
tight loops, even though this involved g-forces of nine times normal gravity. For a
machine, all this is no problem—just as lack of sleep, lack of food, and pure will to
survive are not. However, such AI-controlled machines can also lack discretion,
compassion, and inhibition to kill (see [4]).

Food for Thought


The use of AI in the military sector raises many ethical questions. These
mainly relate to autonomous weapon systems and the need to ensure human
control and responsibility in war situations. Cybersecurity and protection
against AI-driven cyber attacks are important challenges. Ultimately, the mili-
tary, politicians, and society as a whole must weigh the benefits of AI against
the potential risks and develop rules and regulations that ensure a “responsi-
ble” use of AI in the military sector.

10.3 Summary

• Today, billions of dollars are invested annually in the development of AI systems


for military use.
• These developments largely take place in secret and are difficult to control.
• The greatest risks of artificial intelligence are associated with the various forms
of combat robots, as these will massively change the “rules of the game” in mili-
tary conflicts.

References
1. von Krause, U. (2021). Artificial intelligence in the military. Springer Gabler.
2. Stop Killer Robots. (2023). Less autonomy. More humanity. Accessed June 05, 2023, from
https://www.stopkillerrobots.org/
3. Future of Life Institute. (2023). Steering transformative technology towards benefitting life
and away from extreme large-scale risks. Accessed June 05, 2023, from https://futureoflife.org/
4. Darpa. (2020). AlphaDogfight trials go virtual for final event. Accessed June 05, 2023, from
https://www.darpa.mil/news-events/2020-08-07
Metaverse
11

Summary In the Metaverse, Artificial Intelligence plays a central role. It enables


personalized and immersive experiences, such as the creation of dynamic virtual
worlds and the support of intelligent avatars. AI analyzes user interactions to further
improve and personalize the user experience. Moreover, AI enables realistic and
natural interactions in the Metaverse by recognizing language, images, and increas-
ingly also emotions.

“Only those who move forward determine the direction!”

11.1 Characterization of the Metaverse

The term Metaverse (in German also Metaversum) is composed of the terms
“meta” for “beyond” and “verse” for “universe”. It is therefore about a universe that
lies beyond our analog world (see in depth [1]). The Metaverse describes the future
design of the Internet, which relies on an intensive and virtually limitless interaction
of digital and analog life. For this, the term Web 3.0 is increasingly used. A central
element of Web 3.0 is that property rights can now be acquired on the Internet based
on blockchain technology. The variants of the Internet can therefore be described
as follows:
• Web 1: Read only (writing was reserved for professionals)
• Web 2: Read & Write (the participatory web)
• Web 3: Read & Write & Own & Do Things (the walk-in web)

Supplementary Information The online version contains supplementary material available at


https://doi.org/10.1007/978-3-658-46131-7_11.


Note Box
The Metaverse represents the next evolutionary stage of the Internet. It is an
interoperable, decentralized and persistent (i.e., permanent) network of vir-
tual, three-dimensional worlds developed in quasi-real time from raw data.
The Metaverse allows an arbitrarily large number of users to lead a virtual life
with all its facets (work, vacation, culture, education, friendships, relation-
ships etc.) there synchronously with one and the same identity as well as with
multiple identities.
The Metaverse is a Walk-in-Internet!

Proponents of the Metaverse say that we are with the Metaverse today where
we were when we first entered the Internet around 1990: The Internet promised a
lot, some could imagine some applications—but no one was aware that the Internet
would fundamentally change the world. Maybe the same applies to the
Metaverse—maybe!
The characteristics of the Metaverse shown in Fig. 11.1 are crucial for under-
standing the overall concept (cf. [1], p. 11).
What lies behind these characteristics of the Metaverse (cf. further [1],
pp. 13–20)?
• The synchronicity allows simultaneous use without restrictions by a large num-
ber of users who can interact at the same place in the Metaverse.

Fig. 11.1 Characteristics of the Metaverse: synchronicity, quasi-real time, persistence, interoperability, immersion, interactivity, socialization and decentralization. (Source: [1], p. 11)

• The persistence of the Metaverse and the personal account history means that
content generated and/or purchased in the Metaverse (avatars, NFTs, …) can per-
sist independently of one or more companies.
• The interoperability across platforms and devices allows the use of individual
elements of the Metaverse, such as avatars or virtual goods, in different systems.
• The decentralization of the Metaverse shows that it is independent of specific
devices or organizations and thus belongs to no one.
• Immersion refers to the “diving” into a 3D virtual world or a mix of physical and
digital world using avatars and Extended Reality (XR)—a combination of
Augmented and Virtual Reality.
• Interactivity can take place regardless of location and in quasi-real time.
• The Metaverse is a social network, where people can meet and interact with
other people and organizations—and this goes far beyond gaming.

Note Box
The Metaverse is based in many areas on artificial intelligence technologies.
Only this makes it possible to comprehensively integrate the context into
experiences. In addition, AI enables photorealistic movements and the capture
of emotions in quasi-real time. This requires highly complex computing
operations.

Food for Thought


• In recent years, businesses and customers have already made the leap from
the offline world to the online world.
• In the online world, the leap from stationary to mobile use of the internet
has occurred.
• The metaverse is now set to enable the leap from the 2nd dimension
to the 3rd dimension with a 360-degree perspective.

11.2 Interesting AI Applications in the Metaverse

In work life, the metaverse can bring together many of the AI applications already
shown in an innovative form (see in depth [1], pp. 58–87). In the metaverse, people
can cooperate both within and between companies and meet live in a virtual space
for a variety of use cases, regardless of their physical location. This virtual cre-
ative environment basically allows everything that would be possible in a real
office building: meetings, conferences, training, brainstorming sessions, and also
informal exchange. VR environments offer a stronger sense of presence and
expanded opportunities for remote collaboration compared to traditional video

conferencing. The illusion of actually being with other people in a shared space is
created in the virtual environment. Here, participants can interact directly with
each other, view and modify virtual objects together, and record thoughts on shared
whiteboards.
AR and VR applications can not only improve communication quality but also
increase employee productivity. For this, training and meetings can be moved to the
metaverse. A 3D visualization of rooms, participants, but also machines and pro-
duction facilities can improve the quality of collaboration and performance. An
additional challenge is to integrate digital and analog business processes in the
metaverse economy. Here, the use of Digital Twins can be helpful—in product
development, manufacturing, and along the entire value chain, including network-
ing with other service partners (see Sect. 3.1.3). People are already talking about the
industrial metaverse here.
The education sector is offered the opportunity by the metaverse to convey
knowledge in an innovative, immersive way. Learning environments that are immer-
sively designed allow learners to fully empathize with a situation. For this, playful
applications, special environments or scenarios are recreated. The extent of the
Immersion in the learning process can vary. A higher degree of immersion
leads to the learners being less aware that they are in a virtual world. To achieve
successful immersion, learners must mentally immerse themselves in the
­environment, which can be achieved through realistic location illusions and
three-­dimensional representation techniques, as well as through the active par-
ticipation of learners in the simulated world.
Artificial Intelligence offers a wide range of action fields in the Metaverse for
sales as well. For this, not only secure digital identities need to be developed, but
also AR- and VR-technologies need to be integrated into the Omni-Channel-­
Strategy of companies. Then, entirely new immersive and personalized customer
experiences can be created. Companies like Balenciaga, Nike and Gucci are already
successfully using the Metaverse for their business models. Also, car manufacturers
Audi, Hyundai and Nissan are developing concepts for the Metaverse. Here, 3D
models of products can be viewed in original size before making a purchase
decision.
The EHI study “Metaverse in Retail” from 2022 points out that retailers recog-
nize a multitude of new opportunities in the Metaverse. For this, 433 executives
from 284 different retail companies in the German-speaking area were surveyed.
The following insights were gained (see [2]):
• 77.3% of respondents see the possibility of virtual consultation or shopping
assistance to mimic the experience of physical retail digitally.
• 75.4% support a 3D product presentation in virtual rooms or using AR
technologies.
• 69.2% opt for advertising in virtual space to expand traditional communi-
cation channels with virtual advertising messages.
• 67.9% rely on an extension of physical shopping with virtual experiences, such
as the use of VR glasses or terminals for the purchase of digital goods.
• 59.3% consider gamification to improve brand perception.
• 56.6% could imagine creating virtual shopping worlds to allow customers to stroll
through virtual shopping centers.
• 54.7% envision a virtual fitting—with the user’s avatar.
• 38.6% consider trading digital goods (such as avatars, digital clothing, digital
art, digital land, etc.).
79.9% of retailers believe that they can benefit from a presence in the Metaverse:
they could achieve their classic marketing goals, such as an increase in brand
awareness or an improvement of the image. 65.1% are convinced that by partici-
pating in the metaverse, they could achieve their economic goals, including
increases in sales and cost reductions. 56.2% believe that the metaverse can also
help in achieving environmental goals.
In the health sector, the metaverse also offers many approaches to merge vari-
ous AI solutions. To facilitate training and further education, operations can be prac-
ticed virtually before the real scalpel is used. With AR headsets, surgeons could be
provided with live additional information that makes constant looking away from
the operating table to the external monitor unnecessary. Collaboration of geographi-
cally distant team members and international experts becomes possible. The provi-
sion of a VR training app allows convalescents immersive training—without having
to leave their own home.
The greatest benefit of the metaverse and AI in healthcare will depend on how
comprehensively both can complement each other. Such integration could help
eliminate mistrust, friction losses and errors that often occur today when switching
between doctors, clinics, care facilities and work environments. But healthcare
needs, perhaps more than any other industry, a data base that ensures trust, security
and freedom of choice for all participants (see [3], p. 12).
Given these possibilities, it is surprising that only 26% of companies see the
metaverse predominantly or rather as an opportunity. 20% see the meta-
verse rather or predominantly as a risk. 21% cannot assess the development.
33% of companies assume that the metaverse will have no impact
on their business. These are the results of a representative survey by the digital asso-
ciation Bitkom among 604 companies in Germany with at least 20 employees (see
Fig. 11.2; Bitkom, [4]).

Fig. 11.2 Evaluation of the metaverse by companies in Germany: largely or rather as a chance, largely or rather as a risk, no impact on the company, or don’t know/no answer. (Source: Bitkom, [4])

11.3 Will the Metaverse Become a Success Story?

In Germany, the integration of artificial intelligence into the metaverse is expected


to play a significant role in digital transformation. The use of AI in the metaverse
allows companies to create personalized experiences for their customers, automate
processes and develop new business models. However, progress in this area will be
influenced by regulatory requirements, data protection concerns and technical chal-
lenges. It is expected that the government will actively participate in shaping poli-
cies and standards for the use of AI in the metaverse to create trust and protect the
interests of consumers.
In the coming years, the use of AI in the metaverse will gradually increase in
various industries. Which industries belong to the innovators and early adopters,
and which are more likely to belong to the early and late majority, as well as the
laggards, is shown in Fig. 11.3 using the diffusion curve of innovations (cf. [1],
p. 22; cf. for the data basis [5, 6]). Not surprisingly, the gaming industry is among
the innovators, as many AI developments were adopted early on here. The public
administration will also belong to the laggards in this area.

Fig. 11.3 Adoption of the Metaverse by various industries over time (innovators, early adopters, early/late majority, latecomers), ranging from the gaming industry among the innovators to public administration among the latecomers

Note Box
One thing is certain: It is worth starting to deal with the Metaverse early on.
The Metaverse ecosystem is currently being massively expanded by many
companies—and more are gradually entering this future market.
The Metaverse represents a particularly exciting playing field for AI
technologies.

Food for Thought


One thing we should be clear about:
Metaverse leads to a Dataverse.
Every step, every action, every spoken word, every shown emotion—sim-
ply everything that happens there is recorded in terms of data. Whoever has
this data can promote our lives in an unprecedented way—or manipulate them.

Think-Box: Questions you Should Ask yourself!


• What role can Artificial Intelligence in the Metaverse play for our com-
pany and what specific benefits could result from it?
• What challenges and risks could arise in integrating AI into the Metaverse
for our company and how can we overcome these?
• Do we have the necessary technological skills and resources to effectively
use AI in the Metaverse?
• How can AI in the Metaverse contribute to improving the customer experi-
ence and offering personalized services?
• How could AI-driven Metaverse applications be integrated into existing
business processes?
• How could AI in the Metaverse help develop new business models and tap
into new markets?

• How can we ensure that the use of AI in the Metaverse complies
with the applicable data protection and security regulations?
• What training and support do our employees need for the use of AI in the
Metaverse?
• What partnerships or collaborations could support our company to effec-
tively use AI in the Metaverse and overcome the associated challenges?
• Who is responsible for all these issues in our company?

11.4 Summary

• The Metaverse represents one of the most exciting “playgrounds” for Artificial
Intelligence.
• To unlock this, a profound build-up of Metaverse know-how is required in
companies.
• Every company is called upon to identify exciting AI-related use cases for the
Metaverse and to start initial pilot projects.
• The race for a leading position in the Metaverse has already begun.

References
1. Kreutzer, R. T., & Klose, S. (2023). Metaverse compact. Springer Gabler.
2. EHI. (2022). Metaverse in retail. Accessed June 05, 2023, from https://www.ehi.org/produkt/whitepaper-metaverse-im-handel/
3. Kalis, B., McHugh, J., Safavi, K. T., & Truscott, A. (2022). Accenture digital health technology vision: Meet me in the metaverse. How the continuum of technology and experience is reshaping healthcare. Accessed June 05, 2023, from https://www.accenture.com/_acnmedia/Thought-Leadership-Assets/PDF-5/Accenture-Meet-Me-in-the-Metaverse-Full-Report.pdf
4. Bitkom. (2022). The German economy is divided on the Metaverse. Accessed June 05, 2023, from https://www.bitkom.org/Presse/Presseinformation/Metaverse-deutsche-Wirtschaft-gespalten#msdynttrid=Mm57TkFY5QDlDAQi6WNlMiPE3zUhAkG1t51yq_OzJUk
5. Abovitz, R., Banerjee, S., Gilliland, G., Liu, C., Sackey, E., Timashkov, A., & Trollinger, R. (2022). How the metaverse will remake your strategy. Accessed June 05, 2023, from https://mkt-bcg-com-public-pdfs.s3.amazonaws.com/prod/impact-of-metaverse-on-business.pdf
6. Newzoo. (2022). The metaverse, blockchain gaming, and NFTs: Navigating the internet’s uncharted waters | Newzoo trend report. Accessed June 05, 2023, from https://newzoo.com/insights/trend-reports/newzoo-intro-to-the-metaverse-report-2021-free-version
Part III
AI Challenge – How to Anchor Artificial
Intelligence in the Company

Even the longest journey begins with the first step! Laozi

Every company is called upon to analyze the (further) use of AI in its own com-
pany. One thing must not happen: that executives miss the triumphant advance of
Artificial Intelligence and do not check, or do not check early enough, in which
fields an AI deployment can lead to competitive advantages in the market.
Developing an AI Journey in your Own
Company 12

Summary Artificial Intelligence has the potential to fundamentally change poli-


tics, economy and society. Despite this, there is still a great deal of uncertainty about
how and especially how quickly this technology will continue to develop. Therefore,
every company is called upon to start an individual AI journey—now!

12.1 Phase Concept for the Company’s Own AI Journey

No company should pursue a wait-and-see approach in the face of the major inno-
vation leaps in AI applications. Such an approach could quickly lead to serious
competitive disadvantages. Instead, it is important to quickly start an AI journey to
gradually unlock the AI potential for your own company (see also [1], pp. 19–24; [2],
p. 28). For this purpose, Fig. 12.1 provides an important framework for gradually
reaching the stages of AI entry, AI expansion and AI scaling.

Supplementary Information The online version contains supplementary material available at


https://doi.org/10.1007/978-3-658-46131-7_12.


Fig. 12.1 Phases of the company’s own AI journey. AI entry: analysis of the status quo (3-horizon model, AI maturity map), installing and successively expanding an AI team, building AI basics, developing AI governance. AI expansion: piloting the first AI applications, optimizing/realigning processes, moving from a long list to a short list of use cases with a milestone plan for AI projects, integrating AI into internal/external company processes and into products and services. AI scaling: developing AI-only solutions, anchoring AI comprehensively in the business model and exploiting AI potential in the entire business model. Accompanying elements: change management, establishing and further developing AI networks internally and externally, and company-wide communication of goals, strategies and results of AI deployment.

First, it is about conducting a comprehensive analysis of the status quo to deter-


mine the company’s starting situation and its position in the entire industry. For this
purpose, the concepts of the 3-horizon model and the AI maturity map should be
used. Ideally, this analysis is already carried out by a specially installed AI team,
which essentially represents the nucleus for the company’s AI engagement.
Based on the insights gained, important AI basics need to be developed. An
important prerequisite for future AI successes is above all the creation of own AI
competencies in the company. This is achieved by recruiting new as well as training
existing employees. In addition, it is necessary to check to what extent access to
external AI resources can be achieved. In this phase, a AI governance must also be
developed to provide a guideline for corporate AI engagement. In parallel to this, it
is necessary to develop goals and strategies for AI use—and to communicate them
within the company. This could, for example, involve process optimization for the
development of product and service innovations. In parallel, work can be done on
building a proprietary data ecosystem to create a large and balanced data base for
training the algorithms. In parallel or alternatively to building your own data, a deci-
sion must be made about accessing or participating in external data ecosystems. In
this phase, decisions also need to be made about the development of powerful
algorithms for specific use cases. To support such steps, it is necessary to build and
maintain AI networks both within the company and externally.
These measures create the prerequisites for processes to be optimized with AI
support and possibly realigned. Since AI is not an end in itself, but is intended to
support the profitable growth of companies, it is now necessary to identify excit-
ing use cases on a long list and then prioritize them on a short list. The further
development can be supported by agile methods to quickly start initial AI pilot
applications and then integrate AI into various processes. This can lead to
dynamic transitions to AI integration into own products and services to consis-
tently drive AI expansion. This can also lead to AI-only solutions that would not

be possible without AI use. In the course of an AI scaling, the entire business


model can be supported by artificial intelligence. In addition, new business
­models—AI-based—can be developed.
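
The step from a long list to a short list of use cases described above can be supported by a simple, transparent scoring logic. The following minimal sketch in Python illustrates this idea; the use cases, evaluation criteria, ratings and weights are purely illustrative assumptions and would have to be replaced by your own company-specific assessment.

```python
# Minimal sketch: prioritizing AI use cases from a long list to a short list.
# Use cases, criteria, ratings (1-5) and weights are illustrative assumptions only.

USE_CASES = [
    # (name, business_value, feasibility, data_availability)
    ("Churn prediction in marketing", 4, 4, 5),
    ("Predictive maintenance pilot", 5, 3, 3),
    ("AI chatbot for customer service", 3, 5, 4),
    ("Autonomous quality inspection", 5, 2, 2),
]

# Weights reflect how important each criterion is for the prioritization.
WEIGHTS = {"business_value": 0.5, "feasibility": 0.3, "data_availability": 0.2}


def priority_score(value: int, feasibility: int, data: int) -> float:
    """Return a weighted score between 1 and 5; higher means higher priority."""
    return (WEIGHTS["business_value"] * value
            + WEIGHTS["feasibility"] * feasibility
            + WEIGHTS["data_availability"] * data)


# Rank the long list by score and keep the top entries as the short list.
ranked = sorted(USE_CASES, key=lambda uc: priority_score(*uc[1:]), reverse=True)
short_list = ranked[:2]

for name, value, feasibility, data in ranked:
    print(f"{priority_score(value, feasibility, data):.2f}  {name}")
print("Short list:", [name for name, *_ in short_list])
```

In practice, such a scoring is usually prepared in a workshop and serves as a basis for discussion rather than as an automatic decision.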
Alongside this, there is a need for change management to take the entire orga-
nization and each individual employee on the AI journey. After all, an ever-increasing
use of AI will lead to the loss of existing jobs, but also to the creation of new ones. For this, a profes-
sionally managed change process is indispensable. For this, the corporate culture
also needs to be further developed. A particular challenge is to promote a data-
oriented mindset throughout the company, which sees data as a valuable resource
to drive innovation. It also requires companies to act agilely and be ready for change
in order to implement new AI applications and adapt to the rapidly changing AI
landscape. Finally, the creation of an AI culture also requires a commitment to
ethical use of AI and consideration of data security and privacy issues.

Note Box
The entrepreneurial AI journey is started once—and will never end. The
dynamics of AI developments will increase even more in the coming years.
Consequently, with AI expansion and AI scaling we are dealing with
a “Moving Target”. Due to the enormous investments that are currently flow-
ing into AI development, month by month, further exciting breakthroughs in
AI technologies and AI applications can be expected.

Food for Thought


For our very personal AI journey, we should be guided by the following
principle:
If you fail, fail fast, fail cheap and fail early!
Failures will not be avoided even in a well-orchestrated AI journey.
Therefore, the same applies here:
Never give up!

12.2 Analysis of the Status Quo

The variety of the described AI application examples shows enormous strategic


development potentials across all industries. Here, an exciting strategic
window—the famous Strategic Window of Opportunity—is temporarily open for the
development of new market opportunities. This window of time must be used if
you, as an established provider, do not want to be surprised by AI newcomers—or if
you, as a start-up, intend to exploit the design possibilities offered yourself. It is nec-
essary to check the internal company AI fields of action systematically for their
relevance. For this, it is indispensable that you are aware of the extent of the respec-
tive change. Various tools can be used to determine this.
420 12 Developing an AI Journey in your Own Company

12.2.1 Analysis of the Impact of Artificial Intelligence on Competitive Dynamics

To start the status quo analysis, answering the following questions is helpful. Here
you can determine how artificial intelligence will change the competitive basis
between sectors and companies:

• Changes in the competitive landscape through Artificial Intelligence


• Which industries and companies could particularly benefit from AI due to their
access to IT computing power, data, algorithms, and skilled workers?
• How could AI change the competitive dynamics in various industries and
between different companies in the coming years?
• Which industries could lose importance due to AI and which could gain new
importance?
• Role of technology platforms in the transformation process
• What role do technology platforms play in the AI-driven transformation of indus-
tries and companies?
• How can these platforms contribute to democratizing access to AI resources and
enabling new business models?
• In what form could these platforms make the use of AI in their own company
more difficult?
• What partnerships can help to manage the challenges of an AI implementation
more easily?
• Designing the AI transformation in other companies—within/outside their
own industry
• How have other companies designed the AI transformation to effectively involve
their employees and executives in the AI implementation?
• What process optimizations and redesigns were third parties able to achieve par-
ticularly quickly through the use of AI technologies?
• Which use cases promise a particularly quick return on investment (identification
of the so-called “low-hanging fruits”)?
• What product and service innovations have other companies already developed
through the use of AI?
• How have other companies been able to build a collaborative, agile and non-
hierarchical organizational culture to promote the introduction and use of AI
technologies?
• How have companies managed to involve their employees and executives in this
cultural change and consistently develop their skills and competencies in dealing
with AI?
• How is the performance of AI applications measured and evaluated in other com-
panies and/or industries?
• How have third-party companies mastered the risks and ethical challenges asso-
ciated with the use of AI? What were the biggest challenges here?

The answers to these questions simultaneously describe the background for action
for the two methods presented below.

Fig. 12.2 3-Horizon Model: level of innovation over time. Horizon 1: optimization of existing business models (“known”). Horizon 2: new business model options in existing markets (“partially known”). Horizon 3: new (disruptive) business models (“unknown”).

Fig. 12.3 3-Horizon Model for strategic analysis of AI integration (strategic fit between environment and strategy over time, starting today):
• Horizon 1 (manager’s view): Today’s focus is the current strategy. It works well until changes in the environment lead to decline; always just a matter of time. Often no or only incremental use of AI.
• Horizon 2 (entrepreneurial view): Awareness of the perspective decline of Horizon 1 activities. Development of powerful concepts that can replace Horizon 1 activities; development of new AI-based products, services and processes.
• Horizon 3 (visionary view): Against the background of new paradigms in the industry. Horizon 3 activities have long appeared too ambitious and innovative. Development of AI-based new business models; Horizon 2 innovations can serve as an enabler for these activities.

12.2.2 3-Horizon Model

For the further analysis of the status quo, you should use the 3-Horizon Model
(see Fig. 12.2; see [3], pp. 5–17; [4, 5], pp. 167–170). A comprehensive AI integra-
tion into products, services, processes and possibly entire business models requires a
strategic anchoring in top management. On the other hand, comprehensive AI knowl-
edge in the organization is necessary. In addition, certain conditions must be created
to achieve success through artificial intelligence. Using the 3-Horizon Model, you
can check to what extent AI activities in your company are already integrated—or not.
The relevant contents of AI activities located on different horizons are as follows
(see Fig. 12.3):

• Horizon-1 Business Models

The Horizon-1 business models describe the current status of a company. The exist-
ing business model is depicted and executed. The resulting revenues and cash flows
are at the center of consideration. These are not least a prerequisite for financing
(AI-related) innovation activities at all. This core business should be optimized,
expanded and/or defended at the Horizon-1 level if necessary. In these—often
mature—business models, it is necessary to make incremental improvements to
processes, products and/or services through the integration of artificial intelligence
in order to support the growth of the established business model and to secure its
profitability. For example, AI-supported automation can take place in marketing. Or
AI is used in the HR area to optimize the talent journey.
This horizon is very easy to survey because the company is in familiar waters
here. Therefore, this horizon is called “Known”. It is more the manager’s perspec-
tive in the sense of optimizing what is already there.

• Horizon-2 Business Models

At the Horizon-2 level, it is checked which innovations can support the activities at the
Horizon-1 level. New emerging business model initiatives—based on extensively fur-
ther developed processes, products and/or services—are often built up with consider-
able investments. Even now, these further developed business models can generate
initial revenues, although their business peak will often only be reached in 4 to 5 years.
When integrating artificial intelligence, the question arises here as to which new
orientations the various activities of a company can achieve through a more compre-
hensive use of AI. These activities go far beyond the incremental optimizations of
Horizon 1. For example, steps can be taken towards a smart factory, which can help
secure the existing business model. Educational institutions can check how AI
engines can be used for knowledge transfer and student support.
This horizon is already less easy to survey. The company is—at least par-
tially—leaving familiar waters. Therefore, this horizon is called “Partially known”.
This is about an entrepreneurial perspective. The responsible parties are aware of
the “finiteness” of the business model of Horizon 1 and act proactively to tap future
revenue potentials.

• Horizon-3 Business Models

The Horizon-3 business models are highly innovative and often also disruptive.
They represent approaches for completely new business logics, some of which are
only made possible by AI concepts. These include predictive maintenance offers,
which can lead to a completely new partnership between a manufacturer and its
customers. The use of robots in various industries can create new, expanding market
fields. Entering the business with Smart Homes opens up new, previously untapped
growth paths. In research, AI systems can lead to groundbreaking developments that
go far beyond the existing business field. Many other areas of innovation have been
presented in the previous chapters.

To develop such business models, a deeper analysis of individual corporate
capabilities or customer groups, going far beyond the current day-to-day business,
may be necessary (cf. the required methods [5, 6]).
options for disruptive changes are explored and ideas are transformed into concrete
models. Artificial Intelligence with its diverse possibilities is a particularly impor-
tant driver here. However, this horizon is difficult to survey. The company opens up
new fields of action—with great opportunities and risks alike. Therefore, this hori-
zon is called “Unknown”. This requires a visionary view, which is also ready to
enter entrepreneurial uncharted territory.
The 3-Horizon Model shows the different ranges of (AI-based) developments of
own products, services, processes and/or business models. One aspect should be
particularly emphasized here. Companies are called upon to be active on all three
horizons at the same time. It is not about a chronological, sequential step-by-step
approach like:

• Today: Horizon 1
• Tomorrow: Horizon 2
• The day after tomorrow: Horizon 3.

Companies must manage all horizons at the same time. However, it should also be
noted that the three horizons require different management approaches, organiza-
tional structures and capabilities to be successfully processed. For this challenge,
the term Ambidexterity (literally “ambidextrousness”) has
become established in management language. It is simply about mastering the daily
business today as well as shaping the future on horizons 2 and 3. The daily business
is about Exploitation in the sense of exploiting already developed potentials. On
horizons 2 and 3, on the other hand, the focus is on Exploration in the sense of
exploring new business fields. After all, the digital age leads to the ever faster emer-
gence and disappearance of business models. Artificial Intelligence is an additional
accelerator for this (cf. in depth [5]).

Note Box
The 3-Horizon Model draws attention to a particular strategic challenge.
While day-to-day business is handled at the Horizon 1 level, every company
must be active in parallel on Horizon levels 2 and 3 in order to successfully
shape the future.
This challenge is referred to as Ambidexterity.

12.2.3 AI Maturity Map

The results of the 3-Horizon Model can be supplemented by a further assessment of
your company’s position, even if its results often do not generate enthusiasm. For this, the AI
Maturity Map shown in Fig. 12.4 is used. This distinguishes between AI basics and
AI application fields.

Fig. 12.4 AI Maturity Map (to be adapted individually for each company): a radar chart with the AI application fields customer service, marketing/sales, service provision and production as well as the AI basics AI goals/AI strategy, AI budget, AI employees and AI systems, each rated from 0 to 100%

The AI basics are to be analyzed for all companies based on
the four dimensions mentioned. It is necessary to critically determine to what extent
goals and strategies for AI use have already been developed. In addition, the ques-
tion arises whether people and budget with the label “Artificial Intelligence” have
already been installed. It is also necessary to determine to what extent AI systems
are already known and/or used in the company. Such a determination must neces-
sarily be company-wide.
The relevant AI application fields are to be defined company-specifically. These
can—as here—include the areas of Customer Service, Marketing/Sales, Service
Provision and Production. Depending on the company’s focus, the maintenance sec-
tor, the Human Resources area, knowledge management or other fields may also be
at the center of the analysis. Therefore, before using the AI Maturity Map, you
should check whether it already represents the areas of application that are impor-
tant to you. If necessary, you can add further areas or remove existing ones. In this
way, you create a company-specific AI Maturity Map.
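
If you want to visualize such a company-specific AI Maturity Map, a simple radar chart is often sufficient. The following minimal sketch in Python (using the widely available matplotlib library) shows one possible way to do this; the dimension names and example ratings are illustrative assumptions only and should be replaced by your own assessment.

```python
# Minimal sketch: plotting a company-specific AI Maturity Map as a radar chart.
# Dimension names and example ratings (0-100 %) are illustrative assumptions.
import math
import matplotlib.pyplot as plt

dimensions = [
    "Customer service", "Marketing/sales", "Service provision", "Production",
    "AI goals/strategy", "AI budget", "AI employees", "AI systems",
]
ratings = [50, 45, 20, 15, 40, 25, 30, 35]  # example self-assessment in percent

# Angles for the radar axes; repeat the first point to close the polygon.
angles = [2 * math.pi * i / len(dimensions) for i in range(len(dimensions))]
angles += angles[:1]
values = ratings + ratings[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions, fontsize=8)
ax.set_ylim(0, 100)
ax.set_title("AI Maturity Map (illustrative)")
plt.tight_layout()
plt.show()
```

Such a chart makes gaps between the AI basics and the AI application fields visible at a glance.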

Note Box
It is important that you check each link in your value chain to see whether
the use of Artificial Intelligence can reduce costs and/or generate additional
value contributions for customers, suppliers and/or the company itself.

The contents of the individual fields of the AI Maturity Map are specified
below. In doing so, you will find out how comprehensively the described contents
are already present in your company (AI basics) or are already being used (AI applications).
To analyze the presence of AI fundamentals, the following fields are analyzed
in depth:

• AI Goals/AI Strategy
• Have you formulated clear, measurable, time-specific, and sustainable goals for
the use of AI in your company?
• Is there a well-structured AI strategy or at least strategic concepts for the use of
AI in your company?
• Are AI applications already integrated into existing business processes?
• Is it known which business processes could be supported or automated by AI in
the future?
• AI Budget
• Have you set a specific budget for the development and use of AI?
• Does the AI budget have a significant size compared to the total R&D budget or
to the revenue and profit of your company?
• Or is the AI budget a token investment—without a strategic realignment?
• AI Personnel
• Do you already have AI specialists such as data scientists or machine learning
specialists in your team?
• Are the tasks, competencies, and responsibilities of employees with AI-related
tasks clearly regulated in your organizational structure?
• Is the responsibility for AI located at high hierarchical levels?
• Are external forces involved in building competencies when your own compe-
tencies are still being developed or simply not sufficient?
• Is there already an awareness and acceptance of AI in your company culture?
• Have comprehensive measures already been taken to promote understanding and
acceptance of AI in your company?
• AI Systems
• Are various AI systems or machine learning platforms and different data
pools used?
• Is the focus primarily on in-house developments or are open-source solutions
also integrated?
• Are AI systems used for decision support and/or for autonomous decisions?
• Are comprehensive data streams available for AI applications—Small Data from
your own company and Big Data from various sources?
• Have internal data and process silos already been overcome?
• Is a powerful data management for AI applications already installed in your
company?
• Are data protection and data security ensured in the context of AI?

In the AI basics you can distinguish the following characteristics in the evaluation:

• 0 to 20%: Missing, no reliable answers available; AI journey not yet started


• 21 to 40%: Present in places, topics have been partially addressed
• 41 to 60%: AI basics have been created in individual areas—but still unconnected
• 61 to 80%: Convincing basics for the use of AI are already present in many
areas—but only partially connected
• 81 to 100%: AI-supported resources are completely content-wise connected and
structurally anchored in the company; Company DNA contains AI elements

Analysis of the presence of AI applications

• Customer Service
• To what extent are significant areas of your customer service already supported
by AI applications?
• Can all relevant areas of application be covered by AI solutions?
• How effective and efficient is the use of AI in your customer service?
• Marketing/Sales
• Are revenue or result-strong customers in marketing and sales identified by AI
applications to reduce scatter loss in acquisition?
• Do you use AI solutions to develop the next communicative impulses and trans-
fer them into marketing automation?
• Is predictive analytics used to recognize patterns from customer-related data sets
that indicate revenue/profit potential?
• Service Provision
• Are your services already supported or even provided independently by AI systems?
• Are various types of AI technologies already being used to provide services
(such as facial recognition, digital personal assistants, or humanoid robots—for
example, at hotel check-in)?
• How effective and efficient has the use of AI in the provision of your services
proven to be so far?
• Production
• Do you use AI-supported solutions in the areas of procurement, production plan-
ning, resource control, predictive maintenance, and supply chain?
• Is your production already comprehensively supported by AI solutions?
• Has the use of AI in your production already created added value?
You should align the AI Maturity Map to your respective company focuses. Then,
instead of the areas shown in Fig. 12.4, human resources, R&D activities, or
corporate management could be analyzed. The following questions can be
answered to determine the AI maturity:
• Human Resources
• Do you use AI in your human resources, for example in recruitment, employee
development, or to improve working conditions?
• How effective and efficient has the use of AI in your human resources proven to
be so far?
• Research and Development
• Is AI used in your research and development department, for example to predict
market trends or to develop new products and services?
• How effective and efficient has the use of AI in your research and development
department proven to be so far?
• Corporate Management
• Is AI used in your corporate management, for example to support strategic deci-
sions or to improve company performance?
• How effective and efficient has the use of AI in your corporate management
proven to be so far?

For these application fields, you can assign the following characteristics in the
analysis:

• 0 to 20%: No use or minimal beginnings


• 21 to 40%: Rudimentary use given, e.g., in tests and pilot applications
• 41 to 60%: Various fields of application have been opened up, but not yet system-
atically and not networked
• 61 to 80%: A multitude of AI applications are in use; an overall strategy is
emerging
• 81 to 100%: AI applications are—based on a company-wide strategy—in use
everywhere in the company where it is purposeful
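
As a purely illustrative aid, the ratings collected for the AI basics and the AI application fields can be consolidated into average values and assigned to the percentage bands described above. The following short sketch in Python demonstrates this; all dimension names, example values and the equal weighting are assumptions for demonstration purposes.

```python
# Minimal sketch: consolidating an AI maturity self-assessment into the bands above.
# Dimension names, example ratings (0-100 %) and equal weighting are assumptions.

ai_basics = {
    "AI goals / AI strategy": 40,
    "AI budget": 25,
    "AI employees": 30,
    "AI systems": 35,
}

ai_applications = {
    "Customer service": 50,
    "Marketing / sales": 45,
    "Service provision": 20,
    "Production": 15,
}

# Upper band limits and short labels, following the evaluation scheme above.
BANDS = [
    (20, "0-20 %"),
    (40, "21-40 %"),
    (60, "41-60 %"),
    (80, "61-80 %"),
    (100, "81-100 %"),
]


def band(score: float) -> str:
    """Map an average score to the corresponding percentage band."""
    return next(label for upper, label in BANDS if score <= upper)


for title, ratings in (("AI basics", ai_basics), ("AI application fields", ai_applications)):
    average = sum(ratings.values()) / len(ratings)
    print(f"{title}: average {average:.0f} % -> band {band(average)}")
```

Such a consolidation should not replace the qualitative discussion of the individual questions; it merely summarizes the self-assessment in a comparable form.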

How can you arrive at the answers to the questions raised here? For this purpose,
internal company workshops or—especially in larger companies—surveys among
employees and executives are suitable. This way, you can determine the AI maturity
from the self-perspective (self-image). It is important that the respondents are
aware that this should be an honest inventory. Lying to oneself is not purposeful in
this important topic. In addition, it is usually helpful, if not indispensable, to supple-
ment the self-image with an external image. External consultants can be used for
this purpose, who support in the (comparative) determination of AI maturity.
Especially an “external” view broadens the horizon and helps to focus on challenges
that the company itself does not “have on the radar”. The external view should pri-
marily contribute to the realization of what is already possible with AI today.

Note Box
The solution space of Artificial Intelligence is also a “Moving Target”.
Therefore, a critical inventory of self-image and external image should be
repeated at least every 12 months.

Depending on the results achieved, you can position your company on the scale
of maturity levels of an AI-Journey (see Fig. 12.5).

Fig. 12.5 Maturity levels of an AI-Journey:
• Not started (0–20%): AI is not on the company’s agenda; it is not being addressed. AI is not yet an issue.
• Experimentator (21–40%): Impulses for AI entry have been set; first prototypes are in use. Hardly any activities apart from initial tests.
• Maker (41–60%): An AI vision is in place; systematic implementation has begun. First successes have been realized.
• Professional (61–80%): AI systems are widely used and anchored in the organization. Long-term value creation has been achieved.
• Integrator (81–100%): The company DNA contains AI molecules; many use cases are established. The business model has been realigned.

The results shown here should



be widely communicated within the company. Ideally, it can show how far your own
company has already come on the AI-Journey. Or it becomes visible what a long
and possibly also arduous path still has to be taken. In any case, these maturity lev-
els provide important information for internal company storytelling. The extent to
which this information should also be made public depends on the result. If it
becomes clear that the company has not yet started or is acting at most as an experi-
menter, this can certainly have a negative impact on the stock market value and the
attractiveness for (new) employees and executives (see a similar concept [7]).
This also includes conducting an analysis of the AI maturity of your key com-
petitors. It is important not to strictly adhere to the previously valid industry bound-
aries. Artificial Intelligence in particular will lead to a further loss of importance of
industry boundaries. Often, new competitors do not come from your own industry,
but are creative start-ups that simply try something new.
The result of the AI maturity analysis ideally shows that your company has
already achieved an AI lead. Or it becomes clear that your company is already lag-
ging behind today. In any case, the responsible managers now have to start an AI
journey to vigorously advance the AI commitment.

12.3 Building AI Basics

12.3.1 Building Comprehensive in-House AI Competence

Before you develop an AI strategy (further), it is essential that your executives and
employees deal comprehensively with the subject area of artificial intelligence. It is
important to go beyond the buzzwords and the media euphoria or media pessimism
that prevails in the trade and popular press, but also in social media. The key to suc-
cess is the establishment of a carefully selected AI competence team—as the
nucleus of your company’s AI commitment. However, competence building must
not be limited to this team.
The establishment of comprehensive in-house AI competence is therefore of
central importance for many companies to be successful in the digital age. However,
building this competence requires strategic considerations, investments in technol-
ogy and employees, and an adaptable corporate culture. The following describes the
steps companies should take to develop solid AI competence.
An important first approach is the promotion of data competence in the company:
building AI competence begins with strengthening data competence. Data is the
lifeblood of any AI application. Therefore, it is
essential that employees understand the importance of data quality and data man-
agement. This can be achieved through training and workshops in which the basics
of data collection, processing and analysis are taught. In addition, it is important to
pursue an interdisciplinary approach in which professionals from different depart-
ments work together to integrate data-driven decision-making into the entire busi-
ness operation. This also ensures that any existing data silos are broken down.
Beyond this, there is a need to build specific AI competencies. In addition to
general data competence, it is important to offer targeted training and development
programs for specific AI competencies. This could include training in machine
learning, natural language processing, or computer vision. It is advantageous to
offer both in-depth training for AI specialists and more general training for
executives and other employees. The latter should understand the basic principles
of AI and its application possibilities in the company, in order to possibly reduce
existing uncertainties and fears that often arise from ignorance.
The development and recruitment of in-house AI talents presents another
major challenge. If you do not yet have your own AI expertise, it is worth relying on
external service providers in the beginning to accelerate internal AI processes.
This also includes projects with research centers to gain access to qualified
employees. This can also be achieved through collaborations with companies in
your own industry, even if you are in competition with them. For innovations at the
Horizon 3 level, it is often necessary to overcome your own industry boundaries.
For this, a cooperation with AI start-ups or other leading AI companies can be help-
ful. Through practical experience with AI applications, you gain access to the latest
research results and can benefit from their expertise. Create opportu-
nities for the entire company to come into contact with AI systems and AI applica-
tions in this way—such as through an AI hub.
Finally, building AI competence also involves promoting an AI-friendly corpo-
rate culture that encourages innovation and a willingness to experiment. The use of
AI often brings changes and can provoke resistance. Therefore, it is important to
develop a corporate culture that welcomes changes, supports learning from mis-
takes, and promotes continuous knowledge and competence building. To support
this change process, you should also allow playful engagement with artificial
intelligence. This can help to reduce fears within the company and to recognize the
advantages of these technologies. The experiences you gather here enable you to
identify initial AI application areas for your company.

Food for Thought


Building AI competencies is a long-term investment that requires commitment and
strategic planning. However, by combining training, collaboration, and cultural
change, companies can build strong AI competence that helps them to take advan-
tage of this transformative technology and to be successful in the digital economy.

12.3.2 Development of Goals and Strategies for AI Use

The goals for AI use should be derived from the findings of the status quo analysis.
These goals can vary greatly depending on company size, industry, budget, compe-
tence level, and specific challenges. Check which of the following goals of AI
implementation are particularly important for your company:

• Efficiency increase and cost reduction

AI-based solutions can help automate and optimize processes, leading to cost sav-
ings and an increase in operational efficiency. This could be in the form of
automating routine tasks, improving supply chain logistics, or by reducing
maintenance and repair costs through predictive maintenance.

• Improvement of customer experience

AI solutions can also be used to create personalized customer experiences, increase
customer loyalty, and acquire new customers. This can be achieved through person-
alized product or service recommendations, as well as through chatbots for cus-
tomer service.

• Development of new products and services as well as business model innovations

AI can support companies in developing innovative products or services and
exploring new business models. This could be achieved through the use of AI in research
and development or by using AI to analyze market trends and customer behavior.

• Improvement of decision-making quality

By using AI for data analysis, companies can make better-informed decisions—for
example in procurement, in production and logistics, as well as in marketing and
sales. AI can help to identify patterns and correlations in large amounts of data,
which can lead to improved forecasts and decisions.

• Risk reduction

AI can be used to identify and mitigate risks, for example through fraud detection in
the financial sector or through predictive analysis in the field of operational safety,
especially in cybersecurity.

These goal categories can only serve as a starting point for your specific goal
formulation. These goals need to be adapted to the specific needs and challenges of
your company. It is also essential that the AI goals of your company align with the
corporate purpose, the corporate strategy and the company values. These goals then
form the starting point for formulating the AI strategy to achieve these goals.

Note Box
Any use of AI should start with business-oriented questions:

• In which areas of the value chain can AI contribute to cost reduction and/
or efficiency increase and thus to value enhancement in a comprehen-
sive sense?
• Which AI applications can improve the customer experience?
• What new products, services and/or business models are made possi-
ble by AI?

The following applies:


Artificial intelligence has no value in itself—the relevance of AI only
arises from its value-creating use!
AI applications should contribute to creating added value—for employees,
executives, customers as well as the economy and society as a whole.
Therefore, it is essential to define specific KPIs to capture the desired AI
effects. Such AI KPIs need to be integrated into the processes to continuously
capture the achieved success contributions.

An AI strategy refers to the systematic plan to gradually unlock the potential of
artificial intelligence for a company. This involves identifying possible application
areas, evaluating their advantages and disadvantages, coordinating business and IT
teams, providing budgets, and adapting organizational structures to support the
implementation of AI. The following areas represent key elements of an AI strategy
for businesses:

• AI Vision

The AI strategy should be embedded in the overall context of the company and
closely linked to its overarching ambitions. It should be clearly stated how AI sup-
ports the digital transformation and other strategic goals of the company. As part of
the strategy work, various approaches and focus areas should be defined that are to
be strategically developed through AI.

• Strategic Action Plan for Artificial Intelligence

Examine the potential impacts of artificial intelligence on your business models,
processes, employees, and capabilities. Pursue a portfolio approach to AI opportu-
nities and assign clear responsibilities for the development and implementation of
the AI strategy. Interdisciplinary teams and data-driven competencies are crucial for
success. This action plan also includes the development of internal and external AI
networks.

• Implementation of AI

Describe the framework for possible AI application fields to guide the search for
exciting applications. Using scoring models, the next step is to select promising
projects and to set priorities for implementation in a long list and a short list.

• Definition of Success Metrics

At this stage, the success criteria should be defined against which the results of the
AI initiatives will be measured. These KPIs should be compiled in a KPI
dashboard.

• AI Risk Management

Analyze the extent of your risk exposure in various key areas. These include regula-
tory risks (such as compliance with data protection laws and copyrights), reputation
risks (e.g., wrong decisions due to AI bias) and organizational risks (e.g., lack of
skills or inadequate infrastructure, such as for data backup). Based on the insights
gained here, measures to reduce or avoid risk should be developed.

• Communication on the Status of AI Implementation

Actively communicate the start and progress of the AI initiative in your company.
Use the successes of the AI team to tell stories and encourage other executives to
communicate the importance and value of the AI strategy.

Note Box
To alleviate fears of executives and employees about a potential “job killer”
AI and robots, it is important to involve your own workforce early in the
decision-making processes. This can promote acceptance before the “new
colleagues” arrive in the company.

Food for Thought


You should be aware that the use of artificial intelligence usually does not
promise short-term success. Both the benefits for customers and a ROI will
often only materialize later with AI applications. Therefore, you must create a
constant test and prototype mentality in your company, which allows
enough room for failure.
For this, your managers need a “skill to kill” in order to end projects—without
damaging their own reputation—when the expectations placed in them are
not met. This may sound initially frustrating, but it is crucial for long-term
success. At the same time, you should not be too impatient with the ROI
expectations of AI projects. Otherwise, you will end some projects before
they could prove themselves.

12.3.3 Development of AI Governance

An important module in the corporate use of AI is the development of an AI
governance. AI governance refers to the systematic structures, procedures,
and policies used to manage and monitor AI systems and AI technologies within an
organization. It is an integral part of overall IT governance and includes areas such
as ethics, data protection, data security, compliance, accountability, and transpar-
ency in relation to the development, implementation, and use of AI.
As AI is increasingly embedded in business processes, the need for solid gover-
nance becomes ever more urgent. Effective AI governance ensures that a company’s
AI applications adhere to ethical standards, comply with laws, and minimize the
risk of misconduct. It also promotes trust from stakeholders and the public in the
company’s AI systems, by ensuring transparency and accountability. In addition,
well-established AI governance can help to unlock the full potential of AI and opti-
mize the benefits and performance of AI investments.
The following steps to introduce AI Governance should be considered:

• Development of an AI Governance structure

First, a structure for AI governance needs to be developed. This includes the
appointment of an AI governance committee or a similar group, which
takes responsibility for the monitoring and control of the AI initiatives in the com-
pany. The committee should include representatives from various areas, including
IT, law, ethics, compliance, and business areas.

• Development of AI policies and procedures

In the next step, the company should develop clear policies and procedures for the
use of AI. These should address questions about data usage and protection, taking
into account ethical and compliance requirements, as well as responsibility for and
control of AI applications.

• Training and awareness for AI deployment

It is important that all employees and managers who work with AI are informed
about the policies and procedures and are trained in them. This also includes raising
awareness of the ethical and legal implications of AI use.

• Monitoring and review of AI deployment

In addition, a process for regular monitoring and review of AI initiatives should be
established. This should include the review of compliance with AI policies and pro-
cedures as well as the assessment of the effectiveness of AI governance. This pro-
cess should also provide the opportunity for continuous improvement of AI
governance practices.
By establishing a solid AI governance structure and implementing clear policies
and procedures, a company can ensure that its AI applications are ethical and com-
pliant with the law. This strengthens the trust of stakeholders. In addition, this
should also maximize the potential of AI applications.

Note Box
Develop an AI governance for your company and apply it to your AI applications.
The development of laws is always much slower than the development of new
technologies. This also increases your personal responsibility to create processes,
products, services, and business models that aim to benefit people as a whole.
So that technology helps people—and not the other way around.

12.4 Optimize Processes and Realign

The integration of AI applications as well as the results achieved through artificial
intelligence presents each organization with more or less extensive challenges. This
may involve a process redesign: based on new insights or new possibilities of process
design, existing processes and procedures in the company can be questioned.
Existing processes can be completely automated (for example in administrative,
production and/or logistics tasks). Here the machine-to-machine interface is in
focus. In other cases, human activities can be enriched by AI results. Here, relevant
information can be made available to employees in the customer service center or in
the maintenance area in real time. This involves the design of the human-to-
machine interface—for example between digital assistants or expert systems and
the employees in the company. Every company is called upon to recognize the rel-
evant AI use cases early on.

12.4.1 Identification of Exciting Use Cases

• The conversion of possible use cases into profitable business cases is
essential for a successful AI journey. It does not make sense to install a humanoid
robot in the reception area of a hotel if it neither benefits the guests nor generates
a positive ROI in the long term by providing services more cost-effectively
through robots.
• Be sure to pay attention to an appropriate context! The use of artificial intelli-
gence also requires a sense for the correct timing. This must fit, first, your employees
and executives and, second, your customers. If your teams do not support
the AI applications, these solutions will certainly not convince in the eyes of the
customers. Therefore, it is important that especially the employees in direct cus-
tomer contact have intensively dealt with the visible AI solutions at the customer
front and can also positively represent them to the customers.
• With regard to timing, however, it also applies that you should not wait too long
with a comprehensive exploration of the possibilities of artificial intelligence for
your company until new and old competitors are already active in the market
with new solutions. Take—supported by budget and competence—an active role
in innovation!

Especially companies that are still in the early stages of their AI journey tend to
pursue AI use cases with a focus on cost savings and efficiency improvements in
existing processes. Here, companies are operating at the Horizon 1 level (see sect.
12.2.2). This is associated with a high risk if used uncritically. The AI integra-
tion into existing processes only makes sense if these processes are already running
optimally. But even then, the question arises whether you could not achieve more
convincing results through an AI-based re-design of processes. One mistake should
be avoided in any case—simply continuing the existing processes supported by
AI. After all,

If you digitize a shitty process, you will get a digitized shitty process.
Rather, it should be checked from the beginning whether existing processes
could be designed completely differently through the use of AI. Only then will the
possible optimization potential through AI be fully exploited. As AI maturity pro-
gresses, companies typically expand their AI use and shift their focus, for example,
to key elements of the customer experience or the entire value chain. The use and
thus also the effect of artificial intelligence consequently increases with an increas-
ing AI maturity of the company, because ever broader AI applications are initiated.
In addition, it must be determined whether you can achieve immediate perfor-
mance improvements in your existing processes and in your own product/service
portfolio through artificial intelligence. Here, you can focus on those use cases
where proven AI technologies already exist. These include the automation of pro-
duction processes, the use of predictive maintenance or the refinement of analysis
possibilities in the CRM area (such as customer value analyses or credit ratings).
Here, you can base your decisions on a one- to two-year horizon to achieve mone-
tarily measurable successes.

• With the Horizon 2 level you broaden your view and check which new AI-driven
processes can significantly further develop existing activities. In addition, you
should also drive forward an analysis of the entire range of services (products and
services). This way, you can determine which comprehensive
improvements or additions to your service portfolio can be achieved through
Artificial Intelligence. This level includes, for example, the integration of chat-
bots into customer dialogue or the integration of AI applications into the diag-
nostic and therapeutic area in healthcare. Another example of this level is the
development of digital twins of plants and aggregates to open up new business
fields for already established activities. Here you can focus on a two- to four-year
horizon.
• The Horizon-3 level finally shifts the perspective significantly beyond the exist-
ing business model and challenges you to look for exciting AI cases in related and/
or new task fields and innovative business models. This is certainly the most
demanding area. To achieve success here, you must overcome familiar patterns of
thought and action. It can be helpful to collaborate with strong external partners
from academia who are at the forefront of development. Intensive collaboration
with start-ups also helps to focus on comprehensive AI-driven innovations. In this
way, you build a network to exchange and develop common solutions.

For this horizon, you can plan on a three- to five-year period.


On all three horizon levels, a test-and-learn approach is practiced. For this, the
respective business cases need to be validated continuously and time-limited experi-
ments to be conducted. This way, you can quickly determine which approaches
prove successful in order to secure future potential for success. When evaluating AI
initiatives, commercial and technical executives should be equally involved to bring
both perspectives together in a holistic evaluation approach. This way, you can
prevent silo formation: Technology here—Business Case there.

Such silos often meant that the responsible managers for IT, Digital or Innovation
were alone in the lead, sometimes without a targeted focus on business cases. This
sometimes led to the “hammer looking for a nail” phenomenon:
We have a solution, where is the (customer) problem?
You can counteract such a development with the described holistic evaluation
approach.
In addition, you should look out for the solution needs of your customers. Agile
project methods like Design Thinking can help you identify new business models
for AI use. You can determine what expectations customers have of your offers,
for example, through the Kano concept (see Fig. 12.6; see [5, 8], pp. 381–383).
The Kano Model was developed by the Japanese scientist Noriaki Kano. It
provides a framework for prioritizing features or functions based on the perceived
benefit of the customer. The model categorizes product features into three
main groups:

• Basic requirements (Must-be) are the fundamental functions that a product or
service must fulfill to meet the expectations of customers. These features do not
lead to satisfaction when they are present; however, their absence leads to
dissatisfaction.
• Performance requirements (Performance) are aspects where customer satisfac-
tion is proportional to functionality: the higher the performance, the more satis-
fied the customer is.
• Enthusiasm requirements (Delighters or Attractive) on the other hand are fea-
tures that the customer does not expect, but when they are present, they lead to
high satisfaction or even to excitement. It is important to note that over time
excitement features can become performance or even basic features if customers
start to expect these features (see the arrow in Fig. 12.6).

Fig. 12.6 Kano Model of Customer Satisfaction (schematic): the diagram plots the
degree of requirement fulfillment (from “requirements not fulfilled” to “requirements
fulfilled”) against customer satisfaction (from “customer is not satisfied” to “customer
is satisfied”). Three curves are shown: basic requirements (implicit, natural, not
articulated, obvious), performance requirements (articulated, specific, measurable)
and enthusiasm requirements (surprising, pleasing, additional). Over time, enthusiasm
requirements migrate towards performance and basic requirements (arrow).



The Kano Model is used to understand and optimize products, services, processes
or even entire business models from the perspective of the customer. By identify-
ing and categorizing the different requirements, companies can better understand
which features are really important and which features can contribute to differentia-
tion and increasing customer satisfaction. This model also promotes continuous
product innovation, by encouraging companies to constantly look for new excite-
ment features that can increase customer satisfaction. Therefore, the ongoing review
and adjustment of features according to the Kano Model is crucial for long-term
success and customer satisfaction.
The necessity and urgency, but also exciting use cases for an AI deployment
for your company can be determined using an AI-specific adaptation of the
Business Model Canvas. The term “canvas” refers to the blank canvas on which such
a concept can be developed, clearly visible, in an interdisciplinary team (see in
depth [6], p. 75–78). In Fig. 12.7 you can find a canvas concept for the use of
Artificial Intelligence. Using this concept, you can determine the relevance of
Artificial Intelligence for your company. The advantage of this canvas approach
is that you can use it well in internal company workshops to work together on AI
topics. If you use this Canvas for visualization in large format, ideas and sug-
gestions can be inserted immediately using Post-its. This stimulates the creativ-
ity of all.
Based on the results of the status quo analysis and the Business Model Canvas, a
long list of relevant AI projects should be developed. These projects could cover
areas where the use of AI could potentially be beneficial for the company. To develop
a short list of relevant AI projects, specific selection criteria need to be defined.
These selection criteria include, among others:

Fig. 12.7 Canvas for the use of Artificial Intelligence (fields and guiding questions):

AI Opportunities: Which potentials can be leveraged in our industry through AI?
What can we gain as a company through AI deployment? Which AI use cases are
particularly exciting?

AI Risks: What risks are associated with AI deployment for our industry? What risks
are associated with AI deployment for our company? How should legal risks (e.g.
data protection) be assessed? How will customers evaluate an AI engagement?

AI Goals: What process improvements for customers can be achieved? What new
products/services for customers can be developed? Which optimizations in the value
chain can be achieved? What potentials in the production area can be leveraged?

AI Data Foundations: Which relevant data is already available internally? Which
data can be procured externally? Where are the data gaps and how can they be
closed? How can continuous data provisioning be ensured?

AI Resources: Which AI employees are needed? Which external resources are
required? Which IT infrastructure is necessary? Which networks along the value
chain need to be established? Which additional investments are necessary?

Cost Structures: What initial and ongoing costs are associated with AI deployment
(systems, licenses, programming, data, employees, training)? What costs are
associated with change management towards Artificial Intelligence?

Revenue Structures: Which revenue potentials can be identified in which areas
(direct/indirect customers, new/further developed processes, new products/services,
new/further developed business models)? In which timeframes can revenue entry be
expected? What uncertainties are associated with revenue entry?



• potential ROI
• technical feasibility
• alignment with corporate strategy
• justifiability of resource allocation (e.g., time, personnel, finances,
technologies)
• Time-to-Value (this refers to the time that will elapse before a first benefit can be
achieved by the users; cf. [5], pp. 52–54).
• extent of possible risks

Based on these selection criteria, the projects on the long list can be evaluated and
prioritized to obtain a short list. This list should include the most promising AI
projects that the company should implement in the near future. This evaluation and
selection process should be regularly reviewed and adjusted. Only in this way can
it be ensured that the selected AI projects remain relevant and are in line with the
changing business conditions and goals of the company. Moreover, a change in
available resources—new technologies, additional funding programs, etc.—could
shift the priorities for the short list.
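
The selection criteria listed above can be operationalized in a simple weighted scoring model. The following Python sketch is purely illustrative: the criteria weights, the scoring scale, and the example projects are hypothetical and need to be replaced by your own assessments.

# Illustrative weighted scoring model for prioritizing the AI long list.
# Criteria follow the selection criteria above; weights and scores (1-5 scale)
# are hypothetical placeholders.

CRITERIA_WEIGHTS = {
    "potential_roi": 0.25,
    "technical_feasibility": 0.20,
    "strategy_fit": 0.20,
    "resource_justifiability": 0.15,
    "time_to_value": 0.10,
    "risk": 0.10,  # higher score = lower risk
}

long_list = {
    "Predictive maintenance": {"potential_roi": 4, "technical_feasibility": 4,
                               "strategy_fit": 3, "resource_justifiability": 4,
                               "time_to_value": 4, "risk": 3},
    "Customer service chatbot": {"potential_roi": 3, "technical_feasibility": 5,
                                 "strategy_fit": 4, "resource_justifiability": 4,
                                 "time_to_value": 5, "risk": 4},
}

def total_score(scores: dict) -> float:
    """Weighted sum of the criterion scores for one project."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

# Rank the long list by total score to derive the short list
short_list = sorted(long_list, key=lambda p: total_score(long_list[p]), reverse=True)
for project in short_list:
    print(f"{project}: {total_score(long_list[project]):.2f}")
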
It is crucial for success that the use cases on the short list are linked to the corpo-
rate and/or business strategy at an early stage. In addition, top management support
and the timely provision of necessary budgets and personnel resources are indis-
pensable. At the same time, it is crucial not to follow the AI hype unreflectively, but
to define AI application fields relevant to your company that promise a positive
ROI. Various application fields have already been discussed to support your creative
processes.

12.4.2 Access to AI Solution Modules

Not every company has sufficient resources to develop its own AI solutions.
Therefore, for many AI users, open-source software solutions are of great impor-
tance. Open-source projects have established themselves as a mainstay in AI
development in recent years. They represent a cost-effective, flexible, and versatile
way to explore and implement AI technologies. Tools like TensorFlow (https://
www.tensorflow.org/), PyTorch (https://pytorch.org/) and Keras (https://keras.io/)
are widely used in both academic research and in industry.
They offer extensive functionalities for machine learning and deep neural networks.
Thanks to the wide community support and the constant further development of
these tools, companies can stay up-to-date with AI technologies without having to
purchase expensive proprietary software solutions.
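
To illustrate why these frameworks lower the entry barrier, the following minimal Keras sketch defines and trains a small binary classifier on synthetic data. The architecture, the data, and the hyperparameters are placeholders for illustration only, not a recommendation for a specific use case.

# Minimal Keras sketch: a small binary classifier trained on synthetic data.
# Architecture, data, and hyperparameters are placeholders for illustration only.
import numpy as np
from tensorflow import keras

# Synthetic data: 1,000 examples with 10 features each
X = np.random.rand(1000, 10)
y = (X.sum(axis=1) > 5).astype(int)  # artificial target variable

model = keras.Sequential([
    keras.layers.Input(shape=(10,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("Training accuracy:", model.evaluate(X, y, verbose=0)[1])
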
For the development of AI applications, powerful hardware and infrastruc-
ture solutions are also required. The selection of the right hardware, such as spe-
cialized graphics processors (GPUs/Graphics Processing Units) or Tensor
Processing Units (TPUs) has a lasting impact on the efficiency and speed of AI
training and inference processes. Companies can also rely on cloud-based solu-
tions, which provide flexible and scalable computing resources. Services like
Amazon Web Services, Google Cloud or Microsoft Azure offer specialized AI and
machine learning services that facilitate access to powerful hardware and pre-
configured software stacks.
Data is—as has been emphasized several times—the fuel for AI. Access to large,
high-quality and diverse data sets is essential for the development of effective AI
models. Therefore, with the advancing use of AI, another important task goes hand
in hand, which was also addressed in the canvas in Fig. 12.7: the creation of your
own data ecosystem. For this, the data that is relevant for the selected use cases
needs to be defined. Subsequently, it is necessary to check which of these data is
already available in the company and on which there may even be exclusive access.
By comparing the target data and the actual data, you will identify more or less large
data gaps. Now begins the exciting task of tapping into further important data
sources. In doing so, you should consider the criteria that need to be taken into
account when selecting data (see Sect. 2.4).
At this point, you will often painfully realize what limitations the GDPR holds
for you—and to what extent companies like Amazon, Google and Meta can draw
on far richer data pools in comparison. To compensate for this competitive disadvantage
and overcome the boundaries of your own data pool, collaborations with other data
partners may become necessary and sensible. Perhaps it is possible to build a data
ecosystem with third parties to close important data gaps. In addition to “tapping”
relevant data sources, the utilization of the widest variety of data formats pres-
ents another major challenge. If it is possible to merge different data streams in a
larger ecosystem, this can lead to lasting competitive advantages. The continu-
ously flowing data stream from sensor and machine data as well as especially from
social networks needs to be managed—often in real time.
With Meta AI, Meta is committed to advancing the development of AI technology.
The goal of Meta is not to drive AI forward alone. Therefore, the company is commit-
ted to the AI community through publications, open-source software, and
participation in conferences and workshops. In addition, collaboration with repre-
sentatives from the scientific community is being promoted. The main areas of
focus here are: AI infrastructure, generative AI, NLP, computer vision, and other
core areas of AI. Many models developed by Meta can be used as open-source solu-
tions by all interested users (cf. [9]). Every company should consider the advantages
and disadvantages of such collaboration.
Another opportunity for cross-industry collaboration is offered by the Partnership
on Artificial Intelligence. This was initiated in 2016 by AI researchers from the
companies Amazon, Apple, DeepMind, Facebook, Google, IBM and Microsoft. In
2017, the partnership was expanded into a multi-stakeholder organization by the
addition of six non-profit board members. The Partnership on Artificial Intelligence
now includes 105 partners from 17 countries and defines itself as follows [10]:
Our mission: Bringing diverse voices together across global sectors, disciplines, and
demographics so developments in AI advance positive outcomes for people and society.
Our vision: A future where Artificial Intelligence empowers humanity by contributing
to a more just, equitable, and prosperous world.
Our values:

• Equity & Inclusion: As an independent body, Partnership on Artificial Intelligence
exists for the benefit of people and society. We empower diverse voices to participate
from ideation through implementation, striving for fairness, equity, and inclusion.
• Conviction & Dependability: Committed to tackling the hard questions through
meaningful dialogue, research, insights, and guidance, we’re determined to maintain
courage in the face of adversity, and are dedicated to facilitating effective processes
that lead to significant outcomes.
• Learning & Compassion: We aim to provide and participate in an unbiased process
with open hearts and minds. Our work emphasizes shared learning, open dialogue,
and direct communication.

Depending on the strategic AI relevance for your company, it may be beneficial to
join this alliance or at least to follow the discussions there intensively.
In addition, it is worthwhile to engage with the idea of the Fraunhofer Alliance
Big Data. This alliance brings together the cross-industry expertise and the experi-
ence of over 30 Fraunhofer institutes. The alliance supports companies in the devel-
opment and implementation of big data and AI strategies as part of its contract
research. For this purpose, AI systems are being developed that not only comply
with European data protection standards, but are also transparent, trustworthy, and
reliable. In parallel, the Fraunhofer Big Data Alliance has also developed a training
program to educate experts in Artificial Intelligence and Data Science. The Alliance
provides intelligent solutions for various industries (cf. [11]):

• Business and Finance
• Energy and Environment
• Life Science and Health Care
• Logistics and Mobility
• Production and Industry 4.0
• Security

Amazon SageMaker supports companies in developing, training, and implementing
machine learning models for a wide range of use cases. For this purpose, the rele-
vant infrastructure, tools, and workflows are provided. For training the AI engines,
companies can access a large amount of structured data (table data) and unstruc-
tured data (photos, video, geospatial, and audio). With an optimized infrastructure,
the training time can often be significantly reduced. For this purpose, the Machine
Learning Operations (MLOps) can be automated and standardized. Machine
Learning Operations transfer the DevOps concept established in software develop-
ment to machine learning. DevOps is based on lean principles and agile methods
and supports software development. Machine Learning Operations increase effi-
ciency in developing, deploying, managing, and monitoring machine learning mod-
els. With SageMaker MLOps, it should become possible for more people and
companies to work with Artificial Intelligence (cf. [12]).
With Amazon Rekognition, Amazon [13] provides pre-trained or customizable
Computer Vision APIs, without the need to develop new machine learning models.
The use of these APIs allows millions of images, streaming, and stored videos to be
analyzed within seconds. Such AI deployment can automate and thus accelerate
checks previously performed by humans. Through Amazon Rekognition, for
example, potentially unsafe, inappropriate or unwanted content in images and
videos can be detected. In addition, objects (such as people, pets, packages), scenes
(delivery of a package, approach of a person) and activities (dancing or singing) can
be recognized and tagged with labels. This can also include custom objects, such
as brand logos, that need to be recognized.
To use the functionalities of Amazon Rekognition, the provided Rekognition
API needs to be integrated into the respective application. For this, no specialized
machine learning knowledge is necessary. Like other AI systems, the image recog-
nition used is continuously trained with new data to thereby increase its ability to
recognize objects (e.g., bicycle, telephone, car, building), scenes (such as parking
lot, beach, shopping mall, city) and activities (in videos, for example, “parcel deliv-
ery” or “football game”). In addition, the respective recognition accuracy is to be
improved. Depending on the volume, Amazon Rekognition offers batch and real-
time analysis. The payment for these services is based on the number of images
analyzed or the length of the videos, as well as the size of the repository provided
for image recognition, i.e., a directory with face images (cf. [13]).
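
How such an API integration can look is outlined in the following sketch using the AWS SDK for Python (boto3). The bucket name, object key, region, and thresholds are hypothetical placeholders; details of the response handling should be checked against the current AWS documentation.

# Sketch: label detection with Amazon Rekognition via boto3.
# Bucket, object key, region, and thresholds are hypothetical placeholders.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-central-1")

# Detect up to ten labels (objects, scenes, activities) in an image stored in S3
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "images/entrance.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f} %')
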
As a solution partner, Aleph Alpha also offers an advanced AI language model
“Made in Germany”. Unlike ChatGPT, the offer from Aleph Alpha is more aimed at
companies and public administration—and less at consumers. The Luminous series
introduced by Aleph Alpha is a family of large language models. It currently consists of three
vanilla models, which differ in their complexity and capabilities (cf. [14]):

• Luminous-supreme

Luminous-supreme is the largest and most powerful model of the Luminous model
family. It can solve all tasks that the smaller models can solve, and is particularly
well suited for creative writing of texts.

• Luminous-extended

Luminous-extended is the second largest model. It is well suited for tasks such as
information extraction and language simplification. Compared to Luminous-base, it
is more powerful and faster in a variety of tasks and more cost-effective than
Luminous-supreme.

• Luminous-base

Luminous-base is the smallest and at the same time the fastest and most affordable
model of the Luminous model family. Therefore, this model is suitable for applica-
tions where speed and low cost are particularly important, such as for tasks like
classification and labeling.

• Luminous-Control Models

The control models are versions of models that are optimized for following instruc-
tions. This means that these models have been fine-tuned for a variety of tasks that
are described by human-written instructions. One of their great advantages is that
they have much better performance in zero-shot prompting compared to the vanilla
models. If it is sufficient for certain tasks to give a model a command in natural
language to achieve a good result, this is referred to as zero-shot prompting. This
simplifies the application by requiring fewer inputs. The control version of a model
is available for all Luminous models.
All Luminous models were trained in the five most commonly spoken European
languages: English, German, French, Italian, and Spanish. The Luminous models
can also work with images. Thus, the Luminous family achieves multimodality. In
addition, the models contribute to explainability by allowing users to understand
where their answers come from. This includes source verification features. Thus,
the language model can indicate which part of a source the AI system-generated
answers are based on. It can also be indicated which sources may contradict these
statements. Such proof is important so that users can make responsible decisions. At
the same time, Aleph Alpha contributes to the explanation of AI-generated con-
tent (cf. [14]; on the relevance of Explainable AI cf. section 1.7.3).
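
A zero-shot prompt against one of the Luminous control models could look roughly like the following sketch, which uses the aleph-alpha-client Python package. The class and parameter names, the token handling, and the model identifier are assumptions that may differ between client versions and should be checked against the current Aleph Alpha documentation.

# Sketch: zero-shot prompting against a Luminous control model.
# Client classes, parameters, and the model name are assumptions based on the
# aleph-alpha-client package; check the current documentation before use.
from aleph_alpha_client import Client, CompletionRequest, Prompt

client = Client(token="YOUR_API_TOKEN")  # hypothetical token placeholder

request = CompletionRequest(
    prompt=Prompt.from_text(
        "Summarize the following customer complaint in one sentence:\n"
        "The delivered machine part arrived two weeks late and was damaged."
    ),
    maximum_tokens=64,
)

response = client.complete(request, model="luminous-base-control")
print(response.completions[0].completion)
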
IBM [15] is developing Watsonx, a comprehensive AI and data platform. This
platform consists of three powerful components that can be used depending on the
objectives of the user company:

• watsonx.ai

With this, for example, basic models and generative AI models can be developed.
Watsonx.ai allows training, validating, tuning, and deploying foundation and
machine learning models. Foundation models enable fine-tuning of AI to a compa-
ny’s specific data and domain knowledge.

• watsonx.data

watsonx.data is an open, hybrid, and controlled data store. With access to the data
available here, own AI engines can be trained.

• watsonx.governance

watsonx.governance supports achieving transparency and explainability in your
own data and AI workflows. This solution helps to control, manage, and monitor
your own AI activities.
Crucial for successful AI deployment is also the availability of further relevant
data. For this, companies can access a variety of open data platforms, which provide
publicly accessible data in various domains. Companies can use the following open
data platforms, among others:

• Data.gov

The platform describes itself as the “home of the U.S. government’s open data”.
This is the official U.S. government website for open data. It provides access to a
wide range of data, tools, and resources for conducting research, developing web
and mobile applications, designing data visualizations, and much more. In total,
there are almost 300,000 datasets available (see [16]).

• OpenML

OpenML is a global laboratory for machine learning and considers itself an open
platform for the exchange of datasets, algorithms, and experiments. A central element
is AI-ready data: all datasets are uniformly formatted, have extensive, consistent
metadata and can be loaded directly into the preferred environments. OpenML is
open and can be used free of charge. Integration into ML libraries is easily possible
(see [17]; see the code sketch after this list of platforms).

• AWS Open Data Registry

The AWS Open Data Registry provides access to datasets that are available via
AWS resources. These include datasets from the Allen Institute for Artificial
Intelligence, Digital Earth Africa, Data for Good at Meta, NASA Space Act
Agreement, NIH STRIDES, NOAA Open Data Dissemination Program, Space
Telescope Science Institute, and Amazon Sustainability Data Initiative
(see [18]).

• European Data Portal

The European Data Portal is an initiative of the European Union that facilitates
access to public data from all over Europe. This portal is the official provider of
publication services for all EU bodies, institutions, and agencies. As such, it is a
central point of contact for access to EU law, publications, open data, research
results, tender notices, and other official information. It provides data on a variety
of topics, including agriculture, climate, education, energy, environment, and more
(see [19]).

• GovData—The Data Portal for Germany

GovData is the central open data platform of the German federal government and
the states. It offers a wide range of data from various areas of public administra-
tion—currently over 80,000 datasets from the federal government and the federal
states (see [20]).

• Open Data

Open Data is a central solution approach to support the technological and commercial
change of the global tourism industry with information. For this purpose, the German
National Tourist Board and various marketing organizations of countries and cities
have decided to make their data available in the form of Open Data through a joint
project (cf. [21]).
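
As announced in the OpenML entry above, the following sketch shows how an OpenML dataset can be loaded directly into a Python environment via scikit-learn. The chosen dataset name is only an example; replace it with a dataset that is relevant for your own use case.

# Sketch: loading an OpenML dataset directly into Python via scikit-learn.
# The dataset name "credit-g" (German credit data) is just an example.
from sklearn.datasets import fetch_openml

credit = fetch_openml(name="credit-g", version=1, as_frame=True)

X, y = credit.data, credit.target
print(X.shape)            # number of examples and features
print(y.value_counts())   # class distribution of the target variable
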

In this context, reference should be made to the project Gaia-X already intro-
duced in sect. 2.4. The goal is to create a secure, federated data ecosystem that
reflects the digital sovereignty claims of the data owners as well as interoperability
and open-source principles, and realizes the federal vision of Europe. On this basis, data
and services can be provided, networked and shared in a trustworthy manner and
used to support innovations and make the benefits of the data economy accessible to
all data suppliers (cf. [22]). Time will tell to what extent these high claims can be
implemented in reality.
Every company is called upon to check whether there are additional data pools
for its own industry that can be used for AI deployment. The tool Google Dataset
Search can support this. This is a search engine by Google, which helps research-
ers find freely available online data. Users can find data pools using a search
term, which are hosted in thousands of repositories across the web. This is
intended to make these data pools generally accessible and usable (cf. [23]). It
should be noted that the availability and quality of the data on these platforms
can vary. Some platforms may provide data in machine-readable formats, while
other data may need to be manually extracted and processed. Before deployment
in AI applications, thorough data cleaning should be carried out to ensure a high
data quality.

12.4.3 Funding Programs for AI Applications

There are various government funding programs for Artificial Intelligence in
Germany and Europe, which support companies in the development and application
of AI. These programs usually offer financial support in the form of grants, loans or
tax incentives. Here are some of them:

• Funding program for the research, development and use of methods of
Artificial Intelligence in SMEs

This is a nationwide, cross-sector funding program for high-risk industrial research
and development projects in the field of Artificial Intelligence. The funding program
is aimed at small and medium-sized enterprises (SMEs) of the commercial sector
with a maximum of 249 employees and 50 million € annual turnover or 43 million
€ annual balance sheet total. Their partners from science and industry are also
involved in this funding (cf. [24]).

• Artificial Intelligence for the Common Good Support Program

Companies that develop and test the fundamentals for the use of common good-
oriented Artificial Intelligence can apply for a grant from the Federal Ministry for
Family, Seniors, Women and Youth. The projects can either create the basics for
common good-oriented “Artificial Intelligence” or develop and test common good-
oriented Artificial Intelligence (see [25]).

• Digital Europe Programme

The Digital Europe Programme is an EU support program that provides strategic
resources to address digital challenges and supports projects in five key areas:
Supercomputing, Artificial Intelligence, Cybersecurity, advanced digital skills and
widespread use of digital technologies in economy and society. The program has a
planned total budget of 7.5 billion € and aims to accelerate the economic recovery
and shape the digital transformation in Europe, especially for the benefit of small
and medium-sized enterprises (see [26]).
These and other support programs are usually competitive and require compa-
nies to submit an application in which they outline the potential and impacts of their
proposed AI innovation. The specific requirements and conditions can vary from
program to program. It is therefore important that companies carefully read the
relevant funding guides and conditions before submitting an application.

12.5 Enrichment of Products and Services through AI: Product and Service
Innovations—Embedding AI in the Business Model

In addition, products and services can be enriched by AI or product and service
innovations can be created. A multitude of ideas have already been presented in the
previous chapters. In addition, consideration should be given to AI-only solutions,
which would not be possible without access to Artificial Intelligence. Some of these
AI-only solutions have even become the central value driver of business models.
Here are a few examples of AI-only solutions or AI-supported business models:

• ChatGPT and Dall-E are two AI-only solutions that have brought the company
OpenAI to world fame and extensive capital inflows especially from Microsoft.
• The German DeepL translation program would not exist without AI. It is also the
heart of the corresponding company.
• DeepMind AlphaGo is an AI-supported product that masters the complex board
game Go more successfully than any human before.
• Aleph Alpha is a pure AI company with the vision: “To be the leading European
company researching and creating next-generation strong artificial intelli-
gence” [14].

12.6 Accompanying Change Management

To be successful in AI integration, the corporate culture and with it the mindset in
companies must be further developed. For this, trust in artificial intelligence and its
results must be built on a broad basis. The Explainable Artificial Intelligence
described in sect. 1.7.3 plays a role that should not be underestimated. It should be
taken into account that people will need time to adjust to this paradigm shift.
Consequently, the development of an AI-supporting culture is of great importance.
This requires not only investments in the development of the skills of AI employees.
A high degree of investment in middle and upper management is also required, so that
they understand the possibilities and limits of artificial intelligence comprehen-
sively—and do not suppress corresponding proposals and ideas of their employees
out of fear of embarrassment.
Consequently, internal change management is one of your biggest challenges to
align the corporate culture with the exploitation of the possibilities of artificial intel-
ligence. Because, as the quote often attributed to Peter Drucker goes:

Culture eats strategy for breakfast


Therefore, it is important that you involve all employees in the process of using AI
at an early stage, instead of presenting them with a fait accompli in the style of a
“bomb drop” strategy. In the long run, you will only be successful with even the best
AI application if the employees develop a digital mindset. Because AI solutions often intervene in
existing “territories” and therefore cause strong defensive reactions (see in depth on
methods of change management [5], p. 561–595). In addition, strong support from
the C-level is indispensable for the successful use of artificial intelligence—as also
in the digital and ecological transformation of companies (see [27]).

Food for Thought


Is the mindset of your organization already digital? It needs to be, because
artificial intelligence is not a singular IT project that you drive forward for a
few years, but a strategic orientation of the entire company! And the AI
impact will massively increase over the coming years—and will not leave
any area of the company untouched.

What is needed today are companies that enrich their processes, products, and
services with AI or bring innovative AI-based business models to the market. Do not
wait for solutions or support from politics or research.
Anyone who wants to make artificial intelligence a success must be agile,
open, and critical themselves!
Develop your own comprehensive AI competencies early on in exchange with
representatives of science and business. Through intra-industry and cross-industry
cooperations, you can manage the necessary investments. Design your own legally
compliant solutions in exchange with politics and invent innovative products,
services, processes, and business models in exchange with your customers!

12.7 Controlling the AI Journey

To review the company’s own AI journey through controlling, suitable Key
Performance Indicators need to be defined and merged into an AI Journey
Dashboard. This should contain a series of KPIs that measure both the technical
performance of the AI systems and their impact on business goals and processes.

The following KPIs can determine the efficiency and effectiveness of the initiated
AI initiatives:

• Accuracy of AI predictions

This captures how often the predictions or decisions of the AI are correct. This accu-
racy can be measured in different contexts depending on the tasks the AI is used for.

• Speed of the AI engines

This is about the speed at which comprehensive data pools can be evaluated and
decisions can be prepared or made autonomously. This is particularly important
when real-time analyses are required.

• Availability of the AI system

Here, the downtime of the deployed AI systems is recorded.

• Reduction of manual tasks

The focus here is on the question of the extent to which human work could be
replaced by the use of AI (on an hourly or man-day basis or in saved personnel costs).

• Improvement of decision-making

It is necessary to determine to what extent the quality of decisions has improved
after the introduction of AI solutions (e.g. through cost reduction, revenue/profit
increase, reduced machine downtime through predictive maintenance, etc.).

• Return on Investment

To determine the financial benefit of AI solutions, the achieved revenue increases
must be compared with the AI investments made. Increases in ROI can be
achieved, for example, through cost savings, revenue increases, or increased pro-
ductivity. The ROI for AI initiatives is to be determined for individual business areas
as well as for the company as a whole.

• Compliance Rate

The compliance rate measures how well the company complies with ethical guide-
lines and legal regulations for the use of AI.

• Risk Management Metrics

Risk management metrics are used to determine the frequency and severity of data
breaches, possible bias in AI models, and wrong decisions made by AI systems.

All these KPIs should be regularly reviewed and updated to ensure that they
continue to be relevant and useful for the monitoring of the AI Journey of the
company. It is also important that all this data is presented in an easily understand-
able format so that decision-makers can easily interpret and respond to it.
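
One simple way to consolidate such KPIs in an easily readable format is a small dashboard data structure like the following Python sketch. The KPI names follow the list above, while the target values, actual values, and the simple on-track logic are hypothetical and would need to be replaced by your own reporting rules.

# Sketch of an AI Journey KPI dashboard. KPI names follow the list above;
# target and actual values are hypothetical examples. Only higher-is-better
# KPIs are shown, so the on_track check is a simple comparison.
from dataclasses import dataclass

@dataclass
class Kpi:
    name: str
    actual: float
    target: float
    unit: str

    def on_track(self) -> bool:
        return self.actual >= self.target

dashboard = [
    Kpi("Accuracy of AI predictions", actual=92.0, target=90.0, unit="%"),
    Kpi("Availability of the AI system", actual=99.2, target=99.5, unit="%"),
    Kpi("Reduction of manual tasks", actual=1200, target=1000, unit="hours/year"),
    Kpi("Return on Investment", actual=1.4, target=1.2, unit="ratio"),
]

for kpi in dashboard:
    status = "on track" if kpi.on_track() else "needs attention"
    print(f"{kpi.name}: {kpi.actual} {kpi.unit} (target {kpi.target}) -> {status}")
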
In parallel, the controlling department can create a detailed audit plan to effec-
tively review the AI journey. The AI audit can focus on the review of AI systems,
data management practices, the training processes of the AI engines with internal
and/or external data, the training of employees and executives, and on the imple-
mentation of AI initiatives. The audit plan should also define the responsibilities for
different aspects of the AI audit. As part of this audit, it is also necessary to check
whether there has been a consistent alignment with the rules of AI governance (see
sect. 12.3.3). This involves, for example, compliance with ethical guidelines and
legal regulations when using AI. In addition, it is necessary to check to what extent
artificial intelligence itself can contribute to the control of the AI journey (see
also [28]).

Think-Box: Questions you should ask yourself!


• What business problems or challenges can we solve better with AI
than before?
• What are our goals and expectations regarding AI?
• How comprehensive is our own AI competence in the company?
• Do we have the necessary data and is it of high quality?
• Do we know which AI-ready solutions are available on the market?
• Do we have the skills and resources to develop AI projects internally or do
we need external support?
• How do we integrate AI into our existing systems and processes?
• Are there exciting AI-only products and services for our company or can
we make existing offers more successful through AI?
• Can we build a (new) business model on AI?
• How do we measure the success of our AI initiatives?
• What legal and ethical considerations do we need to take into account
when using AI?
• How do we deal with the human impacts of AI implementation, including
potential job losses or changes in the way we work?
• How do we ensure the transparency and traceability of our AI systems?
• How do we ensure the security of our AI systems?
• How do we promote a culture that supports innovation and experimenta-
tion with AI through change management?
• Has a controlling system been installed to review the results of the AI
journey?
• Do the defined KPIs cover the relevant areas of application?
• Has a binding audit plan been developed to accompany our AI journey?

12.8 Summary

The following guidelines for shaping your AI journey can help you make this
journey a success:

• Identify the potential areas where AI can benefit your company.
• The 3-horizon model is an important tool for checking the status quo of AI use
in your company.
• You can also formulate specific tasks to initiate activities at the horizon 2 and
3 levels, so that the strategic potential of artificial intelligence for your company
is recognized as early as possible.
• The use of the AI Maturity Map for analyzing your AI maturity is an indispens-
able first step in building AI.
• Here you can systematically determine whether you have already created the
basics necessary for successful use of artificial intelligence in your company.
• In addition, you can see in which application areas artificial intelligence has
already been incorporated.
• Based on the results of the AI Maturity Map, you begin your AI journey or
continue it with clear priorities.
• Define clear priorities for the selected use cases in the form of business cases,
based on solid analyses.
• Avoid putting all your resources into a single AI initiative: distribute your
investment risk without losing focus on profitable business cases.
• Build your own AI competencies internally, supported by external resources.
• Artificial Intelligence is increasingly becoming a core competency—and AI
professionals are already a scarce resource today.
• Acquire business-relevant data—it is the lifeblood of AI value creation.
• Start the AI transformation with small, quick steps: pilot projects, tests, and
simulations.
• Large upfront investments are not necessary; however, agility is the key to
success!
• Examine which canvas concepts can best serve you on your AI journey.
• Reinvest the resources that have been freed up by the use of AI into innovative
business models—the goal of AI deployment is not just cost reduction, but the
securing of long-term competitiveness.
• The implementation of AI requires a cultural change in your company—an
openness to collaboration between humans and machines is essential.
• Involve your employees early in the design of AI, practice active change man-
agement and talk about emerging concerns and fears.
• The traceability of AI results is an important but difficult-to-manage challenge.
• Trust in AI only grows over time; therefore, be patient and consistent in your
approach.
• Set up an AI controlling system to monitor and steer the implementation of
Artificial Intelligence in your company.

• Design an AI audit plan so that the processes are constantly and critically checked
for compliance with standards and for the achievement of the defined results.

References
1. Bünte, C., & Wecke, B. (2022). Artificial intelligence—The future of marketing (2nd ed.).
Springer Gabler.
2. Brenner, W., van Giffen, B., Koehler, J., Fahse, T., & Sagodi, A. (2021). Building blocks of
artificial intelligence management. A position determination. Springer Gabler.
3. Baghai, M., Coley, S., & White, D. (2000). The alchemy of growth. Perseus.
4. Blank, S. (2015). Innovation at 50x. Accessed Jun 06, 2023, from http://steveblank.com/2015/08/21/innovation-50x-in-companies-and-government-agencies/
5. Kreutzer, R. T. (2021). Toolbox for digital business. Leadership, business models, technologies
and change management for the digital age. Springer Gabler.
6. Kreutzer, R. T. (2018). Toolbox for marketing and management. Springer Gabler.
7. AppliedAI. (2023). Maturity assessment tool. Accessed Jun 06, 2023, from https://www.appliedai.de/ki-reifegrad
8. Berger, C., Blauth, R., Boger, D., Bolster, C., Burchill, G., DuMouchel, W., Pouliot, F., Richter,
R., Rubinoff, A., Shen, D., Timko, M., & Walden, D. (1993). Kano’s methods of understanding
customer-defined quality. Center for Quality of Management Journal, 2(4/1993), 3–36.
9. FacebookAI. (2023). We’re unlocking the possibilities of AI, together. Accessed Jun 06, 2023,
from https://ai.facebook.com/
10. Partnership-on-AI. (2023). Partnership on AI is bringing together device voices from across
the AI community. Accessed Jun 06, 2023, from https://www.partnershiponai.org/
11. Fraunhofer. (2023). Fraunhofer alliance big data and artificial intelligence. Accessed Jun 07,
2023, from https://www.bigdata-­ai.fraunhofer.de/
12. Amazon. (2023b). Amazon SageMaker. Accessed Jun 07, 2023, from https://aws.amazon.com/
de/sagemaker/
13. Amazon. (2023a). Amazon Rekognition. Accessed Mar 29, 2023, from https://aws.amazon.
com/de/rekognition/
14. Aleph Alpha. (2023). The power to create. Accessed Jun 02, 2023, from https://www.aleph-alpha.com/
15. IBM. (2023). IBM Watson. Accessed Jun 08, 2023, from https://www.ibm.com/watson
16. Data.gov. (2023). The home of the U.S. government’s open data. Accessed Jun 07, 2023, from
https://data.gov/
17. OpenML. (2023). OpenML, A worldwide machine learning lab. Accessed Jun 07, 2023, from
https://www.openml.org/
18. AWS Open Data Registry. (2023). Registry of open data on AWS. Accessed Jun 07, 2023, from
https://registry.opendata.aws/
19. European Commission. (2023a). Data.Europa.Eu—The official portal for European data.
Accessed Jun 06, 2023, from https://data.europa.eu/en
20. Govdata. (2023). The data portal for Germany. Accessed Jun 06, 2023, from https://www.
govdata.de/
21. Open Data. (2023). Open data in tourism. Accessed Jun 06, 2023, from https://open-data-germany.org/
22. BMWK. (2023). The German Gaia-X Hub. Accessed Apr 03, 2023, from https://www.bmwk.de/Redaktion/DE/Dossier/gaia-x.html
23. Google. (2023). Google dataset search. Accessed Jun 07, 2023, from https://datasetsearch.
research.google.com/
24. BMBF. (2023). Funding program Research, development and use of artificial intelligence methods in SMEs. Accessed Jun 07, 2023, from https://www.foerderdatenbank.de/FDB/Content/DE/Foerderprogramm/Bund/BMBF/erforschung-entwicklung-nutzung-ki-kmu.html

25. BMFSFJ. (2023). Funding program artificial intelligence for the common good. Accessed Jun 07, 2023, from https://www.foerderdatenbank.de/FDB/Content/DE/Foerderprogramm/Bund/BMFSFJ/kuenstliche-intelligenz-gemeinwohl.html
26. European Commission. (2023b). The “Digital Europe” program. Accessed Jun 07, 2023, from https://digital-strategy.ec.europa.eu/de/activities/digital-programme
27. Kreutzer, R. T. (2023). The path to sustainable corporate management. Springer Gabler.
28. Schneider, T. (2022). Digitalization and Artificial Intelligence. Use by and in Controlling.
Springer Gabler.
13 Outlook

Summary Today, we almost exclusively talk about the risks of Artificial Intelligence. We must primarily ask ourselves where AI can be used to create value.
Many fears associated with Artificial Intelligence stem from the fact that we fear
a loss of control because we can no longer understand why something happens. If
we are honest, we accept this loss of control in a multitude of applications—even
today. Who can—except for the respective specialists—explain exactly how a combustion
engine or a diesel engine works, how a solar power plant generates energy, how the
Google search algorithm is structured and how the processes in a smartphone work,
which we use every day? And yet we use and trust these systems—24/7.

Food for Thought


There will be no human future without Artificial Intelligence just as there will
be no everyday life without electricity or the internet.

What challenges for politics and society arise from the ever-increasing penetra-
tion of Artificial Intelligence? In addition to companies, politics has a significant
responsibility to ensure that the journey into the AI world is more associated with
advantages than with disadvantages. The first phase should be similar to that in
companies:
Self-education is indispensable—ideally through dialogue with the creators!
For politicians, it is more important than ever to assess the impacts of AI tech-
nologies in exchange with science and business. However, not only the risks, but
especially the opportunities should be at the center of consideration—despite all
the technophobia that is too often observed. Therefore, the focus should not only be
on the question of how many jobs are threatened by Artificial Intelligence. Rather,
the question should be how Artificial Intelligence can contribute to further intelli-
gent and healthy growth of our economy.
In addition, it is the duty of politicians to pass on this knowledge through appro-
priate direction of educational work also to the population, thus promoting the

development towards digitally literate citizens. Education plays a key role in acquiring AI competence and counteracting the increasing inequality of opportunity in access to technology. There is a real race between technology
and education. Those who have early access to new technologies, which goes
beyond mere operation and creates application competence, are better able to use
these skills in their professional life and counteract impending unemployment.
For this, a constructive political initiative needs to be strengthened in order to advance a
Social Debate on the subject of Artificial Intelligence, so that
fears and concerns are taken seriously and, ideally, can be overcome. Otherwise, the
discussion is left to conspiracy theorists and apocalypticists, who direct their horror
visions at an (uninformed) audience.

Note Box
Politics and business should jointly strive for a culture of transparency and
trust with regard to Artificial Intelligence. This is the best breeding ground for
the necessary pioneering spirit!

In addition, a re-skilling initiative is needed to provide entrants and those already in
employment with the relevant core competencies for the digital working world. What
is needed is a curriculum for digital education—from school through the system of
vocational training to universities and colleges. This is more important today than ever!
In addition, intensive cooperation between science and business needs to be pro-
moted, so that not only outstanding research achievements are made in Germany,
but these are also incorporated into marketable products and services. This coopera-
tion should also include medium-sized companies, which may not have the neces-
sary budget for comprehensive AI research available. The funding programs
mentioned in Sect. 12.4.3 can contribute to this.
Support in job placement is another important area for politics, to reduce costs
for employees and employers and to quickly get the right professionals working at
the right job positions. The personal recommendation from the employee’s network
is still the best way to find a lucrative job. Powerful AI systems for determining the
competencies of applicants can promote a quick comparison with the requirements
of employers. Politics is responsible for promoting the development of effective
databases—preferably in an international context. The concepts for personnel man-
agement described in Chap. 8 can also be used for this.
The long-term employment effects of AI use need to be examined holistically. The further
automation, and thus the release of labor, progresses without new jobs being created
to the same extent, the more difficult it will become to achieve and
maintain full employment. At the same time, the increasingly severe shortage of
skilled workers can be counteracted. It is not necessarily wealth that is decreasing,
but its distribution in the total population will change. The task of forward-looking
government action is to ensure in the course of the entire digitization debate that
artificial intelligence also serves the common good and does not solely increase the
profits of individual companies.

Note Box
Society as a whole must be clear about what Artificial Intelligence means and
where the ethical boundaries should be set. At the same time, awareness needs
to be created about the contribution that Artificial Intelligence can and must
make to shaping the future so that Germany and Europe do not lose touch here.

In addition, politics has to develop a binding legal framework on issues such as
security and data protection in the use of AI. Many companies that already use AI
applications operate in gray areas, as there are no regulated provisions for many
digital processes yet. The challenge is to strike a balance between the societal goal
of protecting personal data on the one hand and the availability of high-quality
training data for AI algorithms on the other.

Food for Thought


The GDPR has significantly restricted European companies’ access to personal
data (e.g., from prospects and customers). Here, US companies like Amazon,
Google and Meta will maintain their competitive advantage, as will the corresponding
Chinese companies like Alibaba, Baidu, JD.com and Tencent.
The situation is different with data obtained in the context of Industry 4.0.
Through the Internet of Things or the Internet of Everything, many data are col-
lected that are not subject to the GDPR. Here, there are exciting fields of appli-
cation for German and European companies that need to be actively pursued.

It is the government’s task to ensure a balanced relationship between regulatory
necessity and freedom for innovative business models. Like every company, every
country must decide for itself how high it wants to play in the AI league and what
state investment support is granted. China is determined to lead the world league of
artificial intelligence, rather than just going along. What does Germany want? And
above all: What is Germany doing here—preferably within the European frame-
work? At least in Germany, a master plan for artificial intelligence is being dis-
cussed, which has also been equipped with funding. But we all know: Plans and
budgets are necessary prerequisites for success. The success itself, however, largely
depends on a goal-oriented implementation.
In addition to these regulatory aspects, artificial intelligence as part of the digita-
lization process requires a powerful infrastructure as a technical framework. This
primarily includes a fast, comprehensive (!) and efficient internet, which unfortu-
nately is still not available in Germany.
The comprehensive use of artificial intelligence creates many opportunities—but
their realization is also associated with great challenges. The advantages of
artificial intelligence only become apparent in the long term; therefore, the benefits
of initial investments are not immediately visible. Thus, business, politics and
society require not only patience, but also a long-term strategy and a willingness
of politicians and managers to think beyond election periods and contract terms.
Political and economic decision-makers alike must take a clear leadership role to suc-
cessfully shape the transformation process and overcome obstacles. The fear of job
losses must be taken just as seriously as the danger of falling behind globally if AI
technologies are ignored.
Against this background, the following questions need to be answered at the
country level—preferably also at the EU level:

• Which AI investments not only strengthen competitiveness, but also promote
the creation of new jobs?
• How should the education systems and offerings be designed to promote a
profound understanding of the capabilities and limitations of artificial
intelligence?
• How can we ensure an increase in government investments in the human capi-
tal of the country?
• How should the anticipated changes in the labor market be made socially
acceptable, for example through joint initiatives by government, businesses,
and unions?
• How can a healthy competition among companies be promoted to prevent a
tendency towards AI monopolies, similar to the monopolies in the areas of search
engines and social networks?
• How can legal frameworks (such as for new forms of work and data usage) be
designed so that the transformation process is legally secured?
• How should copyright law be designed so that not only humans but also AI
systems can be recognized as creators of creative solutions?
• What solutions could be used for the financial security of people if the use of AI
creates significant employment gaps and associated unemployment (e.g. through
measures to promote mobility, training measures, conditional/unconditional
government transfers)?
• How and to what extent can global AI standards (including military limits,
data protection, or universal values embedded in AI algorithms) be defined?

It will take even longer for a majority of companies to adopt an AI-first strat-
egy. Unfortunately, too many companies still have to deal too much with managing
their digital and ecological transformation to be able to tackle the next development
stage—the comprehensive integration of artificial intelligence—today. This is very
worrying because it means that future challenges are not being addressed!
This assessment may help us, as citizens, to gain the necessary serenity in dealing
with artificial intelligence (see [1]):
Our intelligence is the product of millions of years of evolutionary development
and is designed to constantly search for explanatory patterns that protect us from
threats. Therefore, our focus is not on mathematical precision. Rather, our human,
intelligent thinking is strongly characterized by randomness. In other words: Our
mental freedom comes from the fact that our thinking does not follow a universal
algorithm. No one or nothing has programmed us completely. This makes us only
partially predictable, as we constantly make smaller and larger mistakes that are not
programmable.
Therefore, it will take a very long time until even an approximate replication of
the human brain will be achieved. The journey there will be accompanied by a
multitude of changes in our private and professional lives. We wish all
our readers the necessary courage and self-confidence to master these changes
constructively and bravely!

Food for Thought


“Whoever wants to co-determine how far technology dominates the world
must first master the technology itself.” ([2], p. 67).
“AI is likely to be either the best or worst thing ever to happen to humanity,
so there’s huge value in getting it right.” [3].

13.1 Summary

• Our society must globally prepare for disruptive changes due to artificial
intelligence.
• We are in a constant metamorphosis process, triggered by digitization and fur-
ther intensified by artificial intelligence.
• And: This process will never end!
• Discussions about whether artificial intelligence should be used in general are
not productive; after all, no one worldwide can stop the use of this technology.
• More promising are active discussions about how to curb discrimination, abuse,
and undesirable developments of AI systems.
• Artificial intelligence should primarily be understood as an opportunity to cope with
the ongoing global challenges of increasing urbanization and mobility, of profound
demographic change, and of global climate change.
• An international initiative is needed to develop a global AI guideline that pro-
vides a binding framework for all countries.
• The internet knows no boundaries—neither does artificial intelligence!

References
1. Gabriel, M. (2020). The purpose of thinking. Ullstein.
2. Jung, A., Nezik, A.-K., Rosenbach, M., & Schulz, T. (2018). Nightmares. Der Spiegel,
46, 66–71.
3. Hawking, S. (2014). BBC interview from 2.12.2014. Accessed December 13, 2021, from
https://www.youtube.com/watch?v=fFLVyWBDTfo
14 Data, Law and Responsibility—Legal Challenges in the Use of Artificial Intelligence
Lawyer Franz Peter Altemeier, DDV Managing Director

Summary This chapter addresses the current legal challenges in the use of artificial
intelligence and shows ways for a pragmatic approach in business practice. At the
same time, it discusses the ambivalence of AI and the role of law in its use. The influence
of the AI tool ChatGPT and the associated legal risks, especially in content marketing,
are explained. Copyright and related rights aspects are also considered.

14.1 The Ambivalence of Artificial Intelligence and the Role of Law

There are few societal issues that polarize as much as the debate about the use of
artificial intelligence. The topic is one of the contentious issues of the twenty-first
century. This is primarily due to its ambivalence: What inspires one person and
enables them to achieve peak performance, paralyzes another and makes them
shudder. For the average citizen, the topic is difficult to grasp: they can hardly guess,
and only speculate, what positive as well as negative achievements AI and its rapid
development are capable of and what they could lead to. The topic exerts both a
positive and a negative fascination.
Often the discussion is driven by the desire for a better, more civilized world and
for a progressive use of AI to cope with crises and other man-made problems. This
chapter does not oppose this optimistic view of the world. If we take a sober look at
the matter, it appears to us, at least when we think in legal categories, more solvable,
or at least more controllable, than initially assumed. However, the legal debate of
recent years also teaches us that we are still at the beginning of a long journey in
which we have to prepare for new, possibly unsolvable conflicts and legal
uncertainties. At first glance, the complexity of the current legal situation may seem
daunting and may overwhelm many legal practitioners. This is mainly due to the fact
that numerous regulatory aspects are still open or need further adjustment. If we want
to face these implications, legal thought patterns and the belief in the ordering func-
tion of law can help us.

14.2 Artificial Intelligence and Law: Concept, Definition, Legal Questions

14.2.1 Concept and Definition

The term Artificial Intelligence encompasses different technologies for solving com-
plex tasks. Even the term is somewhat unfortunately chosen, as it derives from the
attempt to replicate human cognitive abilities in the processing of information to cope
with tasks using the means of computer science. It is already unclear what is meant by
“human intelligence”. Replicating it, with all its errors and vulnerabilities,
seems unattractive. Much more impressive and ultimately more accurate is the defini-
tion used by the European Parliament. According to this, Artificial Intelligence is “the
ability of a machine to mimic human abilities such as logical thinking, learning, plan-
ning, and creativity” (cf. [1]; for a more detailed discussion, see Sect. 1.1).
It becomes more tangible when we understand that in addition to the simulation
of “humanly intelligent performances” by information technology, work is pri-
marily being done on originally “intelligent” solutions. As a basic technology, AI
is primarily a driver for innovations and capable of triggering a new industrial revo-
lution. This is because such systems can, unlike their predecessors, adapt their actions,
analyze the consequences of previous actions, classify them and—this is what
makes them so revolutionary—autonomously develop them further.

14.2.2 Legal Questions Regarding the Use of AI

Legal science is currently primarily concerned with four categories of legal


questions:

• Where may AI systems be used?


• What standards must artificial intelligence systems meet and how transparent
must their use be?
• What responsibility (of humans) exists for the use of AI?
• How can this responsibility be described both civilly and criminally and what
legal application commands can be derived?

In addition, various attribution questions are on the table:

• Who is the inventor in the legal sense when an AI invents something?


• How much “human” must remain in the use of AI so that the result of its use can
still be attributed to a human?
• How far can and may the use of artificial intelligence go—ethically and legally?

Not without reason, for example, Article 22 of the General Data Protection
Regulation (GDPR) requires that there always be a human final decision in the use
of technology. This is intended to prevent humans from being subjected to an exclu-
sively automated decision and thus being degraded to objecthood. This rightly fol-
lows the standard of the “object formula”, which the jurisprudence of the Federal
Constitutional Court derives from the guarantee of human dignity in Article 1
Paragraph 1 of the Basic Law (cf. [2]). This raises the question of how much residual deci-
sion-making authority must remain with humans. This plays a major role, for
example, in the risk consequence assessment of autonomous driving (cf. on autono-
mous driving and the trolley problem).

14.3 Tsunami of the Information Society

The discussion is being fueled by the AI tool ChatGPT from the US company OpenAI.
It is currently becoming the most successful generative artificial intelligence solution
worldwide. Unlike “normal” bots, it generates insights and context-related information
in a non-deterministic way through intelligent, open inputs, the so-called prompts
(see in depth Sect. 4.2.2). This has triggered a tsunami of the information society
(see also [3]):
Let’s take a closer look at the relevant cases. ChatGPT is currently mainly used
as a sparring partner in marketing—as a source of inspiration, idea generator, but
also for keyword research. The legal risks for this type of use are manageable.
Caution is only required when sensitive data is fed into the tool by the user himself,
such as trade secrets or health data. The user must be aware that an AI cannot
absolve him of liability and responsibility if such data is made available to an AI
engine. Anyone who uses an AI tool must be aware that he remains responsible for
the content he publishes (see on this feeding process Sect. 4.2.2).
The results of ChatGPT can generally be used in the process of idea finding. But
here too, a critical look at the results is worthwhile. Even learning systems still
make mistakes (see [4]). Corresponding proofreading is in any case not only
recommended but indispensable (see Figs. 4.7 and 4.8). When passing on the con-
tents of the texts generated by ChatGPT to third parties (such as agencies), several
aspects need to be differentiated.

14.3.1 Copyright

In principle, it is possible that an AI like ChatGPT generates its content from copy-
righted texts, images, and codes. The user will regularly not be aware of this. This
does not have to concern him initially: as a mere user of the software, he would
not be liable; only the provider would be. Something else could apply to content that is
created and published through the use of AI.
A prerequisite for copyright protection of a work is always a personal intel-
lectual creation (§ 2 para.2 UrhG). This means that to apply copyright law, the
question must be clarified whether and to what extent the works to be protected are
attributable to the creative activity of a human person. Here, the following applies:
Content generated exclusively by AI is not protected by copyright.
The same applies, for example, to AI-generated image files. If a human was not
involved in the process, copyright protection for the created work is excluded.
Proving this will pose challenges in practice: The more artificially generated texts
flow into the training data of the AI tool, the smaller the remaining substrate of
human creative activity becomes. This increases the requirements for proof. It can
be assumed that the number of public domain texts and images on the internet will
increase rapidly in the coming years, without them (even with the help of special
software) being distinguishable from human works.
Unlike this output, the prompts, i.e. the (human) texts for briefing the AI, can be
protected by copyright. The intellectual works of so-called Prompt Engineers, who
formulate or describe tasks for the AI to perform, can already be found in (paid)
prompt databases.
Against this background, it can be assumed that with the rapid development and
use of AI tools, copyright as we know it today will come to an end. This applies at
least to texts and forms of expression that will establish themselves outside of liter-
ary works. For such literary works, a creative author and thus copyright protection
can usually still be assigned. In addition, the temptation is likely to be great to not
name ChatGPT and other AI tools as (co-)authors when publishing the texts. This
will also make it increasingly difficult to prove natural authorship in the future.

14.3.2 Related Rights

The related rights are also not (yet) up to the new challenges. Related rights are
referred to as “neighboring rights”. These apply in addition to copyright and are tied
to the process of providing a service. Related rights protect services that—unlike
copyright—are not creative, but are associated with copyrighted works or are simi-
lar to the creative performance of the author. The law focuses on the provision of
services.
For example, the person who has made a substantial investment in the creation
of a database can benefit from a related right, the protection of the database maker
(§§ 87a ff. UrhG). A related right for artificial texts against adaptation, modifica-
tion or exploitation does not exist. On the other hand, AI-generated music could
fall within the scope of the law, specifically the related right of the phonogram pro-
ducer (§ 85 UrhG). Whether and to what extent any protection gaps should be closed
by special AI-related rights is discussed in the legal literature. Here, for example,
the services of the programmers as manufacturers of artificial intelligence or other
people involved in the development and training process are considered. Also in
patent law, the question arises whether works created using artificial intelligence
can be patented in favor of a natural or legal person.

14.3.3 Obligation to Label

An obligation to label AI-generated works or content does not (yet) exist. The
nature and extent of such an obligation is currently being debated at the European
level. It is expected that with the planned AI regulation (keyword AI Act, cf. Sect.
1.6) such an obligation will be introduced in Europe. The regulation will be the first
attempt worldwide to regulate artificial intelligence. The attempt will be more than
just a blueprint for other non-European countries. The AI Act could be suitable for
setting pioneering standards that will also be reflected in other legal systems. The
GDPR has already been able to set new standards in dealing with data protection
outside the EU legal area. The European efforts to create a uniform “Level Playing
Field” will also have consequences for the Anglo-American legal area and its eco-
nomic relations, especially with EU member states. This should give enormous
weight to European legislation.
It remains to be noted that companies remain responsible for their customer
communication. This also applies when using “classic” chatbots in customer por-
tals. Copyright infringements, competition violations, violations of personal rights
and especially violations of data protection regulations can be threatened. The latter
become particularly relevant when customer personal data is processed through ser-
vice requests. Depending on the type of data, a data impact assessment, i.e. a risk
assessment according to Art. 35 GDPR, may regularly be necessary. This applies
even more when using AI-generated chatbots (cf. Sect. 14.5).
With the aforementioned AI regulation (AI Act), there will also be new compli-
ance requirements for this area. A company must also be liable for incorrect infor-
mation. Special caution applies to (un-)wanted contract conclusions. Through the
attribution theory, legally effective contracts can be concluded by the chatbot. If
consumers are involved, the distance selling information obligations and right of
withdrawal from consumer law known from e-commerce must be observed.

14.4 Generative Artificial Intelligence: Regulation Is at a Crossroads

The aim of a (European) regulation is an appropriate balance between effective
regulation and the promotion of innovative business models in the field of gen-
erative artificial intelligence. This became clear in a hearing in the Digital Committee
of the German Bundestag in May 2023. The experts largely agreed that time is
pressing for corresponding (European) guidelines. This is especially true with
regard to transparency issues for the labeling of AI-based texts and other open
questions when using AI. The trilogue negotiations that started in June 2023—and
thus faster than initially expected—will soon provide clarity about the regulatory
matter and guidelines for users. Political observers expect that the regulation could be
adopted before the European elections in 2024. However, the new regula-
tions—taking into account an 18-month transition period—are not expected to
come into force before 2025.

The regulation provides a tiered protection concept as the basis for regulation.
Specifically, transparency, code, and liability issues, as well as bans on particularly
risky AI, are on the table. Special examinations should, for example, be required for
the medical sector. While there are a number of AI approaches that are seen as hav-
ing minimal risks (such as AI-supported video games), special transparency obli-
gations should apply in the case of higher risks. For example, it must be recognizable
that a chatbot is being used in customer dialogue. High-risk AI systems must meet
high testing and approval requirements. Bans should, for example, apply to AI
systems and AI applications that manipulate human behavior to circumvent the free
will of users. Social scoring should also only be possible under very strict condi-
tions, if at all (see example China). According to estimates by the EU Commission,
about 5 to 15% of systems or applications will fall into the highest risk category.
It will also be interesting to follow the progress of the directive on adapting
non-contractual civil liability rules to AI (Directive on AI liability). To this end, the EU
Commission presented a corresponding draft in September 2022 (see [5]). According
to the EU Commission’s intention, it should be ensured that “victims of damage
caused by AI receive the same protection as victims of damage caused by products
in general”. In addition, companies should be given more freedom in the use of
AI, because there are still considerable legal uncertainties and liability risks.
Obstacles should be removed and liability risks for companies reduced. Nevertheless,
it can be assumed that the compliance requirements for corporate activities will
increase. The high threats of fines for non-compliance with compliance regulations
are likely to prove particularly paralyzing for innovative business models (see AI
governance Sect. 12.3.3).

14.5 Protection of Personal Data: High Requirements for Automated Decision-Making Systems

Particular importance is attached de lege lata, i.e. “according to existing law”, to the
protection of personal data: All providers and users of AI systems are (regardless
of the planned AI regulation) obliged to protect personal data. The GDPR in particu-
lar places high demands on automated decision-making systems. It will be crucial
for the legal user to what extent the national and European data protection authori-
ties will be able to convince with a pragmatic approach in dealing with generative
AI like ChatGPT (see [6]).
The decision of the Italian Data Protection Authority in the wake of the launch of
ChatGPT in November 2022 attracted attention, and not only in the legal commentary
literature. The authority had temporarily banned the operation of ChatGPT nationwide. It
accused the operator OpenAI of not having sufficiently checked the relevant GDPR
regulations before entering the market. This concerned personal data in training
data, lack of legal basis, lack of transparency and problems with data accuracy and
age control. This bombshell was not just a short-term reflex of an authority that lifted the
ban after OpenAI quickly improved ChatGPT. After all, other authorities, such as
the German ones, but also the European Data Protection Board, are asking critical questions
about the use of ChatGPT. These questions concern the purposes of profiling, the
processing of sensitive data, and indirect data collection.
It remains to be seen how other data protection authorities will proceed in the
future (cf. [7, 8]). As far as can be assessed, a general ban is not (any longer) on the
table; however, it must be assumed that OpenAI will have to improve ChatGPT and
adapt it to the (new) requirements (cf. Sect. 14.4).

14.6 Reactions from GDPR Legislators and Business

The reactions at national (e.g. Bundestag and Bundesrat) and international level
(e.g. European Data Protection Board) to the handling of ChatGPT once again show
that data protection law needs to be reconsidered. The principle of prohibition subject
to permission no longer works, because a “consent in knowledge of the facts”, as
demanded by the GDPR, is not achievable, or only with difficulty, in AI-supported
data processing. The processing at the level of the individual data record largely
takes place in a “black box”. A legal basis is likely to be lacking regularly, so that
the “legitimate interest” according to Art. 6 GDPR must be resorted to.
This is likely to serve the principle of transparency only with difficulty. The
requirement of “data minimization” is also likely to have become obsolete as a criterion
for corrective measures. Rather, it would have to be accepted that the GDPR should apply
in a technology-neutral and innovation-oriented way (cf. recital 15 of the GDPR).
The latter would primarily call its meaningfulness into question. At any rate, it will have to
be accepted that the effort for impact assessments and additional protective mea-
sures is likely to be considerable.
This is likely to run counter to the (political) efforts to reduce bureaucratic hur-
dles. Possibly the GDPR-compliant functioning could be confirmed by certifica-
tions and conformity seals. For this purpose, the data protection authorities should
provide reliable guidelines for the pseudonymization and anonymization of
personal data (cf. also [9]).
As a confidence-building measure towards users and supervisory authorities,
providers of AI applications that do not pose a high risk could rely on (voluntary)
codes of conduct or self-commitments of representative associations. Some compa-
nies are already working with voluntary codes of conduct or self-commitments. For
example, Deutsche Telekom recently subjected itself to a “Manifesto for the Use of
Artificial Intelligence”. It sets the framework for the processing of employee data in
AI systems [10].

14.7 Red Flags in Handling AI Tools

In consulting practice for companies, three fields in particular need to be considered
when dealing with AI tools; they are summarized here in a “buzzword-like” manner:

• The issue of “responsibility” must become second nature. As a rule, AI-generated
texts and images should not be published without review.
• When using AI-generated output, copyright must be observed. It also applies to
lines of code. In legal literature, for example, there is controversy over how to
deal with software that creates new images based on mathematical representa-
tions and uses (even partially) copyrighted source data (especially relevant for
deep learning models). In business practice, AI tools can help detect possible
copyright infringements.
• Data protection compliance (e.g., during “feeding”): There are data protection
risks lurking in machine learning. Caution is advised with protected information
such as trade secrets or information protected by attorney-client privilege. The
GDPR places high demands on automated decision-making systems (especially
in “scoring”). If AI-based systems store or use personal data for training, infor-
mation obligations must be observed for the production or use of the AI program.
In principle, companies should know that a data protection impact assessment is
likely to be required here. This means that companies should develop guidelines
to meet compliance requirements, for example when setting prompts. These
prompts should generally not contain any personal data or trade secrets. In addi-
tion, quality control processes are needed (especially for output).
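As an illustration of this last point, the following minimal Python sketch shows one conceivable way a company guideline could be supported technically: prompts are screened for simple indicators of personal data or confidential material before they are passed to an external AI tool. The patterns, keywords, and the function name check_prompt are illustrative assumptions only; a real compliance check would require far more robust detection and legal review.

```python
import re

# Very simple indicators of personal or confidential data (illustrative only).
PATTERNS = {
    "email address": re.compile(r"\b[\w.%+-]+@[\w.-]+\.[a-z]{2,}\b", re.IGNORECASE),
    "phone number": re.compile(r"\b\+?\d[\d\s/-]{7,}\d\b"),
}
KEYWORDS = ["confidential", "trade secret", "internal only"]

def check_prompt(prompt: str) -> list[str]:
    """Return compliance warnings for a prompt; an empty list means no finding."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(f"possible {label} detected")
    lowered = prompt.lower()
    for word in KEYWORDS:
        if word in lowered:
            findings.append(f"keyword '{word}' detected")
    return findings

# Hypothetical prompt used only to demonstrate the check.
prompt = "Summarize the attached contract for [email protected]; treat as confidential."
warnings = check_prompt(prompt)
if warnings:
    print("Do not send this prompt to an external AI tool:", "; ".join(warnings))
else:
    print("No obvious findings; human review is still required.")
```

Such a screening step does not replace a data protection impact assessment or human review; it merely helps enforce the guideline that prompts should not contain personal data or trade secrets.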

14.8 Outlook

• AI is becoming increasingly intermodal: Artificial intelligence will have to


solve a variety of tasks. The result is diverse implications in all areas of life (such
as in the workplace with employee data protection).
• Law cannot keep up with technical development. Politics is called upon to fur-
ther develop the law quickly and in this process to clarify and simplify the regu-
latory requirements for the economy.
• Some basic ideas and goals of data protection law are in diametric contradic-
tion with the massive data processing required for AI technologies. Viewed posi-
tively, legal uncertainty also opens up opportunities that need to be exploited.
The interplay of Law and AI will need to be redefined again and again in
the future.
• Every company—including SMEs—should design their own digital strategy in
view of ensuring competitiveness. This should, in addition to a necessary digita-
lization concept, also consider the use of AI applications.
• The networking offers of the numerous AI platforms and the AI funding offers
of the federal and state governments can and should be used. An important con-
tact here, for example, is the German Dialog Marketing Association for its
members.

References
1. EU. (2023). Artificial Intelligence – Excellence and trust. Accessed on April 03, 2023, from https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/excellence-and-trust-artificial-intelligence_de
2. Dürig, G. (1958). Commentary Art. 1 Para. 1, Rdnr. 28; BVerfGE 27, 1, 6 – Microcensus. In T. Maunz & G. Dürig (Eds.), Basic law. Beck.
3. DDV. (2023). Quintessence, OpenAI – The company behind ChatGPT. Accessed on June 26, 2023, from https://www.ddv.de/ihre-woechentliche-quintessenz-openai-das-unternehmen-hinter-chatgpt-und-dall-e.html
4. Gall, I. (2023). ChatGPT—Hamburg expert warns of great dangers. Accessed on June 26, 2023, from https://www.abendblatt.de/hamburg/article238286585/ChatGPT-Hamburger-Expertin-warnt-vor-grossen-Gefahren.html
5. EU. (2022). COM (2022) 496: Proposal for a Directive of the European Parliament and of the Council on adapting the rules on non-contractual civil liability to artificial intelligence (Directive on AI liability). Accessed on June 23, 2023, from https://eur-lex.europa.eu/legal-content/DE/HIS/?uri=COM%3A2022%3A496%3AFIN
6. Data Protection Conference. (2019). Hambach declaration on Artificial Intelligence. Accessed on June 23, 2023, from https://www.datenschutzkonferenz-online.de/media/en/20190405_hambacher_erklaerung.pdf
7. Heise. (2023). How data protectionists grill OpenAI. Accessed on June 23, 2023, from https://www.heise.de/news/DSGVO-So-grillen-Datenschuetzer-OpenAI-9159102.html
8. ULD Schleswig-Holstein. (2023). Official request. Accessed on June 23, 2023, from https://www.datenschutzzentrum.de/uploads/chatgpt/20230419_Request-OpenAI_ULD-Schleswig-Holstein_IZG.pdf
9. Foundation for Data Protection. (2022). Practical guide to anonymizing personal data by the Foundation for Data Protection from December 2022. Accessed on June 23, 2023, from https://stiftungdatenschutz.org/fileadmin/Redaktion/Dokumente/Anonymisierung_personenbezogener_Daten/SDS_Studie_Praxisleitfaden-Anonymisieren-Web_01.pdf
10. Deutsche Telekom. (2023). Manifesto. Accessed on June 23, 2023, from https://www.telekom.com/resource/blob/1025798/169e50ff1e61f2c69512aa9961dd1d03/dl-230210-ki-manifest-data.pdf
Index

A AI audit, 448, 450


ABB, 158, 168 AI-based administration, 362–363
Acceptance gap, 53 AI-based content creation, 277
Access to AI solution modules, 438–444 AI-based evaluation of everyone’s daily
Acoustic design guidelines, 246 routines, 349
Acrolinx, 280 AI-based HR technologies, 384–389
Actions, 4, 8, 9, 11, 17, 20, 31, 32, 43, 47, 48, AI-based performance monitoring, 383
51, 55, 57, 58, 60, 64, 66, 71, 73, AI-based personalized medicine, 323, 324
119, 127, 138, 139, 146, 154–156, AI-based real-time instructions, 382
163–167, 173, 174, 177, 179, 182, AI basics
184, 189, 201, 214, 243, 252, analysis of the presence, 426–428
255–257, 261, 268, 275, 294, 295, AI bias, 60, 432
297, 305, 307–309, 322, 333, 351, AI budget, 425
364, 367, 382, 384, 402–404, 410, AI-capable culture, 188, 370, 443
413, 419, 420, 423, 431, 435, 454 AI coaching system, 273
Adam, 313 AI competence, 26, 428–429, 448, 454
Adaptive learning systems, 358–362 AI Competence team, 428
Additive manufacturing, 159 AI-driven optimization of logistics
Address publishers, 120 processes, 173
Administrative tasks in healthcare, 328 AI-driven platform for website
Advertising in virtual space, 411 optimization, 295
Advertising without third party cookies, 300 AI-driven price agreements, 302, 303
Affectiva, 271, 272, 297 AI-driven processes, 435
Affective AI technology, 328 AI-driven process optimization, 312
Affective computing, 57–60 AI-driven real-time coaching, 273, 274
Agriculture AI-driven skincare app, 304
AI application areas, 203–205 AI effects
IoT-based monitoring, 204 development of new business models, 101
status of AI application, 205–207 economic impacts, 129
AI Act, 95, 97, 98, 102, 340, 384, 463 efficiency increase, 101
AI-Aloe, 363 product and service innovations, 101,
AI and data platform, 442 418, 445
AI application areas in the military AI-first companies, 26
sector, 401–402 AI-friendly corporate culture, 429
AI applications AI-generated magazine, 233
analysis of presence, 426 AI-generated photo models, 283
AI arms race, 404 AI goals, 425, 430, 437


AI Governance AI-supported development of videos, 285–288


development of AI policies and AI-supported expert system, 4, 378
procedures, 433 AI-supported film production, 285
development of a structure, 433 AI-supported generation of images, 283
elaboration, 433 AI-supported Human Resource Information
introduction, 433 System (HRIS), 369
monitoring and review of AI AI-supported lending, 397
deployment, 433 AI-supported matching, 380
training and awareness for AI AI-supported new creations, 288, 292
deployment, 433 AI-supported performance management, 381
AI Hub, 429 AI-supported process automation, 396–398
AI hype, 438 AI-supported recruitment, 369–375
AI innovations in the hype cycle, 126 AI-supported sentiment detection, 239
AI Journey AI-supported surveillance systems, 348
analysis of the status quo, 419–428 AI-supported text generation, 235, 277–280
entrepreneurial, 419 AI-supported tutoring system, 363
maturity levels, 427, 428 AI-supported virtual teachers, 362, 364
monitoring, 448 AI systems, 6, 108, 151, 216, 320, 344, 359,
phases of the company’s own, 417–419 381, 392, 405, 422, 454, 460
start, 417, 428 AI-teaching and learning assistants, 363
in your own company, 417–450 AI-Tools for the further phases of the Talent
AI-Journey-Dashboard, 446 Journey, 378, 379
AI laggards, 100 AI video producer, 285
AI leaders, 99 AI Vision, 431
AI-leadership, 99 Aladdin, 393
AI-legal framework Aladdin Wealth, 393
AI Act, 95, 97, 98, 102, 340, 384, 463 Aldi Shop & Go, 310
AI-Maturity Aleph Alpha, 441, 442, 445
Analysis of the main competitors, 428 Alexa, 3, 28, 35, 111, 218, 224–226, 243, 247,
AI-Maturity from the self-perspective, 427 255, 256, 259, 260, 262, 266, 269,
AI-Maturity-Map, 423–428, 449 270, 346, 348
company-specific, 424 Algorithm, 4, 7, 9, 11–13, 15–17, 19, 21,
fields, 424 23–25, 38, 40, 44, 46, 55–57,
AI-Monitor, 78 60–62, 64, 67, 71–75, 77, 78, 98,
AI networks, 78, 418, 431 109, 126, 127, 163, 165, 167, 170,
AI-only solutions, 418, 445 171, 173, 180, 191, 215, 217, 261,
AI personnel, 93, 425 265, 271, 275, 278, 283, 286, 288,
AI-platforms for media planning, 298 289, 292, 293, 296, 301–303, 306,
Airbnb, 153 310, 321, 323, 324, 326, 327, 334,
AI risk management, 432 336, 338, 344, 346, 360, 364, 376,
Airships, 200 378–380, 383, 392–394, 404, 418,
AI-solutions for the post-cookie era, 298–300 420, 443, 453, 455, 456
AI sommelier, 278, 279 Algorithmic cartels, 302, 303
AI strategy, 213, 424, 425, 428, 430–432 Algorithmic fairness, 60
AI-supported advertising planning, Algo Trading, 392
297–298, 300 Alibaba Group, 226
AI-supported analyses and forecasts, 357 AliGenie, 224, 226
AI-supported analysis of advertising messages, Alignment, 233, 234, 295, 357, 438, 448
296–297, 300 Alpha Dogfight, 405
AI-supported business models, 445 AlphaGo, 17, 18, 24
AI-supported customer acquisition, 292–300 AlphaGo Zero, 17, 18
AI-supported detection of pests and plant Amazon, 28, 29, 55, 57, 98, 99, 224–226, 242,
diseases, 203 255, 256, 260, 262, 266, 269, 270,
AI-supported development of images, 281–285 279, 285, 293, 310, 361, 439, 440,
AI-supported development of music, 288–291 443, 455

Amazon Alexa, 224, 225, 262 Augmented Recruiting Mails, 370


Amazon Connect, 270 Augmented writing, 277, 367, 369
Amazon Connect Voice ID, 270 Auto-GPT, 236
Amazon Go, 310 Automate, 8, 205, 211, 212, 306, 308, 332,
Amazon Marketplace, 153 375, 382, 393, 396, 412, 429, 440
Amazon Polly, 29 Automated analysis of resumes, 376
Amazon Rekognition, 336, 440, 441 Automated data and image analysis, 402
Amazon SageMaker, 440 Automated evaluation, 358–362
Amazon Web Services, 439 Automated image recognition, 359
Ambidexterity, 423 Automated insights, 278
Amecas, 52 Automated or Algorithmic Trading, 393
Analysis of audio and video recordings, 384 Automated support systems, 213
Analysis of facial and voice expressions, 271 Automated video interviews, 371
Analysis of the relative competitive position of Automatic writing, 277
different countries, 100 Automation, 22, 23, 26, 55, 56, 87, 114, 146,
Annotation, 34 147, 149, 155–160, 170, 175, 176,
answerthepublic.com, 251, 252 184, 186, 187, 190–193, 208,
Anticipatory Shipping, 261 211–213, 215, 217, 221, 234, 252,
Apple Siri, 225 277, 292, 306, 309, 328, 345, 366,
Application-centered AI, 128 370, 377, 382, 386, 391, 395–399,
appliedAI, 88 422, 435, 454
Areas of application of the digital factory, Automation of Desktop Processes, 382
175, 176 Automation of price decisions, 306
Argument Mining, 239 Automotive industry, 49, 108, 183, 304, 397
Artificial General Intelligence (AGI), 4, 7, 21, Autonomous driving, 4, 49, 50, 56, 57, 63,
67, 69, 129, 236, 361, 388 118, 173, 192–197, 304, 403, 461
Artificial Intelligence at the wheel, 196 Autonomous driving trucks, 198
Artificial Intelligence of Things (AIoT), 115, Autonomous Driving Vehicles, 53, 54,
116, 187 195, 196
Artificial Intelligence–Where does the German Autonomous flying vehicle, 198
economy stand?, 80 Autonomous vehicles
Artificial neural networks (ANN), 10, 12, 296 development, 193–197
Artistic creativity, 287 Level 1: assisted driving, 193–194
Art on Demand, 291 Level 3: highly automated driving, 194
Asksuite, 312 Level 4: fully automated driving, 194
ASMPT, 173 use, 195
Asset management platform, 392–393 Autonomous weapon systems, 402–405
Associated Press (AP), 278 Avatar, 36, 53, 240–242, 285, 409, 411
Assono AI chatbot, 215 AWS Open Data Registry, 443
AtomNet platform, 327
Atomwise, 327
Attribution issues, 460 B
Attribution modeling, 294 Baby Artificial General Intelligence (Baby
Attribution theory, 463 AGI), 236
Audio branding, 245 Babysitter for AI, 77
Audio logo, 245 Background knowledge, 20
Auditory brand elements, 244 Barack Obama, 44
Auditory elements, 245 Bard, 35, 227
Audit plan, 448, 450 Basic requirements, 436
Augmented Job Posts, 369 Basics of artificial intelligence, 26, 76
Augmented reality (AR), 55, 114, 162, 188, Batch level, 156
304, 307, 334, 363–365, 410, 411 BAT companies, 301
Augmented reality glasses, 364 B2B marketplace for digital media
Augmented reality in the cosmetics purchasing, 299
industry, 304 BCI transplants, 323

Beethoven X, 289 Business model canvas, 264, 437


Before Fact Approach, 201, 202 Buyer Persona, 245
Behavioral biometrics, 395 BVLOS route, 199
Behavioral model, 168
Benchmarking with an automatic
implementation of C
optimizations, 159 Caller authentication, 270
Benefits of AI Use in Armaments, 403 Cambridge Analytica/Facebook Scandal, 16
Bias, 61, 66, 71–78, 128, 271, 297, 376, 386, Campaign film, 286
388, 397, 432, 447 Campaigns for specific target groups, 293
Bias in–Bias out, 73 Cannibalization effects, 301, 306
Bidirectional Encoder Representations from Canvas for the use of Artificial
Transformers (BERT), 227 Intelligence, 437
Big Data Cartel, 302, 303
Five Vs, 117, 119 Cartel authority sector investigations, 302
Big-5 model Car-to-Car communication, 350
five-factor model, 371 Cashless shopping experiences, 311
Bio-chips, 326 Categorizing a text, 35
Bio-hack, 328–332 Causal AI, 127, 129
Bio-hacking Celia, 225
DNA hacker, 330 Center for Efficient Factory Senftenberg, 189
fields of application, 525 Certifications, 72, 465
Biometric database, 334 Challenges for politics and society, 453, 455
Biometric technologies for authentication, 395 Change management, 419, 445–446, 448, 449
Bio-tech, 158 Change of the classic value chain, 151
Bixby, 35, 218, 225 Change process, 419, 429
Bizerba, 309 Channel management, 220
Black Box of Artificial Intelligence, 71, 136 Chatbot, 4, 7, 28, 35, 36, 45, 58, 96, 110, 211,
BlackRock, 393 213–241, 256, 258, 259, 261, 311,
Blended learning, 357, 364 314, 328, 362, 369, 370, 378, 379,
Blind audition, 383 384, 388, 398, 430, 435, 463, 464
BlueYonder, 309 ChatGPT
Body-Hacking, 114, 328, 329 application possibilities, 234
Bond trading, 392 Prompt Engineering without APIs, 234
Book Me Bob, 312 willingness to use, 235
Boomerang effect, 193, 203 Chat interface, 228
Bosch, 187, 254 Check24, 153
Boston Dynamics, 337 Checker, 284, 285
Botika, 283 Chief Learning Officer (CLO), 362
Brad Rutter, 23 China, 24, 38, 43, 65, 73, 100, 131, 132, 148,
Brain-computer communication, 323 162, 179, 182, 226, 313, 337–341,
Brain-computer interface (BCI), 322, 323 405, 455, 464
Brain-machine interface (BMI), 322, 331 Circular economy, 138, 139, 154, 171,
Brand ecosystems, 298 279, 280
Brand language, 244–246, 268 Citizens’ expressions of opinion, 350
Brand persona, 245 City management, 351
Brand recommendations, 254 Claim, 24, 116, 119, 183, 234, 238, 245, 282,
Brand Sound, 245 290, 397, 398, 444, 463
Bühler Motor, 162 CLAIRE, 87, 88
Building AI competencies, 429 CLAIRE Hub, 88
Building specific AI competencies, 428 Clarity Score, 297
Burda Publishing, 233 Clearview, 334, 335
Business Analytics, 156 Clearview AI, 334
Business cases, 129, 141, 434–436, 449 Click-and-Collect, 308
Business Intelligence Suite, 377 Closed Domains, 229

Closed patient data cycle, 326 Copyright protection of a work, 461


Cloud-based solutions, 174, 438 Core competencies for the digital working
Cloud-first data platform, 174 world, 454
Cobot 2.0, 158 Coreference resolution, 34
Cogito, 273 Core of Artificial Intelligence, 5–21
Cognitive computing, 348 Corporate culture
Collusion, 302 factors, 372
Collusive behavior, 302 Corporate health management, 381
Communication on the status of AI Corporate Language, 244
implementation, 432 Corporate Social Credit System, 339–341
Complexity of tasks, 8 Corporate Sound, 245
Compliance guidelines, 466 Cortana, 35, 218, 225
Compliance requirements, 433, 463, 464, Cost-benefit-calculation, 337
466 Creation of image and video content, 281, 288
Composite AI, 126, 127, 129, 361 Creation of new products and services, 146
Computer Vision Creation of own AI competencies, 418
Image Processing, 37–46 Creative Artificial Intelligence, 21, 22
Computer Vision APIs, 440 Credit card fraud, 334, 395
CONATIVE, 299 Creditworthiness check, 72, 394, 396, 399
Conformity compulsion, 338 Creditworthiness check model, 396
Conformity seal, 465 Creepy valley, 53
Congenial reproductions, 287 Uncanny Valley, 53
Connectivity, 113, 128, 134, 153 CRISPR technology, 330
Consciousness Cross-price elasticity, 301
concept, 41 Cross-publisher context targeting, 300
phenomenal, 41 Cross-sectional technology, 27, 141
Consolidation of data streams, 182 Crossword puzzle, 25
Constant Metamorphosis Process, 457 Culture model, 371, 372
Consumer-to-Manufacturer (C2M), 180, 181 Curriculum for digital education, 454
Content creation, 279, 280, 285 Customer-company-expectation
Content Impact Platform, 280 matrix, 211–213
Content marketing, 234, 279, 294 Customer experience, 114, 172, 177, 178, 180,
Content moderator, 40, 46 182, 197, 212, 213, 217, 218, 221,
Content Shock, 279 257, 268, 274, 295, 303–306, 311,
Context-based services, 266 398, 410, 413, 430, 435
Context marketing, 266, 294 Customer service, 58, 90, 186, 211–314, 377,
Context targeting, 299, 300 424, 426, 430, 434
Contextual booking, 299 Customer Touchpoint Tracking, 294
Contextual targeting, 299, 300 Customer value, 265, 292–295, 435
Continuous inventory review, 307 CV parsing, 376, 388
Conversational AI agents, 214 Cyber-physical systems (CPS), 150, 151, 184
Conversational AI platforms, 261 Cyber-security, 151, 333, 344, 395
Conversational Artificial Intelligence, 264, Cyborg, 114, 331
265, 276, 375 Cyborg pioneers, 332
Conversational interface, 214
Conversational keywords, 251
Conversion funnel, 253–256, 299 D
Conversion of text into video content, 281–291 DALL-E, 127, 136, 227, 281, 282
Convolutional Neural Networks (CNN), 12 Dark processing, 397–399
Cookie technology, 298 Dassault, 174
Cooperations with companies, 86 Data
Copyright, 237, 288, 290, 292, 433, 456, first party data, 47, 119–121, 295, 298, 299
461–463, 466 metadata, 122, 240, 276, 278, 279, 443
Copyright infringements, 42, 290, 463, 466 Second Party Data, 120, 121
Copyright law, 237, 461 semi-structured, 117, 154, 238, 375
small data, 119, 126, 425 DeepZen, 37
structured, 29, 117, 126, 154, 238, Define Media, 299
376, 440 Definition of success metrics, 431
synthetic, 126, 127 deinSchrank.de, 171, 172
third party data, 47, 120, 121 Delegation of decisions, 57
unstructured, 117, 118, 126, 154, 238, Deloitte, 66
375, 440 Dematerialization
Wide Data approaches, 126 of products, services and processes,
Zero Party Data, 119, 120 109–112, 141
Data aggregator, 120 Demonstration-based training, 159
Data and process ecosystem, 151 Demystification of Artificial Intelligence, 20
Data audit, 75 Description, 8, 11, 29, 37, 38, 42, 61, 216,
Data-based feedback loop, 180 227, 277–279, 281, 282, 307, 308
Database with the skills and competencies of Designers, 119, 180, 187, 361
employees and executives, 379 Desktop assistants, 382
Data-centered AI, 126, 129 Development of data Volume, 121
Data competence, 428 Development of immersive learning
Data-driven/forward-chained approach, 47 environments, 363–365
Data-driven recruiting, 388 Development of individual learning and
Data economy, 123, 183, 322, 444 teaching profiles, 362
Data ecosystem, 183, 184, 418, 439, 444 Development of innovative business
Data gaps, 439 models, 146
Data.gov, 442–443 Development of new business models, 101,
Data protection compliance, 466 114, 134, 153, 166, 176
Data protector, 119 Development of powerful algorithms, 265, 418
Data provider, 120 Developments in production, 145, 146
Data service provider, 120 DevOps, 440
Data sets with photos for training 3DEXPERIENCE Platform, 174
purposes, 335 Dialogue agent, 35
Data Silo, 151, 189, 396, 428 Dialogue paths, 219
Data space, 182, 183 Dialogue system
Data space Industry 4.0, 183 language-based, 28, 35, 36
Data-to-Text (DTT), 29, 229 Dialogue variants, 220
Da-Vinci surgical system, 324 Dietary supplements, 330
Death algorithm, 62 Diffusion, 25, 37, 42, 281, 282, 412
Debt collection scoring, 394 Digital assistants, 28, 29, 36, 67, 111, 191,
Decentralized AI-supported data 213–242, 246, 247, 255, 256, 262,
processing, 128 266, 267, 270, 312, 370, 434
Decision intelligence, 126, 128, 129 Digital avatars, 240, 241
Decision making Digital butlers, 263, 314
assisted, 55–56 Digital competence, 46
autonomous, 56–57 Digital Darwinism, 131
delegated, 56 Digital education platform, 358, 454
five-stage model of automation, 55, 56 Digital Enterprise, 170
partial, 56 Digital Human, 240–242, 283
verified, 56 Digital identification tag, 38
Deduction, 5, 6, 338 Digital learning platforms, 357
Deep Blue, 23 Digital mindset, 446
Deepdub, 37 Digital personal assistants, 3, 28, 35, 247, 257,
Deepfakes, 43, 44, 60 259, 261, 263, 268, 271, 348, 426
Deep Genomics, 324 Digital ranking of German cities, 351
Deep Instinct, 333 Digital representation, 168
DeepL, 3, 29, 30, 48, 56, 57, 276, 445 Digital shadow, 122
Deep-learning framework, 9, 12, 21, 24, 26, Digital signage, 260, 308
64, 116, 282, 333, 371, 466 Digital signage software, 260
Digital signature, 174, 293 E
Digital talent marketplaces, 379 4-Ear-Listening, 30–32
Digital thread, 176–178 Easyway, 312
Digital transformation, 129, 140, 352, 379, Echo chamber, 57
395, 412, 431, 445 Ecological transformation, 129, 140, 360, 379,
Digital twins, 168–169, 171, 175, 320, 321, 446, 456
331, 410, 435 E-commerce, 29, 191, 218, 257–258, 353, 463
Digital value chain, 151, 152, 190 Edge AI, 126, 128, 129, 151
Digital (informational) value chain, 151 Edmond de Belamy, 283
Digital voice print, 270 Education
Digitization in production institutionalized, 362
drivers, 149, 150 non-institutionalized, 362
goals, 149 Education sector, 58, 357–359, 364, 365, 410
Digitization of signage, 308 Effects of exponentiality, 108–109, 141
Dimensions of a trustworthy AI, 66 Efficiency and effectiveness of initiated AI
Dimensions of communication initiatives, 447
call to action, 31, 32, 297 Efficiency increase in power generation, 344
factual content, 30, 32 Efficiency increases, 101, 146, 149, 173, 176,
relationship, 31, 32 190, 218, 311, 429–430
self-disclosure, 32 Effort for impact assessments, 465
Direct Answer Box, 248 Einstein account insights, 294
Directive on AI liability, 464 Einstein activity capture, 294
Direct-to-Customer (D2C), 180, 181 Einstein GPT, 294
Discovery engine, 327 Einstein lead scoring, 294
Discriminator model, 284 Einstein opportunity insights, 294
Distributed Denial of Service (DDoS), 156 Elderly-Care solution, 349
Diversity, 75, 76, 85, 338, 384 Electroencephalography (EEG), 322
Division of labor between human and machine ELIZA, 54
action, 55, 60 Emotional AI, 27, 57–59
Do-it-yourself bio-hacking, 328 Emotional artificial intelligence
Doomsday prophets, 93, 350 for spoken language, 57
Downtime, 147, 150, 163, 165–167, 171, 173, for texts, 57
177, 344, 447 for video content, 57–58
Driven by Intuition, 286 Emotion analysis, 58, 360
Driverless transport vehicles (FTF), 198 Emotion analytics AI, 272, 274
Drivers for the breakthrough of a Emplifi, 239
technology, 228 Employability, 136, 362
Drivers of Artificial Intelligence, 107–141 Employee engagement, 381
Drones, 38, 49, 50, 55, 64, 173, 191, 192, Employee retention management, 383–384
198–206, 208, 286, 344, 401, 402, Enboarder, 378
404, 405 End-to-end custody chain, 199
delivery services, 199 End-to-end data solution, 151
diving, 403 End-to-end hiring experience platform, 375
flying, 199, 402 End-to-end pharma technology company, 327
micro-drones, 402 End-to-end recruiting solution, 377
unmanned submarines, 402 End-to-end user-friendliness, 349
urban delivery route, 199 Energy management, 343, 354
use, 199–201, 203, 286 Energy sector, 343–354
use for logistics tasks, 200 Engagement algorithm, 180
Dubai Metro, 191 Entelo, 377
Dynamic pricing, 197, 301–303, 305, 306 Enthusiasm requirements, 436
Dynamic profiling, 293 Entity, 21, 33, 34, 41, 220, 244–246, 263, 334,
Dynamics of changes in professional life, 394, 395, 408
361 Escalation processes, 349
Dynamization of the error concept, 198 ESG criteria, 137–140, 153, 279, 280
Essence of a thing, 40 Facial recognition systems, 41, 42, 61, 312,
Ethical norms, 62 334, 340
Ethical principles of AI use, 64, 66 Fairness of artificial intelligence, 60–77
Ethical question of comprehensive Fairness of the compensation system, 383
surveillance of learners, 360 Fake accounts, 36, 45, 237
Ethical questions, 61, 64, 71, 405 Fake detection, 44, 394, 399
Ethical requirements for AI, 61, 101 Fake news, 36, 46
Ethics False information, 46
descriptive, 61 Fashion assistant, 258
digital, 128, 129 Fast fashion, 140, 179, 180, 182
empirical, 61 Federal Ministry for Economic Affairs and
normative, 61 Climate Protection, 86
Ethics and fairness of artificial Feeding, 206, 235, 461, 466
intelligence, 60–77 Feedzai, 396
European data portal, 443 Fields of application of artificial intelligence,
European General Data Protection Regulation 4, 22, 26–60, 62, 98, 101, 111, 141,
(GDPR), 94–96, 98, 99, 119, 123, 143–144, 189, 191, 203, 304, 333,
439, 455, 461, 463–466 394, 424
Evaluation of still images, 38 Fields of application of automation and
Evaluation of written performances, 359 digitalization, 22
Event-induced/backward-chained approach, Filter bubble, 57, 217
47 Financial services, 86, 88, 202, 391–399
Excitingness score, 297 Firefly, 37
Exit management, 378, 379, 383 First Mover, 242
Exoskeleton, 325, 331 First party data, 47, 119–121, 295, 298, 299
Exoskeleton robots, 331 Five-factor model, 371
Experience-based AI algorithms, 346 5G mobile standard, 132, 133
Expert systems 5G technology, 132–134, 141
access to powerful, 48 Five-stage model of decision
component for communicating automation, 55, 56
solutions, 47 Fixations, 296
component for developing problem Flixbus, 153
solutions, 47 Floating Point Operations Per Second
component for knowledge acquisition, 47 (FLOPS), 132
components, 47–48 Flow Health, 324
Explainable artificial intelligence (XAI), Flytrex, 200
71–77, 286, 394, 442, 445 Follow-up rhythms, 293
Exploitation, 265, 334, 423, 446, 462 Forms of image recognition, 335
Exponential development of performance, 107 Four, 9, 24, 25, 30–32, 40, 76, 108, 126, 127,
Exponential growth, 108, 121 132, 166, 234, 236, 250, 253, 322,
External image, 427 348, 368, 386, 396, 404, 424,
Extract! 4.0, 376 435, 460
EyeQuant, 296, 297 4D printing, 160
Four-Ears Model
4-Ear-Listening, 30–32
F FOUR Emotion Analytics, 273
Fabasoft, 174 Four-eyes principle, 76
Fabasoft business process cloud, 174 Fraud detection, 394–395, 398, 399, 430
Facebook-cambridge-analytica debate, 98 Fraunhofer alliance big data, 440
Face ID, 41 Friendshoring, 185
FacePay, 42 From cradle to cradle, 139
Facial recognition, 4, 41, 42, 61, 74, 214, 312, From cradle to grave, 138
334, 335, 338, 340, 375, 402, Functional magnetic resonance imaging
403, 426 (fMRI), 322
Functional model, 168 Go game, 18, 24
Funding programs for artificial Good-enough decisions, 48, 49
intelligence, 444–445 Google Alert, 238
Future of life institute, 405 Google App Live Transcribe &
Notification, 359
Google Cloud, 439
G Google Dataset Search, 444
GAFA companies, 98, 267 Google Home, 3, 28, 236, 243, 257
Gaia-X, 121, 183, 444 Google Knowledge Panel, 248
Gamification, 182, 411 Google Reverse Image Search, 38, 39,
Gamification at the highest level, 339 335, 336
Gaming industry, 364, 412 Google Translate, 3, 28, 29, 48, 57, 276
Garbage-in-Garbage-out, 72 Gordon Moore, 108
Garry Kasparov, 23 GovData, 443
Gartner, 8, 124–129, 141, 361 Governance, 66, 67, 137–140, 279, 280, 386,
Gartner Hype Cycle, 124, 126, 361 418, 432, 433, 442, 448, 464
Gatekeeper, 153 Go World Champion, 24, 28, 141
General Data Protection Regulation (GDPR), GPT-4, 233, 234, 236, 363
94–96, 98, 99, 119, 123, 439, 455, Gradescope, 359
461, 463–466 Greenwashing, 138, 279, 280
Generation, 3, 8, 21, 23, 29, 54, 55, 60, 126, Grey Box of Artificial Intelligence, 71, 78
132, 235, 242, 277, 291, 343, 361 Growney, 392
Generation of personalized content, 293 Guidelines for responsible use of AI, 386
Generation R, 54 Guidelines for the use of language, 244
Generation Robotic, 54 Guiding principles for AI, 66
Generative Adversarial Networks (GANs),
284, 285
Generative AI, 21, 22, 42, 57, 127, 129, 136, H
277, 281, 294, 391, 439, Hacker, 115, 120, 200, 262, 329–331, 344, 350
442, 463–464 Hallucination, 229, 232
Generative model, 284 HALOS, 276
Generative Pretrained Transformer 3 (GPT-3), Hamburg Electric Autonomous Transportation
227, 228, 281 (HEAT), 196
Generative Pretrained Transformer (GPT), 4, Hammer-seeks-nail phenomenon, 436
226–229, 232, 234–236, 294 Harver, 377
Generator, 42, 62, 233, 235, 281, 284, 285, Health management, 349, 381
344, 461 Health sector, 319–332, 411
Generic experience treasure, 40 Higher-level symbol, 40
German AI Start-up Landscape, 88 High-frequency trading, 393–394, 398, 399
German Railways, 48, 49 High-performance computer, 132
German Research Center for Artificial High-quality transmissions, 287
Intelligence (DFKI), 280 High-risk AI-systems, 96, 464
Gigaset, 186 HiJiffy, 312
GiGo-effect, 72, 118 HireVue, 375
GiniMachine, 394 HireVue Builder, 375
Global AI race, 131, 404 Holistic evaluation approach, 435, 436
Globalfoundries, 167 Holistic health offer, 381
Globalization of production and commodity 3-Horizon model
flows, 191 Horizon-1 business models, 422
Goal formulation, 219, 430 Horizon-2 business models, 422
Goals of AI use, 418, 424, 429–432 Horizon-3 business models, 422–423
Goals of Artificial Intelligence Horizon-1 level, 422, 434
definition of ethical, 59 Horizon-2 level, 422, 435, 449
Go competition, 24 Horizon-3 level, 422
Hotel and hospitality industry, 311–314 Impacts of Artificial Intelligence on
HPI School Cloud, 358 Competitive Dynamics, 420–421
HRS, 153 Implementation of AI, 97, 365, 389, 420, 429,
HR support through artificial intelligence, 388 431, 432, 440, 448, 449
HR technologies Improvement of health systems, 328
meaningfulness of use, 385 Improvement of the customer experience, 398
regulations of use, 384, 386, 387 Incentive systems, 383
Huawei, 225 Incentivization, 367, 383
Human Augmentation, 328 Incident management, 196
Human-centered AI, 128 Increase in knowledge/enlightenment, 126
Human-Centric Journeys, 378 Indicators of learning success, 360
Human dignity, 62, 137, 461 Individual career paths, 380
Human Enhancement, 328 Individual customer profile, 308
Human final decision, 461 Individualized patient management, 321
Human-human communication, 35 Individualized recommendations, 17, 279,
Human-in-the-loop, 76–77 293, 360
Human-machine communication, 35, 134, 267 Individualized therapy, 323
Human-machine interface (HMI), 147, Individual learning profile, 360
322, 434 Individual training agenda, 380
Humanoo, 381 Induction, 6
Human Resource Information System Industrial Analytics, 166
(HRIS), 368 Industrial communication, 148
Human Resource Management Industrial image processing, 161
(HRM), 367–389 Industrial Internet of Everything, 114–116
Hype Cycle for Artificial Intelligence, Industrial Internet of Things (IIoT), 113–115,
124–126, 141, 361 134, 146, 148, 153–155, 161, 163,
Hype Cycle for new technologies, 124, 168, 173
126, 141 Industrial 3D printing, 159, 198
Industry 4.0, 134, 145–149, 179, 183, 186,
304, 440, 455
I Info-Box, 248, 250
I 2.0, 329 Informational supply chain, 146, 201
IBM RPA, 370 Information gap, 48
IBM Watson, 52, 215, 286 Innovations diffusion curve, 412
IBM Watson Assistant, 220 Innovation trigger, 124
Identification of fake accounts, 45 Innovative logistics solutions, 197–201
Identification of learning difficulties and Insights about optimal learning support, 360
learning preferences of Insilico, 303, 327
students, 359 Insilico Medicine, 303, 327
Identity fraud, 394 Inspection, 8, 61, 128, 161, 163, 167, 208
Identity theft, 334, 394 Institute for Cognitive Science at the
Idomoo, 285 University of Osnabrück, 296
Image editing, 38 Institute for Employment Research, 100
Image processing, 27, 37–46, 55, 161, Insurance companies, 348, 349, 396–398
204, 205 Integrated value chain systems, 152
Image recognition, 4, 14, 27, 38–40, 57, 173, Integration of AI applications, 141, 434, 435
204, 208, 307, 310, 327, 335, 359, Intel, 55, 108
360, 375, 376, 396, 441 Intelligence
Images based on text descriptions, 281 bodily-kinesthetic, 5
Immersion, 409, 410 creative, 6, 21, 22, 292
Immersive and personalized customer existential/spiritual, 6
experiences, 410 interpersonal, 5
Immersive experiences, 410 intrapersonal, 5
Impacts of AI technologies, 453 linguistic, 5, 28
logical-mathematical, 5 Jurisprudence, 73, 461
machine replication of human, 22 Just-in-time arrival, 48
manifestations, 5–7
multiple approach, 5
musical, 5 K
naturalistic, 5–6 Kano concept, 436
spatial-visual, 5 Kano model
Intelligence explosion, 62, 67–71, 77 basic requirements, 436
Intelligence platform for face recognition, 334 enthusiasm requirements, 436
Intelligent factory, 145, 171 performance requirements, 436
Intelligent logistics solutions, 191, 197–201 Ken Jennings, 23
Intelligent manufacturing systems, 147 Keras, 438
Intelligent Meters, 344 Key elements of an AI strategy, 431
Intelligent network management, 343 Keyhole, 239
Intelligent personal assistants, 261 KLM Messenger, 215
Intelligent power grids, 343 Knewton, 359
Intellimize, 295, 296 Knowledge
Interaction formats via facial recognition, 214 declarative, 74
Interaction with the environment, 17, 51
Interactive digital signage, 308
Interactive voice response (IVR), 28, 35 L
Internal AI action fields in companies, 429 Labeling, 13–15, 20, 129, 307, 441, 463
Internet of Everything, 107, 113–116, 131, Labeling AI-generated works/content, 288,
151, 455 291, 442, 463
Internet of Things (IoT), 112, 113, 115, 116, Labster, 363
128, 131, 134, 207, 260, 307, 309, Language, 3, 111, 159, 214, 334, 362, 369,
345, 348, 364, 455 396, 423
Interpretation of medical notes, 320 Large language models (LLMs), 226–237,
InterSystems, 174 241, 253, 254, 277, 280, 441
InterSystems IRIS, 174 categories, 226
Intrusion detection and prevention, 156 hybrid models, 226–227
Intuitive, 28, 40, 41, 54, 58, 174, 228, 324 recurrent neural network (RNN)
Intuitive model of the world, 19 models, 226
Inventory planning, 307 transformer-based, 226
Investment fraud, 395 Law against Restraints of Competition, 302
In vitro, 321 Law of Disproportionality of Information, 123
In vivo, 321 Leadec, 174
iPhone moment, 25, 83 Lead-prediction, 292
Island Intelligence, 59 Lead-profiling, 292
IT hacking, 329 Learning operations, 440
Learning organizational culture, 379
Learning process
J monitoring of the individual, 360
Jeeves, 312, 313, 325 more individual care, 363, 380
Jeopardy, 23, 141 self-control, 360
Jill Watson, 363 Learning through reward, 17
Jingle, 245 Lee Sedol, 18, 24
Job demand, 172 Legal challenges in the use of Artificial
Job effects of AI use, 454 Intelligence, 459–466
Job Futuromat, 100 Legal framework, 4, 94–101, 119, 199, 292,
Jobspreader, 370 381, 384, 389, 455, 456
Journey into the future, 169 Legal issues, 237
Journey into the past, 169 Lemonade, 397, 398
Jukebox, 288 Lengoo, 276
Jump-digitization, 121 Leonardo Hotels, 313
Level Playing Fields, 463 industrial companies, 89, 90
Leverage, 190, 212, 218, 311, 389, 437 Mapping of human DNA, 323
Levity, 239 Market for artificial intelligence, 4
Lexus, 286 Marketing automation, 212, 215, 252,
Libratus, 25 292, 426
Lifecycle and channel-oriented Marketplace for loans, 396
pricing, 306–307 Masked texts, 227
Lifelong learning, 360 Mass customization, 171, 184
Life science industry, 327 Masterplan artificial intelligence, 455
Light Detection and Ranging (LiDAR), 50, Masterplan Smart Factory, 176
54, 63, 195 Matching AI, 378
Limits of AI acceptance, 76 Matching algorithm, 379
Linear development, 130 Matching of candidates and job
Linear economy, 138 requirements, 377–378
Linguistic exchange, 5 Matching process, 377–379
Linguistic intelligence, 5, 28 Matrix production, 173
Listening engine, 380 Matternet, 199
Listicle, 248 Mean Time between Failures (MTBF),
LLaMa, 227 163, 164
Location-based services, 266 Mean Time to Recovery (MTTR), 163, 164
Location-optimized assortment planning, 306 Media buying workflow, 300
Logic of artificial intelligence, 284, 387 Media competence, 46, 358
Long list of relevant AI projects, 437 Media planning, 298
Longtail keywords, 251 Medical data for AI applications, 319
Look-alike audiences, 293 Medical imaging, 319–324
L’Oréal, 304 Mention, 239
Loss of control, 82 Mercedes-me-Skill, 262
LowKey, 336 MetaAI, 439
Ludwig van Beethoven, 289 Metadata, 122, 240, 276, 278, 279, 443
Luminous-base, 441 Metaverse
Luminous-control models, 441–442 adoption by various industries, 412, 413
Luminous-extended, 441 AI application areas, 409–412
Luminous-supreme, 441 characteristics, 408
decentralization, 408, 409
economy, 410
M immersion, 408–410
Machine learning (ML) industrial, 410
deep learning, 9, 12, 21, 24, 26, 64, interoperability, 409
116, 466 persistence, 408, 409
supervised learning, 13–16, 21 synchronicity, 408
types of learning, 13–20 Metaverse in Retail, 410
Machine-to-machine communication, 134, 267 Microsoft, 28, 55, 98, 168, 216, 217, 225, 228,
Machine-to-machine interface, 434 234, 246, 254, 255, 334, 439, 445
Maintenance logic, 163 Microsoft Azure, 233, 246, 336, 439
Management Execution System (MES), Microsoft Cortana, 35, 218, 225
155, 161 Microsoft’s Seeing AI, 359
Management of the customer experience, 218 Midjourney, 25, 42, 233, 282
Manager perspective, 422 Military sector, 360, 401–405
Manna, 200 Mindset of your organization, 446
Manufacturing productivity, 150 MoBerries, 377
Manufacturing-X initiative, 183 Mobility
Map of AI start-ups in Germany automation, 190–193
AI technology stack, 88, 90 forms of automation, 191
corporate functions, 90 goals and expectations for automation, 192
enterprise intelligence, 90, 92 reservations against automation, 188
Mobility as a Service (MaaS), 197 input layer, 13
Model-centered AI, 127–128 model depth, 10, 11, 13
Modeling of target dialogues, 219 model width, 13
Model of the world, 19, 40 output layer, 10, 13
Modular system, 171 processing nodes, 10
Monitoring of livestock, 204 Neurofeedback devices, 331
Monopoly Commission, 302 Neuroflash, 279
Moore’s Law, 108–109, 141 Neurotechnological implants, 70
Morality, 61 News and press agency, 278
Most powerful computer in the world, 131 News Square
Movement components, 51 4-Ear-Listening, 30–32
Movement profiles of people, 42 New technologies, 24, 95, 100, 107, 109,
Moving target, 98, 419, 427 124–136, 140, 141, 190, 197, 205,
Multi-Experience, 127 236, 433, 438, 454
Multi-Experience Model Next-best Action, 294
be me, 128 Next-Product-to-Buy, 294
know me, 127 Next-word predictor, 228
see me, 127 Nice, 382
sync me, 127 Nice Employee Virtual Attendant (NEVA), 382
Multilingual resume parsing, 379 Night vision, 330
Multimodal biases, 61 No-Code AI platform, 394
Multimodal model, 61, 234 No-Code Robotics Platform, 159
Multi-stakeholder organization, 439 Noise, 76–77, 115, 192, 359
MusicLM, 25, 37 Non-linear growth, 130
Non-representative training dataset, 72
Nuralogix, 329
N NYX Hotels, 313
Named entity recognition, 34
Natural language generation (NLG), 28, 32,
33, 35, 36, 277 O
Natural language processing (NLP), 4, 28–37, Object recognition, 9, 333–337, 348
57, 127, 226, 227, 236, 268, 274, OCEAN, 371
278, 362, 396, 429, 439 Offshoring, 185, 186
Natural language understanding (NLU), Olay Skin, 304
30, 32–36 Omni-channel demand planning, 306
Near-Repeat-Theory, 336 Omni-channel management, 307
Nearshoring, 181 Onboarding new employees, 378–379
Nespresso, 260, 261 On-demand transport services, 197
Nespresso-Club, 261 One-click application, 376
Networked computerization of manufacturing ONE Market, 299
technologies, 145 Open data, 442–444
Networked factory Open data platforms, 442, 443
Smart Factory, 145, 150–154, 163, 164, OpenML, 443
167, 173, 174, 176, 184–190, Open-source software solutions, 438
350, 422 Optimization of job advertisements, 369, 385
Networking Optimization of product
external, 151 development, 303–305
internal, 151, 379 Oracle, 384
of products, services, processes, animals Oracle Fusion HCM Analytics, 384
and people, 112–116 Orientation of educational work, 136, 453
Neuralink, 322, 323 Oskar, 392
Neural machine translation (NMT), 276 Otto, 257, 258
Neural network Overall Equipment Effectiveness (OEE), 115,
artificial, 10, 12, 296 150, 155–157, 163, 164, 176,
hidden layer, 10, 13 187, 189
Over-information, 330 Position zero, 247–250
Oversupply by overtreatment, 328 Power by the hour, 166
Power generation, 343, 344
Precision medicine, 323–324
P Prediction, 6, 8, 9, 11, 20, 61, 68, 123, 156,
Paraphrases, 33 202, 227, 234, 272, 273, 292, 296,
Pareto channels, 219, 220 299, 323, 327, 333, 343, 366, 368,
Pareto tasks, 213, 218 396, 402, 447
Parsing, 33 Prediction of corporate profits, 273
Participatory web, 407 Predictive analytics, 127, 155, 156, 164, 175,
Partnership on Artificial Intelligence, 439, 440 188, 208, 261, 293, 377, 426
Part-of-speech annotation, 34 Predictive Analytics for police work, 336
Part-of-Speech Tagging Predictive engine, 369
word class annotation, 34 Predictive maintenance
Patient monitoring, 326 concept, 166, 190
Patient records, 319 core, 166
Pattern recognition, 7, 16, 17, 28, 38, 39, 165, forecast model, 165
238, 293 goals, 163
Pay as you go, 166 model, 163, 165, 168
Pay per Use, 166 work steps, 163–166
Peak of inflated expectations, 124 Predictive policing, 336, 340
Perception Map, 297 Predictive policing, functioning, 336
Perception of the environment, 54 Predictive Quality, 175
Perception patterns, 296 Predictive Risk Intelligence (PRI), 201–203
Performance management, 367, 381–382 Predictive Servicing, 260, 261
Performance of integrated circuits, 108 Preliminary analysis and pre-selection of
Performance quality, 150, 185, 410 potential candidates, 375
Performance requirements, 436 Preloading rules, 14, 72, 75, 77
Personal assistance systems, 348 Preparation of change management, 419
Personal health data, 319, 333 Prescription, 8, 61
Personality profiles, 371, 373 PRESTIGEenterprise, 260
Personalization of the guest experience, 313 Preventer, 119
Personalized incentives, 383 Price-setting algorithms, 302
Personalized promotions, 308 Principle of transparency, 465
Personalized therapy plans, 323, 324 Privacy by default, 123
Pharma.AI Platform, 327 Privacy by design, 123
Phase concept, 217, 417–419 Private liability insurance, 198
Phase of dynamic growth, 130 Proactive intervention, 326
Phase of slow adoption, 130 Process automation, 87, 396
Phenom, 379, 380 Process redesign, 434
Physical self-optimization, 329 Procter & Gamble, 299, 304
Placebo preparations, 328 Pro-domo sources, 118
Plateau of productivity, 126, 131 Product and service innovations, 101, 177,
Platforms, 45, 52, 57, 88, 109–111, 121, 127, 418, 420, 445
153, 167, 174, 181, 188, 200, 202, Product Carbon Imprint, 162
215, 216, 218, 219, 236, 238, 239, Production control center, 154–156
250, 252, 256, 266, 276, 282, 285, Production plant with a customer
286, 290, 292, 295, 296, 305, 307, interface, 171
309, 352, 357, 359, 363, 377, 379, Productivity plateau
393, 394, 396, 397, 409, 420, plateau of productivity, 126, 131
425, 442–444 Product liability law, 198
Platforms of established digital Product Lifecycle Management (PLM), 147,
corporations, 153 155, 161
Poker players, 25 Product/service portfolio, 435
Polarization of the labor market, 100 Profit prediction models, 273
Programmatic Advertising, 298 Recurrent neural network (RNN), 12, 226
Programmer team, 75 Red Flags in dealing with AI tools, 465–466
Prohibition with reservation of Reduction of noise, 76–77
permission, 465 Regions of Interest, 297
Projects with research centers, 429 Regulatory necessity, 455
Prompt, 37, 234, 382, 461, 462, 466 Reinforcement learning, 13, 17–18, 21, 24, 64,
Prompt databases, 462 228, 293, 322, 392
Prompt Engineers, 462 process, 18, 21
Prosthesis, 323 Rekognition API, 441
Protection of personal data, 59, 464–465 Relationship extraction, 33
Psychology as a Service, 272, 274 Relevance of the doctor-patient
PTC, 176, 177 conversation, 328
PyTorch, 438 Remote monitoring, 58, 113, 156, 199
Repair time, 163
Re-Qualifying, 361
Q Research on explainable AI, 75
Q, 246 Reshoring
QNOVA, 235 arguments for a, 185
Qualification-based gap analysis, 380 Residual decision authority, 461
Qualifying, 361 Resilience of supply chains, 185
Quality control Re-skilling, 361
AI-based, 151, 161 Re-skilling initiative, 454
automated, 161 Resource management, 164, 173–179
image/video-based digital, 161 Respeecher, 37
Quality seal for the data used, 72 Responsibility, 57, 84, 101, 137, 139, 143,
Qualtrics, 384 166, 198, 218, 244, 290, 339, 387,
Qualtrics Employee XM, 380 405, 425, 431, 433, 448,
Quantum computer, 109 453, 459–466
Quicktext, 312 Responsible AI, 75, 128
Quirion, 392 Résumé Parsing, see Automated Analysis
of Resumes
Retail, 38, 111, 179, 275, 305–311, 353,
R 410, 413
Race between technology and education, 454 Retention Scores, 383
Rakuten, 199 Retorio, 371–373
Ranking of applicants, 377 Retresco, 277
Rational choice approach, 337 Rewe, 310
Reader activation, 275 Rewe Pick & Go, 310
Real-time dashboard, 393 Riffusion, 25, 37
Real Time Fashion, 179, 182 Risk and investment platform, 393
Real-Time Fashion provider, 180 Risk management
Real-Time Information Management, 161–163 ongoing, 201
Real-time monitoring, 148, 187, 235, 326 proactive/predictive, 201
Real-time performance monitoring, 381 types, 201
Real-time process management, 154 Risk monitoring, 201
Real Time Retail, 179 Risk of collusive behavior, 302
Real-time video adjustment, 285 RiskOps platform, 396
Real-time visibility and management of Risks of AI use in armaments, 403
changes in supply and demand, 309 RoBERTa, 227
Receptiviti, 272 ROBIN, 392
Receptiviti Insights API, 272 Robo advisor, 17, 47, 75, 392–393, 398
Recommendation algorithms, 67, 293 Robo-Invest, 392
Recommendation engines, 11, 212, 293, 308 Robo Recruiter, 370
Recomposition of Music, 283 Robot-assisted system portfolio, 325
Recruiting software, 377, 378 Robot dog, 337
Robot hotel, 312 Robot skin, 158
Robotic Natives Rule-based system, 12, 20, 396
Generation R, 54
Robotic Process Automation (RPA), 234, 370
Robotics, 49–57, 69, 97, 146–148, 157, 159, S
160, 172, 175, 234, 324, 331, Saccades, 296
370, 403 Sales Cloud Einstein, 294
Robot journalism, 378 Salesforce, 294
Robot-recruiting, 388 Samsung, 225, 254
Robots SAP SuccessFactors Human Experience
autonomous, 159, 307 Management Suite, 381
ban on killer robots, 405 Scalability, 183, 219, 397
basic components, 50–51 Scaling of hiring processes, 375
classic, 50 schema.org, 255
classification by degree of human School-cloud, 358
appearance, 50 S-curve, 129–131, 141, 190
classification by degree of interaction with Search Engine Result Pages (SERPs), 250
humans, 50 Second Mover, 242
classification by fields of application, 49 Second Party Data, 120, 121
cobots, 50, 157 Security sector, 333–340
collaborative, 50, 146, 157–158 Seedtag, 300
collaborative two-arm, 158 Self-adaptive algorithms, 11
combat robots, 402, 403, 405 Self-driving cars, 38, 55, 134, 194
Context-Aware Robots, 158 Self-image, 245, 427
for direct patient care, 49, 325 Self-initiative to close the qualification
in the domestic environment, 325–326 gap, 362
exploration and military robots, 49 Self-learning ability, 67
four-legged, 337 Self-learning system, 63, 159
for home care, 325 Self-optimization, 328–332
human likeness, 52, 53 Self-Service Technologies (SST), 48
humanoid, 50, 52, 78, 158, 312, 426, 434 Self-supervised learning, 13, 19–20, 22, 59
humanoid robot Sophia, 52 Semantic gap, 63, 404
for indirect patient care, 49, 325 Semantic parsing, 32, 33
industrial, 49, 51, 148, 157, 444, 460 Semantics, 30, 33, 118, 276, 299, 376, 403
in logistics, 51, 350 Semantic targeting
machine-like, 50 context targeting, 299
medical robots, 49 Semi-autonomous assistants, 403
in the medical sector, 90, 141, Sense of prices, 302
319, 324–326 Sensetime, 73
mobile, 50, 134, 312 Sensor-Economy, 114
motor skills, 50 Sensors for environmental detection, 50–51
navigation robot, 49 Sentiment analysis, 33, 57, 276, 351
programming, 159, 172 Sentiment detection, 239
reception robot, 312 SentiOne, 239
Robot Sophia, 52 Set of functions, 51
service robot, 4, 49, 312–314, 347 SheIn, 140, 179–182
stationary, 50, 151 ShelfView, 307
for surgical assistance or for complete Ship-from-Store, 308
execution of operations, 324–325 Ship-to-Home, 308
toy robot, 49 Ship-to-Store, 308
training robot, 159, 380 Shit-in-Shit-out, 72
types, 49–50 Shopfloor Management Software Suite, 173
use of service robots, 312–314 Shopping assistance, 410
vacuum robot, 347, 348 Short List of relevant AI projects, 437
as virtual caregivers, 325 Siemens Electronics factory, 170
Simplify, 212, 227, 328, 466 Social, 40, 45, 100, 135, 137, 139, 337,
Simultaneous Localization and Mapping Social credit system
(SLAM), 54, 55 credit, 337–340
Single point of truth, 119 deduction, 338
Single source of truth, 119, 174 goal, 339
Siri, 3, 28, 35, 222, 224, 225, 242, 243, point allocation, 338
246, 247 Social debate, 454
Skill mapping, 380 Social Engineer, 270–271
Skills, 256, 262 Social engineering, 270, 271
Skills for voice design, 267 Social hacking, 271
Skill-to-kill, 432 Social listening, 180, 238, 239
Skype, 48 Social Listening Tools, 180, 238
SLAM solution providers, 55 Social network, 40, 45, 265, 360, 409,
Sleeper apps, 266 439, 456
Slogan, 245, 266, 272 Sock puppet
Slope of enlightenment, 126 fake account, 45
Smart City, 350, 352–354 Softgarden, 376
Smart City Index, 351, 352 Solidvest, 392
Smart Displays, 226 Solution space of Artificial Intelligence,
Smart Factoring, 146 427
Smart Factory, 145–151, 153–156, 163, 164, Sonic logo, 245
167, 169, 173, 174, 176, 177, Sound branding, 245
184–190, 304, 350, 422 Sound logo, 245
advantages of an AI-based, 184 Sound spectrogram, 269
disadvantages of an AI-based, 184 Source verification, 442
goal, 145–146 Sparring partner in marketing, 461
Smart Glasses, 364 Speakable Content, 248, 249, 253, 264
Smart Grid Initiatives, 344 Speakable markup, 255
Smart Grids, 343, 345, 354 Speaker authentication, 268
Smart home devices, 247, 262 Speaker recognition, 268–271, 274
Smart homes Speaker verification, 268
functions, 345 Speech analysis, 268–274
security, 348 Speech generation engine, 278
Smart Manufacturing Speech markup, 255
technologies, 153–179 Speech processing, 27, 28
Smart Manufacturing as a business Speech recognition
model, 179–182 importance of, 28
Smart Manufacturing Platforms, 147 speech processing, 28
Smart Meters, 148, 343, 354 Speech-to-Speech (STS), 28, 29
Smart mobility, 350, 353 Speech-to-Speech translation, 28
Smart production, 173 Speech-to-Text (STT), 28
Smart service terminal, 110, 111 Speed of 5G, 133
Smart shelf solutions, 309 Spot, 337
Smart Solutions, 344 Sputnik Moment
Smart speaker for China, 24
sale, 242, 243 for the USA, 24
Smart Speaker Study, 225 Sqreem Technologies, 299
Smart systems, 349 Stable Diffusion, 25, 37, 42, 282–283
Smart Terror, 348 State capitalism, 339
Smart Traffic, 350 State investment support, 455
Smart TV, 226 State’s leash, 338
Smooth pursuit, 296 Status of AI development in Europe, 94
Snippet, 248, 250, 252 Status of AI in Germany, 78, 79
Status of AI use in the production Technological impulses
sector, 187–189 innovation trigger, 124
Stereotype of a Professor, 282 Technological singularity, 62, 68, 70, 77
Straight-Through Processing Technology acceptance model (TAM),
dark processing, 397 135, 141
Strategic action plan for Artificial Teleportation, 169
Intelligence, 431 TensorFlow, 18, 438
Strategic demand management, 305–306 TEO analysis, 275
Strategic HR planning, 367–369 TEO Suite, 274, 280
Strategic skills gap, 360–362 Test-and-learn approach, 435
Strategic window, 419 Test and prototype mentality, 432
Strategic Window of Opportunity, 419 Testing and approval requirements, 464
Strong artificial intelligence, 67, 69, 77, 78, Text-based communication interface, 214
129, 445 Text-based dialogue systems (TTT), 35
Style transfer, 283, 289 Textio, 369, 370
Summetix, 239, 240 Textio Browser Extension, 370
Sunspring, 287 Textio Guide, 370
Supercomputer, 109, 132, 133, 233 Textio Score, 370
Superintelligence Textkernel, 376, 379
development of a, 69 Text mining methods, 239
Supervised learning Text performance, 274–275
process, 14, 15 Text Performance Index (TPI), 275
Supervisory Control and Data Acquisition Text prompt, 37, 234, 281, 282
(SCADA), 147, 156, 161 Text recognition, 41
Support from the C-Level, 446 Text-to-music, 37
Support in drug research, 326–328 Text-to-picture generators, 42, 281
Support of hiring processes, 375, 377 Text-to-speech (TTS), 28, 29, 255, 366
Surveillance capitalism, 123, 124 Text-to-speech application, 37
Surveillance state capitalism, 339 Text-to-Text (TTT), 29, 48, 214
Surveillance technology, 333 Text-to-Text translation, 29, 48
Sustainability Twins, 171 Textual interaction, 213
Sustainable corporate governance, 137–140 Think Voice, 266
Syntactic parsing, 33 Third-party cookies, 298, 300
Syntax, 30, 33 Third Party Data, 47, 120, 121
Synthesia, 285 Threaten, 97
Systemic rival, 65 3D model, 168, 410
3D product presentation, 410
Throwaway economy, 138
T Time buffer, 48
Tagging, 38 Tipping point, 108
Talent Acquisition Suite, 376 TOBi, 215
Talent experience, 367, 376 Tomi.ai, 295
Talent Experience Management (TXM), 367, Tonality of a statement, 239
368, 379 Tools for monitoring and controlling, 220
Talention, 376 Touch ID, 41
Talent Journey, 367–369, 375, 378, 379, 388, Toxicity, 61
389, 422 Track & Trace system, 156
Talent management, 375, 377, 379–381 Tractable, 397
Talker, 251 Trade-off between availability and
Targeting concept, 298 profitability, 307
Taxonomies in cultural research, 372 Trade-off between comprehensibility and
Tay, 216, 217 precision, 64
tcl.digital, 274 Trade-off between service quality and
Tech freaks, 350 personnel costs, 308
Traffic expert system, 49 Up-skilling offensive, 136
Training data, 12–15, 17, 44, 71–73, 75, 127, Upstart, 396, 397
218, 229, 272, 276, 284, 360, 392, Urbanization, 191, 457
462, 464 US-American-Chinese Duopoly, 99
Training data set, 13, 73, 75, 284 Usefulness of a technology, 135
Transaction space, 182–184 User convenience, 260
Transformer architecture, 226, 281 User-friendliness of a technology, 135
Transhuman age, 328
Transhumanism, 62, 67–71, 77, 329
Translation, 3, 8, 14, 28–30, 33–35, 37, 48, 56, V
57, 59, 67, 215, 234, 246, 276, 277, Value, 8, 107, 146, 211, 320, 343, 358, 383,
280, 287, 445 391, 410, 424, 456
Transparency Value chains
about AI processes, 75 networking of physical and digital, 152
the algorithms, 73–74 system of, 152, 190
the data, 71–72, 77 Value instance, 217
the delivery of data, 74 Value-Irritant Matrix
Transparency issues, 463 Customer-Company-Expectation
Transparency obligation, 464 Matrix, 211–213
Transportation as a Service (TaaS), 197 Value-oriented customer management, 294
Trial-and-error process, 17 Values
Trigger, 20, 36, 41, 45, 47, 61, 63, 68, 100, socialist, 65
115, 124, 181, 215, 219, 220, 250, Value systems
252, 260, 271, 286, 293, 309, 322, value chains (integrated systems), 100,
326, 348, 369, 377, 395, 404 112, 140, 146, 151, 152, 161, 183,
TripAdvisor, 252 190, 206, 309, 311, 410, 435
Triumph of Artificial Intelligence, 25, 107 Variants of the Internet, 407
Troll, 45 Variety, 22, 27, 47, 88, 114, 117, 118, 141,
Trolley problem, 57, 62, 461 145, 146, 159, 179, 182, 192, 238,
Troll factories, 45 241, 251, 262, 265, 292, 300, 312,
Trough of disillusionment, 126 348, 359, 394, 402, 409, 419, 439,
Trust-building measure, 465 441–443, 466
Trustworthy AI, 66, 67, 88 V-Commerce
Try-on tool, 304 Voice-Commerce, 244, 256–261, 268
Turnitin, 359 Vehicles, 4, 27, 49, 50, 54, 57, 62, 63, 77, 108,
TÜV NORD, 166 119, 128, 134, 170, 173, 188,
24/7 monitoring of residents, 348–349 191–200, 203, 226, 262, 271, 304,
Typer, 251 346, 350, 362, 397, 402
Velocity, 116–118, 141
Veracity, 117, 118, 141
U Video AI, 371
Udacity, 363 Virtual assistants, 58, 214, 215, 226, 328
Ultra Fast Fashion, 179, 182 Virtual caregivers, 325
Uncanny Valley, 53, 58 Virtual consultation, 410
Under-information, 330 Virtual creative environment, 409
Understanding of S-curves, 131 Virtual experiences, 411
Unified Profiles, 261 Virtual fitting, 411
Unity level, 156 Virtual health agents, 223
Unruly, 286 Virtual lab experiences, 363
Unsupervised learning Virtual marketplaces, 183
process, 16, 21 Virtual onboarding, 378
Uploading, 69–70, 198, 394 Virtual reality (VR), 114, 155, 162, 164, 304,
Upshifting, 69, 70 363–365, 409–411
Up-skilling, 136, 361 Virtual shopping worlds, 411
Virtual supervisor, 360, 365 VR glasses, 162, 364, 411
Visier People, 381 VW Group Academy, 362
Visionary view, 421, 423
Visual intelligence, 38
VivoKey, 331 W
Vocational education sector, 364 Wait-and-see approach, 417
Vodafone, 215 Walk-Again-Project, 331
Voice Walk-in Internet, 408
Custom Voice, 246 Walk-in web, 407
digital, 246 Walmart, 200
individual, 246, 270, 306 Wandelbots, 159
male or female, 245, 246 Watson, 23, 286
Voice Ads, 260 Watsonx, 442
Voice analysis of managers, 273 watsonx.ai, 442
Voice-analytics watsonx.data, 442
voice analysis, 272–274 watsonx.governance, 442
Voice Apps, 242, 244, 256, 263–267 Wave.video, 285
Voice App Stores, 256 Waymo, 194
Voice Assistants Waymo Driver, 194, 195
areas of application, 243 Weak Artificial Intelligence, 67, 77
Voice Branding, 244–246, 268 Wealth management
Voice Commerce, 244, 256–261, 264, Algorithm-based, automated, 392
268 Wealth Management, 392–393, 398
Voice Controlling, 244, 263–268 Weapon Technology, 126, 402–405
Voice dialogue systems, 28, 29 Web 1, 407
Voice Distribution, 256 Web 2, 407
Voice Engine Optimization (VEO), 243, Web 3, 407
244, 246–256 Web 3.0, 407
Voice first, 247, 266, 267 Whitebox, 392
Voice Identification, 268–274 White Box of Artificial Intelligence, 71
Voice integration, 244, 262–263, 268 Willingness to use a (new) technology, 135
Voice Interface, 242, 244, 254, 266, 268 Wilo, 186
Voice Marketing Winner-takes-it-all models, 267
areas of responsibility, 244 Wise Athena, 301
fields of action, 243 Wordsmith, 278
learning process, 264 Wordsmith platform, 278
Voice navigation, 254 Workflow automation, 396, 399
Voice-of-Society Measurement, 128 W-questions, 250, 252
Voice only, 246, 266
Voice-only economy, 266
Voice print, 269–271 Y
Voice recognition, 271 YuMi, 158
Voice requests, 251
Voice search
transaction-specific, 255 Z
Voice trend, 247 Zalando, 57, 153, 258
Voice User Interfaces (VUI), 255 Zara, 179, 182
Volkswagen, 34, 262, 331, 362 ZenCity, 350
Volume, 4, 7, 17, 48, 59, 115–118, 121, 122, Zero-Click Search, 250
128, 134, 141, 180, 191, 192, 197, Zero Party Data, 119, 120
203, 205, 218, 242, 251, 271, 346, Zero-Shot Prompting, 442
376, 393, 441 ZipRecruiter, 377
VR experience, 162 Zuckerberg’s Law, 122
