Master Thesis Applications of Artificial Intelligence in Austrian Businesses: A Qualitative Analysis of The Status Quo On Adoption
Submitted to
ACKNOWLEDGEMENTS
I would like to express my deepest gratitude to everyone who contributed to the
successful completion of this thesis.
First and foremost, I extend my heartfelt thanks to all the interview participants who
generously shared their time, insights, and experiences. Your willingness to engage
in thoughtful discussions and provide valuable information was instrumental in
shaping the depth and breadth of this research. Your contributions have been
invaluable, and I am deeply appreciative of your openness and cooperation.
Next, I would like to thank my co-supervisor, BA.MA Rudolf Grünbichler, for your
unwavering support, guidance, and encouragement throughout this journey. Your
expertise, constructive feedback, and thoughtful advice have been crucial in refining
my research and helping me navigate the complexities of this project. Your
mentorship has been a source of inspiration and motivation, and I am profoundly
grateful for your contributions.
Additionally, I extend my gratitude to my primary supervisor, o. Univ.-Prof. Dipl.-Ing.
Dr. techn. Ulrich Bauer, for your continual support and guidance. Your insights and
feedback have significantly enriched my research and contributed to its overall
quality.
I am grateful to the faculty and staff at the BWL Institute, TU Graz, for providing a
conducive environment for research and learning. Your support and resources have
been essential in the completion of this thesis.
Furthermore, I would like to acknowledge the support of my family and friends. Your
encouragement, patience, and understanding have been my pillars of strength
throughout this journey. Thank you for believing in me and supporting me in
countless ways.
Lastly, I extend my appreciation to everyone involved, directly or indirectly, in this
thesis. Your contributions, no matter how small, have played a vital role in the
successful completion of this research.
Thank you all for your support and encouragement. This thesis would not have been
possible without your invaluable contributions.
ABSTRACT
Machine learning empowers systems to adapt, learn patterns from human behaviour,
and facilitate prediction, analysis, and recommendation. With the growth of data and
the democratization of AI, even smaller companies can now develop AI-enabled
services. Numerous consultants and software development firms are assisting both
large and small companies in adopting AI.
This master's thesis aims to investigate the applications of Artificial Intelligence (AI)
in Austrian businesses. AI technology plays a pivotal role in modern enterprises by
providing insights, optimizing operations, and enhancing decision-making, and it
advances tasks such as text analysis, text generation, and content creation, which
are prominent in contemporary applications.
For the qualitative analysis of the current status of AI use in Austrian companies,
interview partners were selected who develop software solutions and act as
consultants for a variety of companies, in order to obtain an overview of AI use
across most industries in Austria. The interview questionnaire was developed on the
basis of a literature review. The interview content was analysed using inductive
coding, from which themes and aggregated dimensions were identified. The results
are summarized to provide an understanding of the status quo of AI adoption in
Austrian businesses.
CONTENTS
1. Introduction
1.1 Initial Situation
1.2 Research Question and Research Objective
1.3 Methods and Procedure
1.4 Delimitation of Study
2 Artificial Intelligence
2.1 Definition and Introduction
2.2 Requirement of Data for Artificial Intelligence
2.3 Artificial Intelligence Terminology
2.4 Factors for the Artificial Intelligence Revolution
2.4.1 Data
2.4.2 Processing Power
2.4.3 Algorithms
2.5 Machine Learning
2.6 Machine Learning Models
2.7 Building a Model
2.8 Metrics in Model Selection
2.9 Regression Metrics
2.10 Generative Artificial Intelligence
2.11 Algorithms
2.11.1 Linear Regression
2.11.2 Regularization
2.11.3 Logistic Regression
2.11.4 SoftMax Regression
2.11.5 Decision Tree
2.11.6 Ensemble Models
2.11.7 Bootstrap Aggregating (Bagging)
2.11.8 Clustering
2.11.9 Deep Learning
2.11.10 Convolutional Neural Networks
2.12 Building a Machine Learning Project
2.13 Workflow of a Data Science Project
3 Research Process
3.1 Preparation of Questionnaire
3.2 Data Preparation and Analysis
3.3 Data Quality
4. Analysis and Results
4.1 AI Integration Across Businesses
4.2 Transformation Through AI
4.3 AI Technologies Across Domains
4.4 AI Implementation Challenges
4.5 Overcoming Challenges and Innovation
4.6 AI Project Lifecycle
4.7 Advice for Beginner Businesses
5. Summary
5.1 Critical Reflection
5.2 Outlook
References
Table of Figures
List of Tables
Abbreviations
Appendix 1: Interview: ARTI ROBOTS
Appendix 2: Interview: LEFTSHIFT ONE
Appendix 3: Interview: AISSISTANCE
Appendix 4: Interview: PETER JEITSCHKO
Appendix 5: Interview: CITYCOM AUSTRIA
Appendix 6: Interview: WIRECUBE and SHOPREME
Appendix 7: Interview: HERZENSAPP
1. Introduction
Technology has always been integral to business success: since medieval times,
whichever business used the better technology has held a competitive advantage.
Take, for example, the rise of Amazon during the age of the internet. By recognizing
the potential of online product sales, Amazon became a dominant force in the
e-commerce industry. In today's digital age, data is the cornerstone of virtually every
aspect of business operations. Whether it is the data stored on a computer's hard
disk or the genetic information encoded in our DNA, data underpins everything we
do. Harnessing this data is crucial for making informed decisions. By analysing
historical data and identifying patterns, we can gain valuable insights into the nature
of our operations and even predict future events. The way human brains gather
data, form neural connections to store information, and then predict events or even
discover and invent new things is known as natural intelligence. When a similar
process is designed and a similar kind of behaviour is expected from a machine or
computer, it is called 'artificial intelligence'.
Business functions such as logistics, sales, and marketing are using this machine
intelligence to their advantage. Policymakers, CEOs, and technical officers are
racing to learn, to make their machines intelligent, and to stay ahead in today's era
of providing perfect services to their customers.
The research questions emerged from the buzz surrounding artificial intelligence
technology and my continuously growing interest in the field. Research, as defined
by Chandra and Hareendran (2017), is creative work undertaken on a systematic
basis in order to increase the stock of knowledge, including knowledge of man,
culture, and society, and the use of this stock of knowledge to devise new
applications.
Research Questions:
• This question aims to investigate the impact of AI on business
processes and productivity.
3. What are the key AI technologies being applied to achieve positive business
impacts?
• This question focuses on identifying the specific AI technologies being
utilized to enhance business processes.
4. What factors influence the adoption of AI in Austrian businesses?
• This question investigates the barriers to adoption and the solutions
used to overcome them.
Research Objectives
The research objective of this thesis is to explore and analyze the current status of
AI adoption among Austrian businesses through a qualitative analysis. It aims to
identify the applications, challenges, and benefits of AI integration across various
sectors. Additionally, the research seeks to provide insights and recommendations to
enhance AI implementation strategies for businesses in Austria.
Significance of Study
In this thesis, a literature review of Artificial Intelligence technology and its use in
different business fields is performed, and a qualitative analysis of the status quo of
the implementation of this technology across different businesses in Austria is
carried out by interviewing stakeholders from several Austrian companies.
How exactly have businesses been implementing this technology so far, and how
has it worked to their advantage? One notable example of the application of Artificial
Intelligence is NASA's Mars Exploration Rover (MER) mission. The MER mission
deployed two rovers to explore the Martian surface; these rovers were equipped
with advanced AI systems that allowed them to navigate autonomously and perform
scientific experiments without human intervention (Portas-Levy, Joseph, Vaitsos,
Quinn, & Simmons, 2022). The AI algorithms used enabled them to make decisions
about where to move and what experiments to conduct based on their surroundings
and scientific objectives. Some technology CEOs experienced their Artificial
Intelligence moment in 2014, when Google acquired the UK-based DeepMind,
whose AI had learned on its own to play certain Atari video games with superhuman
performance (Russell, 2016).
Based on the literature review, the most common and most rapidly developing areas
of application of artificial intelligence were identified and shortlisted, such as
healthcare, finance, human resources, robotics, generative AI, and manufacturing.
The aim was to interview companies with customer bases in a variety of industries;
software companies, for example, have customers in almost all fields, such as
logistics, manufacturing, and healthcare, so that most industries could be covered.
Companies were contacted after researching them online on professional networks
such as LinkedIn and at startup fests, university events, and social events such as a
Christmas get-together of AI professionals. Several on-the-spot interviews were also
conducted at a startup event where the CEOs were present.
The interviews were recorded to capture the audio content accurately. Software
development firms such as 'Wirecube' and 'Leftshift One' were interviewed; they
develop AI software for companies in the finance, healthcare, marketing, production,
human resources, supermarket, and logistics industries. Consultancy firms were
also interviewed to learn about their experience in introducing 'Generative AI' and
other AI technologies to different companies. A robotics startup and a healthcare
startup were interviewed to gain in-depth knowledge about the applications and
status of AI use in the healthcare and robotics domains. All these companies are
located in different cities across Austria.
The qualitative data acquired through the interviews and the literature review were
then summarized and analysed. After transcription of the interviews, rigorous coding
was performed to identify and create seven dimensions that answer specific
questions.
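As a rough illustration of this coding step, coded interview segments can be aggregated into higher-level dimensions. The codes, interview labels, and counts below are hypothetical placeholders (only two of the seven dimensions are shown), not the actual interview data:

```python
# Hypothetical sketch of aggregating inductively assigned codes into dimensions.
from collections import Counter

# Coded interview segments: (interview, code) pairs assigned while reviewing transcripts.
coded_segments = [
    ("Interview 1", "lack of training data"),
    ("Interview 2", "data privacy concerns"),
    ("Interview 2", "chatbot for support"),
    ("Interview 3", "lack of training data"),
    ("Interview 3", "demand forecasting"),
]

# Codes are grouped into aggregated dimensions (hypothetical mapping).
dimensions = {
    "AI Implementation Challenges": {"lack of training data", "data privacy concerns"},
    "AI Integration Across Businesses": {"chatbot for support", "demand forecasting"},
}

# Count how often each dimension is supported across the interviews.
counts = Counter()
for _, code in coded_segments:
    for dimension, codes in dimensions.items():
        if code in codes:
            counts[dimension] += 1

print(counts)
```

The resulting counts indicate which dimensions are supported most frequently across interviews; in the real analysis this grouping was of course done by hand on the transcripts, not by a fixed keyword mapping.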
Each step of the process depends on the previous one, and the process starts with
the formulation of the research question: essentially, how have companies been
using AI technology internally to smooth and optimize processes, obtain better
productivity from their employees, and allow the companies to be more creative and
customer oriented? What customers mostly experience is the end product of AI
utilization, such as a smart app with better recommendations, a smart assistant, or
perhaps a better financial plan for their investments. But what exactly goes on
behind such products? What is it like to start making your products smarter, and
what challenges exactly are faced by the company stakeholders?
[Figure: research process flow — identifying research questions → reviewing
literature on AI → shortlisting industries → preparing the questionnaire → interview
invitations → interviews → qualitative analysis of results]
To understand AI technology from the basics, it is important to review the existing
literature. The literature provides an in-depth look at the workings and basic building
blocks of AI technology: what the algorithms are, for what purposes they can be
used, and what the advantages of a particular algorithm are in a given application.
This increased the curiosity about how experts in industry implement these
techniques in real business situations. The interview questions were not all
predetermined; some were intuitive follow-up questions formulated during the
interviews. Following the interviews with the stakeholders, the recordings were
analysed using axial coding, and the conclusions were derived.
This thesis aims to investigate the applications of Artificial Intelligence within
Austrian businesses. To maintain a focused and manageable scope, several
delimitations are outlined below (Bouma, 2004):
2 Artificial Intelligence
Artificial intelligence (AI) is a disruptive technological development that, together
with robotics, is changing the operating model of companies in each and every one
of their basic functions (Choi & Ozkan, 2019). Artificial Intelligence permits
computers to learn from data without explicit programming by humans (Panda &
Mehta, 2021). It is the simulation of human intelligence by computer systems,
including learning, reasoning, and self-correction. Professor John McCarthy, the
father of Artificial Intelligence, described it in 1955 as the process of creating
computer programs or machines capable of behaviours generally regarded as
intelligent if exhibited by humans: making a machine behave in ways that would be
called intelligent if a human were so behaving (Fontana, 2021).
The tasks that constitute Artificial Intelligence are problem solving, knowledge
representation, reasoning, decision making, actuation, and perception. The essence
of Artificial Intelligence is the ability to make appropriate generalizations in a timely
fashion based on limited data.
Artificial intelligence can be classified into various types based on its capabilities
and functionality. AI can fall under the types 'Narrow', 'General', and 'Super' (Hintze,
2016):
1. Narrow Artificial Intelligence: This covers the capabilities offered by current
systems (as of 2023): systems that take a relatively big data set from the past
and make predictions or answer questions. Narrow AI is not conscious or
driven by emotions the way that humans are (Jajal, 2018). Examples include
object detection in new images, recognizing people from their voices, and
disease detection (Kanade, 2022). When conversing with Siri, Siri is not a
conscious machine responding to the queries; instead it is designed to
process the human language, enter it into a search engine, and return the
results (Jajal, 2018). Though narrow AI is considered weak AI, it has relieved
us of many tedious tasks, such as automatic data analysis, ordering pizza
online, finding the best movies based on our interests, and finding the best
routes to a destination. Narrow AI may help companies make better strategic
decisions, and it also acts as a building block for upcoming AI technology. A
narrow AI is used as a decision-support tool with a focused application, albeit
one with a potentially broad range of operations that it may be applied to (i.e.,
workflow planning and optimization, fraud detection, error reduction, health
diagnosis) (Harwood, 2019).
3. Super Artificial Intelligence: The kind shown in movies these days, in which
the AI has cognitive and physical capabilities superior to those of humans
and surpasses them.
The above figure (figure 2) explains the lifecycle of an Artificial Intelligence project,
which generally includes the following steps, as described in Kura (2020):
1. Business and use case development: In this first step, the specific problem is
identified; the goal is to understand the business objective and define a clear
use case for the AI solution, along with the key challenges and opportunities
that AI can address within the organization. For example, a retail finance
company may want to improve customer satisfaction by developing an
AI-powered chatbot to handle customer inquiries.
2. Design phase: In this phase, the algorithms and models that will be used are
selected, and the overall architecture and data requirements are defined. The
design phase also involves considering any ethical or regulatory issues
associated with the AI system. For instance, the design phase for the chatbot
may involve determining the conversational flow, designing the user
interface, and selecting natural language processing algorithms.
3. Training and test data procurement: The relevant data is collected for training
the models; a sufficient amount of data is required to make accurate
predictions. Test data is also collected to evaluate the performance of the AI
model during the test phase. Continuing with the chatbot example, the
company would need to gather a dataset of customer inquiries and their
corresponding responses for training the chatbot.
4. Building: Once the training and test data are available, the AI model is built.
This involves using machine learning techniques to train the model on the
training data, which helps the model learn patterns and make predictions.
The model is developed using programming languages and related
frameworks. For the chatbot, the company would use the collected dataset to
train the chatbot model to understand and generate appropriate responses.
5. Testing: The accuracy and performance of the AI model are tested on the
test data. Testing helps identify errors in the model and allows improvements
to be made. In the case of the chatbot, the company would assess how well it
understands customer inquiries and provides accurate and helpful
responses.
6. Deployment: Once the Artificial Intelligence model has been tested and meets
the desired performance criteria, it is ready for deployment. Deployment
involves integrating the Artificial Intelligence model into the intended system
or application. For the chatbot, the company would deploy it on their website
or mobile app, allowing customers to interact with it for assistance.
7. Monitoring: After deployment, the AI system needs to be continuously
monitored to ensure its performance, accuracy, and reliability. Monitoring
involves collecting data on how the system performs in real-world scenarios,
detecting any anomalies or errors, and making necessary adjustments or
updates to maintain its effectiveness.
This iterative process ensures that AI models remain accurate, relevant, and
effective in solving business problems.
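As a rough illustration of these lifecycle steps, the following sketch walks a toy 'chatbot intent' model through data procurement, building, testing, and deployment. It is a hypothetical, minimal example in plain Python, not an implementation from any of the interviewed companies:

```python
# Minimal sketch of the AI project lifecycle for a hypothetical chatbot:
# use case -> design -> data procurement -> building -> testing -> deployment -> monitoring.

def build_model(training_data):
    """Building (step 4): learn which words indicate which customer intent."""
    word_to_intent = {}
    for text, intent in training_data:
        for word in text.lower().split():
            word_to_intent.setdefault(word, {}).setdefault(intent, 0)
            word_to_intent[word][intent] += 1
    return word_to_intent

def predict(model, text):
    """Deployment (step 6): score each intent by the words seen in the inquiry."""
    scores = {}
    for word in text.lower().split():
        for intent, count in model.get(word, {}).items():
            scores[intent] = scores.get(intent, 0) + count
    return max(scores, key=scores.get) if scores else "unknown"

# Training and test data procurement (step 3): labelled customer inquiries.
train = [("where is my order", "shipping"),
         ("track my order status", "shipping"),
         ("how do i reset my password", "account"),
         ("change my password please", "account")]
test = [("order status please", "shipping"),
        ("reset password", "account")]

model = build_model(train)

# Testing (step 5): accuracy on the held-out test data.
accuracy = sum(predict(model, t) == y for t, y in test) / len(test)

# Monitoring (step 7) would log live predictions and watch for drift.
print(f"test accuracy: {accuracy:.2f}")
```

In practice the building step would use an ML framework and far more data; the point here is only the order of the lifecycle stages.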
Data is a game changer in Artificial Intelligence, and careful data analysis is
essential for the data to work correctly with the chosen algorithm. Data is a critical
aspect of AI, as it is the fuel that drives the machine learning algorithms behind most
AI applications. Big data refers to very large data sets; Gartner, Inc. defines it as
"high-volume, high-velocity and high-variety information assets that demand
cost-effective, innovative forms of information processing for enhanced insight and
decision making" (Mueller & Massaron, 2019). Another definition, given by the
TechAmerica Foundation, is as follows: "Big data is a term that describes large
volumes of high velocity, complex and variable data that require advanced
techniques and technologies to enable the capture, storage, distribution,
management, and analysis of the information." (TechAmerica Foundation's Federal
Big Data Commission, 2012)
Data can usually be categorised into structured and unstructured data. Structured
data is traditional data such as spreadsheets, long tables, or databases with
columns and rows of information that can be summed, averaged, or otherwise
analysed. But this is rarely how data is presented today: structured data constitutes
only about 5% of all existing data (Cukier, 2010). The datasets commonly
encountered are much messier, and the information usually has to be extracted and
brought into a tidy, structured form. In the digital age, unstructured data can be
collected from digital interactions: emails, Facebook and other social media
interactions, text messages, shopping habits, smartphones (and their GPS tracking),
websites visited, time spent on a website and the content browsed, CCTV cameras
and other video sources, and so on. The amount of data, and the number of
sources that can record and transmit it, has exploded.
Working with data brings several challenges. First, it is big: there is a lot of raw data
that needs to be stored and analysed. Second, it is constantly updating and
changing. Third, its variety can be overwhelming. A famous statistician, John Tukey,
said in 1986, "The combination of some data and an aching desire for an answer
does not ensure that a reasonable answer can be extracted from a given body of
data." Provost and Fawcett (2013) suggested that volume, variety, and velocity are
the three dimensions of challenge in data management.
The process of using data to train an AI algorithm usually involves the following
steps (Géron, 2019):
1. Data acquisition: The first step is to acquire the data. This could involve
collecting data from various sources, including sensors, social media, customer
feedback, or other digital devices.
2. Data cleaning: Once the data is acquired, it needs to be cleaned and
pre-processed to remove any inconsistencies, errors, or missing values. This is a
critical step, as the quality of the data directly affects the accuracy and reliability of
the AI algorithm.
3. Data labelling: In supervised learning, the data needs to be labelled to help the
algorithm understand the patterns and relationships in the data. For example, if the
data is being used to develop a spam filter, each message needs to be labelled as
either spam or not spam.
4. Data training: Once the data is cleaned and labelled, it is used to train the AI
algorithm. During training, the algorithm uses the data to identify patterns and
relationships and adjusts its parameters to improve its accuracy.
5. Data validation: Before deployment, the trained algorithm is evaluated on data it
has not seen during training, to confirm that it generalizes to new examples.
6. Data deployment: Once the AI algorithm has been validated, it can be deployed
to make predictions or decisions based on new data.
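The steps above can be sketched end to end for the spam-filter example. This is a hypothetical, self-contained illustration; in a real project the labels in step 3 would come from human annotation rather than the keyword rule used here to keep the sketch runnable:

```python
# Sketch of the data steps above (hypothetical spam-filter example, pure Python).

# 1. Data acquisition: raw records from some source; None marks a missing value.
raw = ["WIN a FREE prize now", None, "meeting moved to 3pm",
       "free prize inside", "lunch tomorrow?"]

# 2. Data cleaning: drop missing values and normalize the text.
cleaned = [text.lower().strip() for text in raw if text is not None]

# 3. Data labelling: attach a spam / not-spam label to each example
#    (a keyword rule stands in for human annotation in this toy sketch).
labelled = [(text, "spam" if "free" in text or "prize" in text else "not spam")
            for text in cleaned]

# 4. Data training: a trivial "model" that memorizes which words occur in spam.
spam_words = {w for text, label in labelled if label == "spam" for w in text.split()}

def classify(text):
    """6. Data deployment: apply the trained model to new, unlabelled data."""
    return "spam" if any(w in spam_words for w in text.lower().split()) else "not spam"

print(classify("claim your free gift"))
```

The new message is flagged because "free" was seen in spam during training; a real filter would of course use a statistical model rather than a memorized word set.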
Consider building a mobile app that predicts certain values, for example house
prices: the features would be given as input A, and the price would be the output B.
This would be a machine learning (ML) system, one of those that learns an A-to-B
mapping; machine learning often results in a running Artificial Intelligence system.
There can also be a situation in business where a team wants to analyze a dataset
to gain insights. The team might reach a conclusion such as 'newly renovated
homes sell at a 15% premium', which can support decision making. This would be
an example of a data science project, where insights from data help drive business
decisions.
Machine learning, as defined by Arthur Samuel, is the field of study that gives
computers the ability to learn without being explicitly programmed. In contrast, data
science is the science of extracting insights and relevant knowledge from data.
1. Supervised learning: Here the AI model learns from labelled training data, where
each data instance has an associated target or label. The basic ingredient of a
supervised machine learning algorithm is a data set containing matching inputs and
outputs (Klaas, 2019). The inputs can take many shapes and forms, such as
company fundamentals, time series, transaction records, news, and reports. The
collection of data is often referred to as the design matrix, and the inputs are the
explanatory variables in the model. Supervised learning enables a machine to learn
human or object behaviour in certain tasks; the learned knowledge can then be
used by the machine to take similar actions on these tasks (Liu, 2012). This type of
machine learning has been successfully used in areas such as information retrieval,
data mining, computer vision, and market analysis (Cunningham, 2023).
Figure 3: Types of Artificial Intelligence (Géron, 2019)
The above figure (figure 3) explains how machine learning and deep learning are
nested within the term artificial intelligence. Deep learning is part of the broader
family of machine learning methods based on artificial neural networks. Artificial
intelligence is a simulation of human intelligence, in which an interplay exists
between goal seeking, the data processing used to achieve that goal, and the data
acquisition used to better understand the goal. Machine learning is one of a number
of subsets of AI; in machine learning, the goal is to create a simulation of human
learning so that an application can adapt to uncertain or unexpected conditions. To
perform these tasks, ML relies on algorithms to analyze huge datasets. Deep
learning is a subset of machine learning. In both cases, algorithms appear to learn
by analyzing huge datasets; however, deep learning differs in the depth of its
analysis and the kind of automation it provides.
Data, processing power, algorithms, and a global network were the critical elements
needed for the launchpad, and they were provided by the internet and smartphone
revolution. These factors have collectively contributed to the rapid growth and
widespread adoption of AI technology in recent years.
2.4.1 Data
Videos, documents, and voices began piling up as representations of reality,
captured in digital formats and uploaded into the ocean of the internet (Boobier,
2018). The ability to tap into this large amount of data makes it possible to gain
insights, develop new perspectives, discover new knowledge, and make better
decisions. Algorithms that had been around since the 1980s suddenly found new
life, as AI algorithms require large amounts of data to be trained effectively. This
data is used to teach the model to recognize patterns, make predictions, and
perform other tasks; without it, AI would not be able to learn and improve over time.
2.4.2 Processing Power
Processing power is another crucial factor that has made Artificial Intelligence
possible. Enhancements in graphics processing unit (GPU) power improved the
ability of a machine to process data, and improvements in big data architectures
enabled multiple machines to work together to process major volumes of data. The
combined effect empowered the algorithms to achieve learning efficiency. AI
algorithms often require large amounts of computational resources to train and run
effectively. For instance, deep learning models, which are commonly used in image
and speech recognition, require immense amounts of processing power to analyse
vast data sets and identify patterns. This processing power has enabled
researchers to achieve breakthroughs in AI applications such as natural language
processing, speech recognition, computer vision, and robotics (Makridakis, 2017).
2.4.3 Algorithms
create new discoveries and solve problems (Roll & Wylie, 2016).
Supervised learning is a type of machine learning where the data set used to train
the algorithm includes labelled examples. In other words, the data set includes both
inputs (features) and the corresponding outputs (labels). The algorithm learns to map
inputs to outputs based on the labelled examples in the training set. The inputs can
take many shapes and forms, such as company fundamentals, time series, transaction
records, news and reports (Klaas, 2019). Once the algorithm is trained, it can be
used to predict the output for new, unlabelled inputs. For example, a system can be
trained to differentiate between a koala and a kangaroo by inputting data on both
animals. The system learns to differentiate the images based on a set of
features: it first learns the features of each animal's images and then
differentiates accordingly. As the system receives more and more data, its
predictions improve.
In businesses, supervised learning can be used to predict customer behaviour,
optimize pricing strategies, and improve supply chain management. For example, a
retail company might use supervised learning to predict which products a customer
is likely to purchase based on their browsing history and purchase history. This
information can be used to provide personalized recommendations and improve the
customer experience.
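The mapping from labelled examples to predictions for new inputs can be sketched with a minimal supervised learner. The sketch below uses a 1-nearest-neighbour classifier on an invented two-feature koala/kangaroo dataset; the feature names and values are purely illustrative assumptions, not data from this thesis.

```python
# Minimal supervised-learning sketch: a 1-nearest-neighbour classifier.
# Training pairs inputs (features) with labels; prediction assigns a new
# input the label of its closest training example.

def euclidean(a, b):
    # straight-line distance between two feature tuples
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_1nn(train, query):
    """train: list of (features, label) pairs; query: feature tuple."""
    features, label = min(train, key=lambda pair: euclidean(pair[0], query))
    return label

# Hypothetical features: (ear roundness, hop strength), both in [0, 1]
train = [((0.9, 0.1), "koala"), ((0.8, 0.2), "koala"),
         ((0.2, 0.9), "kangaroo"), ((0.1, 0.8), "kangaroo")]

print(predict_1nn(train, (0.85, 0.15)))  # near the koala examples
print(predict_1nn(train, (0.15, 0.85)))  # near the kangaroo examples
```

Adding more labelled examples to `train` refines the decision boundary, which mirrors the point above that predictions improve as the system receives more data.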
In reinforcement learning the algorithm does not have historical data to draw
conclusions from. This is a type of Artificial Intelligence in which an agent learns
to navigate an environment by receiving rewards or punishments. Based on the
consequences of its previous decisions, the agent learns the best course of action.
The goal of reinforcement learning is to maximize the cumulative reward over time,
which can lead to optimal decision making in complex and uncertain situations
(Javatpoint, 2022). A popular use case of reinforcement learning is game playing,
particularly the development of intelligent game-playing agents. Any use case of
reinforcement learning (RL) demonstrates the technology's ability to learn from
experience and improve over time through feedback from its environment, making it a
powerful tool in finance.
Figure 4: Machine learning process (Kura, 2020)
The figure above illustrates the machine learning process: the input is an image of a
car, the important features are extracted, and when these are fed to the algorithm it
outputs whether the given image shows a car or not.
1. The first step is to choose the set of features, i.e. the attributes of the data
that are used as inputs to the model.
2. The second is to choose the algorithm: there are different kinds of algorithms
that can be selected from. The algorithm acts as a general form for the model,
defining its rough shape and structure.
3. Each algorithm has a set of hyper-parameter values that need to be defined. These
make the algorithm simpler or more complex so that it best fits the problem.
Machine learning models are algorithms that allow computers to learn from and
make predictions based on data. These models are trained on datasets to recognize
patterns and make decisions without being explicitly programmed for each task.
2.7 Building a model
CRISP-DM is a cross-industry standard process for applying data science and
machine learning to solve problems. The process starts with step one, business
understanding, which focuses on having a solid understanding of the problem to be
solved, of what defines success in solving it, and of how to measure that success.
Step two involves collecting, organizing and identifying the data needed to solve
the problem. Step three prepares the data for modelling. Step four involves building
and then evaluating the model, which is followed by deploying the model
(Yao, Zhou, & Jia, 2018).
1. Selection of features: This is done in the earlier phases of gathering and
preparing the data, through a process called feature selection and feature
engineering. The aim is to identify which features of the data have the most impact
in terms of being able to predict the target value, such as the sale price.
3. Users can adjust the individual hyper-parameters to make the model perform a
little better; here the values for the hyper-parameters are set.
4. The fourth component is to define the loss function, also called the cost
function. The loss function is used to evaluate the performance of the model while
the model is being built.
In machine learning, generally two different types of metrics are used to evaluate
model performance. The first type is called:
1. Outcome Metrics: Outcome metrics refer to the desired business impact of the
model or of the broader product that is created, either for the organisation or for
the customers. Typically, the business impact is stated in terms of currency, so it
might be an amount of money saved or an amount of money generated. Sometimes it can
be time as well, but typically it refers to the impact on the customer. Outcome
metrics do not include model performance metrics or other technical metrics.
2. Output Metrics: Output metrics refer to the desired output of the model,
measured in terms of a model performance metric. Output metrics are not
communicated to the customer. They are generally set after the desired outcome has
been defined, allowing the choice of outcome metric to dictate the selection of the
output metric used to evaluate the model (Fontana, 2021). Output metrics are used at
several points in the modelling process. First, they are used to compare and
evaluate the different models that might be created and to select the model to use.
Once the final model is selected, output metrics are used to evaluate its
performance on the test set of data prior to deploying it and delivering it to the
customers.
Metrics in model selection are critical for comparing different machine learning
models to determine which one performs best for a given task. These metrics, such
as cross-validation scores, mean squared error, and AUC-ROC, help evaluate the
generalization ability and predictive power of models, guiding the selection of the
most appropriate model for deployment.
For regression modelling problems there are three common metrics: mean squared
error, mean absolute error, and mean absolute percent error (Domingos, 2017).
Mean squared error is calculated by summing the squared differences between the
actual target values and the predicted values and then dividing by the number of
observations. It is heavily influenced by outliers. The lower the MSE, the better
the model's performance.
Mean absolute error measures the average absolute difference between the
predicted and actual values. It is calculated by taking the sum of the absolute
errors and dividing by the number of data points.
Mean absolute percent error measures the average percentage difference between
the predicted and actual values. It is calculated by taking the sum of the absolute
percentage errors and dividing by the number of data points. MAPE is often used to
compare the performance of models across different datasets. However, it can be
problematic when the actual value is close to zero.
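The three regression metrics just described can be computed directly from their definitions. The actual and predicted values below are invented purely for illustration.

```python
# MSE, MAE and MAPE computed exactly as defined in the text above.

def mse(actual, pred):
    # mean of the squared differences; sensitive to outliers
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def mae(actual, pred):
    # mean of the absolute differences
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def mape(actual, pred):
    # mean of the absolute percentage errors; breaks down near actual == 0
    return sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual) * 100

actual = [100.0, 200.0, 300.0, 400.0]   # illustrative targets
pred   = [110.0, 190.0, 330.0, 360.0]   # illustrative predictions

print(mse(actual, pred))    # 675.0
print(mae(actual, pred))    # 22.5
print(mape(actual, pred))   # 8.75 (percent)
```

Note how the single largest error (40) dominates the MSE through squaring, while MAE and MAPE treat all errors proportionally — the outlier sensitivity mentioned above.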
Tools that are becoming increasingly popular in businesses and homes all around the
world, such as ChatGPT and Midjourney, are categorized as generative AI tools. These
tools generate something new, like text, images and even videos. They generally do
this by determining the mathematical relationship between a dependent variable and
independent variables to describe some set of data, and sometimes by categorizing
previously unseen data (classification). The field of generative art has been around
for decades: graphic artists and computer scientists were creating programs to
generate visual art as far back as the 1970s (Stratis, 2023). Neural networks, the
building blocks of artificial intelligence, play a crucial role in the development
of generative AI. These networks, inspired by the human brain, are adept at learning
patterns and making data-based decisions (Solari, 2023). Neurons, the base units of
neural networks, are mathematical functions that behave like a simplified version of
a biological neuron (Stratis, 2023). Early neural networks were too computationally
complex, but improvements in hardware and network architecture made deep neural
networks one of the dominant forces in the world of machine learning and artificial
intelligence, especially in computer vision.
represents a latent code. The decoder reconstructs the input data from these latent
codes. VAEs are used for tasks like image denoising, dimensionality reduction and
generating new data. They have the advantage of providing continuous latent space,
allowing for smooth interpolation between data points. (Almalki, 2021)
GANs consist of two neural networks: a generator and a discriminator. The generator
creates new data from random noise. The discriminator evaluates whether the
generated data is real or fake. The generator aims to produce realistic data to fool
the discriminator, while the discriminator learns to distinguish real from generated
data. GANs are known for their ability to create highly realistic images. In summary,
VAEs focus on latent space modelling, the process of mapping data (such as images,
text, or other representations) into a lower-dimensional space while preserving
essential features (Sosa, 2020).
VAEs and GANs play a crucial role in the field of generative AI. VAEs learn a
continuous latent space representation of data. This latent space captures essential
features and variations.
In today's businesses, generative Artificial Intelligence is mostly used as a
personal assistant for repetitive tasks; as generative AIs are good at text
generation, businesses also use them to create scenarios for training purposes.
Customer personalization, understanding customer behaviour, generating leads and
creating personalized content for targeted marketing are a few of the other use
cases of GAI in business. In business intelligence, GAI is used for predictive
analysis, where generative models are utilized to generate simulated future
scenarios based on historical data. By simulating various what-if scenarios,
businesses can assess the potential impact of different decisions and develop more
informed strategies. In the financial sector, where banks and financial institutions
hold an abundance of data from customer interactions and other processes, it is
possible to develop models that respond to customer inquiries and train staff on
financial services. BloombergGPT represents the first step in developing and
applying this technology in the financial industry (Ooi, Lee, & Tan, 2023). In the
following interviews you will come across the software company "Leftshift One",
which develops domain-focused generative AI tools for retail banks to enhance
business operations.
2.11 Algorithms
Algorithms are an essential component of machine learning and are used to identify
patterns and make predictions. There are two primary types of algorithms in machine
learning: parametric and non-parametric. Parametric algorithms make use of a
known form or template to model the input-output relationship, while non-parametric
algorithms do not make any assumptions about the form of the relationship. Linear
regression, ridge regression, lasso regression, logistic regression, naive Bayes,
linear SVM, perceptron, and neural networks are all examples of parametric
algorithms. Decision tree, k-nearest neighbour, and random forest are examples of
non-parametric algorithms. The choice of algorithm for a given problem depends on
several factors, including the desired level of accuracy, interpretability, and
computational efficiency (Russell & Norvig, 2016).
Linear models are parametric algorithms, meaning they take on a known form that is
used as a sort of template to model the input-to-output relationship. This template
is fixed using a predetermined set of coefficients or parameters, and when a model
is built or trained, the job is to learn the values of those coefficients or
parameters. Parametric models can train quickly and work well even on small data.
The downside of using a parametric model is that it is constrained to a specific
form; because of this, it can sometimes be too simple for the complex real-world
problems being modelled, and so parametric models are prone to underfitting.
The other class of models are called nonparametric algorithms; these do not make
any assumptions about the form or template of the input-to-output relationship in
advance of building and training the model. Instead, these models are highly
flexible and can adapt well to complex nonlinear data and relationships, which can
result in higher predictive performance. However, they require more data to train
and are prone to overfitting.
2.11.1 Linear Regression
Linear regression is a popular statistical and machine learning technique used for
modeling the relationship between a dependent variable and one or more
independent variables. It assumes a linear relationship between the variables, where
the dependent variable can be predicted based on the values of the independent
variables. The goal of linear regression is to find a linear relationship between the
input variables and the output variable. This is represented as a straight line on a
graph, where the x-axis represents the input variable, and the y-axis represents the
output variable. The equation for a linear regression model is (Kuhn & Johnson,
2015):
𝑦 = 𝑏0 + 𝑏1𝑥1 + 𝑏2𝑥2 + ⋯ + 𝑏𝑛𝑥𝑛
In this equation, y is the output variable, x1, x2, x3, ... xn are the input variables, and
b0, b1, b2, ... bn are the coefficients that represent the slope of the line.
The goal of linear regression is to find the values of b0, b1, b2, ... bn that
minimize the difference between the predicted values of y and the actual values of
y. This difference is called the residual. The most common method for minimizing the
residuals is called the ordinary least squares method.
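For a single input variable, the ordinary least squares solution has a simple closed form: the slope is the covariance of x and y divided by the variance of x, and the intercept follows from the means. The sketch below implements this; the data points are invented to lie exactly on the line y = 1 + 2x, so OLS recovers b0 = 1 and b1 = 2.

```python
# Simple linear regression fitted with ordinary least squares (closed form).

def ols_fit(xs, ys):
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    # slope: sum of centred cross-products over sum of centred squares
    b1 = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
         sum((x - x_mean) ** 2 for x in xs)
    # intercept: the line passes through the point of means
    b0 = y_mean - b1 * x_mean
    return b0, b1

# Illustrative points lying exactly on y = 1 + 2x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]
b0, b1 = ols_fit(xs, ys)
print(b0, b1)  # 1.0 2.0
```

With noisy real data the residuals would be non-zero and OLS would return the line that minimizes their sum of squares rather than an exact fit.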
Linear regression can be used for both simple and multiple linear regression. Simple
linear regression uses only one input variable to predict the output variable. Multiple
linear regression uses two or more input variables to predict the output variable
(Bird, Klein, & Loper, 2009).
Linear regression can be used for a variety of applications, such as predicting sales
based on advertising spend, predicting housing prices based on square footage and
number of bedrooms, or predicting the performance of a student based on their GPA
and extracurricular activities.
One of the key assumptions of linear regression is that there is a linear relationship
between the input variables and the output variable. If there is a non-linear
relationship, such as a quadratic or exponential relationship, linear regression may
not be appropriate. Another key assumption is that there is no multicollinearity
between the input variables. Multicollinearity occurs when two or more input
variables are highly correlated with each other. This can lead to unstable and
unreliable coefficient estimates (Flach, 2012).
2.11.2 Regularization
Two popular regularization techniques are lasso and ridge regression. In lasso
regression, the penalty factor is calculated as the sum of the absolute values of
the coefficients multiplied by a lambda value. Lambda is a fixed value that is set
by the user, and it controls the strength of the penalty to be applied: as lambda
increases, a higher penalty is applied, and vice versa. Lasso regression has the
effect of forcing coefficients all the way to zero if the variables behind those
coefficients are not relevant in predicting the output. If lasso regression is
applied with a sufficient penalty factor, it forces those coefficients to zero and
thereby removes those features from the equation altogether. Lasso regression can
therefore also be considered a form of feature selection, because it generally
reduces the number of features present in the final model equation
(Dutt & Chattopadhyay, 2018).
Ridge regression does not have the effect of forcing coefficients all the way to
zero. It forces irrelevant features towards zero, but generally not all the way. It
can be an effective modelling strategy to reduce overfitting and improve the balance
between simplicity and fit on the training data.
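The shrinkage behaviour of ridge regression can be made concrete in the one-variable case, where the ridge coefficient has the closed form b = Sxy / (Sxx + lambda): a larger lambda pushes the coefficient toward zero but never exactly to zero. The data and lambda values below are illustrative assumptions.

```python
# Ridge regression for a single input variable: the penalty lambda is
# added to the denominator of the OLS slope, shrinking it toward zero.

def ridge_coef(xs, ys, lam):
    x_mean = sum(xs) / len(xs)
    y_mean = sum(ys) / len(ys)
    sxy = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    sxx = sum((x - x_mean) ** 2 for x in xs)
    return sxy / (sxx + lam)     # lam = 0 recovers plain OLS

xs = [1.0, 2.0, 3.0, 4.0]        # illustrative data on y = 1 + 2x
ys = [3.0, 5.0, 7.0, 9.0]

for lam in (0.0, 5.0, 50.0):
    print(lam, ridge_coef(xs, ys, lam))  # coefficient shrinks as lam grows
```

The printed coefficients shrink monotonically with lambda yet stay positive, which is exactly the "towards zero but not all the way" behaviour that distinguishes ridge from lasso.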
Regularization can be a highly effective strategy when working with regression
models and can often give a better model than a standard linear regression model
alone, particularly when dealing with complex data with many features.
The logistic regression model is trained using a labelled dataset where the outcome
variable is binary and each data point is associated with a set of input variables.
Figure 5 explains the logistic regression model. To find the optimal values of the
coefficients 𝑤1……𝑤𝑝 of the linear model, first the cost function is defined; then
the optimal values of the weights or coefficients that minimize the cost function
are sought. In general, to minimize a function such as the cost function, one
calculates its derivative, also called the gradient of the function, and sets the
derivative equal to zero. In logistic regression, however, users resort to an
iterative solving method called gradient descent to solve for the values of the
coefficients that minimize the cost function (Brink, Richards, & Fetherolf, 2017).
Use case: Logistic regression could be used to predict the probability of a patient
having heart disease based on their age, blood pressure, cholesterol levels, and
other medical indicators. This information could then be used to inform medical
decisions such as treatment options and preventative measures.
the animals known, only moose have horns. So, if the answer to the question is yes,
it can be predicted that the animal is a moose. However, if the answer to the
question is no, then a second question is asked.
A decision tree is a type of supervised learning algorithm that is mostly used for
classification problems but can also be used for regression problems. It works by
recursively splitting the data into subsets based on the most significant attribute
or feature at each step. The decision tree starts with the entire dataset as the
root node and selects the best feature to split the data into subsets. The selection
of the best feature is based on the impurity of the data, which can be measured by
metrics such as entropy and the Gini index. The feature with the highest information
gain, i.e. the largest reduction in impurity, is chosen to split the data. One of
the key benefits of decision tree models is that they are highly interpretable:
because the model is a series of questions, it is very easy to follow the order of
the questions and to trace back how a certain prediction was reached for a given
input value (Domingos, 2017).
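The split criterion just described can be sketched as a Gini-impurity calculation: a split is good when the weighted impurity of the child nodes is much lower than that of the parent. The two-class labels below are an invented illustrative sample.

```python
# Gini impurity and the impurity reduction achieved by a candidate split.

def gini(labels):
    n = len(labels)
    if n == 0:
        return 0.0
    # 1 minus the sum of squared class proportions
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

parent = ["yes"] * 5 + ["no"] * 5       # maximally mixed node: gini = 0.5
left, right = ["yes"] * 5, ["no"] * 5   # a perfect split: both pure

weighted_child = (len(left) * gini(left) + len(right) * gini(right)) / len(parent)
print(gini(parent), weighted_child)     # 0.5 0.0 -> impurity reduced by 0.5
```

A tree-growing algorithm would compute this reduction for every candidate feature and threshold and pick the split with the largest value, which corresponds to the highest information gain.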
An ensemble of models starts by creating multiple datasets from the original
dataset. Each set can be a fully replicated version of the original dataset or a
smaller slice of it. Models are then trained on the newly created datasets. All
models can use the same algorithm, trained in different ways using different
hyper-parameters on different sets of data, or they can use different algorithms;
linear models, for example, can be combined with tree models. Once these multiple
models have been trained, each model can be used to generate predictions. An
aggregation function is then needed to combine the predictions into a single output
prediction from the ensemble model, and here the form of the aggregation function
must again be decided. Once the aggregation function is chosen, the predictions of
the member models can be combined into a single prediction, which is the output of
the ensemble model. One very common use case of ensemble models is in weather
forecasting; the electricity industry, where ensemble models are used to predict the
load or demand on the network, is another example (Myles, 2012).
Figure 6: Ensemble of models (Shalev-Shwartz & Ben-David, 2014)
One specific method that is commonly used for building ensemble models is called
bagging, which is short for bootstrap aggregating. In bagging, bootstrap samples are
used to train the multiple models that are put together in an ensemble. Bootstrap
means sampling with replacement: from a large number of samples or observations in
the data, a certain number of observations is randomly drawn to train a model, and
each time an observation is drawn it is replaced in the original set. Because each
model is trained on different data, their output predictions can be considered close
to independent. Thus, by averaging their predictions, the variance is reduced
(Collins, 2019).
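The sampling-with-replacement step can be sketched directly. Each bootstrap resample below has the same size as the original data, so some observations repeat and others are left out — which is what makes the models trained on them nearly independent. The ten-point dataset is illustrative.

```python
import random

# Bootstrap sampling: draw n observations with replacement from n originals.

random.seed(42)                 # fixed seed so the sketch is reproducible
data = list(range(10))          # illustrative stand-in for a dataset

def bootstrap(sample):
    # each draw picks uniformly from the full sample, so repeats are allowed
    return [random.choice(sample) for _ in sample]

resamples = [bootstrap(data) for _ in range(3)]  # one per ensemble member
for r in resamples:
    print(sorted(r))
```

In full bagging, one model would be trained on each resample and their predictions averaged (regression) or majority-voted (classification).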
The most common type of bagging model is called a random forest. One of the
challenges with a single decision tree is that it overfits the data. To overcome
this challenge, rather than growing a single decision tree for the problem to be
modelled, multiple decision trees are grown and a majority vote is taken between the
trees. To ensure that each tree is as close as possible to being independent of the
other trees, the trees are grown on bagged subsets of the data. These tree models
are combined by taking a majority vote in the case of classification or, if the
random forest is applied to a regression problem, a simple average of the
predictions generated by each model (Yao, Zhou, & Jia, 2018). By doing this, the
variance of the output predictions is reduced, which reduces the likelihood of the
ensemble random forest model overfitting the data. Random forests are great for
complex real-world problems where there are highly nonlinear relationships between
input and output.
3. Depth of trees.
Random forests are widely used in fields such as finance, healthcare and marketing
for tasks such as fraud detection, disease diagnosis, and customer segmentation,
among others.
2.11.8 Clustering
a problem (Flach, 2012).
A deep learning model can learn through its method of computing. The terms deep
learning and neural networks are often used interchangeably. Deep learning can be
explained with an example: a pizza-selling company wants to analyse how many pizzas
are sold from its website. The analysis may give the output that the higher the
price, the lower the demand. A straight line may fit through the points, showing
that as the price goes up, the demand goes down.
Figure 7: Predictions
This blue line is the simplest possible neural network, having as input the price A
and as output the estimated demand B. The artificial neuron calculates the blue
curve shown in the figure.
Suppose that, instead of just the price of the pizza, data is also available on the
shipping cost the customers would have to pay, the types of toppings the pizza has,
the budget spent on marketing the pizza, and the type of crust the pizza has. In
this case the neural network might look more complex, having multiple neurons. One
neuron's job is to calculate the affordability of the pizza, so affordability is a
function of price and shipping cost. The second thing that might affect the pizza
demand is the taste it has to offer, where the crust type and toppings play a major
role; and finally the awareness, which is a function of marketing. In total there
are now three neurons, which feed their outputs to one final neuron that computes
the demand for the pizza.
With big enough data and proper training, neural networks can do an incredible job
of mapping from inputs A to outputs B.
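The pizza example can be sketched as a forward pass through such a network: three intermediate neurons (affordability, taste, awareness) feed one output neuron that estimates demand. All weights and the ReLU activation choice here are invented purely for illustration; a real network would learn its weights from data.

```python
# Forward pass of the three-neuron pizza-demand sketch described above.

def relu(z):
    # rectified linear activation: negative inputs are clipped to zero
    return max(0.0, z)

def demand(price, shipping, crust, toppings, marketing):
    affordability = relu(10.0 - price - shipping)   # cheaper -> more affordable
    taste = relu(0.5 * crust + 0.5 * toppings)      # crust and toppings
    awareness = relu(0.1 * marketing)               # marketing budget
    # final neuron combines the three intermediate signals into demand
    return relu(2.0 * affordability + 1.5 * taste + 1.0 * awareness)

# Lower price should yield higher estimated demand, other inputs equal.
print(demand(5.0, 1.0, 1.0, 1.0, 10.0))
print(demand(8.0, 1.0, 1.0, 1.0, 10.0))
```

Training would adjust the invented weights so that the network's A-to-B mapping matches observed sales data rather than these hand-picked values.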
CNNs were initially developed to tackle computer vision problems but can also be
applied to many other domains (Boobier, 2018). Computer vision is a field of
artificial intelligence that enables computers and systems to derive meaningful
information from digital images, videos and other visual inputs and, based on those
inputs, to take actions (Hansen, 2023). A convolution is an operation in which two
functions are combined to produce a third one. A CNN consists of a series of layers,
including convolutional layers, pooling layers, and fully connected layers
(Hansen, 2023). The convolutional layers are the core building blocks of a CNN,
where the majority of computations occur; they learn to extract features from the
input data by applying a set of learnable filters or kernels to the input. A
convolutional layer requires a few components: input data, a filter, and a feature
map. Each filter produces a feature map, which highlights a particular pattern or
feature in the input. By applying multiple filters to the input, the convolutional
layer learns to detect different features at different locations in the input. The
pooling layers are used to down-sample the feature maps produced by the
convolutional layers, reducing the dimensionality of the data and making the network
more computationally efficient.
A common method is Max Pooling, which serves several purposes: it creates a
down-sampled feature map at lower resolution while retaining only the most important
and relevant information for the task. It also adds a small amount of translation
invariance, meaning that small changes in the location of features in the raw input
will not impact the pooled feature map, making relative location more important than
absolute location.
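The down-sampling step can be sketched concretely as 2x2 max pooling with stride 2: each non-overlapping 2x2 block of the feature map is replaced by its maximum. The 4x4 feature map below contains invented values for illustration.

```python
# 2x2 max pooling with stride 2: keep the largest value in each block.

def max_pool_2x2(fmap):
    pooled = []
    for i in range(0, len(fmap), 2):
        row = []
        for j in range(0, len(fmap[0]), 2):
            # maximum over the 2x2 block starting at (i, j)
            row.append(max(fmap[i][j], fmap[i][j + 1],
                           fmap[i + 1][j], fmap[i + 1][j + 1]))
        pooled.append(row)
    return pooled

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 5],
        [0, 1, 3, 2],
        [2, 6, 1, 0]]
print(max_pool_2x2(fmap))  # [[4, 5], [6, 3]]
```

Each output cell records only that a strong activation occurred somewhere in its 2x2 block, not exactly where — which is the small translation invariance mentioned above.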
Inside a CNN there are two sections. The objective of the first section is to
extract meaningful features; it consists of a succession of blocks of convolutional
layers, ReLU activation units and pooling layers. The convolutional operations
create a preliminary feature map. The ReLU activation unit forces each point on the
feature map to take on either the value 0 or its own value, and max pooling then
keeps only the strongest feature activations. The blocks closer to the input extract
low-level features like edges and corners. Those little features combine to make
mid-level features, and the blocks closer to the output contain high-level features
like wheels, bumpers and windows from various angles, as well as any general parts
of the object within the image set. The second section of the network aims at
solving the task (Goodfellow, 2023).
Machine learning algorithms can learn input-to-output, A-to-B, mappings. Consider
an example of a visual analytics project in a radiology department, where after
being fed a set of pictures of patients' lungs, the system detects tumours; machine
learning is similarly used in detecting breast cancer. The first step involves
collecting the data of past such cases of lung tumours or breast cancers.
Step two involves training the model; this means a machine learning algorithm is
used to learn an input-to-output mapping, where the input would be images of
previous cases of lung tumours. The process involves repeatedly attempting the
input-to-output mapping; in Artificial Intelligence this is called iterating many
times. Then, finally, the model is deployed. But once it is deployed, there are
cases where the system does not output the expected results or the accuracy
achieved during training, as it starts receiving new and more varied data. The
solution is to collect data back after deployment for fine-tuning and adapting the
model to work better. To summarize, the three steps involve collecting the data,
training the model and finally deploying the model (Chishti & Bourdeau, 2020).
Data science is the science of extracting insights and knowledge from data,
analysing it and suggesting hypotheses and actions to the team. Broadly speaking,
data analysis falls into six categories: descriptive, exploratory, inferential,
predictive, causal, and mechanistic. The goal of descriptive data analysis is to
describe or summarize the set of data. This is the first analysis to be performed
with a new dataset. This kind of analysis summarizes the samples and their
measurements, including measures of central tendency or measures of variability.
The goal of exploratory analysis is to examine or explore the data and find
relationships that weren't previously known; it explores how different measures
might be related to each other but does not confirm that a relationship is
causative. All an exploratory analysis can tell us is that a relationship exists,
not its cause. In inferential analysis, a relatively small sample of data is used to
infer or say something about the population at large. Inferential analysis involves
using the small amount of data, trying to generalize to a larger population, and
then giving the uncertainty about the estimate. The goal of predictive analysis is
to use current data to predict future events. Current and historical data can be
used to find patterns and predict the likelihood of future outcomes. Generally,
having more data and a simple model performs well at predicting future outcomes.
Causal analysis examines what happens to one variable when another variable is
manipulated, looking at the cause-and-effect relationship. Causal analysis is often
considered the gold standard of data analysis (Provost & Fawcett, 2013). The goal of
mechanistic analysis is to understand the exact changes in variables that lead to
exact changes in other variables. Mechanistic analysis can be found in materials
science experiments. The workflow of a data science project involves collecting
data, analysing it and suggesting changes. To make this clear with an example,
suppose customer data for a retail company with an online website needs to be
analysed to gain insights into consumer behaviour and preferences. This involves
analysing data on engagement with the company's website and mobile app, on customer
purchases, and on customer satisfaction metrics. The data science methodology would
involve (Provost & Fawcett, 2013):
1. Processing of data, wherein flaws and missing data are cleaned and brought
into the same order across the dataset.
2. Exploratory analysis of the data to identify patterns and correlations in the
data.
3. Extraction and engineering of relevant features to improve the predictive
power of the model. The insights gained from this analysis could then be used
to make data-driven decisions about inventory management, marketing
strategies, and personalized customer experiences.
4. Selection of a model based on the performance of various models. The
company may use clustering algorithms to group customers based on their
historical behaviour patterns and develop targeted marketing promotions for
each group.
5. Training and testing of the model, following the selection of the model,
wherein the model is trained on one piece of data and evaluated on a
separate piece of data.
6. Finally, deployment of the model.
The data science project workflow thus involves collecting and preprocessing data,
followed by exploratory data analysis, model building, evaluation, and deployment.
This structured approach ensures that data-driven insights are systematically
extracted and applied to solve real-world problems.
A lesson learned from the internet era is that a company listing its products on its
website does not thereby become an internet company. A good internet company is not
simply a traditional company that lists its products online.
that has successfully leveraged the internet to create new business models, innovate
in ways that improve the customer experience, and create new value for customers.
This may involve using advanced analytics to gain insights into customer behaviour,
utilizing artificial intelligence to improve decision-making, or creating new distribution
channels that are more efficient and cost-effective. Ultimately, a good internet
company is one that can stay ahead of the curve and continue to innovate and
36
adapt as the internet landscape evolves.
Similarly, in the Artificial Intelligence era, a company that uses a little deep learning
does not thereby become a great Artificial Intelligence company. Thinking through
how to acquire data is a key characteristic of great Artificial Intelligence companies.
Artificial Intelligence companies use machine learning and other Artificial Intelligence
technologies to develop products and services that can learn from data and improve
over time. They may use deep learning to create models that can make predictions
or provide insights based on large amounts of data. These companies may
specialize in specific industries or applications, such as retail analytics or customer
relationship management. Ultimately, successful Artificial Intelligence companies are
those that are able to leverage Artificial Intelligence technology to create new
business models, innovate in ways that improve the customer experience, and
create new value for customers (Kipouros, 2017).
• Pervasive automation.
5. Develop internal and external Communications.
But the question remains how to use this technology in a project in a way that builds
up to and aligns with the corporate strategy.
3 Research process
The research process involves a systematic and methodical approach to investigate
the status quo of the adoption of AI technologies by Austrian businesses. It began
with identifying research questions, followed by a review of existing literature to gain
in-depth knowledge of AI technologies. Then a research methodology was developed,
including data collection and analysis techniques (Bouma, 2004).
After reviewing the literature on AI technologies, their basics, and current
developments from different sources, such as books, articles from industry leaders
like IBM, and research papers, a basic understanding of the algorithms, data
analysis, machine learning, and their particular applications was gathered.
The review started with text mining and how it benefits business intelligence. Text
mining is applied to social media to perform sentiment analysis. Social media is
omnipresent, used by around 5.16 billion people worldwide as of 2024
(Larson, 2024). Companies have use cases for customer sentiment analysis,
improving customer support, market research, brand reputation management, and
product development and innovation. Technologies like Natural Language
Processing, machine learning, and deep learning are driving a transformative
change in the healthcare industry, enabling machines to automatically detect
diseases based on prior experience and recommend better treatments, and they are
also impacting the nursing profession. AI is transforming the way financial industries
offer products and services to their customers, with advanced fraud detection,
anti-money laundering, personalized financial investment plans based on customer
profiles, and customized, personal query handling.
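To make the text-mining idea concrete, a minimal lexicon-based sentiment scorer can be sketched in a few lines of Python. The word lists and messages below are invented for illustration and are not from any interview or cited source:

```python
# Minimal lexicon-based sentiment scorer (illustrative word lists only).
POSITIVE = {"great", "love", "fast", "helpful", "excellent"}
NEGATIVE = {"slow", "broken", "bad", "unhelpful", "terrible"}

def sentiment(text: str) -> str:
    """Label a message positive, negative, or neutral by counting lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Toy customer messages, as a company might collect from social media.
messages = [
    "The support team was helpful and fast",
    "Checkout is broken and the app is slow",
]
labels = [sentiment(m) for m in messages]
```

Real systems replace the hand-written lexicon with trained NLP models, but the pipeline, raw text in, sentiment label out, is the same.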
The first part of the interview was designed to understand the company and its
background, in line with the research question on the status quo of AI
implementation, and to learn about the company's AI journey. The questions I
prepared were informed by the following sources.
A report by IBM on the adoption of AI by businesses also examines questions such
as: can a technology like AI be integrated into businesses? Does it offer the ability to
scale across the organization? (Quincy, 2023). It dives deeper into the technologies
adopted by different businesses and their advantages and impact, such as Natural
Language Processing (NLP), a form of AI whose advantages have already been
proven in use cases like enhanced customer engagement through personalized
experiences and human-like conversations with a chatbot. The use and integration
of NLP has also been shown to enhance employee productivity. Hence the following
question to learn which technologies are being adopted by businesses from different
industries in Austria: ‘Could you share specific examples or use cases where AI has
made a notable impact on improving efficiency, decision making, or customer
experiences within your company?’
A Harvard Business Review case study, “Customer Experience in the Age of AI”,
delves deeper into how traditional businesses with a large customer base initially
faced challenges: even though they had generated vast amounts of data over time,
they faced tough competition from technologically advanced competitors. Through
collaborations with AI startups, however, they tapped into their customer databases,
analyzed thousands of messages and customer choice details, and began using AI
to personalize their products, increasing sales figures dramatically (Abraham, 2022).
The report by ‘Boundless AI’ shares the journeys of different small and medium-sized
businesses and explains how the AI journey for a small business usually starts with
using AI tools to solve specific painful tasks. Some start using it to gain deeper
insights into customer behaviour, others to tailor customer-oriented products
(AI, 2023). Hence the following question for the companies in Austria: “When and
how did your AI journey begin, and what were the initial goals?”
The second part addresses which exact technologies were implemented and in
which processes, which algorithms were used, how they impacted the process, and
how they fulfilled the customers' requirements. Questions were added and removed
depending on the responses of the interviewee.
algorithms to automate routine tasks and elevate customer interactions to new levels
of efficiency and satisfaction.
The IBM article “The most valuable AI use cases for business” provides insights into
27 highly productive ways that AI can help businesses improve their bottom line. The
process behind technology selection is crucial for understanding strategic decision
making within organisations. As highlighted in industry reports on technological
integration and innovation strategies, companies employ diverse methodologies to
select technologies that align with their goals and objectives. For instance, some
companies conduct rigorous market research and analysis to identify emerging
trends and assess the potential impact of various technologies on their operations.
Others rely on partnerships with technology vendors or engage in pilot projects to
evaluate the feasibility and performance of different solutions (AI, 2023). Hence the
question for Austrian businesses about which technologies they adopted to achieve
their goals and objectives: “How did you select AI technologies?”
A report on data and its importance on the Apify blog outlines common challenges of
AI implementation (Lukáč, 2023), such as insufficient talent, old infrastructure, costs,
and data quality and storage, which made me curious about the situation for
Austrian businesses. Understanding the challenges faced during the implementation
of AI technologies is essential for gaining insights into the complexities of
technological integration. As discussed in a ‘Dataconomy’ report (Çıtak, 2023),
data-related issues such as data quality, availability, and integration often pose
significant hurdles. Organizations frequently grapple with a shortage of skilled
personnel who possess the necessary expertise to develop, deploy, and maintain AI
technologies. This skills gap can slow down implementation and increase
dependency on external consultants or vendors. Hence the question related to
challenges: “What challenges did you face during implementation?”
technologies offer valuable insights into the transformative potential of AI within
businesses. The article ‘AI implementation in business strategy, benefits and
examples’ (FP Data Solutions Team, 2024) provides insights into successfully
implementing AI in businesses, as well as practical implementation benefits like
improved decision making, enhanced customer experience, automation and cost
savings, valuable data insights, and increased efficiency. To learn more about the
impact Austrian businesses have seen from implementing AI, the question: “What
improvements and benefits have been observed since implementing AI in your
business? What were the positive and negative consequences?”
Examining how data collection, storage, and quality assurance are managed in AI
projects reveals the foundational practices that underpin successful AI
implementation. As discussed in (Javaid, 2024), effective data management is critical
for the accuracy and reliability of AI models. Planning and identifying the sources of
data is very important to address ethical and legal factors. Given the project goals, it
is important to know where the data will come from. Hence the question: “How was
data collection, storage, and quality assurance for AI projects managed?”
The article ‘Reskilling in the age of AI’ in Harvard Business Review (Tamayo, 2023)
discusses the challenges of reskilling workers in the face of technological change,
including the impact of automation and AI. It discusses change management
initiatives and the responsibilities of leaders and management. The willingness of
employees and their resistance to change is a major factor. It also gives an example
of how companies often supplement their internal training efforts by hiring external
experts and consultants who bring specialized knowledge and experience to the
organisation. Hence the question: “How did you address the skill gap in the early
stage of AI implementation? How was upskilling done?”
research projects, internships, and collaborative workshops. These enable shared
learning and cross-pollination of ideas among startups, leading to better
identification and capitalisation of new opportunities (Venture Mind AI, 2023). Hence
the question: “Have you engaged in collaborations or partnerships with AI startups,
research institutions, or other businesses to advance your AI projects? How has
collaboration impacted your AI journey?”
Lastly, to ask for advice on AI implementation for other Austrian businesses getting
started: “What advice or recommendations would you offer to other Austrian
businesses considering AI implementation?”
And finally, the technical sections. As mentioned before, no very deep technical
questions were asked, as most candidates were from the business development
side. The literature review provided knowledge of which algorithms can solve which
problems, such as recommender systems in healthcare, which play a major role in
similarity checks and the personalization of patient treatment.
The following questions were prepared to ask the candidates. Some questions were
changed based on the responses of the interviewee.
Developing custom algorithms for AI projects involves careful selection and design
processes tailored to specific business needs. Based on recent industry reports and
case studies, several custom AI algorithms play a crucial role in various domains,
from digital advertising to finance. As mentioned in one article, a notable factor in the
rise of custom algorithms is the exponential growth in data processing capabilities
enabled by cloud computing (Kaye, 2021). Hence the following questions were
asked: “Can you give an overview of specific machine learning or deep learning
algorithms used in your applications? Were any custom algorithms developed? How
were the algorithms selected, and what were the key considerations?”
in drug discovery and medical imaging, expediting R&D and improving diagnostic
accuracy (Zhavoronkov, 2020). Hence the questions: “How is generative Artificial
Intelligence currently being utilized in different businesses or industries? Can you
share examples of specific use cases where generative Artificial Intelligence has
shown notable impact or innovation in a business context?”
To learn more about the businesses in Austria and in which particular sectors they
are implementing generative AI, the following question: “Could you provide examples
of how generative Artificial Intelligence is specifically applied in industries like
marketing, healthcare, finance, or manufacturing? How does generative Artificial
Intelligence adapt or vary in its applications across different business sectors?”
$4.4 trillion to the global economy annually (McKinsey & Company, 2023). Hence, to
get an idea from generative AI consultancies in Austria, the question: “What future
trends do you foresee regarding the use of generative Artificial Intelligence in
businesses? Are there any emerging applications or advancements in generative
Artificial Intelligence technology that could potentially disrupt various industries?”
In synthesizing and evaluating the qualitative data from the interviews, a coding
procedure aimed at identifying patterns and themes is utilized. This method distils
the information into seven distinct subcategories, each tailored to address the
research questions. Data for the thesis is derived from interviews with seven different
interview partners representing eight companies. These interviews are recorded and
transcribed, and the statements and opinions are used to create a novel theory. The
interview partners are selected because they belong to various industries in Austria
and have a wide customer base across different business sectors.
Inductive coding involves data collection, which in this thesis is conducted via
interviews with stakeholders from different businesses across Austria. Open-ended
questions based on the literature review, research questions, and objectives are
presented. The recorded interviews are transcribed and thoroughly read to get a
general sense of the content. After becoming familiar with the material, significant
statements are identified. Similar codes are grouped together to form broader
themes. The data is revisited to refine these themes, and then each theme is defined
to form aggregated dimensions (Magnani & Gioia, 2023).
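The inductive coding steps described above can be illustrated with a small sketch. The statements and code labels below are invented for the illustration, not quotations from the interviews:

```python
from collections import defaultdict

# Significant statements identified in (hypothetical) transcripts,
# each tagged with a first-order code.
coded_statements = [
    ("We automated first-level ticket routing", "automation"),
    ("Response times dropped sharply", "efficiency"),
    ("The chatbot answers first-level queries", "automation"),
]

# Similar codes are grouped together to form broader themes.
code_to_theme = {
    "automation": "Efficiency and automation",
    "efficiency": "Efficiency and automation",
}

# Revisiting the data: each theme aggregates the statements that support it.
themes = defaultdict(list)
for statement, code in coded_statements:
    themes[code_to_theme[code]].append(statement)
```

In the actual analysis this grouping is of course done manually and iteratively, but the sketch shows the data shape: statements, first-order codes, themes, aggregated dimensions.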
The following table gives the details of the businesses and their representatives.
partner companies are software developers providing services to a vast
customer base. In that way, an attempt is made to cover most sectors in
Austria. Interviews are audio recorded and transcribed to capture the
context.
2. Data Analysis: Thematic analysis is performed to identify recurring patterns
within the interview transcripts. The interviews were repeatedly coded, and
discrepancies were resolved through thorough analysis.
3. Triangulation: The findings were triangulated by comparing the interview data
with other sources of relevant literature. Consistency across different sources
strengthened the dependability of the results.
4. Transferability: While this thesis focuses on the applications of AI technology
in Austrian businesses, I believe the findings have broader applicability
because a detailed description of the study context, including the industry,
technology, advantages, processes, and business background, is provided.
Readers can assess the relevance of the findings to similar contexts.
5. Reflexivity: As the researcher of this thesis, I recognize my subjectivity and
potential biases. However, I have remained open to alternative interpretations,
documenting my decision-making process during analysis and reflecting on
my assumptions.
The process is as follows: after identifying the initial codes, second-order codes and
themes are identified to form an aggregated dimension, as shown in the following
figure:
[Figure: Analysis process showing first-order codes grouped into themes and the
aggregated dimension ‘AI Integration Across Austrian Businesses’, covering sectors
such as information and communication technology, internet services, and
healthcare providers.]
THEMES
Intelligent autonomous behaviour in robots
Applications in logistics, agriculture, specialized machinery, and construction sites
Text generation, text analysis, data analysis
IT service ticket analysis, categorization, dispatching
Smarter chatbots
Predictive maintenance for the production industry
Content creation for the media industry
GenAI in sales, marketing, administration, product design
Business intelligence, management reporting, and data analytics
Healthcare personalization
AI for internet services
AI in recruiting
Companies are increasingly investing in automating repetitive tasks, streamlining
service ticket categorization in the IT industry, and optimizing the dispatching
process.
The second-order codes and themes derived to form the aggregated dimension
‘Transformation Through AI’ are shown in the following figure.
Second-order codes contain significant statements from interview partners, such as
‘first-level customer support’ and ‘speed up response time’, which represent the
transformations made through AI in businesses.
THEMES
Efficiency and automation
Advanced intelligent services
Predictive and proactive maintenance
Personalized customer interaction
Innovation in healthcare
Content generation and enhancement
Pattern recognition and analytical insights
The second-order codes and themes derived to form the aggregated dimension
‘AI Technologies Across Domains’ are shown in the following figure. Second-order
codes include object classification, rerouting around obstacle paths, machine
learning algorithms, reinforcement learning, trial and error, and computer vision,
grouped into themes such as ‘Machine Learning and AI Algorithms’ and ‘Computer
Vision and Image Processing’.
Figure 15: Analysis process for 'AI Technologies Across Domains' dimension
In the following table, the themes derived from the second-order codes are listed.
Table 4: Themes for AI Technologies Across Domains
THEMES
Machine learning and AI algorithms
Computer vision and image processing
Natural language processing
Automation and robotics
Predictive and Analytical Tools
The second-order codes and themes derived to form the aggregated dimension
‘AI Implementation Challenges’ are shown in the following figure. Second-order
codes include problems with fraud detection, process automation, and
personalization of customer service; challenges with streamlining data and in human
resources; responsible and fair use of AI; bias in data; change management; fear of
replacement; government regulations; unacceptability of open AI models; privacy
and security concerns; challenges with technical services; and making applications
intuitive and user-friendly.
In the following table, the themes derived from the second-order codes are listed.
THEMES
Data challenges
Technical and resource limitations
Ethical and regulatory concerns
Organisational and management issues
Personalization and customisation challenges
The second-order codes and themes derived to form the aggregated dimension
‘Overcoming Challenges and Innovation’ are shown in the following figure.
Second-order codes include designing the problem so that customers can train the
system automatically; prioritizing high-quality data gathering; managing substantial
volumes of data through databases and cloud-based storage; developing a common
base model that can be adapted as needed; predefined parameters to ensure data
security; refraining from cloud storage in banking, HR, and pharma; ethics
guidelines; historical data audits; a human-centered AI approach with change
supported by effective communication; and compliance with GDPR and government
regulations.
Figure 17: Analytic process for 'Overcoming challenges and innovation' dimension
The second-order codes are analysed and categorised into themes, which are listed
in the following table.
THEMES
Data management and security
Customization and flexibility
Regulatory compliance and ethics
Industry-specific considerations
Infrastructure and technology implementation
Collaboration and partnership
Hands-on learning and experimentation
Training and skill development
The second-order codes and themes derived to form the aggregated dimension
‘AI Project Lifecycle’ are shown in the following figure. Second-order codes include
significant statements such as: the success of a project is measured depending on
its nature, customer acceptance, product appeal, and feedback from support
personnel; AI development is conducted in-house, with an in-house team collecting
and cleansing data to ensure its accuracy and consistency; frameworks like
PyTorch, TensorFlow, and Keras are used for designing the model architecture,
determining the number of layers, selecting activation functions, and choosing
optimizers; a model is considered successful once it achieves an accuracy of 80%;
cost calculations are analysed per day or per task; AI-driven processes are
benchmarked against human performance to gauge the efficacy of the solutions;
ROI can be challenging to measure when integrating AI technologies, as the impact
is difficult to quantify; success is evaluated based on the time and cost saved, effort
saved through automating tasks, and increases in productivity; the level of
digitalisation and the size of the company are assessed for AI implementation;
understanding the specific objectives of companies within industries is crucial and
helps tailor solutions; generative AI implementation starts with knowing the company
from the inside, collecting high-quality data, choosing an appropriate GenAI
architecture, and then training and evaluation; cloud-based tools like AWS or
Microsoft Azure are used to set up the necessary infrastructure; a technical
architecture is created based on the business case, followed by experimentation
and a first prototype, often built with low-code or no-code tools; some partners have
been doing machine learning since long before it was fashionable, with past projects
analysing huge data sets from industry customers; and a Kanban process is used in
most projects to track progress and manage tasks efficiently.
THEMES
In-house development and data management
Technical frameworks and tools
Project success measurement
Implementation strategies
Project management and processes
The AI project lifecycle in Austrian businesses involves key stages: data collection
and preprocessing, model development, testing and validation, deployment, and
continuous improvement. Factors like business goals, strategies, and knowing the
business from the inside are also taken into consideration. Companies begin by
understanding specific business objectives and tailoring solutions accordingly. The
choice of AI technology depends on the business's unique needs and challenges.
Rigorous testing and validation of AI models are essential to ensure reliability and
effectiveness. Starting with small, manageable projects helps build confidence in AI
technologies, and continuous learning and adaptation are crucial to staying current
with AI advancements.
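Several interviewees described concrete success criteria, such as an 80% accuracy bar and benchmarking AI-driven processes against a baseline. A hedged sketch of such holdout evaluation (the data, class names, and threshold below are illustrative, not taken from any interview):

```python
import random

def evaluate(model, data, threshold=0.80):
    """Hold out 20% of labelled examples and check accuracy against a success bar."""
    data = list(data)
    random.seed(0)                      # reproducible split for the sketch
    random.shuffle(data)
    split = int(len(data) * 0.8)
    train, test = data[:split], data[split:]
    model.fit(train)
    correct = sum(model.predict(x) == y for x, y in test)
    accuracy = correct / len(test)
    return accuracy, accuracy >= threshold

class MajorityBaseline:
    """Trivial baseline: always predict the most common training label."""
    def fit(self, rows):
        labels = [y for _, y in rows]
        self.label = max(set(labels), key=labels.count)
    def predict(self, x):
        return self.label

# Toy labelled dataset: 80 normal readings, 20 faults.
data = [(i, "ok") for i in range(80)] + [(i, "fault") for i in range(20)]
accuracy, success = evaluate(MajorityBaseline(), data)
```

A production model would replace `MajorityBaseline`, but keeping such a baseline makes an accuracy bar meaningful: a model that cannot beat the trivial predictor adds no value.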
The second-order codes and themes derived to form the aggregated dimension
‘Advice for Beginner Businesses’ are shown in the following figure. Second-order
codes include: exchange technology; do not underestimate the complexity of AI, as it
takes a lot of money and time; instead of starting everything from scratch, it is smart
to ask for help from businesses that already work in that field; start small;
dissatisfaction with companies' big artificial intelligence strategies; take a use case,
define goals, learn together with a partner, and grow with AI step by step; some
companies are just starting with digitalization and are far from adopting AI; there is
currently a downturn in certain sectors, particularly manufacturing; AI is becoming
ubiquitous, much like IT, in departments such as HR, production, and customer
service; the level of digitalisation and the size of the company matter for AI
implementation; understanding the specific objectives of companies within industries
is crucial and can help tailor solutions; GenAI will be used for creative processes,
administrative work, generalist and routine work, image enhancement, and text
generation, and will grow in the direction of a digital assistant; implementing GenAI
is generally not a huge, big-budget project, so companies can start quickly and need
not be afraid; consider partnering with experts; start small with manageable projects;
let go of tunnel vision; get a complete idea of AI and how it can improve services
and customer satisfaction; introspect whether your solutions even need AI, echoing
the first sentence of Google's machine learning handbook, ‘Are you absolutely sure
you need ML or AI to solve your problem?’; the adoption of AI technology depends
on the company's strategic plan and the nature of its products and services; it is
important to assess whether the implementation of a particular technology is truly
necessary despite the hype surrounding AI; customer needs must be a primary
consideration in determining whether AI is genuinely beneficial; and blindly following
AI technology can lead to a waste of time and money.
Figure 19: Analytic process for ‘Advice for Beginner Businesses’ dimension
Experienced businesses advise beginners to start by assessing their digital maturity
and defining clear business goals and objectives. Prioritizing high-quality data
acquisition and rigorous testing is crucial. Collaborating with experienced partners
and research institutions can provide valuable insights and support. Beginners
should start with small-scale projects to build confidence and gradually scale up.
Emphasizing continuous learning, experimentation, and technology exchange helps
foster a culture conducive to successful AI adoption. Ethical and regulatory
compliance, particularly regarding data privacy, is essential for sustainable AI
integration.
5 Summary
To summarize, it was found that many companies are divided in their response.
Some readily accepted the change, while others hesitate due to concerns over
change or potential job loss. Some companies had already made small portions of
their operations smart, having developed their own machine learning algorithms.
However, upon experimenting with tools like ChatGPT and encountering AI
consultants who demonstrated the capabilities of the technology, it became evident
that traditional algorithms were in some cases cumbersome, time-consuming, and
resource-intensive.
significant transformation due to AI integration. The introduction of smarter chatbots
and generative AI models is revolutionizing business intelligence, management
reporting, and data analytics, enabling companies to make more informed decisions
and streamline their operations.
Advanced product and service features are another area where AI is making a
substantial impact. The development of smarter products, smart digital assistants,
and AI-powered chatbots is enhancing the user experience and providing more
sophisticated and reliable internet services. Companies are investing in these
technologies to stay competitive and meet growing demands. Predictive and
proactive maintenance is a crucial application of AI, particularly in the production and
manufacturing industries. AI enables companies to achieve quality requirements
through predictive maintenance, proactively address potential problems before they
escalate, and reduce downtime. This predictive capability is essential for maintaining
high operational standards and minimizing the costs associated with unexpected
equipment failures.
enhancement, generating text for medical records, and creating targeted content for
various audiences. This capability is particularly beneficial for media and art
industries, where creativity and efficiency are paramount.
Pattern recognition and data analysis are integral to the successful implementation
of AI across Austrian businesses. AI technologies are used to detect unexpected
situations, interpret business data, craft training scenarios, and match CVs with job
descriptions. Auto language translating chat services and the ability to recognize
patterns in large datasets are enabling companies to make better-informed decisions
and optimize their operations.
The research process for this thesis involved synthesizing and evaluating qualitative
data from interviews with representatives from various Austrian companies. After
transcription, a rigorous coding process was conducted to identify and create
domains that address the research questions. The analysis provided valuable
insights into the current state of AI applications in Austrian businesses, highlighting
both the challenges and opportunities presented by AI integration.
personalize customer interactions, and improve healthcare outcomes. Despite the
challenges, the continuous advancements in AI technologies and the emphasis on
ethical and responsible AI usage promise a future where AI plays a pivotal role in the
growth and success of Austrian businesses.
Ethical and regulatory concerns are highlighted in the research, but the discussion
could be expanded to include a more nuanced analysis of the ethical implications of
AI. Issues such as algorithmic bias, transparency, and accountability are crucial to
the responsible use of AI, and a more detailed examination of these topics would
strengthen the ethical considerations of the study.
5.2 Outlook
The findings of this thesis have several important implications for businesses and
policymakers. For businesses, the research underscores the importance of aligning
AI technologies with strategic objectives and ensuring that ethical considerations are
at the forefront of AI adoption. The emphasis on collaboration, continuous learning,
and data quality provides a roadmap for companies looking to integrate AI
successfully.
For policymakers, the research highlights the need for supportive frameworks that
encourage innovation while safeguarding against the risks associated with AI.
Regulations should balance promoting AI adoption with ensuring ethical standards
and protecting individual privacy.
Future research should aim to address the limitations identified in this study. A more
extensive and diverse sample of businesses, combined with quantitative data, could
provide a more robust analysis of AI adoption. Additionally, a deeper exploration of
the technical and ethical challenges associated with AI would provide valuable
insights for both practitioners and scholars.
References
Slowik, C. (2023). A step-by-step guide to generative AI implementation.
Neoteric.eu. https://neoteric.eu/blog/a-step-by-step-guide-to-generative-ai-
implementation/
Collins, M. J. (2019). Data science for executives: Leveraging machine intelligence
to drive business ROI. Industrial Research Institute Inc.
McKinsey & Company. (2023). What’s the future of generative AI? An early view in
15 charts. McKinsey.
Cunningham, P., & Delany, S. J. (2023). Supervised learning. In P. Cunningham &
S. J. Delany (Eds.), Machine learning techniques for multimedia: Case studies
on organization and retrieval (pp. 21-49). Springer Berlin Heidelberg.
Molteni, D. (2024). Cloudflare announces firewall for AI. The Cloudflare Blog.
https://blog.cloudflare.com/firewall-for-ai/
Finger, D. (2022). FortiNDR: Adding AI-powered network detection and response to
your security fabric. Fortinet. https://www.fortinet.com/blog/business-and-
technology/introducing-fortindr
Ruppert, D., & Matteson, D. S. (2011). Statistics and data analysis for financial
engineering. Springer.
Domingos, P. (2015). The master algorithm: How the quest for the ultimate learning
machine will remake our world. Basic Books.
Domingos, P. (2017). Master algorithm: How the quest for the ultimate learning
machine will remake our world. Penguin Books Ltd.
Flach, P. (2012). Machine learning: The art and science of algorithms that make
sense of data. Cambridge University Press.
Fontana, A. (2021). The AI-first company: How to compete and win with artificial
intelligence. Portfolio.
Future Processing data solutions team. (2024). AI implementation in business: How
to do it successfully? Future Processing. https://www.future-
processing.com/blog/ai-implementation-in-business/
Garcia, D., & Gluesing, J. (2013). Qualitative research methods in international
organizational change research. Journal of Organizational Change
Management.
Géron, A. (2019). Hands-on machine learning with Scikit-Learn, Keras, and
TensorFlow (2nd ed.). O'Reilly.
Magnani, G., & Gioia, D. (2023). Using the Gioia methodology in international
business and entrepreneurship research. International Business Review.
Hansen, C. (2023). Neural networks from scratch. IBM Developer.
https://developer.ibm.com/learningpaths/get-started-with-deep-learning/neural-networks-from-scratch/
Haque, I. R., & Neider, A. (2020). Automated design of product prototypes using
generative adversarial networks. IEEE Transactions on Neural Networks and
Learning Systems.
Harwood, T., & Jones, A. (2019). Role of artificial intelligence (AI) art in care of aging
society: Focus on dementia. OBM Geriatrics.
Brink, H., Richards, J., & Fetherolf, M. (2017). Real world machine learning. Manning
Publications.
Hintze, A. (2016). Understanding the four types of AI, from reactive robots to self-
aware beings. The Conversation. https://theconversation.com/understanding-
the-four-types-of-ai-from-reactive-robots-to-self-aware-beings-67616
Holdsworth, J. (2024). The most valuable AI use cases for businesses. IBM.
https://www.ibm.com/blog/artificial-intelligence-use-cases/
Javatpoint. (2022). Reinforcement learning tutorial.
https://www.javatpoint.com/reinforcement-learning#Markov
Jajal, T. D. (2018). Distinguishing between narrow AI, general AI and super AI.
Mapping Out 2050.
Javaid, S. (2024). Quick guide to AI data collection quality assurance in 2024. AI
Multiple Research. https://research.aimultiple.com/data-collection-quality-
assurance/
Shalev-Shwartz, S., & Ben-David, S. (2014). Understanding machine learning:
From theory to algorithms. Cambridge University Press.
Tamayo, J., & Di Paola, D. (2023). Reskilling in the age of AI. Harvard Business
Review.
Kanade, V. (2022). Artificial intelligence. Spiceworks.
https://www.spiceworks.com/tech/artificial-intelligence/articles/what-is-ai/
Kaye, B. K. (2021). Here’s what’s behind the rise of custom algorithms for digital ad
decisions. Digiday. https://www.chalice.ai/heres-whats-behind-the-rise-of-
custom-algorithms-for-digital-ad-decisions
Kipouros, G. (2017). AI transforming business corporate CxO perspectives. Futurum
Media Ltd.
Klaas, J. (2019). Machine learning for finance. Packt Publishing.
Kura, S. (2020). Data science vs the world of AI. Medium, 1-3.
https://medium.com/@BiglySales/data-science-vs-ai-understanding-the-
differences-and-synergies-ac5c8481ec51
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. The MIT Press.
Larson, S. (2024). Social media users 2024 (global data & statistics). Priori Data.
https://prioridata.com
Lee, K.-F. (2018). AI superpowers: China, Silicon Valley, and the new world order.
Houghton Mifflin Harcourt.
Elgammal, A., Liu, B., Elhoseiny, M., & Mazzone, M. (2017). CAN: Creative
adversarial networks, generating art by learning about styles and deviating
from style norms. arXiv preprint.
Liu, Q., & Wang, D. (2012). Supervised learning.
Lukáč, D. (2023). 6 challenges of AI implementation. Apify. https://apify.com
Lee, M. C. M., & Smith, H. (2024). The implementation of artificial intelligence in
organizations: A systematic review. Elsevier.
Makridakis, S. (2017). The forthcoming artificial intelligence (AI) revolution: Its impact
on society and firms. Futures, 90, 46-60.
Yao, M., Zhou, A., & Jia, A. (2018). Applied artificial intelligence: A handbook for
business leaders. Topbots.
Marr, B. (2021). How AI is transforming healthcare. Forbes. https://forbes.com
Kuhn, M., & Johnson, K. (2015). Applied predictive modeling. Springer.
Mohri, M., Rostamizadeh, A., & Talwalkar, A. (2018). Foundations of machine
learning. The MIT Press.
Chui, M. (2022). The state of AI in 2022—and a half decade in review. McKinsey
Global Institute. https://mckinsey.com
Venture Mind. (2023). AI-powered collaboration for startups: Unleashing synergy and
innovation through partnerships. https://venturemind.com
Murphy, K. P. (2012). Machine learning: A probabilistic perspective. The MIT Press.
Conway, D., & White, J. M. (2012). Machine learning for hackers. O'Reilly Media,
Inc.
Ng, A. (2018). Machine learning yearning: Technical strategy for AI engineers in
the era of deep learning. Self-published.
Aggarwal, N., & Liu, A. (2023). KPIs for gen AI: Why measuring your new AI is
essential to its success. Google Cloud.
Oche, P. A., & Rajasekar, A. (2021). Applications and challenges of artificial
intelligence in space missions. IEEE Access.
Ooi, K. B.-E.-S., Lee, H. S., & Tan, G. W.-H. (2023). The potential of generative
artificial intelligence across disciplines: Perspectives and future directions.
Journal of Computer Information Systems.
Portas-Levy, D., Joseph, A., Vaitsos, A., Quinn, C., & Simmons, K. (2022).
Applications of artificial intelligence as a disruptive tool in space education.
NASA Astrophysics Data System.
Provost, F., & Fawcett, T. (2013). Data science for business: What you need to know
about data mining and data-analytic thinking. O'Reilly Media, Incorporated.
Quincy, C. (2023). IBM again recognized as a leader in the 2023 Gartner® Magic
Quadrant™ for enterprise conversational AI platforms. IBM.
Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in
education. International Journal of Artificial Intelligence in Education, 582-599.
Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach.
Pearson.
Dutt, S., & Chattopadhyay, S. (2018). Machine learning. Pearson Education India.
Panda, S. K., & Mehta, V. (2021). Artificial intelligence and machine learning in
business management. CRC Press.
Smith, J., & Johnson, M. (2021). Generative AI in the entertainment industry:
Enhancing visual effects and animation. Journal of Media Production.
Solari, S. (2023). Understanding neural networks in generative AI. LinkedIn.
Sosa, J., & Jones, B. (2020). A review of latent space models for social networks.
arXiv preprint.
Bird, S., Klein, E., & Loper, E. (2009). Natural language processing with Python.
O'Reilly Media, Inc.
Stratis, K. (2023). What is generative AI? O'Reilly Media, Inc.
Chishti, S., & Bourdeau, I. (2020). The AI book: The artificial intelligence handbook
for investors, entrepreneurs, and fintech visionaries. Wiley.
Theobald, O. (2018). Machine learning for absolute beginners: A plain English
introduction (2nd ed.).
Theobald, O. (2023). Generative AI art for beginners. Packt Publishing.
IntechOpen. (2023). Types of artificial intelligence and future of artificial intelligence
in medical sciences.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... &
Polosukhin, I. (2017). Attention is all you need. In Advances in neural
information processing systems (Vol. 30).
Chandra, V., & Hareendran, A. (2017). Research methodology. Pearson Education
India.
Wu, R. M., & Marin, M. (2021). E-business: Higher education and intelligence
applications. BoD–Books on Demand.
Zhao, H., & Young, T. (2020). Understanding fashion trends with generative
adversarial networks. International Journal of Computer Vision.
Zhavoronkov, A. I. (2020). Deep learning enables rapid identification of potential
drug candidates. Nature Biotechnology.
Table of figures
Figure 1: Research Process (Bouma, 2004)
Figure 2: Artificial Intelligence lifecycle (Kura, 2020) .................................................. 9
Figure 3: Types of Artificial Intelligence (Géron, 2019)............................................. 15
Figure 4: Machine learning process (Kura, 2020) .................................................... 19
Figure 5: Logistic Regression model ........................................................................ 27
Figure 6: Ensemble of models (Shalev-Shwartz & Ben-David, 2014) ..................... 30
Figure 7: Predictions ................................................................................................ 32
Figure 8: Simple neural net ...................................................................................... 32
Figure 9: Complex Neural Network .......................................................................... 33
Figure 10: Background and context questions ......................................................... 40
Figure 11: Implementing AI Questions ..................................................................... 43
Figure 12: Technical Questions ................................................................................ 46
Figure 13: Analysis process for ‘AI Integration Across Business’ dimension ........... 49
Figure 14: Analysis process for 'Transformation Through AI' dimension .................. 50
Figure 15: Analysis process for 'AI Technologies Across Domains' dimension ........ 52
Figure 16: Analysis process for 'AI Implementation Challenges' dimension ............ 52
Figure 17: Analytic process for 'Overcoming challenges and innovation' dimension 54
Figure 18: Analytic process for 'AI Project Lifecycle' dimension ............................... 55
Figure 19: Analytic process for 'Advices for Beginner Businesses' dimension ......... 56
List of Tables
Table 1: Interview Partner details ............................................................................. 52
Table 2: Themes for AI integration ........................................................................... 54
Table 3: Themes for Transformation Through AI ..................................................... 56
Table 4: Themes for AI Technologies Across Domains ........................................... 57
Table 5: Themes for AI Implementation Challenges ................................................ 58
Table 6: Themes for Overcoming challenges and innovation .................................. 59
Table 7: Themes for AI Project Lifecycle ................................................................. 60
Abbreviations
AGI: Artificial general intelligence
NADE: Neural Autoregressive Distribution Estimation
“At ARTI Robots, we recognize the growing trend towards automation across various
industries. Our AI-powered software solutions empower businesses to enhance
efficiency and productivity by implementing autonomous systems that can perform
tasks ranging from following a person to navigating complex environments from point
A to B. Whether it's optimizing operations in logistics or streamlining processes in
agriculture, our technology offers innovative solutions to meet the evolving needs of
modern businesses. With ARTI Robots, companies can leverage the power of
artificial intelligence to create intelligent behaviors in their robotic systems,
revolutionizing how tasks are performed in diverse environments.”
“When we started with robotics, Artificial Intelligence was not that common and not
that easy to implement. There was a lack of processes and a lack of knowledge, so
we started with traditional approaches, which were enough for the features or
abilities we were trying to achieve in our products. After we were introduced to AI
technologies like machine learning and deep learning, we thought of ways we could
implement these algorithms and make the products smarter. Initially we used AI to
build a foundation and started using it for basic things without relying on it much.
We also cannot use Artificial Intelligence just for the sake of Artificial Intelligence; it
should add some value to your product and solve some problems.”
“In the early stages, our focus was on fundamental tasks such as object
classification and segmentation, as well as noise filtering to ensure the accuracy of
our data. We recognized the need for artificial intelligence in diagnosing anomalies
or irregularities within our systems, enabling us to detect and address issues
efficiently.
Q How did you select a specific Artificial Intelligence technology for a given
problem?
understand procedures automatically and be readily usable. We're exploring options
for tools that customers can automatically train. While many Artificial Intelligence-
based tools exist, they may not always be the best choice for the robotics industry;
traditional algorithms may offer better solutions for certain problems. However, the
challenge lies in the effort required to design such algorithms.”
Q What were the challenges when entering the Artificial Intelligence field or
starting to use Artificial Intelligence?
“In the field of robotics, preserving data is essential, even if specialized tools
were not readily available initially. Data can be sourced from a multitude of channels,
including sensors, databases, APIs, user interactions, and external repositories.
Throughout our data collection process, we place a strong emphasis on ensuring
quality, relevance, and diversity to accurately reflect real-world scenarios.
Furthermore, raw data often undergoes rigorous cleaning and preprocessing
procedures to rectify issues such as missing values and noise, enhancing its utility
for subsequent analysis. To manage the substantial volumes of data involved, we
leverage both database systems and cloud-based storage solutions, facilitating
seamless accessibility for retrieval and analysis whenever needed."
“The metrics for an Artificial Intelligence project vary depending on its nature. When
adopting traditional methods, achieving a solid starting metric is crucial to gaining
customer acceptance. Subsequently, improving upon this metric is essential to
enhance product appeal. This can be achieved by leveraging additional data,
analyzing error reports, establishing relationships, and incorporating feedback from
support personnel involved in training. Including these diverse perspectives as
metrics ensures a comprehensive evaluation of project success.”
Q How do you process the training of Artificial Intelligence models within your
organization?
Q Would you offer any recommendations to other companies looking to
implement Artificial Intelligence?
By teaming up with experienced partners, you can make the process of dealing with
AI a lot easier. This way, you can avoid wasting time and money on trial and error,
and instead, focus on making progress towards your goals.”
“We initially launched our company focusing on text generation, text analysis, and
data analysis, alongside capabilities in image recognition. One of our primary
applications involves analyzing text in a business context, such as emails, tickets,
and documents.
One significant use case involves handling a large volume of IT service tickets,
sometimes reaching up to 300,000. We analyze the text content to facilitate the
dispatching process, categorize the tickets, and understand the nature of the issue.
This allows us to route each ticket to the appropriate person or department. In many
cases, we can fully automate this process using Artificial Intelligence, enabling us to
handle around 60% of tasks automatically. This automation significantly speeds up
response times.
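The ticket-routing workflow described in this quote can be illustrated with a minimal text-classification sketch. The departments, training tickets, and confidence threshold below are invented for illustration and are not the company's actual categories or pipeline; a production system would train a far more capable model on thousands of labeled tickets.

```python
import math
from collections import Counter, defaultdict

# Hypothetical labeled tickets: (text, department) pairs.
TRAIN = [
    ("cannot login to my email account", "IT Support"),
    ("password reset request for portal", "IT Support"),
    ("laptop screen is broken", "Hardware"),
    ("printer not working in office", "Hardware"),
    ("invoice amount is wrong", "Finance"),
    ("question about my last payslip", "Finance"),
]

def train_naive_bayes(samples):
    """Count word frequencies per class (multinomial naive Bayes)."""
    word_counts = defaultdict(Counter)  # class -> word frequencies
    class_counts = Counter()            # class -> number of tickets
    vocab = set()
    for text, label in samples:
        words = text.lower().split()
        word_counts[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab

def route_ticket(text, model, threshold=0.6):
    """Return (department, confidence); below threshold, escalate to a human.

    This mirrors the idea of automating only the confidently classified
    share of tickets and leaving the rest to manual dispatching.
    """
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    scores = {}
    for label in class_counts:
        # log prior + log likelihoods with Laplace smoothing
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)
        scores[label] = score
    # convert log scores to normalized probabilities
    m = max(scores.values())
    exp = {k: math.exp(v - m) for k, v in scores.items()}
    z = sum(exp.values())
    best = max(exp, key=exp.get)
    conf = exp[best] / z
    return (best, conf) if conf >= threshold else ("HUMAN_REVIEW", conf)

model = train_naive_bayes(TRAIN)
# A recognizable ticket routes automatically; unfamiliar text is escalated.
print(route_ticket("forgot my password cannot login", model))
print(route_ticket("completely unrelated gibberish text", model))
```

The confidence threshold is the knob that determines what fraction of tickets is handled automatically versus escalated, analogous to the roughly 60% automation rate mentioned above.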
quality in the production industry. For example, with one of our automotive clients,
we analyze the production of gearboxes across 25 assembly stations. Data from
each station is continuously recorded and analyzed by our AI software. This analysis
predicts whether a gearbox is likely to fail or meet quality requirements before it
reaches the final assembly station. We also identify any instances where a defective
part has been assembled at any station, allowing problems to be addressed
proactively and preventing potential issues with the gearbox”.
Counter question: What changes did it bring about? What was the impact of
implementing AI in all these processes?
“Using traditional methods, without AI algorithms, these processes would be very
labor-intensive and time-consuming. For example, we have implemented AI-powered
chatbots in the finance sector to automate query handling; without them, additional
manpower would be required to attend to queries, and manual data extraction and
reporting are also prone to errors and inefficiencies. Without AI, all these processes
would result in slower, less efficient operations, decreased customer satisfaction,
and increased operational costs.”
Q So, for this gearbox quality-control application, what kind of technology do
you use? Is it image recognition or something else?
“At the end of our production line, we conduct an audio test on the gearbox. During
this test, the gearbox is run, and the sounds it produces are analyzed to detect any
potential problems. It's important to note that audio files are a form of data, and we
leverage this data by applying time series analysis and deep learning models to
determine whether the gearbox is functioning correctly or if there are any issues”.
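The audio-based check described above can be sketched in a few lines: extract a frequency-domain feature from a recording and flag units whose high-frequency energy is abnormal. The signal frequencies, amplitudes, and threshold here are invented for illustration; the interviewee's actual system uses time series analysis and trained deep learning models rather than a fixed rule.

```python
import math

SAMPLE_RATE = 8000  # Hz, hypothetical

def synth_recording(fault=False, seconds=0.25):
    """Generate a synthetic 'gearbox' signal: a 200 Hz gear-mesh tone,
    plus a 2 kHz rattle component when a fault is simulated."""
    n = int(SAMPLE_RATE * seconds)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        s = math.sin(2 * math.pi * 200 * t)
        if fault:
            s += 0.8 * math.sin(2 * math.pi * 2000 * t)
        samples.append(s)
    return samples

def band_energy(samples, freq):
    """Energy at one frequency via a naive single-bin DFT."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * freq * i / SAMPLE_RATE)
             for i, s in enumerate(samples))
    im = sum(-s * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
             for i, s in enumerate(samples))
    return (re * re + im * im) / n

def gearbox_ok(samples, rattle_freq=2000, threshold=10.0):
    """Pass a unit only if energy at the rattle frequency stays low."""
    return band_energy(samples, rattle_freq) < threshold

print(gearbox_ok(synth_recording(fault=False)))  # True  (healthy unit)
print(gearbox_ok(synth_recording(fault=True)))   # False (faulty unit)
```

In practice such hand-picked spectral features would be replaced by learned representations, but the underlying idea is the same: the audio file is just time-series data whose patterns separate healthy from faulty gearboxes.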
Q What recurring challenges or common problems do your clients face? Have
you developed a common solution?
with different challenges we develop models according to the need.”
Q What challenges and business goals did some of your clients have? If you
can give some examples.
“One of our notable clients is McDonald's, for whom we've developed a recruitment
tool. With around 60,000 job applications annually, they needed a solution to
streamline their hiring process. Our Artificial Intelligence model extracts data from
their database and converts it into PDF format, considering 7 to 9 key parameters
crucial for decision-making. For Raiffeisen Bank, we've created a digital assistant to
assist customers with various services, such as reporting lost debit/credit cards or
scheduling appointments. We've implemented over 60 different processes within this
digital assistant, making it a process-driven chatbot. A critical focus for all our clients
is data security. We prioritize this aspect extensively, with more than 55 parameters
dedicated to ensuring the utmost security of their data. Specifically in banking, HR,
and pharmaceutical sectors, we refrain from using cloud storage for data. Instead,
data is stored on-premises to uphold stringent security measures and compliance
standards.”
“We conduct text analysis specifically aimed at detecting breast cancer indicators.
With a database comprising over 10,000 reports from individuals diagnosed with
breast cancer, our analysis delves deep into identifying key patterns and markers
indicative of the disease. In addition to text analysis, we also perform microbiome
analysis, generating a vast array of data points for each patient—often exceeding
300,000. Leveraging this data, we provide tailored nutrition recommendations to
optimize patient well-being. Moreover, we prioritize data security by anonymizing
patient data and running our applications on-premises. This ensures stringent control
over data access and confidentiality, safeguarding sensitive health information
against potential breaches or unauthorized access.”
Q What are the prerequisites and how do you decide whether a company is
ready to adopt Artificial Intelligence into their processes?
the size of the company. We find that companies with a workforce ranging from 50 to
100 employees tend to exhibit higher levels of digitalization maturity. Additionally,
understanding the specific objectives of companies within industries such as
pharmaceuticals is crucial. Identifying their intended use of Artificial Intelligence,
particularly in predictive analytics, allows us to tailor our solutions effectively. This
approach was utilized in our collaboration with McDonald's. Initially, we focused on
digitizing their operations, ensuring they reached an optimal level of digital maturity.
Subsequently, we seamlessly integrated Artificial Intelligence solutions to further
enhance their processes, leveraging predictive capabilities to meet their specific
needs.”
Q What measures are taken to ensure the responsible and fair use of the
Artificial Intelligence solutions by clients?
“At the start of our journey, we prioritized the establishment of our ethics guide, a
practice embraced by all team members. This guide serves as a compass, guiding
our actions and decisions. One key aspect of our ethical framework involves
maintaining a blacklist of companies involved in the production of weapons or
engaged in illicit activities, such as smuggling.”
Q Are there any specific industries where you see potential for growth in
Artificial Intelligence in the near future?
“What we've consistently pursued is the development of our own models, including
venturing into creating our own large language models, even though this is a
challenging and time-consuming endeavor. Over the past two years, we've recognized the
value of partnerships, forging collaborations with industry giants like Microsoft and
Google. These partnerships are instrumental in keeping us abreast of the latest
technologies, customer needs, and the evolving landscape of companies.
Given the varying stages of digitalization across industries, it's important for us to
remain versatile in our solutions. While some companies are just starting their
journey into digitalization and may be distant from adopting Artificial Intelligence,
others are more advanced and eager for innovative solutions.
“When our models achieve an accuracy rate of 80%, we consider the problem
effectively solved. To validate this, we conduct comparisons with tasks performed by
humans. For instance, we might analyze cost calculations per day or per task,
evaluating both human-performed and AI-performed tasks. By benchmarking our AI-
driven processes against human performance, we gauge the efficacy of our
solutions. If our AI systems consistently outperform humans or achieve comparable
results with greater efficiency, we consider it a success. This validation process
ensures that our AI solutions not only meet but often exceed human capabilities in
accuracy and efficiency, driving tangible benefits for our clients and partners.”
“I would say start small. I am not very happy with companies having big Artificial
Intelligence strategies drawn up by Artificial Intelligence consultants; you can invest
$20,000 in consultants, but it is a waste of money. Take a use case, define goals,
learn together with a partner, and grow step by step with Artificial Intelligence.”
“I have a background in the sales and consulting area, where I was first introduced
to Artificial Intelligence technology and its vast possibilities. Looking at ChatGPT,
my friends and I had the idea of integrating generative Artificial Intelligence
technologies into industries and helping them learn more about Artificial Intelligence
and its advantages. My co-founder and I wanted to take full advantage of this new
technology and make Austria Artificial Intelligence-ready. ‘AIssistance’ helps with
change management and strategic management for companies integrating Artificial
Intelligence.”
“Generative Artificial Intelligence stands apart from traditional AI models like machine
learning and deep learning. While these models excel at tasks such as classification
and prediction, Generative AI is closely associated with large language models. It
functions similarly to a sophisticated virtual mind, capable of learning patterns
and producing new content autonomously. This content can span various forms,
including art, music, articles, or text. In industrial settings, Generative AI serves
diverse purposes. It can function as a personalized assistant, interpreting business
data, crafting training scenarios, and assisting clients with inquiries. For instance, in
marketing, it aids in generating tailored content to engage audiences effectively. With
Generative AI, I can effortlessly create texts, videos, images, presentations, and
more, making it accessible and user-friendly, even for those without a technical
background”.
“Our customers come from the production, media, and finance industries. We are
not centred on any specific industry or department; within a company we focus on
sales, marketing, administration, or finance, depending on the company. We have
also helped industries working in production. The Generative Artificial Intelligence
tools we have implemented have helped businesses increase productivity. For
example, in marketing departments our tools help marketers create personalized
ads based on customers’ preferences and demographics. Our tools also help with
image and video generation for different channels. In the production and
manufacturing industries, Generative Artificial Intelligence has been found useful in
product design, taking into account factors such as material requirements and
manufacturing constraints. Generative Artificial Intelligence models have also
affected the media industries; these models help in creating digital artwork,
exploring new aesthetic possibilities, and pushing the boundaries of creativity.”
“Thank you for your email, our customer support will contact you”; with Artificial
Intelligence we can personalize such emails from the customer’s perspective. This is
a very small example; it can also be used when developing a new product design,
where you can use it for image generation. In new product design you can enhance
pictures and images and add new things without being an expert in Photoshop, so
the goal is that you save plenty of time. Generative Artificial Intelligence contributes
to business processes in many ways, increasing efficiency and productivity mainly
through personalization and customization, and it helps companies innovate faster.
Generative Artificial Intelligence has enabled businesses to disrupt traditional
processes, innovate faster, and gain a competitive advantage over their competitors.”
“We are currently entering the healthcare industry, but here it is more about change
management; a lot of people are afraid of Artificial Intelligence, and there is a fear of
replacement, so there is a change-management aspect. Artificial Intelligence in
healthcare provides huge benefits when it comes to pattern recognition, for example
identifying cancers and other diseases. There is a study which states that when I as
a patient come to the doctor and tell the doctor all my symptoms, Artificial
Intelligence can recognize the disease based on these symptoms. So, in my opinion,
when it comes to healthcare, Artificial Intelligence will play a major role; here it is
more about deep learning, machine learning, and so on. Generative Artificial
Intelligence in healthcare can be used for personalized healthcare, where GAI can
synthesize patient-specific data, allowing for the creation of personalized healthcare
models. It can also be used for natural language processing, such as text generation
for medical records. In marketing, personalization and targeted content creation are
the main features provided by Generative Artificial Intelligence, which can be very
creative in this process. In the field of finance, portfolio optimization can be done by
generating synthetic financial scenarios and simulating portfolio performance under
different market conditions, so that more informed decisions can be made to achieve
financial objectives.”
Q How do businesses measure the return on investment (ROI) when
integrating generative Artificial Intelligence into their operations or services?
What are the common challenges faced by businesses when adopting
generative Artificial Intelligence, and how can these challenges be mitigated?
“It can be challenging to measure the ROI when integrating Artificial Intelligence
technologies, as it is difficult to quantify the impact of AI-driven technologies on
various aspects of a business. We evaluate the success of implementing our tools in
any business process based on the time and cost saved. There are a number of
ways to evaluate, depending on the objective to be achieved: for laborious tasks that
needed automation, the time saved is measured, as well as cost, since human labor
would take much more time, cost, and effort; in the sales department we can
calculate the leads generated and the increase in productivity of salespeople in
targeting customers. Where the industry is involved in product development,
Generative Artificial Intelligence tools reduce the time to market and give a
competitive advantage.”
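The time-and-cost-based evaluation described in this answer can be expressed as a simple calculation. All figures below (ticket volume, handling times, wage, tool cost) are invented for illustration and are not the interviewee's numbers.

```python
def automation_roi(tasks_per_month, minutes_per_task_human,
                   minutes_per_task_ai, hourly_wage, monthly_ai_cost):
    """Estimate monthly savings and ROI of automating a repetitive task.

    ROI = (labor cost saved - AI running cost) / AI running cost.
    """
    hours_saved = tasks_per_month * (minutes_per_task_human - minutes_per_task_ai) / 60
    labor_saved = hours_saved * hourly_wage
    net_benefit = labor_saved - monthly_ai_cost
    roi = net_benefit / monthly_ai_cost
    return {"hours_saved": hours_saved, "labor_saved": labor_saved, "roi": roi}

# Hypothetical example: 5,000 tickets a month, 6 minutes each by hand,
# 1 minute with AI assistance, a 30 EUR hourly wage, 4,000 EUR monthly tool cost.
result = automation_roi(5000, 6, 1, 30, 4000)
print(result)  # ~417 hours saved, 12,500 EUR labor saved, ROI ~2.1
```

Metrics like leads generated or time-to-market reduction, also mentioned in the answer, would enter the same calculation as additional benefit terms rather than labor savings alone.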
repetitive tasks that often go overlooked. By freeing up time from such tasks,
employees can focus more on creative endeavors, thereby increasing motivation and
job satisfaction. This approach encourages employees to embrace generative
Artificial Intelligence and leverage its capabilities to enhance their work processes”.
Q What future trends do you foresee regarding the use of generative Artificial
Intelligence in businesses? Are there any emerging applications or
advancements in generative Artificial Intelligence technology that could
potentially disrupt various industries?
“Currently there are more than 10,000 tools covering different use cases; in my
opinion, 90% of these tools will become extinct in the future, as the big players will
cover most of them in a single platform. In my opinion, Generative Artificial
Intelligence will be used for creative processes, administrative work, generalist and
routine work, image enhancement, and text generation; video will also be influenced
in the future. GAI will grow in the direction of a digital assistant, helping and guiding
you through the whole process. It will also help in digital marketing, increasing sales
leads, and product development. Generative Artificial Intelligence will facilitate
greater collaboration between humans and machines, enabling humans to enhance
their creativity and productivity.”
tool”.
“First of all, when it comes to Generative Artificial Intelligence (GAI), it's not a huge
implementation project, nor is it a big-budget project. Companies can start quickly;
there's no need to be afraid. Fear may result in losing a competitive advantage. For
more complex Artificial Intelligence projects, always prioritize data quality and
consider partnering with experts. Use Artificial Intelligence ethically and responsibly.
Additionally, ensure thorough testing and validation of GAI models before
deployment. Start with small, manageable projects to gain insights and build
confidence in the technology”.
“Essentially, it began with a project where my team and I failed miserably, which
motivated us to create a product out of that failure. While at Microsoft, we were
tasked with scanning CVs for a large construction company in Austria to screen
applications and match applicants with the right job. Traditional algorithms failed us.
However, after gaining early access to Generative Artificial Intelligence tools from
OpenAI, I experimented by inputting a job description and a CV into ChatGPT,
asking it to match both and reason if the CV was from a good applicant. The results
of the experiment blew me away. The potential of Generative Artificial Intelligence
fascinated me, particularly in revolutionizing the hiring process. This experience was
the catalyst for starting my own company”.
“In the products we've created using Generative Artificial Intelligence, specifically
OpenAI’s ChatGPT, we focus on extracting information from documents.
Additionally, there's some ML involved. Our work often revolves around shaping and
preparing data for use by systems. For instance, we make efforts to make
documents, including PDFs, machine-readable, extract text, and utilize this
information. I've also implemented a system for an NGO in Austria that describes
pictures for the visually impaired. This system transcribes text from images to explain
what's present, aiding accessibility. Furthermore, I've automated the booking
process for my tax advisor by extracting text from receipts and automating the bookkeeping process”.
“It all begins with the business problem. I assess the use case, then work backward.
Let's assume there's a text extraction issue from documentation. I utilize cloud-based
tools like AWS or Microsoft Azure to set up the necessary infrastructure. I link the
business problem to Lego bricks, combining different tools to solve it. I create a
technical architecture based on the business case, then experiment and create the
first prototype. Prototypes are typically done with low-code or no-code tools. Finally, I
evaluate if the use case is feasible with the created architecture and implement the
necessary tools accordingly”.
Q5 What are the challenges you face during implementation of the new
Artificial Intelligence technology into specific use cases?
“One challenge with the technology of Generative Artificial Intelligence is that it’s evolving rapidly; actually, every week there is something new. You have to balance the use case of the client and the problem you have to solve, and then try out different frameworks. The problem here is that, since this is such a quickly moving development, it’s a challenge to keep up with the ever-evolving technology and to operationalize it. On the client side it’s mostly resistance to change. Teaching them and taking away the fear of Artificial Intelligence is a challenge”.
Q What particular positives and benefits did you observe after implementing the Artificial Intelligence technology as a solution?
malwares/threats before they reach models, protecting against threats (Daniele
Molteni, 2024).
AI algorithms can predict future load or network traffic patterns, distribute traffic across different servers, and allocate resources accordingly, based on real-time conditions. This helps internet providers handle sudden spikes in demand without service disruption.
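The load-balancing idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not any provider's actual system: real deployments use trained time-series models rather than a simple moving average, and the traffic figures and server capacities below are invented.

```python
# Hypothetical sketch of predictive load balancing: forecast the next
# traffic level from recent history, then split it across servers in
# proportion to their (assumed) capacities.

def forecast_load(history, window=3):
    """Predict the next load as the mean of the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def distribute(load, capacities):
    """Allocate the predicted load to servers in proportion to capacity."""
    total = sum(capacities)
    return [load * c / total for c in capacities]

if __name__ == "__main__":
    history = [100, 120, 110, 130, 140]      # requests/sec, invented data
    predicted = forecast_load(history)
    shares = distribute(predicted, capacities=[2, 1, 1])
    print(predicted, shares)
```

A real system would re-forecast continuously and react to live conditions, which is the behaviour the interviewees attribute to their AI-based balancers.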
As also mentioned in the following interview, Fortinet’s FortiNDR leverages machine learning and deep neural networks to identify cyberattacks. It detects signs of sophisticated cyberattacks utilizing advanced analytics and ML; FortiNDR delivers pre-trained neural networks and ML-based on-premises traffic profiling to identify threats (David Finger, 2022).
“We are an internet service provider. We offer high-speed internet services catering to both smart factories and offices. We are a wholly owned subsidiary of Holdings Graz.
Our commitment to cutting edge technology and personalized service ensures that
businesses can access high quality products and services both nationally and
internationally”.
“We have already integrated AI technologies into our operations. For text-based
interactions, we leverage advanced AI chatbots like ChatGPT to enhance
communication efficiency and customer support. Moreover, in strengthening our
cybersecurity measures, we employ AI algorithms within our firewalls for threat
recognition. These AI-powered systems continuously learn and adapt to evolving
threats, enabling proactive measures to prevent potential attacks and safeguard our
clients' data and networks effectively”.
“I mean, in technology, AI, or more the wording of it, is years old; only now is it in the media. A small portion of systems were always intelligent. Today the term AI has become more common, often used to describe a wide range of technological advancements. Usually, AI encompasses systems that can learn, adapt, and make decisions autonomously, often through machine learning or deep learning algorithms”.
“As I said, our firewalls primarily rely on machine learning for critical functions such as threat protection, load balancing, and routing. Our focus remains on leveraging machine learning systems for their essential roles in ensuring the security and efficiency of the network. Machine learning algorithms enable our firewalls to dynamically adapt to emerging threats, identify and mitigate potential security risks in real time, and optimize network traffic for seamless performance”.
Q Can you give an overview how you select these technologies? Like Why
machine learning?
“Because when it comes to firewalls, the ability to adapt and learn is paramount in effectively safeguarding against cyber threats. Machine learning algorithms excel in precisely this aspect; they are highly efficient in learning patterns and identifying anomalies in network behavior”.
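The pattern-learning and anomaly-spotting behaviour the interviewee describes can be illustrated with a minimal statistical sketch, assuming a single traffic metric and a z-score rule. This is not how a commercial firewall actually works, and the traffic values are invented.

```python
import statistics

def fit(baseline):
    """Learn 'normal' behavior as the mean and stdev of a traffic metric."""
    return statistics.mean(baseline), statistics.stdev(baseline)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    return abs(value - mean) > threshold * stdev

# Requests/sec observed during normal operation (invented data);
# a sudden spike far outside the learned range is flagged.
mean, stdev = fit([100, 102, 98, 101, 99, 100])
print(is_anomalous(500, mean, stdev))
```

Production systems learn continuously over many correlated features, but the principle is the same: model normal behavior, then alert on deviations.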
Q As you said you are from sales. Have you used AI tools in sales?
“Yes, as a member of the sales team, I have utilized AI tools, particularly in the context of customer projects. (Comment: here I asked him whether he has used them for lead generation, and he answered:) In our case it’s primarily employed for personalization of solutions. We leverage AI algorithms to analyze customer behavior related to bandwidth usage, network traffic patterns, and data center operations. This allows us to gain insights into our customers’ preferences, identify potential pain points, and tailor our solutions accordingly. It also enables us to personalize our interactions with customers, providing them with targeted recommendations and solutions that align with their unique requirements.”
“I am not sure, but we must be using some. However, I can say data management infrastructure is essential for our operations. We might have databases, cloud storage solutions, and analytics platforms; we have vast amounts of data”.
Q Can you give an overview of how you manage the skill gap in the company?
“Mostly training and development, and cross-training where team members can learn from each other’s expertise. We also use new technologies like data analysis platforms and network monitoring tools, which allow our sales team to gain insights into customer networks, data centers, and security systems. By leveraging these tools effectively, we identify potential sales opportunities. Collaboration also plays an important role”.
“Collaboration with big players like Fortinet enhances our offerings, particularly in the areas of threat detection, network security, and AI-driven solutions. This has allowed us to deliver innovative solutions that address the evolving cybersecurity needs of our customers”.
“Let go of your tunnel vision and get a complete idea of AI. Get an idea of how your service and customers can be improved. Does your service or product even need AI? This is some amazing technology and can have great impact when used properly in the right place”.
“We are a group of two companies. The company Wirecube is the parent company, which deals with software engineering in different fields, and Shopreme is a startup focused on self-checkout for customers”.
Q Have you already implemented AI in your startup Shopreme? Do you also develop AI solutions for other companies?
“So, in our software development company we develop AI solutions for some of our clients. What we do most, really, is help them figure out where they need AI and come up with use cases; the first implementation is now starting in a publicly visible project. For Shopreme we are currently evaluating the technology. What we are using it for, for example, is to do things like generating personas for different types of shoppers from their basket contents, things that have soft input criteria where you might otherwise need a lot of expert knowledge. So we are using the technology for one-off analysis for now and will add some features in the future”.
“We have been doing Machine Learning since long before it was cool. In the past, what we did mostly was in our peak data projects, where we have industry customers that give us huge data sets. We help them detect patterns and unexpected situations by implementing our technology based on NADE (Neural Autoregressive Distribution Estimation), which allows you to put in unlabelled data and receive alerts about unlikely situations that occur among all the input processes. That is a very helpful step to then label those incidents and be able to work on them in the future”.
{Comment: NADE breaks the data down into smaller parts and looks at each part one at a time. It tries to predict each part based on the parts that came before it. For example, if it is looking at a sequence of words, it might try to predict the next word based on the words that came before it (Benigno Uria, 2016).}
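The autoregressive idea can be illustrated with a toy next-token model. This is not NADE itself, which uses a neural network over vector-valued data; it is only a hedged sketch of the same principle, predicting each element from its predecessor and flagging sequences the learned model finds unlikely. The event names are invented.

```python
import math
from collections import Counter, defaultdict

def train_autoregressive(sequences):
    """Count how often each token follows each other token."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def sequence_log_prob(seq, counts, smoothing=1e-6):
    """Sum log P(x_t | x_{t-1}); very negative means an unlikely sequence."""
    total = 0.0
    for prev, nxt in zip(seq, seq[1:]):
        c = counts[prev]
        p = (c[nxt] + smoothing) / (sum(c.values()) + smoothing)
        total += math.log(p)
    return total

# Unlabelled "normal" event sequences (invented); an unusual ordering
# scores far lower and can be surfaced as an alert for later labelling.
normal = [["login", "query", "logout"]] * 20
counts = train_autoregressive(normal)
```

Sequences whose log-probability falls below a chosen threshold would be raised as the “unlikely situations” mentioned in the quote.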
“In Shopreme and many other Wirecube projects it’s mostly Generative Artificial Intelligence, like GPT-4 for Shopreme, to do discovery in data to detect things like shopper personas; however, we are also exploring options like enriching product descriptions and other text generation”.
“The Shopreme features go in that direction. So, we use it at the moment to do one-off analytics reports for our customers, to detect things like: on Fridays it’s the fathers doing the shopping, and they purchase mostly beer instead of groceries. For other companies it’s mostly not Business Intelligence; it’s really about optimizations”.
“A huge challenge that we were facing is that our existing customers are well-known large companies based in Europe, and for example using the OpenAI model is unacceptable to them. So we have now been able to come up with a stack based on AWS where we can actually deploy some of the state-of-the-art models in a data centre restricted to Europe, and can now really offer a compelling private deployment”.
Q For Shopreme you collect your data from your clients or from activities like the basket contents, but how does data collection look for the Wirecube projects where you don’t have much access to the company’s data?
“For the projects that go in the direction of IoT and big data, where we have a big data intake and detect patterns, we do that for example in the Azure cloud, where we have a data intake pipeline and our own ML deployment. In generative AI projects we try to do things mostly by basing them on existing data, and we help our customers leverage what they have to get more value internally and for their customers”.
Q While implementing AI solutions did you face any privacy and security
concerns from the client side?
“Exactly, there were a lot of privacy and security concerns in Europe, which is why we had pressure to come up with a model, to get some of the state-of-the-art models working in a guaranteed private environment in Europe. Now everything is based on that, really, because privacy was a huge concern”.
“We do it on two levels. Sometimes you have a joint customer or a project that you do together, and then you collaborate by sharing the revenue and providing a service together to the end customer. The other thing is a lot more informal exchange, for example hosting meetups”.
Q Taking a specific example, how do you track the progress of a project? Are there specific KPIs, for instance in the case of Shopreme, by which you measure the success of the project?
“We use a Kanban process in most projects, which allows us to track progress and manage tasks efficiently. One thing you could see as a tool to measure success is that we group our tasks into releases, because we found this is the most important unit of work for us. Being able to ship releases is especially important in the app stores; you need to think a lot more about your releases than in other projects where there is continuous integration. Releases serve as significant milestones in our project timeline, marking the completion of a specific set of tasks. By organizing our work in this manner, we can assess the success of each release based on timeliness, scope, quality, and impact”.
“For example, for a client that is the market leader in electronic shelf labelling, meaning they produce E-ink displays that are put on store shelves, battery operated, that can last for years; the ink technology saves energy because it only requires energy for changing the content. We help them analyse the battery usage and the circumstances that lead to increased battery usage, for example poor reception in some corners of the store, which then leads to repeated transmissions. The impact that has been achieved is to actually be able to provide a 5-year battery life guarantee to their customers, which they were afraid to do before our analysis”.
Q For this did you use any in-house developed custom algorithm?
“We applied NADE (Neural Autoregressive Distribution Estimation), and we did this by implementing the algorithm ourselves based on papers that have been released in this field”.
“We have two groups of customers: one is industrial IoT, manufacturing, and sensors, and the other area is providing backend support for high-traffic websites in Austria”.
Q Are there any recommendations you would like to give to other Austrian
companies who are looking forward to implementing AI?
“I want to quote the first sentence in Google’s machine learning handbook for developers, which reads: ‘Are you absolutely sure you need ML or AI to solve your problem?’ Start by thinking about the problems and use cases you want to solve first, and as a second step think about whether they would profit from AI”.
“We are Herzensapp; we connect families to nurses or caregivers. We have two services within our application: you can sign in as a family member, create a patient profile, and then send a nursing request saying that you need a nurse or a caregiver; the nursing company will get the request and we assign you to the agency. Inside our application we have a chat service, so the patient can communicate with the agency and the nurses, and we have smart digitalization and documentation: the caregivers can document every detail inside our application and export it as a PDF. There is a matching system, a voice recognition system, and we have auto-translation”.
“It’s for everyone who wants a nursing service or needs a caregiver. We also enable nursing agencies to create a profile so that patients can find them. The agency should send us a request to create a page, and we send them an SMS with the link to create the page”.
“For now, in our translation service within our chat service: our application is a multi-language application, you can choose from different languages, and whatever message you send is auto-translated into the language preferred by your target. For this specific service we are using LLMs (Large Language Models), but we have plans to implement AI systems integrated with all of our services. For example, for our matching system we plan to use an LLM for the recommendation system: the agency must assign a patient to a specific caregiver based on some information, so the system can automatically recommend that this patient better matches the services provided by the ‘xyz’ caregiver according to the requirements. We can combine it with unsupervised learning”.
{Comment: LLMs are text generators that have been trained on massive amounts of data from books, websites, and other sources. Every word is mapped to a word vector that helps the model understand how words relate to each other. LLMs use special building blocks called transformers. The key innovation of the transformer is its self-attention mechanism, which allows the model to weigh the importance of different words in a sequence when processing each word. The transformer architecture consists of multiple layers of self-attention mechanisms and feed-forward neural networks. Each layer processes the input sequence and passes its output to the next layer. The self-attention mechanism allows the model to concentrate on the important and relevant parts, which helps in language translation, text generation, and sentiment analysis (Vaswani, 2017).
LLMs use a type of neural network architecture called a transformer, which is designed to process and generate sequential data (Ooi, 2023).}
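The self-attention mechanism described in the comment above can be sketched as single-head scaled dot-product attention in plain Python. This simplified version omits the learned query, key, and value projection matrices that real transformer layers use; the vectors below are invented.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(x - max(xs)) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(queries, keys, values):
    """Each output is a mix of all values, weighted by query-key
    similarity, scaled by sqrt(d) as in the transformer architecture."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # how much each position attends to the others
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

With queries, keys, and values all set to the same word vectors, each output row becomes a similarity-weighted blend of the whole sequence, which is the "weighing the importance of different words" behavior the comment describes.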
Q That means you create profiles from the database and match them with the caregivers?
“This is part of the future plan. We want to implement the matchmaking system. We can use a RAG system. RAG technology works like this: you create a vector database, embed your data as vectorized information inside the vector database, and then the user asks a question and gets a response. It can also be used as an API to get a response to a request. When you send a request to the system, the system automatically performs a similarity search; for example, it can find that this question is similar and close to this piece of information, collect it, and then that information goes together with your question to the LLM. The LLM then interprets the data and gives you an accurate response without hallucination”.
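The retrieval step the interviewee describes can be sketched as cosine-similarity search over a tiny in-memory "vector database". The documents and vectors below are invented, the embedding of text into vectors is assumed to happen elsewhere, and the final LLM call is represented only by assembling the prompt.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, database, top_k=1):
    """Similarity search: return the texts of the top-k closest documents."""
    ranked = sorted(database, key=lambda d: cosine(query_vec, d["vec"]),
                    reverse=True)
    return [d["text"] for d in ranked[:top_k]]

def build_prompt(question, context):
    """Combine the retrieved context with the question for the LLM."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The assembled prompt would then be sent to the LLM, which answers grounded in the retrieved text; this grounding is what reduces the hallucination risk the interviewee mentions.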
Q Why this type of specific technology? What led you to choose this particular
technology?
"The choice of technology depends on the specific needs and challenges of your
services. In addressing problems such as creating a matching system,
recommendation system, or translation services, various technologies can be
considered. For instance, ‘DeepL’ employs LLM (Large Language Model) for
translation tasks, while ChatGPT utilizes transcription and translation models.
For example, leveraging ChatGPT's transcription model, you can convert speech to
text and then translate it into your desired language, offering improved accuracy and
superior results. This approach proves to be highly efficient in achieving optimal
outcomes for your applications."
Q Currently what are the challenges you are facing during implementation?
services. Currently, our chat service operates on a matrix-based system with several
microservices. These services, particularly our chat infrastructure hosted on AWS,
pose significant challenges as we scale. As we anticipate serving large volumes of
users—potentially reaching millions—the scalability and robustness of our
infrastructure become paramount. To address this, we are implementing Kubernetes
and load balancers on our backend to effectively manage the increasing workload.
Q When you are creating a database of patients, do you see a problem with patients giving out their personal information? How does data collection work for you?
While Flutter offers a unified codebase and streamlined development process, there
are considerations to explore alternative frameworks such as React.js or Angular for
our web applications. These frameworks are renowned for their ability to deliver
superior UI/UX experiences. However, the decision to use Flutter initially stemmed
from its efficiency in sharing a common codebase across different platforms.
In the realm of digitalization, where rapid development and seamless maintenance
are crucial, leveraging a unified codebase offers significant advantages.
Transitioning to a different framework should be carefully weighed against the
benefits of maintaining a consistent codebase and the potential improvements in
UI/UX offered by alternative technologies."
"In our matching system, when a user sends a request, the agency automatically
receives it and then assigns the request to multiple caregivers. Subsequently,
notifications are sent to doctors to collect relevant information."
“In our system, tickets are generated after the request is passed from the agency to the caregivers; they can choose whether or not to serve a particular patient based on the patient’s background. The same works on the patient side: the ticket is sent to the patient or the family member, and they can choose whether to accept the caregiver or not”.
Q Is there any advice you would like to give other Austrian businesses looking to implement AI?
"The adoption of technology depends on the company's strategic plan and the nature
of their product or service. It's essential to assess whether the implementation of a
particular technology is truly necessary. While there's often a hype surrounding AI
and LLM, it's crucial to align these technologies with the company's goals and
purposes. Customer needs should be a primary consideration – determining whether
AI is genuinely beneficial or if classical algorithms might provide more effective
solutions. Blindly following trends can result in wasted time and resources if the
technology doesn't directly address the customer's needs or improve the business
outcomes".