AI Project Cycle Notes
Introduction
Let us assume that you have to make a greeting card for your mother as it is her birthday. You are very
excited about it and have thought of many ideas to execute the same. Let us look at some of the steps
which you might take to accomplish this task:
1. Look for some cool greeting card ideas from different sources. You might go online and
checkout some videos or you may ask someone who has knowledge about it.
2. After finalising the design, you would make a list of things that are required to make this card.
3. You will check if you have the material with you or not. If not, you could go and get all the
items required, ready for use.
4. Once you have everything with you, you would start making the card.
5. If you make a mistake in the card somewhere which cannot be rectified, you will discard it and
start remaking it.
6. Once the greeting card is made, you would gift it to your mother.
This is how we plan to execute the tasks around us. Consciously or subconsciously, our mind makes a plan for every task we have to accomplish, which is why things become clearer in our mind. Similarly, if we have to develop an AI project, the AI Project Cycle provides us with an appropriate framework which can lead us towards the goal. The AI Project Cycle mainly has 5 stages: Problem Scoping, Data Acquisition, Data Exploration, Modelling and Evaluation.
Starting with Problem Scoping, you set the goal for your AI project by stating the problem which you wish to solve
with it. Under problem scoping, we look at various parameters which affect the problem we wish to
solve so that the picture becomes clearer.
To proceed,
● You need to acquire data which will become the base of your project, as it will help you understand the parameters related to the problem.
● You go for data acquisition by collecting data from various reliable and authentic sources.
Since the data you collect would be in large quantities, you can try to give it a visual image of
different types of representations like graphs, databases, flow charts, maps, etc. This makes
it easier for you to interpret the patterns which your acquired data follows.
● After exploring the patterns, you can decide upon the type of model you would build to
achieve the goal. For this, you can research online and select various models which give a
suitable output.
● You can test the selected models and figure out which is the most efficient one.
● The most efficient model is now the base of your AI project and you can develop your
algorithm around it.
● Once the modelling is complete, you now need to test your model on some newly fetched
data. The results will help you in evaluating your model and improving it.
● Finally, after evaluation, the project cycle is now complete and what you get is your AI project.
Let us understand each stage of the AI Project Cycle in detail.
Problem Scoping
It is a fact that we are surrounded by problems. They could be small or big, sometimes ignored or
sometimes even critical. A lot of times we are unable to observe any problem in our surroundings. In
that case, we can take a look at the Sustainable Development Goals: 17 goals announced by the United Nations, which the world aims to achieve by the end of 2030.
As you can see, many goals correspond to the problems which we might observe around us too. One
should look for such problems and try to solve them as this would make many lives better and help
our country achieve these goals.
Scoping a problem is not easy, as we need a deeper understanding of it so that the picture becomes clearer while we are working to solve it. Hence, we use the 4Ws Problem Canvas to help us out. The 4Ws are Who, What, Where and Why.
Who?
The “Who” block helps in analysing the people getting affected directly or indirectly by the problem. Under this, we find out who the ‘Stakeholders’ of this problem are and what we know about them.
Stakeholders are the people who face this problem and would benefit from the solution.
What?
Under the “What” block, you need to look into what you have on hand. At this stage, you need to
determine the nature of the problem. What is the problem and how do you know that it is a problem?
Under this block, you also gather evidence to prove that the problem you have selected actually exists.
Newspaper articles, media reports, announcements, etc. are some examples.
Where?
Now that you know who is associated with the problem and what the problem actually is; you need
to focus on the context/situation/location of the problem. This block will help you look into the
situation in which the problem arises, the context of it, and the locations where it is prominent.
Why?
You have finally listed down all the major elements that affect the problem directly. Now it is easier to understand who would benefit from the solution, what is to be solved, and where the solution will be deployed. These three canvases now become the base of why you want to solve this problem. Thus, in the “Why” canvas, think about the benefits which the stakeholders would get from the solution and how it will benefit them as well as society.
After filling the 4Ws Problem Canvas, you now need to summarise all the cards into one template. The Problem Statement Template helps us summarise all the key points into one single template so that in future, whenever there is a need to look back at the basis of the problem, we can take a look at the Problem Statement Template and understand its key elements:
Our [stakeholder(s)] (Who)
has/have a problem that [issue, problem, need] (What)
when/while [context, situation] (Where)
An ideal solution would [benefit of solution for them] (Why)
Data Acquisition
As we move ahead in the AI Project Cycle, we come across the second stage: Data Acquisition. As the term clearly mentions, this stage is about acquiring data for the project.
Data can be a piece of information or facts and statistics collected together for reference or analysis.
Whenever we want an AI project to be able to predict an output, we need to train it first using data.
For example, if you want to make an Artificially Intelligent system which can predict the salary of any
employee based on his previous salaries, you would feed the data of his previous salaries into the
machine. This is the data with which the machine can be trained. Now, once it is ready, it will predict
his next salary efficiently. The previous salary data here is known as Training Data while the next salary
prediction data set is known as the Testing Data.
For better efficiency of an AI project, the Training data needs to be relevant and authentic. In the
previous example, if the training data was not of the previous salaries but of his expenses, the machine
would not have predicted his next salary correctly since the whole training went wrong. Similarly, if
the previous salary data was not authentic, that is, it was not correct, then too the prediction could
have gone wrong. Hence, the training data should always be relevant and authentic.
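To make the salary example concrete, here is a minimal sketch in Python. The salary figures and the straight-line (linear) model are illustrative assumptions, not part of the original example:

```python
# A minimal sketch of the salary-prediction example above.
# The salary figures are made up purely for illustration.
import numpy as np

# Training data: (year, salary) pairs from the employee's history
years = np.array([1, 2, 3, 4, 5])
salaries = np.array([30000, 33000, 36500, 40000, 44000])

# Fit a straight line (degree-1 polynomial) through the history
slope, intercept = np.polyfit(years, salaries, 1)

# Testing / prediction: estimate the salary for year 6
next_year = 6
predicted = slope * next_year + intercept
print(f"Predicted salary for year {next_year}: {predicted:.0f}")
```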
Data Features
Look at your problem statement once again and try to find the data features required to address this
issue. Data features refer to the type of data you want to collect.
After mentioning the data features, you get to know what sort of data is to be collected. Now the question arises: from where can we get this data? There can be various ways in which you can collect data, such as cameras, observations, and APIs (Application Programming Interfaces).
Sometimes, you use the internet and try to acquire data for your project from some random websites.
Such data might not be authentic as its accuracy cannot be proved. Due to this, it becomes necessary
to find a reliable source of data from where some authentic information can be taken.
Data Exploration
In the previous modules, you have set the goal of your project and have also found ways to acquire
data. While acquiring data, you must have noticed that the data is a complex entity – it is full of
numbers and if anyone wants to make some sense out of it, they have to work some patterns out of
it. For example, if you go to the library and pick up a random book, you first try to go through its
content quickly by turning pages and by reading the description before borrowing it for yourself,
because it helps you in understanding if the book is appropriate to your needs and interests or not.
Thus, to analyse the data, you need to visualise it in some user-friendly format so that you can:
● Quickly get a sense of the trends, relationships and patterns contained within the data.
● Define strategy for which model to use at a later stage.
● Communicate the same to others effectively.
To visualise data, we can use various types of visual representations.
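As a small illustration, here is a hedged sketch of turning raw numbers into a visual representation using the matplotlib library; the months and sales figures are made up for illustration:

```python
# A minimal sketch of giving data a visual form, with made-up numbers.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
sales = [120, 150, 90, 180]       # illustrative values only

plt.bar(months, sales)            # a bar graph makes the trend easy to read
plt.xlabel("Month")
plt.ylabel("Sales")
plt.title("Spotting a pattern at a glance")
plt.show()
```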
Modelling
In the previous module of Data exploration, we have seen various types of graphical representations
which can be used for representing different parameters of data. The graphical representation makes
the data understandable for humans as we can discover trends and patterns out of it. But when it comes to machines accessing and analysing data, they need the data in the most basic form of numbers (which is binary: 0s and 1s), and when it comes to discovering patterns and trends in the data, the machine goes for mathematical representations of the same. The ability to mathematically describe the
relationship between parameters is the heart of every AI model. Thus, whenever we talk about
developing AI models, it is the mathematical approach towards analysing data which we refer to.
AI models can be broadly classified into two approaches: Rule Based and Learning Based. Learning Based models are further divided into Machine Learning and Deep Learning.
Rule Based Approach
This refers to AI modelling where the rules are defined by the developer. The machine follows the rules or instructions mentioned by the developer and performs its task accordingly. For example, we
have a dataset which tells us about the conditions on the basis of which we can decide if an elephant
may be spotted or not while on safari. The parameters are: Outlook, Temperature, Humidity and Wind.
Now, let’s take various possibilities of these parameters and see in which case the elephant may be
spotted and in which case it may not. After looking through all the cases, we feed this data into the
machine along with the rules which tell the machine all the possibilities. The machine trains on this
data and is now ready to be tested. While testing the machine, we tell it that:
Outlook = Overcast; Temperature = Normal; Humidity = Normal; Wind = Weak.
On the basis of this testing dataset, the machine will now be able to tell us whether the elephant may be spotted or not, and will display the prediction. This is known as a rule-based approach because we fed the data along with the rules to the machine.
A drawback of this approach is that the learning is static. Once trained, the machine does not take into consideration any changes made in the original training dataset.
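To make the idea concrete, here is a minimal rule-based sketch in Python. The specific rules below are illustrative assumptions, not real safari data; the point is that the developer writes every rule and the machine only follows them:

```python
# A minimal rule-based sketch of the elephant-spotting example above.
# The rules themselves are illustrative assumptions made for this sketch.
def elephant_spotted(outlook: str, temperature: str, humidity: str, wind: str) -> bool:
    # Every rule is written by the developer; nothing is learned from data.
    if outlook == "Overcast":
        return True
    if outlook == "Sunny" and humidity == "Normal":
        return True
    if outlook == "Rain" and wind == "Weak" and temperature != "Hot":
        return True
    return False

# Testing the rules with the case from the text
print(elephant_spotted("Overcast", "Normal", "Normal", "Weak"))  # True
```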
Learning Based approaches are of three types: Supervised Learning, Unsupervised Learning and Reinforcement Learning.
Supervised Learning
In a supervised learning model, the dataset which is fed to the machine is labelled. In other words, we can say that the dataset is known to the person who is training the machine; only then is he/she able to label the data. A label is some information which can be used as a tag for data. For example, students get grades according to the marks they secure in examinations. These grades are labels which categorise the students according to their marks.
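As a small illustration of labelling, here is a sketch in Python using the grades example above; the marks and grade cut-offs are assumed values:

```python
# A minimal sketch of labelled data, using the grades example above.
# The marks and the grade cut-offs are illustrative assumptions.
marks = [95, 67, 42, 78, 88]

def grade_label(mark: int) -> str:
    # The "label" is the grade tag attached to each mark
    if mark >= 90:
        return "A"
    if mark >= 75:
        return "B"
    if mark >= 60:
        return "C"
    return "D"

# A labelled dataset: each data point (mark) paired with its label (grade)
labelled_dataset = [(m, grade_label(m)) for m in marks]
print(labelled_dataset)  # [(95, 'A'), (67, 'C'), ...]
```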
There are two types of Supervised Learning models: Classification and Regression.
Unsupervised Learning
An unsupervised learning model works on an unlabelled dataset. This means that the data which is fed
to the machine is random and there is a possibility that the person who is training the model does not
have any information regarding it. The unsupervised learning models are used to identify
relationships, patterns and trends out of the data which is fed into it. It helps the user in understanding
what the data is about and what are the major features identified by the machine in it.
For example, if you have random data of 1000 dog images and you wish to understand some pattern
out of it, you would feed this data into the unsupervised learning model and would train the machine
on it. After training, the machine would come up with patterns which it was able to identify out of it.
The Machine might come up with patterns which are already known to the user like colour or it might
even come up with something very unusual like the size of the dogs.
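As an illustration, here is a hedged sketch of unsupervised learning using the KMeans clustering model from the scikit-learn library. The 2-D points stand in for features extracted from images (such as colour or size) and are made up for illustration:

```python
# A minimal sketch of unsupervised learning on unlabelled data.
# The 2-D points are made up; real dog images would first need
# feature extraction to get numbers like these.
from sklearn.cluster import KMeans
import numpy as np

# Unlabelled data: no tags, just raw feature values
points = np.array([[1.0, 1.2], [0.9, 1.0], [1.1, 0.8],
                   [5.0, 5.2], [5.1, 4.9], [4.8, 5.0]])

# Ask the model to find 2 groups on its own
model = KMeans(n_clusters=2, n_init=10).fit(points)
print(model.labels_)  # e.g. [0 0 0 1 1 1] -- patterns found without labels
```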
Unsupervised learning models can be further divided into two categories: Clustering and Dimensionality Reduction.
Dimensionality Reduction: We humans are able to visualise only up to 3 dimensions, but according to a lot of theories and algorithms, there are various entities which exist beyond 3 dimensions. For example, in Natural Language Processing, words are considered to be N-dimensional entities, which means that we cannot visualise them as they exist beyond our visualisation ability. Hence, to make sense of them, we need to reduce their dimensions. This is where a dimensionality reduction algorithm is used.
As we reduce the dimension of an entity, the information which it contains starts getting distorted.
For example, if we have a ball in our hand, it is 3-Dimensional. But if we click its picture, the data transforms to 2-D, as an image is a 2-Dimensional entity. Now, as soon as we reduce one dimension, at least 50% of the information is lost, since we will no longer know about the back of the ball. Was the ball the same colour at the back, or was it just a hemisphere? If we reduce the dimensions further, more and more information will get lost.
Hence, to reduce the dimensions and still be able to make sense out of the data, we use Dimensionality
Reduction.
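As an illustration, here is a sketch of dimensionality reduction using PCA from the scikit-learn library; the random 3-D points are stand-ins for real data:

```python
# A minimal sketch of dimensionality reduction using PCA.
# The 3-D points are made up; PCA here plays the role of the "camera"
# that flattens a 3-D ball into a 2-D picture, losing some information.
from sklearn.decomposition import PCA
import numpy as np

points_3d = np.random.rand(100, 3)  # 100 points in 3 dimensions

pca = PCA(n_components=2)           # reduce 3-D -> 2-D
points_2d = pca.fit_transform(points_3d)

print(points_2d.shape)                      # (100, 2)
print(pca.explained_variance_ratio_.sum())  # fraction of information kept
```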
Evaluation
Once a model has been made and trained, it needs to go through proper testing so that one can
calculate the efficiency and performance of the model. Hence, the model is tested with the help of
Testing Data (which was separated out of the acquired dataset at the Data Acquisition stage), and the efficiency of the model is calculated on the basis of evaluation parameters.
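As a small illustration, accuracy is one common evaluation parameter: the fraction of testing-data predictions the model got right. The values below are made up for illustration:

```python
# A minimal sketch of evaluating a model on testing data.
# The true values and predictions are made up for illustration.
true_values = [1, 0, 1, 1, 0, 1, 0, 0]
predictions = [1, 0, 1, 0, 0, 1, 1, 0]

# Accuracy: fraction of predictions that match the true values
correct = sum(t == p for t, p in zip(true_values, predictions))
accuracy = correct / len(true_values)
print(f"Accuracy: {accuracy:.0%}")  # 75%
```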
Neural Networks
Larger Neural Networks tend to perform better with larger amounts of data, whereas traditional machine learning algorithms stop improving after a certain saturation point.
A Neural Network is divided into multiple layers, and each layer is further divided into several blocks called nodes. Each node has its own task to accomplish, which is then passed to the next layer. The first layer of a Neural Network is known as the
input layer. The job of an input layer is to acquire data and feed it to the Neural Network. No
processing occurs at the input layer. Next to it are the hidden layers. Hidden layers are the layers in
which the whole processing occurs. Their name essentially means that these layers are hidden and are
not visible to the user.
Each node of these hidden layers has its own machine learning algorithm which it executes on the
data received from the input layer. The processed output is then fed to the subsequent hidden layer
of the network. There can be multiple hidden layers in a neural network system and their number
depends upon the complexity of the function for which the network has been configured. Also, the
number of nodes in each layer can vary accordingly. The last hidden layer passes the final processed
data to the output layer, which then gives it to the user as the final output. Like the input layer, the output layer too does not process the data which it acquires; it is meant for the user interface.
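To make the layer structure concrete, here is a minimal sketch of data flowing through an untrained network in Python with numpy; the layer sizes and random weights are illustrative assumptions:

```python
# A minimal sketch of data flowing through a neural network's layers.
# The weights are random and the network is untrained -- this only
# illustrates the input -> hidden -> output structure described above.
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(4)            # input layer: 4 values, no processing here

W1 = rng.random((4, 3))      # weights into a hidden layer of 3 nodes
h = np.maximum(0, x @ W1)    # each hidden node processes its inputs (ReLU)

W2 = rng.random((3, 1))      # weights into the output layer
y = h @ W2                   # output layer hands the result to the user

print(y)
```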