
AI PROJECT CYCLE

The process of developing AI machines has different stages that are collectively known as the AI Project Cycle.

A project cycle is the series of steps taken to complete a task from beginning to end. The AI Project Cycle provides us with a framework for planning, organizing, executing and implementing an AI project to achieve a target.
STAGES OF AI PROJECT CYCLE

PROBLEM SCOPING

DATA ACQUISITION

DATA EXPLORATION

MODELLING

EVALUATION
PROBLEM SCOPING

• This is the first and most crucial stage of AI project development, which focuses on identifying and understanding the problem using the 4Ws.
• It is an analytical approach that involves taking steps to understand the problem and setting the goals we want our project to achieve.

4Ws
WHO WHAT WHERE WHY
WHO

• The “Who” block helps in analyzing the people who are affected directly or indirectly by the problem.
• Under this, we find out who the ‘Stakeholders’ of the problem are and what we know about them.
• Stakeholders are the people who face the problem and would benefit from its solution.
WHAT

• What is the problem and how do you know that it is a problem?


• Under this block, you also gather evidence to prove that the problem
you have selected actually exists.
• Newspaper articles, media reports, announcements, etc. are some examples of such evidence.
WHERE

• This block will help you look into the situation in which the problem
arises, the context of it, and the locations where it is prominent.
WHY

• The “Why” block makes us think about the benefits which the stakeholders would get from the solution and how it will benefit them as well as society.
DATA ACQUISITION

• Data Acquisition is the second stage of the AI project cycle.


• Data acquisition is the process of collecting data required for training
the AI project.
• Data: Data is raw information that is used to generate meaningful outcomes.
DATA ACQUISITION

• Suppose we have to make an artificial intelligence system to predict the traffic flow for a particular geographical location based on previous traffic data. The previous years' data needs to be fed into the system so that the machine can be trained to use it to predict the traffic flow effectively. The previous traffic data used to train the machine is known as Training Data, while the data later used to check the reliability of its predictions is known as Testing Data.
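
To make the difference between training and testing data concrete, here is a minimal Python sketch (using scikit-learn) of splitting a set of records into a training portion and a testing portion. The traffic figures are synthetic and the 80/20 split is only an illustrative assumption, not something stated in the slides.

# A minimal sketch: splitting previous traffic records into training and
# testing sets. All numbers here are made up for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
hours = rng.integers(0, 24, size=500)                 # hour of day
days = rng.integers(0, 7, size=500)                   # day of week
X = np.column_stack([hours, days])                    # data features
y = 200 + 30 * hours + rng.normal(0, 50, size=500)    # synthetic traffic counts

# 80% of the records train the model; the remaining 20% are kept aside
# as testing data to check the model's predictions later.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)                    # (400, 2) (100, 2)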
DATA ACQUISITION

DATA FEATURES

The type of data to be collected is called data features.

Data can be collected through:


Surveys: Customers’ feedback and reviews.
Web Scraping: Data extracted from various web pages (a small scraping sketch follows this list).
Cameras: Live data from surveillance cameras, web cameras, etc.
Observations: Reading and analyzing trends.
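
As noted above, here is a small hedged sketch of the web scraping route using the requests and BeautifulSoup libraries. The URL is a placeholder (example.com), so the extracted headings are not meaningful; the point is only the mechanism of pulling raw data out of a web page.

# A small sketch of web scraping: download a page and extract its headings.
# The URL below is a placeholder; a real project would use a relevant page,
# subject to that site's terms of use.
import requests
from bs4 import BeautifulSoup

url = "https://example.com"                      # placeholder page
response = requests.get(url, timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

# Collect the text of every heading on the page as raw data
headings = [h.get_text(strip=True) for h in soup.find_all(["h1", "h2", "h3"])]
print(headings)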
DATA ACQUISITION

• We have to make sure that the source of the data is authentic and relevant; only then can the AI project predict precisely and accurately. There are some authentic sources of information in the form of open-source websites hosted by the government. These portals have information collected in a format that can be easily downloaded. Some of these open-source Govt. portals are data.gov.in and india.gov.in.
DATA EXPLORATION

• Data Exploration is the process of arranging the collected data into a form which can be analyzed or from which we can derive useful information.
• It is considered to be the first step in data analysis, where unstructured data is explored.
• Once the data is collected, it is observed carefully to find patterns and to see trends or relationships. We need to visualize our data using a suitable method to understand it in a thorough manner.
VISUALIZING DATA

• Data Visualization is a technique for understanding and getting insights


from the data in a better way. Data visualization is a standard term
that we can use for any graphic that helps us in understanding or
getting new insights from the data.

The two most basic data visualization forms are:

GRAPHS and CHARTS
DIFFERENT WAYS TO VISUALISE DATA

• BULLET GRAPHS
• HISTOGRAMS
• SCATTERPLOT
• TREE DIAGRAM
• FLOW CHART
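
As a small illustration of two of these forms, the sketch below draws a histogram and a scatter plot with matplotlib. The "temperature" and "visitors" numbers are synthetic values invented for the example.

# A brief sketch of two basic visualisations: a histogram (distribution of
# one feature) and a scatter plot (relationship between two features).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
temperature = rng.normal(30, 5, size=200)              # synthetic daily temperature
visitors = 500 - 8 * temperature + rng.normal(0, 30, size=200)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(temperature, bins=20)
ax1.set_title("Histogram of temperature")
ax2.scatter(temperature, visitors, s=10)
ax2.set_title("Scatter plot: temperature vs visitors")
plt.tight_layout()
plt.show()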
MODELLING

• AI Modelling refers to developing algorithms, also called models, which can be trained to produce intelligent output.
A.I. MODELS

• Rule Based
• Learning Based – Machine Learning and Deep Learning
RULE BASED APPROACH

• This approach is based on a set of rules and facts defined by the developer and fed into the machine so that it performs its task accordingly and generates the desired output.
• In this approach the machine simply follows the rules or instructions mentioned by the developer and performs its task accordingly.
RULE BASED APPROACH

Example:
• Suppose you have a dataset of weather conditions on the basis of which you can predict whether a lion would be visible to tourists on a specific safari day. The parameters can be cloud cover, temperature, wind speed and humidity. These parameters are recorded and fed into the machine along with rules stating the favourable combinations in which the lion would be visible; all other combinations are taken to mean that the lion would not be visible.
• Now, to test the model, the machine is given a new scenario of cloud cover, temperature, wind speed and humidity. The model compares it with the rules fed in with the dataset and, if there is a match, tells whether the lion would be visible or not. This is called a rule-based approach.
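
A tiny Python sketch of this safari example is given below. The thresholds are invented for illustration; the essential point is that the developer writes every rule by hand and the machine only checks the input against them.

# Rule-based approach: the rules are fixed by the developer, not learned.
def lion_visible(cloud_cover, temperature, wind_speed, humidity):
    """Return True if the hand-written rules say the lion should be visible."""
    if cloud_cover < 40 and 20 <= temperature <= 35:
        if wind_speed < 25 and humidity < 70:
            return True
    return False

# Testing the model with new scenarios
print(lion_visible(cloud_cover=30, temperature=28, wind_speed=10, humidity=55))  # True
print(lion_visible(cloud_cover=80, temperature=28, wind_speed=10, humidity=55))  # False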
LEARNING BASED APPROACH

• This approach refers to models in which the relationships or patterns in the data are not defined by the developer. Random data is fed into the machine, and the machine develops its own patterns or trends based on the data.
LEARNING BASED APPROACH

• Example:
Suppose you have a dataset of 1000 images of flowers. You do not have any clue as to what trend is being followed in this dataset, as you don’t know their names, colours or any other features. Thus, you would feed this data into a learning-based AI machine, and the machine would come up with various patterns it has observed in the features of these 1000 images.

It might cluster the data on the basis of colour, size, shape, etc. It might also come up with some very unusual clusters which you might not even have thought of.
LEARNING BASED APPROACH

Learning based approaches are of three types:
• SUPERVISED LEARNING – Regression and Classification
• UNSUPERVISED LEARNING – Clustering and Dimensionality Reduction
• REINFORCEMENT LEARNING
SUPERVISED LEARNING

• Supervised learning is a type of learning in which we teach or train the machine using data which is well labelled. This means that some data is already tagged with the correct answer.
• Later, the machine is provided with a new set of data so that the supervised learning algorithm analyses the training data and produces a correct outcome from the labelled data.

[Diagram: images labelled “Duck” and “Not Duck” are used to train a predictive model through supervised learning; the trained predictive model is then given a new image and correctly predicts “Duck”.]

• Supervised learning is classified into two categories of algorithms:
CLASSIFICATION – A classification problem is when the output variable is a category, such as “Red” or “Blue”, or “disease” or “no disease”.

REGRESSION – A regression problem is when the output variable is a real value, such as “dollars” or “weight”. It works with continuous data.
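
The short sketch below shows one example of each category using scikit-learn. The datasets are synthetic; the only point is that labelled data goes in, and a classifier predicts a category while a regressor predicts a continuous value.

# Supervised learning: classification (category output) vs regression (real-valued output).
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(2)

# CLASSIFICATION: predict a category (0 = "no disease", 1 = "disease") from one feature
X_cls = rng.normal(size=(100, 1))
y_cls = (X_cls[:, 0] > 0).astype(int)          # labels already tagged with the correct answer
clf = LogisticRegression().fit(X_cls, y_cls)
print(clf.predict([[1.5]]))                    # -> a category, e.g. [1]

# REGRESSION: predict a real value (e.g. weight or dollars) from one feature
X_reg = rng.uniform(0, 10, size=(100, 1))
y_reg = 3.0 * X_reg[:, 0] + rng.normal(0, 1, size=100)
reg = LinearRegression().fit(X_reg, y_reg)
print(reg.predict([[4.0]]))                    # -> a continuous number, roughly 12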
UNSUPERVISED LEARNING

• Unsupervised learning is the training of a machine using information that is neither classified nor labelled. It allows the algorithm to act on that information without any guidance or supervision. Hence the machine has to group unsorted information according to similarities, patterns and differences without any previous training on the data.
• Here no teacher is provided, which means no training is given to the machine. Therefore the machine is left to find the hidden structure in the unlabelled data by itself.

Suppose a machine is given images of animals it has never seen before. The machine has no idea about the categories of these animals, so it won’t be able to name them. But it can try to categorize them according to their similarities, patterns and differences, i.e., it can easily group together the animals that belong to the same group.
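
A minimal sketch of this idea is shown below using the K-Means clustering algorithm from scikit-learn. No labels are given to the machine; the two groups of made-up "animal measurements" are separated purely on the basis of similarity.

# Unsupervised learning: K-Means groups unlabeled points by similarity.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
group_a = rng.normal(loc=[30, 5], scale=1.0, size=(50, 2))     # e.g. small animals
group_b = rng.normal(loc=[120, 60], scale=5.0, size=(50, 2))   # e.g. large animals
X = np.vstack([group_a, group_b])                              # no labels anywhere

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:5], kmeans.labels_[-5:])   # two clusters found without supervision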
REINFORCEMENT LEARNING

• Reinforcement learning is a type of learning-based approach in which a machine learning algorithm enables an agent (a machine with intelligent code) to learn, within an environment, the best possible behaviour.
• In this learning approach the agent learns automatically through trial and error, i.e. through its own experience.
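
A very small trial-and-error sketch is given below: an agent on a one-dimensional track of five cells learns, over repeated episodes, that moving right reaches the goal. The environment, rewards and Q-learning parameters are all invented for illustration and are not part of the original slides.

# Reinforcement learning sketch: tabular Q-learning on a tiny 1-D environment.
import numpy as np

n_states, n_actions = 5, 2              # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))     # the agent's table of experience
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

rng = np.random.default_rng(4)
for episode in range(200):
    state = 0
    while state != n_states - 1:                       # until the goal cell is reached
        # explore occasionally, otherwise exploit what has been learned so far
        action = int(rng.integers(2)) if rng.random() < epsilon else int(np.argmax(Q[state]))
        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == n_states - 1 else -0.01
        # update the table from the outcome of this trial
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q, axis=1))   # learned behaviour: best action per state (mostly 1 = right)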
EVALUATION

• After the model is designed and trained, the reliability of the model is checked using the Testing Data acquired at the Data Acquisition stage.
• This testing data is given as input to the newly created AI model, and the output received is checked and evaluated on the basis of the following measures (a small sketch of computing them follows this list):
- Accuracy
- Precision
- Recall
- F1 Score
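
As referenced above, here is a small sketch of how these four measures are computed from counts of true/false positives and negatives. The actual and predicted labels are invented for illustration.

# Evaluation sketch: accuracy, precision, recall and F1 score from a confusion matrix.
actual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
predicted = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))   # true positives
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))   # true negatives
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))   # false positives
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))   # false negatives

accuracy  = (tp + tn) / len(actual)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1_score  = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1_score)                     # 0.8 0.8 0.8 0.8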
NEURAL NETWORKS

• A neural network is a machine learning algorithm based on the model of a human


neuron. Neural networks take in data, train themselves to recognize the patterns in
the data and then predict the outputs.
• The most important aspect of neural networks is that once trained, they learn on
their own just like human brains.
WHY DO WE USE NEURAL NETWORKS?

• Neural networks are a series of algorithms used to recognize hidden patterns in raw data, cluster and classify it, and continuously learn and improve. They are used in a variety of applications such as stock markets, sales and marketing trends, and fraud detection. Neural networks are primarily used for solving problems with large datasets, like images.
The main reasons to use neural networks are:
- They can extract data features automatically, without input from the developer.
- They are a fast and efficient way to solve problems with large datasets, such as images.
- Larger neural networks tend to perform better with larger amounts of data.
WORKING OF NEURAL NETWORKS

• Neural networks are made up of layers of neurons, just like the human brain, which consists of billions of neurons. These neurons are the core processing units of the network.
• A neural network is divided into multiple layers, and each layer is further divided into several blocks called nodes. Each node is responsible for performing its task and passing the result on to the next layer.
• First, we have the input layer which receives the input provided by the programmer and feeds it to
the neural network. No processing occurs in the input layer. The output layer predicts the output.
• The layers present in between input and output layers are called the hidden layers which perform
most of the computations required by our network. These layers are not visible to the user. Each
node of the hidden layer has its own machine learning algorithm which executes on the data
received by the input layer. The processed data is then fed to the subsequent hidden layer. The
processed data by the hidden layers is passed onto the output layer which then gives the final
output to the user. No processing is done in the output layer.
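
To make the layer-by-layer flow concrete, here is a compact numpy sketch of a single forward pass through one hidden layer. The weights are random (an untrained network) and the layer sizes are arbitrary assumptions; the point is only that the input layer passes data in unchanged, the hidden layer does the computation, and the output layer reports the result.

# Forward pass through a tiny neural network: input -> hidden -> output.
import numpy as np

rng = np.random.default_rng(5)

def relu(x):
    return np.maximum(0, x)            # a common activation used by hidden-layer nodes

x = np.array([0.5, -1.2, 3.0])         # input layer: 3 features, no processing here

W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # hidden layer with 4 nodes
hidden = relu(x @ W1 + b1)                      # each node computes on the received data

W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # connection to the output layer
output = hidden @ W2 + b2                       # output layer gives the final result
print(output)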