AI PROJECT CYCLE

A project is defined as a sequence of tasks that requires planning and execution to achieve a desired outcome.
A project cycle provides a structured approach to manage the project from start to end.
A project cycle consists of a set of activities, resources and constraints that contribute to the successful completion of the project.
What is the AI Project Cycle?
The AI Project Cycle is a systematic and sequential process that involves the effective planning, organizing, coordinating and development of a project, starting from the initial planning phase and progressing through execution, completion and review.

The AI Project Cycle mainly has 5 stages.


1. Problem Scoping
2. Data Acquisition
3. Data Exploration
4. Modelling
5. Evaluation

1. What is Problem Scoping?


Problem scoping is the process of defining and understanding the specific boundaries and details of a problem before starting an AI project. It helps us clarify what needs to be solved and what the best approach to solve the problem is. It involves identifying a specific problem and developing a clear vision and understanding of how to solve it.
We can figure out exactly what needs to be done, who will benefit from it and what resources we have. It helps us set clear goals and plan our actions. It also helps us understand the limits of what can be achieved and ensures that our solution is realistic and achievable.
This step is important because it helps us understand the problem better and find the right solution.
Problem scoping for AI can be simplified using the 4Ws Problem Canvas approach.
What is the 4Ws Problem Canvas?
The 4Ws Problem Canvas helps in identifying the key elements related to the problem.
The 4Ws are :
a) Who
b) What
c) Where
d) Why
a) Who? : This block helps in analysing the people who are affected directly or indirectly by a problem. Under this, we find out who the 'Stakeholders' are (those people who face this problem and would benefit from the solution).
b) What? : This block helps to determine the nature of the problem. What is the problem, and how do we know that it is a problem? Under this block, we also gather evidence to prove that the problem we have selected actually exists.
c) Where? : This block helps us look into the situation in which the problem arises, the context of it, and the locations where it is prominent.
d) Why? : In the 'Why' canvas, we think about the benefits which the stakeholders would get from the solution and how it will benefit them as well as society.
2. What is Data Acquisition?
This is the second stage of the AI Project Cycle. This stage involves gathering relevant data for the AI project. Whenever we want an AI project to be able to predict an output, we first need to train it using data, called Training Data.
For example, if you want to make an artificially intelligent system which can predict the salary of any employee based on his previous salaries, you would feed the data of his previous salaries into the machine. The previous salary data here is known as Training Data.
Once the AI system is trained, we test its performance using new data that it hasn't seen before, called Testing Data.
For better efficiency of an AI project, the Training Data needs to be relevant and authentic.
Authenticity: The authenticity of the training data refers to its accuracy, reliability and credibility, i.e., the data should be a true representation of the real-world picture or domain that the AI system aims to understand or predict.
Relevance: The training data must be relevant to the problem statement scoped for the AI project. This means that the data should capture the essential features, patterns and relationships that are necessary for the AI system to learn and make accurate predictions or decisions. Irrelevant or unrelated data can lead to poor performance.
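For instance, the split between Training Data and Testing Data can be sketched in Python (a minimal illustration assuming scikit-learn is installed; the experience and salary figures are invented, not real data):

# A minimal sketch of splitting data into Training and Testing sets.
# The numbers below are invented illustration values.
from sklearn.model_selection import train_test_split

years_of_experience = [[1], [2], [3], [4], [5], [6], [7], [8]]     # input feature
salary = [30000, 35000, 41000, 46000, 52000, 58000, 63000, 70000]  # output to predict

# 75% of the records train the model; the remaining 25% test it on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    years_of_experience, salary, test_size=0.25, random_state=42)

Here X_train and y_train play the role of Training Data, while X_test and y_test act as the Testing Data kept aside for evaluation.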

Data Features
Data features refer to the type of data you want to collect. In the above example, the data features would be salary amount, increment percentage, increment period, bonus, etc.
Some common types of data features are:
1. Numerical features: These represent numeric values, e.g. age, weight, temperature.
2. Categorical features: These represent qualitative data, e.g. gender (male or female), colour, country, etc.
3. Binary features: These represent data that can take only two distinct values, e.g. Yes or No, True or False, 0 or 1, etc.
4. Image features: These represent visual data, typically in the form of pixels.
5. Time series features: These represent data collected over time, e.g. stock prices, temperature measurements or website traffic.
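To make these feature types concrete, here is one hypothetical employee record from the salary example, with each field labelled by its feature type (all values are invented for illustration; image features are omitted since they need pixel data):

# One invented employee record, annotated with its feature types.
record = {
    "age": 34,                                # numerical feature
    "department": "Sales",                    # categorical feature
    "is_permanent": True,                     # binary feature (True/False)
    "monthly_salary": [52000, 53000, 55000],  # time series feature (values over months)
}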
This data can be collected in different ways, from different Data Sources.
Some of them are:
1. Surveys
2. Web Scraping
3. Sensors
4. Cameras
5. Observations
6. API (Application Program Interface)
One of the most reliable and authentic sources of information is the open-sourced websites hosted by the government. Some of the open-sourced Govt. portals are: data.gov.in, india.gov.in
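As a sketch of collecting data through an API (assuming Python with the requests library installed; the URL below is a placeholder, not a real endpoint):

# A minimal sketch of acquiring data from a web API.
import requests

response = requests.get("https://example.com/api/salaries")  # placeholder URL, not a real endpoint
if response.status_code == 200:
    data = response.json()                   # parse the JSON payload into Python objects
    print(len(data), "records downloaded")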
3. What is Data Exploration?
Data exploration is a way to discover hidden patterns, interesting insights and useful information
from the data we have. We can use different tools and techniques to explore the data, such as
charts, graphs and statistical analysis.
For example, imagine we have a big dataset of students' test scores. Through data exploration, we can find out in which subjects students are doing well and which ones they are struggling with. We can also see if there are any patterns, like whether studying for more hours leads to better scores.
Data exploration helps us make important decisions and find insights that can be used to
improve things.
There are several techniques we can use to explore data and uncover interesting patterns,
Some of which are listed below:
Visualization: This is the most commonly used data exploration technique. It involves creating
colorful charts, graphs and diagrams to represent data visually. For instance, we can make a bar
graph to compare the popularity of different sports or a line graph to show how temperature
changes over time. Visualizations make it easier to understand data and spot trends.
Filtering and Sorting: We can use filters to focus on specific parts of the data. For example, we can filter a list of movies to show only those released in the past year. Sorting allows us to arrange data in a specific order.
Summarization: Summarizing data involves finding key information or statistics that give us
an overview.
Pattern Recognition: This technique involves looking for repeated patterns or trends in the
data. For example, we might notice that the sales of ice cream increase during the summer
months or that there is an increase in website traffic on weekends. Recognizing patterns helps us
make predictions and understand how things might change in the future.
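A minimal sketch of the Visualization and Summarization techniques on the test-scores example, assuming Python with pandas and matplotlib installed (the scores are invented illustration values):

import pandas as pd
import matplotlib.pyplot as plt

# Invented test scores for illustration.
scores = pd.DataFrame({
    "subject": ["Maths", "Science", "English", "Maths", "Science", "English"],
    "score":   [78, 85, 62, 90, 74, 58],
})

print(scores.groupby("subject")["score"].mean())            # Summarization: average per subject
scores.groupby("subject")["score"].mean().plot(kind="bar")  # Visualization: bar graph
plt.ylabel("Average score")
plt.show()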

4. What is Modelling?
Graphical representations make the data understandable for humans, as we can discover trends and patterns from them, but a machine can analyse the data only when the data is in the most basic form of numbers (which is binary: 0s and 1s). AI modelling refers to the process of creating a mathematical or statistical representation of a real-world system or problem. AI machines require mathematical models because mathematics provides a means for understanding and representing complex patterns, relationships and calculations.
The process of AI modelling has three essential components:

1. Data: High-quality and relevant data is essential for AI modelling. It serves as the input for training and testing the models.
2. Algorithms: Algorithms are mathematical formulas or instructions that process the data to make predictions or decisions.
3. Training/Rules: AI models need to be trained using labelled data, unlabelled data or rules. During training, the model learns patterns and relationships in the data to make accurate predictions or decisions.
Generally, AI models can be classified based on how they make decisions and solve problems, as follows:
Rule Based Approach:
The rule-based AI approach is a type of AI modelling that uses a set of predefined rules and logic to make decisions or take actions. These rules are defined by developers, are based on IF-THEN-ELSE statements, and are designed to mimic human decision-making processes. The rules used for decision-making may range from very simple to extremely complex.
Example: determine the grade of a student
if mark >= 90:
    grade = 'A'
elif mark >= 80:
    grade = 'B'
elif mark >= 70:
    grade = 'C'
……
The processing of the system can be broken into the following tasks:

1. Data Acquisition: The AI system would need access to a dataset of tests that have already been graded by humans.
2. Rule Creation: The system would use the data to create a set of rules that define how to
grade each question. For example, if the student's score is above 90, then assign them grade
"A".
3. Decision-making: When a new student's score is uploaded to the system, it will use the
predefined rules to automatically grade each question and calculate the final score.
4. Feedback: Continuously refine and enhance the rule-based AI model by incorporating feedback, collecting more data and updating the rules to improve its effectiveness and efficiency.
• This rule-based approach to creating an AI system is simple and easy to understand.
• The learning is static, i.e. the system cannot learn from new data or adapt to new situations.
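Putting these tasks together, here is a minimal sketch of the decision-making step as a Python function (the first three boundaries follow the example rules above; the 'D' and 'F' rules are assumptions added to complete the illustration):

# A minimal rule-based grader. The 'D' and 'F' boundaries are invented
# to complete the example; they are not part of the original rules.
def assign_grade(mark):
    if mark >= 90:
        return 'A'
    elif mark >= 80:
        return 'B'
    elif mark >= 70:
        return 'C'
    elif mark >= 60:
        return 'D'   # assumed boundary
    else:
        return 'F'   # assumed boundary

print(assign_grade(84))   # Decision-making: prints 'B'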

Learning Based Approach :


The learning-based AI approach, also known as Machine Learning, is a type of AI system that can learn from data and improve its performance over time. Unlike rule-based AI, machine learning systems do not rely on predefined rules but instead use statistical models to learn patterns and make decisions.

• This learning-based approach to AI is more flexible and powerful, as it can learn from new data and adapt to new situations.
• It requires more data and resources, and is more difficult to understand compared to rule-based AI.
The learning-based approach can further be divided into three parts:
1. Supervised learning
2. Unsupervised learning
3. Reinforcement learning
1. Supervised Learning: In a supervised learning model, the dataset which is fed to the machine is labelled. A label is some information which can be used as a tag for the data.
Example: The temperature expressed in °C and the corresponding °F values as input-output pairs.

Labelled data consists of input-output pairs used to train a model, which learns a mathematical relationship or equation between the inputs and outputs. The labelled data acts as guidance for learning. Once the model has been trained, you can provide it with new input values and it will use the learned relationship to predict the output.
Supervised learning is further divided into two main categories:
a. Regression: In regression, the model learns to predict a numerical value based on the input data.
Example: Predicting house prices based on features such as the number of bedrooms, area and location. In this case, the output (house price) is a continuous numerical value. (A short code sketch follows after this list.)
b. Classification: In classification, the model learns to classify input data into one of several
predefined categories.
Example: Identifying if an email is spam or not spam based on the email's content. In this case, the output is a discrete label: Spam or Not Spam.
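A minimal regression sketch using the °C to °F example above (assuming Python with scikit-learn installed); from the labelled pairs, the model learns the relationship F = 1.8 × C + 32:

from sklearn.linear_model import LinearRegression

celsius = [[0], [10], [20], [30], [40]]   # inputs
fahrenheit = [32, 50, 68, 86, 104]        # labelled outputs

model = LinearRegression()
model.fit(celsius, fahrenheit)            # training: learn the C-to-F relationship
print(model.predict([[25]]))              # predicts approximately [77.]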
2. Unsupervised learning: In an unsupervised learning process, a model learns from input data without any labelled outputs or guidance. The goal is to identify patterns, structures or relationships within the data.
The model essentially teaches itself (learns from input data) by using a three-step process as given
below:
Step 1: Analyzing the input data items for their characteristics
Step 2: Observing distinctive characteristics found in the data items
Step 3: Discovering hidden structures or grouping based on the observations
Unsupervised learning models can be divided into two:
a. Clustering
b. Dimensionality Reduction

Clustering: Clustering is a method of grouping objects into clusters such that objects with the most similarities remain in one group and have few or no similarities with the objects of another group.

Dimensionality reduction: Data can have far more than three dimensions (input variables); however, humans cannot visualise anything beyond three dimensions. To understand such data, you must reduce its dimensions using dimensionality reduction. Dimensionality reduction is the process of reducing the number of input variables in the data while still being able to understand it.
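A minimal sketch of both unsupervised techniques, assuming Python with scikit-learn installed (the data points are invented illustration values):

from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

# Invented 4-dimensional data points for illustration.
points = [[1, 2, 1, 0], [1, 1, 2, 0], [8, 9, 8, 1], [9, 8, 9, 1]]

# Clustering: group similar points together (here, into 2 clusters).
labels = KMeans(n_clusters=2, n_init=10).fit_predict(points)
print(labels)          # e.g. [0 0 1 1]: two groups of similar points

# Dimensionality reduction: compress 4 input variables down to 2.
reduced = PCA(n_components=2).fit_transform(points)
print(reduced.shape)   # (4, 2): the same points, in fewer dimensions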

3. Reinforcement learning: Reinforcement learning is a unique approach to machine learning that focuses on learning from feedback and experiences.
In this approach, a machine learning algorithm enables an agent (a machine with intelligent code) to learn in an environment to find the best possible behaviour or path.
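A minimal Q-learning sketch for illustration (the environment, a 5-cell corridor where the agent earns a reward only at the rightmost cell, is invented for this example):

import random

# Invented environment: a corridor of 5 cells. The agent starts at cell 0
# and receives a reward of 1 only when it reaches cell 4.
N_STATES, ACTIONS = 5, [-1, +1]            # actions: move left or move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # one Q-value per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Explore randomly sometimes; otherwise pick the best-known action.
        a = random.randrange(2) if random.random() < epsilon else Q[state].index(max(Q[state]))
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Feedback: update the Q-value from the reward and the future estimate.
        Q[state][a] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][a])
        state = next_state

print([q.index(max(q)) for q in Q[:-1]])   # best action per cell: expect [1, 1, 1, 1] (move right)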
Rule Based Approach | Learning Based Approach
It refers to AI modelling where the rules are defined by the developer. | It refers to AI modelling where the machine learns by itself.
In this, learning is static. | In this, learning is dynamic.
The machine, once trained, does not take into consideration any changes made in the original training dataset. | The machine, once trained, does take into consideration any changes made in the original training dataset.

5. EVALUATION

After training, the model's performance needs to be evaluated. Evaluation helps us understand how
well the model is performing and whether it meets the desired objectives. Evaluation of any AI
system is important because of the following reasons:
1. Measure performance: Evaluate AI systems to see how well they perform on specific tasks. This
helps us know if they are accurate and ready for real-world use.
2. Find areas to improve: Evaluation helps us identify where the AI system is not doing well and
needs to get better. We look for patterns in mistakes and find out where it struggles with certain
inputs or situations.
3. Check our assumptions: Evaluation lets us check if our assumptions about the AI system and its
data are correct. We want to make sure the data we used to train the system is like what it will face
in the real world.
4. Ensure ethical use: Evaluation helps us make sure the AI system is used ethically and follows the
laws. We look for biases or unintended effects and ensure the system is used fairly and responsibly.
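A minimal sketch of measuring performance on Testing Data, assuming Python with scikit-learn installed (the labels below are invented, continuing the spam example from the Classification section):

from sklearn.metrics import accuracy_score, confusion_matrix

# Invented true labels vs. model predictions for the spam example.
y_true = ["spam", "not spam", "spam", "not spam", "spam", "not spam"]
y_pred = ["spam", "not spam", "not spam", "not spam", "spam", "spam"]

print(accuracy_score(y_true, y_pred))    # fraction of correct predictions (here 4/6 ≈ 0.67)
print(confusion_matrix(y_true, y_pred))  # shows where the model makes its mistakes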

Artificial Neural Networks (ANN)


Deep learning is a subset of machine learning whose algorithms, known as Artificial Neural Networks, are inspired by the functioning of the human brain and nervous system.
Features of ANNs
• Artificial neural networks contain interconnected neurons that mimic the ones in our brain.
• The key advantage of neural networks is that they are able to extract data features automatically, without needing input from the programmer.
• They are a fast and efficient way to solve problems for which the dataset is very large.
An artificial neural network contains several layers, and each layer is divided into blocks called
nodes.
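To make "layers divided into nodes" concrete, here is a minimal sketch of one forward pass through a tiny network, assuming Python with NumPy installed (the weights are random, purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
x = rng.random(3)                  # input layer: 3 nodes

W1 = rng.random((4, 3))            # weights into a hidden layer of 4 nodes
W2 = rng.random((2, 4))            # weights into an output layer of 2 nodes

hidden = np.maximum(0, W1 @ x)     # each hidden node combines all inputs (ReLU activation)
output = W2 @ hidden               # each output node combines all hidden nodes
print(output)                      # the network's raw output values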
