AI Project Cycle-Notes
Introduction
The AI Project Cycle can be described as a systematic and
sequential process that involves effective planning,
organizing, coordinating and developing a project,
starting from the initial planning phase and progressing
through execution, completion and review.
Problem Scoping
Problem scoping is the process of defining and
understanding the specific boundaries and details of a
problem before starting an AI project. It clarifies what
needs to be solved and what the best approach to solving
it is, by identifying a specific problem and developing a
clear vision and understanding of how to solve it.
SDG Goals
The United Nations has announced 17 goals, termed the
Sustainable Development Goals (SDGs). The aim is to
achieve these goals by 2030, and all member nations of
the UN have pledged to do so.
4W Problem Canvas
Problem scoping can be simplified using the 4Ws Problem
Canvas approach, a method to define and understand a
problem in a structured manner by answering four
questions: Who (who is affected), What (what is the
problem), Where (where does it occur) and Why (why is it
worth solving).
Data Features
Data features refer to the types of data you want to collect.
Each feature represents a measurable property or
characteristic of the object being analyzed. Some common
types of data features are:
1. Numerical Features
2. Categorical Features
3. Binary Features
4. Image Features
5. Time-series Features
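As an illustration, the five feature types above can appear together in a single record. The sketch below uses an invented patient record (all field names and values are hypothetical) and a rough stdlib-only classifier for the feature types:

```python
# A hypothetical record illustrating the common data-feature types
# listed above (all field names and values are invented).
patient_record = {
    "age": 42,                               # numerical feature
    "blood_group": "O+",                     # categorical feature
    "is_smoker": False,                      # binary feature
    "scan_pixels": [[0, 255], [128, 64]],    # image feature (pixel grid)
    "heart_rate_log": [(0, 72), (60, 75)],   # time-series ((time, value) pairs)
}

def feature_type(value):
    """Roughly classify a feature value into one of the types above."""
    if isinstance(value, bool):              # check bool before int:
        return "binary"                      # True/False are also ints in Python
    if isinstance(value, (int, float)):
        return "numerical"
    if isinstance(value, str):
        return "categorical"
    if isinstance(value, list) and value and isinstance(value[0], list):
        return "image"
    if isinstance(value, list) and value and isinstance(value[0], tuple):
        return "time-series"
    return "unknown"

for name, value in patient_record.items():
    print(name, "->", feature_type(value))
```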
Data Sources
Data sources serve as the starting point of the data,
providing the raw material from which data features are
derived.
1. Databases
2. Web Scraping
3. Sensors and Internet of Things (IoT)
4. Social media
5. Surveys
6. API (Application Programming Interfaces)
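As a small sketch of turning one of these sources into usable data, the example below ingests an invented survey export (source 5 above) from CSV text using Python's standard csv module, and derives a numerical feature from it:

```python
import csv
import io

# A minimal sketch of ingesting raw data from a source: a survey
# exported as CSV text. The rows below are invented for illustration.
survey_csv = """respondent,age,city
r1,25,Delhi
r2,31,Mumbai
r3,19,Delhi
"""

rows = list(csv.DictReader(io.StringIO(survey_csv)))
ages = [int(r["age"]) for r in rows]   # a numerical feature derived from the source

print("responses:", len(rows))
print("average age:", sum(ages) / len(ages))   # → 25.0
```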
Data Exploration
Data Exploration is the process of discovering hidden
patterns, interesting insights and useful information in the
data we have. It involves looking closely at the data,
asking questions and finding answers.
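Those questions can be asked directly in code. A minimal sketch using Python's statistics module on an invented list of daily temperatures:

```python
import statistics

# A small data-exploration sketch: asking simple questions of a toy
# dataset. The daily-temperature values below are invented.
temperatures = [31, 29, 35, 40, 33, 29, 38]

print("coolest day:", min(temperatures))                       # → 29
print("hottest day:", max(temperatures))                       # → 40
print("average:", round(statistics.mean(temperatures), 1))     # → 33.6
print("most frequent:", statistics.mode(temperatures))         # → 29
```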
Components of AI modelling:
1. Data
2. Algorithms
3. Training/Rules
Rule-based AI Approach
This type of AI modelling uses a set of predefined rules and
logic to make decisions or take actions. These rules are
defined by developers, are based on IF-THEN-ELSE
statements, and are designed to mimic human
decision-making processes.
Limitations:
1. It can only handle tasks that can be defined using a set
of clear rules; if the machine is tested with a dataset that
differs from the rules, it may fail to make accurate
predictions.
2. The learning is static, i.e., the system cannot learn from
new data or adapt to new situations.
Learning-Based Approach
The Learning-Based Approach, also known as Machine
Learning, is a type of AI system that can learn from data
and improve its performance over time.
Unlike rule-based AI, machine learning systems do not rely
on predefined rules; instead they use statistical models to
learn patterns and make decisions.
The Learning-Based Approach is more flexible and powerful
than the Rule-Based Approach, as it can learn from new
data and adapt to new situations. However, it requires more
data and computational resources, and can be more difficult
to understand and explain than rule-based AI.
Supervised Learning
In supervised learning, a model is trained using labelled
data, which consists of input-output pairs.
Types of supervised learning:
1. Regression- The output is a numerical value. The model
learns to predict a numerical value based on the input data;
continuous data is fed to it. For example, if you wish to
predict your next salary, you would put in the data of your
previous salary, any increments, etc.
2. Classification- The output is a category. The model
learns to assign each input to one of a fixed set of classes,
for example marking an email as spam or not spam.
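The salary example can be sketched as a simple least-squares regression in plain Python; the salary figures below are invented, and chosen to lie exactly on a straight line so the prediction is easy to check by hand:

```python
# A toy regression: fit a straight line to past (year, salary) pairs
# and predict the next year. The figures are invented for illustration.
years = [1, 2, 3, 4]
salaries = [30000, 34000, 38000, 42000]   # labelled (input, output) pairs

n = len(years)
mean_x = sum(years) / n
mean_y = sum(salaries) / n

# Least-squares slope and intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, salaries)) \
        / sum((x - mean_x) ** 2 for x in years)
intercept = mean_y - slope * mean_x

next_salary = slope * 5 + intercept   # predict the salary for year 5
print(next_salary)   # → 46000.0
```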
Reinforcement Learning
It is defined as a unique approach to machine learning that
focuses on learning from feedback and experience, rather
than relying on labelled datasets or known outputs as
supervised learning does. In reinforcement learning, the
data from which the agent learns consists of its experiences
of interacting with an environment.
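A tiny sketch of this idea, using tabular Q-learning (one standard reinforcement-learning algorithm) on an invented one-dimensional world: the agent starts at state 0, the goal is state 4, and it learns from reward feedback alone, with no labelled examples:

```python
import random

# Tabular Q-learning on an invented 1-D world: states 0..4, goal at
# state 4, actions -1 (left) and +1 (right). Reward comes only from
# reaching the goal; the agent learns from this feedback alone.
random.seed(0)
n_states, actions = 5, [-1, +1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy choice: mostly exploit what was learned,
        # sometimes explore a random action.
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)          # environment step
        r = 1.0 if s2 == n_states - 1 else 0.0         # reward only at goal
        # Q-learning update from the (state, action, reward, next-state)
        # experience just gathered.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

# After training, moving right should look better than moving left
# in every non-goal state.
print(all(Q[(s, 1)] > Q[(s, -1)] for s in range(n_states - 1)))
```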
Neural Networks
• Neural networks are inspired by how neurons in the
human brain function.
• A key advantage is their ability to automatically extract
features from data without manual intervention by the
programmer.
• They are particularly effective for tasks involving large
datasets, such as image processing.
Structure:
1. Input Layer:
• The first layer that receives raw data.
• No processing occurs here; it only feeds data into the
network.
2. Hidden Layers:
• These layers perform the computations.
• Each node within these layers applies a machine learning
algorithm to process the data received from the previous
layer.
• The number of hidden layers and nodes depends on the
complexity of the problem.
3. Output Layer:
• Provides the final processed data to the user.
• Similar to the input layer, it does not perform any
computations.
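The flow of data through these layers can be sketched in plain Python. The weights and inputs below are fixed, invented numbers; a real network would learn its weights during training:

```python
import math

def sigmoid(x):
    # A common activation function used by hidden-layer nodes.
    return 1 / (1 + math.exp(-x))

def forward(inputs, hidden_weights):
    # Input layer: only feeds the raw values in (no computation).
    # Hidden layer: each node computes a weighted sum of the previous
    # layer's values and applies an activation function.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # Output layer: delivers the processed values to the user.
    return hidden

# Two inputs, two hidden nodes; all numbers are invented.
result = forward(inputs=[1.0, 0.5],
                 hidden_weights=[[0.4, -0.2], [0.3, 0.8]])
print([round(h, 3) for h in result])   # → [0.574, 0.668]
```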
Features:
• Layered Architecture: Organized into input, hidden, and
output layers.
• Adaptability: Automatically adjusts to improve
performance as more data is processed.
• Scalability: Larger networks perform better with more
data.
Benefits over Traditional Machine Learning
• Traditional algorithms may reach a saturation point with
increasing data, while neural networks continue to
improve with more data.
• Neural networks are better suited for complex problems
involving large, unstructured datasets.
Evaluation
The Evaluation stage in the AI Project Cycle is crucial for
determining the efficiency and effectiveness of a developed
AI model. It involves testing the model on a separate set of
Testing Data.
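A minimal sketch of this stage: a toy, already-"trained" model is scored on held-out testing data it never saw during training. The model, messages and labels below are all invented for illustration:

```python
# A toy "trained" model: flags a message as spam if it contains "win".
def model(message):
    return "spam" if "win" in message else "not spam"

# Held-out testing data: (input, true label) pairs kept separate
# from training. All examples are invented.
test_data = [
    ("win a prize now", "spam"),
    ("meeting at 5pm", "not spam"),
    ("you win big", "spam"),
    ("lunch tomorrow?", "spam"),   # a case the toy model gets wrong
]

correct = sum(model(x) == label for x, label in test_data)
accuracy = correct / len(test_data)
print(accuracy)   # → 0.75
```

Comparing the model's predictions against the true labels of the testing data gives a concrete measure of its effectiveness.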