Project Cycle 1-2-25
A project is defined as a sequence of tasks that requires planning and execution to achieve the desired outcome.
A project cycle provides a structured approach to manage the project from start to end.
A project cycle consists of a set of activities, resources and constraints that contribute to the successful completion
of the project.
What is the AI Project Cycle?
The AI project cycle is a systematic and sequential process that involves effective
planning, organizing, coordinating and developing the project, starting from the initial planning phase and
progressing through execution, completion and review.
Data Features
Data features refer to the type of data you want to collect. In the above example, the data features would be salary
amount, increment percentage, increment period, bonus, etc.
Some common types of data features are listed below (a short illustrative sketch follows the list):
1. Numerical features: These represent numeric values, e.g. age, weight, temperature.
2. Categorical features: These represent qualitative data, e.g. gender (male or female), colour,
country, etc.
3. Binary features: These represent data that can take only two distinct values, e.g. Yes or
No, True or False, 0 or 1.
4. Image features: These represent visual data, typically in the form of pixels.
5. Time series features: These represent data collected over time, e.g. stock prices, temperature
measurements or website traffic.
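As a small illustration (the values below are invented), a single record from the salary example above could combine numerical, categorical and binary features like this:

# One invented record combining different feature types
employee = {
    "salary_amount": 52000,        # numerical feature
    "increment_period": "Yearly",  # categorical feature
    "gets_bonus": True,            # binary feature (yes/no)
}
print(employee["salary_amount"], employee["increment_period"], employee["gets_bonus"])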
These data can be collected in different ways, from different data sources.
Some of them are:
1. Surveys
2. Web Scraping
3. Sensors
4. Cameras
5. Observations
6. API (Application Program Interface)
One of the most reliable and authentic sources of information is the open-source websites hosted by the
government. Some of the open-source Govt. portals are data.gov.in and india.gov.in.
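As an illustration of collecting data through an API, here is a minimal Python sketch using the requests library; the URL and field names are hypothetical placeholders, not a real endpoint.

import requests

# Hypothetical endpoint; a real one could come from an open government portal such as data.gov.in
url = "https://example.com/api/daily-temperature"

response = requests.get(url, timeout=10)
response.raise_for_status()      # stop if the request failed
records = response.json()        # most open-data APIs return JSON

# Each record is assumed to contain 'date' and 'temperature' fields
for record in records[:5]:
    print(record["date"], record["temperature"])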
3. What is Data Exploration?
Data exploration is a way to discover hidden patterns, interesting insights and useful information
from the data we have. We can use different tools and techniques to explore the data, such as
charts, graphs and statistical analysis.
For example, imagine we have a big dataset of students' test scores. Through data exploration,
we can find out in which subjects students are doing well and which ones they are struggling
with. We can also see if there are any patterns, like whether studying for more hours leads to
better scores.
Data exploration helps us make important decisions and find insights that can be used to
improve things.
There are several techniques we can use to explore data and uncover interesting patterns,
some of which are listed below:
Visualization: This is the most commonly used data exploration technique. It involves creating
colorful charts, graphs and diagrams to represent data visually. For instance, we can make a bar
graph to compare the popularity of different sports or a line graph to show how temperature
changes over time. Visualizations make it easier to understand data and spot trends.
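For instance, a minimal matplotlib sketch (the sports and counts below are made-up sample values) could produce such a bar graph:

import matplotlib.pyplot as plt

# Made-up sample data: how many students prefer each sport
sports = ["Cricket", "Football", "Badminton", "Kabaddi"]
students = [40, 25, 20, 15]

plt.bar(sports, students)
plt.xlabel("Sport")
plt.ylabel("Number of students")
plt.title("Popularity of different sports")
plt.show()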
Filtering and Sorting: We can use filters to focus on specific parts of the data. For example,
we can filter a list of movies to show only those released in the past year. Sorting allows us to
arrange data in a specific order.
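A small pandas sketch of filtering and sorting (the movie data below is invented for illustration):

import pandas as pd

# Invented sample data
movies = pd.DataFrame({
    "title": ["Movie A", "Movie B", "Movie C"],
    "year": [2024, 2021, 2024],
    "rating": [7.9, 8.4, 6.5],
})

# Filter: keep only movies released in 2024
recent = movies[movies["year"] == 2024]

# Sort: arrange the filtered movies by rating, highest first
print(recent.sort_values("rating", ascending=False))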
Summarization: Summarizing data involves finding key information or statistics, such as the average,
maximum, minimum or count, that give us an overview of the dataset.
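Continuing the same idea, a short pandas sketch that summarizes a column of invented test scores:

import pandas as pd

# Invented test scores
scores = pd.Series([62, 75, 88, 91, 54, 70])

print("Average:", scores.mean())
print("Highest:", scores.max())
print(scores.describe())   # count, mean, std, min, quartiles, max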
Pattern Recognition: This technique involves looking for repeated patterns or trends in the
data. For example, we might notice that the sales of ice cream increase during the summer
months or that there is an increase in website traffic on weekends. Recognizing patterns helps us
make predictions and understand how things might change in the future.
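A tiny sketch of spotting such a seasonal pattern by grouping invented monthly ice cream sales by season:

import pandas as pd

# Made-up monthly ice cream sales
sales = pd.DataFrame({
    "month": ["Jan", "Feb", "Jun", "Jul", "Aug", "Dec"],
    "units": [120, 110, 420, 450, 400, 130],
    "season": ["Winter", "Winter", "Summer", "Summer", "Summer", "Winter"],
})

# Average sales per season reveals the summer peak
print(sales.groupby("season")["units"].mean())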
4. What is Modelling?
The graphical representation makes the data understandable for humans, as we can discover trends and
patterns from it, but a machine can analyse the data only when it is in the most basic form of numbers
(which is binary, 0s and 1s). AI modelling refers to the process of creating a mathematical or statistical
representation of a real-world system or problem. AI machines require mathematical models because
mathematics provides a means for understanding and representing complex patterns, relationships and
calculations.
The process of AI modelling has three essential components:
1. Data: High-quality and relevant data is essential for AI modelling. It serves as the input for training and
testing the models.
2. Algorithms: Algorithms are mathematical formulas or instructions that process the data to make
predictions or decisions.
3. Training/Rules: AI models need to be trained using labelled data, unlabelled data or rules. During training,
the model learns patterns and relationships in the data to make accurate predictions or decisions.
Generally, AI models can be classified as rule based or learning based.
This classification is based on how they make decisions and solve problems.
Rule Based Approach:
The rule-based AI approach is a type of AI modelling that uses a set of predefined rules and logic to make
decisions or take actions. These rules are defined by developers, are based on IF-THEN and ELSE
statements, and are designed to mimic human decision-making processes. The rules used for decision-
making may range from very simple to extremely complex.
Example: determine the grade of a student (the rules written as runnable Python):

mark = 85                  # example input score
if mark >= 90:
    grade = 'A'
elif mark >= 80:
    grade = 'B'
elif mark >= 70:
    grade = 'C'
# ... further rules for lower grades
The processing of the system is broken into the following tasks:
1. Data Acquisition: The AI system would need access to a dataset of tests that have already
been graded by humans.
2. Rule Creation: The system would use the data to create a set of rules that define how to
grade each question. For example, if the student's score is above 90, then assign them grade
"A".
3. Decision-making: When a new student's score is uploaded to the system, it will use the
predefined rules to automatically grade each question and calculate the final score.
4. Feedback: Continuously refine and enhance the rule-based AI model by incorporating
feedback, collecting more data and updating the rules to improve its effectiveness and efficiency.
This rule-based approach to creating an AI system is simple and easy to understand.
However, the learning is static, i.e. the system cannot learn from new data or adapt to new situations.
Learning Based Approach:
The learning-based approach to AI is more flexible and powerful, as it can learn from new data and
adapt to new situations.
However, it requires more data and resources and is more difficult to understand as compared to rule-based AI.
The learning-based approach can further be divided into three parts:
1. Supervised learning
2. Unsupervised learning
3. Reinforcement learning
1. Supervised Learning: In a supervised learning model, the dataset which is fed to the machine is
labelled. A label is some information which can be used as a tag for the data.
Example: temperatures expressed in °C (input) with the corresponding °F values (output labels).
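A minimal sketch of this example with scikit-learn, letting a linear regression model learn the °C to °F relationship from a few labelled pairs:

from sklearn.linear_model import LinearRegression
import numpy as np

# Labelled data: each Celsius value (input) is tagged with its Fahrenheit value (label)
celsius = np.array([[0], [10], [20], [30], [40]])
fahrenheit = np.array([32, 50, 68, 86, 104])

model = LinearRegression()
model.fit(celsius, fahrenheit)     # training: the model learns the pattern F = 1.8*C + 32

print(model.predict([[25]]))       # predicts approximately 77 °F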
2. Unsupervised Learning: In an unsupervised learning model, the data fed to the machine is unlabelled;
the machine has to find structure in it on its own. Two common unsupervised techniques are clustering and
dimensionality reduction.
Clustering: Clustering is a method of grouping objects into clusters such that objects with the most
similarities remain in one group and have few or no similarities with the objects of another group.
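A minimal clustering sketch with scikit-learn's KMeans, using invented, unlabelled 2-D points:

from sklearn.cluster import KMeans
import numpy as np

# Invented, unlabelled 2-D points forming two rough groups
points = np.array([[1, 2], [1, 4], [2, 3],
                   [8, 8], [9, 10], [8, 9]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(points)   # assign each point to one of 2 clusters

print(labels)                         # e.g. [0 0 0 1 1 1]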
Dimensionality reduction: Real-world data often has many more than three dimensions (features);
however, we cannot visualise anything beyond three dimensions. To understand such data, we must reduce its
dimensions using dimensionality reduction. Dimensionality reduction is the process of reducing the number of
input variables of the data while still being able to understand it.
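A minimal sketch using PCA (Principal Component Analysis) from scikit-learn to reduce invented 4-dimensional data to 2 dimensions:

from sklearn.decomposition import PCA
import numpy as np

# Invented data: 5 samples, each with 4 features (4 dimensions)
data = np.array([[2.5, 2.4, 0.5, 1.0],
                 [0.5, 0.7, 1.1, 0.9],
                 [2.2, 2.9, 0.4, 1.2],
                 [1.9, 2.2, 0.6, 0.8],
                 [3.1, 3.0, 0.3, 1.1]])

pca = PCA(n_components=2)      # keep only the 2 most informative directions
reduced = pca.fit_transform(data)

print(reduced.shape)           # (5, 2): same samples, fewer dimensions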
5. EVALUATION
After training, the model's performance needs to be evaluated. Evaluation helps us understand how
well the model is performing and whether it meets the desired objectives. Evaluation of any AI
system is important because of the following reasons:
1. Measure performance: Evaluate AI systems to see how well they perform on specific tasks. This
helps us know if they are accurate and ready for real-world use (a short sketch of this follows the list).
2. Find areas to improve: Evaluation helps us identify where the AI system is not doing well and
needs to get better. We look for patterns in mistakes and find out where it struggles with certain
inputs or situations.
3. Check our assumptions: Evaluation lets us check if our assumptions about the AI system and its
data are correct. We want to make sure the data we used to train the system is like what it will face
in the real world.
4. Ensure ethical use: Evaluation helps us make sure the AI system is used ethically and follows the
laws. We look for biases or unintended effects and ensure the system is used fairly and responsibly.
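As an illustration of measuring performance (point 1 above), a minimal sketch that compares a model's predictions with the true labels using scikit-learn's accuracy_score; the values are invented:

from sklearn.metrics import accuracy_score

# Invented true labels and a model's predictions on a small test set
true_labels = [1, 0, 1, 1, 0, 1, 0, 0]
predictions = [1, 0, 1, 0, 0, 1, 0, 1]

accuracy = accuracy_score(true_labels, predictions)
print("Accuracy:", accuracy)   # 6 of 8 correct, so 0.75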