1ST Ai ML

The document provides an overview of Artificial Intelligence (AI), defining it as a field focused on creating machines that perform tasks requiring human intelligence. It outlines different approaches to AI, such as acting humanly and rationally, and explains the concept of rational agents using the PEAS framework to define task environments. Additionally, it discusses the properties of task environments, agent-environment interactions, types of agent programs, and the connection between intelligence, reasoning, and decision-making in AI.

Uploaded by keerthanac426

1 ) What is AI? What are the different approaches to AI?

What is AI?
(Page 2 of the PDF)
Artificial Intelligence (AI) is a field of computer science that aims to create machines
that can perform tasks that typically require human intelligence. These tasks include:
•Reasoning
•Learning
•Problem-solving
•Perception
•Natural Language Understanding

Different Approaches of AI
(Pages 2–5 of the PDF)
The module explains four main approaches to Artificial Intelligence:
1. Acting Humanly – The Turing Test Approach
•Tests if a machine can imitate human behavior.
•Needs: Natural Language Processing, Knowledge Representation, Automated Reasoning, Machine Learning, Computer Vision, and Robotics.

2. Thinking Humanly – The Cognitive Modeling Approach


•Aims to make machines think like humans by modeling how the human brain works.
•Involves:
•Introspection
•Psychological Experiments
•Brain Imaging
•Linked with Cognitive Science.
3. Thinking Rationally – The Laws of Thought Approach
•Based on logic and structured reasoning.
•Example: Using syllogisms like:
•"All men are mortal. Socrates is a man. Therefore, Socrates is mortal."
•Challenge: Formalizing vague human knowledge into logical rules.

4. Acting Rationally – The Rational Agent Approach


•Focuses on taking the best possible action to achieve goals.
•Does not aim to mimic human behavior, but to optimize decisions.
•Example: Self-driving cars, spam filters.
2) Define rational agent. Explain task environment with PEAS framework and illustrate with an
example.

Definition of Rational Agent


Page 30–31
A rational agent is an entity that takes actions that lead to the best possible outcome
based on the available information and a well-defined performance measure.

It does not just perform actions randomly.


It makes decisions that maximize success according to the situation.
Rationality means “doing the right thing” based on:
•Performance measure
•Prior knowledge
•Percepts
•Possible actions
Example (Page 32):
A self-driving car:
•Perceives traffic lights and pedestrians.
•Acts to stop at red lights and avoid collisions.
•Is rational if it follows rules, prioritizes safety, and
reaches its destination efficiently.
Task Environment and PEAS Framework
Pages 38–40
To design an AI agent effectively, we define its task
environment using the PEAS framework:
PEAS stands for:
1. P – Performance Measure: how we judge the agent’s success (e.g., safety, speed, fuel efficiency).
2. E – Environment: the world in which the agent operates (e.g., roads, traffic, weather).
3. A – Actuators: the agent’s mechanisms to act on the environment (e.g., steering, brakes, indicators).
4. S – Sensors: tools to perceive the environment (e.g., cameras, radar, GPS).
Example: Self-Driving Taxi
Pages 38–39

PEAS Component – Self-Driving Taxi Example
•P – Performance Measure: safe, quick arrival; passenger comfort; fuel efficiency
•E – Environment: roads, traffic, pedestrians, weather
•A – Actuators: steering, brakes, accelerator, horn
•S – Sensors: GPS, cameras, speedometer, LiDAR
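The PEAS description above can also be captured as a plain data structure. This is an illustrative sketch only; the dictionary keys and the `describe` helper are my own naming, not part of the module:

```python
# Illustrative PEAS description of the self-driving taxi task environment.
# The keys and values mirror the table above; this is not a standard API,
# just a convenient way to organize the specification.
self_driving_taxi_peas = {
    "performance_measure": ["safe, quick arrival", "passenger comfort", "fuel efficiency"],
    "environment": ["roads", "traffic", "pedestrians", "weather"],
    "actuators": ["steering", "brakes", "accelerator", "horn"],
    "sensors": ["GPS", "cameras", "speedometer", "LiDAR"],
}

def describe(peas):
    """Render a PEAS specification as readable one-line summaries."""
    return [f"{key}: {', '.join(values)}" for key, values in peas.items()]
```

Writing the task environment down like this before coding an agent makes it easy to check that every sensor and actuator the design assumes is actually listed.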
3 ) List out the different properties of task environment. Explain each with an example.

Properties of Task Environment


Pages 41–45
AI agents operate in environments with various properties. Understanding these helps design
better agents.

1. Fully Observable vs. Partially Observable


•Fully Observable: The agent has access to all relevant information at all times.
Example: A chessboard in a chess game is fully observable.
•Partially Observable: The agent lacks complete information due to sensor limitations or
hidden data.
Example: A self-driving car can't see beyond corners or through heavy fog.

2. Single Agent vs. Multiagent


•Single Agent: Only one agent is acting in the environment.
Example: A robotic vacuum cleaner working alone in a room.
•Multiagent: Multiple agents interact and may compete or cooperate.
Example: A multiplayer video game or autonomous cars on the road.
3. Deterministic vs. Stochastic
•Deterministic: The next state is predictable from the current state and action.
Example: Solving a math puzzle.
•Stochastic: The outcome is uncertain; randomness is involved.
Example: Weather prediction or traffic situations.

4. Episodic vs. Sequential


•Episodic: Each task/episode is independent of the previous ones.
Example: Quality check on a production line item.
•Sequential: Current decisions affect future actions.
Example: Driving a car or playing chess.

5. Static vs. Dynamic


•Static: The environment doesn’t change while the agent is thinking.
Example: Crossword puzzles.
•Dynamic: The environment changes over time, even without agent action.
Example: A self-driving car in real-time traffic.

6. Discrete vs. Continuous


•Discrete: A limited number of possible states or actions.
Example: Turn-based board games like checkers.
•Continuous: Infinite possibilities, smooth transitions.
Example: Controlling the steering angle of a car or robot arm.

7. Known vs. Unknown


•Known: The agent knows the rules of the environment.
Example: Playing tic-tac-toe where rules are predefined.
•Unknown: The agent must learn how things work through experience.
Example: A robot exploring a new room layout.
4 ) Explain Agent and environment interaction with a neat diagram and illustrate with an example.

Agent and Environment Interaction


Page 24–27
An agent is anything that perceives its environment through sensors and acts upon it using
actuators to achieve a goal.
The environment is everything that surrounds the agent and can affect or be affected by its actions.

Interaction Cycle
Page 25–26
Here’s the interaction explained step by step:
1.Perception: The agent senses the environment using its sensors.
2.Decision-Making: Based on the percepts, the agent decides what action to take.
3.Action: The agent uses its actuators to perform an action.
4.Feedback Loop: The environment changes, and the cycle repeats.

Example: Self-Driving Car


Page 27

Component – Example
•Agent: self-driving car
•Environment: roads, traffic, pedestrians, weather
•Sensors: cameras, radar, GPS, LiDAR
•Actuators: steering, brakes, accelerator

How it works:
•The car perceives traffic lights and nearby vehicles using sensors.
•It decides to stop or turn.
•It acts by using actuators (brakes or steering).
•The environment updates, and the car senses again.
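The four-step cycle above can be sketched as a minimal loop. The stop/go rule below is a toy stand-in for a real controller, invented purely for illustration:

```python
# Minimal agent-environment interaction loop (toy example, not a real controller).
# The environment is modeled as a stream of traffic-light percepts; the agent
# maps each percept to an action, and the cycle repeats.
def decide(percept):
    # Decision-making: condition only on the current percept.
    return "Brake" if percept == "red_light" else "Drive"

def run_agent(percepts):
    actions = []
    for percept in percepts:        # 1. Perception (via sensors)
        action = decide(percept)    # 2. Decision-making
        actions.append(action)      # 3. Action (recorded here instead of actuated)
    return actions                  # 4. The environment changes and the cycle repeats

print(run_agent(["green_light", "red_light", "green_light"]))
```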
5 ) What are agent programs? Explain the different types of agent programs with an example.

What are Agent Programs?


Page 46–47
Agent programs are software components that decide the actions an agent should
take based on its percepts (inputs from the environment).
They answer key questions like:
•“What is the world like now?”
•“What action should I take?”
•“What effect will my action have?”
Agent programs work inside the agent and control how it behaves in response to its
environment.

Types of Agent Programs


Pages 48–55
There are five basic types of agent programs
explained in this module:

1. Simple Reflex Agents


Page 48–49
•Work on condition–action rules.
•Do not store any history; they respond only to the current percept.
Example:
A robotic vacuum cleaner:
•“If current square is dirty → Suck”
•“If clean → Move to next square”
Limitation: Can’t handle partial observability or learn from past.
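The condition–action rules above can be written out literally as a lookup table. This is an illustrative sketch assuming percepts of the form (location, status):

```python
# Simple reflex agent as an explicit condition-action rule table.
# The agent consults only the current percept -- no history is kept,
# which is exactly the limitation noted above.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Move Right",
    ("B", "Clean"): "Move Left",
}

def simple_reflex_vacuum(percept):
    # Look up the action matching the current percept only.
    return RULES[percept]
```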
2. Model-Based Reflex Agents
Page 51–52
•Maintain internal state based on past percepts.
•Use a model of the world to track unseen parts of the environment.
Example:
Car braking system that recognizes different brake light styles and adjusts accordingly.

3. Goal-Based Agents
Page 52–53
•Choose actions based on goal achievement.
•Use search and planning to determine how to reach the goal.
Example:
GPS Navigation System
•Goal: Reach destination
•Chooses best route based on current traffic and roadblocks.
4. Utility-Based Agents
Page 53–54
•Select actions based on utility (desirability) of outcomes.
•Handle trade-offs and uncertainty.
Example:
Online Shopping Recommendation System
•Suggests items not just based on goal (sell product)
•Also considers user preference, price, and availability.

5. Learning Agents
Page 55–57
•Improve performance over time by learning from experience.
•Consist of:
• Performance Element (does the job)
• Learning Element (improves over time)
• Critic (provides feedback)
• Problem Generator (suggests exploratory actions)
Example:
Self-driving taxi learns over time which routes are faster during rush hour by trying new paths.
6) Explain, how intelligence, reasoning, and decision-making are connected to AI, using
philosophy, mathematics, and economics

How Intelligence, Reasoning, and Decision-Making Are Connected to AI


Pages 7–9
AI is a multidisciplinary field influenced by philosophy, mathematics, and economics. Each of
these disciplines contributes to how AI systems think, reason, and make decisions
intelligently.

1. Philosophy – Knowledge, Reasoning, and Action


Page 7
Philosophy asks:
•What is knowledge?
•How is it used to guide actions?
Aristotle’s view: Reasoning leads to action.
Example:
If it’s cold and you want warmth, you reason that a blanket will help, so you
get one.
AI Analogy:
A self-driving car reasons through traffic data to decide when to turn or stop.

2. Mathematics – Logic, Computation, and Probability


Page 8
AI relies on mathematics to:
•Make valid conclusions (Logic)
→ Boolean Logic helps with yes/no decisions.
•Perform calculations (Computation)
→ Algorithms power things like search engines.
•Handle uncertainty (Probability)
→ Bayesian models and statistical reasoning are used in predictions.
Example:
Spam filters calculate the probability that an email is spam based on keywords and
patterns.
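The spam-filter example can be made concrete with Bayes' rule. The probabilities below are invented purely for illustration:

```python
# Bayes' rule: P(spam | word) = P(word | spam) * P(spam) / P(word).
# All numbers here are made up for illustration only.
p_spam = 0.4                  # prior probability an email is spam
p_word_given_spam = 0.6       # the keyword appears in 60% of spam
p_word_given_ham = 0.05       # the keyword appears in 5% of non-spam

# Total probability of seeing the keyword at all.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: probability the email is spam given that the keyword appeared.
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))
```

Seeing the keyword raises the spam probability from the 0.4 prior to roughly 0.89 in this made-up example, which is the kind of update a statistical spam filter performs.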

3. Economics – Decision Theory and Optimization


Page 8–9
Economics introduces:
•Decision Theory: Making optimal choices under constraints.
•Game Theory: Predicting behavior in competitive settings.
•Utility Theory: Choosing actions that maximize benefit (or utility).
Example:
A food delivery app chooses the best driver and fastest route to ensure timely
delivery while minimizing fuel cost.
Summary of the Connection

Discipline – Contribution to AI – Example
•Philosophy: reasoning leads to action (self-driving car using rules to decide turns)
•Mathematics: enables logic and probability-based decisions (spam filter using keyword probability)
•Economics: optimizes decisions based on cost/benefit (food app choosing driver and route)

7 ) Explain model based reflex agents with a diagram and write the agent program for the same

Model-Based Reflex Agents


Pages 51–52
A Model-Based Reflex Agent is an improved version of a simple reflex agent. It
can handle partially observable environments by maintaining an internal state
that reflects the history of the world based on previous percepts.

Key Features:
1.Maintains an internal state of the world.
2.Uses a model of how the world works to update the internal state.
3.Uses condition–action rules to decide the next action.

How It Works:
•Perceives the environment.
•Updates internal state using:
• The last internal state.
• The new percept.
• The model of how the world evolves.
•Chooses an action based on current internal state using condition–action rules.

Example: Car Braking System


•Perceives that the car ahead is slowing down.
•Maintains memory of previous car behavior.
•Applies brakes if the internal state suggests danger based on distance and speed.

Model-Based Reflex Agent Program


(Pseudocode, based on Page 51)

function MODEL-BASED-REFLEX-AGENT(percept) returns an action
    persistent: state, the agent's current conception of the world state
                model, a description of how the next state depends on the current state and action
                rules, a set of condition–action rules
                action, the most recent action (initially none)

    state ← UPDATE-STATE(state, action, percept, model)
    rule ← MATCH-RULE(state, rules)
    action ← rule.ACTION
    return action

Explanation of Components:
•UpdateState: Updates what the agent knows about the world.
•MatchRule: Finds which condition–action rule applies.
•RuleAction: Chooses the correct action based on the matched rule.
8 ) Write the simple agent function for vacuum cleaner and explain the terms.

Simple Agent Function for Vacuum Cleaner


Page 28–29
In the Vacuum-Cleaner World, there are two locations: A and B. The vacuum cleaner can perceive:
•Its location (A or B)
•Whether that location is dirty or clean
Based on this, it performs actions like:
•Suck (to clean)
•Move Left
•Move Right

Simple Agent Function (Pseudocode)


python

def simple_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Move Right"
    else:  # location == "B"
        return "Move Left"

Explanation of Terms
(Page 28–29)

Term – Explanation
•Percept: the agent's observation, i.e., its current location and whether it's dirty/clean
•Percept Sequence: history of all percepts received so far
•Agent Function: a rule that maps percept sequences to actions
•Action: the move that the agent takes (Suck, Move Left, Move Right)

Example Flow:
1. Percept = (A, Dirty) → Action = Suck
2. Percept = (A, Clean) → Action = Move Right
3. Percept = (B, Clean) → Action = Move Left
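The example flow can be checked with a tiny simulation. The agent function is restated here so the sketch is self-contained; the `simulate` helper and its step count are my own additions for illustration:

```python
def simple_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Move Right"
    else:
        return "Move Left"

def simulate(world, start, steps):
    """Run the agent for a few steps in the two-square vacuum world.

    world: dict mapping location -> "Dirty"/"Clean"; start: "A" or "B".
    Returns the list of actions taken.
    """
    location, actions = start, []
    for _ in range(steps):
        action = simple_vacuum_agent((location, world[location]))
        actions.append(action)
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Move Right":
            location = "B"
        else:
            location = "A"
    return actions

print(simulate({"A": "Dirty", "B": "Dirty"}, "A", 4))
```

Starting in square A with both squares dirty, the agent sucks, moves right, sucks, and moves left, matching the percept-to-action mapping in the flow above.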
9 ) Define Problem-Solving Agent and explain the steps involved in problem solving agent
with an example

Definition: Problem-Solving Agent

Page 2, Module 2
A Problem-Solving Agent is an intelligent system that identifies a goal and
searches for the best way to achieve it.
Example:
Using Google Maps to find the fastest route to a restaurant involves:
•Plan: Find the optimal route
•Goal: Reach the restaurant
•Action: Follow the directions step-by-step
This is exactly how a problem-solving agent behaves in AI.

Steps Involved in a Problem-Solving Agent


Pages 2–6, Module 2
A problem-solving agent follows a structured 4-step process:

1. Goal Formulation
Page 2
•The agent defines what it wants to achieve.
•Helps the agent focus on relevant actions and ignore irrelevant details.
Example:
•You're in Bangalore and want to reach Delhi quickly.
•You set a goal: "Reach Delhi in the shortest time."

2. Problem Formulation
Page 3
•The agent identifies:
• The initial state
• The goal state
• The possible actions
• The rules for transitioning between states
Example:
•Playing a maze game: only consider moves that bring you closer to the exit.

3. Search for Solution


Page 4
•The agent explores different action sequences to find the best path to the goal.
•Often uses search algorithms like:
• Breadth-first search
• Depth-first search
• A* search
Example:
•A robot vacuum cleaner plans the most efficient way to clean a room without repeating
areas.
4. Execution of the Solution
Page 5
•The agent executes the planned actions step-by-step.
•May re-evaluate and adjust actions if needed (especially in dynamic environments).
Example:
•A self-driving car:
• Finds the best route.
• Executes actions like turning or braking.
• Adjusts if traffic conditions change.

Example: Road Trip Planning


Page 6
Imagine planning a trip from Bangalore to Mumbai:

Component – Example
•Initial State: in Bangalore
•Actions: drive, take a flight, take a train, etc.
•Transition: driving → new city (e.g., Pune, then Mumbai)
•Goal Test: "Have I reached Mumbai?"
•Path Cost: total cost of time, fuel, or ticket price
10 ) How do you describe a well-defined problem? What are the components of problem

What is a Well-Defined Problem?


Page 5, Module 2
A well-defined problem is one where:
•The starting point (initial state) is known.
•The goal is clearly defined.
•The possible actions and their outcomes are known.
•A solution can be tested for correctness.
A problem-solving agent uses this structure to plan and act intelligently.

Components of a Well-Defined Problem


Page 6, Module 2
A well-defined problem consists of five main components:

Component – Description
1. Initial State: the agent’s starting condition or position. Example: In(Bangalore)
2. Actions: the set of all possible actions the agent can take. Example: Drive to Pune, Fly to Mumbai
3. Transition Model: describes the result of performing an action. Example: Drive to Pune → In(Pune)
4. Goal Test: determines if the current state is the goal. Example: Is location = Mumbai?
5. Path Cost: a numerical value representing how efficient the solution is (e.g., time, money). Example: ₹ cost, travel time, or fuel consumption
Example: Road Trip from Bangalore to Mumbai
Element – Example
•Initial State: in Bangalore
•Actions: drive, take a flight, take a train
•Transition Model: driving from Bangalore to Pune → In(Pune)
•Goal Test: check if the current city is Mumbai
•Path Cost: fuel cost, travel time, tolls
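The five components can be expressed as a small problem definition. City names come from the example above; the transition table, cost numbers, and function names are an illustrative sketch of my own:

```python
# A well-defined route problem as plain data and functions (illustrative sketch).
INITIAL_STATE = "Bangalore"
GOAL_STATE = "Mumbai"

# Actions available in each state, mapping to (resulting state, made-up cost in hours).
TRANSITIONS = {
    "Bangalore": {"Drive to Pune": ("Pune", 12)},
    "Pune": {"Drive to Mumbai": ("Mumbai", 3)},
}

def actions(state):
    """The set of actions applicable in a state."""
    return list(TRANSITIONS.get(state, {}))

def result(state, action):
    """Transition model: the state that results from performing an action."""
    return TRANSITIONS[state][action][0]

def goal_test(state):
    return state == GOAL_STATE

def path_cost(state, action_list):
    """Sum the step costs along a sequence of actions."""
    total = 0
    for a in action_list:
        next_state, cost = TRANSITIONS[state][a]
        total += cost
        state = next_state
    return total
```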

11 ) Demonstrate the formulation of the problem for 8-puzzle problem.

Formulation of the 8-Puzzle Problem


Page 11, Module 2
The 8-puzzle problem is a classic example of a well-defined problem used in AI to
demonstrate state-space search. It consists of a 3×3 grid with eight numbered tiles and one
blank space.
The goal is to move the tiles around until they are in a specific configuration (goal state).

Problem Formulation Components


Component – Description
1. States: each state shows the position of the 8 tiles and the blank space in the 3×3 grid.
2. Initial State: any random arrangement of the tiles and the blank.
3. Actions: move the blank (empty space) Left, Right, Up, or Down, depending on its position.
4. Transition Model: describes how applying an action (e.g., move Left) changes the state (e.g., swaps the blank with a tile).
5. Goal Test: checks whether the current state matches the goal configuration (e.g., tiles in order from 1 to 8).
6. Path Cost: each move has a cost of 1, so the total path cost is the number of moves taken to reach the goal.
Example (from the module)
Page 11
“If we apply Left to the start state, the resulting state has the 5 and the blank switched.”
This shows how the transition model works — applying an action transforms the state.
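The actions and transition model can be sketched in a few lines. This is an illustrative encoding of my own: a state is a tuple of 9 numbers read row by row, with 0 standing for the blank:

```python
# 8-puzzle formulation sketch: a state is a tuple of 9 numbers (0 = blank)
# read row by row on the 3x3 grid. "Left" slides the blank one cell left,
# i.e., swaps it with the tile to its left.
MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def legal_actions(state):
    i = state.index(0)                 # position of the blank
    acts = []
    if i % 3 != 0: acts.append("Left")     # blank not in left column
    if i % 3 != 2: acts.append("Right")    # blank not in right column
    if i >= 3: acts.append("Up")           # blank not in top row
    if i <= 5: acts.append("Down")         # blank not in bottom row
    return acts

def result(state, action):
    """Transition model: swap the blank with the neighboring tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```

Applying an action returns a new state rather than modifying the old one, which is what a state-space search needs when it keeps many states in a frontier at once.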
12 ) Point out the difference between informed search and uninformed search. Explain the
breadth-first search with an agent function

Difference Between Informed and Uninformed Search


Page 26, Module 2

Feature – Uninformed Search vs. Informed Search
•Definition: searches blindly without any extra knowledge vs. uses additional problem-specific knowledge (heuristics)
•Also Called: Blind Search vs. Heuristic Search
•Uses Heuristic?: no vs. yes
•Example Techniques: BFS, DFS, UCS vs. Greedy Best-First, A* Search
•Efficiency: generally slower and explores more nodes vs. faster and explores fewer nodes due to guidance
•Real-Life Analogy: searching for a book on every shelf without hints vs. using a catalog to find the book’s location directly

Breadth-First Search (BFS)


Page 26–27, Module 2
How BFS Works:
•Explores the shallowest (earliest) nodes first, level by level.
•Uses a FIFO (First-In-First-Out) queue.
•Ensures that the shortest path is found if all costs are equal.

Agent Function for BFS (Pseudocode)

function BFS(problem):
    initialize queue ← [initial_state]      # FIFO queue
    initialize explored ← empty set
    while queue is not empty:
        state ← queue.pop(0)                # FIFO: remove first element
        if GoalTest(state):
            return state                    # solution found
        add state to explored
        for each action in Actions(state):
            child ← Result(state, action)
            if child is not in explored and not in queue:
                queue.append(child)
    return failure
Example Analogy from the Module (Page 27):
“Imagine you are in a mall looking for a store but don’t have a directory. You search floor by
floor:
•First, check all stores on the ground floor.
•Then move to the first floor, and so on…
This is how BFS explores all possibilities one level at a time.”
13 ) Explain the difference between greedy best first search and breadth first search

Difference Between Greedy Best-First Search and Breadth-First Search


Pages 27 (BFS) and 34 (Greedy Best-First Search)

Aspect – Breadth-First Search (BFS) vs. Greedy Best-First Search
•Definition: BFS explores the shallowest nodes first (level-wise); Greedy expands the node that appears closest to the goal based on a heuristic.
•Knowledge Used: BFS is uninformed (no heuristics); Greedy is informed (uses a heuristic function to estimate closeness to the goal).
•Data Structure: BFS uses a FIFO queue (First-In, First-Out); Greedy uses a priority queue ordered by heuristic values.
•Search Strategy: BFS explores all nodes at one level before going deeper; Greedy jumps to nodes that look promising (based on the heuristic).
•Completeness: BFS is complete (guaranteed to find a solution if one exists); Greedy is not guaranteed to find a solution in all cases.
•Optimality: BFS is optimal if path costs are equal; Greedy is not optimal and may find a suboptimal (longer) path.
•Example from Module: BFS is like searching for a store floor-by-floor in a mall; Greedy is like taking a shortcut that looks fastest but may be a dead end.

Examples from Module 2:


•BFS Example (Page 27):
“You are in a mall looking for a store. You search floor-by-floor (level-wise).”
•Greedy Best-First Search Example (Page 34):
“A taxi driver takes a shortcut that looks fastest but ends in a dead end—this is the risk
of being greedy.”

Summary:
•BFS is systematic and safe but can be slow.
•Greedy Best-First is fast but risky, as it can make short-sighted decisions based on the
heuristic alone.
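The contrast shows up directly in code: BFS pops from a FIFO queue, while greedy best-first pops the node with the lowest heuristic value from a priority queue. A minimal sketch, with the toy graph and heuristic values invented for illustration:

```python
import heapq

# Toy graph and heuristic (estimated distance to goal G), invented for illustration.
GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
H = {"S": 3, "A": 1, "B": 2, "G": 0}

def greedy_best_first(start, goal):
    # Priority queue ordered by h(n): the node that *looks* closest is expanded first.
    frontier = [(H[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor in GRAPH[node]:
            heapq.heappush(frontier, (H[neighbor], neighbor, path + [neighbor]))
    return None
```

Swapping the `heapq` priority queue for a plain FIFO list (and ignoring H) would turn this into BFS; the data structure is the whole difference.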
14 ) How is machine learning linked to AI, Data Science and Statistics?

How is Machine Learning Linked to AI, Data Science, and Statistics?

Page 1 of the PDF


According to the textbook:
"Machine learning is a core subset of Artificial Intelligence. It is the practice of
using algorithms to parse data, learn from it, and then make a determination or
prediction."
Here’s how it connects with each field:

Machine Learning and AI


•Machine Learning (ML) is a subset of AI.
•While AI is a broad field focused on creating intelligent systems, ML enables those
systems to learn from data rather than being explicitly programmed.

Machine Learning and Statistics


•ML borrows heavily from statistical methods.
•Many ML algorithms use statistical concepts such as:
• Probability distributions
• Regression
• Hypothesis testing
•ML uses these to make predictions and inferences from data.

Machine Learning and Data Science


•Data Science uses ML as a tool to analyze and extract insights from data.
•ML enables automation in analyzing massive datasets.
•Data Science workflows often include:
• Data collection → Data cleaning → ML modeling → Interpretation

In Summary (From Page 1):


•ML is a subset of AI
•Inspired by statistics
•Empowered by data through Data Science
15) Explain in detail machine learning process model

Machine Learning Process Model


Page 2 of the PDF
The Machine Learning (ML) process involves a systematic workflow that transforms raw
data into actionable predictions or decisions using algorithms. The steps are as follows:

1. Data Collection
•This is the first step in the ML pipeline.
•It involves gathering relevant and high-quality data from different sources like:
• Databases
• Web scraping
• Sensors
• APIs
Why it matters: Poor data = poor predictions

2. Data Preparation / Preprocessing


•The data is cleaned and organized.
•Key activities include:
• Handling missing values
• Removing duplicates
• Normalization or standardization
• Feature selection or extraction
Goal: Make the data usable for ML models.

3. Choosing a Model
•Select an appropriate ML algorithm based on the problem type:
• Classification, Regression, Clustering, etc.
•Examples:
• Decision Trees, SVM, KNN, Neural Networks

4. Training the Model


•The model is trained on historical data (called the training set).
•During this phase, the algorithm learns patterns and relationships between features and
outputs.
This is where the model "learns" from data.

5. Evaluating the Model


•The model is tested using unseen data (test/validation set).
•Performance is measured using metrics such as:
• Accuracy
• Precision
• Recall
• RMSE (for regression)
Ensures the model is generalizing well.
6. Parameter Tuning (Optimization)
•Hyperparameters (like learning rate, depth, k-value) are fine-tuned for better performance.
•Methods include:
• Grid Search
• Random Search
• Cross-Validation

7. Deployment and Prediction


•The trained model is deployed into a production environment.
•It can now make real-time predictions or automate decisions.

Summary from Page 2:


“The ML process consists of steps like data collection, preprocessing, model
selection, training, evaluation, tuning, and deployment .”
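The steps above can be seen end to end in a deliberately tiny example. The "model" is a nearest-mean classifier written from scratch so the sketch stays self-contained; the data, the split, and the model choice are all invented for illustration:

```python
# End-to-end sketch of the ML process with a toy 1-D nearest-mean classifier.

# 1-2. Data collection and preparation: (feature, label) pairs, already clean.
data = [(1.0, "small"), (1.2, "small"), (0.9, "small"),
        (3.1, "large"), (2.9, "large"), (3.3, "large")]
train, test = data[:2] + data[3:5], [data[2], data[5]]   # simple train/test split

# 3-4. Choose and train a model: learn the mean feature value per class.
def train_nearest_mean(samples):
    sums, counts = {}, {}
    for x, y in samples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

model = train_nearest_mean(train)

# 5. Evaluate on unseen data: predict the class whose mean is closest.
def predict(model, x):
    return min(model, key=lambda y: abs(model[y] - x))

accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
# 6-7. In a real pipeline, hyperparameter tuning and deployment would follow.
```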

16 ) Explain supervised, unsupervised, semi-supervised and reinforced learning with an example.

Types of Machine Learning


Page 3 of the PDF
The four major types of Machine Learning are:

Supervised Learning
Page 3
•The model is trained on labeled data (input-output pairs are known).
•The goal is to learn a mapping from inputs to outputs.
Example from textbook:
“Predicting house prices based on features like location, size, and number of rooms.”
Other examples:
•Email spam detection
•Image classification (cat vs. dog)

Unsupervised Learning
Page 3
•The model works on unlabeled data.
•It tries to find hidden patterns or groupings in data.
Example from textbook:
“Grouping customers based on buying behavior for targeted marketing.”
Other examples:
•Customer segmentation
•Market basket analysis
•Clustering news articles
Semi-Supervised Learning
Page 3
•A mix of labeled and unlabeled data is used for training.
•Helps when labeling data is expensive but some labeled data is available.
Example from textbook:
“Medical image diagnosis where only some images are labeled by doctors.”
Other examples:
•Language translation using limited human-labeled phrases
•Classifying web pages with some known categories

Reinforcement Learning
Page 3
•The model learns through trial and error by interacting with an environment.
•It receives rewards or penalties based on its actions.
Example from textbook:
“Teaching a robot to walk by rewarding it for each correct step.”
Other examples:
•Self-driving cars
•Game-playing AIs like AlphaGo
•Stock trading bots

Summary Table (Based on Page 3):

Type – Labeled Data – Goal – Example
•Supervised (labeled data: yes): predict output; example: house price prediction
•Unsupervised (labeled data: no): find structure/patterns; example: customer segmentation
•Semi-Supervised (labeled data: mixed): improve learning with few labels; example: medical image classification
•Reinforcement (labeled data: no, reward signal instead): maximize cumulative reward; example: robot walking or playing a game
17 ) List out and briefly explain the classification algorithms.

Classification Algorithms
Page 4 of the PDF
Classification algorithms are part of supervised learning, where the goal is to
predict the category or class label of given data points.
The textbook lists and briefly explains the following commonly used
classification algorithms:

Logistic Regression
Page 4
•Despite its name, it is a classification algorithm (not regression).
•Uses the logistic (sigmoid) function to map predictions to class probabilities.
•Best for binary classification (e.g., yes/no, spam/not spam).

K-Nearest Neighbors (KNN)


Page 4
•A lazy learning algorithm – no training phase.
•Classifies a new instance by checking the ‘K’ closest data points in
the training set.
•Based on distance metrics like Euclidean distance.
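The KNN idea fits in a few lines. This sketch uses Euclidean distance on 2-D points; the training points, labels, and query values are invented for illustration:

```python
import math
from collections import Counter

# K-nearest-neighbors classification sketch (lazy learning: no training phase).
def knn_classify(train, query, k=3):
    """train: list of ((x, y), label) pairs.
    Returns the majority label among the k points closest to query."""
    by_distance = sorted(train, key=lambda pair: math.dist(pair[0], query))
    top_k_labels = [label for _, label in by_distance[:k]]
    return Counter(top_k_labels).most_common(1)[0][0]

# Invented example data: two clusters of labeled points.
points = [((0, 0), "red"), ((0, 1), "red"), ((1, 0), "red"),
          ((5, 5), "blue"), ((5, 6), "blue"), ((6, 5), "blue")]
```

A query near the origin is outvoted by the three nearby "red" points, while one near (5, 5) falls to "blue"; the whole computation happens at query time, which is why KNN is called lazy.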

Support Vector Machine (SVM)


Page 4
•Tries to find the best boundary (hyperplane) that separates the classes.
•Effective in high-dimensional spaces.
•Can be used for linear and non-linear classification using kernels.

Naive Bayes
Page 4
•Based on Bayes’ Theorem with the assumption of feature independence.
•Simple, fast, and works well with text classification (e.g., spam filtering).

Decision Tree
Page 4
•A tree-like model where decisions are made at each node.
•Easy to interpret and visualize.
•Can be prone to overfitting if not pruned properly.

Random Forest
Page 4
•An ensemble method built using multiple decision trees.
•Each tree gives a prediction, and the majority vote decides the final class.
•Reduces overfitting and improves accuracy.
Summary Table (from Page 4):

Algorithm – Key Idea – Common Use Case
•Logistic Regression: predicts probability using the sigmoid function; common use: binary classification
•KNN: classifies based on nearest neighbors; common use: pattern recognition
•SVM: finds the optimal separating hyperplane; common use: image classification
•Naive Bayes: uses probability and feature independence; common use: text categorization
•Decision Tree: uses a tree structure of decisions; common use: rule-based classification
•Random Forest: combines many trees for robust prediction; common use: general-purpose classification
