AI & ML Unit 1 Notes

Uploaded by Anandakumar A

Unit-I

Introduction to AI:
Artificial Intelligence:
• Artificial Intelligence is a broad branch of computer science.
• It is concerned with building smart machines capable of performing tasks that
typically require human intelligence.
• It uses complex algorithms and methods to build machines that can make
decisions on their own.

Types of AI based on capabilities:

• Narrow AI: Also referred to as weak AI, it focuses on one specific automated task.
• General AI: It has the ability to understand, learn, and apply knowledge across tasks.
• Super AI: It is a hypothetical software system with an intellectual scope beyond
human intelligence.

Types of AI based on functionalities:

• Purely Reactive: These machines have no memory or stored data to work with,
specializing in just one field of work.
• Limited Memory: These machines collect previous data and continue adding it
to their memory.
• Theory of Mind: This kind of AI can understand thoughts and emotions, as well
as interact socially.

Four possible goals to pursue in AI:

• Systems that think like humans.

• Systems that think rationally.

• Systems that act like humans.

• Systems that act rationally.


History of AI:

1943: The first work that is now generally recognized as AI was done, by Warren
McCulloch and Walter Pitts, who proposed a model of artificial neurons.
1949: Donald Hebb demonstrated a simple updating rule for modifying the
connection strengths between neurons.
1958: John McCarthy defined the high-level language Lisp.
1959: At IBM, Nathaniel Rochester and his colleagues produced some of the first AI
programs.
1980s: Neural networks trained with the backpropagation algorithm became
widely used in AI applications.
2011: IBM's Watson beats Jeopardy! champions Ken Jennings and Brad Rutter.
2016: The first “robot citizen,” a humanoid robot named Sophia, is created.
2018: Google releases BERT, a natural language processing engine.
Advantages of AI:
• Reduced human error
• Risk avoidance
• Replacing repetitive jobs
• Digital assistance
Disadvantages of AI:
• High cost of creation
• No emotions
• Can’t think for itself

AI Applications:
AI Applications in E-Commerce:
• Artificial Intelligence technology is used to create recommendation engines.
• These recommendations are made in accordance with their browsing history,
preference, and interests.
• It helps in improving the relationship with the customers and the loyalty towards
the brand.
AI Applications in Education:
• Artificial Intelligence can help educators with non-educational tasks.
• Digitization of content like video lectures, conferences, and textbook guides
can be done using Artificial Intelligence.
• Artificial Intelligence helps create a rich learning experience.
AI Applications in Lifestyle:
• Automobile manufacturing companies like Toyota, Audi, Volvo, and Tesla use
machine learning to train computers to think and evolve like humans.
AI Applications in Robotics:
• Robotics is another field where artificial intelligence applications are commonly
used.
• Robots powered by AI.
AI Applications in Healthcare:
• AI uses the combination of historical data and medical intelligence to support
diagnosis and treatment decisions.
AI Applications in Agriculture:
• Artificial Intelligence is used to identify defects in soil.
• This is done using computer vision, robotics, and machine learning applications.
AI Applications in Gaming:
• AI can be used to create smart, human-like NPCs to interact with the players.
AI Application in Automobiles:
• Artificial Intelligence is used to build self-driving vehicles.
AI Applications in Social Media:
• Instagram
• Facebook
• Twitter
AI Applications in Chatbots:
• As AI continues to improve, chatbots can effectively resolve customer issues.
Problem Solving Agents:
Agent:
• An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
• Example: A human agent has eyes, ears, and other organs for sensors and hands,
legs, vocal tract, and so on for actuators.
• Agent Function: The agent function for an agent specifies the action taken by
the agent in response to any percept sequence.
• Task Environment: A task environment specification includes the performance
measure, the external environment, the actuators, and the sensors.
Steps in designing an agent:
• First step must always be to specify the task environment as fully as possible.
• The agent program implements the agent function.
• There exists a variety of basic agent program designs reflecting the kind of
information made explicit and used in the decision process.
• The designs vary in efficiency, compactness, and flexibility.
• The appropriate design of the agent program depends on the nature of the
environment.
• All agents can improve their performance through learning.
Types of Agent:
Simple reflex agents: They respond directly to percepts.

Model-based reflex agents: They maintain an internal state to track aspects of the
world that are not evident in the current percept.

Goal-based agents: They act to achieve their goals.

Utility-based agents: They try to maximize their own expected “happiness” (utility).
Problem-Solving Agent:
Definition:
• Problem-solving agent is a goal-based agent.
• It focuses on goals, using a group of algorithms and techniques to solve a well-
defined problem.
Steps Performed by Problem-solving agent:
Goal Formulation:
• It is the first and simplest step in problem-solving.
• It organizes the steps/sequence required to formulate one goal.
Problem Formulation:
1. Initial state: It is the starting state or initial step of the agent towards its goal.
2. Actions: It is the description of the possible actions available to the agent.
3. Transition Model: It describes what each action does.
4. Goal Test: It determines if the given state is a goal state.
5. Path cost: It assigns a numeric cost to each path; the agent prefers the
lowest-cost path that reaches the goal.
Types of problem Approaches:
• Toy Problem
• Real-world Problem
Toy Problem: 8 Puzzle problem:
• Consider a 3x3 matrix with movable tiles numbered from 1 to 8 and a blank
space.
• The tile adjacent to the blank space can slide into that space.
• The task is to convert the current state into the specified goal state by sliding
tiles into the blank space.

States: A state describes the location of each of the eight tiles and the blank.
Initial State: Any state can be designated as the initial state.
Actions: Actions are defined as movements of the blank space: left, right, up or down.
Goal Test: It checks whether the state matches the goal configuration.
Path Cost: The path cost is the number of steps in the path, where the cost of each step
is 1.
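The formulation above can be sketched in Python. The flat-tuple layout and the move encoding (offsets into the tuple) are illustrative assumptions, not a fixed convention:

```python
# A minimal sketch of the 8-puzzle formulation: states are 9-tuples
# read row by row, and 0 marks the blank tile.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def actions(state):
    """Legal moves of the blank: left, right, up, down within the 3x3 grid,
    encoded as index offsets into the flat tuple."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append(-1)   # slide blank left
    if col < 2: moves.append(+1)   # slide blank right
    if row > 0: moves.append(-3)   # slide blank up
    if row < 2: moves.append(+3)   # slide blank down
    return moves

def result(state, move):
    """Transition model: swap the blank with the adjacent tile."""
    i = state.index(0)
    j = i + move
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL
```

With this encoding the path cost is simply the number of `result` applications, each of cost 1, as stated above.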
Real World Problem: Route-Finding Problem:
• Consider the airline travel problems that must be solved by a travel-planning
Web site:
States: Each state obviously includes a location (e.g., an airport) and the current time.
Initial State: The user’s home airport.
Actions: Take any flight from the current location, in any seat class, leaving after the
current time.
Goal Test: Whether the agent has reached the user's destination city.
Search Algorithms:
• A search algorithm takes a search problem as input and returns a solution.
Types of search algorithm:

Uninformed Search Strategies:


• Uninformed search is also called blind search.

• These algorithms can only generate the successors and differentiate between
goal states and non-goal states.

• Types of uninformed search:

• Depth First Search
• Depth Limited Search
• Breadth First Search
• Iterative Deepening Search
• Uniform Cost Search
• Bidirectional Search
i) Depth First Search:
• Depth-first search (DFS) is an algorithm for traversing or searching tree or graph
data structures.
• The algorithm starts at the root node and explores as far as possible along each
branch before backtracking.
• It uses a last-in, first-out (LIFO) strategy.
• Implemented using a stack.
Performance measures (b = branching factor, m = maximum depth):
Completeness: Not complete; it does not guarantee reaching the goal state.
Optimality: Not optimal.
Time complexity: O(b^m)
Space complexity: O(bm)

Advantages of DFS:
• Uses little memory.
• Can reach a deep goal state quickly.
Disadvantages of DFS:
• No guarantee of finding a solution.
• It may go deep down unpromising paths.
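The LIFO behaviour can be sketched with an explicit stack; the adjacency-list graph encoding (a dict of neighbor lists) is an assumption for illustration:

```python
def dfs(graph, start, goal):
    """Depth-first search on an adjacency-list graph using an explicit
    stack (LIFO). Returns a path to the goal, or None."""
    stack = [(start, [start])]
    visited = set()
    while stack:
        node, path = stack.pop()            # last in, first out
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push in reverse so neighbors are explored in listed order
        for nbr in reversed(graph.get(node, [])):
            if nbr not in visited:
                stack.append((nbr, path + [nbr]))
    return None
```

Because the stack always pops the most recently pushed node, the search commits to one branch fully before backtracking.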
ii)Depth Limited Search:
• A depth-limited search algorithm is similar to depth-first search with a
predetermined limit.
• Depth-limited search can solve the drawback of the infinite path in the Depth-
first search.
• In this algorithm, a node at the depth limit is treated as if it has no successor
nodes.
Steps:
• Set a variable NODE to the initial state, i.e., the root node.
• Set a variable GOAL which contains the value of the goal state.
• Set a variable LIMIT which carries a depth-limit value.
• Loop each node by traversing in DFS manner till the depth-limit value.
• While performing the looping, start removing the elements from the stack in
LIFO order.
• If the goal state is found, return goal state. Else terminate the search.
Performance measure (l = depth limit):
Completeness: Not complete; it does not guarantee reaching the goal node.
Optimality: Not optimal.
Time complexity: O(b^l)
Space complexity: O(bl)

Advantages:
• Memory Efficient
Disadvantages:
• Incomplete if the goal lies beyond the depth limit.
• It may not be optimal if the problem has more than one solution.
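The cutoff behaviour can be sketched recursively; treating nodes at the limit as having no successors is exactly the `limit == 0` check below (the dict-based graph encoding is an illustrative assumption):

```python
def dls(graph, node, goal, limit, path=None):
    """Depth-limited search: DFS that treats nodes at depth `limit`
    as if they have no successors. Returns a path or None."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:                      # depth limit reached: cut off
        return None
    for nbr in graph.get(node, []):
        if nbr in path:                 # avoid cycles along this path
            continue
        found = dls(graph, nbr, goal, limit - 1, path + [nbr])
        if found is not None:
            return found
    return None
```

A goal two edges away is missed with `limit=1` but found with `limit=2`, which illustrates the incompleteness noted above.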
iii)Breadth First Search:

• Breadth-first search (BFS) is an algorithm for traversing or searching tree or
graph data structures.
• It starts at the tree root and explores all of the neighbor nodes at the present
depth before moving on to the next level.
• It uses a first-in, first-out (FIFO) queue.
Performance measure (d = depth of the shallowest goal):
Completeness: Complete; it finds the goal state if one exists.
Optimality: Optimal when all step costs are equal.
Time complexity: O(b^d)
Space complexity: O(b^d)

Advantages:
• Simple solution.
• Easy to understand.
Disadvantages:
• Requires a large amount of memory.
• Can take a long time when the goal is deep.
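The level-by-level expansion can be sketched with a FIFO queue (again assuming a dict-of-neighbor-lists graph):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level using a
    FIFO queue. Returns a shallowest path to the goal, or None."""
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        node, path = queue.popleft()        # first in, first out
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                queue.append((nbr, path + [nbr]))
    return None
```

Swapping the deque for a stack would turn this into DFS; the queue discipline is the only structural difference.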
iv)Iterative Deepening Search:
• This search is a combination of BFS and DFS: BFS guarantees reaching the
goal node, and DFS occupies less memory space.
• Iterative deepening search combines these two advantages of BFS and DFS
to reach the goal node.
• It gradually increases the depth limit from 0, 1, 2, and so on until the goal
node is reached.
Algorithm:
• Explore the nodes in DFS order.
• Set a LIMIT variable with a limit value.
• Loop each node up to the limit value and further increase the limit value
accordingly.
• Terminate the search when the goal state is found.
Performance Measure (d = depth of the shallowest goal):
Completeness: Complete, like BFS.
Optimality: Optimal when all step costs are equal.
Time complexity: O(b^d)
Space complexity: O(bd)

• Example: the goal node is H and the initial depth limit is 1.

• The search will expand levels 0 and 1 and terminate with the A->B->C sequence.
• Increasing the depth limit to 3, it will again expand the nodes from level
0 to level 3, and the search terminates with the A->B->D->F->E->H sequence, where
H is the desired goal node.
Disadvantages:
• The upper levels of the tree are generated multiple times across iterations.
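The repeated depth-limited runs can be sketched as follows; the inner `dls` helper and the `max_depth` cap are illustrative assumptions:

```python
def ids(graph, start, goal, max_depth=20):
    """Iterative deepening: run depth-limited DFS with limits
    0, 1, 2, ... until the goal is found."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:                  # cutoff at the current limit
            return None
        for nbr in graph.get(node, []):
            if nbr not in path:
                found = dls(nbr, limit - 1, path + [nbr])
                if found is not None:
                    return found
        return None

    for limit in range(max_depth + 1):  # gradually increase the limit
        path = dls(start, limit, [start])
        if path is not None:
            return path
    return None
```

Each iteration regenerates the upper levels of the tree, which is exactly the disadvantage noted above, but the total work stays O(b^d) because the deepest level dominates.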
V)Uniform Cost Search:
• Uniform-cost search is a searching algorithm for traversing a weighted tree or
graph.
• This algorithm is used when a separate cost is provided for each edge.
• The cost of a node is defined as: cost(node) = cumulative cost of the edges on
the path from the root to the node.
Performance measures (C* = cost of the optimal solution, ε = smallest step cost):
Completeness: Complete.
Optimality: Optimal; it finds the lowest-cost path.
Time complexity: O(b^(1 + ⌊C*/ε⌋))
Space complexity: O(b^(1 + ⌊C*/ε⌋))

Advantages:
• Finds the lowest-cost solution.
• Complete and gives an optimal solution.
Disadvantages:
• It is concerned only with the cost of a path, so it may explore many long,
cheap paths before reaching the goal.
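Expanding by lowest cumulative cost can be sketched with a priority queue; the weighted-graph encoding `{node: [(neighbor, cost), ...]}` is an assumption for illustration:

```python
import heapq

def ucs(graph, start, goal):
    """Uniform-cost search on a weighted graph {node: [(nbr, cost), ...]}.
    Always expands the frontier node with the lowest cumulative path cost."""
    frontier = [(0, start, [start])]        # (cost so far, node, path)
    best = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nbr, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(nbr, float('inf')):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, nbr, path + [nbr]))
    return None
```

In the test below the direct edge A->C costs 4, but UCS correctly returns the cheaper two-step path of cost 2.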
vi)Bidirectional Search:
• The strategy behind the bidirectional search is to run two searches
simultaneously: one forward search from the initial state and another backward
from the goal state.
Performance Measures:
Completeness: Complete.
Optimality: Optimal.
Time & Space complexity: O(b^(d/2))
Disadvantages: It requires a lot of memory space.
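A minimal sketch, assuming an undirected graph, grows one BFS frontier from each end until they meet; the path-stitching details are illustrative:

```python
from collections import deque

def bidirectional_search(graph, start, goal):
    """Grow two breadth-first frontiers, one from the start and one
    from the goal, until they meet. Assumes an undirected graph."""
    if start == goal:
        return [start]
    front, back = {start: [start]}, {goal: [goal]}   # node -> path so far
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        for q, this, other in ((qf, front, back), (qb, back, front)):
            node = q.popleft()
            for nbr in graph.get(node, []):
                if nbr in other:                 # the two frontiers meet
                    path = this[node] + [nbr] + other[nbr][-2::-1]
                    return path if path[0] == start else path[::-1]
                if nbr not in this:
                    this[nbr] = this[node] + [nbr]
                    q.append(nbr)
    return None
```

Each frontier only needs to reach depth d/2, which is the source of the O(b^(d/2)) bound, but both frontiers must be kept in memory.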

Informed Search Strategies:


• Here, the algorithms have information on the goal state, which helps in more
efficient searching.
• This information is obtained by something called a heuristic.
• Types of informed search strategies are:
• Greedy Search
• A* Tree Search
• A* Graph Search
i)Greedy Search:
• In greedy search, we expand the node closest to the goal node.
• The “closeness” is estimated by a heuristic h(x).
• Heuristic: A heuristic h is defined as h(x) = estimate of the distance of node x from
the goal node. The lower the value of h(x), the closer the node is to the goal.
Performance measures:
Completeness: Incomplete.
Optimality: Not optimal.
Space & Time complexity: O(b^m)

Advantages:
• Works well with informed search problems.
• Fewer steps to reach the goal.
Disadvantages:
• Can turn into unguided DFS in the worst case.
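Ordering the frontier purely by h(x) can be sketched with a priority queue; passing the heuristic as a dict of precomputed estimates is an assumption for illustration:

```python
import heapq

def greedy_search(graph, h, start, goal):
    """Greedy best-first search: always expand the frontier node with
    the smallest heuristic value h(x). `h` maps node -> estimate."""
    frontier = [(h[start], start, [start])]
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)   # lowest h first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nbr in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None
```

Note that only h(x) guides the choice; the cost already paid is ignored, which is why the result need not be optimal.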

ii)A* Tree Search:


• A* Tree Search, or simply A* Search, combines the strengths of
uniform-cost search and greedy search.
• A* orders nodes by the evaluation function f(x) = g(x) + h(x).
• h(x) is called the forward cost and is an estimate of the distance of the current
node from the goal node.
• g(x) is called the backward cost and is the cumulative cost of a node from the
root node.
Performance measure:
Completeness: Complete.
Optimality: Gives an optimal solution when the heuristic is admissible.
Space & Time complexity: O(b^d)
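The f(x) = g(x) + h(x) ordering can be sketched by extending the uniform-cost scheme with the heuristic term (the dict-based graph and heuristic encodings are assumptions):

```python
import heapq

def astar(graph, h, start, goal):
    """A* search: order the frontier by f(x) = g(x) + h(x), where g is
    the backward (path) cost and h the forward (heuristic) estimate.
    `graph` maps node -> [(nbr, cost), ...]; `h` maps node -> estimate."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, step in graph.get(node, []):
            g2 = g + step
            if g2 < best_g.get(nbr, float('inf')):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None
```

With g(x) alone this is uniform-cost search, and with h(x) alone it is greedy search; the sum combines both, as the notes describe.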

iii)A* Graph Search:


• A* tree search may expand the same node many times; A* Graph Search
removes this limitation by adding the rule: do not expand the same node more
than once.
• Heuristic: Graph search is optimal only when the heuristic is consistent, i.e.,
the forward-cost difference between two successive nodes A and B, given by
h(A) - h(B), is less than or equal to the backward (actual) cost of the edge
between them.
Local Search Algorithms:
• Local search algorithms operate by searching from a start state to neighboring
states, without keeping track of the paths or the set of states that have been
reached.
• In local search algorithms the path cost does not matter; the focus is only on
the solution state needed to reach the goal.
• Local search algorithms can also solve optimization problems, in which the
aim is to find the best state according to an objective function.
• Optimization Problems - An optimization problem is one where all the nodes
can give a solution.
• Objective Function - An objective function is a function whose value is either
minimized or maximized in different contexts of the optimization problems.
Advantages:
• Use little memory
• Can find reasonable solutions in large or infinite state spaces
Working of Local Search Algorithm:
• Location: It is defined by the state.
• Elevation: It is defined by the value of the objective function.
• The local search algorithm explores the state-space landscape by finding the
following two points: the global minimum and the global maximum.
• Global Minimum: If the elevation corresponds to cost, then the task is to find
the lowest valley.
• Global Maximum: If the elevation corresponds to an objective function, then it
finds the highest peak.
• Types of Local Search strategies are:
• Hill-climbing Search
• Simulated Annealing
• Local Beam Search
i)Hill-climbing Search:
• It is a local search algorithm.
• It keeps track of one current state and on each iteration moves to the neighboring
state with highest value.
• The purpose of this algorithm is to climb the hill and reach the topmost peak.
• It is based on the heuristic search technique where the person who is climbing
up on the hill estimates the direction which will lead him to the highest peak.

• The topological regions can be defined as:


• Global Maximum: It is the highest point on the hill, which is the goal state.
• Local Maximum: It is a peak higher than its neighboring states but lower than
the global maximum.
• Flat Local Maximum: It is a flat area on the hill with no uphill or downhill
moves; it is a saturated point of the hill.
• Shoulder: It is also a flat area, from which an uphill path to the summit is
still possible.
• Current State: It is the current position of the person.
Limitations of Hill-climbing algorithm:
Local Maxima: A peak that is higher than all its neighboring states but lower than
the global maximum; the algorithm can get stuck there.

Plateau: A flat surface area where no uphill move exists. It becomes difficult for
the climber to decide in which direction to move to reach the goal point.

Ridges: A challenging region where the climber commonly finds two or more local
maxima of the same height, making progress by single moves difficult.
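The move-to-the-best-neighbor loop can be sketched generically; the particular objective and neighbor functions in the test are illustrative assumptions:

```python
def hill_climbing(state, neighbors, value):
    """Steepest-ascent hill climbing: repeatedly move to the neighbor
    with the highest value; stop at a peak, which may be only a local
    maximum."""
    while True:
        best = max(neighbors(state), key=value, default=state)
        if value(best) <= value(state):
            return state            # no uphill neighbor: a (local) peak
        state = best
```

On a landscape with a single peak this reaches the global maximum; on the landscapes described above it can halt at a local maximum or stall on a plateau.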

ii)Simulated Annealing:
• Simulated annealing is similar to the hill climbing algorithm.
• It works on the current state, but picks a random move instead of the best move.
• If the move improves the situation it is always accepted; otherwise it is
accepted with a probability that decreases over time (the "temperature").
• This search technique was first used in the early 1980s to solve VLSI layout
problems.
• It is also applied to factory scheduling and other large optimization tasks.
Advantages:
• Easy implementation
• Optimization
Disadvantages:
• Can take a long time to find the optimal solution.
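A minimal sketch of the accept-a-random-move idea; the geometric cooling schedule, starting temperature, and step count are illustrative assumptions:

```python
import math
import random

def simulated_annealing(state, neighbor, value, t0=10.0, cooling=0.95,
                        steps=1000):
    """Simulated annealing sketch: pick a *random* neighbor; always
    accept an improvement, and accept a worse move with probability
    exp(delta / T), where the temperature T cools over time."""
    t = t0
    for _ in range(steps):
        nxt = neighbor(state)
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt
        t = max(t * cooling, 1e-12)     # geometric cooling schedule
    return state
```

Early on, when T is high, downhill moves are often accepted, which lets the search escape local maxima; as T shrinks the behaviour approaches plain hill climbing.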
iii)Local Beam Search:
• The local beam search algorithm keeps track of k states rather than just one.
• It begins with k randomly generated states.
• At each step it expands all k states.
• If any state is a goal state, the search stops with success.
• Else it selects the best k successors from the complete list and repeats the same
process.
• In local beam search, useful information is shared between the k parallel
search threads.
Disadvantages:
• Lack of diversity
• Expensive
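The keep-the-k-best loop can be sketched as follows; running for a fixed number of iterations instead of an explicit goal test is a simplifying assumption:

```python
import heapq

def local_beam_search(starts, neighbors, value, k=2, iters=25):
    """Local beam search sketch: keep the k best states; at each step
    expand all of them and keep the k best successors overall."""
    states = list(starts)
    for _ in range(iters):
        pool = set(states)
        for st in states:
            pool.update(neighbors(st))          # expand every kept state
        states = heapq.nlargest(k, pool, key=value)
    return max(states, key=value)
```

Because the k best successors are chosen from the pooled list, a promising region quickly attracts all k states, which is also the "lack of diversity" weakness noted above.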

Adversarial Search:
• Adversarial search is a game-playing technique in which the agents operate in
a competitive environment.
• The agents' conflicting goals give rise to the adversarial search.
Elements of Game Playing Search:
• S0: It is the initial state from where a game begins.
• PLAYER (s): It defines which player is having the current turn to make a move
in the state.
• ACTIONS (s): It defines the set of legal moves to be used in a state.
• RESULT (s, a): It is a transition model which defines the result of a move.
• TERMINAL-TEST (s): It returns true if the game has ended, false otherwise.
• UTILITY (s, p): It defines the final numeric value for player p when the game ends.
• The price which the winner will get i.e.
o (-1): If the PLAYER loses.
o (+1): If the PLAYER wins.
o (0): If there is a draw between the PLAYERS.
Example:
• The game tic-tac-toe has three possible outcomes:
• to win, to lose, or to draw the match, with values +1, -1, or 0.

Game Tree For Tic Tac Toe:


• INITIAL STATE (S0): The top node in the game-tree represents the initial state.
• PLAYER (s): There are two players, MAX and MIN.
• ACTIONS (s): Both the players can make moves in the empty boxes chance by
chance.
• RESULT (s, a): The moves made by MIN and MAX will decide the outcome of
the game.
• TERMINAL-TEST(s): When all the empty boxes will be filled, it will be the
terminating state of the game.
• UTILITY: At the end, we will get to know who wins: MAX or MIN.
Types of Algorithms in Adversarial Search:
• Minimax Algorithm
• Alpha-Beta Pruning
i)Minimax Algorithm:
• In artificial intelligence, minimax is a decision-making strategy under game
theory.
• MINIMAX is a backtracking algorithm: it explores the game tree and backtracks
to pick the best move out of several choices.
• The MINIMAX strategy follows the DFS (depth-first search) concept.
• MIN: Tries to decrease MAX's chances of winning the game.
• MAX: Tries to increase his own chances of winning the game.
Steps:
Step 1: Let A be the initial state of the tree.

Step 2: Now, first we find the utility value for the Maximizer; its initial value is
-∞.
Step 3: In the next step, it's a turn for minimizer, so it will compare all nodes value
with +∞, and will find the 3rd layer node values. For node B= min(4,6) = 4 and For
node C= min (-3, 7) = -3

Step 4: Now it's a turn for Maximizer. For node A max(4, -3)= 4

Drawbacks:
• It explores each node in the game tree.
• This increases the time complexity.
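The worked example above (B = min(4, 6) = 4, C = min(-3, 7) = -3, A = max(4, -3) = 4) can be reproduced with a short recursive sketch; the dict encodings of the tree and leaf utilities are assumptions for illustration:

```python
def minimax(node, maximizing, tree, utility):
    """Minimax over a game tree given as {node: [children]}; leaf nodes
    take their value from `utility`. MAX and MIN alternate turns."""
    children = tree.get(node, [])
    if not children:                    # terminal state: return its utility
        return utility[node]
    values = [minimax(c, not maximizing, tree, utility) for c in children]
    return max(values) if maximizing else min(values)
```

The recursion descends in DFS order and backtracks the values upward, exactly as the steps above describe.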
ii)Alpha-Beta Pruning:
• Alpha-beta pruning is an advanced version of the MINIMAX algorithm.
• Alpha-beta pruning reduces the time-complexity drawback of the minimax strategy.
• The method used in alpha-beta pruning is to cut off parts of the search tree.
• Alpha-beta pruning works on two threshold values, α (alpha) and β (beta).
• α (alpha): It is the best (highest) value that the MAX player can guarantee. It is
the lower bound.
• β (beta): It is the best (lowest) value that the MIN player can guarantee. It is the
upper bound.
• Pruning occurs when the condition α >= β holds.
Working Steps:
Step 1: At the first step, the Max player will make the first move from node A, where
α = -∞ and β = +∞.

Step 2: At Node D, the value of α will be calculated as its turn for Max. The value of
α is compared with firstly 2 and then 3, and the max (2, 3) =3.
Step 3: Now algorithm backtrack to node B, where the value of β will change as this
is a turn of Min. min (∞, 3) = 3.

Step 4: At node E, Max will take its turn, and the value of alpha will change. The
current value of alpha will be compared with 5, so max (-∞, 5) = 5.
Step 5: At next step, algorithm again backtrack the tree, from node B to node A.
Step 6: At node F, again the value of α will be compared with left child which is 0, and
max(3,0)= 3.

Step 7: The remaining branches under node C are pruned, since α >= β there.
Step 8: The optimal value for the maximizer is 3.
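The pruning rule can be sketched as a minimax variant that stops scanning children once α >= β. The tree below mirrors the worked example (D's leaves 2 and 3, E's first leaf 5, F's first leaf 0); the remaining leaf values are filled in as assumptions:

```python
def alphabeta(node, maximizing, tree, utility,
              alpha=float('-inf'), beta=float('inf')):
    """Alpha-beta pruning sketch: alpha is the best value MAX can
    guarantee so far (lower bound), beta the best MIN can guarantee
    (upper bound); stop exploring a branch once alpha >= beta."""
    children = tree.get(node, [])
    if not children:
        return utility[node]
    if maximizing:
        value = float('-inf')
        for c in children:
            value = max(value, alphabeta(c, False, tree, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:           # cutoff: MIN will avoid this branch
                break
        return value
    else:
        value = float('inf')
        for c in children:
            value = min(value, alphabeta(c, True, tree, utility, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:           # cutoff: MAX will avoid this branch
                break
        return value
```

The result equals plain minimax; pruning only skips branches that cannot change the answer.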


Constraint Satisfaction Problem (CSP):
• A constraint satisfaction problem (CSP) is a problem that requires its solution
within some limitations or conditions also known as constraints.
• Examples: n-Queen, Map colouring, Crossword.
• Complete Assignment: An assignment where every variable is assigned with a
value, and the solution to the CSP remains consistent. Such assignment is known
as Complete assignment.
• Partial Assignment: An assignment which assigns values to only some of the
variables. Such assignments are called partial assignments.
Types of Domains in CSP:
• Discrete Domain: An infinite discrete domain, where a variable can take one of
infinitely many discrete values (e.g., any integer).
• Finite Domain: A finite domain, where a variable can take one of a finite set of
values (e.g., a fixed set of colors).
Constraint Types in CSP:
• Unary constraint: It is the simplest type; it restricts the value of a single
variable.
• Binary constraint: It relates two variables.
• Global constraint: It involves an arbitrary number of variables.
Map Coloring:
• Consider the map of Australia, which consists of states and territories.
• The task is to color each region red, green, or blue.
• No two neighboring regions may have the same color.
• The domain of each variable is Di = {red, green, blue}.
• The variables are the regions: X = {WA, NT, Q, NSW, V, SA, T}

• The constraints require neighboring regions to have distinct colors.


• Since there are nine places where regions border, there are nine constraints.

• If coloring starts with the state SA, any of the 3 colors can be chosen,
giving 3 possibilities.
• Moving to the next region, say WA, only 2 colors remain, giving 2 possibilities.
• Once SA and WA are fixed, the colors of the remaining mainland regions are
forced by the constraints.
• So the number of mainland colorings is 3 x 2 = 6.
• Tasmania (T) has no neighbors, so it can take any of the 3 colors, and the total
number of solutions is 6 x 3 = 18.
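The Australia instance can be solved with a minimal backtracking sketch (fixed variable order, no heuristics such as minimum-remaining-values; those refinements are omitted for brevity):

```python
def solve_csp(variables, domains, neighbors, assignment=None):
    """Backtracking search for a map-coloring CSP: try each color for the
    next unassigned variable, keeping only colors its already-assigned
    neighbors do not use."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                       # complete assignment
    var = next(v for v in variables if v not in assignment)
    for color in domains[var]:
        if all(assignment.get(n) != color for n in neighbors[var]):
            result = solve_csp(variables, domains, neighbors,
                               {**assignment, var: color})
            if result is not None:
                return result
    return None                                 # dead end: backtrack

# The Australia map-coloring instance from the text
variables = ['WA', 'NT', 'SA', 'Q', 'NSW', 'V', 'T']
domains = {v: ['red', 'green', 'blue'] for v in variables}
neighbors = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'], 'Q': ['NT', 'SA', 'NSW'],
    'NSW': ['Q', 'SA', 'V'], 'V': ['SA', 'NSW'], 'T': [],
}
```

Every returned assignment is complete and consistent: each of the nine border constraints holds, and T is free to take any color.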
Comparison of Data Science, Artificial Intelligence and Machine Learning:
