
ARTIFICIAL INTELLIGENCE
Unit 1
CONTENT
 Introduction to AI: What is AI?
 Intelligent Agents and environment;
 Rationality; the nature of environment; the structure of agents.
 Problem solving: Problem-solving agents; Example problems; Searching for solutions; Uninformed search strategies; Informed search; Exploration.
 Constraint Satisfaction
 Adversarial Search: Informed search strategies; Heuristic functions; Online search agents and unknown environments.
 Constraint satisfaction problems; Backtracking search for CSPs.
 Adversarial search: Games; Optimal decisions in games; Alpha-Beta pruning.
INTRODUCTION TO AI
It is a branch of computer science by which intelligent machines are created that can behave like humans, think like humans, and make decisions.
OR
According to the father of Artificial Intelligence, John McCarthy, it is “The
science and engineering of making intelligent machines, especially intelligent
computer programs”.
Artificial Intelligence is composed of two words, Artificial and Intelligence, where Artificial means "man-made" and Intelligence means "thinking power"; hence AI means "a man-made thinking power."
Components of Intelligence, or what is Intelligence composed of?
1) Learning
2) Reasoning
3) Linguistic Intelligence
4) Perception
5) Problem solving
Learning:
 It is the activity of gaining knowledge or skill by studying, practicing, being taught, or experiencing something. Learning enhances awareness of the subjects of study.
 The ability to learn is possessed by humans, some animals, and AI-enabled systems.

Reasoning: It is the set of processes that enables us to provide a basis for judgement, decision-making, and prediction.
Problem Solving:
 It is the process in which one perceives and tries to arrive at a desired solution from a present situation by taking some path, which is blocked by known or unknown hurdles.
 Problem solving also includes decision making, which is the process of selecting the most suitable alternative from the multiple alternatives available to reach the desired goal.
Perception:
It is the process of acquiring, interpreting, selecting, and organizing sensory information. Perception presumes sensing. In humans, perception is aided by sensory organs. In the domain of AI, the perception mechanism puts the data acquired by the sensors together in a meaningful manner.
Linguistic Intelligence : It is one’s ability to use, comprehend, speak, and
write the verbal and written language. It is important in interpersonal
communication.
Spatial Learning: It is learning through visual stimuli such as images, colors, maps, etc. For example, a person can create a roadmap in the mind before actually following the road.
Stimulus-Response Learning: It is learning to perform a particular behavior when a certain stimulus is present. For example, a dog raises its ears on hearing the doorbell.
AI DISCIPLINES
TYPES OF AI
Artificial Intelligence can be divided into various types. There are mainly two categorizations: one based on capabilities and one based on functionality.
AI TYPE 1: BASED ON CAPABILITY
Weak AI or Narrow AI:
 Narrow AI is a type of AI which is able to perform a dedicated task with intelligence. The most common and currently available AI in the world of Artificial Intelligence is Narrow AI.
 Narrow AI cannot perform beyond its field or limitations, as it is only trained for one specific task. Hence it is also termed weak AI. Narrow AI can fail in unpredictable ways if it goes beyond its limits.
 Apple Siri is a good example of Narrow AI; it operates with a limited, pre-defined range of functions.
 Some examples of Narrow AI are playing chess, purchase suggestions on e-commerce sites, self-driving cars, speech recognition, and image recognition.
General AI:
 General AI is a type of intelligence which could perform any intellectual task with efficiency like a human.
 The idea behind general AI is to make a system which could be smarter and think like a human on its own.
 Currently, no such system exists which could come under general AI and perform any task as well as a human.
 Researchers worldwide are now focused on developing machines with General AI.
 Systems with general AI are still under research, and it will take a lot of effort and time to develop such systems.
Super AI:
 Super AI is a level of intelligence of systems at which machines could surpass human intelligence and perform any task better than a human, with cognitive properties. It is an outcome of general AI.
 Some key characteristics of super AI include the ability to think, reason, solve puzzles, make judgments, plan, learn, and communicate on its own.
 Super AI is still a hypothetical concept of Artificial Intelligence. Developing such systems in reality is still a world-changing task.
AI TYPE 2: BASED ON FUNCTIONALITY
1.Reactive Machines
 Purely reactive machines are the most basic types of Artificial
Intelligence.
 Such AI systems do not store memories or past experiences for future
actions.
These machines only focus on current scenarios and react to them with the best possible action.
 IBM's Deep Blue system is an example of reactive machines.
 Google's AlphaGo is also an example of reactive machines.

2. Limited Memory
o Limited memory machines can store past experiences or some data for a
short period of time.
o These machines can use stored data for a limited time period only.
o Self-driving cars are one of the best examples of Limited Memory systems. These cars can store the recent speed of nearby cars, the distance to other cars, speed limits, and other information needed to navigate the road.
3.Theory of Mind
o Theory of Mind AI should understand human emotions, people, and beliefs, and be able to interact socially like humans.
o This type of AI machine is still not developed, but researchers are making many efforts and improvements toward developing such AI machines.

4.Self-Awareness
o Self-aware AI is the future of Artificial Intelligence. These machines will be super intelligent and will have their own consciousness, sentiments, and self-awareness.
o These machines will be smarter than the human mind.
o Self-aware AI does not yet exist in reality; it is a hypothetical concept.
TYPES OF INTELLIGENCE
1) Linguistic Intelligence
 It is the ability to speak, and to recognize and use the mechanisms of grammar, semantics, and phonology. It is the ability to think and express a scenario in a way that others easily understand.
 Ex: Novelist, journalist.

2) Musical Intelligence
 It is the ability to analyze pitch, rhythm, and tone.
 This enables us to recognize, create, reproduce, and reflect on music.
Ex: Musicians, singers.


3) Spatial Intelligence
 It is the ability to think visually; this builds the capacity to visualize things, manipulate images, construct 3D images, etc.
Ex: Graphics and game developers, architects.
4) Interpersonal Intelligence
 It is the ability to interact with and understand others efficiently.
 The person must have the ability to notice and react sensitively to others depending on their mood.
 A person with this type of intelligence will have multiple perspectives on one situation. Ex: politicians, leaders, social workers.
5) Intrapersonal Intelligence
 It is the capacity to understand oneself and one's own thoughts and feelings.
 Such people are self-motivated and very much aware of their own feelings.
Ex: Spiritual leader, philosopher.


6) Kinetic Intelligence/Bodily Intelligence
It is the capacity to manipulate objects using physical skills; this involves timing and precision.
Ex: Surgeons or athletes.
7) Logical-Mathematical Intelligence
It is the ability to calculate, quantify, and carry out mathematical operations.
It helps in solving complex problems with simple techniques.
It is the ability to understand relationships in the absence of an object.
KNOWLEDGE
 Humans are best at understanding, reasoning, and interpreting knowledge. Humans know things, which is knowledge, and as per their knowledge they perform various actions in the real world. How machines do all these things comes under knowledge representation and reasoning.
 Knowledge representation and reasoning (KR, KRR) is the part of Artificial Intelligence concerned with how AI agents think and how thinking contributes to their intelligent behavior.
 It is responsible for representing information about the real world so that a computer can understand it and utilize this knowledge to solve complex real-world problems, such as diagnosing a medical condition or communicating with humans in natural language.
 It describes a way to represent knowledge in artificial intelligence. Knowledge representation is not just storing data in some database; it also enables an intelligent machine to learn from that knowledge and those experiences so that it can behave intelligently like a human.
WHAT IS IN THE FORM OF KNOWLEDGE?
 Object: All the facts about objects in our world domain. E.g., guitars contain strings; trumpets are brass instruments.
 Events: Events are the actions which occur in our world.
 Performance: It describes behavior which involves knowledge about how to do things.
 Meta-knowledge: It is knowledge about what we know.
 Facts: Facts are the truths about the real world and what we represent.
 Knowledge Base: The central component of knowledge-based agents is the knowledge base, represented as KB. The knowledge base is a group of sentences.
TYPES OF KNOWLEDGE
1. Declarative Knowledge:
 Declarative knowledge is to know about something.
 It includes concepts, facts, and objects.
 It is also called descriptive knowledge and expressed in declarative
sentences.
 It is simpler than procedural knowledge.

2. Procedural Knowledge
 It is also known as imperative knowledge.
 Procedural knowledge is a type of knowledge which is responsible for knowing
how to do something.
 It can be directly applied to any task.
 It includes rules, strategies, procedures, agendas, etc.
 Procedural knowledge depends on the task on which it can be applied.
3. Meta-knowledge: Knowledge about the other types of knowledge is called
Meta-knowledge.
4. Heuristic knowledge:
 Heuristic knowledge represents the knowledge of some experts in a field or subject. Heuristic knowledge consists of rules of thumb based on previous experiences and awareness of approaches, which tend to work well but are not guaranteed.
5. Structural knowledge
 Structural knowledge is basic knowledge for problem-solving.
 It describes relationships between various concepts such as kind of, part of,
and grouping of something.
 It describes the relationship that exists between concepts or objects.
DIFFERENCE BETWEEN PROGRAMMING WITH AI AND PROGRAMMING WITHOUT AI
APPLICATIONS OF AI
 Gaming − AI plays a crucial role in strategic games such as chess, poker, tic-tac-toe, etc., where the machine can think of a large number of possible positions based on heuristic knowledge.
 Natural Language Processing − It is possible to interact with a computer that understands the natural language spoken by humans.
 Expert Systems − There are some applications which integrate machine, software, and special information to impart reasoning and advising. They provide explanations and advice to the users.
 Vision Systems − These systems understand, interpret, and comprehend visual input on the computer.
 Speech Recognition − Some intelligent systems are capable of hearing and comprehending language in terms of sentences and their meanings while a human talks to them. They can handle different accents, slang words, noise in the background, changes in a human's voice due to cold, etc.
 Handwriting Recognition − The handwriting recognition software reads the text written on paper
by a pen or on screen by a stylus. It can recognize the shapes of the letters and convert it into
editable text.
 Intelligent Robots − Robots are able to perform the tasks given by a human. They have sensors to
detect physical data from the real world such as light, heat, temperature, movement, sound, bump,
and pressure. They have efficient processors, multiple sensors and huge memory, to exhibit
intelligence. In addition, they are capable of learning from their mistakes and they can adapt to the
new environment
AGENT
The AI system consists of an agent and its environment; the agent acts upon the environment.
Agent: An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
An agent runs in a cycle of perceiving, thinking, and acting.
The agent receives information from the environment through sensors, stores and processes that information in the agent program, and performs actions using its effectors.
The agent consists of the following terms
1. Sensors: A sensor is a device which detects changes in the environment and sends the information to other electronic devices. Sensors can be cameras, infrared devices, keyboards, eyes, ears, etc.
2. Effectors: Effectors are the devices which affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and display screen.
3. Actuators: Actuators are the component of machines that converts energy into
motion. The actuators are only responsible for moving and controlling a system. An
actuator can be an electric motor, gears, rails, etc
INTELLIGENT AGENTS
 An intelligent agent (IA) is an entity that makes decisions, enabling artificial intelligence to be put into action. It can also be described as a software entity that conducts operations in the place of users or programs after sensing the environment. It uses actuators to initiate action in that environment.
 This agent has some level of autonomy that allows it to perform specific, predictable, and repetitive tasks
for users or applications.
 It’s also termed as ‘intelligent’ because of its ability to learn during the process of performing tasks.
 The two main functions of intelligent agents include perception and action. Perception is done through
sensors while actions are initiated through actuators.
 Intelligent agents consist of sub-agents that form a hierarchical structure. Lower-level tasks are
performed by these sub-agents.
 The higher-level agents and lower-level agents form a complete system that can solve difficult problems
through intelligent behaviors or responses.
AGENTS AND ENVIRONMENT

The agent can be:
1) Software as an agent: A software agent can have keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.
2) Robot as an agent: A robotic agent can have cameras and infrared range finders as sensors and various motors as actuators.
3) Human as an agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract which work as actuators.
EXAMPLE
 Consider the vacuum cleaner and list the following properties for it.
 Environment: Room A and Room B.
 Status of environment: Dirty, Clean.
 Percept: [location, status], e.g., [A, Dirty].
 Actions: Left, Right, Suck, No operation.
 Agent function: the mapping between the percept sequence and the action.
 In this example the vacuum cleaner works in two rooms A and B; it moves between A and B automatically if the current room is clean.
 Its movement actions are Left and Right, and it performs the Suck operation when it detects dirt.
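The agent function above can be written as a short program. A minimal sketch in Python, assuming the percept is a (location, status) pair as described:

# A sketch of the vacuum cleaner's agent function described above.
# The percept format is assumed to be (location, status), e.g., ("A", "Dirty").
def vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"   # room A is clean, so move to room B
    return "Left"        # room B is clean, so move back to room A

print(vacuum_agent(("A", "Dirty")))  # Suck
print(vacuum_agent(("B", "Clean")))  # Left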
RATIONALITY
 Rationality is measured by the performance measure of the agent, which defines the criteria of success.
 Rationality is the state of being reasonable, sensible, and having a good sense of judgment.
 Rationality is concerned with expected actions and results depending upon what the agent has perceived. Performing actions with the aim of obtaining useful information is an important part of rationality.
Rationality of an agent depends on the following −
• The performance measures, which determine the degree of success.

• Agent’s Percept Sequence till now.

• The agent’s prior knowledge about the environment.

• The actions that the agent can carry out.


 A rational agent always performs the right action, where the right action means the action that causes the agent to be most successful for the given percept sequence.
The problem the agent solves is characterized by
1. Performance Measure
2. Environment
3. Actuators
4. Sensors (PEAS).
EXAMPLE
 Consider a vacuum cleaner which cleans two rooms A and B; it can rest for some time by going into sleep mode, and it may make noise while sucking.
 So cleanliness is not the only criterion for measuring its performance; its noise, the time it takes to travel between the two rooms, its sleeping time, etc., also come into the picture when deciding its performance.
Rationality of the agent is measured by the following criteria:
The performance measure that defines the criterion of success.
The agent's prior knowledge of the environment.
The possible actions that the agent can perform.
The agent's percept sequence to date.
PEAS
The task environment of any agent is described by PEAS.
The following properties of the agent are listed while designing a rational agent:
P: Performance measure of the AI system.
E: Environment in which it acts.
A: Actuators
S: Sensors
PEAS for Self driving car
 Performance: Safety, time, legal drive, comfort.
 Environment: Roads, other cars, pedestrians, road signs.
 Actuators: Steering, accelerator, brake, signal, horn.
 Sensors: Camera, sonar, GPS, Speedometer, odometer, accelerometer, engine sensors,
keyboard.

PEAS for Vacuum cleaner


 Performance: cleanliness, efficiency (distance traveled to clean), battery life, security.
 Environment: room, table, wood floor, carpet, different obstacles.
 Actuators: wheels, different brushes, vacuum extractor.
 Sensors: camera, dirt detection sensor, cliff sensor, bump sensors, infrared wall sensors.
TYPES OF AGENTS
1. Simple reflex agent
2. Model-based reflex agent
3. Goal-based agent
4. Utility-based agent
5. Learning agent
SIMPLE REFLEX AGENT
 Simple reflex agents act only on the basis of the current
percept.
 The agent function is based on the condition-action rule, i.e., if (condition) then (action). A condition-action rule maps a state (condition) to an action.
 If the condition is true, then the action is taken, else not.
 Example: if (temp > 40) then turn on the AC.
 This agent function only succeeds when the environment is fully observable.
Limitations:
 Very limited intelligence.
 No knowledge of non-perceptual parts of state.
 Usually too big to generate and store.
 If there occurs any change in the environment, then the
collection of rules need to be updated.
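The condition-action rule above translates directly into code. A minimal sketch (the temperature rule follows the slide's own example; the percept format is an assumption):

# Simple reflex agent: acts only on the current percept via a condition-action rule.
def simple_reflex_agent(percept):
    temp = percept["temp"]        # only the current percept is used; no memory
    if temp > 40:
        return "Turn on the AC"
    return "Do nothing"

print(simple_reflex_agent({"temp": 42}))  # Turn on the AC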
MODEL-BASED REFLEX AGENT
 These are the agents with memory.
 It stores the information about the previous state, the current
state and performs the action accordingly.
 It mainly has two parts:
1) Model: It represents knowledge about the world. It stores the relevant aspects of the task the agent is performing.
Example: If it is a self-driving car, the model contains complete knowledge about driving the car.
2) Internal representation: It represents the mapping between the current percept and the previous history of the agent.
Example: while driving, if the driver wants to change the lane, he
looks into the mirror to know the present position of vehicles
behind him. While looking in front, he can only see the vehicles in
front, and as he already has the information on the position of
vehicles behind him (from the mirror a moment ago), he can safely
change the lane.
 The previous and the current state get updated quickly for deciding the action.
 It works well in partially observable environments.
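A minimal sketch of a model-based reflex agent, contrasting with the simple reflex agent above; the state update below is a placeholder standing in for a real world model:

# Model-based reflex agent: keeps an internal state built from the percept
# history, so it can act sensibly in a partially observable environment.
class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {}  # internal representation of the world

    def act(self, percept):
        self.state.update(percept)  # placeholder model update: remember latest readings
        # Rules may consult remembered facts, not just the current percept,
        # e.g., the mirror check from the lane-change example above.
        if self.state.get("vehicle_behind_left") is False:
            return "change lane"
        return "stay in lane"

agent = ModelBasedReflexAgent()
agent.act({"vehicle_behind_left": False})  # mirror glance a moment ago
print(agent.act({}))                       # change lane, using the remembered state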
GOAL BASED AGENT
 These are agents whose aim is to reach the goal.
 These agents take decisions not only based on the current percept but also using knowledge of the goal, i.e., the agent considers whether the present action will help it reach the destination.
 This involves searching and planning concepts to take actions and reach the goal.
Example: If the agent is a self-driving car and the
goal is the destination, then the information of the
route to the destination helps the car in deciding
when to turn left or right.
So in the above example the agent takes the
actions according to current percept and the GOAL.
UTILITY BASED AGENT
 These agents are used when there are multiple options to achieve the goal, to determine which option is best.
 The utility-based agent chooses the best option based on the user's preferences; utility describes the happiness of the agent along with reaching the goal. Sometimes achieving the desired goal is not enough; we may look for a quicker, safer, cheaper trip to reach a destination.
 The utility-based agent chooses the action which gives the maximum degree of happiness or satisfaction.
 Example: For a self-driving car the destination is known, but there are multiple routes. Choosing an appropriate route also matters to the overall success of the agent. There are many factors in deciding the route, like the shortest one, the most comfortable one, etc.
LEARNING AGENT
 A learning agent in AI is a type of agent which can learn from its past experiences; it has learning capabilities.
 It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
 A learning agent has mainly four conceptual components,
which are:
1. Learning element: It is responsible for making improvements
by learning from the environment
2. Critic: Learning element takes feedback from critic
(Analysis )which describes how well the agent is doing with
respect to a fixed performance standard.
3. Performance element: It is responsible for selecting external
action
4. Problem Generator: This component is responsible for
suggesting actions that will lead to new and informative
experiences.
 Example: A human is the best example of a learning agent, who can learn things from the environment, analyze whether they are right or wrong, and act on the environment when required.
TYPES OF ENVIRONMENT
1. Fully Observable vs. Partially Observable
 When an agent's sensors can sense or access the complete state of the environment at each point in time, the environment is said to be fully observable.
 If the agent cannot sense the complete state of the environment at each point in time, the environment is partially observable.
 Maintaining a fully observable environment is easy, as there is no need to keep track of the history of the surroundings.
 Example: Chess – the board is fully observable; a player can see his own moves and the opponent's moves as well.
 Poker (cards) – the environment is partially observable because a player can see the cards in his own hand but cannot see the cards in the other players' hands.
2. Deterministic vs. Stochastic
 If the agent's current state and action completely determine the next state of the agent, the environment is said to be deterministic. Its transitions are unique.
 If the agent's current state cannot completely determine the next state, the environment is called stochastic. It has randomness in its moves.
 Example: Traffic signal – if the person at the traffic signal is the agent, then based on the present color of the signal he can determine the next state of the signal, so it is a deterministic environment.
 Self-driving car – a self-driving car cannot determine its next state from its present action alone; the state may vary frequently, so it is a stochastic environment.
3. Single-agent vs. Multi-agent
 An environment consisting of only one agent is said to be a single-agent environment.
 An environment involving more than one agent is a multi-agent environment.
 Example: The game of football is multi-agent as it involves 11 players in each team.
 A person left alone in a maze is an example of the single-agent system.
4. Dynamic vs. Static
 An environment that keeps changing while the agent is acting is said to be dynamic.
 An environment with no change in it while the agent is acting is called a static environment.
 Example: A vacuum cleaner in a room whose environment does not change while it cleans is in a static environment.
 In a chess game the pieces move every minute, so it is a dynamic environment.
5. Discrete vs. Continuous
 If an environment consists of a finite number of actions that can be deliberated in the environment to obtain the output, it is said to be a discrete environment.
 An environment in which the actions performed cannot be numbered, i.e., is not discrete, is said to be continuous.
 Example: Self-driving cars operate in a continuous environment, as their actions, like driving and parking, cannot be numbered.
 The game of chess is a discrete environment, as it has only a finite number of moves. The number of moves might vary with every game, but it is still finite.
6. Episodic vs. sequential:
 If an action taken in the current situation does not affect future actions, the environment is called episodic.
 Here a set of actions is performed based on the current percept, and it does not affect the next actions.
 Example: Chatbot – it is the best example of an episodic environment, because it answers the particular question asked, and the next answer is not linked to the previous questions.
 Chess – this is an example of a sequential environment, because every move in chess depends on previous moves, and every current move affects the next move.
7. Known vs Unknown
 Known and unknown are not actually features of an environment; they describe the agent's state of knowledge when performing an action.
 In a known environment, the results of all actions are known to the agent, while in an unknown environment the agent needs to learn how the environment works in order to perform an action.
 It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.
8. Accessible vs Inaccessible
 If an agent can obtain complete and accurate information about the environment's state, then such an environment is called an accessible environment; otherwise it is called inaccessible.
 An empty room whose state can be defined by its temperature is an example
of an accessible environment.
 Information about an event on earth is an example of Inaccessible
environment.
POPULAR SEARCH ALGORITHMS
Problem-solving agents:
In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents or problem-solving agents in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use atomic representation.
Search Algorithm Terminologies:
1. Search: Searching is a step-by-step procedure to solve a search problem in a given search space.
A search problem can have three main factors:
a) Search Space: The search space represents the set of possible solutions that a system may have.
b) Start State: It is the state from which the agent begins the search.
c) Goal test: It is a function which observes the current state and returns whether the goal state is achieved or not.
2. Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
3. Actions: It gives the description of all the actions available to the agent.
4. Transition model: A description of what each action does is represented as a transition model.
5. Path cost: It is a function which assigns a numeric cost to each path.
6. Solution: It is an action sequence which leads from the start node to the goal node.
7. Optimal solution: A solution that has the lowest cost among all solutions.
TYPES OF SEARCH ALGORITHMS
Based on the search problems the search algorithms are classified into
1. Uninformed (Blind search) search
2. Informed search (Heuristic search) algorithms.
Uninformed/Blind Search:
Uninformed search does not use any domain knowledge, such as the closeness or location of the goal.
It operates in a brute-force way, as it only includes the information needed to traverse the tree and to identify leaf and goal nodes.
Uninformed search searches the tree without any information about the search space, such as the initial state, operators, and test for the goal, so it is also called blind search.
It examines each node of the tree until it reaches the goal node.
Informed Search
 Informed search algorithms use domain knowledge. In an informed search,
problem information is available which guides the search.
 An informed search strategy finds a solution more efficiently than an
uninformed search strategy. Informed search is also called a Heuristic search.
 A heuristic is a technique which is not always guaranteed to find the best solution, but is guaranteed to find a good solution in reasonable time.
 Informed search can solve much more complex problems which could not be solved otherwise.
HEURISTIC FUNCTIONS
 As we have already seen, an informed search makes use of heuristic functions in order to reach the goal node in a more prominent way. There can be several pathways in a search tree to reach the goal node from the current node, so the selection of a good heuristic function certainly matters. A good heuristic function is judged by its efficiency: the more information it uses about the problem, the more processing time it needs.
 Some toy problems, such as 8-puzzle, 8-queen, tic-tac-toe, etc., can be solved more efficiently
with the help of a heuristic function.
 Consider the following 8-puzzle problem, where we have a start state and a goal state. Our task is to slide the tiles of the current/start state and place them in the order followed in the goal state. There can be four moves: left, right, up, or down. There can be several ways to convert the current/start state to the goal state, but we can use a heuristic function h(n) to solve the problem more efficiently.
 A heuristic function for the 8-puzzle problem is defined below:
 h(n)=Number of tiles out of position.
 So, there is a total of three tiles out of position, i.e., 6, 5, and 4 (do not count the empty tile present in the goal state), i.e., h(n) = 3. Now we need to minimize the value of h(n) to 0.
 We can construct a state-space tree to minimize the h(n) value to 0, as shown below:
It is seen from the above state-space tree that the goal state is reached by minimizing from h(n) = 3 to h(n) = 0. However, we can create and use several heuristic functions as per the requirement. It is also clear from the above example that a heuristic function h(n) can be defined as the information required to solve a given problem more efficiently. The information can be related to the nature of the state, the cost of transforming from one state to another, goal node characteristics, etc., which is expressed as a heuristic function.
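The misplaced-tiles heuristic above is easy to compute. A minimal sketch, assuming states are 3x3 grids with 0 standing for the empty tile (the start state below is hypothetical, not the one from the figure):

# h(n) = number of tiles out of position, ignoring the empty tile (0).
def h(state, goal):
    return sum(1
               for i in range(3)
               for j in range(3)
               if state[i][j] != 0 and state[i][j] != goal[i][j])

goal  = [[1, 2, 3], [4, 5, 6], [7, 8, 0]]
start = [[1, 2, 3], [4, 6, 5], [7, 0, 8]]  # hypothetical start state
print(h(start, goal))  # 3: tiles 6, 5, and 8 are out of position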
Properties of a Heuristic search Algorithm
 Use of heuristic function in a heuristic search algorithm leads to following properties of a heuristic
search algorithm:
• Admissible Condition: An algorithm is said to be admissible, if it returns an optimal solution.
• Completeness: An algorithm is said to be complete, if it terminates with a solution (if the solution
exists).
• Dominance Property: If there are two admissible heuristic
algorithms A1 and A2 having h1 and h2 heuristic functions, then A1 is said to
dominate A2 if h1 is better than h2 for all the values of node n.
• Optimality Property: If an algorithm is complete, admissible, and dominating other algorithms,
it will be the best one and will definitely give an optimal solution.
ADVERSARIAL SEARCH

 Adversarial search is a search where we examine the problem which arises when we try to plan ahead of the world while other agents are planning against us.
 So far we have studied search strategies which are only associated with a single agent that aims to find a solution, often expressed in the form of a sequence of actions.
 But, there might be some situations where more than one agent is searching for the solution
in the same search space, and this situation usually occurs in game playing.
 The environment with more than one agent is termed as multi-agent environment, in which
each agent is an opponent of other agent and playing against each other. Each agent needs
to consider the action of other agent and effect of that action on their performance.
 So, Searches in which two or more players with conflicting goals are trying to explore the
same search space for the solution, are called adversarial searches, often known as Games.
 Games are modeled as a Search problem and heuristic evaluation function, and these are
the two main factors which help to model and solve games in AI.
TYPES OF GAMES IN AI:
• Perfect information: A game with the perfect information is that in which
agents can look into the complete board. Agents have all the information about
the game, and they can see each other moves also. Examples are Chess,
Checkers, Go, etc.
• Imperfect information: If in a game agents do not have all information about the game and are not aware of what's going on, such a game is called a game with imperfect information. Examples are Battleship, blind tic-tac-toe, Bridge, etc.
• Deterministic games: Deterministic games are those games which follow a
strict pattern and set of rules for the games, and there is no randomness
associated with them. Examples are chess, Checkers, Go, tic-tac-toe, etc.
• Non-deterministic games: Non-deterministic games are those which have various unpredictable events and a factor of chance or luck. This factor of chance or luck is introduced by either dice or cards. These are random, and each action's response is not fixed. Such games are also called stochastic games. Example: Backgammon, Monopoly, Poker, etc.
ZERO SUM GAME
 Zero-sum games are adversarial searches which involve pure competition.
 In a zero-sum game, each agent's gain or loss of utility is exactly balanced by the losses or gains of utility of the other agent.
 One player tries to maximize one single value, while the other player tries to minimize it.
 Each move by one player in the game is called a ply.
 Chess and tic-tac-toe are examples of zero-sum games.

Zero-sum game: Embedded thinking
 The zero-sum game involves embedded thinking, in which one agent or player is trying to figure out:
 what to do;
 how to decide the move;
 that he needs to think about his opponent as well;
 that the opponent also thinks about what to do.
 Each of the players is trying to find out the response of his opponent to their actions. This requires embedded thinking or backward reasoning to solve game problems in AI.
FORMALIZATION OF THE PROBLEM:
 A game can be defined as a type of search in AI which can be formalized with the following elements:
• Initial state: It specifies how the game is set up at the start.
• Player(s): It specifies which player has the move in a state.
• Action(s): It returns the set of legal moves in state space.
• Result(s, a): It is the transition model, which specifies the result of moves in
the state space.
• Terminal-Test(s): The terminal test is true if the game is over, else it is false. States where the game ends are called terminal states.
• Utility(s, p): A utility function gives the final numeric value for a game that ends in terminal state s for player p. It is also called a payoff function. For chess, the outcomes are a win, loss, or draw; for tic-tac-toe, the utility values are +1, -1, and 0.
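These elements map naturally onto a programming interface. A minimal sketch of the shape a game definition takes (the method names simply mirror the elements listed above; a concrete game such as tic-tac-toe would fill them in):

# Abstract interface for a game, mirroring Initial state, Player(s),
# Actions(s), Result(s, a), Terminal-Test(s), and Utility(s, p).
class Game:
    def initial_state(self):    raise NotImplementedError
    def player(self, s):        raise NotImplementedError  # whose move in state s
    def actions(self, s):       raise NotImplementedError  # legal moves in s
    def result(self, s, a):     raise NotImplementedError  # transition model
    def terminal_test(self, s): raise NotImplementedError  # is the game over?
    def utility(self, s, p):    raise NotImplementedError  # payoff for player p in s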
GAME TREE:
 A game tree is a tree where nodes
of the tree are the game states and
Edges of the tree are the moves by
players. Game tree involves initial
state, actions function, and result
Function.
Example: Tic-Tac-Toe game tree:
 The following figure shows part of the game tree for the tic-tac-toe game. Following are some key points of the game:
• There are two players MAX and
MIN.
• Players have an alternate turn and
start with MAX.
• MAX maximizes the result of the
game tree
• MIN minimizes the result.
EXAMPLE
• From the initial state, MAX has 9 possible moves, as he starts first. MAX places x and MIN places o, and both players play alternately until we reach a leaf node where one player has three in a row or all squares are filled.
• Both players compute, for each node, the minimax value: the best achievable utility against an optimal adversary.
• Suppose both players know tic-tac-toe well and play their best game. Each player is doing his best to prevent the other from winning. MIN is acting against MAX in the game.
• So in the game tree, we have a layer of MAX and a layer of MIN, and each layer is called a ply. MAX places x, then MIN places o to prevent MAX from winning, and this game continues until a terminal node.
• In the end either MIN wins, MAX wins, or it's a draw. This game tree is the whole search space of possibilities when MIN and MAX play tic-tac-toe taking turns alternately.
 Hence adversarial search with the minimax procedure works as follows:
• It aims to find the optimal strategy for MAX to win the game.
• It follows the approach of depth-first search.
• In the game tree, the optimal leaf node could appear at any depth of the tree.
• It propagates the minimax values up the tree, once the terminal nodes are discovered, until it reaches the root.
 In a given game tree, the optimal strategy can be determined from the minimax value of each node, written MINIMAX(n). MAX prefers to move to a state of maximum value and MIN prefers to move to a state of minimum value; then:
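MINIMAX(s) = UTILITY(s)                                          if TERMINAL-TEST(s)
MINIMAX(s) = max over a in Actions(s) of MINIMAX(RESULT(s, a))   if PLAYER(s) = MAX
MINIMAX(s) = min over a in Actions(s) of MINIMAX(RESULT(s, a))   if PLAYER(s) = MIN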
TYPES OF ALGORITHMS IN ADVERSARIAL SEARCH
 Here the result depends on the players, who decide the outcome of the game.
Types of adversarial search algorithms:
1. Minimax Algorithm
2. Alpha-Beta Algorithm
MINIMAX ALGORITHM
• The minimax algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory. It provides an optimal move for the player, assuming that the opponent is also playing optimally.
• The minimax algorithm uses recursion to search through the game tree.
• The minimax algorithm is mostly used for game playing in AI, such as chess, checkers, tic-tac-toe, Go, and various two-player games. This algorithm computes the minimax decision for the current state.
• In this algorithm two players play the game; one is called MAX and the other is called MIN.
• Each player plays so that the opponent gets the minimum benefit while they themselves get the maximum benefit.
• Both players of the game are opponents of each other, where MAX will select the maximized value and MIN will select the minimized value.
• The minimax algorithm performs a depth-first search for the exploration of the complete game tree.
• The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backtracks up the tree as the recursion unwinds.
WORKING OF MIN-MAX ALGORITHM:
Step 1: In the first step, the algorithm generates the entire game tree and applies the utility function to get the utility values for the terminal states. In the tree diagram below, let's take A as the initial state of the tree. Suppose the maximizer takes the first turn, with worst-case initial value -∞, and the minimizer takes the next turn, with worst-case initial value +∞.
Step 2: Now we find the utility values for the maximizer. Its initial value is -∞, so each terminal value is compared with the maximizer's current value to determine the higher node value. It finds the maximum among them all.
• For node D: max(-1, -∞) = -1, then max(-1, 4) = 4
• For node E: max(2, -∞) = 2, then max(2, 6) = 6
• For node F: max(-3, -∞) = -3, then max(-3, -5) = -3
• For node G: max(0, -∞) = 0, then max(0, 7) = 7
Step 3: In the next step, it's the minimizer's turn, so it compares all node values with +∞ and finds the third-layer node values.
• For node B = min(4, 6) = 4
• For node C = min(-3, 7) = -3
 Step 4: Now it's the maximizer's turn again, and it chooses the maximum of all node values to find the value of the root node. In this game tree there are only 4 layers, so we reach the root node immediately, but in real games there will be more than 4 layers.
• For node A = max(4, -3) = 4
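The procedure above can be written as a short recursive function. A minimal sketch, assuming the game tree is given as nested lists whose leaves are utility values (the tree below encodes the worked example):

# Minimax over a game tree given as nested lists: a leaf is a utility value,
# an internal node is a list of child subtrees; MAX and MIN alternate by level.
def minimax(node, is_max):
    if not isinstance(node, list):       # terminal node: return its utility
        return node
    values = [minimax(child, not is_max) for child in node]
    return max(values) if is_max else min(values)

# Leaves of nodes D, E, F, G from the worked example above:
tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(minimax(tree, True))  # 4, matching the value found for root node A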
PROPERTIES OF MINI-MAX ALGORITHM:
• Complete − The minimax algorithm is complete. It will definitely find a solution (if one exists) in a finite search tree.
• Optimal − The minimax algorithm is optimal if both opponents play optimally.
• Time complexity − As it performs DFS on the game tree, the time complexity of the minimax algorithm is O(b^m), where b is the branching factor of the game tree and m is the maximum depth of the tree.
• Space complexity − The space complexity of the minimax algorithm is similar to DFS, which is O(bm).
The disadvantage of the minimax algorithm is that it gets really slow for complex games such as chess, Go, etc. These games have a huge branching factor, and the player has many choices to decide among. This limitation of the minimax algorithm can be overcome by alpha-beta pruning.
ALPHA-BETA ALGORITHM
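Alpha-beta pruning computes the same decision as minimax while skipping branches that cannot influence the result: alpha is the best value MAX can guarantee so far, beta is the best value MIN can guarantee, and a subtree is cut off as soon as alpha ≥ beta. A minimal sketch, extending the minimax sketch above:

import math

def alphabeta(node, is_max, alpha=-math.inf, beta=math.inf):
    if not isinstance(node, list):       # terminal node: return its utility
        return node
    if is_max:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # beta cut-off: MIN will avoid this branch
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                # alpha cut-off: MAX will avoid this branch
            break
    return value

tree = [[[-1, 4], [2, 6]], [[-3, -5], [0, 7]]]
print(alphabeta(tree, True))  # 4: same answer as minimax, with node G pruned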
CONSTRAINT SATISFACTION PROBLEM (CSP)
 Constraint Satisfaction Problem (CSP) is a fundamental topic in artificial
intelligence (AI) that deals with solving problems by identifying constraints and
finding solutions that satisfy those constraints.
 CSP has a wide range of applications, including scheduling, resource allocation,
and automated reasoning.
 The goal of AI is to create intelligent machines that can perform tasks that
usually require human intelligence, such as reasoning, learning, and problem-
solving. One of the key approaches in AI is the use of constraint satisfaction
techniques to solve complex problems.
 CSP is a specific type of problem-solving approach that involves identifying
constraints that must be satisfied and finding a solution that satisfies all the
constraints. CSP has been used in a variety of applications, including scheduling,
planning, resource allocation, and automated reasoning.
CONSTRAINT SATISFACTION
 A Constraint Satisfaction Problem in artificial intelligence involves a set of variables, each of
which has a domain of possible values, and a set of constraints that define the allowable
combinations of values for the variables. The goal is to find a value for each variable such that all
the constraints are satisfied.
 More formally, a CSP is defined as a triple (X, D, C), where:
• X is a set of variables {x1, x2, ..., xn}.
• D is a set of domains {D1, D2, ..., Dn}, where each Di is the set of possible values for xi.
• C is a set of constraints {C1, C2, ..., Cm}, where each Ci is a constraint that restricts the values that can be assigned to a subset of the variables.
 The goal of a CSP is to find an assignment of values to the variables that satisfies all the
constraints. This assignment is called a solution to the CSP.
TYPES OF CONSTRAINTS IN CSP
 Several types of constraints can be used in a Constraint satisfaction problem in artificial intelligence,
including:
• Unary Constraints:
A unary constraint is a constraint on a single variable. For example: variable A must not equal "Red".
• Binary Constraints:
A binary constraint involves two variables and specifies a constraint on their values. For example, a
constraint that two tasks cannot be scheduled at the same time would be a binary constraint.
• Global Constraints:
Global constraints involve more than two variables and specify complex relationships between them.
For example, a constraint that no two tasks can be scheduled at the same time if they require the
same resource would be a global constraint
PROBLEM
 Each state in a CSP is defined by an assignment of values to some or all of the variables.
 An assignment that does not violate any constraints is called a consistent or legal assignment.
 A complete assignment is one in which every variable is assigned a value.
 A solution to a CSP is a consistent and complete assignment.
 CSPs allow useful general-purpose algorithms with more power than standard search algorithms.
Example: Map Coloring
 Variables: X = {WA, NT, Q, NSW, V, SA, T}
 Domains: Di = {red, green, blue}
 Constraints: adjacent regions must have different colors
 Solution?
SOLUTION: COMPLETE AND CONSISTENT ASSIGNMENT
 Variables: X = {WA, NT, Q, NSW, V, SA, T}
 Domains: Di = {red, green, blue}
 Constraints: adjacent regions must have different colors
 Solution: {WA = red, NT = green, Q = red, NSW = green, V = red, SA = blue, T = red}
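This CSP is small enough to write down directly. A minimal sketch representing the variables, domains, and adjacency constraints in code and checking the assignment above (the adjacency list is the standard Australia map of this example):

# Map-coloring CSP: adjacent regions must receive different colors.
variables = ["WA", "NT", "Q", "NSW", "V", "SA", "T"]
domains = {v: {"red", "green", "blue"} for v in variables}
adjacent = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
            ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"),
            ("NSW", "V")]   # T is an island: no constraints on it

def consistent(assignment):
    return all(assignment[a] != assignment[b] for a, b in adjacent)

solution = {"WA": "red", "NT": "green", "Q": "red", "NSW": "green",
            "V": "red", "SA": "blue", "T": "red"}
print(consistent(solution))  # True: complete and consistent, hence a solution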
CONSTRAINT GRAPH
 Constraint graph: nodes are variables, arcs are constraints.
 Binary CSP: each constraint relates two variables.
 A CSP conforms to a standard pattern: a set of variables with assigned values.
 Generic successor function and goal test.
 Generic heuristics.
 Reduced complexity.
CSP as a Search Problem
 Initial state: {} – all variables are unassigned.
 Successor function: a value is assigned to one of the unassigned variables, with no conflict.
 Goal test: a complete assignment.
 Path cost: a constant cost for each step.
A solution appears at depth n if there are n variables, so depth-first or local search methods work well.
CSP SOLVERS CAN BE FASTER

 A CSP solver can quickly eliminate large parts of the search space.
 If {SA = blue}, then the 3^5 = 243 combined assignments to SA's five neighboring regions reduce to 2^5 = 32 assignments, a reduction of 87%.
 In a CSP, if a partial assignment is not a solution, we can immediately discard all further refinements of it.
BACKTRACKING
 Backtracking can be defined as a general algorithmic technique that considers searching every possible combination in order to solve a computational problem.
 Backtracking is an algorithmic technique for solving problems recursively by trying to build a solution incrementally, one piece at a time, removing those solutions that fail to satisfy the constraints of the problem at any point in time (time, here, refers to the time elapsed until reaching any level of the search tree).
 When we have multiple choices, we make decisions from the available choices. In the following cases, we need to use the backtracking algorithm:
• Sufficient information is not available to make the best choice, so we use the backtracking strategy to try out all the possible solutions.
• Each decision leads to a new set of choices. Then again, we backtrack to make new decisions. In this case, we need to use the backtracking strategy.
BACKTRACKING SEARCH FOR CSPS

CSPs can be solved by a specialized version of depth-first search.
Key intuitions:
 We can build up to a solution by searching through the space of partial assignments.
 The order in which we assign the variables does not matter – eventually they all have to be assigned.
 If during the process of building up a solution we falsify a constraint, we can immediately reject all possible ways of extending the current partial assignment.
 These ideas lead to the backtracking search algorithm.
●Heuristics are used to determine which variable to assign next “Pick Unassigned Variable”.
●The choice can vary from branch to branch
e.g., under the assignment V1=a we might choose to assign V4 next, while under V1=b we might choose to
assign V5 next.
●This “dynamically” chosen variable ordering has a tremendous impact on performance.
Example:
● N-Queens: place N queens on an N x N chess board so that no queen can attack any other queen.
■ N variables, one per row. The value of Vi is the column in which the queen in row i is placed.
Constraints:
 Vi ≠ Vj for all i ≠ j (cannot put two queens in the same column)
 |Vi − Vj| ≠ |i − j| (diagonal constraint)
(i.e., the difference in the values assigned to Vi and Vj can't be equal to the difference between i and j).
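A minimal sketch of backtracking search for the N-Queens CSP as formulated above (one variable per row, value = column), rejecting a value as soon as it violates the column or diagonal constraint:

# Backtracking search: extend a partial assignment one row at a time and
# discard all extensions of any assignment that falsifies a constraint.
def consistent(assignment, row, col):
    return all(c != col and abs(c - col) != abs(r - row)
               for r, c in enumerate(assignment))

def solve(n, assignment=()):
    if len(assignment) == n:        # complete and consistent: a solution
        return assignment
    row = len(assignment)           # next unassigned variable
    for col in range(n):            # try each value in the domain
        if consistent(assignment, row, col):
            result = solve(n, assignment + (col,))
            if result is not None:
                return result
    return None                     # no value works: backtrack

print(solve(4))  # (1, 3, 0, 2): the queen in row i goes in column solve(4)[i]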
ONLINE SEARCH AGENTS AND UNKNOWN ENVIRONMENTS
 Online Search Agents
An online search agent interleaves computation and action. This is good for dynamic or semi-dynamic domains, where there is a penalty for sitting around and computing, and it is necessary if the environment is unknown.
 The problem stipulates:
 the list of actions available in state s;
 a step-cost function based on 1. the previous state, 2. the action, and 3. the resulting state (only known afterwards);
 a goal test for each state.
 Sometimes the agent has a heuristic telling it approximately how far it is from the goal.
 Simple maze problem:
• The agent starts at S and must reach G.
• It knows nothing of the environment.
• Competitive ratio: the path cost actually incurred divided by the path cost the agent would incur if it knew the environment.
• It is often infinite: the agent can end up in a dead-end state because of an irreversible action, e.g., while exploring, jumping down a cliff.