AL3391 - Artificial Intelligence - Unit 1


ALPHA COLLEGE OF ENGINEERING


Thirumazhisai, Chennai

UNIT 1 - AI

Artificial Intelligence
(Anna University)

AL3391 – Artificial Intelligence Department of AI&DS 2023 - 2024


UNIT I INTELLIGENT AGENTS
Introduction to AI – Agents and Environments – concept of rationality – nature of environments –
structure of agents. Problem solving agents – search algorithms – uninformed search strategies

Artificial Intelligence:
 “Artificial Intelligence is the ability of a computer to act like a human being”.
 Artificial intelligence systems consist of the people, procedures, hardware, software, data, and knowledge needed to develop computer systems and machines that demonstrate the characteristics of intelligence.

Programming Without AI:
 A computer program without AI can answer only the specific questions it is meant to solve.
 Modification in the program leads to a change in its structure.
 Modification is not quick and easy; it may affect the program adversely.

Programming With AI:
 A computer program with AI can answer the generic questions it is meant to solve.
 AI programs can absorb new modifications by putting highly independent pieces of information together; hence you can modify even a minute piece of information without affecting the program's structure.
 Quick and easy program modification.

Four Approaches of Artificial Intelligence:


 Acting humanly: The Turing test approach.
 Thinking humanly: The cognitive modelling approach.
 Thinking rationally: The laws of thought approach.
 Acting rationally: The rational agent approach.

Acting humanly: The Turing Test approach


The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory
operational definition of intelligence. A computer passes the test if a human interrogator, after posing some
written questions, cannot tell whether the written responses come from a person or from a computer.

To pass the test, the computer would need the following capabilities:

 natural language processing to enable it to communicate successfully in English;
 knowledge representation to store what it knows or hears;
 automated reasoning to use the stored information to answer questions and to draw new conclusions;
 machine learning to adapt to new circumstances and to detect and extrapolate patterns.

Thinking humanly: The cognitive modeling approach


To determine whether a given program thinks like a human, we must have some way of determining how
humans think. The interdisciplinary field of cognitive science brings together computer models from AI and
experimental techniques from psychology to try to construct precise and testable theories of the workings
of the human mind. Although cognitive science is a fascinating field in itself, we are not going to be
discussing it all that much in this book. We will occasionally comment on similarities or differences
between AI techniques and human cognition. Real cognitive science, however, is
necessarily based on experimental investigation of actual humans or animals, and we assume that the reader
only has access to a computer for experimentation. We will simply note that AI and cognitive science
continue to fertilize each other, especially in the areas of vision, natural language, and learning.

Thinking rationally: The “laws of thought” approach


The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His famous syllogisms provided patterns for argument structures that always gave correct conclusions given correct premises.
For example: "Socrates is a man; all men are mortal; therefore Socrates is mortal."
These laws of thought were supposed to govern the operation of the mind, and initiated the field of
logic.

Acting rationally: The rational agent approach


Acting rationally means acting so as to achieve one's goals, given one's beliefs. An agent is just
something that perceives and acts.
The right thing: that which is expected to maximize goal achievement, given the available information. It does not necessarily involve thinking.
For example, the blinking reflex involves no thinking, but it should be in the service of rational action.

Agents and environments


An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that
environment through actuators.

 Human Sensors:
Eyes, ears, and other organs for sensors.
 Human Actuators:
Hands, legs, mouth, and other body parts.
 Robotic Sensors:
Mic, cameras, and infrared range finders.
 Robotic Actuators:
Motors, displays, speakers, etc.
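The percept-to-action cycle described above can be sketched in a few lines of Python. This is an illustrative sketch only; the class name and rule table are invented for the example, not part of any standard library:

```python
# A minimal reflex agent: it maps the current percept directly to an action
# via a fixed rule table (names here are illustrative, not from a library).
class SimpleReflexAgent:
    def __init__(self, rules):
        self.rules = rules          # percept -> action table

    def program(self, percept):
        # The agent program: choose an action using only the current percept.
        return self.rules.get(percept, "NoOp")

# A vacuum-style agent: suck when the square is dirty, otherwise move right.
agent = SimpleReflexAgent({"dirty": "Suck", "clean": "Right"})
print(agent.program("dirty"))   # Suck
```

Real agents replace the rule table with more elaborate agent programs, but the percept-in, action-out interface stays the same.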



Agent Characteristics
• Situatedness
The agent receives some form of sensory input from its environment, and it performs some action
that changes its environment in some way.
Examples of environments: the physical world and the Internet.
• Autonomy
The agent can act without direct intervention by humans or other agents and that it has control over its
own actions and internal state.
• Adaptivity
The agent is capable of
(1) reacting flexibly to changes in its environment;
(2) taking goal-directed initiative (i.e., is pro-active), when appropriate; and
(3) learning from its own experience, its environment, and interactions with others.
• Sociability
The agent is capable of interacting in a peer-to-peer manner with other agents or humans.

Rational Agent - A system is rational if it does the “right thing”, given what it knows.
Characteristic of Rational Agent.
▪ The agent's prior knowledge of the environment.
▪ The performance measure that defines the criterion of success.
▪ The actions that the agent can perform.
▪ The agent's percept sequence to date.

For every possible percept sequence, a rational agent should select an action that is expected to
maximize its performance measure, given the evidence provided by the percept sequence and whatever built-
in knowledge the agent has.
 An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality.
 An ideal rational agent perceives and acts, and has a greater performance measure. E.g., crossing a road: perception of both sides occurs first, and only then the action.
 No perception occurs in a degenerate agent.
E.g., a clock. It does not observe its surroundings; no matter what happens outside, the clock works based on its inbuilt program.
 An ideal agent is described by ideal mappings: "Specifying which action an agent ought to take in response to any given percept sequence provides a design for an ideal agent."
E.g., the SQRT function in a calculator.
 Doing actions in order to modify future percepts - sometimes called information gathering - is an important part of rationality.
 A rational agent should be autonomous - it should learn from its own prior knowledge (experience).

nature of environments
Properties of Environment
An environment is everything in the world which surrounds the agent, but it is not a part of the agent itself. An environment can be described as the situation in which an agent is present. The environment is where the agent lives and operates, and it provides the agent with something to sense and act upon.
1. Fully observable vs. Partially Observable:
• If an agent sensor can sense or access the complete state of an environment at each point of time then it
is a fully observable environment, else it is partially observable.
• A fully observable environment is easy, as there is no need to maintain an internal state to keep track of the history of the world.

• If an agent has no sensors in an environment, then such an environment is called unobservable.
• Example: chess - the board is fully observable, as are the opponent's moves.
Driving - what is around the next bend is not observable, hence the environment is partially observable.

2. Deterministic vs Stochastic:
• If an agent's current state and selected action can completely determine the next state of the
environment, then such environment is called a deterministic environment.
• A stochastic environment is random in nature and cannot be determined completely by an
agent.
• In a deterministic, fully observable environment, agent does not need to worry about uncertainty.

3. Episodic vs Sequential:
• In an episodic environment, there is a series of one-shot actions, and only the current percept is
required for the action.
• However, in Sequential environment, an agent requires memory of past actions to determine
the next best actions.

4. Single-agent vs Multi-agent
• If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.
• However, if multiple agents are operating in an environment, then such an environment is
called a multi-agent environment.
The agent design problems in the multi-agent environment are different from single agent
environment.

5. Static vs Dynamic:
• If the environment can change itself while an agent is deliberating then such environment is
called a dynamic environment else it is called a static environment.
• Static environments are easy to deal because an agent does not need to continue looking at the world
while deciding for an action.
• However for dynamic environment, agents need to keep looking at the world at each action.
• Taxi driving is an example of a dynamic environment whereas Crossword puzzles are an
example of a static environment.

6. Discrete vs Continuous:
• If in an environment there are a finite number of percepts and actions that can be performed within
it, then such an environment is called a discrete environment else it is called continuous environment.
• A chess game comes under discrete environment as there is a finite number of moves that can be
performed.
• A self-driving car is an example of a continuous environment.

7. Known vs Unknown
• Known and unknown are not actually a feature of an environment, but it is an agent's state of
knowledge to perform an action.
• In a known environment, the results for all actions are known to the agent. While in unknown
environment, agent needs to learn how it works in order to perform an action.
• It is quite possible that a known environment to be partially observable and an Unknown
environment to be fully observable.


8. Accessible vs. Inaccessible


• If an agent can obtain complete and accurate information about the environment's state, then such an
environment is called an Accessible environment else it is called inaccessible.
• An empty room whose state can be defined by its temperature is an example of an accessible
environment.
• Information about an event on earth is an example of Inaccessible environment.


Task environments, which are essentially the "problems" to which rational agents are the "solutions."

PEAS: Performance Measure, Environment, Actuators, Sensors


Performance:
The output which we get from the agent. All the necessary results that an agent gives after processing come under its performance.

Environment:
All the surrounding things and conditions of an agent fall in this section. It basically consists of all the things
under which the agents work.

Actuators:
The devices, hardware or software through which the agent performs any actions or processes any
information to produce a result are the actuators of the agent.

Sensors:
The devices through which the agent observes and perceives its environment are the sensors of the agent.

Consider, e.g., the task of designing an automated taxi driver:

Agent Type: Taxi driver
Performance measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, speedometer, GPS, engine sensors, keyboard

Search Algorithm Terminologies:

 Search: Searching is a step-by-step procedure to solve a search problem in a given search space. A search problem can have three main factors:
1. Search Space: the set of possible solutions which a system may have.
2. Start State: the state from which the agent begins the search.
3. Goal test: a function which observes the current state and returns whether the goal state has been achieved.
 Search tree: A tree representation of a search problem is called a search tree. The root of the search tree is the root node, which corresponds to the initial state.
 Actions: A description of all the actions available to the agent.
 Transition model: A description of what each action does.
 Path Cost: A function which assigns a numeric cost to each path.
 Solution: An action sequence which leads from the start node to the goal node.
 Optimal Solution: A solution that has the lowest cost among all solutions.
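These components can be collected into one problem definition. The sketch below uses a made-up toy problem (states are integers on a number line) purely to illustrate the interface; the class and method names are assumptions, not a standard API:

```python
# Toy search problem: states are integers, actions move one step left/right.
class NumberLineProblem:
    def __init__(self, start, goal):
        self.start, self.goal = start, goal   # start state and goal state

    def actions(self, state):
        return ["+1", "-1"]                   # available actions

    def result(self, state, action):          # transition model
        return state + 1 if action == "+1" else state - 1

    def goal_test(self, state):               # goal test
        return state == self.goal

    def step_cost(self, state, action):       # each step adds 1 to path cost
        return 1

problem = NumberLineProblem(start=0, goal=3)
print(problem.result(0, "+1"))                # 1
print(problem.goal_test(3))                   # True
```

Any of the search algorithms in this unit can be written against exactly this interface.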



Example Problems
A toy problem is intended to illustrate or exercise various problem-solving methods. A real-world problem is one whose solutions people actually care about.
Toy Problems:

1) Vacuum World
States: The state is determined by both the agent location and the dirt locations. The agent is in one of 2 locations, each of which might or might not contain dirt. Thus there are 2 × 2² = 8 possible world states.
Initial state: Any state can be designated as the initial state.
Actions: In this simple environment, each state has just three actions: Left, Right, and Suck. Larger
environments might also include Up and Down.
Transition model: The actions have their expected effects, except that moving Left in the leftmost
square, moving Right in the rightmost square, and Sucking in a clean square have no effect. The complete
state space is shown in Figure.
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
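The vacuum-world formulation above translates directly into code. The sketch below is illustrative (the state encoding as a tuple (location, dirt_at_A, dirt_at_B) and the function names are assumptions):

```python
# Vacuum world: 2 locations (A, B), each clean or dirty -> 2 * 2**2 = 8 states.
# A state is (agent_location, dirt_at_A, dirt_at_B).
def vacuum_result(state, action):
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)      # no effect if already leftmost
    if action == "Right":
        return ("B", dirt_a, dirt_b)      # no effect if already rightmost
    if action == "Suck":
        if loc == "A":
            return (loc, False, dirt_b)   # sucking a clean square: no effect
        return (loc, dirt_a, False)
    return state

def vacuum_goal(state):
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b      # goal test: all squares clean

print(vacuum_result(("A", True, True), "Suck"))   # ('A', False, True)
```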

2) 8- Puzzle Problem

States: A state description specifies the location of each of the eight tiles and the blank in one of the nine
squares.
Initial state: Any state can be designated as the initial state. Note that any given goal can be reached
from exactly half of the possible initial states.


Actions: The simplest formulation defines the actions as movements of the blank space Left, Right, Up, or
Down. Different subsets of these are possible depending on where the blank is.
Transition model: Given a state and action, this returns the resulting state; for example, if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank switched.
Goal test: This checks whether the state matches the goal configuration shown in Figure.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.

3) 8 – Queens Problem:

 States: Any arrangement of 0 to 8 queens on the board is a state.


 Initial state: No queens on the board.
 Actions: Add a queen to any empty square.
 Transition model: Returns the board with a queen added to the specified square.
 Goal test: 8 queens are on the board, none attacked.
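The goal test for 8-queens can be made concrete. In the sketch below (an illustrative formulation, not taken from the notes), a full placement is a tuple of 8 column indices, one per row, so two queens can never share a row by construction:

```python
# Goal test for 8-queens: no two queens share a column or a diagonal.
# cols[i] is the column of the queen in row i.
def no_attacks(cols):
    n = len(cols)
    for i in range(n):
        for j in range(i + 1, n):
            if cols[i] == cols[j]:                  # same column
                return False
            if abs(cols[i] - cols[j]) == j - i:     # same diagonal
                return False
    return True

print(no_attacks((0, 4, 7, 5, 2, 6, 1, 3)))         # True (a valid solution)
print(no_attacks((0, 1, 2, 3, 4, 5, 6, 7)))         # False (all on one diagonal)
```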

search algorithms

Properties of Search Algorithms:

Following are the four essential properties of search algorithms to compare the efficiency of these algorithms:

Completeness: A search algorithm is said to be complete if it guarantees to return a solution if at least any
solution exists for any random input.

Optimality: If a solution found by an algorithm is guaranteed to be the best solution (lowest path cost) among all other solutions, then such a solution is said to be an optimal solution.

Time Complexity: Time complexity is a measure of time for an algorithm to complete its task.

Space Complexity: It is the maximum storage space required at any point during the search, expressed in terms of the complexity of the problem.

Types of search algorithms

Based on the search problems we can classify the search algorithms into uninformed (Blind search) search
and informed search (Heuristic search) algorithms.


Uninformed/Blind Search:

The uninformed search does not use any domain knowledge, such as closeness or the location of the goal. It operates in a brute-force way, as it only includes information about how to traverse the tree and how to identify leaf and goal nodes. Uninformed search explores the search tree without any information about the search space, such as the initial state, operators, and test for the goal, so it is also called blind search. It examines each node of the tree until it reaches the goal node.

It can be divided into five main types:

o Breadth-first search
o Uniform cost search
o Depth-first search
o Iterative deepening depth-first search
o Bidirectional Search

uninformed search strategies.

Uninformed search is a class of general-purpose search algorithms which operates in brute force-way.
Uninformed search algorithms do not have additional information about state or search space other than
how to traverse the tree, so it is also called blind search.

Following are the various types of uninformed search algorithms:

1. Breadth-first Search
2. Depth-first Search
3. Depth-limited Search
4. Iterative deepening depth-first search
5. Uniform cost search
6. Bidirectional Search

1. Breadth-first Search:
o Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search.
o The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level.
o The breadth-first search algorithm is an example of a general-graph search algorithm.
o Breadth-first search is implemented using a FIFO queue data structure.
Advantages:
o BFS will provide a solution if any solution exists.


o If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one that requires the least number of steps.

Disadvantages:

o It requires lots of memory since each level of the tree must be saved into memory to expand the next
level.
o BFS needs lots of time if the solution is far away from the root node.

Example:

In the below tree structure, we have shown the traversal of the tree using the BFS algorithm from the root node S to the goal node K. The BFS algorithm traverses in layers, so it will follow the path shown by the dotted arrow, and the traversed path will be:

S → A → B → C → D → G → H → E → F → I → K

Time Complexity: The time complexity of the BFS algorithm can be obtained from the number of nodes traversed in BFS until the shallowest node, where d = depth of the shallowest solution and b = branching factor:

T(b) = 1 + b + b² + b³ + ... + b^d = O(b^d)

Space Complexity: The space complexity of the BFS algorithm is given by the memory size of the frontier, which is O(b^d).

Completeness: BFS is complete, which means if the shallowest goal node is at some finite depth, then BFS will
find a solution.

Optimality: BFS is optimal if path cost is a non-decreasing function of the depth of the node.
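A BFS implementation matching the description above, using a FIFO queue of paths. The adjacency-list graph below is an invented example for illustration:

```python
from collections import deque

def bfs(graph, start, goal):
    """Expand nodes level by level; the first path reaching the goal is shallowest."""
    frontier = deque([[start]])        # FIFO queue of paths
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in explored:
                explored.add(nxt)
                frontier.append(path + [nxt])
    return None                        # no solution exists

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["E"], "E": ["K"]}
print(bfs(graph, "S", "K"))            # ['S', 'B', 'E', 'K']
```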

2. Depth-first Search
o Depth-first search is a recursive algorithm for traversing a tree or graph data structure.
o It is called the depth-first search because it starts from the root node and follows each path to its greatest
depth node before moving to the next path.
o DFS uses a stack data structure for its implementation.
o The process of the DFS algorithm is similar to the BFS algorithm.

Note: Backtracking is an algorithm technique for finding all possible solutions using recursion.

Advantage:

o DFS requires much less memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
o It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).

Disadvantage:

o There is the possibility that many states keep re-occurring, and there is no guarantee of finding the solution.
o The DFS algorithm goes deep down while searching, and sometimes it may enter an infinite loop.

Example:

In the below search tree, we have shown the flow of depth-first search, and it will follow the order:

Root node → left node → right node.

It will start searching from root node S and traverse A, then B, then D and E. After traversing E, it will backtrack the tree, as E has no other successor and the goal node has still not been found. After backtracking, it will traverse node C and then G, where it will terminate, as it has found the goal node.


Completeness: The DFS algorithm is complete within a finite state space, as it will expand every node within a limited search tree.

Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm. It is given by:

T(b) = 1 + b + b² + ... + b^m = O(b^m)

where m = maximum depth of any node, which can be much larger than d (the shallowest solution depth).

Space Complexity: The DFS algorithm needs to store only a single path from the root node, hence the space complexity of DFS is equivalent to the size of the fringe set, which is O(bm).

Optimal: The DFS algorithm is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.
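A DFS sketch using an explicit stack (LIFO), consistent with the traversal described above; the example graph is invented for illustration:

```python
def dfs(graph, start, goal):
    """Follow each path to its greatest depth before backtracking."""
    stack = [[start]]                  # LIFO stack of paths
    explored = set()
    while stack:
        path = stack.pop()
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        # Push successors in reverse so the leftmost child is expanded first.
        for nxt in reversed(graph.get(node, [])):
            if nxt not in explored:
                stack.append(path + [nxt])
    return None

graph = {"S": ["A", "C"], "A": ["B", "D", "E"], "C": ["G"]}
print(dfs(graph, "S", "G"))            # ['S', 'C', 'G']
```

On this graph the expansion order is S, A, B, D, E, then C, G, matching the example above.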

3. Depth-Limited Search Algorithm:

A depth-limited search algorithm is similar to depth-first search with a predetermined limit. Depth-limited search can solve the drawback of the infinite path in depth-first search. In this algorithm, the node at the depth limit is treated as if it has no successor nodes.

Depth-limited search can be terminated with two Conditions of failure:

o Standard failure value: It indicates that problem does not have any solution.
o Cutoff failure value: It defines no solution for the problem within a given depth limit.

Advantages:

Depth-limited search is Memory efficient.

Disadvantages:

o Depth-limited search also has a disadvantage of incompleteness.


o It may not be optimal if the problem has more than one solution.

Example:


Completeness: The DLS algorithm is complete if the solution lies within the depth limit.

Time Complexity: Time complexity of DLS algorithm is O(bℓ).

Space Complexity: Space complexity of DLS algorithm is O(b×ℓ).

Optimal: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even when ℓ > d.

4. Uniform-cost Search Algorithm:

Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost. Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree where the optimal cost is in demand. A uniform-cost search algorithm is implemented using a priority queue, which gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS algorithm if the path cost of all edges is the same.

Advantages:

o Uniform cost search is optimal because at every state the path with the least cost is chosen.

Disadvantages:

o It does not care about the number of steps involved in searching and is only concerned with path cost. As a result, this algorithm may get stuck in an infinite loop.


Example:

Completeness:

Uniform-cost search is complete: if there is a solution, UCS will find it.

Time Complexity:

Let C* be the cost of the optimal solution, and ε the smallest step cost toward the goal node. Then the number of steps is C*/ε + 1 (we add +1 because we start from state 0 and end at C*/ε).

Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

Space Complexity:

By the same logic, the worst-case space complexity of uniform-cost search is O(b^(1 + ⌊C*/ε⌋)).

Optimal: Uniform-cost search is always optimal as it only selects a path with the lowest path cost.
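A uniform-cost search sketch using Python's heapq module as the priority queue, ordered by cumulative path cost. The edge costs and graph below are invented for illustration:

```python
import heapq

def ucs(graph, start, goal):
    """graph maps node -> list of (neighbor, edge_cost) pairs."""
    frontier = [(0, start, [start])]            # priority queue keyed on cost
    best_cost = {start: 0}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path                   # lowest cumulative cost first
        for nxt, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None

graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 2)], "B": [("G", 1)]}
print(ucs(graph, "S", "G"))                     # (4, ['S', 'A', 'B', 'G'])
```

Note that S→B directly costs 5, but UCS finds the cheaper S→A→B→G route of total cost 4.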

5. Iterative deepening depth-first Search:

The iterative deepening algorithm is a combination of DFS and BFS algorithms. This search algorithm finds out
the best depth limit and does it by gradually increasing the limit until a goal is found.

This algorithm performs depth-first search up to a certain "depth limit", and it keeps increasing the depth limit
after each iteration until the goal node is found.

This Search algorithm combines the benefits of Breadth-first search's fast search and depth-first search's memory
efficiency.

The iterative deepening algorithm is a useful uninformed search strategy when the search space is large and the depth of the goal node is unknown.


Advantages:

o It combines the benefits of BFS and DFS search algorithm in terms of fast search and memory efficiency.

Disadvantages:

o The main drawback of IDDFS is that it repeats all the work of the previous phase.

Example:

The following tree structure shows the iterative deepening depth-first search. The IDDFS algorithm performs iterations until it finds the goal node. The iterations performed by the algorithm are:

1st iteration: A
2nd iteration: A, B, C
3rd iteration: A, B, D, E, C, F, G
4th iteration: A, B, D, H, I, E, C, F, K, G

In the fourth iteration, the algorithm will find the goal node.

Completeness:

This algorithm is complete if the branching factor is finite.

Time Complexity:

Let's suppose b is the branching factor and d is the depth; then the worst-case time complexity is O(b^d).

Space Complexity:

The space complexity of IDDFS is O(bd).

Optimal:

The IDDFS algorithm is optimal if path cost is a non-decreasing function of the depth of the node.
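An IDDFS sketch: depth-limited DFS run repeatedly with limit 0, 1, 2, and so on. The example graph mirrors the iteration example above (goal node K at depth 3):

```python
def iddfs(graph, start, goal, max_depth=20):
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None
        for nxt in graph.get(node, []):
            found = dls(nxt, limit - 1, path + [nxt])
            if found:
                return found
        return None

    for limit in range(max_depth + 1):   # gradually increase the depth limit
        result = dls(start, limit, [start])
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D", "E"], "D": ["H", "I"],
         "C": ["F", "G"], "F": ["K"]}
print(iddfs(graph, "A", "K"))            # ['A', 'C', 'F', 'K']
```

Each new iteration repeats all the work of the previous one, which is the drawback noted above, but the repeated shallow levels are cheap compared to the deepest level.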

6. Bidirectional Search Algorithm:



The bidirectional search algorithm runs two simultaneous searches, one from the initial state (forward search) and the other from the goal node (backward search), to find the goal node. Bidirectional search replaces one single search graph with two small subgraphs, in which one starts the search from the initial vertex and the other starts from the goal vertex. The search stops when these two graphs intersect each other.

Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.

Advantages:

o Bidirectional search is fast.


o Bidirectional search requires less memory

Disadvantages:

o Implementation of the bidirectional search tree is difficult.


o In bidirectional search, one should know the goal state in advance.

Example:

In the below search tree, the bidirectional search algorithm is applied. This algorithm divides one graph/tree into two sub-graphs. It starts traversing from node 1 in the forward direction and from goal node 16 in the backward direction.

The algorithm terminates at node 9, where the two searches meet.

Completeness: Bidirectional search is complete if we use BFS in both searches.

Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).

Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).

Optimal: Bidirectional search is optimal (when both halves use BFS and step costs are uniform).
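A bidirectional BFS sketch on an undirected graph: the two frontiers take turns expanding one level at a time, and the search stops at the first node both sides have seen. For brevity this illustrative version returns only the meeting node; the graph is invented:

```python
def bidirectional_meet(graph, start, goal):
    if start == goal:
        return start
    seen_f, seen_b = {start}, {goal}      # nodes seen by each search
    front, back = {start}, {goal}         # current frontiers
    while front:
        nxt = set()
        for node in front:
            for nb in graph.get(node, []):
                if nb in seen_b:
                    return nb             # the two searches intersect here
                if nb not in seen_f:
                    seen_f.add(nb)
                    nxt.add(nb)
        front = nxt
        # Alternate: swap the roles of the forward and backward searches.
        front, back = back, front
        seen_f, seen_b = seen_b, seen_f
    return None

graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(bidirectional_meet(graph, 1, 5))    # 3 (the searches meet in the middle)
```

A full implementation would also keep parent pointers on each side so the two half-paths can be stitched together at the meeting node.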

