B Section Mod 1

This document provides an overview of artificial intelligence (AI), including its goals, techniques, applications, and history. AI aims to create intelligent machines that can think and act like humans by studying processes like human learning, problem-solving, and decision-making. Key techniques for AI include machine learning, natural language processing, computer vision, and robotics. AI has been applied successfully in games, language translation, expert systems, and intelligent robots. The field has progressed significantly since the term was coined in 1956, with major advances and demonstrations in machine learning, reasoning, and autonomous systems.
What is Artificial Intelligence?

Since the invention of computers, their capability to perform various tasks has grown exponentially. Humans have expanded the power of computer systems in terms of their diverse working domains, their increasing speed, and their shrinking size over time.

Artificial Intelligence is a branch of Computer Science that pursues creating computers or machines as intelligent as human beings.

According to John McCarthy, the father of Artificial Intelligence, it is "the science and engineering of making intelligent machines, especially intelligent computer programs".

Artificial Intelligence is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, in a manner similar to how intelligent humans think.

AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.

Philosophy of AI
While exploiting the power of computer systems, human curiosity led people to wonder, "Can a machine think and behave as humans do?"

Thus, the development of AI started with the intention of creating in machines the kind of intelligence that we find, and regard highly, in humans.

Goals of AI
• To Create Expert Systems: Systems which exhibit intelligent behavior, learn, demonstrate, explain, and advise their users.
• To Implement Human Intelligence in Machines: Creating systems that understand, think, learn, and behave like humans.

What Contributes to AI?


Artificial intelligence is a science and technology based on disciplines such as Computer Science, Biology,
Psychology, Linguistics, Mathematics, and Engineering. A major thrust of AI is in the development of
computer functions associated with human intelligence, such as reasoning, learning, and problem
solving.

One or more of these areas can contribute to building an intelligent system.
Programming Without and With AI
Programming without and with AI differs in the following ways:

Programming Without AI:
• A computer program without AI can answer only the specific questions it is meant to solve.
• Modification in the program leads to a change in its structure.
• Modification is not quick and easy. It may affect the program adversely.

Programming With AI:
• A computer program with AI can answer the generic questions it is meant to solve.
• AI programs can absorb new modifications by putting highly independent pieces of information together. Hence you can modify even a minute piece of information of the program without affecting its structure.
• Program modification is quick and easy.

What is AI Technique?
In the real world, knowledge has some unwelcome properties:

• Its volume is huge, next to unimaginable.


• It is not well-organized or well-formatted.
• It keeps changing constantly.
AI Technique is a manner of organizing and using knowledge efficiently in such a way that:
• It should be perceivable by the people who provide it.
• It should be easily modifiable to correct errors.
• It should be useful in many situations even though it is incomplete or inaccurate.

AI techniques elevate the speed of execution of the complex programs they are equipped with.

Applications of AI
AI has been dominant in various fields such as:

• Gaming
AI plays a crucial role in strategic games such as chess, poker, and tic-tac-toe, where a machine can think of a large number of possible positions based on heuristic knowledge.
• Natural Language Processing
It is possible to interact with the computer that understands natural language spoken by
humans.
• Expert Systems
There are some applications which integrate machine, software, and special information to
impart reasoning and advising. They provide explanation and advice to the users.
• Vision Systems
These systems understand, interpret, and comprehend visual input on the computer. For example,
o A spying aeroplane takes photographs, which are used to figure out spatial information or a map of the area.
o Doctors use a clinical expert system to diagnose patients.
o Police use computer software that can recognize the face of a criminal against the stored portrait made by a forensic artist.

• Speech Recognition
Some intelligent systems are capable of hearing and comprehending language in terms of sentences and their meanings while a human talks to them. They can handle different accents, slang words, noise in the background, change in a human's voice due to cold, etc.

• Handwriting Recognition
Handwriting recognition software reads the text written on paper by a pen or on a screen by a stylus. It can recognize the shapes of the letters and convert them into editable text.

• Intelligent Robots
Robots are able to perform the tasks given by a human. They have sensors to detect physical
data from the real world such as light, heat, temperature, movement, sound, bump, and
pressure. They have efficient processors, multiple sensors and huge memory, to exhibit
intelligence. In addition, they are capable of learning from their mistakes and they can adapt to
the new environment.

History of AI
Here is the history of AI during the 20th century:

Year  Milestone / Innovation

1923  Karel Čapek's play "Rossum's Universal Robots" (R.U.R.) opens in London; first use of the word "robot" in English.

1943  Foundations for neural networks laid.

1945  Isaac Asimov, a Columbia University alumnus, coined the term Robotics.

1950  Alan Turing introduced the Turing Test for the evaluation of intelligence and published "Computing Machinery and Intelligence". Claude Shannon published a detailed analysis of chess playing as search.

1956  John McCarthy coined the term Artificial Intelligence. Demonstration of the first running AI program at Carnegie Mellon University.

1958  John McCarthy invents the LISP programming language for AI.

1964  Danny Bobrow's dissertation at MIT showed that computers can understand natural language well enough to solve algebra word problems correctly.

1965  Joseph Weizenbaum at MIT built ELIZA, an interactive program that carries on a dialogue in English.

1969  Scientists at Stanford Research Institute developed Shakey, a robot equipped with locomotion, perception, and problem solving.

1973  The Assembly Robotics group at Edinburgh University built Freddy, the Famous Scottish Robot, capable of using vision to locate and assemble models.

1979  The first computer-controlled autonomous vehicle, the Stanford Cart, was built.

1985  Harold Cohen created and demonstrated the drawing program Aaron.

1990  Major advances in all areas of AI:
      • Significant demonstrations in machine learning
      • Case-based reasoning
      • Multi-agent planning
      • Scheduling
      • Data mining, Web Crawler
      • Natural language understanding and translation
      • Vision, Virtual Reality
      • Games

1997  The Deep Blue chess program beats the then world chess champion, Garry Kasparov.

2000  Interactive robot pets become commercially available. MIT displays Kismet, a robot with a face that expresses emotions. The robot Nomad explores remote regions of Antarctica and locates meteorites.

While studying artificial intelligence, you need to know what intelligence is. This chapter covers the idea of intelligence, and the types and components of intelligence.

What is Intelligence?
Intelligence is the ability of a system to calculate, reason, perceive relationships and analogies, learn from experience, store and retrieve information from memory, solve problems, comprehend complex ideas, use natural language fluently, classify, generalize, and adapt to new situations.

Types of Intelligence
As described by Howard Gardner, an American developmental psychologist, intelligence comes in multiple forms:

• Linguistic intelligence: The ability to speak, recognize, and use mechanisms of phonology (speech sounds), syntax (grammar), and semantics (meaning). Examples: narrators, orators.

• Musical intelligence: The ability to create, communicate with, and understand meanings made of sound, with an understanding of pitch and rhythm. Examples: musicians, singers, composers.

• Logical-mathematical intelligence: The ability to use and understand relationships in the absence of action or objects, and to understand complex and abstract ideas. Examples: mathematicians, scientists.

• Spatial intelligence: The ability to perceive visual or spatial information, change it, and re-create visual images without reference to the objects, construct 3D images, and move and rotate them. Examples: map readers, astronauts, physicists.

• Bodily-kinesthetic intelligence: The ability to use the whole or part of the body to solve problems or fashion products, control over fine and coarse motor skills, and manipulate objects. Examples: players, dancers.

• Intra-personal intelligence: The ability to distinguish among one's own feelings, intentions, and motivations. Example: Gautama Buddha.

You can say a machine or a system is artificially intelligent when it is equipped with at least one, and at most all, of these intelligences.

What is Intelligence Composed of?

Intelligence is intangible. It is composed of:

1. Reasoning
2. Learning
3. Problem Solving
4. Perception
5. Linguistic Intelligence


Let us go through all the components briefly:


1. Reasoning: It is the set of processes that enables us to provide a basis for judgment, making decisions, and prediction. There are broadly two types:

Inductive Reasoning:
• It conducts specific observations to make broad general statements.
• Even if all of the premises are true in a statement, inductive reasoning allows for the conclusion to be false.

Deductive Reasoning:
• It starts with a general statement and examines the possibilities to reach a specific, logical conclusion.
• If something is true of a class of things in general, it is also true for all members of that class.

2. Intelligent Agents
2.1 Agents and Environments:

Fig 2.1: Agents and Environments

2.1.1 Agent:
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

✓ A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and other body parts for actuators.
✓ A robotic agent might have cameras and infrared range finders for sensors, and various motors for actuators.
✓ A software agent receives keystrokes, file contents, and network packets as sensory inputs, and acts on the environment by displaying on the screen, writing files, and sending network packets.

An AI system is composed of an agent and its environment. The agents act in their environment. The environment may contain other agents.
What are Agent and Environment?

An agent is anything that can perceive its environment through sensors and act upon that environment through effectors.

• A human agent has sensory organs such as eyes, ears, nose, tongue, and skin parallel to the sensors, and other organs such as hands, legs, and mouth for effectors.
• A robotic agent has cameras and infrared range finders for the sensors, and various motors and actuators for the effectors.
• A software agent has encoded bit strings as its programs and actions.

Agents Terminology
• Performance Measure of Agent: The criteria that determine how successful an agent is.
• Behavior of Agent: The action that the agent performs after any given sequence of percepts.
• Percept: The agent's perceptual inputs at a given instance.
• Percept Sequence: The history of all that the agent has perceived till date.
• Agent Function: A map from the percept sequence to an action.

Rationality
Rationality is nothing but the state of being reasonable, sensible, and having a good sense of judgment.

Rationality is concerned with expected actions and results depending upon what the agent has perceived.
Performing actions with the aim of obtaining useful information is an important part of rationality.

What is Ideal Rational Agent?

An ideal rational agent is one which is capable of performing the expected actions to maximize its performance measure, on the basis of:
• Its percept sequence
• Its built-in knowledge base

Rationality of an agent depends on the following:

1. The performance measures, which determine the degree of success.
2. The agent's percept sequence till now.
3. The agent's prior knowledge about the environment.
4. The actions that the agent can carry out.

A rational agent always performs the right action, where the right action means the action that causes the agent to be most successful for the given percept sequence. The problem the agent solves is characterized by its Performance Measure, Environment, Actuators, and Sensors (PEAS).

The Structure of Intelligent Agents


Agent’s structure can be viewed as:

• Agent = Architecture + Agent Program

• Architecture = the machinery that an agent executes on.

• Agent Program = an implementation of an agent function.

Simple Reflex Agents
• They choose actions only based on the current percept.
• They are rational only if a correct decision can be made on the basis of the current percept alone.
• Their environment must be completely observable.

Condition-Action Rule – It is a rule that maps a state (condition) to an action.

Model-Based Reflex Agents


They use a model of the world to choose their actions. They maintain an internal state.

Model: knowledge about “how the things happen in the world”.

Internal State: It is a representation of unobserved aspects of current state depending on percept history.

Updating state requires the information about


• How the world evolves.
• How the agent’s actions affect the world.


Goal-Based Agents
They choose their actions in order to achieve goals. Goal-based approach is more flexible than reflex agent
since the knowledge supporting a decision is explicitly modeled, thereby allowing for modifications.

• Goal: It is the description of desirable situations.

Utility-Based Agents
They choose actions based on a preference (utility) for each state.

Goals are inadequate when:

• There are conflicting goals, out of which only a few can be achieved.

• Goals have some uncertainty of being achieved, and one needs to weigh the likelihood of success against the importance of a goal.

The Nature of Environments
Some programs operate in the entirely artificial environment confined to keyboard input, database,
computer file systems and character output on a screen.

In contrast, some software agents (software robots or softbots) exist in rich, unlimited softbot domains. Their simulator has a very detailed, complex environment, and the software agent needs to choose from a long array of actions in real time. A softbot designed to scan the online preferences of a customer and show interesting items to the customer works in the real as well as an artificial environment.

The most famous artificial environment is the Turing Test environment, in which one real and other
artificial agents are tested on equal ground. This is a very challenging environment as it is highly difficult
for a software agent to perform as well as a human.

Turing Test
The success of an intelligent behavior of a system can be measured with Turing Test.

Two persons and a machine to be evaluated participate in the test. One of the two persons plays the role of the tester. Each of them sits in a different room. The tester is unaware of who is the machine and who is the human. He interrogates both of them by typing questions and sending them, and receives typed responses.

This test aims at fooling the tester. If the tester fails to distinguish the machine's response from the human response, then the machine is said to be intelligent.

Properties of Environment
The environment has multifold properties:
• Discrete / Continuous: If there are a limited number of distinct, clearly defined, states of the environment,
the environment is discrete (For example, chess); otherwise it is continuous (For example, driving).

• Observable / Partially Observable: If it is possible to determine the complete state of the environment at
each time point from the percepts it is observable; otherwise it is only partially observable.

• Static / Dynamic: If the environment does not change while an agent is acting, then it is static; otherwise it
is dynamic.

• Single agent / Multiple agents: The environment may contain other agents which may be of the same or
different kind as that of the agent.

• Accessible vs. inaccessible: If the agent’s sensory apparatus can have access to the complete state of the
environment, then the environment is accessible to that agent.

• Deterministic vs. Non-deterministic: If the next state of the environment is completely determined by the
current state and the actions of the agent, then the environment is deterministic; otherwise it is non-
deterministic.

• Episodic vs. Non-episodic: In an episodic environment, each episode consists of the agent perceiving and
then acting. The quality of its action depends just on the episode itself. Subsequent episodes do not
depend on the actions in the previous episodes. Episodic environments are much simpler because the
agent does not need to think ahead.

5. Popular Search Algorithms

Searching is the universal technique of problem solving in AI. There are some single-player games such as
tile games, Sudoku, crossword, etc. The search algorithms help you to search for a particular position in
such games.

Single-Agent Pathfinding Problems

Games such as the 3x3 eight-tile, 4x4 fifteen-tile, and 5x5 twenty-four-tile puzzles are single-agent pathfinding challenges. They consist of a matrix of tiles with one blank tile. The player is required to arrange the tiles by sliding a tile either vertically or horizontally into the blank space, with the aim of accomplishing some objective.

The other examples of single agent pathfinding problems are Travelling Salesman Problem, Rubik’s Cube,
and Theorem Proving.

Search Terminology
Problem Space: It is the environment in which the search takes place. (A set of states and set of
operators to change those states)
Problem Instance: It is Initial state + Goal state

Problem Space Graph: It represents the problem state. States are shown by nodes and operators are shown by edges.

Depth of a problem: Length of a shortest path or shortest sequence of operators from Initial State to goal
state.
Space Complexity: The maximum number of nodes that are stored in memory.

Time Complexity: The maximum number of nodes that are created.

Admissibility: A property of an algorithm to always find an optimal solution.

Branching Factor: The average number of child nodes in the problem space graph.

Depth: Length of the shortest path from the initial state to the goal state.

Brute-Force Search Strategies
They are the simplest, as they do not need any domain-specific knowledge. They work fine with a small number of possible states.

Requirements –

• State description

• A set of valid operators


• Initial state
• Goal state description

Breadth-First Search
It starts from the root node, explores the neighboring nodes first, and moves towards the next-level neighbors. It generates one tree at a time until the solution is found. It can be implemented using a FIFO queue data structure. This method provides the shortest path to the solution.

If the branching factor (average number of child nodes for a given node) is b and the depth is d, then the number of nodes at level d is b^d.

The total number of nodes created in the worst case is b + b^2 + b^3 + … + b^d.

Disadvantage: Since each level of nodes is saved for creating the next one, it consumes a lot of memory. The space requirement to store nodes is exponential.

Its complexity depends on the number of nodes. It can check duplicate nodes.
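The FIFO-queue idea above can be sketched in Python. The graph, its adjacency lists, and the function name here are illustrative assumptions, not part of the text; the queue holds partial paths so that the first path reaching the goal is a shortest one:

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Expand nodes level by level using a FIFO queue of partial paths."""
    frontier = deque([[start]])          # queue of partial paths
    visited = {start}                    # duplicate-node check
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path                  # first hit is a shortest path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                frontier.append(path + [child])
    return None

# Hypothetical toy state space
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['F'], 'E': ['F']}
path = breadth_first_search(graph, 'A', 'F')
```

The `visited` set implements the duplicate check mentioned above; without it, the same node could be queued many times and the memory cost would grow even faster.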
Depth-First Search
It is implemented in recursion with a LIFO stack data structure. It creates the same set of nodes as the Breadth-First method, only in a different order.

As only the nodes on the single path from root to leaf are stored in each iteration, the space requirement to store nodes is linear: with branching factor b and depth m, the storage space is b·m.

Disadvantage: This algorithm may not terminate and can go on infinitely along one path. The solution to this issue is to choose a cut-off depth. If the ideal cut-off is d and the chosen cut-off is less than d, this algorithm may fail. If the chosen cut-off is more than d, execution time increases.

Its complexity depends on the number of paths. It cannot check duplicate nodes.
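The cut-off idea can be sketched as a depth-limited DFS. The toy graph and the limits below are illustrative assumptions: with a cut-off smaller than the goal depth the search fails, exactly as described above:

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """DFS that gives up below a cut-off depth, so it cannot descend forever."""
    if path is None:
        path = [node]
    if node == goal:
        return path
    if limit == 0:
        return None                      # cut-off reached: abandon this branch
    for child in graph.get(node, []):
        if child not in path:            # avoid cycling on the current path
            found = depth_limited_search(graph, child, goal, limit - 1, path + [child])
            if found:
                return found
    return None

# Illustrative toy graph: the goal F sits at depth 3
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'E': ['F']}
```

Only the current root-to-leaf path is kept in memory, matching the linear space requirement stated above.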

Bidirectional Search
It searches forward from initial state and backward from goal state till both meet to identify a common
state.

The path from initial state is concatenated with the inverse path from the goal state. Each search is done
only up to half of the total path.

Uniform Cost Search


Sorting is done in increasing cost of the path to a node. It always expands the least cost node. It is identical
to Breadth First search if each transition has the same cost.

It explores paths in the increasing order of cost.

Disadvantage: There can be multiple long paths with the cost ≤ C*. Uniform Cost search must explore
them all.
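A minimal sketch of this least-cost expansion uses a priority queue; the weighted toy graph is an illustrative assumption. If every edge had the same cost, the expansion order would match Breadth-First search, as noted above:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Always pop and expand the least-cost frontier node."""
    frontier = [(0, start, [start])]     # entries: (cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path            # popped at its cheapest cost
        if node in explored:
            continue
        explored.add(node)
        for child, step_cost in graph.get(node, {}).items():
            if child not in explored:
                heapq.heappush(frontier, (cost + step_cost, child, path + [child]))
    return None

# Hypothetical weighted graph: the cheap A-B-C-D route beats the direct edges
graph = {'A': {'B': 1, 'C': 4}, 'B': {'C': 1, 'D': 5}, 'C': {'D': 1}}
```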

Iterative Deepening Depth-First Search


It performs a depth-first search to level 1, starts over, executes a complete depth-first search to level 2, and continues in this way until the solution is found.

It never creates a node until all shallower nodes have been generated, and it only saves a stack of nodes. The algorithm ends when it finds a solution at depth d. The number of nodes created at depth d is b^d and at depth d-1 is b^(d-1).
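The restart-with-a-deeper-cut-off loop can be sketched as follows; the toy graph is an illustrative assumption, with the goal found on the depth-3 pass after the shallower passes fail:

```python
def iterative_deepening_dfs(graph, start, goal, max_depth=20):
    """Run a depth-limited DFS with cut-off 0, 1, 2, ... until the goal is found."""
    def dls(node, path, limit):
        if node == goal:
            return path
        if limit == 0:
            return None
        for child in graph.get(node, []):
            if child not in path:
                found = dls(child, path + [child], limit - 1)
                if found:
                    return found
        return None

    for depth in range(max_depth + 1):   # deepen the cut-off one level at a time
        result = dls(start, [start], depth)
        if result:
            return result
    return None

# Illustrative toy graph: F is reached on the depth-3 iteration
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'E': ['F']}
```

Re-generating the shallow levels on every pass looks wasteful, but since the deepest level dominates the node count, the total work stays within a constant factor of a single depth-d search.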

Comparison of Various Algorithms Complexities


Let us see the performance of algorithms based on various criteria:

Criterion        Breadth First   Depth First   Bidirectional   Uniform Cost   Iterative Deepening
Time             b^d             b^m           b^(d/2)         b^d            b^d
Space            b^d             b·m           b^(d/2)         b^d            b·d
Optimality       Yes             No            Yes             Yes            Yes
Completeness     Yes             No            Yes             Yes            Yes

Informed (Heuristic) Search Strategies
To solve large problems with a large number of possible states, problem-specific knowledge needs to be added to increase the efficiency of search algorithms.

Heuristic Evaluation Functions

They estimate the cost of an optimal path between two states. A heuristic function for sliding-tile games is computed by counting the number of moves that each tile needs to reach its goal position and adding up these numbers for all tiles.
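For the eight-tile puzzle, this tile-by-tile move count is the Manhattan-distance heuristic. A minimal sketch, assuming states are 9-tuples read row by row with 0 for the blank (the representation is an illustrative choice):

```python
def manhattan_distance(state, goal):
    """Sum, over all tiles, of the moves each tile needs to reach its goal cell.
    A 3x3 board is a 9-tuple read row by row; 0 marks the blank (not counted)."""
    total = 0
    for tile in range(1, 9):
        row, col = divmod(state.index(tile), 3)
        goal_row, goal_col = divmod(goal.index(tile), 3)
        total += abs(row - goal_row) + abs(col - goal_col)
    return total

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
start = (1, 2, 3, 4, 5, 6, 7, 0, 8)      # tile 8 is one slide from its place
```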

Pure Heuristic Search


It expands nodes in the order of their heuristic values. It creates two lists, a closed list for the already
expanded nodes and an open list for the created but unexpanded nodes.

In each iteration, a node with a minimum heuristic value is expanded and placed in the closed list, and all its child nodes are created. The heuristic function is applied to the child nodes, and they are placed in the open list according to their heuristic values. The shorter paths are saved and the longer ones are disposed of.

A* Search
It is the best-known form of Best-First search. It avoids expanding paths that are already expensive and expands the most promising paths first.

f(n) = g(n) + h(n), where

• g(n) = the cost (so far) to reach the node
• h(n) = the estimated cost to get from the node to the goal
• f(n) = the estimated total cost of the path through n to the goal

It is implemented using a priority queue ordered by increasing f(n).
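The priority-queue formulation of f(n) = g(n) + h(n) can be sketched as follows. The graph and the heuristic table are illustrative assumptions; h never overestimates the true remaining cost here, which is what lets A* return an optimal path:

```python
import heapq

def a_star(graph, h, start, goal):
    """Best-first search ordered by f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]       # (f, g, node, path)
    best_g = {start: 0}                              # cheapest known g per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for child, step in graph.get(node, {}).items():
            new_g = g + step
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g                # better route found
                heapq.heappush(frontier, (new_g + h[child], new_g, child, path + [child]))
    return None

# Hypothetical graph and an admissible heuristic (never overestimates)
graph = {'S': {'A': 1, 'B': 4}, 'A': {'B': 2, 'G': 12}, 'B': {'G': 5}}
h = {'S': 7, 'A': 6, 'B': 4, 'G': 0}
```

The `best_g` table is what "avoids expanding paths that are already expensive": a node is re-queued only when a strictly cheaper route to it appears.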

Greedy Best First Search


It expands the node that is estimated to be closest to the goal, i.e., it expands nodes based on f(n) = h(n). It is implemented using a priority queue.

Disadvantage: It can get stuck in loops. It is not optimal.

Local Search Algorithms
They start from a prospective solution and then move to a neighboring solution. They can return a valid solution even if they are interrupted at any time before they end.

Hill-Climbing Search
It is an iterative algorithm that starts with an arbitrary solution to a problem and attempts to find a better
solution by changing a single element of the solution incrementally. If the change produces a better
solution, an incremental change is taken as a new solution. This process is repeated until there are no
further improvements.

function Hill-Climbing(problem) returns a state that is a local maximum
  inputs: problem, a problem
  local variables: current, a node
                   neighbor, a node

  current ← Make-Node(Initial-State[problem])
  loop do
    neighbor ← a highest-valued successor of current
    if Value[neighbor] ≤ Value[current] then return State[current]
    current ← neighbor
  end
Disadvantage: This algorithm is neither complete, nor optimal.
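The loop above can be sketched in Python against a toy objective. The objective function and neighbor rule below are illustrative assumptions chosen so the single maximum at x = 3 is easy to check:

```python
def hill_climbing(start, neighbors, value):
    """Repeatedly move to the highest-valued neighbor; stop at a local maximum."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current               # no neighbor improves: local maximum
        current = best

# Toy objective (an illustrative assumption): maximize f(x) = -(x - 3)^2
value = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
```

On an objective with several peaks, the same loop would stop at whichever peak is nearest to the start, which is exactly the incompleteness noted above.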

Local Beam Search


This algorithm holds k states at any given time. At the start, these states are generated randomly. The successors of these k states are computed with the help of an objective function. If any of these successors attains the maximum value of the objective function, then the algorithm stops.

Otherwise, the 2k states (the initial k states and the k successors of those states) are placed in a pool. The pool is then sorted numerically. The highest k states are selected as new initial states. This process continues until a maximum value is reached.

function BeamSearch(problem, k) returns a solution state

  start with k randomly generated states
  loop
    generate all successors of all k states
    if any of the states is a solution then return that state
    else select the k best successors
  end

Simulated Annealing
Annealing is the process of heating and cooling a metal to change its internal structure and modify its physical properties. When the metal cools, its new structure is retained, and the metal keeps its newly obtained properties. In the simulated annealing process, the temperature is kept variable.

We initially set the temperature high and then allow it to ‘cool' slowly as the algorithm proceeds. When
the temperature is high, the algorithm is allowed to accept worse solutions with high frequency.

Start

1. Initialize k = 0 and L = an integer number of variables.
2. From state i to a neighboring state j, compute the performance difference ∆.
3. If ∆ ≤ 0 then accept the move, else if exp(-∆/T(k)) > random(0,1) then accept it.
4. Repeat steps 2 and 3 for L(k) steps.
5. Set k = k + 1 and repeat steps 2 through 4 until the stopping criterion is met.

End
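The acceptance rule in step 3 can be sketched as follows. The geometric cooling schedule, the fixed random seed, and the toy objective are all illustrative assumptions, not part of the procedure above:

```python
import math
import random

def simulated_annealing(start, neighbor, cost, t0=10.0, cooling=0.95, steps=500):
    """Accept a worse candidate with probability exp(-delta/T); T decays each step."""
    random.seed(0)                       # fixed seed for a reproducible run
    current, t = start, t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or math.exp(-delta / t) > random.random():
            current = candidate          # accept: better, or worse but lucky
        t *= cooling
    return current

# Toy objective (illustrative): minimize (x - 7)^2 over the integers
cost = lambda x: (x - 7) ** 2
neighbor = lambda x: x + random.choice([-1, 1])
```

While t is large, worsening moves are accepted often, letting the search escape local minima; as t shrinks, the loop degenerates into greedy descent.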
Travelling Salesman Problem
In this problem, the objective is to find a low-cost tour that starts from a city, visits all cities en route exactly once, and ends at the same starting city.

Start

1. Find out all (n − 1)! possible solutions, where n is the total number of cities.
2. Determine the minimum cost by finding out the cost of each of these (n − 1)! solutions.
3. Finally, keep the one with the minimum cost.

End
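The enumerate-all-(n − 1)!-tours procedure can be sketched with `itertools.permutations`; the four-city distance table is an illustrative assumption:

```python
from itertools import permutations

def tsp_brute_force(cities, dist):
    """Enumerate all (n-1)! tours from a fixed start city and keep the cheapest."""
    start, *rest = cities
    best_tour, best_cost = None, float('inf')
    for order in permutations(rest):     # (n-1)! orderings of the other cities
        tour = [start, *order, start]
        tour_cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if tour_cost < best_cost:
            best_tour, best_cost = tour, tour_cost
    return best_tour, best_cost

# Hypothetical symmetric distance table for four cities
dist = {'A': {'B': 1, 'C': 3, 'D': 4},
        'B': {'A': 1, 'C': 1, 'D': 5},
        'C': {'A': 3, 'B': 1, 'D': 1},
        'D': {'A': 4, 'B': 5, 'C': 1}}
```

Fixing the start city is what reduces the count from n! to (n − 1)!; the factorial growth is also why this exhaustive approach only works for very small n.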


6.1.1 Percept:
We use the term percept to refer to the agent's perceptual inputs at any given instant.

6.1.2 Percept Sequence:
An agent's percept sequence is the complete history of everything the agent has ever perceived.

6.1.3 Agent function:


Mathematically speaking, we say that an agent's behavior is described by the agent function that
maps any given percept sequence to an action.

6.1.4 Agent program
Internally, the agent function for an artificial agent will be implemented by an agent program. It is
important to keep these two ideas distinct. The agent function is an abstract mathematical
description; the agent program is a concrete implementation, running on the agent architecture.

To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in Fig 2.1.5. This particular world has just two locations: squares A and B. The vacuum agent perceives
which square it is in and whether there is dirt in the square. It can choose to move left, move right,
suck up the dirt, or do nothing. One very simple agent function is the following: if the current
square is dirty, then suck, otherwise move to the other square. A partial tabulation of this agent
function is shown in Fig 2.1.6.

Fig 2.1.5: A vacuum-cleaner world with just two locations.


6.1.5 Agent function

Percept Sequence Action

[A, Clean] Right

[A, Dirty] Suck

[B, Clean] Left

[B, Dirty] Suck

[A, Clean], [A, Clean] Right

[A, Clean], [A, Dirty] Suck

Fig 2.1.6: Partial tabulation of a simple agent function for the example: vacuum-cleaner
world shown in the Fig 2.1.5

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

Fig 2.1.6(i): The REFLEX-VACUUM-AGENT program is invoked for each new percept (location, status) and returns an action each time.
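The program above translates almost line for line into Python; only the function name and the percept encoding as a (location, status) pair are presentation choices here:

```python
def reflex_vacuum_agent(percept):
    """Condition-action rules for the two-square vacuum world (squares A and B)."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:                                # location is B
        return 'Left'

# Reproduce the partial tabulation of the agent function
for percept in [('A', 'Clean'), ('A', 'Dirty'), ('B', 'Clean'), ('B', 'Dirty')]:
    print(percept, '->', reflex_vacuum_agent(percept))
```

Note that the agent ignores the percept history entirely: it is a pure condition-action mapping on the current percept, the defining trait of a simple reflex agent.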
Strategies of Solving Tic-Tac-Toe Game Playing

Tic-Tac-Toe Game Playing:

Tic-Tac-Toe is a simple and yet an interesting board game. Researchers have used various approaches to
study the Tic-Tac-Toe game. For example, Fok and Ong and Grim et al. have used artificial neural
network based strategies to play it. Citrenbaum and Yakowitz discuss games like Go-Moku,
Hex and Bridg-It which share some similarities with Tic-Tac-Toe.
Fig 1.

A Formal Definition of the Game:

The board used to play the Tic-Tac-Toe game consists of 9 cells laid out in the form of a 3x3 matrix (Fig.
1). The game is played by 2 players and either of them can start. Each of the two players is assigned a
unique symbol (generally 0 and X). Each player alternately gets a turn to make a move. Making a move is
compulsory and cannot be deferred. In each move a player places the symbol assigned to him/her in a
hitherto blank cell.

Let a track be defined as any row, column or diagonal on the board. Since the board is a square
matrix with 9 cells, all rows, columns and diagonals have exactly 3 cells. It can be easily observed that
there are 3 rows, 3 columns and 2 diagonals, and hence a total of 8 tracks on the board (Fig. 1). The goal
of the game is to fill all the three cells of any track on the board with the symbol assigned to one before
the opponent does the same with the symbol assigned to him/her. At any point in the game, if there exists a track all three of whose cells have been marked with the same symbol, then the player to whom that symbol has been assigned wins and the game terminates. If no track has all of its cells marked with the same symbol when there are no more blank cells on the board, then the game is drawn.

Let the priority of a cell be defined as the number of tracks passing through it. The priorities of the
nine cells on the board according to this definition are tabulated in Table 1. Alternatively, let the
priority of a track be defined as the sum of the priorities of its three cells. The priorities of the eight
tracks on the board according to this definition are tabulated in Table 2. The prioritization of the cells
and the tracks lays the foundation of the heuristics to be used in this study. These heuristics are
somewhat similar to those proposed by Rich and Knight.
Strategy 1:

Algorithm:

1. View the vector as a ternary number. Convert it to a decimal number.

2. Use the computed number as an index into Move-Table and access the vector stored there.

3. Set the new board to that vector.

Procedure:

1) Elements of vector:

0: Empty

1: X

2: O

→ the vector is a ternary number

2) Store inside the program a move-table (lookup table):

a) Elements in the table: 19,683 (3^9)

b) Element = a vector which describes the most suitable move from the board configuration
represented by that index.
Comments:
1. A lot of space to store the Move-Table.

2. A lot of work to specify all the entries in the Move-Table.

3. Difficult to extend.
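Step 1 of the algorithm can be sketched in a few lines. The function name below is illustrative, not part of the original text; it shows how a 9-cell board vector read as a ternary number yields an index into the 19,683-entry move-table.

```python
# Sketch of Strategy 1's board-to-index conversion (names are illustrative).
# Board cells: 0 = empty, 1 = X, 2 = O, read as the digits of a base-3 number.

def board_to_index(board):
    """Treat the 9-element board vector as a ternary number (cell 0 is the
    most significant digit) and return its decimal value, 0..19682."""
    index = 0
    for cell in board:
        index = index * 3 + cell
    return index

print(board_to_index([0] * 9))          # empty board -> 0
print(board_to_index([1] + [0] * 8))    # X in the first cell -> 6561 (3^8)
```

Since every one of the 3^9 = 19,683 board vectors maps to a distinct index, the move-table must hold an entry for each, which is the storage cost noted in comment 1 above.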

Explanation of Strategy 2 of solving the Tic-Tac-Toe problem

Strategy 2:

Data Structure:

1) Use a vector, called board, as in Strategy 1.

2) However, the elements of the vector are:
2: Empty
3: X
5: O

3) Turn of move: indexed by an integer 1, 2, 3, etc.

Function Library:

1. Make2:

a) Returns a location on the game board:

IF (board[5] = 2)
RETURN 5; // the center cell
ELSE
RETURN any cell that is not at the board's corner; // (cells 2, 4, 6, 8)

b) Let P represent X or O.

c) can_win(P):
P has already filled at least two cells on a straight line (horizontal, vertical, or diagonal).

d) cannot_win(P) = NOT(can_win(P))

2. Posswin(P):
IF (cannot_win(P))
RETURN 0;
ELSE
RETURN the index of the empty cell on the line of can_win(P)

Let odd-numbered turns be turns of X.
Let even-numbered turns be turns of O.

3. Go(n): make a move at cell n.

Algorithm:
1. Turn = 1: (X moves)
Go(1) // make a move at the top-left cell
2. Turn = 2: (O moves)
IF board[5] is empty THEN
Go(5)
ELSE
Go(1)
3. Turn = 3: (X moves)
IF board[9] is empty THEN
Go(9)
ELSE
Go(3)
4. Turn = 4: (O moves)
IF Posswin(X) <> 0 THEN
Go(Posswin(X)) // prevent the opponent from winning
ELSE
Go(Make2)
5. Turn = 5: (X moves)
IF Posswin(X) <> 0 THEN
Go(Posswin(X)) // win for X
ELSE IF Posswin(O) <> 0 THEN
Go(Posswin(O)) // prevent the opponent from winning
ELSE IF board[7] is empty THEN
Go(7)
ELSE
Go(3)

Comments:
1. Not efficient in time, as it has to check several conditions before making each
move.

2. Easier to understand the program’s strategy.


3. Hard to generalize.
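The 2/3/5 encoding makes Posswin cheap to compute: the product of a track's three cells is 3 * 3 * 2 = 18 exactly when X holds two cells and the third is empty, and 5 * 5 * 2 = 50 for O. The following is a runnable sketch of this product test; the function and variable names are illustrative, not from the original text.

```python
# Sketch of Strategy 2's Posswin using the 2/3/5 encoding (names are
# illustrative). empty = 2, X = 3, O = 5; board is indexed 1..9.

TRACKS = [(1, 2, 3), (4, 5, 6), (7, 8, 9),      # rows
          (1, 4, 7), (2, 5, 8), (3, 6, 9),      # columns
          (1, 5, 9), (3, 5, 7)]                 # diagonals

def posswin(board, p):
    """Return the index of the cell that lets player p (3 for X, 5 for O)
    complete a track, or 0 if no such track exists."""
    target = p * p * 2                  # 18 for X, 50 for O
    for track in TRACKS:
        if board[track[0]] * board[track[1]] * board[track[2]] == target:
            for cell in track:
                if board[cell] == 2:    # the one empty cell on that track
                    return cell
    return 0

board = [0] + [2] * 9                   # board[0] is unused; cells are 1..9
board[1] = board[2] = 3                 # X holds cells 1 and 2
print(posswin(board, 3))                # 3: X can win by taking cell 3
print(posswin(board, 5))                # 0: O has no winning track
```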

Introduction to Problem Solving, General problem solving

Problem solving is a process of generating solutions from observed data. A problem is characterized by:

− a set of goals,
− a set of objects, and
− a set of operations.

These could be ill-defined and may evolve during problem solving.

Searching Solutions:
To build a system to solve a problem:
1. Define the problem precisely
2. Analyze the problem
3. Isolate and represent the task knowledge that is necessary to solve the problem
4. Choose the best problem-solving technique and apply it to the particular problem.
Defining the problem as State Space Search:
The state space representation forms the basis of most of the AI methods.
• Formulate a problem as a state space search by showing the legal problem states, the legal
operators, and the initial and goal states.
• A state is defined by the specification of the values of all attributes of interest in the world
• An operator changes one state into the other; it has a precondition which is the value of certain
attributes prior to the application of the operator, and a set of effects, which are the attributes
altered by the operator
• The initial state is where you start
• The goal state is the partial description of the solution

Formal Description of the problem:


1. Define a state space that contains all the possible configurations of the relevant objects.
2. Specify one or more states within that space that describe possible situations from which the
problem-solving process may start (initial states).
3. Specify one or more states that would be acceptable as solutions to the problem (goal states).
4. Specify a set of rules that describe the actions (operations) available.

State-Space Problem Formulation:

Example: A problem is defined by four items:


1. initial state, e.g., "at Arad"
2. actions or successor function: S(x) = set of action–state pairs
e.g., S(Arad) = {<Arad → Zerind, Zerind>, …}
3. goal test (or set of goal states)
e.g., x = "at Bucharest", Checkmate(x)
4. path cost (additive)
e.g., sum of distances, number of actions executed, etc.
c(x, a, y) is the step cost, assumed to be ≥ 0
A solution is a sequence of actions leading from the initial state to a goal state
Example: 8-queens problem

1. Initial State: Any arrangement of 0 to 8 queens on board.


2. Operators: add a queen to any square.
3. Goal Test: 8 queens on board, none attacked.
4. Path cost: not applicable or zero (because only the final state counts, though the search
cost might be of interest).
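The 8-queens goal test above can be written directly from the formulation. This is a minimal sketch under one assumption not stated in the text: a state is represented as a list of (row, col) positions, one per placed queen; the function names are illustrative.

```python
# Sketch of the 8-queens goal test (illustrative names; a state is assumed
# to be a list of (row, col) queen positions).

def attacks(q1, q2):
    """True if queens at positions q1 and q2 attack each other: same row,
    same column, or same diagonal (equal row and column distance)."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(queens):
    """Goal: 8 queens on the board, no pair attacking each other."""
    return len(queens) == 8 and not any(
        attacks(queens[i], queens[j])
        for i in range(len(queens)) for j in range(i + 1, len(queens)))

# A known solution, one queen per column
solution = [(0, 0), (4, 1), (7, 2), (5, 3), (2, 4), (6, 5), (1, 6), (3, 7)]
print(goal_test(solution))              # True
```

With the "add a queen to any square" operator, any partial placement fails the test only because fewer than 8 queens are present or two attack each other, which is exactly what the two conditions check.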

State Spaces versus Search Trees:


• State Space
o Set of valid states for a problem
o Linked by operators
o e.g., 20 valid states (cities) in the Romanian travel problem
• Search Tree
– Root node = initial state
– Child nodes = states that can be visited from parent
– Note that the depth of the tree can be infinite
• E.g., via repeated states
– Partial search tree
• Portion of tree that has been expanded so far
– Fringe
• Leaves of partial search tree, candidates for expansion
Search trees = data structure to search state-space

Properties of Search Algorithms

Which search algorithm one should use will generally depend on the problem domain.
There are four important factors to consider:

1. Completeness – Is a solution guaranteed to be found if at least one solution exists?

2. Optimality – Is the solution found guaranteed to be the best (or lowest cost) solution if there exists
more than one solution?

3. Time Complexity – The upper bound on the time required to find a solution, as a function of the
complexity of the problem.

4. Space Complexity – The upper bound on the storage space (memory) required at any point during the
search, as a function of the complexity of the problem.
Systematic Control Strategies (Blind searches):

Breadth First Search:

Let us discuss these strategies using the water jug problem. These may be applied to any search problem.

Construct a tree with the initial state as its root.

Generate all the offspring of the root by applying each of the applicable rules to the initial state.

Now for each leaf node, generate all its successors by applying all the rules that are appropriate.

8 Puzzle Problem.

The 8 puzzle consists of eight numbered, movable tiles set in a 3x3 frame. One cell of the frame is always
empty thus making it possible to move an adjacent numbered tile into the empty cell. Such a puzzle is
illustrated in following diagram.
The program is to change the initial configuration into the goal configuration. A solution to the problem
is an appropriate sequence of moves, such as "move tile 5 to the right, move tile 7 to the left, move tile
6 down, etc."

Solution:

To solve a problem using a production system, we must specify the global database, the rules, and the
control strategy. For the 8-puzzle problem these three components correspond to the problem states,
the moves, and the goal. In this problem each tile configuration is a state. The set of all configurations
is the space of problem states, or the problem space; there are only 362,880 (= 9!) different
configurations of the 8 tiles and the blank space. Once the problem states have been conceptually
identified, we must construct a computer representation, or description, of them. This description is
then used as the database of the production system. For the 8-puzzle, a straightforward description is
a 3x3 matrix of numbers. The initial global database is the description of the initial problem state.
Virtually any kind of data structure can be used to describe states.

A move transforms one problem state into another state. The 8-puzzle is conveniently interpreted as
having the following four moves: move the blank space left, move the blank up, move the blank right,
and move the blank down. These moves are modeled by production rules that operate on the state
descriptions in the appropriate manner.

The rules each have preconditions that must be satisfied by a state description in order for them to be
applicable to that state description. Thus the precondition for the rule associated with “move blank up”
is derived from the requirement that the blank space must not already be in the top row.

The problem goal condition forms the basis for the termination condition of the production system. The
control strategy repeatedly applies rules to state descriptions until a description of a goal state is
produced. It also keeps track of the rules that have been applied so that it can compose them into a
sequence representing the problem solution. A solution to the 8-puzzle problem is given in the following figure.
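One such production rule, "move blank up," can be sketched as follows. The function name and state representation (a 3x3 list of lists with 0 for the blank) are illustrative assumptions; the precondition is the one stated above, that the blank must not already be in the top row.

```python
# Sketch of the "move blank up" production rule for the 8-puzzle
# (illustrative names; 0 marks the blank cell).

def move_blank_up(state):
    """Return the successor state, or None if the precondition fails."""
    r = next(row for row in range(3) if 0 in state[row])
    c = state[r].index(0)
    if r == 0:                        # precondition: blank not in top row
        return None
    new_state = [row[:] for row in state]        # copy, don't mutate
    new_state[r][c], new_state[r - 1][c] = new_state[r - 1][c], new_state[r][c]
    return new_state

start = [[2, 8, 3],
         [1, 6, 4],
         [7, 0, 5]]
print(move_blank_up(start))   # [[2, 8, 3], [1, 0, 4], [7, 6, 5]]
```

The other three moves differ only in the direction of the swap and the corresponding precondition (blank not in the bottom row, leftmost column, or rightmost column).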

Example:- Depth – First – Search traversal and Breadth - First - Search traversal

for 8 – puzzle problem is shown in following diagrams.


Exhaustive Searches, BFS and DFS

Search is the systematic examination of states to find path from the start/root state to the goal state.

Many traditional search algorithms are used in AI applications. For complex problems, the traditional
algorithms are unable to find the solution within some practical time and space limits. Consequently,
many special techniques are developed; using heuristic functions. The algorithms that use heuristic
functions are called heuristic algorithms. Heuristic algorithms are not really intelligent; they appear to
be intelligent because they achieve better performance.

Heuristic algorithms are more efficient because they take advantage of feedback from the data to direct
the search path.

Uninformed search

Also called blind, exhaustive or brute-force search, uses no information about the problem to guide the
search and therefore may not be very efficient.

Informed Search:

Also called heuristic or intelligent search, uses information about the problem to guide the search,
usually guesses the distance to a goal state and therefore efficient, but the search may not be always
possible.

Uninformed Search Methods:


Breadth- First -Search:
Consider the state space of a problem that takes the form of a tree. Now, if we search the goal along
each breadth of the tree, starting from the root and continuing up to the largest depth, we call it
breadth first search.

• Algorithm:
1. Create a variable called NODE-LIST and set it to contain just the initial state
2. Until a goal state is found or NODE-LIST is empty do:
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty,
quit
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state
ii. If the new state is a goal state, quit and return this state
iii. Otherwise, add the new state to the end of NODE-LIST
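The NODE-LIST algorithm can be sketched in Python, with one simplification: the goal is checked when a state is removed rather than when it is generated, and a visited set guards against repeated states. The `successors` mapping and all names are illustrative assumptions, not from the original text.

```python
# Minimal BFS sketch of the NODE-LIST algorithm (illustrative names;
# `successors` maps each state to the states reachable from it).

from collections import deque

def bfs(initial, goal, successors):
    """Return the goal state if reachable from `initial`, else None."""
    node_list = deque([initial])          # NODE-LIST, a FIFO queue
    visited = {initial}                   # avoid re-expanding repeated states
    while node_list:
        e = node_list.popleft()           # remove the FIRST element
        if e == goal:
            return e
        for child in successors.get(e, []):
            if child not in visited:
                visited.add(child)
                node_list.append(child)   # add new states to the END
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['D', 'G'], 'D': ['C', 'F']}
print(bfs('A', 'G', graph))               # G
```

The FIFO discipline of `deque` (append at the back, pop from the front) is what makes this breadth first: all states at depth d are removed before any state at depth d+1.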
BFS illustrated:

Step 1: Initially fringe contains only one node corresponding to the source state A.
Figure 1
FRINGE: A

Step 2: A is removed from fringe. The node is expanded, and its children B and C are generated.
They are placed at the back of fringe.

Figure 2
FRINGE: B C

Step 3: Node B is removed from fringe and is expanded. Its children D, E are generated and put
at the back of fringe.

Figure 3
FRINGE: C D E

Step 4: Node C is removed from fringe and is expanded. Its children D and G are added to the
back of fringe.
Figure 4
FRINGE: D E D G

Step 5: Node D is removed from fringe. Its children C and F are generated and added to the back
of fringe.

Figure 5
FRINGE: E D G C F

Step 6: Node E is removed from fringe. It has no children.

Figure 6
FRINGE: D G C F

Step 7: D is expanded; its children B and F are added to the back of fringe.

Figure 7
FRINGE: G C F B F
Step 8: G is selected for expansion. It is found to be a goal node. So the algorithm returns the
path A C G by following the parent pointers of the node corresponding to G. The algorithm
terminates.

Breadth first search is:

• One of the simplest search strategies


• Complete. If there is a solution, BFS is guaranteed to find it.
• If there are multiple solutions, then a minimal solution will be found
• The algorithm is optimal (i.e., admissible) if all operators have the same cost; even otherwise,
breadth first search finds a solution with the shortest path length (fewest steps), though not
necessarily the lowest cost.
• Time complexity: O(b^d)
• Space complexity: O(b^d)
• Optimality: Yes (for uniform step costs)
where b is the branching factor (maximum number of successors of any node),
d is the depth of the shallowest goal node, and
m is the maximum length of any path in the search space.
Advantages: Finds the path of minimal length to the goal.
Disadvantages:
• Requires the generation and storage of a tree whose size is exponential in the depth of the
shallowest goal node.
• The breadth first search algorithm cannot be effectively used unless the search space is quite
small.

Depth- First- Search.


We may sometimes search the goal along the largest depth of the tree, and move up only when further
traversal along the depth is not possible. We then attempt to find alternative offspring of the parent of
the node (state) last visited. If we visit the nodes of a tree using the above principles to search the goal,
the traversal made is called depth first traversal and consequently the search strategy is called depth
first search.

• Algorithm:
1. Create a variable called NODE-LIST and set it to initial state
2. Until a goal state is found or NODE-LIST is empty do
a. Remove the first element from NODE-LIST and call it E. If NODE-LIST was empty,
quit
b. For each way that each rule can match the state described in E do:
i. Apply the rule to generate a new state
ii. If the new state is a goal state, quit and return this state
iii. Otherwise, add the new state in front of NODE-LIST
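The only change from the BFS algorithm is step (iii): new states go to the front of NODE-LIST, which turns it into a stack. A runnable sketch under the same illustrative assumptions (a `successors` mapping, a visited set so cycles do not cause non-termination):

```python
# Minimal DFS sketch of the NODE-LIST algorithm (illustrative names).
# Identical to the BFS version except that new states are added to the
# FRONT of NODE-LIST, making it a stack (LIFO).

def dfs(initial, goal, successors):
    """Return the goal state if reachable from `initial`, else None."""
    node_list = [initial]                   # NODE-LIST used as a stack
    visited = {initial}                     # cycle check, so the search halts
    while node_list:
        e = node_list.pop(0)                # remove the FIRST element
        if e == goal:
            return e
        # reversed() so the first child listed is expanded first
        for child in reversed(successors.get(e, [])):
            if child not in visited:
                visited.add(child)
                node_list.insert(0, child)  # add new states to the FRONT
    return None

graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['G'], 'D': ['C', 'F']}
print(dfs('A', 'G', graph))                 # G
```

Note that without the visited set, a cycle such as D → C → D would make the search loop forever, which is the incompleteness discussed below.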
DFS illustrated:

A State Space Graph

Step 1: Initially fringe contains only the node for A.

Figure 1
FRINGE: A

Step 2: A is removed from fringe. A is expanded and its children B and C are put in front of
fringe.

Figure 2
FRINGE: B C
Step 3: Node B is removed from fringe, and its children D and E are pushed in front of fringe.

Figure 3
FRINGE: D E C

Step 4: Node D is removed from fringe. C and F are pushed in front of fringe.

Figure 4
FRINGE: C F E C

Step 5: Node C is removed from fringe. Its child G is pushed in front of fringe.

Figure 5
FRINGE: G F E C
Step 6: Node G is expanded and found to be a goal node.
Figure 6
FRINGE: G F E C

The solution path A-B-D-C-G is returned and the algorithm terminates.

Depth first search is:

1. The algorithm takes exponential time.

2. If N is the maximum depth of a node in the search space, in the worst case the algorithm will
take time O(b^N).
3. The space taken is linear in the depth of the search tree: O(bN).

Note that the time taken by the algorithm is related to the maximum depth of the search tree. If the
search tree has infinite depth, the algorithm may not terminate. This can happen if the search space is
infinite. It can also happen if the search space contains cycles. The latter case can be handled by
checking for cycles in the algorithm. Thus Depth First Search is not complete.

Exhaustive searches - Iterative Deepening DFS

Description:

• It is the search strategy resulting when you combine BFS and DFS, thus combining the advantages
of each strategy: the completeness and optimality of BFS and the modest memory
requirements of DFS.

• IDS works by looking for the best search depth d. It starts with depth limit 0 and performs a
depth-limited DFS; if the search fails, it increases the depth limit by 1 and tries again with depth
limit 1, and so on (first d = 0, then 1, then 2, and so on) until a depth d is reached where a goal
is found.

Algorithm:

procedure IDDFS(root)
    for depth from 0 to ∞
        found ← DLS(root, depth)
        if found ≠ null
            return found

procedure DLS(node, depth)
    if depth = 0 and node is a goal
        return node
    else if depth > 0
        foreach child of node
            found ← DLS(child, depth − 1)
            if found ≠ null
                return found
    return null

Performance Measure:
o Completeness: IDS, like BFS, is complete when the branching factor b is finite.

o Optimality: IDS, like BFS, is optimal when the steps are of the same cost.
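The pseudocode above translates directly into Python. This is a sketch under illustrative assumptions: states form a `successors` mapping, and an explicit `max_depth` bound is added so the loop terminates even when no goal exists (the pseudocode's unbounded `for depth from 0 to ∞` would not).

```python
# Runnable sketch of the IDDFS pseudocode (illustrative names).

def dls(node, depth, goal, successors):
    """Depth-limited search: return the goal node if it lies exactly within
    `depth` moves of `node`, else None."""
    if depth == 0:
        return node if node == goal else None
    for child in successors.get(node, []):
        found = dls(child, depth - 1, goal, successors)
        if found is not None:
            return found
    return None

def iddfs(root, goal, successors, max_depth=20):
    """Run DLS with depth limits 0, 1, 2, ...; return (goal, depth) on
    success, (None, None) once max_depth is exhausted."""
    for depth in range(max_depth + 1):
        found = dls(root, depth, goal, successors)
        if found is not None:
            return found, depth
    return None, None

tree = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F', 'G']}
print(iddfs('A', 'G', tree))              # ('G', 2)
```

Because each iteration restarts the depth-limited DFS from the root, shallow levels are re-expanded many times, yet the total work stays O(b^d) since the deepest level dominates, which is why IDS keeps BFS's completeness with only DFS's memory footprint.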
