
[CS3101]: Artificial Intelligence

Course Objective:
● To provide a strong foundation of fundamental concepts in Artificial Intelligence.
● To identify the type of an AI problem
● To acquire knowledge on intelligent systems and agents, formalization of knowledge and
reasoning
Course Outcome:
After successful completion of the course, students will be able to:
CO1: Demonstrate knowledge of the building blocks of AI as presented in terms of intelligent agents.
CO2: Analyze and formalize a problem as a state-space search.
CO3: Describe the strengths and limitations of various state-space search algorithms, and choose the appropriate algorithm.
CO4: Develop intelligent algorithms for constraint satisfaction problems.
CO5: Apply knowledge representation and use it to perform inference or planning.
CO6: Formulate and solve problems with uncertain information using Bayesian approaches.

UNIT 1: Introduction
Introduction, Overview of Artificial intelligence: Problems of AI, AI
technique, Tic - Tac - Toe problem. Intelligent Agents, Agents &
environment, nature of environment, structure of agents, goal based
agents, utility based agents, learning agents.

Introduction:

➢ Artificial Intelligence is concerned with the design of intelligence in an artificial device. The term was coined by John McCarthy in 1956.
➢ Intelligence is the ability to acquire, understand and apply the
knowledge to achieve goals in the world.
➢ AI is the study of the mental faculties through the use of computational models

➢ AI is the study of intellectual/mental processes as computational processes.

➢ An AI program will demonstrate a high level of intelligence, to a degree that equals or exceeds the intelligence required of a human in performing some task.
➢ AI is unique, sharing borders with Mathematics,
Computer Science, Philosophy, Psychology, Biology,
Cognitive Science and many others.
➢ Although there is no clear definition of AI or even of intelligence, it can be described as an attempt to build machines that, like humans, can think and act, and are able to learn and use knowledge to solve problems on their own.

History of AI:

Important research that laid the groundwork for AI:

➢ In 1931, Gödel laid the foundation of theoretical computer science.
• He published the first universal formal language and showed that mathematics itself is either flawed or allows for unprovable but true statements.
➢ In 1936, Turing reformulated Gödel's result and Church's extension thereof.
➢ In 1956, John McCarthy coined the term "Artificial Intelligence" as the
topic of the Dartmouth Conference, the first conference devoted to the
subject.

➢ In 1957, the General Problem Solver (GPS) was demonstrated by Newell, Shaw & Simon.
➢ In 1958, John McCarthy (MIT) invented the Lisp language.
➢ In 1959, Arthur Samuel (IBM) wrote the first game-playing program, for
checkers, to achieve sufficient skill to challenge a world champion.
➢ In 1963, Ivan Sutherland's MIT dissertation on Sketchpad introduced the
idea of interactive graphics into computing.
➢ In 1966, Ross Quillian (PhD dissertation, Carnegie Inst. of Technology;
now CMU) demonstrated semantic nets
➢ In 1967, the Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford) was demonstrated, interpreting mass spectra of organic chemical compounds. It was the first successful knowledge-based program for scientific reasoning.
➢ In 1967, Doug Engelbart invented the mouse at SRI.
➢ In 1968, Marvin Minsky & Seymour Papert published Perceptrons, demonstrating the limits of simple neural nets.
➢ In 1972, Prolog developed by Alain Colmerauer.
➢ In the mid-1980s, neural networks became widely used with the backpropagation algorithm (first described by Werbos in 1974).
➢ In the 1990s, major advances were made in all areas of AI, with significant demonstrations
in machine learning, intelligent tutoring, case-based reasoning, multi-agent
planning, scheduling, uncertain reasoning, data mining, natural language
understanding and translation, vision, virtual reality, games, and other topics.
➢ In 1997, Deep Blue beat the World Chess Champion, Garry Kasparov.
➢ In 2002, iRobot, founded by researchers at the MIT Artificial Intelligence
Lab, introduced Roomba, a vacuum cleaning robot. By 2006, two million had
been sold.

Foundations of Artificial Intelligence:

Philosophy
e.g., foundational issues (can a machine think?), issues of knowledge and belief, mutual knowledge
Psychology and Cognitive Science
e.g., problem solving skills
Neuroscience
e.g., brain architecture
Computer Science And Engineering
e.g., complexity theory, algorithms, logic and inference, programming
languages, and system building.
Mathematics and Physics
e.g., statistical modeling, continuous mathematics,
Statistical Physics,
Complex Systems.

Sub-Areas of AI:
1) Game Playing

The Deep Blue chess program beat world champion Garry Kasparov.

2) Speech Recognition

The PEGASUS spoken language interface to American Airlines' EAASY SABRE reservation system allows users to obtain flight information and make reservations over the telephone. The 1990s saw significant advances in speech recognition, so that limited systems are now successful.
3) Computer Vision

Face recognition programs are in use by banks, government, etc. The ALVINN system from CMU autonomously drove a van from Washington, D.C. to San Diego (all but 52 of 2,849 miles), averaging 63 mph day and night, and in all weather conditions. Other applications include handwriting recognition, electronics and manufacturing inspection, photo interpretation, baggage inspection, and reverse engineering to automatically construct a 3D geometric model.
4) Expert Systems

Application-specific systems that rely on obtaining the knowledge of human experts in an area and programming that knowledge into a system.
a. Diagnostic Systems: MYCIN system for diagnosing bacterial
infections of the blood and suggesting treatments. Intellipath
pathology diagnosis system (AMA approved). Pathfinder medical
diagnosis system, which suggests tests and makes diagnoses.
Whirlpool customer assistance center.
b. System Configuration

DEC's XCON system for custom hardware configuration. Radiotherapy treatment planning.
c. Financial Decision Making

Credit card companies, mortgage companies, banks, and the U.S. government employ AI systems to detect fraud and expedite financial transactions. For example, AMEX credit check.
d. Classification Systems

Put information into one of a fixed set of categories using several sources of information. E.g., financial decision making systems. NASA developed a system for classifying very faint areas in astronomical images into either stars or galaxies with very high accuracy by learning from human experts' classifications.
5) Mathematical Theorem Proving

Use inference methods to prove new theorems.


6) Natural Language Understanding

AltaVista's translation of web pages. Translation of Caterpillar truck manuals into 20 languages.
7) Scheduling and Planning

Automatic scheduling for manufacturing. DARPA's DART system was used in the Desert Storm and Desert Shield operations to plan the logistics of people and supplies. American Airlines rerouting contingency planner. European Space Agency planning and scheduling of spacecraft assembly, integration and verification.
8) Artificial Neural Networks:
9) Machine Learning

Applications of AI:
AI algorithms have attracted the close attention of researchers and have also been applied successfully to solve problems in engineering. Nevertheless, for large and complex problems, AI algorithms consume considerable computation time due to the stochastic nature of their search approaches.

1) Business: financial strategies

2) Engineering: checking designs, offering suggestions to create new products, expert systems for all engineering problems
3) Manufacturing: assembly, inspection and maintenance

4) Medicine: monitoring, diagnosing

5) Education: in teaching

6) Fraud detection

7) Object identification

8) Information retrieval

9) Space shuttle scheduling

Building AI Systems:

1) Perception

Intelligent biological systems are physically embodied in the world and experience the world through their sensors (senses). For an autonomous vehicle, input might be images from a camera and range information from a rangefinder. For a medical diagnosis system, perception is the set of symptoms and test results that have been obtained and input to the system manually.
2) Reasoning

Inference, decision-making, and classification from what is sensed and what the internal "model" of the world is. This might be a neural network, a logical deduction system, Hidden Markov Model induction, heuristic search of a problem space, Bayes network inference, genetic algorithms, etc.
Includes the areas of knowledge representation, problem solving, decision theory, planning, game theory, machine learning, uncertainty reasoning, etc.
3) Action

Biological systems interact with their environment by actuation, speech, etc. All behavior is centered around actions in the world. Examples include controlling the steering of a Mars rover or autonomous vehicle, or suggesting tests and making diagnoses for a medical diagnosis system. Includes the areas of robot actuation, natural language generation, and speech synthesis.

The definitions of AI:

a) "The exciting new effort to make b) "The study of mental faculties


computers think . . . machines with minds, through the use of computational
in the full and literal sense" (Haugeland, models" (Charniak and McDermott,
1985) 1985)

"The automation of] activities that we "The study of the computations that
associate with human thinking, activities make it possible to perceive, reason,
such as decision-making, problem solving, and act" (Winston, 1992)
learning..."(Bellman, 1978)
c) "The art of creating machines that perform d) "A field of study that seeks to explain
functions that require intelligence when and emulate intelligent behavior in
performed by people" (Kurzweil, 1990) terms of computational processes"
(Schalkoff, 1 990)
"The study of how to make computers
do things at which, at the moment, "The branch of computer science
people are better" (Rich and Knight, 1 that is concerned with the automation
99 1 ) of intelligent behavior" (Luger and
Stubblefield, 1993)

The definitions on the top, (a) and (b), are concerned with reasoning, whereas those on the bottom, (c) and (d), address behavior. The definitions on the left, (a) and (c), measure success in terms of human performance, and those on the right, (b) and (d), measure success against an ideal concept of intelligence called rationality.

Intelligent Systems:

In order to design intelligent systems, it is important to categorize them into four categories (Luger and Stubblefield, 1993; Russell and Norvig, 2003):
1. Systems that think like humans

2. Systems that think rationally

3. Systems that behave like humans

4. Systems that behave rationally

Think / Human-Like: Cognitive Science Approach ("Machines that think like humans")
Think / Rationally: Laws of Thought Approach ("Machines that think rationally")
Act / Human-Like: Turing Test Approach ("Machines that behave like humans")
Act / Rationally: Rational Agent Approach ("Machines that behave rationally")

Scientific Goal: To determine which ideas about knowledge representation, learning, rule systems, search, and so on, explain various sorts of real intelligence.
Engineering Goal: To solve real-world problems using AI techniques such as knowledge representation, learning, rule systems, search, and so on.
Traditionally, computer scientists and engineers have been more
interested in the engineering goal, while psychologists, philosophers and
cognitive scientists have been more interested in the scientific goal.
Cognitive Science: Think Human-Like

a. Requires a model of human cognition. Sufficiently precise models allow simulation by computers.

b. Focus is not just on behavior and I/O, but also on the reasoning process.

c. Goal is not just to produce human-like behavior, but to produce a sequence of steps of the reasoning process similar to the steps followed by a human in solving the same task.

Laws of thought: Think Rationally

a. The study of mental faculties through the use of computational models; that is, the study of the computations that make it possible to perceive, reason, and act.

b. Focus is on inference mechanisms that are provably correct and guarantee an optimal solution.

c. Goal is to formalize the reasoning process as a system of logical rules and procedures of inference.

d. Develop systems of representation to allow inferences of the form "Socrates is a man. All men are mortal. Therefore Socrates is mortal."
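As an optional illustration (not part of the original notes), the syllogism above can be mechanized as a tiny forward-chaining rule application; the encoding of facts and rules below is a hypothetical sketch:

facts = {("man", "Socrates")}          # known fact: man(Socrates)
rules = [("man", "mortal")]            # rule: for all x, man(x) implies mortal(x)

def forward_chain(facts, rules):
    # Repeatedly apply every rule to every matching fact until nothing new is derived.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# The derived set contains ('mortal', 'Socrates'): mortal(Socrates) has been inferred.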

Turing Test: Act Human-Like

a. The art of creating machines that perform functions requiring intelligence when performed by people; that is, the study of how to make computers do things which, at the moment, people do better.

b. Focus is on external action and behavior, not on an internal representation of the world.

c. Example: Turing Test

o Three rooms contain a person, a computer, and an interrogator.

o The interrogator can communicate with the other two only by teletype (so that the machine cannot be given away by the appearance or voice of the person).

o The interrogator tries to determine which is the person and which is the machine.

o The machine tries to fool the interrogator into believing that it is the human, and the person also tries to convince the interrogator that he or she is the human.

o If the machine succeeds in fooling the interrogator, then we conclude that the machine is intelligent.

Rational agent: Act Rationally

a. Tries to explain and emulate intelligent behavior in terms of computational processes; that is, it is concerned with the automation of intelligence.

b. Focus is on systems that act sufficiently well, if not optimally, in all situations.

c. Goal is to develop systems that are rational and sufficient.

The difference between strong AI and weak AI:

Strong AI makes the bold claim that computers can be made to think on a
level (at least) equal to humans.

Weak AI simply states that some "thinking-like" features can be added to


computers to make them more useful tools... and this has already started to
happen (witness expert systems, drive-by-wire cars and speech recognition
software).
AI Problems:

AI problems (speech recognition, NLP, vision, automatic programming, knowledge representation, etc.) can be paired with techniques (NN, search, Bayesian nets, production systems, etc.). AI problems can be classified into two types:

1. Common-place tasks (Mundane Tasks)


2. Expert tasks

Common-Place Tasks:
1. Recognizing people, objects.

2. Communicating (through natural language).

3. Navigating around obstacles on the streets.

Expert tasks:
1. Medical diagnosis.

2. Mathematical problem solving

3. Playing games like chess

These tasks cannot be done by all people, and can only be performed by skilled
specialists.
Clearly, tasks of the first type are easy for humans to perform, and almost all humans are able to master them. The second range of tasks requires skill
development and/or intelligence and only some specialists can perform them
well. However, when we look at what computer systems have been able to
achieve to date, we see that their achievements include performing
sophisticated tasks like medical diagnosis, performing symbolic integration,
proving theorems and playing chess.

Intelligent Agents:
Agents and environments:

Fig 1: Agents and Environments

Agent:
An Agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
✓ A human agent has eyes, ears, and other organs for sensors and
hands, legs, mouth, and other body parts for actuators.
✓ A robotic agent might have cameras and infrared range finders for
sensors and various motors for actuators.
✓ A software agent receives keystrokes, file contents, and network packets as sensory inputs and acts on the environment by displaying output on the screen, writing files, and sending network packets.

Percept:
We use the term percept to refer to the agent's perceptual inputs at any given instant.

Percept Sequence:
An agent's percept sequence is the complete history of everything the agent has ever
perceived.

Agent function:
Mathematically speaking, we say that an agent's behavior is described by
the agent function that maps any given percept sequence to an action.
Agent program:
Internally, the agent function for an artificial agent will be implemented
by an agent program. It is important to keep these two ideas distinct. The
agent function is an abstract mathematical description; the agent program
is a concrete implementation, running on the agent architecture.

To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in Fig 2. This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise move to the other square. A partial tabulation of this agent function is shown in Fig 3.

Fig 2: A vacuum-cleaner world with just two locations.

Agent function

Percept Sequence                    Action

[A, Clean]                          Right
[A, Dirty]                          Suck
[B, Clean]                          Left
[B, Dirty]                          Suck
[A, Clean], [A, Clean]              Right
[A, Clean], [A, Dirty]              Suck
Fig 3: Partial tabulation of a simple agent function for the vacuum-cleaner world shown in Fig 2.

function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left

Fig 3 (i): The REFLEX-VACUUM-AGENT program is invoked for each new percept (location, status) and returns an action each time.
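A direct Python rendering of this agent program might look as follows; this is a sketch added for illustration, with the driver loop and percept values mirroring the tabulation in Fig 3:

def reflex_vacuum_agent(percept):
    # percept is a (location, status) pair, e.g. ("A", "Dirty").
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

# Step the agent through a few percepts from the tabulation above.
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
    print(percept, "->", reflex_vacuum_agent(percept))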

Rationality
Rationality is nothing but the state of being reasonable, sensible, and having a good sense of judgment.
Rationality is concerned with expected actions and results depending upon what the agent
has perceived. Performing actions with the aim of obtaining useful information is an
important part of rationality.

What is an Ideal Rational Agent?

An ideal rational agent is one that is capable of performing the expected actions to maximize its performance measure, on the basis of −
• Its percept sequence
• Its built-in knowledge base
Rationality of an agent depends on the following −
• The performance measures, which determine the degree of success.
• Agent’s Percept Sequence till now.
• The agent’s prior knowledge about the environment.
• The actions that the agent can carry out.
A rational agent always performs the right action, where the right action means the action that causes the agent to be most successful for the given percept sequence. The problem the agent
solves is characterized by Performance Measure, Environment, Actuators, and Sensors
(PEAS).
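For instance, a PEAS description for the two-square vacuum-cleaner agent used later in these notes could be written down as a simple record; the specific field values below are an illustrative assumption, not taken from the original notes:

# Hypothetical PEAS description for the two-square vacuum-cleaner agent.
peas_vacuum = {
    "Performance measure": ["amount of dirt cleaned", "number of moves used"],
    "Environment": ["squares A and B", "dirt"],
    "Actuators": ["left/right wheels", "suction unit"],
    "Sensors": ["location sensor", "dirt sensor"],
}

for component, items in peas_vacuum.items():
    print(component + ":", ", ".join(items))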
The Structure of Intelligent Agents
Agent’s structure can be viewed as −

• Agent = Architecture + Agent Program


• Architecture = the machinery that an agent executes on.
• Agent Program = an implementation of an agent function.
Simple Reflex Agents

• They choose actions only based on the current percept.


• They are rational only if a correct decision can be made on the basis of the current percept alone.
• Their environment is completely observable.

Condition-Action Rule − It is a rule that maps a state (condition) to an action.


1. Simple Reflex Agents
They are the most basic form of agents and act only on the current percept. They have very low intelligence capability, as they do not have the ability to store a past state. These types of agents respond to events based on pre-defined rules, which are pre-programmed. They perform well only when the environment is fully observable. Thus, these agents are helpful only in a limited number of cases, for example a smart thermostat. Simple reflex agents hold a static table from which they fetch all the pre-defined rules for acting (see the thermostat sketch below).
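A minimal sketch of such a condition-action mapping, using the smart-thermostat example mentioned above (the thresholds and action names are assumptions for illustration):

def thermostat_agent(temperature_c):
    # Simple reflex agent: the action depends only on the current percept.
    if temperature_c < 18:       # condition: too cold
        return "heat_on"
    elif temperature_c > 24:     # condition: too warm
        return "cool_on"
    else:                        # condition: comfortable
        return "do_nothing"

print(thermostat_agent(15))  # heat_on
print(thermostat_agent(22))  # do_nothing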

Model Based Reflex Agents
They use a model of the world to choose their actions. They maintain an internal state.
Model − knowledge about “how things happen in the world”.
Internal State − It is a representation of unobserved aspects of current state depending on
percept history.
Updating the state requires the information about −

• How the world evolves.


• How the agent’s actions affect the world.
2. Model-Based Agents
It is an advanced version of the simple reflex agent. Like simple reflex agents, it can respond to events based on pre-defined conditions; on top of that, it can store an internal state (past information) based on previous events. Model-based agents update the internal state at each step. These internal states aid agents in handling a partially observable environment. To perform any action, the agent relies on both its internal state and the current percept. However, it is almost impossible to find the exact state when dealing with a partially observable environment.
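The idea of an internal state that is updated from the previous state and the current percept can be sketched as a small skeleton; the update logic and the rule below are hypothetical placeholders:

class ModelBasedReflexAgent:
    # Sketch of a model-based reflex agent: internal state + current percept.

    def __init__(self):
        self.state = {}  # internal model of aspects of the world seen so far

    def update_state(self, percept):
        # Assumption: the model simply remembers the latest reading per sensor,
        # standing in for "how the world evolves" and "how actions affect it".
        self.state.update(percept)

    def choose_action(self):
        # Placeholder condition-action rule evaluated over the internal state.
        return "turn" if self.state.get("obstacle_ahead") else "move_forward"

    def step(self, percept):
        self.update_state(percept)
        return self.choose_action()

agent = ModelBasedReflexAgent()
print(agent.step({"obstacle_ahead": False}))  # move_forward
print(agent.step({"obstacle_ahead": True}))   # turn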

Goal Based Agents
They choose their actions in order to achieve goals. The goal-based approach is more flexible than the reflex agent approach, since the knowledge supporting a decision is explicitly modeled, thereby allowing for modifications.
Goal − It is the description of desirable situations.
3. Goal-Based Agents
The action taken by these agents depends on the distance from their goal (desired situation). The actions are intended to reduce the distance between the current state and the desired state. To attain its goal, the agent makes use of search and planning algorithms. One drawback of goal-based agents is that they do not always select the most optimized path to reach the final goal. This shortfall can be overcome by using the utility agent described below.
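The reliance on search can be illustrated with a small breadth-first search that plans a path to a goal state; the one-dimensional world used here is a hypothetical example added for illustration:

from collections import deque

def goal_based_plan(start, goal, neighbors):
    # Breadth-first search: return a list of states from start to goal.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # the goal is unreachable

# Hypothetical world: states 0..5, the agent may step one position left or right.
neighbors = lambda s: [n for n in (s - 1, s + 1) if 0 <= n <= 5]
print(goal_based_plan(0, 4, neighbors))  # [0, 1, 2, 3, 4]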

Utility Based Agents
They choose actions based on a preference (utility) for each state.
Goals are inadequate when −
• There are conflicting goals, out of which only a few can be achieved.
• Goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against the importance of a goal.

4. Utility Agents
The action taken by these agents depends on the end objective, so they are called utility agents. Utility agents are used when there are multiple solutions to a problem and the best possible alternative has to be chosen. The alternative chosen is based on each state's utility. The agent performs a cost-benefit analysis of each solution and selects the one that achieves the goal at the lowest cost.
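Choosing among several candidate solutions by comparing their utilities can be sketched as follows; the routes, times and weighting are illustrative assumptions:

# Hypothetical utility-based choice: pick the route with the highest utility,
# where utility trades off travel time against monetary cost.
routes = {
    "highway":   {"time_min": 30, "cost": 5.0},
    "back_road": {"time_min": 45, "cost": 0.0},
    "toll_road": {"time_min": 25, "cost": 9.0},
}

def utility(route):
    # Assumed weighting: each minute of travel "costs" 0.2 units.
    return -(0.2 * route["time_min"] + route["cost"])

best = max(routes, key=lambda name: utility(routes[name]))
print(best)  # back_road, under this particular weighting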

5. Learning Agents
Learning Agents have learning abilities so that they can learn from their past experiences.
These types of agents can start from scratch and, over time, can acquire significant knowledge from their environment. Learning agents have four major components, which enable them to learn from their experience (a skeletal sketch follows the list):

• Critic: The critic evaluates how well the agent is performing against the set performance benchmark.
• Learning Element: It takes input from the critic and helps the agent improve its performance by learning from the environment.
• Performance Element: This component decides on the action to be taken to improve the performance.
• Problem Generator: The problem generator takes input from the other components and suggests exploratory actions that may result in a better experience.
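A skeletal sketch of how these four components could fit together (purely illustrative; the component behaviours below are assumptions, not a definitive design):

class LearningAgent:
    def __init__(self):
        self.rules = {}  # knowledge learned so far: percept -> action

    def critic(self, reward):
        # Evaluate performance against the performance standard (here: a reward signal).
        return reward

    def learning_element(self, percept, action, feedback):
        # Improve future behaviour using the critic's feedback.
        if feedback > 0:
            self.rules[percept] = action  # remember actions that worked

    def performance_element(self, percept):
        # Choose the action to perform, using what has been learned so far.
        return self.rules.get(percept, "explore")

    def problem_generator(self):
        # Suggest exploratory actions that may lead to new, informative experiences.
        return "try_random_action"

agent = LearningAgent()
print(agent.performance_element("dirty_room"))            # "explore" before learning
agent.learning_element("dirty_room", "clean", agent.critic(+1))
print(agent.performance_element("dirty_room"))            # "clean" after learning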

The Nature of Environments


Some programs operate in an entirely artificial environment confined to keyboard input, databases, computer file systems and character output on a screen.
In contrast, some software agents (software robots or softbots) exist in rich, unlimited softbot domains. The simulator has a very detailed, complex environment, and the software agent needs to choose from a long array of actions in real time. A softbot designed to scan the online preferences of the customer and show interesting items to the customer works in the real as well as an artificial environment.

The most famous artificial environment is the Turing Test environment, in which one real and one artificial agent are tested on equal ground. This is a very challenging environment, as it is highly difficult for a software agent to perform as well as a human.
Turing Test
The success of the intelligent behavior of a system can be measured with the Turing Test.
Two persons and a machine to be evaluated participate in the test. Out of the two persons, one plays the role of the tester. Each of them sits in a different room. The tester is unaware of who is the machine and who is the human. He interrogates by typing questions and sending them to both intelligences, and receives typed responses.
This test aims at fooling the tester. If the tester fails to distinguish the machine's responses from the human's responses, then the machine is said to be intelligent.
Properties of Environment
The environment has manifold properties. Environments come in several flavors; the principal distinctions to be made are as follows (a small example follows the list):
• ACCESSIBLE Accessible vs. inaccessible. If an agent’s sensory apparatus gives it access to the complete state of the environment, then we say that the environment is accessible to that agent. An environment is effectively accessible if the sensors detect all aspects that are relevant to the choice of action. An accessible environment is convenient because the agent need not maintain any internal state to keep track of the world.
• DETERMINISTIC
Deterministic vs. nondeterministic. If the next state of the environment is completely
determined by the current state and the actions selected by the agents, then we say
the environment is deterministic. In principle, an agent need not worry about
uncertainty in an accessible, deterministic environment. If the environment is
inaccessible, however, then it may appear to be nondeterministic. This is particularly
true if the environment is complex, making it hard to keep track of all the
inaccessible aspects. Thus, it is often better to think of an environment as
deterministic or nondeterministic from the point of view of the agent.
• EPISODIC Episodic vs. nonepisodic. In an episodic environment, the agent’s experience is divided into “episodes.” Each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself, because subsequent episodes do not depend on what actions occur in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead.

• STATIC Static vs. dynamic. If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. If the environment does not change with the passage of time but the agent’s performance score does, then we say the environment is semidynamic.
• DISCRETE Discrete vs. continuous. If there are a limited number of distinct, clearly defined percepts and actions, we say that the environment is discrete. Chess is discrete: there are a fixed number of possible moves on each turn. Taxi driving is continuous: the speed and location of the taxi and the other vehicles sweep through a range of continuous values.
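These distinctions can be recorded compactly by tagging each task environment with its properties; the two entries below follow the chess and taxi-driving examples above, with chess (with a clock) treated as accessible, deterministic and semidynamic in the usual simplification:

# Properties of two example task environments, following the discussion above.
environments = {
    "chess with a clock": {
        "accessible": True, "deterministic": True, "episodic": False,
        "static": False,   # semidynamic: the clock affects the score over time
        "discrete": True,
    },
    "taxi driving": {
        "accessible": False, "deterministic": False, "episodic": False,
        "static": False, "discrete": False,
    },
}

for name, props in environments.items():
    print(name, props)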

Strategies for Solving the Tic-Tac-Toe Game

Game Playing:
Tic-Tac-Toe is a simple and yet interesting board game. Researchers have used various approaches to study the Tic-Tac-Toe game. For example, Fok and Ong, and Grim et al., have used artificial neural network based strategies to play it. Citrenbaum and Yakowitz discuss games like Go-Moku, Hex and Bridg-It, which share some similarities with Tic-Tac-Toe.

Fig 1.

A Formal Definition of the Game:

The board used to play the Tic-Tac-Toe game consists of 9 cells laid out in the
form of a 3x3 matrix (Fig. 1). The game is played by 2 players and either of
them can start. Each of the two players is assigned a unique symbol (generally O and X). Each player alternately gets a turn to make a move. Making a move
is compulsory and cannot be deferred. In each move a player places the symbol
assigned to him/her in a hitherto blank cell.

Let a track be defined as any row, column or diagonal on the board. Since
the board is a square matrix with 9 cells, all rows, columns and diagonals have
exactly 3 cells. It can be easily observed that there are 3 rows, 3 columns and 2
diagonals, and hence a total of 8 tracks on the board (Fig. 1). The goal of the
game is to fill all three cells of any track on the board with the symbol assigned to one player before the opponent does the same with the symbol assigned to him/her. At any point of the game, if there exists a track all of whose three cells have been marked with the same symbol, then the player to whom that symbol has been assigned wins and the game terminates. If there exists no track all of whose cells have been marked with the same symbol when there is no more blank cell on the board, then the game is drawn.

Let the priority of a cell be defined as the number of tracks passing through
it. The priorities of the nine cells on the board according to this definition are
tabulated in Table 1. Alternatively, let the priority of a track be defined as the
sum of the priorities of its three cells. The priorities of the eight tracks on the
board according to this definition are tabulated in Table 2. The prioritization of
the cells and the tracks lays the foundation of the heuristics to be used in this
study. These heuristics are somewhat similar to those proposed by Rich and
Knight.
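Tables 1 and 2 did not survive the conversion of these notes, but the priorities they tabulate follow directly from the definitions above and can be recomputed; the code below is an added sketch, not part of the original notes:

# Recompute the cell and track priorities defined above for the 3x3 board.
# Cells are indexed 0..8, row by row.
tracks = [
    [0, 1, 2], [3, 4, 5], [6, 7, 8],   # rows
    [0, 3, 6], [1, 4, 7], [2, 5, 8],   # columns
    [0, 4, 8], [2, 4, 6],              # diagonals
]

# Priority of a cell = number of tracks passing through it.
cell_priority = [sum(cell in t for t in tracks) for cell in range(9)]
print(cell_priority)   # [3, 2, 3, 2, 4, 2, 3, 2, 3]: corners 3, edges 2, centre 4

# Priority of a track = sum of the priorities of its three cells.
track_priority = [sum(cell_priority[c] for c in t) for t in tracks]
print(track_priority)  # rows and columns score 8, the two diagonals score 10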

Strategy 1:

Algorithm:

1. View the board vector as a ternary number. Convert it to a decimal number.

2. Use the computed number as an index into Move-Table and access the vector
stored there.

3. Set the new board to that vector.

Procedure:

1) Elements of the vector:
   0: Empty
   1: X
   2: O
   → the vector is a ternary number

2) Store inside the program a move table (lookup table):
   a) Elements in the table: 19683 (3^9)
   b) Element = a vector which describes the most suitable move from the current board position (a small sketch of this encoding follows the comments below).

Comments:
1. A lot of space to store the Move-Table.

2. A lot of work to specify all the entries in the Move-Table.

3. Difficult to extend
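The board-to-index encoding used by Strategy 1 can be sketched as follows; the move table itself is only stubbed out with a single entry, since filling all 19,683 entries is exactly the drawback noted above (the names and the sample entry are illustrative assumptions):

# Sketch of Strategy 1: encode the board as a ternary number and use it
# as an index into a (mostly empty, hypothetical) move table.

def board_to_index(board):
    # Interpret the 9-element board vector (0 empty, 1 X, 2 O) as a base-3 number.
    index = 0
    for cell in board:
        index = index * 3 + cell
    return index

# A full move table would need 3**9 = 19683 entries; only one is stubbed here:
# from the empty board, reply with a (hypothetical) opening move in the centre.
move_table = {board_to_index([0] * 9): [0, 0, 0, 0, 1, 0, 0, 0, 0]}

def strategy_1(board):
    return move_table[board_to_index(board)]  # the new board after the chosen move

print(strategy_1([0] * 9))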

Comments:

1. Not efficient in time, as it has to check several conditions before making each move

2. Easier to understand the program’s strategy.

3. Hard to generalize.
