AI Unit 1 Notes
Course Objective:
● To provide a strong foundation of fundamental concepts in Artificial Intelligence.
● To identify the type of an AI problem.
● To acquire knowledge of intelligent systems and agents, and of the formalization of knowledge and reasoning.
Course Outcome:
After successful completion of the course, students will be able to:
CO1: Demonstrate knowledge of the building blocks of AI as presented in terms of intelligent
agents.
CO2: Analyze and formalize the problem as a state space search.
CO3: Describe the strengths and limitations of various state-space search algorithms, and choose the appropriate algorithm.
CO4: Develop intelligent algorithms for constraint satisfaction problems
CO5: Apply knowledge representation and use it to perform inference or planning.
CO6: Formulate and solve problems with uncertain information using Bayesian approaches.
UNIT 1: Introduction
Introduction, overview of Artificial Intelligence: problems of AI, AI technique, the Tic-Tac-Toe problem. Intelligent agents: agents and environment, nature of environments, structure of agents, goal-based agents, utility-based agents, learning agents.
Introduction:
History of AI:
➢ In 1957, The General Problem Solver (GPS) demonstrated by Newell, Shaw &
Simon
➢ In 1958, John McCarthy (MIT) invented the Lisp language.
➢ In 1959, Arthur Samuel (IBM) wrote the first game-playing program, for
checkers, to achieve sufficient skill to challenge a world champion.
➢ In 1963, Ivan Sutherland's MIT dissertation on Sketchpad introduced the
idea of interactive graphics into computing.
➢ In 1966, Ross Quillian (PhD dissertation, Carnegie Institute of Technology; now CMU) demonstrated semantic nets.
➢ In 1967, the Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford) was demonstrated; it interpreted mass spectra of organic chemical compounds and was the first successful knowledge-based program for scientific reasoning.
➢ In 1967, Doug Engelbart invented the mouse at SRI.
➢ In 1969, Marvin Minsky & Seymour Papert published Perceptrons, demonstrating the limits of simple neural nets.
➢ In 1972, Prolog developed by Alain Colmerauer.
➢ In the mid-80s, neural networks became widely used with the backpropagation algorithm (first described by Werbos in 1974).
➢ In the 1990s, major advances were made in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
➢ In 1997, Deep Blue beat the world chess champion Kasparov.
➢ In 2002, iRobot, founded by researchers at the MIT Artificial Intelligence Lab, introduced Roomba, a vacuum-cleaning robot. By 2006, two million had been sold.
Foundations of AI:
Philosophy
e.g., foundational issues (can a machine think?), issues of knowledge and belief, mutual knowledge
Psychology and Cognitive Science
e.g., problem solving skills
Neuroscience
e.g., brain architecture
Computer Science and Engineering
e.g., complexity theory, algorithms, logic and inference, programming
languages, and system building.
Mathematics and Physics
e.g., statistical modeling, continuous mathematics, statistical physics, complex systems.
Sub Areas of AI:
1) Game Playing
2) Speech Recognition
Applications of AI:
AI algorithms have attracted the close attention of researchers and have also been applied successfully to solve problems in engineering. Nevertheless, for large and complex problems, AI algorithms can consume considerable computation time due to the stochastic nature of their search approaches.
5) Education: in teaching
6) Fraud detection
7) Object identification
8) Information retrieval
Building AI Systems:
1) Perception
2) Reasoning
Inference, decision-making, and classification from what is sensed and what the internal "model" of the world is. This might be a neural network, a logical deduction system, Hidden Markov Model induction, heuristic search of a problem space, Bayes network inference, genetic algorithms, etc. It includes the areas of knowledge representation, problem solving, decision theory, planning, game theory, machine learning, uncertainty reasoning, etc.
3) Action
"The automation of] activities that we "The study of the computations that
associate with human thinking, activities make it possible to perceive, reason,
such as decision-making, problem solving, and act" (Winston, 1992)
learning..."(Bellman, 1978)
c) "The art of creating machines that perform d) "A field of study that seeks to explain
functions that require intelligence when and emulate intelligent behavior in
performed by people" (Kurzweil, 1990) terms of computational processes"
(Schalkoff, 1 990)
"The study of how to make computers
do things at which, at the moment, "The branch of computer science
people are better" (Rich and Knight, 1 that is concerned with the automation
99 1 ) of intelligent behavior" (Luger and
Stubblefield, 1993)
The definitions on the top, (a) and (b) are concerned with reasoning, whereas
those on the bottom, (c) and (d) address behavior. The definitions on the left,
(a) and (c) measure success in terms of human performance, and those on the
right, (b) and (d) measure the ideal concept of intelligence called rationality
Intelligent Systems:
Intelligent systems can be categorized along two dimensions: Human-Like vs. Rationally, and Think vs. Act.
Cognitive modeling: Think Human-Like
The focus is not just on behavior and I/O, but on the reasoning process itself.
Laws of thought: Think Rationally
Rational agent: Act Rationally
Strong AI:
Strong AI makes the bold claim that computers can be made to think on a level (at least) equal to humans.
Common-Place Tasks:
1. Recognizing people, objects.
Expert tasks:
1. Medical diagnosis.
These tasks cannot be done by all people, and can only be performed by skilled
specialists.
Clearly, tasks of the first type are easy for humans to perform, and almost all humans are able to master them. Tasks of the second type require skill development and/or intelligence, and only some specialists can perform them well. However, when we look at what computer systems have been able to achieve to date, we see that their achievements include performing sophisticated tasks like medical diagnosis, symbolic integration, proving theorems and playing chess.
Intelligent Agents:
Agents and environments:
Agent:
An Agent is anything that can be viewed as perceiving its environment
through sensors and acting upon that environment through actuators.
✓ A human agent has eyes, ears, and other organs for sensors and
hands, legs, mouth, and other body parts for actuators.
✓ A robotic agent might have cameras and infrared range finders for
sensors and various motors for actuators.
✓ A software agent receives keystrokes, file contents, and network packets as sensory inputs, and acts on the environment by displaying on the screen, writing files, and sending network packets.
Percept:
We use the term percept to refer to the agent's perceptual inputs at any given instant.
Percept Sequence:
An agent's percept sequence is the complete history of everything the agent has ever
perceived.
Agent function:
Mathematically speaking, we say that an agent's behavior is described by
the agent function that maps any given percept sequence to an action.
Agent program:
Internally, the agent function for an artificial agent will be implemented
by an agent program. It is important to keep these two ideas distinct. The
agent function is an abstract mathematical description; the agent program
is a concrete implementation, running on the agent architecture.
Example: the vacuum-cleaner world. This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in Fig 3.
Agent function:

Percept sequence    Action
[A, Clean]          Right
[A, Dirty]          Suck
[B, Clean]          Left
[B, Dirty]          Suck

Fig 3: Partial tabulation of a simple agent function for the vacuum-cleaner world.
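To make the distinction between the agent function and the agent program concrete, here is a minimal Python sketch (the names AGENT_TABLE, table_driven_agent and reflex_vacuum_agent are illustrative, not from the source). The first program realizes the tabulated agent function by table lookup; the second computes the same mapping with the rule stated above.

# Partial tabulation of the agent function: percept -> action.
AGENT_TABLE = {
    ("A", "Clean"): "Right",
    ("A", "Dirty"): "Suck",
    ("B", "Clean"): "Left",
    ("B", "Dirty"): "Suck",
}

def table_driven_agent(percept):
    # Agent program that simply looks the action up in the table.
    return AGENT_TABLE[percept]

def reflex_vacuum_agent(percept):
    # Equivalent agent program expressed as a rule: if the current
    # square is dirty then suck, otherwise move to the other square.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(table_driven_agent(("A", "Dirty")))   # Suck
print(reflex_vacuum_agent(("B", "Clean")))  # Left

Both programs implement the same abstract agent function; they differ only in how the mapping from percepts to actions is represented.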
Rationality
Rationality is nothing but the status of being reasonable, sensible, and having good judgment.
Rationality is concerned with expected actions and results, depending upon what the agent has perceived. Performing actions with the aim of obtaining useful information is an important part of rationality.
Model Based Reflex Agents
They use a model of the world to choose their actions, and they maintain an internal state.
Model − knowledge about "how things happen in the world".
Internal State − a representation of the unobserved aspects of the current state, maintained from the percept history.
Updating the state requires information about −
• how the world evolves independently of the agent, and
• how the agent's actions affect the world.
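As a rough Python sketch of this idea (the class and parameter names are assumptions for illustration, not from the source), a model-based reflex agent updates its internal state from each percept using its model before matching condition-action rules:

class ModelBasedReflexAgent:
    def __init__(self, model, rules):
        self.state = {}        # internal state: unobserved aspects of the world
        self.model = model     # how the world evolves and how actions affect it
        self.rules = rules     # list of (condition, action) pairs
        self.last_action = None

    def update_state(self, percept):
        # Combine the old state, the last action, the new percept and
        # the world model into an updated picture of the world.
        self.state = self.model(self.state, self.last_action, percept)

    def __call__(self, percept):
        self.update_state(percept)
        for condition, action in self.rules:
            if condition(self.state):
                self.last_action = action
                return action
        self.last_action = "NoOp"
        return "NoOp"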
Goal Based Agents
They choose their actions in order to achieve goals. The goal-based approach is more flexible than a reflex agent, since the knowledge supporting a decision is explicitly modeled and can therefore be modified.
Goal − a description of desirable situations.
The action taken by these agents depends on the distance from their goal (the desired situation), and their actions are intended to reduce the distance between the current state and the desired state. To attain its goal, a goal-based agent makes use of search and planning algorithms. One drawback of goal-based agents is that they do not always select the most optimized path to reach the final goal; this shortfall can be overcome by the utility-based agent described below.
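A minimal Python sketch of the idea (the toy world and the function names are assumptions for illustration): the agent searches for a sequence of actions that transforms the current state into the goal state, here with a simple breadth-first search.

from collections import deque

def goal_based_plan(start, goal, successors):
    # Breadth-first search: returns a list of actions leading from
    # start to goal, or None if no plan exists.
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None

def successors(state):
    # Toy world: positions 0..4 on a line; the agent steps left or right.
    moves = []
    if state < 4:
        moves.append(("Right", state + 1))
    if state > 0:
        moves.append(("Left", state - 1))
    return moves

print(goal_based_plan(0, 3, successors))  # ['Right', 'Right', 'Right']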
Utility Based Agents
They choose actions based on a preference (utility) for each state.
Goals alone are inadequate when −
• there are conflicting goals, of which only a few can be achieved;
• goals have some uncertainty of being achieved, and you need to weigh the likelihood of success against the importance of a goal.
Utility agents are used when there are multiple solutions to a problem and the best possible alternative has to be chosen. The alternative is chosen based on the utility of each resulting state: the agent performs a cost-benefit analysis of the candidate solutions and selects the one that achieves the goal at the minimum cost.
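A minimal Python sketch (the route names and utility values are made up for illustration): where several actions would achieve the goal, the agent ranks them by the utility of the resulting state and picks the best one.

def utility_based_choice(state, actions, result, utility):
    # Return the action whose resulting state has the highest utility.
    return max(actions(state), key=lambda a: utility(result(state, a)))

# Toy example: two routes reach the destination; the slower free road
# is assumed here to have higher utility than the fast toll road.
actions = lambda s: ["fast_toll_road", "slow_free_road"]
result = lambda s, a: a                      # resulting state = chosen road
utility = {"fast_toll_road": 0.7, "slow_free_road": 0.9}.get

print(utility_based_choice("start", actions, result, utility))  # slow_free_road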
Learning Agents
Learning agents have learning abilities, so they can learn from their past experiences. These agents can start from scratch and, over time, acquire significant knowledge from their environment. A learning agent has four major components which enable it to learn from experience:
• Critic: evaluates how well the agent is performing against the set performance benchmark.
• Learning Element: takes input from the Critic and helps the agent improve its performance by learning from the environment.
• Performance Element: decides on the action to be taken to improve performance.
• Problem Generator: takes input from the other components and suggests actions resulting in a better experience.
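The four components can be wired together as in the schematic Python sketch below (the component names follow the list above; the wiring itself is an illustrative assumption, not a prescribed design):

class LearningAgent:
    def __init__(self, performance_element, critic,
                 learning_element, problem_generator):
        self.performance_element = performance_element  # chooses actions
        self.critic = critic                            # scores behavior
        self.learning_element = learning_element        # improves the agent
        self.problem_generator = problem_generator      # suggests exploration

    def step(self, percept):
        feedback = self.critic(percept)                # how well are we doing?
        self.learning_element(feedback)                # learn from the feedback
        exploratory = self.problem_generator(percept)  # maybe try something new
        return exploratory or self.performance_element(percept)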
The most famous artificial environment is the Turing Test environment, in which one real agent and other artificial agents are tested on equal ground. This is a very challenging environment, as it is highly difficult for a software agent to perform as well as a human.
Turing Test
The success of a system's intelligent behavior can be measured with the Turing Test. Two persons and the machine to be evaluated participate in the test. Of the two persons, one plays the role of the tester. Each of them sits in a different room. The tester does not know who is the machine and who is the human. The tester interrogates both by typing questions and sending them, and receives typed responses from each.
The test aims at fooling the tester: if the tester fails to distinguish the machine's responses from the human's, the machine is said to be intelligent.
Properties of Environment
Environments come in several flavors. The principal distinctions to be made are as follows:
• Accessible vs. inaccessible: If an agent's sensory apparatus gives it access to the complete state of the environment, then we say that the environment is accessible to that agent. An environment is effectively accessible if the sensors detect all aspects that are relevant to the choice of action. An accessible environment is convenient because the agent need not maintain any internal state to keep track of the world.
• Deterministic vs. nondeterministic: If the next state of the environment is completely determined by the current state and the actions selected by the agents, then we say the environment is deterministic. In principle, an agent need not worry about uncertainty in an accessible, deterministic environment. If the environment is inaccessible, however, then it may appear to be nondeterministic. This is particularly true if the environment is complex, making it hard to keep track of all the inaccessible aspects. Thus, it is often better to think of an environment as deterministic or nondeterministic from the point of view of the agent.
• Episodic vs. nonepisodic: In an episodic environment, the agent's experience is divided into "episodes." Each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself, because subsequent episodes do not depend on what actions occur in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead.
• Static vs. dynamic: If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. If the environment does not change with the passage of time but the agent's performance score does, then we say the environment is semidynamic.
• Discrete vs. continuous: If there are a limited number of distinct, clearly defined percepts and actions, we say that the environment is discrete. Chess is discrete: there are a fixed number of possible moves on each turn. Taxi driving is continuous: the speed and location of the taxi and the other vehicles sweep through a range of continuous values.
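To make these distinctions concrete, the sketch below classifies the two environments discussed above along each dimension (the dataclass representation is an illustrative assumption; the values follow the text: chess is accessible, deterministic and discrete, while taxi driving is none of these):

from dataclasses import dataclass

@dataclass
class EnvironmentProperties:
    accessible: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool

# Chess without a clock is static; with a clock it would be semidynamic.
chess = EnvironmentProperties(accessible=True, deterministic=True,
                              episodic=False, static=True, discrete=True)
taxi_driving = EnvironmentProperties(accessible=False, deterministic=False,
                                     episodic=False, static=False, discrete=False)
print(chess)
print(taxi_driving)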
Strategies for Solving the Tic-Tac-Toe Game
Playing Tic-Tac-Toe:
Tic-Tac-Toe is a simple yet interesting board game. Researchers have used various approaches to study the Tic-Tac-Toe game. For example, Fok and Ong and Grim et al. have used artificial neural network based strategies to play it. Citrenbaum and Yakowitz discuss games like Go-Moku, Hex and Bridg-It, which share some similarities with Tic-Tac-Toe.
Fig 1: The Tic-Tac-Toe board, a 3x3 grid of cells.
The board used to play the Tic-Tac-Toe game consists of 9 cells laid out in the form of a 3x3 matrix (Fig. 1). The game is played by 2 players, and either of them can start. Each of the two players is assigned a unique symbol (generally O and X). Each player alternately gets a turn to make a move. Making a move is compulsory and cannot be deferred. In each move, a player places the symbol assigned to him/her in a hitherto blank cell.
Let a track be defined as any row, column or diagonal on the board. Since the board is a square matrix with 9 cells, all rows, columns and diagonals have exactly 3 cells. It can be easily observed that there are 3 rows, 3 columns and 2 diagonals, and hence a total of 8 tracks on the board (Fig. 1). The goal of the game is to fill all three cells of any track on the board with one's assigned symbol before the opponent does the same with his/her symbol. At any point in the game, if there exists a track all three of whose cells have been marked with the same symbol, then the player to whom that symbol has been assigned wins, and the game terminates. If there exists no track whose cells have all been marked with the same symbol when there are no more blank cells on the board, then the game is drawn.
Let the priority of a cell be defined as the number of tracks passing through
it. The priorities of the nine cells on the board according to this definition are
tabulated in Table 1. Alternatively, let the priority of a track be defined as the
sum of the priorities of its three cells. The priorities of the eight tracks on the
board according to this definition are tabulated in Table 2. The prioritization of
the cells and the tracks lays the foundation of the heuristics to be used in this
study. These heuristics are somewhat similar to those proposed by Rich and
Knight.
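Tables 1 and 2 did not survive in these notes, but both are easy to recompute from the definitions above: the center cell lies on 4 tracks, each corner on 3, and each edge cell on 2, and a track's priority is the sum of its cells' priorities. The short Python sketch below derives both tables (the cell numbering 0..8, row-major, is an assumption for illustration):

# Recompute the cell and track priorities defined above.
TRACKS = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
    (0, 4, 8), (2, 4, 6),              # diagonals
]

# Priority of a cell = number of tracks passing through it (Table 1).
cell_priority = [sum(c in t for t in TRACKS) for c in range(9)]
print(cell_priority)   # [3, 2, 3, 2, 4, 2, 3, 2, 3]

# Priority of a track = sum of the priorities of its cells (Table 2).
track_priority = [sum(cell_priority[c] for c in t) for t in TRACKS]
print(track_priority)  # [8, 8, 8, 8, 8, 8, 10, 10]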
Strategy 1:
Algorithm:
1. View the vector representing the board as a ternary number, and convert it to its corresponding decimal number.
2. Use the computed number as an index into the Move-Table and access the vector stored there.
3. Set the new board equal to that vector.
Procedure:
1) Board: a nine-element vector representing the board, where the elements of the vector take the values:
0: Empty
1: X
2: O
2) Move-Table: a large vector of 19,683 elements (3^9), where:
a) Index = the board position, interpreted as a ternary number.
b) Element = a vector which describes the most suitable move from the board position corresponding to that index.
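A small Python sketch of this scheme follows (the single Move-Table entry shown is a made-up example; a real table would fill all 19,683 positions):

# Strategy 1: interpret the 9-cell board vector (0 = empty, 1 = X,
# 2 = O) as a ternary number and use it to index the Move-Table.
def board_to_index(board):
    # View the nine-element board vector as a base-3 number.
    index = 0
    for cell in board:
        index = index * 3 + cell
    return index

MOVE_TABLE = {}
empty_board = [0] * 9
# Illustrative entry only: from the empty board, X takes the center.
MOVE_TABLE[board_to_index(empty_board)] = [0, 0, 0, 0, 1, 0, 0, 0, 0]

def make_move(board):
    # Steps 1-3 of the algorithm: convert, look up, return the new board.
    return MOVE_TABLE[board_to_index(board)]

print(make_move(empty_board))  # [0, 0, 0, 0, 1, 0, 0, 0, 0]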
Comments:
1. A lot of space is required to store the Move-Table.
2. A lot of work is required to specify all the entries in the Move-Table.
3. Difficult to extend.
Strategy 2:
Comments:
1. Not efficient in time, as it has to check several conditions before making each move.
2. Easier to understand the program's strategy.
3. Hard to generalize.