Intro AI

Outline

• What is AI?
• Intelligent Agents
• Problem Solving
• Searching Concepts
• Knowledge Representation
• Branches of AI

What is AI?
Views of AI fall into four categories:

Thinking humanly Thinking rationally


Acting humanly Acting rationally

Acting humanly: Turing Test
• Turing (1950), "Computing Machinery and Intelligence":
  "Can machines think?" → "Can machines behave intelligently?"
• Operational test for intelligent behavior: the Imitation Game
• Suggested major components of AI: knowledge, reasoning, language understanding, learning
Thinking humanly: cognitive modeling

• 1960s "cognitive revolution": information-processing psychology
• Requires scientific theories of internal activities of the brain
• How to validate? Requires
  1. Predicting and testing behavior of human subjects (top-down)
  2. Direct identification from neurological data (bottom-up)
• Both approaches (roughly, Cognitive Science and Cognitive Neuroscience) are now distinct from AI
Thinking rationally: "laws of thought"
• Aristotle: what are correct arguments/thought
processes?
• Several Greek schools developed various forms of
logic: notation and rules of derivation for thoughts;
may or may not have proceeded to the idea of
mechanization
• Direct line through mathematics and philosophy to
modern AI
• Problems:
1. Not all intelligent behavior is mediated by logical
deliberation
2. What is the purpose of thinking? What thoughts should I have?
Acting rationally: rational agent
• Rational behavior: doing the right thing
• The right thing: that which is expected to
maximize goal achievement, given the
available information
• Doesn't necessarily involve thinking – e.g.,
blinking reflex – but thinking should be in the
service of rational action

Rational agents
• An agent is an entity that perceives and acts
• This course is about designing rational agents
• Abstractly, an agent is a function from percept histories to actions:
      f : P* → A
• For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance
• Caveat: computational limitations make perfect rationality unachievable
  → design the best program for the given machine resources
State of the art
• Deep Blue defeated the reigning world chess
champion Garry Kasparov in 1997
• No Hands Across America: driving autonomously 98% of the time from Pittsburgh to San Diego
• During the 1991 Gulf War, US forces deployed an AI
logistics planning and scheduling program that
involved up to 50,000 vehicles, cargo, and people
• NASA's on-board autonomous planning program
controlled the scheduling of operations for a
spacecraft
• Proverb solves crossword puzzles better than most
humans

Agents
• Agents and environments
• Rationality
• PEAS (Performance measure, Environment,
Actuators, Sensors)
• Environment types
• Agent types

Agents
• An agent is anything that can be viewed as perceiving
its environment through sensors and acting upon
that environment through actuators
• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
• Robotic agent: cameras and infrared range finders for sensors; various motors for actuators
Agents and environments

• The agent function maps from percept histories to actions:
      f : P* → A
• The agent program runs on the physical architecture to produce f
• agent = architecture + program
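As a rough sketch (ours, not from the slides), a table-driven agent program makes the mapping f explicit; it is impractical for all but tiny percept spaces, but it shows how a program running on an architecture realizes f:

    # Illustrative table-driven agent program (hypothetical example).
    # 'table' maps percept-history tuples to actions; the architecture
    # feeds percepts to the program and executes the returned action.
    def make_table_driven_agent(table):
        percepts = []                             # percept history seen so far
        def program(percept):
            percepts.append(percept)
            return table.get(tuple(percepts))     # f: P* -> A (None if unspecified)
        return program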
Vacuum-cleaner world

• Percepts: location and contents, e.g., [A, Dirty]
• Actions: Left, Right, Suck, NoOp
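A minimal sketch of a reflex agent program for this world (following the standard two-square example; names are illustrative):

    def reflex_vacuum_agent(percept):
        # percept is (location, status), e.g., ('A', 'Dirty')
        location, status = percept
        if status == 'Dirty':
            return 'Suck'
        elif location == 'A':
            return 'Right'
        else:
            return 'Left'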

Rational agents
• An agent should strive to "do the right thing", based
on what it can perceive and the actions it can
perform. The right action is the one that will cause
the agent to be most successful
• Performance measure: An objective criterion for
success of an agent's behavior
• E.g., performance measure of a vacuum-cleaner
agent could be amount of dirt cleaned up, amount of
time taken, amount of electricity consumed, amount
of noise generated, etc.
Rational agents
• Rational Agent: For each possible percept sequence, a rational
agent should select an action that is expected to maximize its
performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent
has.
• Rationality is distinct from omniscience (all-knowing with infinite
knowledge)
• Agents can perform actions in order to modify future percepts
so as to obtain useful information (information gathering,
exploration)
• An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt)
PEAS
• PEAS: Performance measure, Environment, Actuators,
Sensors
• Must first specify the setting for intelligent agent design
• Consider, e.g., the task of designing an automated taxi
driver:
– Performance measure: Safe, fast, legal, comfortable trip, maximize
profits
– Environment: Roads, other traffic, pedestrians, customers
– Actuators: Steering wheel, accelerator, brake, signal, horn
– Sensors: Cameras, sonar, speedometer, GPS, odometer, engine
sensors, keyboard

PEAS
• Agent: Medical diagnosis system
• Performance measure: Healthy patient,
minimize costs, lawsuits
• Environment: Patient, hospital, staff
• Actuators: Screen display (questions, tests,
diagnoses, treatments, referrals)
• Sensors: Keyboard (entry of symptoms,
findings, patient's answers)

PEAS
• Agent: Part-picking robot
• Performance measure: Percentage of parts in
correct bins
• Environment: Conveyor belt with parts, bins
• Actuators: Jointed arm and hand
• Sensors: Camera, joint angle sensors

PEAS
• Agent: Interactive English tutor
• Performance measure: Maximize student's
score on test
• Environment: Set of students
• Actuators: Screen display (exercises,
suggestions, corrections)
• Sensors: Keyboard

Environment types
• Fully observable (vs. partially observable): An agent's
sensors give it access to the complete state of the
environment at each point in time.
• Deterministic (vs. stochastic): The next state of the
environment is completely determined by the current
state and the action executed by the agent. (If the
environment is deterministic except for the actions of
other agents, then the environment is strategic)
• Episodic (vs. sequential): The agent's experience is
divided into atomic "episodes" (each episode consists of
the agent perceiving and then performing a single
action), and the choice of action in each episode depends
only on the episode itself.
Environment types
• Static (vs. dynamic): The environment is unchanged
while an agent is deliberating. (The environment is
semidynamic if the environment itself does not
change with the passage of time but the agent's
performance score does)
• Discrete (vs. continuous): A limited number of
distinct, clearly defined percepts and actions.
• Single agent (vs. multiagent): An agent operating by
itself in an environment.

Agent functions and programs
• An agent is completely specified by the agent
function mapping percept sequences to
actions
• One agent function (or a small equivalence class of them) is rational
• Aim: find a way to implement the rational
agent function concisely

Agent types
• Four basic types in order of increasing
generality:
• Simple reflex agents
• Model-based reflex agents
• Goal-based agents
• Utility-based agents

Simple reflex agents

Model-based reflex agents

Goal-based agents

Utility-based agents

Learning agents

Problem Solving
•Rational agents need to perform sequences of actions in order
to achieve goals.
•Intelligent behavior can be generated by having a look-up table
or reactive policy that tells the agent what to do in every
circumstance, but:
- Such a table or policy is difficult to build
- All contingencies must be anticipated
•A more general approach is for the agent to have knowledge of
the world and how its actions affect it and be able to simulate
execution of actions in an internal model of the world in order
to determine a sequence of actions that will accomplish its
goals.
•This is the general task of problem solving and is typically
performed by searching through an internally modelled space of world states.
Problem Solving Task
•Given:
-An initial state of the world
-A set of possible actions or operators that can be performed.
-A goal test that can be applied to a single state of the world to
determine if it is a goal state.
•Find:
-A solution stated as a path of states and operators that shows
how to transform the initial state into one that satisfies the
goal test.
•The initial state and set of operators implicitly define a state
space of states of the world and operator transitions between
them. May be infinite.
Measuring Performance
•Path cost: a function that assigns a cost to a
path, typically by summing the cost of the
individual operators in the path. May want to
find minimum cost solution.
•Search cost: The computational time and space
(memory) required to find the solution.
•Generally there is a trade-off between path cost and search cost, and
one must satisfice, i.e., find the best solution possible in the time
that is available.
Example: Romania
• On holiday in Romania; currently in Arad
• Flight leaves tomorrow from Bucharest
• Formulate goal:
  – be in Bucharest
• Formulate problem:
  – states: various cities
  – actions: drive between cities
• Find solution:
  – sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
• Path cost: number of intermediate cities, distance traveled, expected travel time
Example: Romania

Example: The 8-puzzle

• states? = locations of tiles
• actions? = move blank left, right, up, down
• goal test? = goal state (given)
• path cost? = 1 per move
[Note: optimal solution of the n-Puzzle family is NP-hard]
“Toy” Problems
•8-queens problem (N-queens problem)

•Missionaries and cannibals


Identity of individuals is irrelevant; best to represent state as (M, C, B):
  M = number of missionaries on the left bank
  C = number of cannibals on the left bank
  B = number of boats on the left bank (0 or 1)
Operators to move: 1M, 1C, 2M, 2C, 1M1C
Goal state: (0, 0, 0)
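A small sketch (ours, not from the slides) of a successor function for this representation, assuming 3 missionaries and 3 cannibals in total:

    MOVES = [(1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]   # 1M, 1C, 2M, 2C, 1M1C

    def safe(m, c):
        # Missionaries are never outnumbered on either bank (or are absent).
        return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

    def successors(state):
        m, c, b = state
        sign = -1 if b == 1 else 1     # boat leaves or returns to the left bank
        for dm, dc in MOVES:
            nm, nc = m + sign * dm, c + sign * dc
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc):
                yield (nm, nc, 1 - b)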

More Realistic Problems
• Route finding
• Travelling salesman problem
• VLSI layout
• Robot navigation
• Web searching

Searching Concepts
•A state can be expanded by generating all states that can be
reached by applying a legal operator to the state.
•State space can also be defined by a successor function that
returns all states produced by applying a single legal operator.
•A search tree is generated by generating search nodes by
successively expanding states starting from the initial state as
the root.
•A search node in the tree can contain
-Corresponding state
-Parent node
-Operator applied to reach this node
-Length of path from root to node (depth)
-Path cost of path from initial state to node
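A search node might be represented as follows (a sketch; field names are our assumptions):

    class Node:
        def __init__(self, state, parent=None, operator=None, path_cost=0.0):
            self.state = state            # corresponding world state
            self.parent = parent          # parent node (None for the root)
            self.operator = operator      # operator applied to reach this node
            self.depth = 0 if parent is None else parent.depth + 1
            self.path_cost = path_cost    # cost of path from the initial state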
Expanding Nodes and Search

Search Algorithm
• Easiest way to implement various search
strategies is to maintain a queue of unexpanded
search nodes.
• Different strategies result from different methods
for inserting new nodes in the queue.
• Properties of search strategies
-Completeness
-Time Complexity
-Space Complexity
-Optimality
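A sketch of this queue-based scheme (assuming the Node sketch above and a problem object with initial_state, successors, and goal_test; the insert_fn parameter determines the strategy):

    def general_search(problem, insert_fn):
        frontier = [Node(problem.initial_state)]   # queue of unexpanded nodes
        while frontier:
            node = frontier.pop(0)                 # next node to expand
            if problem.goal_test(node.state):
                return node
            children = [Node(s, node, op, node.path_cost + cost)
                        for op, s, cost in problem.successors(node.state)]
            insert_fn(frontier, children)          # strategy-specific insertion
        return None                                # no solution found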

Search Strategies
• Uninformed search strategies (blind, exhaustive, brute-force) do not guide the search with any additional information about the problem:
  – Breadth-first search
  – Uniform-cost search
  – Depth-first search
  – Depth-limited search
  – Iterative deepening search
• Informed search strategies (heuristic, intelligent) use information about the problem (estimated distance from a state to the goal) to guide the search.
Queue

Stack

Tree Structure
BFS & DFS
Breadth-first search (BFS): A search strategy in which the highest layer of
a decision tree is searched completely before proceeding to the next layer
is called breadth-first search (BFS).
− In this strategy, no viable solution is omitted, so it guarantees that an
optimal solution is found (when all steps have equal cost).
− This strategy is often not feasible when the search space is large.
Depth-first search (DFS): A search strategy that extends the current path
as far as possible before backtracking to the last choice point and trying
the next alternative path is called depth-first search (DFS).
− This strategy does not guarantee that the optimal solution has been found.
− In this strategy, search reaches a satisfactory solution more rapidly than
breadth-first, an advantage when the search space is large.
Breadth-first search strategy (BFS)

• This is an exhaustive search technique.
• The search generates all nodes at a particular level before proceeding to the next level of the tree.
• The search systematically proceeds, testing each node that is reachable from a parent node before it expands to any child of those nodes.
• The control regime guarantees that the space of possible moves is systematically examined; this search requires considerable memory resources.
• The space that is searched is quite large and the solution may lie a thousand steps away from the start node. It does, however, guarantee that if we find a solution it will be the shortest possible.
• Search terminates when a solution is found and the goal test returns true.
Breadth-first search
• Expand shallowest unexpanded node
• Implementation:
– fringe is a FIFO queue, i.e., new successors go at
end
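With the general_search sketch above (our assumption), breadth-first search is just FIFO insertion:

    def bfs_insert(frontier, children):
        frontier.extend(children)      # new successors go at the end (FIFO)

    # e.g., solution = general_search(problem, bfs_insert)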

Properties of breadth-first search

• Complete? Yes (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + b(b^d − 1) = O(b^(d+1))
• Space? O(b^(d+1)) (keeps every node in memory)
• Optimal? Yes (if cost = 1 per step)
• Space is the bigger problem (more than time)
Uniform-cost search
• Like breadth-first search, except always expand the node of least cost instead of least depth (i.e., sort the queue by path cost).
• Do not recognize the goal until it is the least-cost node on the queue and is removed for goal testing.
• Expand least-cost unexpanded node
• Implementation:
– fringe = queue ordered by path cost
• Equivalent to breadth-first search if step costs are all equal
• Therefore, guarantees optimality as long as path cost never decreases as a path grows (non-negative operator costs).
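In the same scheme, uniform-cost search keeps the frontier ordered by path cost; a minimal sketch (sorting the list for clarity — a heap/priority queue is the usual efficient choice):

    def ucs_insert(frontier, children):
        frontier.extend(children)
        frontier.sort(key=lambda n: n.path_cost)   # least-cost node expands first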

Depth-first search
• Expand deepest unexpanded node
• Implementation:
– fringe = LIFO queue, i.e., put successors at front
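Again using the general_search sketch (our assumption), depth-first search inserts successors at the front:

    def dfs_insert(frontier, children):
        frontier[:0] = children        # put successors at the front (LIFO)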

Properties of depth-first search
• Complete? No: fails in infinite-depth spaces, spaces with loops
  – Modify to avoid repeated states along the path
    → complete in finite spaces
• Time? O(b^m): terrible if m is much larger than d
  – but if solutions are dense, may be much faster than breadth-first
• Space? O(bm), i.e., linear space!
• Optimal? No
Iterative deepening search

69
Depth-limited search is run with increasing limits l = 0, 1, 2, 3, … until a solution is found.
Properties of iterative deepening search
• Complete? Yes
• Time? (d+1)b^0 + d·b^1 + (d−1)·b^2 + … + b^d = O(b^d)
• Space? O(bd)
• Optimal? Yes, if step cost = 1
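A sketch of iterative deepening, assuming the Node and problem interfaces from earlier (it does not distinguish depth cutoff from true failure, so it assumes a solution exists):

    import itertools

    def depth_limited_search(problem, limit, node=None):
        node = node or Node(problem.initial_state)
        if problem.goal_test(node.state):
            return node
        if node.depth == limit:
            return None                            # cutoff reached
        for op, s, cost in problem.successors(node.state):
            result = depth_limited_search(
                problem, limit, Node(s, node, op, node.path_cost + cost))
            if result is not None:
                return result
        return None

    def iterative_deepening_search(problem):
        for limit in itertools.count():            # limit = 0, 1, 2, ...
            result = depth_limited_search(problem, limit)
            if result is not None:
                return result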

Depth-first vs breadth-first search
Repeated states
• Failure to detect repeated states can turn a linear
problem into an exponential one!

Three methods for reducing repeated work, in order of effectiveness and
computational overhead:
– Do not follow self-loops (remove successors back to the same state).
– Do not create paths with cycles (remove successors already on the path back to the root).
– Do not generate any state that was already generated. Requires storing all generated states (O(b^d) space) and searching them (usually using a hash table for efficiency).
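The third method corresponds to graph search with a set of generated states; a sketch on top of the earlier scheme (states assumed hashable):

    def graph_search(problem, insert_fn):
        frontier = [Node(problem.initial_state)]
        generated = {problem.initial_state}        # all states generated so far
        while frontier:
            node = frontier.pop(0)
            if problem.goal_test(node.state):
                return node
            children = [Node(s, node, op, node.path_cost + cost)
                        for op, s, cost in problem.successors(node.state)
                        if s not in generated]     # skip already-generated states
            generated.update(c.state for c in children)
            insert_fn(frontier, children)
        return None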
Heuristic Search
• Heuristic search incorporates domain knowledge to improve efficiency over blind search.
• A heuristic is a function that, when applied to a state, returns a value estimating the merit of that state with respect to the goal.
  – Heuristics may underestimate or overestimate the merit of a state with respect to the goal.
  – Heuristics that underestimate are desirable and are called admissible.
• A heuristic evaluation function estimates the likelihood of a given state leading to the goal state.
• A heuristic search function estimates the cost from the current state to the goal, presuming the function is efficient.
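As an illustration (our example, not from the slides), two classic admissible heuristics for the 8-puzzle, with states as sequences of 9 tile numbers and 0 for the blank:

    def misplaced_tiles(state, goal):
        # Number of tiles out of place (ignoring the blank).
        return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

    def manhattan_distance(state, goal):
        # Sum over tiles of |row difference| + |column difference| on a 3x3 board.
        total = 0
        for tile in range(1, 9):
            i, j = state.index(tile), goal.index(tile)
            total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
        return total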
Informed search algorithms
• Best-first search
• A* search
• Local search algorithms
• Hill-climbing search

Best-first search
• Idea: use an evaluation function f(n) for each node
– estimate of "desirability"
Expand most desirable unexpanded node

• Implementation:
Order the nodes in fringe in decreasing order of
desirability

Special cases:
– greedy best-first search
– A* search
Romania with step costs in km
Greedy best-first search
• Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to goal
• e.g., h_SLD(n) = straight-line distance from n to Bucharest
• Greedy best-first search expands the node that appears to be closest to the goal
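With the earlier scheme (our assumption), greedy best-first search orders the frontier by h(n) alone; h is problem-specific, e.g. straight-line distance:

    def greedy_insert(h):
        def insert(frontier, children):
            frontier.extend(children)
            frontier.sort(key=lambda n: h(n.state))  # apparently-closest node first
        return insert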
Greedy best-first search example
Properties of greedy best-first search

• Complete? No – can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt → …
• Time? O(b^m), but a good heuristic can give dramatic improvement
• Space? O(b^m) – keeps all nodes in memory
• Optimal? No
A* search
• Idea: avoid expanding paths that are already
expensive
• Evaluation function f(n) = g(n) + h(n)
• g(n) = cost so far to reach n
• h(n) = estimated cost from n to goal
• f(n) = estimated total cost of path through n to
goal
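A* is the same scheme with the frontier ordered by f(n) = g(n) + h(n); a sketch in the style of the inserts above:

    def astar_insert(h):
        def insert(frontier, children):
            frontier.extend(children)
            frontier.sort(key=lambda n: n.path_cost + h(n.state))  # f = g + h
        return insert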
A* search example
Admissible heuristics
• A heuristic h(n) is admissible if for every node n,
h(n) ≤ h*(n), where h*(n) is the true cost to reach the
goal state from n.
• An admissible heuristic never overestimates the cost
to reach the goal, i.e., it is optimistic
• Example: h_SLD(n) (never overestimates the actual road distance)
• Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal
Properties of A*

• Complete? Yes (unless there are infinitely many nodes with f ≤ f(G))
• Time? Exponential
• Space? Keeps all nodes in memory
• Optimal? Yes
Local search algorithms
• In many optimization problems, the path to the goal
is irrelevant; the goal state itself is the solution

• State space = set of "complete" configurations
• Find a configuration satisfying constraints, e.g., n-queens
• In such cases, we can use local search algorithms:
  – keep a single "current" state, try to improve it
Example: n-queens
• Put n queens on an n × n board with no two
queens on the same row, column, or diagonal
Hill-climbing search
• "Like climbing Everest in thick fog with amnesia"
• Problem: depending on initial state, can get
stuck in local maxima
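A minimal sketch of steepest hill climbing (written to minimize h, as in the 8-queens example that follows; neighbors() and h() are assumed problem-specific helpers):

    def hill_climbing(state, neighbors, h):
        while True:
            best = min(neighbors(state), key=h, default=None)
            if best is None or h(best) >= h(state):
                return state          # local optimum (possibly not the global one)
            state = best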

Hill-climbing search: 8-queens problem

• h = number of pairs of queens that are attacking each other, either directly or indirectly
• h = 17 for the state shown in the figure

• A local minimum with h = 1


Knowledge
• Knowledge is a progression that starts with data which
is of limited utility.
– By organizing or analyzing the data, we understand what the
data means, and this becomes information.
– The interpretation or evaluation of information yields knowledge.
– An understanding of the principles embodied within the
knowledge is wisdom.

Knowledge Type
• Cognitive psychologists sort knowledge into Declarative and Procedural categories.
• Procedural knowledge is knowledge about "how to do something"; e.g., to determine if Peter or Robert is older, first find their ages.
  – Focuses on tasks that must be performed to reach a particular objective or goal.
  – Examples: procedures, rules, strategies, agendas, models.
• Declarative knowledge is knowledge about "that something is true or false"; e.g., a car has four tyres; Peter is older than Robert.
  – Refers to representations of objects and events; knowledge about facts and relationships.
  – Examples: concepts, objects, facts, propositions, assertions, logic and descriptive models.
Knowledge Representation
How do we Represent what we know ?
• Knowledge is a general term.
An answer to the question, "how to represent knowledge",
requires an analysis to distinguish between knowledge
“how” and knowledge “that”.
■ knowing "how to do something".
e.g. "how to drive a car" is a Procedural knowledge.
■ knowing "that something is true or false".
e.g. "that is the speed limit for a car on a motorway" is a
Declarative knowledge.

Knowledge Representation
Knowledge and representation are two distinct entities. They play central
but distinguishable roles in an intelligent system.
■ Knowledge is a description of the world.
It determines a system's competence by what it knows.
■ Representation is the way knowledge is encoded.
It defines a system's performance in doing something.
Knowledge and Representation
• Know things to represent:
  – Objects: facts about objects in the domain.
  – Events: actions that occur in the domain.
  – Performance: knowledge about how to do things.
  – Meta-knowledge: knowledge about what we know.
• Need means to manipulate:
  – Requires some formalism for what we represent.
• Thus, knowledge representation can be considered at two levels:
  (a) the knowledge level, at which facts are described, and
  (b) the symbol level, at which the representations of the objects, defined in terms of symbols, can be manipulated in programs.
KR Using Predicate Logic
Logic: Logic is concerned with the truth of statements about the world.
Generally each statement is either TRUE or FALSE. Logic includes:
syntax, semantics, and an inference procedure.
Syntax: Specifies the symbols in the language and how they can be combined
to form sentences. The facts about the world are represented as sentences
in logic.
Semantics: Specifies how to assign a truth value to a sentence based on its
meaning in the world. It specifies what facts a sentence refers to. A fact
is a claim about the world, and it may be TRUE or FALSE.
Inference Procedure: Specifies methods for computing new sentences from
existing sentences.
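For illustration (an example of ours, not from the slides):
  Syntax:    ∀x (man(x) → mortal(x)) and man(Socrates) are well-formed sentences.
  Semantics: the first sentence is TRUE in a world where every man is mortal.
  Inference: universal instantiation plus modus ponens derives the new
             sentence mortal(Socrates).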
KR Using Rules

Types of Rules
Forward versus Backward Reasoning
• Rule-based system architecture consists of a set of rules, a set of
facts, and an inference engine. The need is to find what new facts can
be derived.
• Given a set of rules, there are essentially two
ways to generate new knowledge: one, forward
chaining and the other, backward chaining.
– Forward chaining : also called data driven.
It starts with the facts, and sees what rules apply.
– Backward chaining : also called goal driven.
It starts with something to find out, and looks for rules
that will help in answering it.
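A tiny sketch of forward chaining over propositional rules (our illustration; facts are strings, and each rule maps a list of premises to a conclusion):

    def forward_chain(facts, rules):
        # rules: list of (premises, conclusion) pairs
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in facts and all(p in facts for p in premises):
                    facts.add(conclusion)      # rule fires: derive a new fact
                    changed = True
        return facts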
Forward vs Backward Chaining
• Depends on problem, and on properties of rule set.
• Backward chaining is likely to be better if there is a clear hypothesis.
• Examples: Diagnostic problems or classification problems, medical expert systems.
• Forward chaining may be better if there is a less clear hypothesis and one wants to see what can be concluded from the current situation.
• Examples : Synthesis systems – design /
configuration.

Branches of AI
Logical AI
• Logic is a language for reasoning; a collection
of rules used while doing logical reasoning.
• Types of logic
– Propositional logic - logic of sentences
– Predicate logic - logic of objects
– Fuzzy logic - dealing with fuzziness

Search in AI
• Search is a problem-solving technique that systematically considers all possible actions to find a path from the initial state to the target state.
• Search techniques are many; the most fundamental are
  – Depth-first        – Hill climbing
  – Breadth-first      – Least cost
• Search components
– Initial state - First location
– Available actions - Successor function : reachable states
– Goal test - Conditions for goal satisfaction
– Path cost - Cost of sequence from initial state to reachable state
• Search objective
– Transform initial state into goal state - find a sequence of actions.
• Search solution
– Path from initial state to goal - optimal if lowest cost.

Pattern Recognition (PR)
• Definitions : from the literature
• 'The assignment of a physical object or event to
one of pre-specified categories' – Duda and Hart
• 'The science that concerns the description or
classification (recognition) of measurements' –
Schalkoff
• 'The process of giving names Ω to observations X '
– Schürmann
• Pattern Recognition is concerned with answering
the question 'What is this?' – Morse

Pattern Recognition (PR)
• Pattern recognition problems
– Machine vision - Visual inspection, ATR
– Character recognition – Mail sorting, processing bank cheques
– Computer aided diagnosis - Medical image/EEG/ECG signal analysis
– Speech recognition - Human Computer Interaction, access
• Approaches for Pattern recognition
– Template Matching
– Statistical classification
– Syntactic or Structural matching
• Applications requiring Pattern recognition
– Image Proc / Segmentation
– Computer Vision
– Industrial Inspection
– Medical Diagnosis
– Financial Forecast

Learning
• Programs learn from what the facts or the behaviours represent.
Definitions
• Herbert Simon 1983 – “Learning denotes changes in the system
that are adaptive in the sense that they enable the system to do the
same task or tasks more efficiently and more effectively the next
time.”
• Marvin Minsky 1986 – “Learning is making useful changes in the
working of our mind.”
• Ryszard Michalski 1986 – "Learning is constructing or modifying
representations of what is being experienced."

Learning
• Major Paradigms of Machine Learning
• Memorization: Learning by memorization; saving knowledge so that it can be used again.
• Induction: Learning by example; the process of learning by example, where a system tries to induce a general rule from a set of observed instances.
• Analogy: Learning from similarities; recognize similarities in information already stored; can determine correspondence between two different representations.
• Genetic Algorithms: Learning by mimicking processes nature uses; part of evolutionary computing, a way of solving problems by mimicking the natural processes of selection, crossover, and mutation to evolve a solution to a problem.
• Reinforcement: Learning from actions; assign rewards, positive or negative; at the end of a sequence of steps, the system learns which actions are good or bad.

Planning
• A plan is a representation of a course of action.
• Planning is a problem solving technique.
• Planning is a reasonable series of actions to accomplish a goal.
• Planning programs start with
  – facts about the world, particularly facts about the effects of actions,
  – facts about the particular situation, and
  – a statement of a goal.
• Benefits of planning
  – reducing search,
  – resolving goal conflicts, and
  – providing a basis for error recovery.
• Strategy for planning
  – A strategy is just a sequence of actions. From the facts, the program generates a strategy for achieving the goal.
Ontology
• Ontology is concerned with existence; a study of the categories of things
that exist or may exist in some domain.
• Ontology is a data model, represents a domain and is used to reason
about the objects in that domain and the relations between them.
• Ontology is used in artificial intelligence, as a form of knowledge
representation about the world or some part of it.
• An ontology generally describes:
– Individuals (instances): the basic or ground level objects
– Classes: sets, collections, or types of objects.
– Attributes: properties, features, characteristics, or parameters
that objects can have and share.
– Relations: ways the objects can be related to one another.
• Ontology is a specification of a conceptualization.

Heuristics
• Heuristics are simple, efficient rules;
• Heuristics are in common use as Rule of thumb;
• In computer science, a heuristic is an algorithm that typically produces a good solution quickly, but without provably good run times or provably optimal solutions.
• Heuristics are intended to gain computational performance or conceptual simplicity, potentially at the cost of accuracy or precision.
• People use heuristics to make decisions, come to judgments, and solve
problems, when facing complex problems or incomplete information.
• These rules work well under most circumstances.
• In AI programs, the heuristic functions are :
– used to measure how far a node is from goal state.
– used to compare two nodes, find if one is better than the other.
