Unit 1: Thinking Rationally

The document provides an introduction to artificial intelligence, including key concepts such as intelligent agents that perceive and act in an environment. It discusses different definitions and approaches to AI such as systems that think or act like humans versus those that think or act rationally. The history of AI is also summarized, from its origins in the 1950s to recent developments in deep learning. Problem solving agents are introduced as goal-based agents that decide on sequences of actions to achieve desirable states by formulating problems and searching for solutions.


Unit 1

Introduction
 AI is the study of agents that exist in an environment and perceive and act.
 AI attempts to understand intelligent entities.
 Philosophy and Psychology are also concerned with intelligence.
 AI strives to build intelligent entities as well as understand them.
 Building an Intelligent System – Expert Systems.
 AI is a new discipline, formally initiated in 1956.
 A student in physics might reasonably feel that all the good ideas have already been
taken by Galileo, Newton, Einstein, and the rest, and that it takes many years of study
before one can contribute new ideas.
 AI, on the other hand, still has openings for a full-time Einstein.
 Definitions of artificial intelligence:
o Thought Processes
o Reasoning
o Behavior
 Definitions of AI are organized into four categories:
o Systems that think like humans.
o Systems that act like humans.
o Systems that think rationally.
o Systems that act rationally.
 Acting humanly (the Turing Test approach) requires capabilities such as:
o Natural Language Processing.
o Knowledge Representation.
o Automated Reasoning.
o Machine Learning.
o Computer Vision.
o Robotics.

 Thinking humanly:
o To get inside the actual working of the human mind.
o Two ways to do this:
 Through introspection
 Psychological experiments
o Cognitive Science is the combination of computer models from AI and experimental techniques from psychology.

Thinking rationally:
The "laws of thought" approach: formal logic systems are introduced for representing knowledge and deriving correct conclusions.
Acting rationally:
An agent is just something that perceives and acts.
- Acting rationally requires the ability to represent knowledge and reason with it.
Foundations of Artificial Intelligence
Philosophy
Philosophers laid the groundwork for AI by considering the ideas that the mind is in some ways like a machine, that it operates on knowledge encoded in some internal language, and that thought can be used to help arrive at the right actions to take.
Mathematics
Mathematicians provided the tools to manipulate statements of logical certainty as well as uncertain, probabilistic statements, and created reasoning algorithms.
Psychology
Psychologists strengthened the idea that humans and other animals can be considered information-processing machines.
Computer engineering
Computer engineers provided the ever more powerful machines on which AI applications run.
Linguistics
Linguists showed that language use fits into this model.

The History OF Artificial Intelligence


The gestation of artificial intelligence (1943-1956)
Early enthusiasm, great expectations (1952-1969)
A dose of reality (1966-1974)
Knowledge-based systems: The key to power? (1969-1979)
AI becomes an industry (1980-1988)
The return of neural networks (1986-present)
Recent events (1987-present)
In 1989, Dean Pomerleau created ALVINN (An Autonomous Land Vehicle in a Neural Network).
https://en.wikipedia.org/wiki/Timeline_of_artificial_intelligence#1980s

This unit defines AI and establishes the cultural background; its important points are:
In developing AI-equipped systems, are you concerned with thinking or behavior?
Do you want to model humans, or work from an ideal standard?
Intelligence is concerned mainly with rational action.
An intelligent agent takes the best possible action in a situation.
The history of AI has had cycles of success, misplaced optimism, and resulting cutbacks in enthusiasm and funding. There have also been cycles of introducing new creative approaches and systematically refining the best ones.
"The field of computer science that studies how machines can be made to act
intelligently" (Jackson, 1986)
INTELLIGENT AGENTS

An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through effectors/actuators.

HOW SHOULD AGENTS ACT?


RATIONAL AGENT
What is rational at any given time depends on four things:
 The performance measure that defines the degree of success.
 The percept sequence – everything that the agent has perceived so far.
 What the agent knows about the environment.
 The actions that the agent can perform.
An ideal rational agent is specified by a mapping from percept sequences to actions.
AUTONOMY
A system is autonomous to the extent that its behavior is determined by its own experience.

STRUCTURE OF INTELLIGENT AGENTS


The job of AI is to design the agent program: a function that implements the agent mapping from percepts to actions.
This program will run on some sort of computing device (the ARCHITECTURE): a plain computer, or special-purpose hardware for certain tasks (processing camera images or filtering audio input).

Agent = Architecture + Program

Software Agents (or Software Robots or Softbots)

Agent and its type can be described as PAGE.

Percept – Action – Goal – Environment


Agent programs
The early versions of agent programs will use some internal data structures and are operated
on by the agent's decision-making procedures to generate an action choice, which is then
passed to the architecture to be executed.
Skeleton Program:
There are two things to note about this:
1. The agent program receives only a single percept as its input.
2. The goal or performance measure is not part of the skeleton program.

function SKELETON-AGENT(percept) returns action
    static: memory, the agent's memory of the world

    memory ← UPDATE-MEMORY(memory, percept)
    action ← CHOOSE-BEST-ACTION(memory)
    memory ← UPDATE-MEMORY(memory, action)
    return action

function TABLE-DRIVEN-AGENT(percept) returns action
    static: percepts, a sequence, initially empty
            table, a table indexed by percept sequences, initially fully specified

    append percept to the end of percepts
    action ← LOOKUP(percepts, table)
    return action
An example:
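For instance, the two skeletons can be sketched in Python. This is only an illustration of the control flow: UPDATE-MEMORY, CHOOSE-BEST-ACTION, and the lookup table are placeholders supplied by the caller, not a fixed API.

```python
# Hypothetical sketch of the two agent skeletons above.

def make_skeleton_agent(update_memory, choose_best_action):
    memory = {}                                  # the agent's memory of the world
    def agent(percept):
        update_memory(memory, percept)           # memory <- UPDATE-MEMORY(memory, percept)
        action = choose_best_action(memory)      # action <- CHOOSE-BEST-ACTION(memory)
        update_memory(memory, action)            # memory <- UPDATE-MEMORY(memory, action)
        return action
    return agent

def make_table_driven_agent(table):
    percepts = []                                # the percept sequence, initially empty
    def agent(percept):
        percepts.append(percept)                 # append percept to percepts
        return table.get(tuple(percepts))        # index the table by the whole sequence
    return agent

# Usage: a tiny fully specified table for a two-step percept sequence.
table = {("dirty",): "clean", ("dirty", "clean"): "right"}
agent = make_table_driven_agent(table)
print(agent("dirty"))   # clean
print(agent("clean"))   # right
```

The table-driven version makes the impracticality concrete: the table must contain an entry for every possible percept sequence.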

Four types of Agent Program:


1. Simple reflex agents
2. Agents that keep track of the world
(Reflex Agent)
3. Goal-based agents
4. Utility-based agents
1. Simple reflex agent

2. Agents that keep track of the world (Reflex Agent)


Two kinds of knowledge are needed:
1. Information about how the world evolves independently of the agent.
2. Information about how the agent's own actions affect the world.

3. Goal-based agents

4. Utility-based agents
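As a rough sketch (the condition-action rules here are invented for illustration), the difference between designs 1 and 2 is whether the agent keeps internal state:

```python
# Illustrative condition-action rules; invented for this sketch.
RULES = {"dirty": "clean", "clean": "move"}

def simple_reflex_agent(percept):
    # Design 1: the chosen action depends only on the current percept.
    return RULES[percept]

def make_stateful_reflex_agent():
    # Design 2: the agent also keeps internal state, combining the percept
    # with what it knows about the effects of its own past actions.
    state = {"squares_cleaned": 0}
    def agent(percept):
        action = RULES[percept]
        if action == "clean":
            state["squares_cleaned"] += 1   # track the effect of our own action
        return action
    return agent

print(simple_reflex_agent("dirty"))   # clean
```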

AGENTS AND ENVIRONMENTS


Properties of environments
Accessible vs. inaccessible
Deterministic vs. nondeterministic
Episodic vs. non-episodic
Static vs. dynamic
Discrete vs. continuous
Environment programs:
- Provide the basic relationship between agents and environments.

- The basic environment simulator program gives each agent its percept, gets an action from each agent, and then updates the environment.
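A minimal simulator sketch under these assumptions, using a two-square vacuum world; the Environment/Agent classes and method names are our own illustration, not a standard API:

```python
# A minimal environment-simulator sketch for a two-square vacuum world.

class Environment:
    def __init__(self, dirt):
        self.dirt = dict(dirt)       # e.g. {"A": True, "B": False}
        self.steps = 0

    def get_percept(self, agent):
        # The agent perceives its location and whether that square is dirty.
        return (agent.location, self.dirt[agent.location])

    def update(self, agent, action):
        # Apply the agent's action to the world, then advance time.
        if action == "Clean":
            self.dirt[agent.location] = False
        elif action in ("A", "B"):
            agent.location = action
        self.steps += 1

    def is_done(self):
        return not any(self.dirt.values())   # done when no dirt remains

class Agent:
    def __init__(self, location, program):
        self.location = location
        self.program = program

def run_environment(env, agent, max_steps=100):
    # The simulator loop: give the agent its percept, get an action back,
    # and update the environment.
    while not env.is_done() and env.steps < max_steps:
        env.update(agent, agent.program(env.get_percept(agent)))
    return env.steps

# A reflex vacuum agent: clean if dirty, otherwise move to the other square.
program = lambda p: "Clean" if p[1] else ("B" if p[0] == "A" else "A")
print(run_environment(Environment({"A": True, "B": True}), Agent("A", program)))  # 3
```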

An agent is something that perceives and acts in an environment. We split an agent into an
architecture and an agent program.
An ideal agent is one that always takes the action that is expected to maximize its
performance measure.
An agent is autonomous to the extent that its action choices depend on its own experience,
rather than on knowledge of the environment that has been built-in by the designer.
An agent program maps from a percept to an action, while updating an internal state.
Reflex agents respond immediately to percepts.
Goal-based agents act to accomplish their goals.
Utility-based agents try to maximize their own "happiness."
The process of making decisions by reasoning with knowledge is central to AI.
Developing intelligent agents for environments that are inaccessible, nondeterministic, non-episodic, dynamic, and continuous is the most challenging.

PROBLEM-SOLVING

- Simple reflex agents are unable to plan ahead and their actions are determined only by
the current percept. They have no knowledge of what their actions do nor of what they
are trying to achieve.
- Goal-based agents are called problem-solving agents.
- Problem-solving agents decide what to do by finding sequences of actions that lead to
desirable states.
- Goal formulation: deciding on a goal, based on the current situation.
- Problem formulation: the process of deciding what actions and states to consider, given a goal.
- Search Solution: The process of looking for such a sequence is called Search. A
search algorithm takes a problem as input and returns a solution in the form of an
action sequence.
- Execution: Once a solution is found, the actions it recommends can be carried out.
This is called the execution phase.
- A simple "formulate, search, execute" design for the agent.
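The "formulate, search, execute" loop might be sketched as follows; formulate_goal, formulate_problem, and search are placeholders to be supplied, and treating the percept directly as the state is a simplifying assumption:

```python
def make_problem_solving_agent(formulate_goal, formulate_problem, search):
    seq = []                                 # planned action sequence, initially empty
    def agent(percept):
        state = percept                      # simplified: the percept is the state
        if not seq:                          # no plan yet: formulate and search
            goal = formulate_goal(state)
            problem = formulate_problem(state, goal)
            seq.extend(search(problem))      # search returns an action sequence
        return seq.pop(0) if seq else None   # execute the next recommended action
    return agent

# Usage with toy placeholders: walk from the current number down to zero.
agent = make_problem_solving_agent(
    formulate_goal=lambda s: 0,
    formulate_problem=lambda s, g: (s, g),
    search=lambda p: ["decrement"] * (p[0] - p[1]),
)
print(agent(2))   # decrement
```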

A Simple Problem-Solving Agent

FORMULATING PROBLEMS
- There are four essentially different types of problems.
o Single-state problems.
o Multiple-state problems.
o Contingency problems.
o Exploration problems.

Knowledge and problem types:


Let us consider an environment with just two locations (the vacuum world). Each location
may or may not contain dirt, and the agent may be in one location or the other. There are 8
possible world states.
The agent has three possible actions in this version of the vacuum world: Left, Right, and
Clean (100% effective).
The goal is to clean up all the dirt. That is, the goal is equivalent to the state set {7,8}.
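The 8 states are easy to enumerate. The (location, dirt-in-A, dirt-in-B) encoding below is our own; the numbering that makes the goal set {7, 8} comes from the textbook's figure:

```python
# Enumerate the vacuum-world states: agent location x dirt in A x dirt in B.
from itertools import product

states = [(loc, dirt_a, dirt_b)
          for loc, dirt_a, dirt_b in product("AB", (True, False), (True, False))]
print(len(states))            # 8

# The goal set: both squares clean, agent anywhere -- two states,
# corresponding to the text's {7, 8}.
goal_states = [s for s in states if not s[1] and not s[2]]
print(len(goal_states))       # 2
```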

PROBLEM
A problem is really a collection of information that the agent will use to decide what to do.
INITIAL STATE
The state that the agent knows itself to be in when it begins.
OPERATOR
The set of possible actions available to the agent.
SUCCESSOR FUNCTION
Given a particular state, returns the set of states reachable from it by any single action.
STATE SPACE
The set of all states reachable from the initial state by any sequence of actions.
PATH
A path in the state space is simply any sequence of actions leading from one state to another.
GOAL TEST
The goal is specified by an abstract property.
- For example, in chess, the goal is to reach a state called "checkmate," where the
opponent's king can be captured on the next move no matter what the opponent does.
PATH COST
A path cost function is a function that assigns a cost to a path (often denoted g). Together, the initial state, operator set, goal test, and path cost function define a problem.

SOLUTION
- A solution, that is, a path from the initial state to a state that satisfies the goal test.
STATE SET SPACE
- For multiple-state problems, the space of sets of states that the agent may be in.
Measuring problem-solving performance:
The effectiveness of a search can be measured in at least three ways.
1. Does it find a solution at all?
2. Is it a good solution (one with a low path cost)?
3. What is the search cost (the time and memory required to find a solution)?

SEARCH COST
- The time and memory required to find a solution.
TOTAL COST
- The total cost of the search is the sum of the path cost and the search cost.

EXAMPLE PROBLEMS

The problems are varying in two types:


1. Toy Problems.
a. The 8-puzzle
b. The 8-queens problem
c. The vacuum world
2. Real World Problems.
a. Route finding
b. Touring and travelling salesperson problems
c. VLSI layout
d. Robot navigation
e. Assembly sequencing

SEARCHING FOR SOLUTIONS

AI Agent development phases:


1. Define a problem
2. How to recognize a solution?
3. Finding a solution – Search.

Generating action sequences (Actions Set):


- Generating a new set of states.
- Expanding state.
- Applying the search strategy.
SEARCH TREE

It is helpful to think of the search process as building up a SEARCH TREE that is superimposed over the state space.
The root of the search tree is a SEARCH NODE corresponding to the initial state.
Data structures for search trees:
A node is a data structure with five components:
1. The state in the state space to which the node corresponds;
2. The node in the search tree that generated this node (this is called the parent node);
3. The operator that was applied to generate the node;
4. The number of nodes on the path from the root to this node (the depth of the node);
5. The path cost of the path from the initial state to the node.

The nodes that are waiting to be expanded are called the fringe or frontier.
It is like a data structure called queue.
The operations on a queue are as follows:
- MAKE-QUEUE(Elements) creates a queue with the given elements.
- EMPTY?(Queue) returns true only if there are no more elements in the queue.
- REMOVE-FRONT(Queue) removes the element at the front of the queue and returns it.
- QUEUING-FN(Elements, Queue) inserts a set of elements into the queue. Different varieties of the queuing function produce different varieties of the search algorithm.
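Putting the node structure and the queue operations together, a general search loop driven by a queuing function might look like this; the problem-dictionary layout (initial_state, goal_test, successors) is our own assumption, not a fixed interface:

```python
from collections import deque

class Node:
    """A search-tree node with the five components listed above."""
    def __init__(self, state, parent=None, operator=None, depth=0, path_cost=0):
        self.state = state          # the state this node corresponds to
        self.parent = parent        # the node that generated this node
        self.operator = operator    # the operator applied to generate it
        self.depth = depth          # depth of the node in the tree
        self.path_cost = path_cost  # cost of the path from the initial state

def general_search(problem, queuing_fn):
    # successors(state) -> iterable of (operator, next_state, step_cost)
    fringe = deque([Node(problem["initial_state"])])
    while fringe:
        node = fringe.popleft()                     # REMOVE-FRONT
        if problem["goal_test"](node.state):
            return node
        children = [Node(s, node, op, node.depth + 1, node.path_cost + c)
                    for op, s, c in problem["successors"](node.state)]
        queuing_fn(fringe, children)                # QUEUING-FN inserts new nodes
    return None

# Two queuing functions give two strategies:
bfs_fn = lambda fringe, nodes: fringe.extend(nodes)                # end of queue
dfs_fn = lambda fringe, nodes: fringe.extendleft(reversed(nodes))  # front of queue
```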

SEARCH STRATEGIES
Finding the right search strategy for a problem is given in terms of four criteria:
1. Completeness: is the strategy guaranteed to find a solution when there is one?
2. Time complexity: how long does it take to find a solution?
3. Space complexity: how much memory does it need to perform the search?
4. Optimality: does the strategy find the highest-quality solution when there are several
different solutions?

Six search strategies come under the heading of uninformed search.
1. Breadth-first search
2. Uniform cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
6. Bidirectional search

Breadth-first search
- the root node is expanded first, then all the nodes generated by the root node are
expanded.
- all the nodes at depth d in the search tree are expanded before the nodes at depth d +
1.
- Breadth-first search can be implemented by calling the GENERAL-SEARCH
algorithm with a queuing function that puts the newly generated states at the end of
the queue.

- If there is a solution, breadth-first search is guaranteed to find it, and if there are
several solutions, breadth-first search will always find the shallowest goal state first.
- In terms of the four criteria, breadth-first search is complete, and it is optimal
provided the path cost is a non-decreasing function of the depth of the node.
- the memory requirements are a bigger problem for breadth-first search than the
execution time.
- the time requirements are still a major factor.

Uniform cost search


- Uniform cost search modifies the breadth-first strategy by always expanding the
lowest-cost node on the fringe, rather than the lowest-depth node.
- This search finds the cheapest solution provided a simple requirement is met: the cost
of a path must never decrease as we go along the path.
- If every operator has a nonnegative cost, then the cost of a path can never decrease as
we go along the path, and uniform-cost search can find the cheapest path without
exploring the whole search tree.
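A heap-based sketch of this strategy, under the same assumed problem-dictionary layout (successors(state) yields (operator, next_state, step_cost) triples); the visited set is a standard addition to avoid re-expanding states:

```python
import heapq

def uniform_cost_search(problem):
    # Fringe ordered by path cost; each entry is (cost, state, actions so far).
    fringe = [(0, problem["initial_state"], [])]
    visited = set()
    while fringe:
        cost, state, actions = heapq.heappop(fringe)   # lowest-cost node first
        if problem["goal_test"](state):
            return cost, actions
        if state in visited:
            continue
        visited.add(state)
        for op, nxt, step in problem["successors"](state):
            # Nonnegative step costs mean path cost never decreases along a path.
            heapq.heappush(fringe, (cost + step, nxt, actions + [op]))
    return None

# Usage: A -> C directly costs 5, but A -> B -> C costs only 2.
succ = {"A": [("A->B", "B", 1), ("A->C", "C", 5)], "B": [("B->C", "C", 1)], "C": []}
prob = {"initial_state": "A", "goal_test": lambda s: s == "C",
        "successors": lambda s: succ[s]}
print(uniform_cost_search(prob))   # (2, ['A->B', 'B->C'])
```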

Depth-first search
- It always expands one of the nodes at the deepest level of the tree.
- This strategy can be implemented by GENERAL-SEARCH with a queuing function
that always puts the newly generated states at the front of the queue.
- The time complexity for depth-first search is O(b^m), where m is the maximum depth.
- For problems that have very many solutions, depth-first may actually be faster than
breadth-first, because it has a good chance of finding a solution after exploring only a
small portion of the whole space.

Depth-limited search
- Depth-limited search avoids the pitfalls of depth-first search by imposing a cutoff on
the maximum depth of a path.

Iterative Deepening Search


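Iterative deepening calls depth-limited search with limits 0, 1, 2, … until a goal is found. A recursive sketch, again under the assumed problem-dictionary layout:

```python
def depth_limited_search(problem, state, limit, path=()):
    # Depth-first search cut off at the given depth limit.
    if problem["goal_test"](state):
        return list(path)
    if limit == 0:
        return None
    for op, nxt, _ in problem["successors"](state):
        result = depth_limited_search(problem, nxt, limit - 1, path + (op,))
        if result is not None:
            return result
    return None

def iterative_deepening_search(problem, max_limit=50):
    # Repeated depth-limited searches with an increasing limit.
    for limit in range(max_limit + 1):
        result = depth_limited_search(problem, problem["initial_state"], limit)
        if result is not None:
            return result
    return None
```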
Bidirectional search
Comparing search strategies

AVOIDING REPEATED STATES


There are three ways to deal with repeated states:
- Do not return to the state you just came from. Have the expand function (or the
operator set) refuse to generate any successor that is the same state as the node's
parent.
- Do not create paths with cycles in them. Have the expand function (or the operator
set) refuse to generate any successor of a node that is the same as any of the node's
ancestors.
- Do not generate any state that was ever generated before.
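The first two checks can be expressed as simple predicates over the parent chain; the minimal Node tuple here is just for illustration:

```python
from collections import namedtuple

# A minimal node with only the fields these checks need.
Node = namedtuple("Node", ["state", "parent"])

def not_parent(node, successor_state):
    # Check 1: do not return to the state we just came from.
    return node.parent is None or successor_state != node.parent.state

def not_ancestor(node, successor_state):
    # Check 2: do not create a path with a cycle in it.
    while node is not None:
        if node.state == successor_state:
            return False
        node = node.parent
    return True

# Check 3 ("never generate any state seen before") instead requires keeping
# a set of every state generated so far, trading memory for pruning power.

root = Node("A", None)
child = Node("B", root)
print(not_parent(child, "A"))    # False: would return to the parent
print(not_ancestor(child, "C"))  # True: "C" is new on this path
```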
CONSTRAINT SATISFACTION SEARCH

A constraint satisfaction problem (or CSP) is a special kind of problem that satisfies some
additional structural properties beyond the basic requirements for problems in general.

- In a CSP, the states are defined by the values of a set of variables.


- The goal test specifies a set of constraints that the values must obey.
- A solution to a CSP specifies values for all the variables such that the constraints are
satisfied.
- Each variable Vi in a CSP has a domain Di, which is the set of possible values that the
variable can take on. The domain can be discrete or continuous.
- Backtracking Search.
- Forward checking.
- Arc consistency.
- Constraint Propagation.
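A minimal backtracking-search sketch over such variables and domains; the map-coloring example and the consistent() helper are our own illustrations:

```python
# A minimal backtracking CSP solver; variables, domains, and a consistency
# check are supplied by the caller.
def backtracking_search(variables, domains, consistent, assignment=None):
    assignment = dict(assignment or {})
    if len(assignment) == len(variables):
        return assignment                      # all variables assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):             # check constraints on the partial assignment
            result = backtracking_search(variables, domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]                    # undo and try the next value
    return None

# Example: color three mutually adjacent regions so that neighbors differ.
neighbors = {("X", "Y"), ("Y", "Z"), ("X", "Z")}

def consistent(a):
    return all(a[u] != a[v] for u, v in neighbors if u in a and v in a)

sol = backtracking_search(["X", "Y", "Z"],
                          {v: ["r", "g", "b"] for v in "XYZ"},
                          consistent)
print(sol)
```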
- Before an agent can start searching for solutions, it must formulate a goal and
then use the goal to formulate a problem.
- A problem consists of four parts: the initial state, a set of operators, a goal test
function, and a path cost function.
- The environment of the problem is represented by a state space.
- A path through the state space from the initial state to a goal state is a solution.
- In real life most problems are ill-defined; but with some analysis, many problems
can fit into the state space model.
- A single general search algorithm can be used to solve any problem; specific
variants of the algorithm embody different strategies.
- Search algorithms are judged on the basis of completeness, optimality, time
complexity, and space complexity. Complexity depends on b, the branching
factor in the state space, and d, the depth of the shallowest solution.
- Breadth-first search expands the shallowest node in the search tree first. It is complete, optimal for unit-cost operators, and has time and space complexity of O(b^d).
- The space complexity makes it impractical in most cases.
- Uniform-cost search expands the least-cost leaf node first. It is complete, and
unlike breadth-first search is optimal even when operators have differing costs.
Its space and time complexity are the same as for breadth-first search.
- Depth-first search expands the deepest node in the search tree first. It is neither complete nor optimal, and has time complexity of O(b^m) and space complexity of O(bm), where m is the maximum depth.
- In search trees of large or infinite depth, the time complexity makes this
impractical.
- Depth-limited search places a limit on how deep a depth-first search can go. If
the limit happens to be equal to the depth of shallowest goal state, then time and
space complexity are minimized.
- Iterative deepening search calls depth-limited search with increasing limits until
a goal is found.
- It is complete and optimal, and has time complexity of O(b^d) and space complexity of O(bd).
- Bidirectional search can enormously reduce time complexity, but is not always
applicable.
- Its memory requirements may be impractical.
