Module 1: Intro to AI
ARTIFICIAL INTELLIGENCE
MODULE 1:
• The field of artificial intelligence attempts not just to understand but also to build intelligent entities.
AI is one of the newest fields in science and engineering. Work started in earnest soon after World War II, and
the name itself was coined in 1956.
• The branch of Computer Science called AI is said to have been born at a conference held at Dartmouth
College, USA, in 1956.
• The scientists attending the conference represented different disciplines: Mathematics, Neurology,
Psychology, Electrical Engineering etc.
• Artificial intelligence (AI) is the ability of machines or software to perform tasks that typically require
human intelligence.
These tasks include:
• learning, reasoning, speech recognition, problem solving, and identifying patterns.
• A Rational agent in the context of artificial intelligence refers to an entity, typically a computer program
or system, that makes decisions or takes actions aimed at achieving its goals effectively in a given
environment.
Acting Humanly: The Turing test approach
• The Turing test was developed by Alan Turing (a computer scientist) in 1950.
• The Turing Test is a widely used measure of a machine’s ability to demonstrate human-like
intelligence.
• To pass the test, the computer would need capabilities including:
• natural language processing to communicate successfully in a human language;
• machine learning to adapt to new circumstances and to detect and extrapolate patterns.
Total Turing Test: The standard Turing test deliberately avoids direct physical
interaction between the interrogator and the computer, because physical simulation
of a person is unnecessary for intelligence. The total Turing test additionally
requires computer vision and robotics, so the interrogator can test the subject's
perceptual abilities and pass physical objects.
Thinking Humanly: The cognitive modeling approach
• The mind is a black box; we are not clear about our own thought processes.
• One has to know the functioning of the brain and its mechanism for processing
information. This is the domain of cognitive science.
Speech Recognition: A traveller calling United Airlines to book a flight can have the
entire conversation guided by an automated speech recognition and dialog management
system.
Game Playing: IBM’s DEEP BLUE became the first computer program to defeat a
reigning world chess champion when it beat Garry Kasparov by a score of 3.5 to 2.5
in a 1997 exhibition match. The value of IBM’s stock increased by $18 billion.
Autonomous planning and Autonomous Scheduling : A hundred million miles from Earth,
NASA’s Remote Agent program became the first on-board autonomous planning program
to control the scheduling of operations for a spacecraft. Successor program MAPGEN
plans the daily operations for NASA’s Mars Exploration Rovers, and MEXAR2 did
mission planning—both logistics and science planning—for the European Space Agency’s
Mars Express mission in 2008.
Spam Fighting: Every day, learning algorithms classify over a billion messages as
spam, saving users from having to manually delete messages that might otherwise
make up 80% or 90% of their inbox. The algorithms are trained to recognize
patterns associated with spam content, including common phrases, keywords, and
structural elements.
Logistics Planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a
Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do
automated logistics planning and scheduling for transportation. This involved up to
50,000 vehicles, cargo, and people at a time, and had to account for starting points,
destinations, routes, and conflict resolution among all parameters.
Robotics: The iRobot Corporation has sold over two million Roomba robotic vacuum
cleaners for home use. The company also deploys the more rugged PackBot to Iraq and
Afghanistan, where it is used to handle hazardous materials, clear explosives, and identify
the location of snipers.
CHAPTER-2 INTELLIGENT AGENTS
• Structure of Agents
AGENTS AND ENVIRONMENTS
• An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through actuators.
• HUMAN AGENT:
eyes, ears, and other organs for sensors
hands, legs, vocal tract for actuators
• ROBOTIC AGENT:
cameras and infrared range finders for sensors
and various motors for actuators.
• A software agent receives keystrokes, file contents, and network packets as
sensory inputs and acts on the environment by displaying on the screen,
writing files, and sending network packets
• The agent function maps from percept histories (percept sequences) to actions:
f : P* → A.
• The agent program runs on the physical architecture to produce f.
• An agent’s percept sequence is the complete history of everything the agent
has ever perceived.
• Mathematically speaking, we say that an agent’s behaviour is described by the
agent function that maps any given percept sequence to an action.
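As an illustrative sketch (our own Python example, not from the notes), an agent
function can be written as an explicit table from percept sequences to actions,
here for the two-square vacuum world:

```python
# A sketch of f : P* -> A as a literal table. Keys are tuples of percepts
# (the history so far); percepts are (location, status) pairs.
table = {
    (('A', 'Dirty'),): 'Suck',
    (('A', 'Clean'),): 'Right',
    (('A', 'Clean'), ('B', 'Dirty')): 'Suck',
    (('A', 'Clean'), ('B', 'Clean')): 'Left',
}

percept_sequence = []  # the complete history of everything perceived so far

def agent_function(percept):
    percept_sequence.append(percept)
    return table.get(tuple(percept_sequence), 'NoOp')

print(agent_function(('A', 'Clean')))  # -> 'Right'
print(agent_function(('B', 'Dirty')))  # -> 'Suck'
```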
VACUUM CLEANER WORLD
Rational Agent: For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance measure, given
the evidence provided by the percept sequence and whatever built-in
knowledge the agent has.
Examples of Rational Choice
With respect to Vacuum cleaner
a) Performance measure – awarding points (one point for each clean square at
each time step)
b) Prior knowledge – the geography of the environment is known a priori: clean
squares stay clean and sucking cleans the current square.
c) Actions – left, right, suck, Do Nothing
d) Percept sequence – perceiving dirt locations- The agent correctly perceives
its location and whether that location contains dirt.
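Under assumptions (a)–(d), a rational agent program for this world needs only the
current percept; the following is a Python rendering of AIMA's
REFLEX-VACUUM-AGENT pseudocode:

```python
def reflex_vacuum_agent(percept):
    """Percept is a (location, status) pair; rational under (a)-(d) above."""
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))  # -> 'Suck'
```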
Omniscience, learning, and autonomy
Rationality is different from omniscience:
• an omniscient agent knows the actual outcome of its actions and can act accordingly.
• Percepts may not supply all relevant information.
• E.g., I am walking along the Champs Élysées one day and I see an old friend
across the street.
• E.g., in a card game, you do not know the cards held by the other players.
• The agent’s initial configuration could reflect some prior knowledge of the
environment, but as the agent gains experience, this may be modified and
augmented.
EXAMPLE (fully vs. partially observable):
Crossword: Fully; Poker: Partially; Backgammon: Fully; Taxi driver: Partially;
Part-picking robot: Partially; Image analysis: Fully
Single agent (vs. multi agent)
• Is the agent operating by itself in the environment, or are there other agents
it must work with or against?
• An agent solving a crossword puzzle by itself is clearly in a single-agent
environment, whereas an agent playing chess is in a two-agent environment.
• Chess is a competitive multiagent environment.
• Taxi driving is a partially cooperative multiagent environment (e.g., avoiding
collisions) and a partially competitive one (e.g., competing for parking spaces).
EXAMPLE (single vs. multi agent):
Crossword: Single; Poker: Multi; Backgammon: Multi; Taxi driver: Multi;
Part-picking robot: Single; Image analysis: Single
Deterministic (vs. stochastic)
• If the next state of the environment is completely determined by the current
state and the action executed by the agent, then we say the environment is
deterministic; otherwise, it is stochastic.
• If the environment is (mostly) partially observable, it may appear to be
stochastic; if it is fully observable, it may appear to be deterministic.
• We say an environment is uncertain if it is not fully observable or not
deterministic.
EXAMPLE (deterministic vs. stochastic):
Crossword: Deterministic; Poker: Stochastic; Backgammon: Stochastic;
Taxi driver: Stochastic; Part-picking robot: Stochastic; Image analysis: Deterministic
Episodic (vs. sequential)
• In an episodic environment, the agent's experience is divided into atomic
episodes, and the next episode does not depend on the actions taken in previous
episodes. In a sequential environment, the current decision could affect all
future decisions.
Agent Programs
• The agent program takes the current percept as input from the sensors and
returns an action to the actuators.
• If the agent's actions depend on the entire percept sequence, the agent will have
to remember the percepts.
• Let P = the set of possible percepts and T = the lifetime of the agent (the
total number of percepts it receives).
• The lookup table must then hold Σ_{t=1..T} |P|^t entries.
• Consider playing chess with |P| = 10 and T = 150:
• it will require a table of at least 10^150 entries.
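A minimal Python sketch (our own, following AIMA's TABLE-DRIVEN-AGENT pseudocode)
of the table-driven scheme whose size is being counted above:

```python
def table_driven_agent(table):
    """Returns an agent program that appends each percept to the history
    and looks the whole sequence up in the table."""
    percepts = []
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))  # one entry per percept sequence
    return program
```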
Agent Types
Four basic types in order of increasing generality:
Reflex Agents-
• Simple reflex agents
• Model Based Reflex agents
Goal-based agents
Utility-based agents
Learning agents
Simple Reflex Agents
• Simple but very limited intelligence.
• Action does not depend on percept history, only on current percept.
• Therefore no memory requirements.
• Works when the environment is fully observable, deterministic, static, episodic,
discrete, and single agent.
• The agent function is based on condition-action rules:
• if condition then action.
• e.g.: if car-in-front-is-braking then initiate-braking
(if you see the car in front's brake lights, then apply the brakes).
• The agent simply takes in a percept, determines which rule applies, and
performs the corresponding action.
• The INTERPRET-INPUT function generates an abstracted description of the
current state from the percept,
• The RULE-MATCH function returns the first rule in the set of rules that
matches the given state description.
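A Python sketch (our own; the rule representation is a hypothetical choice)
showing how INTERPRET-INPUT and RULE-MATCH fit together:

```python
def simple_reflex_agent(rules, interpret_input):
    """rules: list of (condition, action) pairs; interpret_input abstracts
    the current percept into a state description."""
    def program(percept):
        state = interpret_input(percept)        # INTERPRET-INPUT
        for condition, action in rules:         # RULE-MATCH: first rule that fires
            if condition(state):
                return action
        return None
    return program

# Hypothetical rule for the braking example:
rules = [(lambda s: s.get('car_in_front_is_braking', False), 'initiate-braking')]
agent = simple_reflex_agent(rules, interpret_input=lambda p: p)
print(agent({'car_in_front_is_braking': True}))  # -> 'initiate-braking'
```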
Advantages:
• Easy to implement.
• Uses much less memory than the table-driven agent.
• Useful when a quick automated response is needed (i.e., a reflex action).
Disadvantages:
• Simple reflex agents work only in fully observable environments.
• Partially observable environments get simple reflex agents into trouble.
• E.g., a vacuum-cleaner robot with a defective location sensor can end up in an
infinite loop.
Model Based Reflex Agent
• A model-based reflex agent is an artificial intelligence agent that incorporates
an internal model of the world to make decisions.
• This type of agent maintains an internal representation or model of the current
state of the world, and it uses this model to decide on actions based on
perceived inputs.
• The internal model helps the agent reason about the consequences of its
actions and plan its behaviour accordingly.
• The agent perceives the current state of the environment through sensors,
obtaining information about its surroundings.
• After taking an action, the agent updates its internal model to reflect the
changes caused by its actions. This update helps the agent refine its
understanding of the environment and improve decision-making in the future.
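A minimal Python sketch of this loop (the update_state model and the rules are
assumptions supplied by the designer):

```python
class ModelBasedReflexAgent:
    """Keeps an internal state, updated from the last action and the new
    percept via a model of how the world evolves."""
    def __init__(self, update_state, rules):
        self.state = None            # internal model of the world state
        self.action = None           # most recent action taken
        self.update_state = update_state
        self.rules = rules           # list of (condition, action) pairs

    def program(self, percept):
        # Fold the new percept and the last action into the internal model.
        self.state = self.update_state(self.state, self.action, percept)
        for condition, action in self.rules:   # then act by rule matching
            if condition(self.state):
                self.action = action
                return action
        return None
```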
Goal Based Agents
Is knowing the current state of the environment enough?
• The taxi can go left, right, or straight on;
• the correct decision depends on where the taxi is trying to get to.
Have a goal
• The agent needs some sort of goal information that describes desirable
situations, e.g., being at the passenger's destination.
Uses knowledge about a goal to guide its actions
• E.g., Search, planning
• A reflex agent brakes when it sees brake lights.
• Goal based agent reasons
• Brake light -> car in front is stopping -> I should stop -> I should use brake
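A toy Python sketch of one-step goal-based action selection (function names are
ours; a real goal-based agent searches or plans over whole action sequences, as
in Chapter 3):

```python
def goal_based_action(state, goal_test, actions, result):
    """Return an action whose predicted outcome satisfies the goal."""
    for action in actions(state):
        if goal_test(result(state, action)):   # look ahead using the model
            return action
    return None  # no one-step solution; a search algorithm would be needed
```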
Utility Based Agents
• A utility-based agent is an artificial intelligence agent designed to make
decisions by evaluating the utility or desirability of different outcomes.
• Unlike simple reflex agents or model-based reflex agents, which operate
based on predefined rules,
• utility-based agents consider the overall utility of possible actions and
choose the action that maximizes expected satisfaction or value.
• The agent selects the action that maximizes its expected utility. This
involves considering the potential outcomes of each action and choosing
the one that leads to the highest overall satisfaction.
• The agent may update its internal model, utility function, or decision-
making strategy based on the observed outcomes, aiming to improve its
performance over time.
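A Python sketch of expected-utility action selection (the outcomes and utility
functions are hypothetical inputs):

```python
def utility_based_action(state, actions, outcomes, utility):
    """outcomes(state, a) yields (probability, next_state) pairs; choose
    the action with the highest expected utility."""
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(state, action))
    return max(actions(state), key=expected_utility)
```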
Learning Agents
• All agents can improve their performance through learning.
A learning agent can be divided into four conceptual components
• Performance element
• what was previously the whole agent:
• input from sensors, output actions.
• Learning element
• modifies the performance element to do better.
• Critic
• tells how the agent is doing and how the performance element should be
modified to do better, relative to a fixed performance standard.
• Problem generator
• suggests exploratory actions rather than only optimizing current behaviour;
• exploring new actions may create new problems but also new, informative
experiences.
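A Python sketch wiring the four components together (the component signatures
are our assumptions, not from the text):

```python
class LearningAgent:
    def __init__(self, performance, critic, learning, problem_generator):
        self.performance = performance              # percept -> action
        self.critic = critic                        # percept -> feedback
        self.learning = learning                    # (element, feedback) -> element
        self.problem_generator = problem_generator  # () -> exploratory action or None

    def program(self, percept):
        feedback = self.critic(percept)             # judge recent behaviour
        self.performance = self.learning(self.performance, feedback)
        exploratory = self.problem_generator()      # occasionally explore
        return exploratory if exploratory is not None else self.performance(percept)
```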
END OF CHAPTER-2
CHAPTER-3
SOLVING PROBLEMS BY SEARCHING
• Problem Solving Agents
1. Well defined Problems and Solutions
2. Formulating Problems
• Example Problems
1. Toy problems
2. Real-world Problems
Solving Problems by Searching
• Goal Based Agents
• Atomic Representations
• GOAL TEST: The goal test determines whether a given state is a goal state. The
agent’s goal in Romania is the singleton set {In(Bucharest)}.
• PATH COST: A path cost function that assigns a numeric cost to each path. The problem-
solving agent chooses a cost function that reflects its own performance measure.
• The step cost of taking action a in state s to reach state s’ is denoted by c(s, a, s’).
• A solution to a problem is an action sequence that leads from the initial state to a goal state.
• Solution quality is measured by the path cost function, and an optimal solution has the
lowest path cost among all solutions.
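A Python sketch of these components as a problem class (a common formulation
following AIMA's conventions; the names are ours):

```python
class Problem:
    """Initial state, actions, transition model, goal test, and step cost."""
    def __init__(self, initial, goal=None):
        self.initial, self.goal = initial, goal

    def actions(self, state):
        raise NotImplementedError

    def result(self, state, action):                 # the transition model
        raise NotImplementedError

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, action, next_state):  # c(s, a, s')
        return 1                                     # default: unit cost

def path_cost(problem, states, actions):
    """Sum of step costs along a path; an optimal solution minimizes this."""
    return sum(problem.step_cost(s, a, s2)
               for s, a, s2 in zip(states, actions, states[1:]))
```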
Formulating Problems
ABSTRACT:
• Definition: The process of removing unnecessary details from a
representation.
• Example: In the context of a cross-country trip, various details like traveling
companions, current radio program, scenery, etc., are omitted from the state
description as they are irrelevant to finding a route to Bucharest.
Action Abstraction:
• Considerations: Actions in the real world have multiple effects beyond
changing location.
• Example: Driving actions considered in terms of changing location; other
actions like turning on the radio or looking out the window are omitted.
• Importance: Reduces complexity by considering only the most relevant
aspects of actions.
Significance of Abstraction:
• Overcoming Complexity: Abstraction is crucial for intelligent agents to
manage the overwhelming complexity of the real world.
• Criteria: The choice of a good abstraction involves removing unnecessary
details while maintaining validity and ensuring that abstract actions are easy
to execute.
Example problems
Toy Problems:
Vacuum World Problem:
• States: The state is determined by both the agent location and the dirt locations. The agent is in
one of two locations, each of which might or might not contain dirt. Thus, there are 2 × 2² = 8
possible world states. A larger environment with n locations has n · 2ⁿ states.
• Actions: In this simple environment, each state has just three actions: Left, Right, and Suck.
Larger environments might also include Up and Down.
• Transition model: The actions have their expected effects, except that moving Left in the
leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no
effect.
• Goal test: This checks whether all the squares are clean.
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
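A Python sketch of this formulation's transition model and goal test (the state
encoding as a (location, set-of-dirty-squares) pair is our choice):

```python
def result(state, action):
    loc, dirt = state                    # dirt: frozenset of dirty squares
    if action == 'Suck':
        return (loc, dirt - {loc})       # no effect if the square is clean
    if action == 'Left':
        return ('A', dirt)               # no effect in the leftmost square
    if action == 'Right':
        return ('B', dirt)               # no effect in the rightmost square
    return state

def goal_test(state):
    return not state[1]                  # goal: every square is clean

print(result(('A', frozenset({'A'})), 'Suck'))  # -> ('A', frozenset())
```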
A Typical Instance of the 8-Puzzle
• States: A state description specifies the location of each of the eight tiles and the blank in
one of the nine squares.
• Actions: The simplest formulation defines the actions as movements of the blank space
Left, Right, Up, or Down. Different subsets of these are possible depending on where the
blank is.
• Transition model: Given a state and action, this returns the resulting state; for example, if
we apply Left to the start state, the resulting state has the 5 and the blank switched.
• Goal test: This checks whether the state matches the goal configuration
• Path cost: Each step costs 1, so the path cost is the number of steps in the path.
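A Python sketch of the 8-puzzle actions and transition model (states encoded as
9-tuples read row by row, 0 marking the blank; the encoding is our choice):

```python
MOVES = {'Left': -1, 'Right': +1, 'Up': -3, 'Down': +3}

def actions(state):
    b = state.index(0)                   # position of the blank
    acts = []
    if b % 3 > 0:  acts.append('Left')   # blank not in the left column
    if b % 3 < 2:  acts.append('Right')
    if b // 3 > 0: acts.append('Up')
    if b // 3 < 2: acts.append('Down')
    return acts

def result(state, action):
    s = list(state)
    b = s.index(0)
    t = b + MOVES[action]                # the square the blank slides into
    s[b], s[t] = s[t], s[b]
    return tuple(s)
```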
What abstractions have we included here?
• The actions are abstracted to their beginning and final states, ignoring the
intermediate locations where the block is sliding.
• We have abstracted away actions such as shaking the board when pieces get stuck
and ruled out extracting the pieces with a knife and putting them back again.
• The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as
test problems for new search algorithms in AI.
• The 24-puzzle (on a 5 × 5 board) has around 10^25 states, and random instances
take several hours to solve optimally.
8-Queens Problem
• The goal of the 8-queens problem is to place eight queens on a chessboard such that no
queen attacks any other.
• An incremental formulation starts with an empty board and adds one queen at a
time; a complete-state formulation starts with all 8 queens on the board and moves
them around. In either case, the path cost is of no interest because only the final
state counts. The incremental formulation is:
• States: Any arrangement of 0 to 8 queens on the board is a state.
• Initial state: No queens on the board.
• Actions: Add a queen to any empty square.
• Transition model: Returns the board with a queen added to the specified
square.
• Goal test: 8 queens are on the board, none attacked.
• A better formulation restricts the actions: Add a queen to any square in the
leftmost empty column such that it is not attacked by any other queen.
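A Python sketch of this improved incremental formulation (encoding a state as
the tuple of rows of the queens placed so far, one per column from the left):

```python
def attacks(r1, c1, r2, c2):
    """Two queens attack each other on the same row or the same diagonal."""
    return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

def actions(state):
    """Safe rows in the leftmost empty column."""
    c = len(state)
    return [r for r in range(8)
            if not any(attacks(r, c, r2, c2) for c2, r2 in enumerate(state))]

def result(state, row):
    return state + (row,)

def goal_test(state):
    return len(state) == 8   # actions() already rules out attacking placements
```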
• Our final toy problem was devised by Donald Knuth (1964) and illustrates how infinite state
spaces can arise.
• Knuth conjectured that, starting with the number 4, a sequence of factorial,
square root, and floor operations can reach any desired positive integer. For
example, we can reach 5 from 4 as follows:
floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!)))))) = 5.
• Initial state: 4.
• Actions: Apply factorial, square root, or floor operation (factorial for integers only).
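A short Python check (ours) that the example above does reach 5 from 4:

```python
import math

x = math.factorial(math.factorial(4))    # (4!)! = 24!
for _ in range(5):                       # five square roots
    x = math.sqrt(x)
print(math.floor(x))                     # -> 5
```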
VLSI Layout:
• The layout problem comes after the logical design phase and is usually split into two parts: cell
layout and channel routing.
• In cell layout, the primitive components of the circuit are grouped into cells, each of which
performs some recognized function.
• The aim is to place the cells on the chip so that they do not overlap and so that there is room
for the connecting wires to be placed between the cells.
• Channel routing finds a specific route for each wire through the gaps between the cells.
• Robot Navigation