Department of Computer Science and Engineering
(R2017)
UNIT I INTRODUCTION
Introduction–Definition – Future of Artificial Intelligence – Characteristics of Intelligent
Agents–Typical Intelligent Agents – Problem Solving Approach to Typical AI problems.
UNIT V APPLICATIONS
AI applications – Language Models – Information Retrieval- Information Extraction – Natural
Language Processing – Machine Translation – Speech Recognition – Robot – Hardware –
Perception – Planning – Moving
OUTCOMES:
Upon completion of the course, the students will be able to:
• Use appropriate search algorithms for any AI problem
• Represent a problem using first order and predicate logic
• Provide the apt agent strategy to solve a given problem
• Design software agents to solve a problem
• Design applications for NLP that use Artificial Intelligence.
TEXT BOOKS:
1. S. Russell and P. Norvig, "Artificial Intelligence: A Modern Approach", Third Edition, Prentice Hall, 2009.
2. I. Bratko, "Prolog: Programming for Artificial Intelligence", Fourth Edition, Addison-Wesley Educational Publishers Inc., 2011.
REFERENCES:
1. M. Tim Jones, "Artificial Intelligence: A Systems Approach (Computer Science)", First Edition, Jones and Bartlett Publishers, Inc., 2008.
2. Nils J. Nilsson, "The Quest for Artificial Intelligence", Cambridge University Press, 2009.
3. William F. Clocksin and Christopher S. Mellish, "Programming in Prolog: Using the ISO Standard", Fifth Edition, Springer, 2003.
4. Gerhard Weiss, "Multi Agent Systems", Second Edition, MIT Press, 2013.
5. David L. Poole and Alan K. Mackworth, "Artificial Intelligence: Foundations of Computational Agents", Cambridge University Press, 2010.
UNIT I INTRODUCTION
Introduction–Definition - Future of Artificial Intelligence – Characteristics of Intelligent Agents–
Typical Intelligent Agents – Problem Solving Approach to Typical AI problems.
Introduction
INTELLIGENCE: No human is an expert; we may get better solutions from other humans.
ARTIFICIAL INTELLIGENCE: Expert systems are built that aggregate many people's experience and ideas.
Definition
WHAT IS ARTIFICIAL INTELLIGENCE?
The study of how to make computers do things at which, at the moment, people are better.
“Artificial Intelligence is the ability of a computer to act like a human
being”.
•Systems that think like humans
• Systems that act like humans
• Systems that think rationally
• Systems that act rationally.
(a)Intelligence - Ability to apply knowledge in order to perform better in an environment.
(b)Artificial Intelligence - Study and construction of agent programs that perform well in a
given environment, for a given agent architecture.
(c) Agent - An entity that takes action in response to percepts from an environment.
(d) Rationality - property of a system which does the “right thing” given what it knows.
(e) Logical Reasoning - A process of deriving new sentences from old, such that the new sentences are necessarily true if the old ones are true.
Total Turing Test includes a video signal so that the interrogator can test the
subject’s perceptual abilities, as well as the opportunity for the interrogator to pass
physical objects “through the hatch.” To pass the total Turing Test, the computer will
need:
• computer vision to perceive objects, and
• robotics to manipulate objects and move about.
Acting rationally means acting so as to achieve one's goals, given one's beliefs. An agent
is just something that perceives and acts.
The right thing: that which is expected to maximize goal achievement, given the available information.
Acting rationally does not necessarily involve thinking.
For example, the blinking reflex does not involve thinking, but it should be in the service of rational action.
Future of Artificial Intelligence
• Transportation: Although it could take a decade or more to perfect them, autonomous cars will one day ferry us from place to place.
• Manufacturing: AI powered robots work alongside humans to perform a limited range of
tasks like assembly and stacking, and predictive analysis sensors keep equipment running
smoothly.
• Healthcare: In the comparatively AI-nascent field of healthcare, diseases are more quickly
and accurately diagnosed, drug discovery is sped up and streamlined, virtual nursing
assistants monitor patients and big data analysis helps to create a more personalized patient
experience.
• Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist
human instructors and facial analysis gauges the emotions of students to help determine
who’s struggling or bored and better tailor the experience to their individual needs.
• Media: Journalism is harnessing AI, too, and will continue to benefit from it. Bloomberg uses
Cyborg technology to help make quick sense of complex financial reports. The Associated
Press employs the natural language abilities of Automated Insights to produce 3,700 earnings report stories per year — nearly four times more than in the recent past.
• Customer Service: Last but hardly least, Google is working on an AI assistant that can
place human-like calls to make appointments at, say, your neighborhood hair salon. In
addition to words, the system understands context and nuance.
Characteristics of Intelligent Agents
• Situatedness
The agent receives some form of sensory input from its environment, and it performs
some action that changes its environment in some way. Examples of environments:
the physical world and the Internet.
• Autonomy
The agent can act without direct intervention by humans or other agents and that it
has control over its own actions and internal state.
• Adaptivity
The agent is capable of
(1) reacting flexibly to changes in its environment;
(2) taking goal-directed initiative (i.e., is pro-active), when appropriate; and
(3) Learning from its own experience, its environment, and interactions with others.
• Sociability
The agent is capable of interacting in a peer-to-peer manner with other agents or humans.
An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.
➢ Human Sensors:
Eyes, ears, and other organs for sensors.
➢ Human Actuators:
Hands, legs, mouth, and other body parts.
➢ Robotic Sensors:
Mic, cameras and infrared range finders for sensors
➢ Robotic Actuators:
Motors, display, speakers, etc.
An agent can be:
o Human Agent: A human agent has eyes, ears, and other organs which work as sensors, and hands, legs, and the vocal tract which work as actuators.
o Robotic Agent: A robotic agent can have cameras, infrared range finder, NLP for
sensors and various motors for actuators.
o Software Agent: Software agent can have keystrokes, file contents as sensory input
and act on those inputs and display output on the screen.
Hence the world around us is full of agents such as thermostats, cellphones, and cameras; even we ourselves are agents.
Before moving forward, we should first know about sensors, effectors, and actuators.
Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the component of machines that converts energy into motion. The
actuators are only responsible for moving and controlling a system. An actuator can be an
electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be legs,
wheels, arms, fingers, wings, fins, and display screen.
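The perceive-act cycle described above can be written down as a small program skeleton. The following is only an illustrative sketch: the environment interface (percept() and execute()) and the agent_program function are assumed names, not a prescribed API.

class Agent:
    """Wraps an agent program: a function mapping a percept to an action."""
    def __init__(self, agent_program):
        self.agent_program = agent_program

    def step(self, percept):
        # Decide on an action from the current percept (sensors -> decision)
        return self.agent_program(percept)

def run(agent, environment, steps=10):
    # Sense -> act loop: sensors read the environment, actuators change it.
    # environment.percept() and environment.execute() are hypothetical methods.
    for _ in range(steps):
        percept = environment.percept()
        action = agent.step(percept)
        environment.execute(action)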
Properties of Environment
What is an environment? Give its properties.
An environment is everything in the world which surrounds the agent, but it is
not a part of an agent itself. An environment can be described as a situation in which
an agent is present.
The environment is where the agent lives and operates; it provides the agent with something to sense and act upon.
1. Fully observable vs Partially Observable:
• If an agent sensor can sense or access the complete state of an environment at each
point of time then it is a fully observable environment, else it is partially
observable.
• A fully observable environment is easy, as there is no need to maintain an internal state to keep track of the history of the world.
• If an agent has no sensors at all, then the environment is called unobservable.
• Example: chess – the board is fully observable, as are the opponent's moves. Driving – what is around the next bend is not observable, and hence the environment is only partially observable.
2. Deterministic vs Stochastic:
• If an agent's current state and selected action can completely determine the next state
of the environment, then such environment is called a deterministic environment.
• A stochastic environment is random in nature and cannot be determined completely
by an agent.
• In a deterministic, fully observable environment, agent does not need to worry about
uncertainty.
3. Episodic vs Sequential:
• In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action.
• However, in Sequential environment, an agent requires memory of past actions to
determine the next best actions.
4. Single-agent vs Multi-agent
• If only one agent is involved in an environment, and operating by itself then such an
environment is called single agent environment.
• However, if multiple agents are operating in an environment, then such an
environment is called a multi-agent environment.
• The agent design problems in the multi-agent environment are different from single
agent environment.
5. Static vs Dynamic:
• If the environment can change itself while an agent is deliberating then such
environment is called a dynamic environment else it is called a static environment.
• Static environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.
• However, in a dynamic environment, agents need to keep looking at the world before each action.
• Taxi driving is an example of a dynamic environment whereas Crossword puzzles are
an example of a static environment.
6. Discrete vs Continuous:
• If in an environment there are a finite number of percepts and actions that can be
performed within it, then such an environment is called a discrete environment else it
is called continuous environment.
• A chess game comes under discrete environment as there is a finite number of moves
that can be performed.
• A self-driving car is an example of a continuous environment.
7. Known vs Unknown
• Known and unknown are not actually features of the environment itself; they describe the agent's state of knowledge about the results of its actions.
• In a known environment, the results of all actions are known to the agent, while in an unknown environment the agent needs to learn how it works in order to perform an action.
• It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.
8. Accessible vs. Inaccessible
• If an agent can obtain complete and accurate information about the environment's state, then such an environment is called an accessible environment; otherwise it is called inaccessible.
• An empty room whose state can be defined by its temperature is an example of an accessible environment.
• Information about an event on Earth is an example of an inaccessible environment.
Task environments are essentially the "problems" to which rational agents are the "solutions."
PEAS: Performance Measure, Environment, Actuators, Sensors
Performance:
The output which we get from the agent. All the necessary results that an agent gives after processing come under its performance.
Environment:
All the surrounding things and conditions of an agent fall in this section. It basically consists of all the things under which the agent works.
Actuators:
The devices, hardware or software through which the agent performs any action or processes any information to produce a result are the actuators of the agent.
Sensors:
The devices through which the agent observes and perceives its environment are the sensors of the agent.
Consider, e.g., the task of designing an automated taxi driver:
Agent Type: Taxi driver
Performance measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, speedometer, GPS, engine sensors, keyboard
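The PEAS description of the taxi driver can also be recorded as a simple data structure; the dictionary below merely restates the entries of the table above (the field names are illustrative, not a standard format).

taxi_driver_peas = {
    "agent_type": "Taxi driver",
    "performance_measure": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "environment": ["roads", "other traffic", "pedestrians", "customers"],
    "actuators": ["steering wheel", "accelerator", "brake", "signal", "horn"],
    "sensors": ["cameras", "speedometer", "GPS", "engine sensors", "keyboard"],
}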
Rational Agent - A system is rational if it does the “right thing”, given what it knows.
Characteristics of a Rational Agent:
▪ The agent's prior knowledge of the environment.
▪ The performance measure that defines the criterion of success.
▪ The actions that the agent can perform.
▪ The agent's percept sequence to date.
For every possible percept sequence, a rational agent should select an action that
is expected to maximize its performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent has.
An omniscient agent knows the actual outcome of its actions and can act
accordingly; but omniscience is impossible in reality.
An Ideal Rational Agent perceives and acts, and it has a greater performance measure.
E.g., crossing a road: perception of both sides occurs first, and only then the action.
No perception occurs in a Degenerate Agent.
E.g., a clock. It does not view its surroundings; no matter what happens outside, the clock works based on its inbuilt program.
An Ideal Agent is described by ideal mappings: “Specifying which action an agent ought to take in response to any given percept sequence provides a design for an ideal agent.”
E.g., the SQRT function calculation in a calculator.
Doing actions in order to modify future percepts - sometimes called information gathering - is an important part of rationality.
A rational agent should be autonomous - it should learn from its own experience to supplement its prior knowledge.
Agents can be grouped into five classes based on their degree of perceived intelligence and capability:
• Simple Reflex Agents
• Model-Based Reflex Agents
• Goal-Based Agents
• Utility-Based Agents
• Learning Agents
1) The Simple reflex agents
• The Simple reflex agents are the simplest agents. These agents take decisions on the
basis of the current percepts and ignore the rest of the percept history(past State).
• These agents only succeed in the fully observable environment.
• The Simple reflex agent does not consider any part of percepts history during their
decision and action process.
• The Simple reflex agent works on the condition-action rule, which means it maps the current state to an action, such as a Room Cleaner agent that works only if there is dirt in the room (a condition-action rule sketch for such an agent appears after the list below).
• Problems for the simple reflex agent design approach:
o They have very limited intelligence.
o They do not have knowledge of non-perceptual parts of the current state.
o The condition-action rules are mostly too big to generate and to store.
o They are not adaptive to changes in the environment.
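A minimal sketch of such a condition-action agent for the two-square vacuum (room-cleaner) world, assuming a percept of the form (location, status); the function name and percept encoding are illustrative.

def simple_reflex_vacuum_agent(percept):
    # Condition-action rules: only the current percept is used, no history.
    location, status = percept
    if status == "Dirty":
        return "Suck"        # rule: dirt present -> clean it
    elif location == "A":
        return "Right"       # rule: square A is clean -> move to B
    else:
        return "Left"        # rule: square B is clean -> move to A

# Example: simple_reflex_vacuum_agent(("A", "Dirty")) returns "Suck"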
4. Utility-Based Agents
o The Utility-based agent is useful when there are multiple possible alternatives, and the agent has to choose in order to perform the best action.
o The utility function maps each state to a real number to check how efficiently each action achieves the goals.
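As an illustration of a utility function guiding the choice of action, the sketch below scores the state that each action leads to and picks the highest-scoring one; actions, result and utility are hypothetical, problem-specific functions.

def choose_action(state, actions, result, utility):
    # utility(result(state, a)) maps each candidate successor state to a real number;
    # the agent performs the action whose successor state has the highest utility.
    return max(actions(state), key=lambda a: utility(result(state, a)))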
5. Learning Agents
o A learning agent in AI is the type of agent which can learn from its past experiences, i.e., it has learning capabilities.
o It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
o A learning agent has mainly four conceptual components, which are:
a. Learning element: It is responsible for making improvements by learning from the environment.
b. Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
c. Performance element: It is responsible for selecting external actions.
d. Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
o Hence, learning agents are able to learn, analyze performance, and look for new
ways to improve the performance.
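A structural sketch of the four components listed above; every component is a placeholder function, not a particular learning algorithm.

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element   # selects external actions
        self.learning_element = learning_element         # improves the performance element
        self.critic = critic                             # feedback vs. a performance standard
        self.problem_generator = problem_generator       # suggests informative new actions

    def step(self, percept):
        feedback = self.critic(percept)                             # how well are we doing?
        self.learning_element(feedback, self.performance_element)   # learn from the feedback
        exploratory = self.problem_generator(percept)               # optionally explore
        return exploratory or self.performance_element(percept)     # otherwise act as usual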
Problem-solving agents:
In Artificial Intelligence, search techniques are universal problem-solving methods. Rational agents, or problem-solving agents, in AI mostly use these search strategies or algorithms to solve a specific problem and provide the best result. Problem-solving agents are goal-based agents and use an atomic representation. In this topic, we will learn various problem-solving search algorithms.
Some of the problems most popularly solved with the help of artificial intelligence are:
1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem.
Problem Searching
In general, searching refers to finding the information one needs.
Searching is the most commonly used technique of problem solving in artificial intelligence.
The searching algorithm helps us to search for a solution to a particular problem.
Problem: Problems are the issues which come across any system. A solution is needed to solve that particular problem.
Example Problems
A Toy Problem is intended to illustrate or exercise various problem-solving methods. A real-
world problem is one whose solutions people actually care about.
Toy Problems:
1) Vacuum World
States : The state is determined by both the agent location and the dirt locations. The
agent is in one of the 2 locations, each of which might or might not contain dirt. Thus
there are 2*2^2=8 possible world states.
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
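A small enumeration sketch of this formulation, assuming a state is represented as (agent location, dirt in A, dirt in B); the names are illustrative.

from itertools import product

# All vacuum-world states: agent location x dirt status of square A x square B.
states = list(product(["A", "B"], [True, False], [True, False]))
assert len(states) == 8          # 2 * 2^2 = 8 possible world states

def goal_test(state):
    # Goal: all squares are clean (no dirt in A and no dirt in B).
    _, dirt_a, dirt_b = state
    return not dirt_a and not dirt_b

def step_cost(state, action):
    # Each step costs 1, so the path cost is the number of steps in the path.
    return 1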
2) 8- Puzzle Problem
States: A state description specifies the location of each of the eight tiles and the blank in
one of the nine squares.
Actions: The simplest formulation defines the actions as movements of the blank space Left, Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
Transition model: Given a state and action, this returns the resulting state; for example, if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank switched.
Goal test: This checks whether the state matches the goal configuration shown in the figure.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
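A sketch of this formulation, assuming a state is a tuple of nine entries read row by row with 0 standing for the blank; the move table and goal configuration below are illustrative.

# The four actions move the blank by an offset within the 3x3 board.
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}

def result(state, action):
    # Transition model: return the state obtained by applying `action` to `state`.
    blank = state.index(0)
    if action == "Left" and blank % 3 == 0:
        return state                  # blank already in the leftmost column
    if action == "Right" and blank % 3 == 2:
        return state                  # blank already in the rightmost column
    target = blank + MOVES[action]
    if not 0 <= target < 9:
        return state                  # blank already in the top or bottom row
    board = list(state)
    board[blank], board[target] = board[target], board[blank]
    return tuple(board)

def goal_test(state, goal=(0, 1, 2, 3, 4, 5, 6, 7, 8)):
    # Goal test: does the state match the goal configuration?
    return state == goal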
3) 8 – Queens Problem: