UNIT-1 Notes (AI)
UNIT 1
INTRODUCTION
DEFINITION
The study of how to make computers do things at which, at the moment, people are better.
“Artificial Intelligence is the ability of a computer to act like a human being”.
Figure 1.1 Some definitions of artificial intelligence, organized into four categories
The Turing Test, proposed by Alan Turing (1950), was designed to provide a
satisfactory operational definition of intelligence. A computer passes the test if a human
interrogator, after posing some written questions, cannot tell whether the written responses
come from a person or from a computer.
The Total Turing Test includes a video signal so that the interrogator can test the subject's
perceptual abilities, as well as the opportunity for the interrogator to pass physical objects
"through the hatch." To pass the Total Turing Test, the computer will need
• computer vision to perceive objects, and robotics to manipulate objects and move
about.
To analyse whether a given program thinks like a human, we must have some way of
determining how humans think. The interdisciplinary field of cognitive science brings together
computer models from AI and experimental techniques from psychology to try to construct
precise and testable theories of the workings of the human mind.
The Greek philosopher Aristotle was one of the first to attempt to codify "right
thinking," that is, irrefutable reasoning processes. His famous syllogisms provided patterns for
argument structures that always gave correct conclusions given correct premises.
For example: "Socrates is a man; all men are mortal; therefore Socrates is mortal."
These laws of thought were supposed to govern the operation of the mind, and initiated
the field of logic.
Acting rationally means acting so as to achieve one's goals, given one's beliefs. An
agent is just something that perceives and acts.
The right thing: that which is expected to maximize goal achievement, given the
available information
For example, the blinking reflex is not deliberate thought, but it should be in the service of rational action.
Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist
human instructors and facial analysis gauges the emotions of students to help determine
who’s struggling or bored and better tailor the experience to their individual needs.
Media: Journalism is harnessing AI, too, and will continue to benefit from it.
Bloomberg uses Cyborg technology to help make quick sense of complex financial
reports. The Associated Press employs the natural language abilities of Automated
Insights to produce 3,700 earnings report stories per year, nearly four times more
than in the recent past.
Customer Service: Last but hardly least, Google is working on an AI assistant that can
place human-like calls to make appointments at, say, your neighborhood hair salon. In
addition to words, the system understands context and nuance.
Situatedness
The agent receives some form of sensory input from its environment, and it performs
some action that changes its environment in some way.
Autonomy
The agent can act without direct intervention by humans or other agents and that it has
control over its own actions and internal state.
Adaptivity
Sociability
The agent is capable of interacting in a peer-to-peer manner with other agents or humans
An agent is anything that can be viewed as perceiving its environment through sensors
and acting upon that environment through actuators.
Human Sensors:
Eyes, ears, and other organs for sensors.
Human Actuators:
Hands, legs, mouth, and other body parts.
Robotic Sensors:
Mic, cameras and infrared range finders for sensors
Robotic Actuators:
Motors, display, speakers, etc.
An agent can be:
Human-Agent: A human agent has eyes, ears, and other organs which work for sensors
and hand, legs, vocal tract work for actuators.
Robotic Agent: A robotic agent can have cameras and infrared range finders for
sensors and various motors for actuators.
Software Agent: Software agent can have keystrokes, file contents as sensory input
and act on those inputs and display output on the screen.
Hence the world around us is full of agents such as thermostat, cell phone, camera, and
even we are also agents. Before moving forward, we should first know about sensors, effectors,
and actuators.
Sensor: Sensor is a device which detects the change in the environment and sends the
information to other electronic devices. An agent observes its environment through sensors.
Actuators: Actuators are the component of machines that converts energy into motion.
The actuators are only responsible for moving and controlling a system. An actuator can be an
electric motor, gears, rails, etc.
Effectors: Effectors are the devices which affect the environment. Effectors can be
legs, wheels, arms, fingers, wings, fins, and display screen.
PROPERTIES OF ENVIRONMENT
An environment is everything in the world which surrounds the agent, but it is not a
part of an agent itself. An environment can be described as a situation in which an agent is
present.
The environment is where the agent lives and operates; it provides the agent with something to
sense and act upon.
Fully observable vs Partially observable
If an agent's sensors can sense or access the complete state of the environment at each
point in time, then it is a fully observable environment; otherwise it is partially observable.
A fully observable environment is easy, as there is no need to maintain an internal state
to keep track of the history of the world.
Example: chess – the board is fully observable, as are opponent’s moves. Driving
– what is around the next bend is not observable and hence partially observable.
1. Deterministic vs Stochastic
If an agent's current state and selected action can completely determine the next state
of the environment, then such an environment is called a deterministic environment; otherwise
it is stochastic.
In a deterministic, fully observable environment, agent does not need to worry about
uncertainty.
2. Episodic vs Sequential
In an episodic environment, there is a series of one-shot actions, and only the current
percept is required for the action. In a sequential environment, the current decision can
affect all future decisions.
3. Single-agent vs Multi-agent
If only one agent is involved in an environment and is operating by itself, then such an
environment is called a single-agent environment.
The agent design problems in the multi-agent environment are different from single
agent environment.
4. Static vs Dynamic
If the environment can change itself while an agent is deliberating, then such an
environment is called a dynamic environment; otherwise it is called a static environment.
Static environments are easy to deal with because an agent does not need to keep looking
at the world while deciding on an action.
However, in a dynamic environment, agents need to keep looking at the world before each
action.
5. Discrete vs Continuous
If in an environment there are a finite number of percepts and actions that can be
performed within it, then such an environment is called a discrete environment; otherwise it
is called a continuous environment.
A chess game comes under discrete environment as there is a finite number of moves
that can be performed.
6. Known vs Unknown
Known and unknown are not actually features of an environment but of the agent's
state of knowledge. In a known environment, the results of all actions are known to the agent,
while in an unknown environment, the agent needs to learn how it works in order to perform
an action.
If an agent can obtain complete and accurate information about the environment's state,
then such an environment is called an accessible environment; otherwise it is called
inaccessible.
Task environments are essentially the "problems" to which rational agents are
the "solutions." A task environment is specified by PEAS: Performance, Environment,
Actuators, Sensors.
Performance
The output which we get from the agent. All the necessary results that an agent gives
after processing come under its performance.
Environment
All the surrounding things and conditions of an agent fall in this section. It basically
consists of all the things under which the agents work.
Actuators
The devices, hardware or software through which the agent performs any actions or
processes any information to produce a result are the actuators of the agent.
Sensors
The devices through which the agent observes and perceives its environment are the
sensors of the agent.
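The four PEAS components above can be written down concretely. The following is a minimal sketch that records a PEAS description as a plain dictionary; the vacuum-cleaner entries are illustrative assumptions, not a definitive specification.

```python
# A hypothetical PEAS description for a robotic vacuum cleaner.
# The specific entries are assumptions chosen for illustration.
vacuum_peas = {
    "performance": ["cleanliness", "efficiency", "battery life"],
    "environment": ["room", "carpet", "furniture"],
    "actuators": ["wheels", "brushes", "suction pump"],
    "sensors": ["camera", "dirt sensor", "bump sensor"],
}

def describe(peas):
    """Return a one-line summary of a PEAS description."""
    return ", ".join(f"{k}: {len(v)} item(s)" for k, v in sorted(peas.items()))
```

Writing the task environment down this way forces each of the four components to be stated explicitly before the agent is designed.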
Rational Agent - A system is rational if it does the "right thing," given what it knows.
For every possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.
An omniscient agent knows the actual outcome of its actions and can act accordingly;
but omniscience is impossible in reality.
An ideal rational agent perceives and then acts, and it has a greater performance measure.
E.g. crossing a road: perception of both sides occurs first, and only then the action.
No perception occurs in a degenerate agent. E.g. a clock: it does not view its
surroundings; no matter what happens outside, the clock works based on its inbuilt program.
An ideal agent is described by ideal mappings: "Specifying which action an agent ought to
take in response to any given percept sequence provides a design for an ideal agent."
A rational agent should be autonomous: it should learn what it can from its own
experience to compensate for incomplete or incorrect prior knowledge.
TYPES OF AGENTS
Agents can be grouped into four classes based on their degree of perceived intelligence
and capability:
The Simple reflex agents are the simplest agents. These agents take decisions on the
basis of the current percepts and ignore the rest of the percept history (past State).
The Simple reflex agent does not consider any part of the percept history during its
decision and action process.
The Simple reflex agent works on the condition-action rule, which means it maps the
current state to an action. An example is a room-cleaner agent, which works only if there
is dirt in the room.
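The condition-action mapping can be sketched directly. The following is a minimal illustration of a simple reflex agent for the room-cleaner example; the percept format (location, status) and the two-square layout are assumptions for illustration.

```python
def simple_reflex_agent(percept):
    """Map the current percept directly to an action via condition-action
    rules; no percept history is consulted."""
    location, status = percept
    if status == "Dirty":
        return "Suck"       # rule: dirty square -> clean it
    elif location == "A":
        return "Right"      # rule: square A clean -> move to B
    else:
        return "Left"       # rule: square B clean -> move to A
```

Note that the function's only input is the current percept, which is exactly what makes this agent "simple reflex": it cannot remember anything about past states.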
The Model-based agent can work in a partially observable environment, and track the
situation.
A model-based agent has two important factors:
o Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
o Internal State: It is a representation of the current state based on percept history.
These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.
Updating the agent state requires information about:
o How the world evolves
o How the agent's action affects the world.
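The two kinds of knowledge above (how the world evolves and how actions affect it) can be sketched as a state-update loop. The `transition` function and rule table below are illustrative assumptions, not a fixed interface.

```python
def update_state(state, last_action, percept, transition):
    """Predict the new state from the model of how the world evolves,
    then override stale predictions with the fresh percept."""
    predicted = transition(state, last_action)
    predicted.update(percept)
    return predicted

def model_based_agent(state, last_action, percept, transition, rules):
    """One step of a model-based agent: update internal state, pick action."""
    state = update_state(state, last_action, percept, transition)
    return state, rules(state)
```

The key difference from the simple reflex agent is the internal `state` argument threaded through each call, which lets the agent act sensibly even when the environment is only partially observable.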
o The knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
o The agent needs to know its goal which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
o These agents may have to consider a long sequence of possible actions before deciding
whether the goal is achieved or not. Such consideration of different scenarios is called
searching and planning, which makes an agent proactive.
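The searching mentioned above can be sketched as breadth-first search over action sequences until a goal state is found. The state space passed to `successors` is hypothetical; the point is only that the goal drives the search.

```python
from collections import deque

def plan(start, goal, successors):
    """Breadth-first search: return a list of states from start to goal,
    or None if the goal is unreachable."""
    frontier = deque([[start]])   # each frontier entry is a partial path
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:      # the goal test
            return path
        for nxt in successors(path[-1]):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Unlike a reflex agent, this agent deliberates: it expands many hypothetical futures before committing to the first action of the returned path.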
o These agents are similar to the goal-based agent but provide an extra component of
utility measurement (“Level of Happiness”) which makes them different by providing
a measure of success at a given state.
o Utility-based agents act based not only on goals but also on the best way to achieve the goal.
o The Utility-based agent is useful when there are multiple possible alternatives, and an
agent has to choose in order to perform the best action.
o The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
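The utility-maximizing choice described above reduces to one line: evaluate the state each action leads to and take the argmax. The `result` and `utility` functions below are illustrative assumptions.

```python
def choose_action(state, actions, result, utility):
    """Pick the action whose resulting state has the highest utility.
    result(state, action) models the outcome; utility maps states to reals."""
    return max(actions, key=lambda a: utility(result(state, a)))
```

For example, with `result = lambda s, a: s + a` and `utility = lambda s: -abs(s - 5)` (a hypothetical "happiness" peaked at 5), the agent picks whichever action moves it closest to 5.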
o A learning agent in AI is the type of agent which can learn from its past experiences, or
it has learning capabilities.
o It starts to act with basic knowledge and is then able to act and adapt automatically
through learning. A learning agent has mainly four conceptual components:
a. Learning element: It is responsible for making improvements by learning from
the environment.
b. Critic: The learning element takes feedback from the critic, which describes how well
the agent is doing with respect to a fixed performance standard.
c. Performance element: It is responsible for selecting external actions.
d. Problem generator: This component is responsible for suggesting actions that will
lead to new and informative experiences.
o Hence, learning agents are able to learn, analyze performance, and look for new ways
to improve the performance.
Problem-solving agents
Some of the most popularly used problem solving with the help of artificial intelligence
are:
1. Chess.
2. Travelling Salesman Problem.
3. Tower of Hanoi Problem.
4. Water-Jug Problem.
5. N-Queen Problem.
Problem Searching
Problem: A problem is an issue that comes across any system. A solution is needed to
solve that particular problem.
Defining the Problem: The definition of the problem must be stated precisely. It
should contain the possible initial as well as final situations, which should result in an
acceptable solution.
1. Analyzing the Problem: Analyzing the problem and its requirements must be done, as
a few features can have an immense impact on the resulting solution.
2. Identification of Solutions: In this phase, the different possible solutions to the
problem are identified.
3. Choosing a Solution: From all the identified solutions, the best solution is chosen based
on the results produced by the respective solutions.
1. Search Space: Search space represents the set of possible solutions which a system
may have.
2. Start State: It is the state from which the agent begins the search.
3. Goal test: It is a function which observes the current state and returns whether the
goal state is achieved or not.
Search tree: A tree representation of a search problem is called a search tree. The root of
the search tree is the root node, which corresponds to the initial state.
Actions: It gives the description of all the available actions to the agent.
Solution: It is an action sequence which leads from the start node to the goal node.
Optimal Solution: A solution that has the lowest cost among all solutions.
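The bookkeeping behind a search tree can be sketched as a node structure that records, for each state, its parent, the action that produced it, and the accumulated path cost. The field names here are a common convention, not a mandated interface.

```python
class Node:
    """One node of a search tree: a state plus how we got there."""
    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        # Path cost accumulates along the branch from the root.
        self.path_cost = (parent.path_cost if parent else 0) + step_cost

    def solution(self):
        """Walk back to the root and return the action sequence."""
        node, actions = self, []
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```

Once the goal test succeeds at some node, `solution()` recovers the action sequence from the start node to the goal node, and `path_cost` tells whether it is cheap.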
Example Problems
Toy Problems
Vacuum World
States: The state is determined by both the agent location and the dirt locations. The
agent is in one of the 2 locations, each of which might or might not contain dirt. Thus there are
2 x 2^2 = 8 possible world states.
Actions: In this simple environment, each state has just three actions: Left, Right, and
Suck. Larger environments might also include Up and Down.
Transition model: The actions have their expected effects, except that moving Left in
the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have
no effect. The complete state space is shown in Figure.
Goal test: This checks whether all the squares are clean.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
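The vacuum-world formulation above is small enough to write out in full. The sketch below enumerates all 8 states as (agent location, dirt at A, dirt at B) and implements the transition model and goal test as described; the tuple encoding itself is an assumption for illustration.

```python
from itertools import product

# All 2 * 2^2 = 8 world states: (agent_location, dirt_at_A, dirt_at_B).
STATES = list(product(["A", "B"], [True, False], [True, False]))

def result(state, action):
    """Transition model: Left/Right move the agent (no effect at the edge),
    Suck cleans the current square (no effect if already clean)."""
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    return state

def goal_test(state):
    """Goal: both squares are clean."""
    return not state[1] and not state[2]
```

Because moving Left in the leftmost square simply returns state A again, the "no effect" cases in the text fall out of the code without special handling.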
8-Puzzle Problem
States: A state description specifies the location of each of the eight tiles and the blank
in one of the nine squares.
Initial state: Any state can be designated as the initial state. Note that any given goal
can be reached from exactly half of the possible initial states.
Actions: The simplest formulation defines the actions as movements of the blank space Left,
Right, Up, or Down. Different subsets of these are possible depending on where the blank is.
Transition model: Given a state and action, this returns the resulting state; for example,
if we apply Left to the start state in Figure 3.4, the resulting state has the 5 and the blank
switched.
Goal test: This checks whether the state matches the goal configuration shown in Figure.
Path cost: Each step costs 1, so the path cost is the number of steps in the path.
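The 8-puzzle formulation above can be sketched directly: a state is a 9-tuple read row by row with 0 standing for the blank, and the available actions depend on where the blank is. The tuple encoding is an assumption for illustration.

```python
# Index offsets for moving the blank in a 3x3 board stored row by row.
MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def actions(state):
    """Return the subset of moves legal in this state (depends on where
    the blank is, as described in the formulation)."""
    i = state.index(0)
    acts = []
    if i % 3 > 0: acts.append("Left")    # blank not in leftmost column
    if i % 3 < 2: acts.append("Right")   # blank not in rightmost column
    if i >= 3:    acts.append("Up")      # blank not in top row
    if i <= 5:    acts.append("Down")    # blank not in bottom row
    return acts

def result(state, action):
    """Transition model: swap the blank with the adjacent tile."""
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)
```

For example, applying Left when the blank is in the centre swaps the blank with the tile to its left, matching the Figure 3.4 example in the text.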
Water-Jug Problem
Consider the given problem and describe the operators involved in it. Consider the water
jug problem: you are given two jugs, a 4-gallon one and a 3-gallon one. Neither has any
measuring marker on it. There is a pump that can be used to fill the jugs with water. How can
you get exactly 2 gallons of water into the 4-gallon jug?
Explicit Assumptions: A jug can be filled from the pump, water can be poured out of a
jug on to the ground, water can be poured from one jug to another and that there are no other
measuring devices available.
Here the initial state is (0, 0). The goal state is (2, n) for any value of n.
State Space Representation: We will represent a state of the problem as a tuple (x, y),
where x represents the amount of water in the 4-gallon jug and y represents the amount of water
in the 3-gallon jug. Note that 0 ≤ x ≤ 4 and 0 ≤ y ≤ 3.
Operators: We must define a set of operators that will take us from one state to another;
these rely on the explicit assumptions stated above.
Table 1.1
Table 1.2
Solution
How can you get exactly 2 gallons of water into the 4-gallon jug?