OCS351 AIML Unit 1
1.1. INTRODUCTION TO AI
Artificial intelligence, or AI, is the subfield of computer science that studies how to give machines human-like intelligence. Artificial intelligence is defined as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. Coined by John McCarthy in 1956, the term refers to "the science and engineering of making intelligent machines, especially intelligent computer programs." Some textbooks' definitions of AI can be broken down into four distinct camps, which are briefly described in the table below.

Thinking Humanly: systems that think like humans (the cognitive modelling approach)
Thinking Rationally: systems that think rationally (the "laws of thought" approach)
Acting Humanly: systems that act like humans (the Turing Test approach)
Acting Rationally: systems that act rationally (the rational-agent approach)
High Accuracy:
AI machines or systems are prone to fewer errors and offer high accuracy, as they take decisions based on prior experience or information.
High Speed:
Since AI systems are capable of rapid decision-making, they are capable of defeating a human chess champion.
High Reliability:
AI machines are highly reliable and can perform the same action multiple times with high accuracy.
Digital Assistant:
Artificial intelligence has the potential to be a useful digital assistant, helping sites like Amazon and eBay better cater their offerings to individual customers.
Useful as a Public Utility:
Public utilities can benefit greatly from AI applications like self-driving cars that make travel safer and more convenient, facial recognition systems for added safety, natural language processing that allows machines to communicate with humans in their own language, etc.
1.4. FOUNDATIONS OF AI
Philosophy
Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing the rational part of the mind. He developed an informal system of syllogisms for proper reasoning, which in principle allowed one to generate conclusions mechanically, given initial premises.
Much later, Ramon Lull (d. 1315) had the idea that useful reasoning could actually be carried out by a mechanical artifact: "concept wheels."
Thomas Hobbes (1588-1679) proposed that reasoning was like numerical computation, that "we add and subtract in our silent thoughts." The automation of computation itself was already well under way.
Around 1500, Leonardo da Vinci (1452-1519) designed but did not build a mechanical calculator; recent reconstructions have shown the design to be functional.
The first known calculating machine was constructed around 1623 by the German scientist Wilhelm Schickard (1592-1635).
Mathematics
Philosophers staked out most of the important ideas of AI, but the leap to a formal science required a level of mathematical formalization in three fundamental areas: logic, computation, and probability.
The idea of formal logic can be traced back to the philosophers of ancient Greece, but its mathematical development really began with the work of George Boole (1815-1864), who worked out the details of propositional, or Boolean, logic.
In 1879, Gottlob Frege (1848-1925) extended Boole's logic to include objects and relations, creating the first-order logic that is used today as the most basic knowledge representation system.
Alfred Tarski (1902-1983) introduced a theory of reference that shows how to relate the objects in a logic to objects in the real world.
Economics (1776-present)
The science of economics got its start in 1776, when Scottish philosopher Adam Smith (1723-1790) published An Inquiry into the Nature and Causes of the Wealth of Nations. While the ancient Greeks and others had made contributions to economic thought, Smith was the first to treat it as a science, using the idea that economies can be thought of as consisting of individual agents maximizing their own economic well-being.
Neuroscience (1861-present)
Neuroscience is the study of the nervous system, particularly the brain.
The exact way in which the brain enables thought is one of the great
mysteries of science. It has been appreciated for thousands of years that
the brain is somehow involved in thought, because of the evidence that
strong blows to the head can lead to mental incapacitation.
The brain is now recognized as the seat of consciousness; before that, candidate locations included the heart, the spleen, and the pineal gland.
Paul Broca's (1824-1880) study of aphasia (speech deficit) in brain-damaged patients in 1861 reinvigorated the field and persuaded the
medical establishment of the existence of localized areas of the brain
responsible for specific cognitive functions.
Despite these advances, we are still a long way from understanding how
any of these cognitive processes actually work.
The truly amazing conclusion is that a collection of simple cells can lead to thought, action, and consciousness or, in other words, that brains cause minds (Searle, 1992).
Psychology (1879-present)
The origins of scientific psychology are usually traced to the work of the German physicist Hermann von Helmholtz (1821-1894) and his student Wilhelm Wundt (1832-1920).
Helmholtz applied the scientific method to the study of human vision, and his Handbook of Physiological Optics is even now described as "the single most important treatise on the physics and physiology of human vision" (Nalwa, 1993, p. 15).
In 1879, Wundt opened the first laboratory of experimental psychology at
the University of Leipzig. Wundt insisted on carefully controlled
experiments in which his workers would perform a perceptual or
associative task while introspecting on their thought processes.
Linguistics (1957-present)
In 1957, B. F. Skinner published Verbal Behavior. This was a comprehensive, detailed account of the behaviourist approach to language learning, written by the foremost expert in the field.
Modern neural network research has split into two fields: one focused on creating effective network architectures and algorithms and understanding their mathematical properties, the other focused on careful modeling of the empirical properties of actual neurons and ensembles of neurons.
Autonomous Control
The ALVINN computer vision system was trained to steer a car to keep it following a lane. Video cameras transmit road images to ALVINN, which then computes the best direction to steer, based on experience from previous training runs.
Diagnosis
Medical diagnosis programs based on probabilistic analysis have been able to perform at the level of an expert physician in several areas of medicine. Heckerman (1991) describes a case where a leading expert on lymph-node pathology scoffs at a program's diagnosis of an especially difficult case.
The creators of the program suggest he ask the computer for an explanation of the diagnosis. The machine points out the major factors influencing its decision and explains the subtle interaction of several of the symptoms in this case. Eventually, the expert agrees with the program.
Logistics Planning
During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics planning and scheduling for transportation. This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for starting points, destinations, routes, and conflict resolution among all parameters.
The AI planning techniques allowed a plan to be generated in hours that would have taken weeks with older methods. The Defense Advanced Research Projects Agency (DARPA) stated that this single application more than paid back DARPA's 30-year investment in AI.
Robotics
Many surgeons now use robot assistants in microsurgery. Computer vision techniques are used to create a three-dimensional model of a patient's internal anatomy, and robotic control is then used to guide the insertion of a hip replacement prosthesis.
Benefits:
AI can automate repetitive tasks and free up human resources for more creative and complex work.
AI can process and analyze large amounts of data at a faster and more accurate rate than humans, leading to better decision-making.
AI can help identify patterns and insights in complex data sets that would be difficult for humans to detect.
AI can assist in fields like healthcare, education, and transportation by providing personalized services, improving efficiency, and reducing costs.
Risks:
AI can lead to job displacement as automation becomes more prevalent.
AI can be biased if trained on biased data, leading to discrimination and unfair outcomes.
AI can pose a threat to privacy if it is used to collect and analyze personal data without consent or appropriate security measures.
AI can be used maliciously by bad actors to spread disinformation or conduct cyber-attacks.
AI can also raise ethical concerns around issues like transparency, accountability, and fairness.

Risks of AI                 Benefits of AI
Job displacement            Increased efficiency and productivity
Bias in decision-making     Improved healthcare and disease diagnosis
Fig. 1.1. Agents interact with environments through sensors and actuators
Percept
We use the term percept to refer to the agent's perceptual inputs at any given
instant.
Percept Sequence
An agent's percept sequence is the complete history of everything the agent has
ever perceived.
Agent function
Mathematically speaking, we say that an agent's behavior is described by the
agent function that maps any given percept sequence to an action.
Agent program
Internally, the agent function for an artificial agent will be implemented by an
agent program. It is important to keep these two ideas distinct. The agent function is
an abstract mathematical description; the agent program is a concrete implementation, running on the agent architecture.
To illustrate these ideas, we will use a very simple example: the vacuum-cleaner world shown in Figure 1.2. This particular world has just two locations: squares A and B. The vacuum agent perceives which square it is in and whether there is dirt in the square. It can choose to move left, move right, suck up the dirt, or do nothing. One very simple agent function is the following: if the current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this agent function is shown in Figure 1.3.
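To make the mapping concrete, here is a minimal Python sketch of this vacuum agent program; the (location, status) percept encoding is an assumption for illustration, not the textbook's notation.

# A minimal sketch of the vacuum agent program described above.
def reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ('A', 'Dirty')
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'                  # move to the other square
    else:
        return 'Left'

# Partial tabulation of the induced agent function (cf. Figure 1.3):
for percept in [('A', 'Clean'), ('A', 'Dirty'), ('B', 'Clean'), ('B', 'Dirty')]:
    print(percept, '->', reflex_vacuum_agent(percept))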
Fig. 1.2. A vacuum-cleaner world with just two locations, A and B
Performance Measures
A performance measure embodies the criterion for success of an agent's behavior. When an agent is plunked down in an environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions causes the environment to go through a sequence of states. If the sequence is desirable, then the agent has performed well.
Rationality
What is rational at any given time depends on four things:
The performance measure that defines the criterion of success.
The agent's prior knowledge of the environment.
The actions that the agent can perform.
The agent's percept sequence to date.
Example
Let us consider a more complex problem than the vacuum world: an automated taxi driver. Before the reader becomes concerned, we should point out that a fully automated taxi is currently beyond the capabilities of current technology.
Intelligent Agent and Uninformed Search 1.19
The full driving task is extremely open-ended. Another reason we chose it as a topic for discussion is that the novel combinations of circumstances that can arise are limitless. The PEAS description for the taxi's task environment is summarized in Figure 1.4.
Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, speedometer, GPS, odometer, accelerometer, engine sensors, keyboard

Fig. 1.4. PEAS description of the task environment for an automated taxi.
Performance Measure
Desirable characteristics include:
arriving at the correct destination;
minimizing fuel consumption and wear and tear;
minimizing trip time or cost;
minimizing traffic violations and disturbances to other drivers;
maximizing safety and passenger comfort; and
maximizing profits.
Obviously, some of these objectives are incompatible, so trade-offs will be necessary.
Environment
Any taxi driver must navigate a wide range of roads, from rural lanes and
urban alleys to 12-lane freeways.
Other traffic, pedestrians, stray animals, road works, police cars, puddles,
and potholes can all be
found on the roads.
In addition, the taxi must interact with both potential and actual passengers. There are also some additional options.
The taxi may be required to operate in Southern California, where snow is rarely a problem, or in Alaska, where it seldom is not.
It could always drive on the right, or we might want it to be able to drive on the left in places like Britain or Japan. Obviously, the more restricted the environment, the easier the design problem.
Actuators
The actuators for an automated taxi include those available to a human driver: control over the engine through the accelerator, and control over steering and braking.
In addition, it will need output to a display screen or voice synthesizer to talk back to the passengers, and perhaps some way to communicate with other vehicles, politely or otherwise.
Sensors
The taxi's basic sensors will include one or more controllable video cameras to see the road; these may be supplemented with infrared or sonar sensors to detect distances to other cars and obstacles.
To avoid speeding tickets, the taxi should have a speedometer, and an accelerometer to properly control the vehicle, especially on curves.
The usual array of engine, fuel, and electrical system sensors will be required to determine the mechanical state of the vehicle. Like many human drivers, it may want a global positioning system (GPS) to avoid getting lost.
Agent Type: Medical diagnosis system
Performance Measure: Healthy patient
Environment: Patient, hospital, staff
Actuators: Display of questions, tests, diagnoses, treatments, referrals
Sensors: Keyboard entry of symptoms, findings, patient's answers

Agent Type: Satellite image analysis system
Performance Measure: Correct image categorization
Environment: Downlink from orbiting satellite
Actuators: Display of scene categorization
Sensors: Color pixel arrays

Agent Type: Part-picking robot
Performance Measure: Percentage of parts in correct bins
Environment: Conveyor belt with parts; bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors

Agent Type: Refinery controller
Performance Measure: Purity, yield, safety
Environment: Refinery, operators
Actuators: Valves, pumps, heaters, displays
Sensors: Temperature, pressure, chemical sensors

Agent Type: Interactive English tutor
Performance Measure: Student's score on test
Environment: Set of students, testing agency
Actuators: Display of exercises, suggestions, corrections
Sensors: Keyboard entry

Fig. 1.5. Examples of agent types and their PEAS descriptions.
Fully Observable vs. Partially Observable
Because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data, an environment may be partially observable. For example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking.
Deterministic vs. Stochastic
If the next state of the environment is completely determined by the current state and the action taken by the agent, the environment is said to be deterministic; otherwise, it is stochastic.
If an environment is not fully observable or deterministic, we call it uncertain. The term "stochastic" generally implies that uncertainty about outcomes is quantified in terms of probabilities; a nondeterministic environment is one in which actions are defined by their potential outcomes, but no probabilities are assigned to them.
Nondeterministic environment descriptions are typically associated with performance measures that require the agent to be successful in all possible outcomes of its actions.
Episodic vs. Sequential
In an episodic environment, the agent's experience is divided into atomic episodes: in each episode the agent receives a percept and then performs a single action, and the next episode does not depend on the actions taken in previous episodes. In a sequential environment, by contrast, the current decision could affect all future decisions.
1.10. STRUCTURE OF AGENTS
The job of AI is to design an agent program that implements the agent function: the mapping from percepts to actions.
We assume this program will run on some sort of computing device with physical sensors and actuators; we call this the architecture:
agent = architecture + program.
Agent Programs
The skeleton of each agent program is the same: it receives the current percept as input from the sensors and sends back an action to the actuators. Take note of the distinction between the agent function, which accepts the entire percept history, and the agent program, which only accepts the current percept.
Nothing more is available from the environment, so the agent program only accepts the current percept as input; if the agent's actions depend on the entire percept sequence, the agent will have to memorize the percepts.
As an illustration, Figure 1.7 depicts a rather simple agent program that
tracks the percept sequence and uses it to index into a table of actions to
determine what to do. Figure 1.3's example table, which is for the vacuum
world, clearly illustrates the agent function that the agent program
embodies.
function TABLE-DRIVEN-AGENT(percept) returns an action
    persistent: percepts, a sequence, initially empty
                table, a table of actions, indexed by percept sequences, initially fully specified
    append percept to the end of percepts
    action ← LOOKUP(percepts, table)
    return action
Challenges
Table lookup of percept-action pairs defining all possible condition-action rules necessary to interact in an environment.
Problems
Too big to generate and to store (chess has about 10^120 states, for example).
No knowledge of non-perceptual parts of the current state.
Not adaptive to changes in the environment; the entire table must be updated if changes occur.
Looping: can't make actions conditional.
Takes a long time to build the table.
No autonomy.
Even with learning, it would take a long time to learn the table entries.
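As a rough illustration of why the table explodes, the pseudocode above can be sketched in Python as follows; the tiny vacuum-world table is a hypothetical fragment, and a real table would need one entry for every possible percept sequence of every length.

percepts = []   # the agent's memory of everything it has ever perceived

def table_driven_agent(percept, table):
    percepts.append(percept)
    # Index the table by the entire percept history (tuples are hashable).
    return table.get(tuple(percepts), 'NoOp')

# Hypothetical fragment of a vacuum-world table (cf. Figure 1.3):
table = {
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
    (('B', 'Dirty'),): 'Suck',
    (('A', 'Clean'), ('A', 'Clean')): 'Right',
    (('A', 'Clean'), ('A', 'Dirty')): 'Suck',
}

print(table_driven_agent(('A', 'Dirty'), table))   # -> Suck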
[Figure: agent diagrams. Sensors report "what the world is like now"; condition-action rules determine "what action I should do now"; actuators carry out the chosen action. The model-based version also maintains an internal state.]
Fig. 1.11. A model-based reflex agent
First, we need to know how the world changes without the agent, such as
the fact that a car overtaking it will usually be closer to it now than it was
earlier.
Second, we need to know how the agent's actions affect the outside world. For instance, we need to know that turning the steering wheel clockwise causes the car to turn to the right, or that after five minutes of driving northbound on the freeway, one is typically about five miles further north than when one started.
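A minimal Python sketch of this update cycle follows; the dictionary state representation, the toy car-following model, and the rule set are all assumptions for illustration.

def update_state(state, last_action, percept, model):
    # Predict how the last action changed the world ("how the world
    # evolves"), then let the new percept override the prediction
    # wherever the sensors actually report a value.
    predicted = model(state, last_action)
    predicted.update(percept)
    return predicted

def model_based_reflex_agent(percept, state, last_action, model, rules):
    state = update_state(state, last_action, percept, model)
    for condition, action in rules:        # condition-action rules
        if condition(state):
            return state, action
    return state, 'NoOp'

# Toy usage: a car-following world tracking the gap to the car ahead.
model = lambda s, a: {**s, 'gap_m': s.get('gap_m', 50) - (5 if a == 'Accelerate' else 0)}
rules = [(lambda s: s.get('gap_m', 50) < 20, 'Brake'),
         (lambda s: True, 'Accelerate')]
state, action = model_based_reflex_agent({'gap_m': 15}, {}, 'Accelerate', model, rules)
print(action)   # -> Brake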
[Figure: a utility-based agent. The agent maintains an internal state from its sensors, evaluates "how happy I will be in such a state" with a utility function, decides "what action I should do now", and acts through its actuators.]
It uses a model of the world, along with a utility function that measures its preferences among states of the world. Then it chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of each outcome.
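That averaging step can be made concrete with a short sketch; the route names, outcome probabilities, and utility values below are invented for illustration.

def expected_utility(action, outcomes, utility):
    # outcomes[action] is a list of (probability, state) pairs.
    return sum(p * utility(s) for p, s in outcomes[action])

def choose_action(actions, outcomes, utility):
    # Pick the action with the highest probability-weighted utility.
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Toy example: two routes with uncertain travel times.
utility = lambda state: -state['minutes']        # shorter trips are better
outcomes = {
    'highway':  [(0.8, {'minutes': 20}), (0.2, {'minutes': 60})],  # jam risk
    'backroad': [(1.0, {'minutes': 30})],
}
print(choose_action(['highway', 'backroad'], outcomes, utility))  # -> highway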
[Figure: a general learning agent. A critic uses sensor feedback to tell the learning element how well the agent is doing; the learning element makes changes to the performance element using its knowledge; learning goals drive a problem generator that suggests exploratory actions, which the actuators carry out in the environment.]
function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
    persistent: seq, an action sequence, initially empty
                state, some description of the current world state
                goal, a goal, initially null
                problem, a problem formulation
    state ← UPDATE-STATE(state, percept)
    if seq is empty then
        goal ← FORMULATE-GOAL(state)
        problem ← FORMULATE-PROBLEM(state, goal)
        seq ← SEARCH(problem)
        if seq = failure then return a null action
    action ← FIRST(seq)
    seq ← REST(seq)
    return action
Fig. 1.16. A simple problem-solving agent
It first formulates a goal and a problem, searches for a sequence of actions that would solve the problem, and then executes the actions one at a time. When this is complete, it formulates another goal and starts over.
After formulating a goal and a problem to solve, the agent calls a search procedure to solve it. It then uses the solution to guide its actions, doing whatever the solution recommends as the next thing to do (typically, the first action of the sequence) and then removing that step from the sequence.
Once the solution has been executed, the agent will formulate a new goal. While the agent is executing the solution sequence, it ignores its percepts when choosing an action because it knows in advance what they will be.
An agent that carries out its plans with its eyes closed, so to speak, must be quite certain of what is going on. Control theorists call this an open-loop system, because ignoring the percepts breaks the loop between agent and environment.
A description of what each action does; the formal name for this is the transition model, specified by a function RESULT(s, a) that returns the state that results from doing action a in state s. We also use the term successor to refer to any state reachable from a given state by a single action.
For example, we have
RESULT(In(Arad), Go(Zerind)) = In(Zerind)
Fig. 1.17. A simplified road map of part of Romania, with road distances in kilometres
Together, the initial state, actions, and transition model implicitly define the state space of the problem: the set of all states reachable from the initial state by any sequence of actions. The state space forms a directed network or graph in which the nodes are states and the links between nodes are actions.
The map of Romania shown in Figure 1.17 can be interpreted as a state space graph if we view each road as standing for two driving actions, one in each direction. A path in the state space is a sequence of states connected by a sequence of actions.
The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. The agent's goal in Romania is the singleton set {In(Bucharest)}.
A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. For the agent trying to get to Bucharest, time is of the essence, so the cost of a path might be its length in kilometres.
The step cost of taking action a in state s to reach state s' is denoted by c(s, a, s'). The step costs for Romania are shown in Figure 1.17 as route distances. We assume that step costs are nonnegative.
The preceding elements define a problem and can be gathered into a single
data structure that is given as input to a problem-solving algorithm.
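One way to picture that single data structure is the following Python sketch, specialised to the Romania route-finding problem; the class layout and method names are assumptions for illustration, and only a subset of the Figure 1.17 map is included.

ROADS = {   # state -> {destination city: step cost in km}
    'Arad':           {'Sibiu': 140, 'Timisoara': 118, 'Zerind': 75},
    'Sibiu':          {'Fagaras': 99, 'Rimnicu Vilcea': 80, 'Arad': 140},
    'Fagaras':        {'Bucharest': 211, 'Sibiu': 99},
    'Rimnicu Vilcea': {'Pitesti': 97, 'Sibiu': 80},
    'Pitesti':        {'Bucharest': 101, 'Rimnicu Vilcea': 97},
}

class RouteProblem:
    def __init__(self, initial, goal, roads):
        self.initial, self.goal, self.roads = initial, goal, roads

    def actions(self, state):              # ACTIONS(s): roads out of s
        return list(self.roads.get(state, {}))

    def result(self, state, action):       # RESULT(s, a): transition model
        return action                      # Go(city) lands us in that city

    def goal_test(self, state):            # is this a goal state?
        return state == self.goal

    def step_cost(self, state, action, result):   # c(s, a, s')
        return self.roads[state][action]

problem = RouteProblem('Arad', 'Bucharest', ROADS)
print(problem.result('Arad', 'Sibiu'))     # RESULT(In(Arad), Go(Sibiu)) = In(Sibiu)

The transition model is trivial here only because each driving action is named by its destination city.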
A solution to a problem is an action sequence that leads from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.
As an illustration, consider the route from Arad to Sibiu, Rimnicu Vilcea, Pitesti, and Bucharest. Numerous more intricate paths correspond to this abstract solution. For instance, we could listen to the radio while driving between Sibiu and Rimnicu Vilcea before turning it off for the remainder of the journey.
For every detailed state that is "in Arad," there must be a detailed path to some state that is "in Sibiu," and so on. The abstraction is valid if we can expand any abstract solution into a solution in the more detailed world. The abstraction is helpful if executing each step of the solution is simpler than the original problem; in this case, the steps are simple enough to be carried out by a typical driving agent without the need for additional search or planning.
Thus, choosing a good abstraction entails taking away as little information as possible while maintaining its validity and making sure that the abstract actions are simple to perform.
Fig. 1.19. Breadth-first search on a simple binary tree. At each stage, the node to be expanded next is indicated by a marker
We can easily see that it is complete: if the shallowest goal node is at some finite depth d, breadth-first search will eventually find it after generating all shallower nodes (provided the branching factor b is finite).
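A minimal Python sketch of breadth-first graph search follows, written against the RouteProblem interface sketched earlier; it keeps a FIFO frontier of paths and applies the goal test when a node is generated, which breadth-first search can safely do.

from collections import deque

def breadth_first_search(problem):
    if problem.goal_test(problem.initial):
        return [problem.initial]
    frontier = deque([[problem.initial]])      # FIFO queue of paths
    explored = set()
    while frontier:
        path = frontier.popleft()              # shallowest path first
        state = path[-1]
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            # Skip states already expanded or already waiting in the frontier.
            if child not in explored and all(child != p[-1] for p in frontier):
                if problem.goal_test(child):   # goal test at generation time
                    return path + [child]
                frontier.append(path + [child])
    return None                                # failure

# e.g. breadth_first_search(problem) -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']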
1.12.2. UNIFORM-COST SEARCH
function UNIFORM-COST-SEARCH(problem) returns a solution, or failure
    node ← a node with STATE = problem.INITIAL-STATE, PATH-COST = 0
    frontier ← a priority queue ordered by PATH-COST, with node as the only element
    explored ← an empty set
    loop do
        if EMPTY?(frontier) then return failure
        node ← POP(frontier)   /* chooses the lowest-cost node in frontier */
        if problem.GOAL-TEST(node.STATE) then return SOLUTION(node)
        add node.STATE to explored
        for each action in problem.ACTIONS(node.STATE) do
            child ← CHILD-NODE(problem, node, action)
            if child.STATE is not in explored or frontier then
                frontier ← INSERT(child, frontier)
            else if child.STATE is in frontier with higher PATH-COST then
                replace that frontier node with child
Fig. 1.21. Uniform-cost search on a graph
The algorithm is identical to the general graph search algorithm in Figure
1.7, except for the use of a priority queue and the addition of an extra
check in case a shorter path to a frontier state is discovered. The data
structure for frontier needs to support efficient membership testing, so it
should combine the capabilities of a priority queue and a hash table.
[Figure: a fragment of the Romania state space containing Sibiu, Fagaras (99 from Sibiu), Rimnicu Vilcea (80 from Sibiu), Pitesti (97 from Rimnicu Vilcea), and Bucharest (211 from Fagaras, 101 from Pitesti).]
Fig. 1.22. Part of the Romania state space, selected to illustrate uniform-cost search
Now a goal node has been generated, but uniform-cost search keeps going, choosing Pitesti for expansion and adding a second path to Bucharest with cost 80 + 97 + 101 = 278. Now the algorithm checks to see if this new path is better than the old one; it is, so the old one is discarded. Bucharest, now with g-cost 278, is selected for expansion and the solution is returned.
Uniform-cost search does not care about the number of steps a path has, but only about their total cost. Therefore, it will get stuck in an infinite loop if there is a path with an infinite sequence of zero-cost actions.
The algorithm's worst-case time and space complexity is O(b^(1 + ⌊C*/ε⌋)), where C* is the cost of the optimal solution and every action costs at least ε; this can be much greater than O(b^d).
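The pseudocode in Fig. 1.21 can be rendered with Python's heapq as the priority queue, together with a dictionary for the membership test the text calls for; this sketch assumes the RouteProblem interface from earlier and lazily skips stale heap entries instead of replacing them in place.

import heapq

def uniform_cost_search(problem):
    frontier = [(0, problem.initial, [problem.initial])]   # (g, state, path)
    best_g = {problem.initial: 0}      # cheapest path cost found per state
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)   # lowest-cost node first
        if problem.goal_test(state):               # goal test on expansion
            return g, path
        if state in explored:
            continue                               # stale, superseded entry
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, child)
            if child not in explored and g2 < best_g.get(child, float('inf')):
                best_g[child] = g2                 # "replace" the costlier path
                heapq.heappush(frontier, (g2, child, path + [child]))
    return None                                    # failure

# e.g. uniform_cost_search(problem)
# -> (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])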
1.12.3. DEPTH-FIRST-SEARCH
The deepest node in the current frontier of the search tree is always expanded by depth-first search. Figure 1.23 shows how the search progresses.
The deepest level of the search tree, where the nodes have no successors, is reached right away by the search. The search "backs up" to the next deepest node that still has unexplored successors as those nodes are expanded and dropped from the frontier.
Depth-first search is an example of a graph-search algorithm.
Fig. 1.23. Depth-first search on a binary tree. The unexplored region is shown in light gray. Explored nodes with no descendants in the frontier are removed from memory. Nodes at depth 3 have no successors and M is the only goal node.
In this way, only O(m) memory is needed rather than O(bm). Backtracking
search facilitates yet another memory-saving (and time-saving) trick: the
idea of generating a successor by modifying the current state description
directly rather than copying it first.
This reduces the memory requirements to just one state description and
O(m) actions.
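Here is a sketch of that modify-and-undo trick, with hypothetical apply_move/undo_move helpers; only one shared state description is kept, plus O(m) moves on the recursion stack.

def backtracking_search(state, depth, limit, is_goal, moves, apply_move, undo_move):
    if is_goal(state):
        return []
    if depth == limit:
        return None
    for move in moves(state):
        apply_move(state, move)            # modify the one shared state in place
        plan = backtracking_search(state, depth + 1, limit,
                                   is_goal, moves, apply_move, undo_move)
        undo_move(state, move)             # restore it before trying the next move
        if plan is not None:
            return [move] + plan
    return None

# Toy usage: reach the number 5 from 0 using +1 moves.
state = [0]
plan = backtracking_search(state, 0, 10,
                           is_goal=lambda s: s[0] == 5,
                           moves=lambda s: ['inc'],
                           apply_move=lambda s, m: s.__setitem__(0, s[0] + 1),
                           undo_move=lambda s, m: s.__setitem__(0, s[0] - 1))
print(plan)   # -> ['inc', 'inc', 'inc', 'inc', 'inc']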
The embarrassing failure of depth-first search in infinite state spaces can be alleviated by supplying depth-first search with a predetermined depth limit. This approach is called depth-limited search. The depth limit solves the infinite-path problem.
function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution, or failure
    for depth = 0 to ∞ do
        result ← DEPTH-LIMITED-SEARCH(problem, depth)
        if result ≠ cutoff then return result
Fig. 1.25. The iterative deepening search algorithm, which repeatedly applies depth-limited search with increasing limits. It terminates when a solution is found or if the depth-limited search returns failure, meaning that no solution exists.
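In Python, the Fig. 1.25 scheme might look like the following sketch, again assuming the RouteProblem interface from earlier; the string 'cutoff' marks a depth-limit failure, as in the pseudocode, while None marks genuine failure.

from itertools import count

def depth_limited_search(problem, limit, state=None, visited=None):
    state = problem.initial if state is None else state
    visited = {state} if visited is None else visited   # states on this path
    if problem.goal_test(state):
        return [state]
    if limit == 0:
        return 'cutoff'
    cutoff_occurred = False
    for action in problem.actions(state):
        child = problem.result(state, action)
        if child in visited:
            continue                                    # avoid loops on the path
        result = depth_limited_search(problem, limit - 1, child, visited | {child})
        if result == 'cutoff':
            cutoff_occurred = True
        elif result is not None:
            return [state] + result
    return 'cutoff' if cutoff_occurred else None

def iterative_deepening_search(problem):
    for depth in count():              # depth = 0, 1, 2, ...
        result = depth_limited_search(problem, depth)
        if result != 'cutoff':
            return result

# e.g. iterative_deepening_search(problem)
# -> ['Arad', 'Sibiu', 'Fagaras', 'Bucharest']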
Figure 1.26 shows four iterations of iterative deepening search on a binary search tree (with depth limits 0 through 3), where the solution is found on the fourth iteration.
E.g. Uninformed search:
(a) Breadth-first search
(b) Uniform-cost search
(c) Depth-first search
(d) Depth-limited search
(e) Iterative deepening search
(f) Bi-directional search
E.g. Informed search:
(a) Best-first search
(b) Greedy search
(c) A* search
11. What is a rational agent?
A rational agent is one that does the right thing. Here, the right thing is one that will cause the agent to be more successful. That leaves us with the problem of deciding how and when to evaluate the agent's success.
12. List the risks and benefits of AI.

Risks of AI          Benefits of AI
Job displacement     Increased efficiency and productivity
REVIEW QUESTIONS
1. Give an example of a problem for which breadth-first search would work better than depth-first search.
2. Explain the following search strategies: (i) depth-limited search (ii) uniform-cost search.
3. Explain uninformed search strategies.
4. Discuss the different types of agent programs.
5. Outline the components and functions of any one of the basic kinds of agent programs.
6. The given figure shows a 15-node state space. Node 12 is the goal node. Work out the order in which the nodes will be visited for (i) breadth-first search (ii) depth-first search.
[Figure: the 15-node state space referenced in Question 6.]