AI Module 1
Study Material
Course Name: Artificial Intelligence
Course Code: BCS515B
Prepared By:
Faculty Name: Dr. Vishruth B G / Mr. Deepak B N
Semester: V
Module 1
Introduction to Artificial Intelligence
• Homo sapiens: The name is Latin for "wise man".
• Artificial intelligence (AI) is an area of computer science that emphasizes the creation of
intelligent machines that work and react like humans.
• AI is accomplished by studying how the human brain thinks and how humans learn, decide, and
work while trying to solve a problem, and then using the outcomes of this study as a basis for
developing intelligent software and systems.
1. What is AI?
i. Thinking humanly
ii. Thinking rationally
iii. Acting humanly
iv. Acting rationally
Thinking humanly:
"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning ..." (Bellman, 1978)

Thinking rationally:
"The study of the computations that make it possible to perceive, reason, and act." (Winston, 1992)

Acting humanly:
"The art of creating machines that perform functions that require intelligence when performed by people." (Kurzweil, 1990)

Acting rationally:
"Computational Intelligence is the study of the design of intelligent agents." (Poole et al., 1998)
Acting humanly: The Turing Test approach
• The Turing Test, proposed by Alan Turing (1950), provides an operational definition of
intelligence: a computer passes the test if a human interrogator, after posing some written
questions, cannot tell whether the written responses come from a person or from a machine.
• Automated reasoning: To use the stored information to answer questions and to draw new
conclusions.
• Machine Learning: To adapt to new circumstances and to detect and extrapolate patterns.
Thinking humanly: The cognitive modeling approach
• If we are going to say that a given program thinks like a human, we must have some way of
determining how humans think:
i. Through introspection
Trying to catch our own thoughts as they go by
ii. Through psychological experiments
Observing a person in action
iii. Through brain imaging
Observing the brain in action
• We can then compare the trace of a computer program's reasoning steps to traces of human
subjects solving the same problem.
• Cognitive science brings together computer models from AI and experimental techniques from
psychology to try to construct precise and testable theories of the workings of the human mind.
AI and cognitive science fertilize each other in the areas of vision and natural language.
• Once we have a sufficiently precise theory of the mind, it becomes possible to express the
theory as a computer program.
• For example, Allen Newell and Herbert Simon, who developed GPS, the "General Problem
Solver".
Thinking rationally: The "laws of thought" approach
• Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable
reasoning processes. His syllogisms provided patterns for argument structures that always
yielded correct conclusions when given correct premises.
Eg.
Socrates is a man; all men are mortal; therefore, Socrates is mortal.
There are two main obstacles to this approach:
1. It is not easy to take informal knowledge and state it in the formal terms required by
logical notation, particularly when the knowledge is less than 100% certain.
2. There is a big difference between solving a problem "in principle" and solving
it in practice.
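The appeal of this approach is that the derivation is mechanical. As a minimal illustration (a hypothetical sketch, not from the text; the rule and fact encodings are invented for this example), the syllogism above can be checked by a few lines of Python:

# Minimal sketch of mechanical inference for the syllogism above.
# The encodings ("man" -> "mortal") are illustrative only.
rules = {"man": "mortal"}          # "All men are mortal"
facts = {("Socrates", "man")}      # "Socrates is a man"

def derive(facts, rules):
    """Apply the rules to the facts until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for entity, category in list(derived):
            conclusion = (entity, rules.get(category))
            if conclusion[1] and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(derive(facts, rules))   # includes ('Socrates', 'mortal')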
Acting rationally: The rational agent approach
• All computer programs do something, but computer agents are expected to do more: operate
autonomously, perceive their environment, persist over a prolonged time period, adapt to
change, and create and pursue goals.
• A rational agent is one that acts so as to achieve the best outcome or, when there is
uncertainty, the best expected outcome.
• In the "laws of thought" approach to AI, the emphasis was on correct inferences.
• On the other hand, correct inference is not all of rationality; in some situations, there is no
provably correct thing to do, but something must still be done.
• For example, recoiling from a hot stove is a reflex action that is usually more successful than a
slower action taken after careful deliberation.
• A rational agent takes the right/best action to achieve its goals, based on its knowledge and beliefs.
• Example: Assume I don’t like to get wet in the rain (my goal), so I bring an umbrella (my
action). Do I behave rationally?
• The answer is dependent on my knowledge and belief
• If I’ve heard the forecast for rain and I believe it, then bringing the umbrella is
rational.
• If I’ve not heard the forecast for rain and I do not believe that it is going to rain, then
bringing the umbrella is not rational.
Example:
• My goals – (i) do not get wet if it rains; (ii) do not look stupid (such as bringing an umbrella
when it is not raining)
• My knowledge/belief – weather forecast for rain and I believe it
• My rational behaviour – bring an umbrella
• The outcome of my behaviour: if it rains, my rational behaviour achieves both goals; if it does
not rain, my rational behaviour fails to achieve the second goal.
The Foundations of Artificial Intelligence
Philosophy
• How does the mind arise from a physical brain? Where does knowledge come from?
• Aristotle was the first to formulate a precise set of laws governing the rational part of the
mind. He developed an informal system of syllogisms for proper reasoning, which in principle
allowed one to generate conclusions mechanically, given initial premises.
• Descartes was a strong advocate of the power of reasoning in understanding the world, a
philosophy now called rationalism.
Mathematics
• What are the formal rules to draw valid conclusions? What can be computed?
• (un)decidability: Gödel's incompleteness theorem showed that in any sufficiently strong formal
theory, there are true statements that are undecidable, i.e., they have no proof within the theory.
• (in)tractability: A problem is called intractable if the time required to solve instances of the
problem grows exponentially with the size of the instance.
Economics
• Economics is the study of how people make choices that lead to preferred outcomes (utility).
• Decision theory: It combines probability theory with utility theory, provides a formal and
complete framework for decisions made under uncertainty.
Neuroscience
• How do brains process information? Neuroscience is the study of the nervous system,
particularly the brain.
Psychology
• Cognitive psychology views the brain as an information-processing device. A common view
among psychologists is that a cognitive theory should be like a computer program (Anderson, 1980),
i.e., it should describe a detailed information-processing mechanism whereby some cognitive
function might be implemented.
Computer engineering
• For artificial intelligence to succeed, we need two things: intelligence and an artifact. The
computer has been the artifact (object) of choice.
• The first operational computer was the electromechanical Heath Robinson, built in 1940 by
Alan Turing's team for a single purpose: deciphering German messages.
• The first operational programmable computer was the Z-3, the invention of Konrad Zuse in
Germany in 1941.
• The first electronic computer, the ABC, was assembled by John Atanasoff and his student
Clifford Berry between 1940 and 1942 at Iowa State University.
• The first programmable machine was a loom, devised in 1805 by Joseph Marie Jacquard
(1752-1834) that used punched cards to store instructions for the pattern to be woven.
Control theory and cybernetics
• Ktesibios of Alexandria (c. 250 B.C.) built the first self-controlling machine: a water clock with
a regulator that maintained a constant flow rate. This invention changed the definition of what an
artifact could do.
• Modern control theory, especially the branch known as stochastic optimal control, has as its
goal the design of systems that maximize an objective function over time. This roughly
matches our view of AI: designing systems that behave optimally.
• The tools of logical inference and computation allowed AI researchers to consider problems
such as language, vision, and planning that fell completely outside the control theorist’s purview.
Linguistics
• In 1957, B. F. Skinner published Verbal Behavior, a behaviourist account of language learning.
It was reviewed by Noam Chomsky, who had just published a book on his own theory, Syntactic
Structures. Chomsky pointed out that the behaviourist theory did not address the notion of
creativity in language.
• Modern linguistics and AI were "born" at about the same time, and grew up together,
intersecting in a hybrid field called computational linguistics or natural language processing.
• The problem of understanding language soon turned out to be considerably more complex than
it seemed in 1957. Understanding language requires an understanding of the subject matter and
context, not just an understanding of the structure of sentences.
• Knowledge representation (the study of how to put knowledge into a form that a computer can
reason with)- tied to language and informed by research in linguistics.
The History of Artificial Intelligence
The gestation of artificial intelligence (AI) during the period from 1943 to 1955 marked the early
theoretical and conceptual groundwork for the field. This period laid the foundation for the
subsequent development of AI.
The birth of artificial intelligence (AI) in 1956 is commonly associated with the Dartmouth
Conference, a seminal event that took place at Dartmouth College in Hanover, New Hampshire.
The period from 1952 to 1969 in the history of artificial intelligence (AI) was characterized by
early enthusiasm and great expectations. Researchers during this time were optimistic about the
potential of AI and believed that significant progress could be made in creating machines with
human-like intelligence.
The period from 1966 to 1973 in the history of artificial intelligence (AI) is often referred to as
"A Dose of Reality." During this time, researchers faced challenges and setbacks that led to a
reevaluation of the initial optimism and expectations surrounding AI.
The period from 1969 to 1979 in the history of artificial intelligence (AI) is characterized by a
focus on knowledge-based systems, with researchers exploring the use of symbolic
representation of knowledge to address challenges in AI. This era saw efforts to build expert
systems, which were designed to emulate human expertise in specific domains.
The period from 1980 to the present marks the evolution of artificial intelligence (AI) into an
industry, witnessing significant advancements, increased commercialization, and widespread
applications across various domains.
The period from 1986 to the present is characterized by the resurgence and dominance of neural
networks in the field of artificial intelligence (AI). This era is marked by significant
advancements in the development of neural network architectures, training algorithms, and the
widespread adoption of deep learning techniques.
The period from 1987 to the present has seen the adoption of the scientific method in the field of
artificial intelligence (AI), reflecting a more rigorous and empirical approach to research. This
shift has involved the application of experimental methodologies, reproducibility, and a greater
emphasis on evidence-based practices.
The period from 1995 to the present has been marked by the emergence and evolution of
intelligent agents in the field of artificial intelligence (AI). Intelligent agents are autonomous
entities that perceive their environment, make decisions, and take actions to achieve goals.
The period from 2001 to the present has been characterized by the availability and utilization of
very large datasets in the field of artificial intelligence (AI). This era has witnessed an
unprecedented growth in the volume and diversity of data, providing a foundation for training
and enhancing increasingly sophisticated AI models.
Intelligent Agents
An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators. This simple idea is illustrated in Figure 2.1.
Percept Sequence − It is the complete history of everything the agent has perceived to date.
Performance Measure of Agent − It is the criteria, which determines how successful an agent
is.
Behavior of Agent − It is the action that the agent performs after any given sequence of percepts.
These notions suggest the simple skeleton sketched below.
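A hypothetical program skeleton for these definitions (the class and method names are invented for illustration): the agent repeatedly receives a percept, appends it to its percept sequence, and asks its agent program for an action.

# Hypothetical agent skeleton: percepts in, actions out.
class Agent:
    def __init__(self, program):
        self.program = program            # maps a percept sequence to an action
        self.percept_sequence = []        # history of everything perceived so far

    def step(self, percept):
        self.percept_sequence.append(percept)
        return self.program(self.percept_sequence)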
This particular world has just two locations: squares A and B. The vacuum agent perceives
which square it is in and whether there is dirt in the square. It can choose to move left, move
right, suck up the dirt, or do nothing. One very simple agent function is the following: if the
current square is dirty, then suck; otherwise, move to the other square. A partial tabulation of this
agent function is shown in Figure 2.3, and an agent program that implements it is given in
Figure 2.8.
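That agent function can be written directly as a short program (a sketch in the spirit of Figure 2.8; the percept encoding ('A'/'B', 'Clean'/'Dirty') is illustrative):

# Reflex vacuum agent: decides from the current percept alone.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    elif location == 'A':
        return 'Right'
    else:
        return 'Left'

print(reflex_vacuum_agent(('A', 'Dirty')))   # Suck
print(reflex_vacuum_agent(('A', 'Clean')))   # Right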
2. Concept of Rationality
A rational agent is one that does the right thing—conceptually speaking, every entry in the table
for the agent function is filled out correctly.
Rationality
A definition of a rational agent: For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance measure, given the evidence
provided by the percept sequence and whatever built-in knowledge the agent has.
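In symbols, this can be written as follows (a conventional formulation, not given in the text):

a^* = \arg\max_{a \in A} \; \mathbb{E}\big[\,\text{performance measure} \mid \text{percept sequence},\ \text{built-in knowledge},\ a\,\big]

i.e., the rational action a^* is the one with the highest expected performance, given everything the agent has perceived and knows.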
Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the
other square if not; this is the agent function tabulated in Figure 2.3. Is this a rational agent? That
depends! First, we need to say what the performance measure is, what is known about the
environment, and what sensors and actuators the agent has. Let us assume the following:
• The performance measure awards one point for each clean square at each time step,
over a "lifetime" of 1000 time steps.
• The "geography" of the environment is known a priori (Figure 2.2) but the dirt distribution
and the initial location of the agent are not. Clean squares stay clean and sucking
cleans the current square. The Left and Right actions move the agent left and right except
when this would take the agent outside the environment, in which case the agent remains
where it is.
• The agent correctly perceives its location and whether that location contains dirt.
We claim that under these circumstances the agent is indeed rational.
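Under these assumptions the claim can be checked by simulation (a minimal sketch that reuses the reflex_vacuum_agent written above; counting clean squares after each action is a modeling choice of this sketch):

# Two-square vacuum world: one point per clean square per time step.
def simulate(dirt, location='A', steps=1000):
    score = 0
    for _ in range(steps):
        action = reflex_vacuum_agent((location, dirt[location]))
        if action == 'Suck':
            dirt[location] = 'Clean'
        elif action == 'Right':
            location = 'B'
        elif action == 'Left':
            location = 'A'
        score += sum(1 for status in dirt.values() if status == 'Clean')
    return score

print(simulate({'A': 'Dirty', 'B': 'Dirty'}))   # 1998 of the 2000-point maximum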
Task environments are essentially the "problems" to which rational agents are the "solutions."
Specifying the performance measure, the environment, and the agent's actuators and sensors
is called giving a PEAS (Performance, Environment, Actuators, Sensors) description.
In designing an agent, the first step must always be to specify the task environment as fully as
possible.
What is the performance measure to which we would like our automated driver to aspire?
Desirable qualities include getting to the correct destination; minimizing fuel consumption and
wear and tear; minimizing the trip time or cost; minimizing violations of traffic laws and
disturbances to other drivers; maximizing safety and passenger comfort; and maximizing profits.
Obviously, some of these goals conflict, so tradeoffs will be required.
What is the driving environment that the taxi will face? Any taxi driver must deal with a variety
of roads, ranging from rural lanes and urban alleys to 12-lane freeways. The roads contain other
traffic, pedestrians, stray animals, road works, police cars, puddles, and potholes. The taxi must
also interact with potential and actual passengers.
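This PEAS description can be summarized compactly (an illustrative summary in the style of the textbook's taxi example):

# Illustrative PEAS summary for the automated taxi.
taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators":   ["steering", "accelerator", "brake", "signal", "horn", "display"],
    "Sensors":     ["cameras", "sonar", "speedometer", "GPS", "odometer",
                    "accelerometer", "engine sensors", "keyboard"],
}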
Properties of task environments
• A task environment is (effectively) fully observable iff the sensors detect the complete state of
the environment.
• A task environment is deterministic iff its next state is completely determined by its current
state and by the action of the agent (e.g., a crossword puzzle); if not, it is stochastic.
• In a multi-agent environment we ignore uncertainty that arises purely from the actions of other
agents (e.g., chess is deterministic even though each agent is unable to predict the actions of the
others).
• A task environment is episodic iff the agent's experience is divided into episodes, where each
episode does not depend on the actions taken in previous episodes and does not influence future
episodes; otherwise it is sequential.
• The task environment is dynamic iff it can change while the agent is choosing an action, and
static otherwise ⇒ in a dynamic environment the agent needs to keep looking at the world while
deciding on an action.
• The task environment is semidynamic if the environment itself does not change with the
passage of time, but the agent's performance score does.
• The state of the environment, the way time is handled, and the agent's percepts and actions can
be discrete or continuous.
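For example (classifications in the style of the textbook's examples):

Task environment     Observable   Deterministic    Episodic     Static    Discrete
Crossword puzzle     Fully        Deterministic    Sequential   Static    Discrete
Taxi driving         Partially    Stochastic       Sequential   Dynamic   Continuous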
Note:
• Known vs. unknown describes the agent’s (or designer’s) state of knowledge about the "laws of
physics" of the environment.
• if the environment is known, then the outcomes (or outcome probabilities if stochastic)
for all actions are given.
• if the environment is unknown, then the agent will have to learn how it works in order to
make good decisions
• an unknown environment can be fully observable (Ex: a game I don’t know the rules of)
The Structure of Agents
• The agent program runs on some computing device with physical sensors and actuators; this
combination is called the agent architecture.
• If the actions need to depend on the entire percept sequence, the agent will have to remember
the percepts.
• A table can represent the agent function explicitly (e.g., for the simple vacuum cleaner), as
sketched below.
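A sketch of such a table-driven program (the table entries follow the partial tabulation in Figure 2.3; the function names are illustrative, and the table grows impractically large for any realistic environment):

# Table-driven agent: looks up an action for the full percept sequence.
def table_driven_agent_program(table):
    percepts = []
    def program(percept):
        percepts.append(percept)
        return table.get(tuple(percepts))    # None if the sequence is not tabulated
    return program

# Partial table for the vacuum world (illustrative).
table = {
    (('A', 'Clean'),): 'Right',
    (('A', 'Dirty'),): 'Suck',
    (('B', 'Clean'),): 'Left',
    (('B', 'Dirty'),): 'Suck',
    (('A', 'Clean'), ('A', 'Clean')): 'Right',
    (('A', 'Clean'), ('A', 'Dirty')): 'Suck',
}

agent = table_driven_agent_program(table)
print(agent(('A', 'Dirty')))   # Suck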
Agents can be grouped into five classes based on their degree of perceived intelligence and
capability. All these agents can improve their performance and generate better actions over
time. These are given below:
Simple reflex agents
• Simple reflex agents are the simplest agents. These agents take decisions on the basis of
the current percept and ignore the rest of the percept history.
• The simple reflex agent does not consider any part of the percept history during its decision
and action process.
• The simple reflex agent works on condition-action rules, which map the current state to an
action: a room-cleaner agent, for example, acts only if there is dirt in the room. A sketch follows.
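A condition-action rule agent can be sketched as follows (the rule encodings are illustrative):

# Simple reflex agent: acts on the current percept only.
def simple_reflex_agent(rules, interpret_input):
    def program(percept):
        state = interpret_input(percept)
        for condition, action in rules:       # first matching rule fires
            if condition(state):
                return action
        return None
    return program

# Room-cleaner example: suck when dirty, otherwise move to the other square.
rules = [
    (lambda s: s['status'] == 'Dirty', 'Suck'),
    (lambda s: s['location'] == 'A',   'Right'),
    (lambda s: s['location'] == 'B',   'Left'),
]
agent = simple_reflex_agent(rules, lambda p: {'location': p[0], 'status': p[1]})
print(agent(('B', 'Dirty')))   # Suck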
Model-based reflex agents
• The model-based agent can work in a partially observable environment and track the situation.
• These agents have a model, which is "knowledge of the world", and based on this model they
perform actions.
• For the braking problem, the internal state is not too extensive— just the previous frame from
the camera, allowing the agent to detect when two red lights at the edge of the vehicle go on or
off simultaneously.
• For other driving tasks such as changing lanes, the agent needs to keep track of where the other
cars are if it can’t see them all at once. And for any driving to be possible at all, the agent needs
to keep track of where its keys are.
• Updating this internal state information as time goes by requires two kinds of knowledge to be
encoded in the agent program.
• First, we need some information about how the world evolves independently of the agent—for
example, that an overtaking car generally will be closer behind than it was a moment ago.
• Second, we need some information about how the agent’s own actions affect the world—for
example, that when the agent turns the steering wheel clockwise, the car turns to the right, or that
after driving for five minutes northbound on the freeway, one is usually about five miles north of
where one was five minutes ago.
• This knowledge about "how the world works"—whether implemented in simple Boolean
circuits or in complete scientific theories—is called a model of the world. An agent that uses
such a model is called a model-based agent; a sketch follows.
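A model-based reflex agent can be sketched as follows (a hypothetical sketch; the update_state function stands in for the two kinds of knowledge described above):

# Model-based reflex agent: maintains internal state using a world model.
def model_based_agent(update_state, rules):
    state, last_action = {}, None
    def program(percept):
        nonlocal state, last_action
        # update_state combines "how the world evolves" and
        # "what my actions do" with the latest percept
        state = update_state(state, last_action, percept)
        for condition, action in rules:
            if condition(state):
                last_action = action
                return action
        return None
    return program

# Toy model for the vacuum world: remember each square's last known status.
def update_state(state, last_action, percept):
    location, status = percept
    new_state = dict(state)
    new_state[location] = status
    new_state['location'] = location
    return new_state

rules = [
    (lambda s: s.get(s['location']) == 'Dirty', 'Suck'),
    (lambda s: s['location'] == 'A', 'Right'),
    (lambda s: s['location'] == 'B', 'Left'),
]
agent = model_based_agent(update_state, rules)
print(agent(('A', 'Dirty')))   # Suck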
Goal-based agents
• Knowledge of the current state of the environment is not always sufficient for an agent to
decide what to do.
• The agent needs to know its goal which describes desirable situations.
• Goal-based agents expand the capabilities of the model-based agent by having the "goal"
information.
• These agents may have to consider a long sequence of possible actions before deciding whether
the goal is achieved or not. Such consideration of different scenarios is called searching and
planning, and it makes an agent proactive.
• Sometimes it will be trickier: for example, when the agent has to consider long sequences of
twists and turns to find a way to achieve the goal.
• Search and planning are the subfields of AI devoted to finding action sequences that achieve
the agent’s goals.
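Searching for such an action sequence can be sketched with a minimal breadth-first search (the two-square environment and goal test here are illustrative, not from the text):

from collections import deque

# Goal-based agent sketch: find an action sequence that reaches the goal.
def plan(start, goal_test, successors):
    frontier = deque([(start, [])])           # (state, actions so far)
    explored = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, next_state in successors(state):
            if next_state not in explored:
                explored.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None                               # no sequence achieves the goal

# Toy example: reach square 'B' in the two-square world.
moves = {'A': [('Right', 'B')], 'B': [('Left', 'A')]}
print(plan('A', lambda s: s == 'B', lambda s: moves[s]))   # ['Right']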
Comparison of the reflex agent and the goal-based agent:

Reflex agent: For the reflex agent, we would have to rewrite many condition–action rules.
Goal-based agent: Although the goal-based agent appears less efficient, it is more flexible
because the knowledge that supports its decisions is represented explicitly and can be modified.
If it starts to rain, the agent can update its knowledge of how effectively its brakes will operate;
this will automatically cause all of the relevant behaviors to be altered to suit the new
conditions.

Reflex agent: The reflex agent's rules for when to turn and when to go straight will work only
for a single destination; they must all be replaced to go somewhere new.
Goal-based agent: The goal-based agent's behavior can easily be changed to go to a different
destination, simply by specifying that destination as the goal.

Example:
Reflex agent: The reflex agent brakes when it sees brake lights.
Goal-based agent: A goal-based agent, in principle, could reason that if the car in front has its
brake lights on, it will slow down.
Utility-based agents
• These agents are similar to the goal-based agent but add an extra component, a utility measure,
which provides a measure of success at a given state.
• Utility-based agents act based not only on goals but also on the best way to achieve the goal.
• The Utility-based agent is useful when there are multiple possible alternatives, and an agent has
to choose in order to perform the best action.
• The utility function maps each state to a real number to check how efficiently each action
achieves the goals.
• With several goals, none of which can be achieved with certainty, the utility measure selects a
proper tradeoff between the importance of the goals and the likelihood of success, as sketched
below.
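The tradeoff can be sketched as an expected-utility calculation (a hypothetical sketch; the outcome model and utility values are invented for illustration):

# Utility-based choice: pick the action with the highest expected utility.
def utility_based_choice(state, actions, outcomes, utility):
    def expected_utility(action):
        # outcomes(state, action) yields (probability, next_state) pairs
        return sum(p * utility(s) for p, s in outcomes(state, action))
    return max(actions, key=expected_utility)

# Toy tradeoff: a fast but risky route vs. a slow but safe one.
outcomes = lambda s, a: {'fast': [(0.7, 'on_time'), (0.3, 'crash')],
                         'slow': [(1.0, 'late')]}[a]
utility = {'on_time': 10, 'late': 5, 'crash': -100}.get
print(utility_based_choice('start', ['fast', 'slow'], outcomes, utility))  # slow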
Learning Agents
• Solution: build learning machines and then teach them (rather than instructing them).
• A learning agent has four conceptual components:
• Performance element: selects actions based on percepts; corresponds to the previous agent
programs.
• Learning element: introduces improvements; uses feedback from the critic on how the agent is
doing and determines improvements for the performance element.
• Critic: tells the learning element how well the agent is doing with respect to a fixed
performance standard.
• Problem generator: suggests actions that will lead to new and informative experiences; forces
exploration of new, stimulating scenarios.
• After the taxi makes a quick left turn across three lanes, the critic observes the shocking
language used by other drivers.
• From this experience, the learning element formulates a rule saying this was a bad action.
• The problem generator might identify certain areas of behavior in need of improvement, and
suggest trying out the brakes on different road surfaces under different conditions.
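The four components can be wired together in a sketch like this (a hypothetical arrangement; the component interfaces are invented for illustration):

import random

# Learning agent sketch: the performance element acts, the critic scores
# the result, the learning element improves the rules, and the problem
# generator occasionally proposes exploratory actions.
def learning_agent(performance_element, critic, learn, problem_generator,
                   p_explore=0.1):
    def program(percept):
        if random.random() < p_explore:
            action = problem_generator(percept)    # try something new
        else:
            action = performance_element(percept)  # act on current rules
        feedback = critic(percept, action)         # how well did we do?
        learn(feedback)                            # improve the performance element
        return action
    return program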