
UNIT 1 Artificial Intelligence and applications

Artificial Intelligence
Artificial intelligence is the science of making machines that can think like humans. It
can do things that are considered "smart."
AI technology can process large amounts of data in ways that humans cannot.
The goal of AI is to perform tasks such as recognizing patterns, making decisions, and
judging in the way humans do.
Examples: digital assistants, maps and navigation, facial recognition, self-driving cars.
Definitions of AI are organized into four categories:
 Systems that think like humans.
 Systems that act like humans.
 Systems that think rationally.
 Systems that act rationally.

1) Acting humanly: The Turing Test approach


 The Turing Test, proposed by Alan Turing (1950), was designed to provide a
satisfactory operational definition of intelligence.
 Turing defined intelligent behavior as the ability to achieve human-level performance
in all cognitive tasks, sufficient to fool an interrogator.
 Roughly speaking, the test he proposed is that the computer should be interrogated by
a human via a teletype, and passes the test if the interrogator cannot tell if there is a
computer or a human at the other end.

The computer would need to possess the following capabilities:


 Natural language processing to enable it to communicate successfully in English (or
some other human language);
 Knowledge representation to store information provided before or during the
interrogation (how the machine represents what it knows for the task at hand);
 Automated reasoning to use the stored information to answer questions and to draw
new conclusions;
 Machine learning to adapt to new circumstances and to detect and extrapolate
patterns.
 Turing's test deliberately avoided direct physical interaction between the interrogator
and the computer, because physical simulation of a person is unnecessary for
intelligence.
 However, the so-called total Turing Test includes a video signal so that the
interrogator can test the subject's perceptual abilities, as well as the opportunity for
the interrogator to pass physical objects "through the hatch.”
To pass the total Turing Test, the computer will need:
 Computer vision to perceive objects, and
 Robotics to move them about.

2) Thinking humanly: The cognitive modelling approach


 If we are going to say that a given program thinks like a human, we must have some
way of determining how humans think. We need to get inside the actual workings of
human minds.
 There are two ways to do this: through introspection—trying to catch our own
thoughts as they go by—or through psychological experiments.
 Once we have a sufficiently precise theory of the mind, it becomes possible to
express the theory as a computer program.
 If the program’s input/output and timing behavior matches human behavior, that is
evidence that some of the program's mechanisms may also be operating in humans.
There are two ways to understand how the human mind works:
Introspection
 It is the examination or observation of one’s own mental and emotional processes.
Psychological experiment
 A scientific procedure undertaken to make a discovery, test a hypothesis
(an assumption), or demonstrate a known fact.

3) Thinking rationally: The laws of thought approach
 In artificial intelligence, thinking rationally means thinking “rightly”: if the premises
are true, a rightly drawn conclusion must be true and cannot be false.
 An agent that always reasons correctly from the given circumstances and the given
information is said to follow the laws of thought approach.
 The Greek philosopher Aristotle was one of the first to attempt to codify “right
thinking” through structured arguments (syllogisms), which always yield a correct
conclusion when given correct premises.
 For example: “Socrates is a man; all men are mortal; therefore, Socrates is mortal.”
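
As a minimal sketch of this idea in Python (the facts and rules here are illustrative
assumptions, not drawn from any library), a program can mechanically apply a valid
inference rule to known facts until nothing new follows:

    # Forward chaining with a simple inference rule (modus ponens): if a
    # premise is a known fact and a rule "premise -> conclusion" exists,
    # the conclusion becomes a new fact.
    facts = {"Socrates is a man"}
    rules = [("Socrates is a man", "Socrates is mortal")]  # (premise, conclusion)

    changed = True
    while changed:                      # repeat until no rule adds a new fact
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # {'Socrates is a man', 'Socrates is mortal'}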
Two main obstacles exist to implementing this approach:
 The approach requires complete (100%) knowledge of the world.
 Too many computations are required, even for modest problems.

4) Acting rationally: The rational agent approach


 Acting rationally means acting so as to achieve one's goals, given one's beliefs.
 An agent is just something that perceives and acts. (This may be an unusual use of the
word, but you will get used to it.)
 In this approach, AI is viewed as the study and construction of rational agents.

INTELLIGENT AGENTS
 An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through effectors.
 A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth,
and other body parts for effectors.
 A robotic agent substitutes cameras and infrared range finders for the sensors and
various motors for the effectors.
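
As a minimal sketch, the agent abstraction can be written in Python as a
percept-to-action mapping inside a sense-decide-act loop. The names below (Agent,
act, percept, execute) are assumptions made for this illustration, not a standard API:

    class Agent:
        """Maps percepts (from sensors) to actions (carried out by effectors)."""
        def act(self, percept):
            raise NotImplementedError

    def run(agent, environment, steps=10):
        """The basic agent-environment loop: sense, decide, act."""
        for _ in range(steps):
            percept = environment.percept()   # sense the environment
            action = agent.act(percept)       # decide on an action
            environment.execute(action)       # act on the environment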
HOW AGENTS SHOULD ACT
 A rational agent is one that does the right thing. Obviously, this is better than doing
the wrong thing, but what does it mean? As a first approximation, we will say that the
right action is the one that will cause the agent to be most successful.
 That leaves us with the problem of deciding how and when to evaluate the agent's
success.

PERFORMANCE MEASURE
 We use the term performance measure for the how—the criteria that determine how
successful an agent is.
 Obviously, there is no one fixed measure suitable for all agents. We could ask the
agent for a subjective opinion of how happy it is with its own performance, but some
agents would be unable to answer, and others would delude themselves.
Goals of an agent:
 High performance
 Optimized results
 Rational actions

Agent Model (PEAS)


 P – Performance measure
 E – Environment
 A – Actuators
 S – Sensors
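
As a hedged illustration, here is a PEAS description for a self-driving car written as
a plain Python dictionary; the specific entries are assumptions for this example:

    peas_self_driving_car = {
        "Performance": ["safety", "legal driving", "short trip time", "comfort"],
        "Environment": ["roads", "traffic", "pedestrians", "weather"],
        "Actuators":   ["steering", "accelerator", "brake", "horn"],
        "Sensors":     ["cameras", "GPS", "speedometer", "odometer"],
    }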
Based on their degree of perceived intelligence and capability, types of agents in
artificial intelligence can be divided into:
 Simple Reflex Agents.
 Model-Based Agents.
 Goal-Based Agents.
 Utility-Based Agents.
 Learning Agents.

1. Simple reflex agent


 A simple reflex agent is the most basic of the intelligent agents. It performs
actions based on the current situation: when something happens in its environment,
the agent quickly scans its knowledge base for how to respond to the situation
at hand, based on predetermined rules.

 Simple reflex agents are the simplest agents. These agents make decisions on the
basis of the current percept and ignore the rest of the percept history.
 These agents succeed only in a fully observable environment.

 The simple reflex agent does not consider any part of the percept history during its
decision and action process.
 The simple reflex agent works on the condition-action rule, which maps the
current state to an action. For example, a room-cleaner agent cleans only if there is
dirt in the room, as in the sketch below.
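
A minimal sketch of such a room-cleaner (vacuum) agent in Python, assuming a percept
of the form (location, status); all names are illustrative:

    def simple_reflex_vacuum_agent(percept):
        """Condition-action rules applied to the current percept only."""
        location, status = percept       # no percept history is kept
        if status == "dirty":
            return "suck"
        elif location == "A":
            return "move_right"
        else:
            return "move_left"

    print(simple_reflex_vacuum_agent(("A", "dirty")))   # suck
    print(simple_reflex_vacuum_agent(("A", "clean")))   # move_right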
Problems for the simple reflex agent design approach:
 They have very limited intelligence
 They do not have knowledge of non-perceptual parts of the current state
 The set of condition-action rules is mostly too big to generate and to store.
 Not adaptive to changes in the environment.

2. Model-based reflex agent


 A model-based reflex agent is one that uses internal memory and a percept history to
create a model of the environment in which it's operating and make decisions based
on that model. The term percept means something that has been observed or detected
by the agent.

 The Model-based agent can work in a partially observable environment, and track the
situation.
 A model-based agent has two important factors:
 Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
 Internal State: It is a representation of the current state based on percept history.
 These agents have the model, "which is knowledge of the world" and based on the
model they perform actions.

 Updating the agent state requires information about:


 How the world evolves
 How the agent's action affects the world.
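
A minimal sketch of a model-based reflex agent in Python, assuming rules are
(condition, action) pairs where each condition is a function of the internal state;
all names are illustrative:

    class ModelBasedReflexAgent:
        def __init__(self, rules):
            self.state = {}           # internal state built from percept history
            self.rules = rules        # list of (condition, action) pairs
            self.last_action = None

        def act(self, percept):
            # Update the internal model: fold in the new percept and note
            # which action was taken last (how our actions affect the world).
            self.state.update(percept)
            self.state["last_action"] = self.last_action
            # Match the updated state against the condition-action rules.
            for condition, action in self.rules:
                if condition(self.state):
                    self.last_action = action
                    return action
            self.last_action = "no_op"
            return "no_op"

    # Example rule: clean whenever the modelled state reports dirt.
    agent = ModelBasedReflexAgent([(lambda s: s.get("status") == "dirty", "suck")])
    print(agent.act({"status": "dirty"}))  # suck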

3. Goal-based agents
 Knowledge of the current state of the environment is not always sufficient for an
agent to decide what to do.
 The agent needs to know its goal which describes desirable situations.
 Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
 They choose an action, so that they can achieve the goal.
 These agents may have to consider a long sequence of possible actions before
deciding whether the goal is achieved. Such consideration of different scenarios
is called searching and planning, and it is what makes an agent proactive (see the
sketch below).
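
A minimal sketch of the searching idea in Python: the agent plans an action sequence
that reaches the goal before acting. The breadth-first search and the toy corridor
world are illustrative assumptions:

    from collections import deque

    def plan(start, goal, successors):
        """Breadth-first search for an action sequence from start to goal."""
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, actions = frontier.popleft()
            if state == goal:
                return actions
            for action, next_state in successors(state):
                if next_state not in visited:
                    visited.add(next_state)
                    frontier.append((next_state, actions + [action]))
        return None  # the goal is unreachable

    def successors(room):
        """Toy corridor of rooms 0..3; the agent can move left or right."""
        moves = []
        if room < 3:
            moves.append(("right", room + 1))
        if room > 0:
            moves.append(("left", room - 1))
        return moves

    print(plan(0, 3, successors))  # ['right', 'right', 'right']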

4. Utility-based agents
 These agents are similar to the goal-based agent but provide an extra component of
utility measurement which makes them different by providing a measure of success at
a given state.
 Utility-based agents act based not only on goals but also on the best way to achieve them.
 The Utility-based agent is useful when there are multiple possible alternatives, and an
agent has to choose in order to perform the best action.
 The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
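
A minimal sketch of utility-based selection in Python: each resulting state is mapped
to a real number, and the agent picks the action with the highest utility. The route
names and scores are illustrative assumptions:

    def utility(state):
        """Map a state to a real number (a toy satisfaction score)."""
        scores = {"fast_route": 0.6, "scenic_route": 0.8, "toll_route": 0.3}
        return scores.get(state, 0.0)

    def choose_action(actions, result):
        """Pick the action whose resulting state has the highest utility."""
        return max(actions, key=lambda action: utility(result(action)))

    routes = {"highway": "fast_route", "coast": "scenic_route", "toll": "toll_route"}
    print(choose_action(routes, lambda action: routes[action]))  # coast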

5. Learning Agents
 A learning agent in AI is the type of agent that can learn from its past experiences;
it has learning capabilities.
 It starts to act with basic knowledge and is then able to act and adapt automatically
through learning.
 A learning agent has mainly four conceptual components, which are:
 Learning element: It is responsible for making improvements by learning from the
environment.
 Critic: The learning element takes feedback from the critic, which describes how well
the agent is doing with respect to a fixed performance standard.
 Performance element: It is responsible for selecting external actions.
 Problem generator: This component is responsible for suggesting actions that will
lead to new and informative experiences.
Hence, learning agents are able to learn, analyze their performance, and look for new
ways to improve it.
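
A minimal sketch in Python of how the four components fit together; the critic is
represented here simply by a numeric feedback value, and all names are illustrative
assumptions:

    class LearningAgent:
        def __init__(self):
            self.knowledge = {}              # starts with basic knowledge

        def performance_element(self, percept):
            # Select an external action from what has been learned so far,
            # falling back on the problem generator to explore.
            return self.knowledge.get(percept, self.problem_generator())

        def problem_generator(self):
            # Suggest an action that leads to a new, informative experience.
            return "try_random_action"

        def learning_element(self, percept, action, feedback):
            # Improve future behaviour using the critic's feedback, which
            # scores the action against a fixed performance standard.
            if feedback > 0:
                self.knowledge[percept] = action

    # One learning step: the critic rewards "suck" after seeing dirt,
    # so the agent selects it the next time the same percept occurs.
    agent = LearningAgent()
    agent.learning_element("dirt_seen", "suck", feedback=1.0)
    print(agent.performance_element("dirt_seen"))  # suck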

Properties of the Task Environment


Types of Environments in AI
An environment in artificial intelligence is the surroundings of the agent. The agent takes
input from the environment through sensors and delivers output to the environment
through actuators.
There are several types of environments:

Fully Observable vs Partially Observable


Deterministic vs Stochastic
Competitive vs Collaborative
Single-agent vs Multi-agent
Static vs Dynamic
Discrete vs Continuous
Episodic vs Sequential
Known vs Unknown

1. Fully Observable vs Partially Observable


When an agent's sensors can access the complete state of the environment at each point
in time, the environment is said to be fully observable; otherwise it is partially observable.
Operating in a fully observable environment is easy, as there is no need to keep track of the
history of the surroundings.
An environment is called unobservable when the agent has no sensors at all.
Examples:
Chess – the board is fully observable, and so are the opponent’s moves.
Driving – the environment is partially observable because what’s around the corner is not
known.

2. Deterministic vs Stochastic
When the agent's current state and chosen action completely determine the next state of
the environment, the environment is said to be deterministic.
A stochastic environment is random in nature: the next state is not unique and cannot be
completely determined by the agent.

Examples:
Chess – there are only a limited number of possible moves for a piece in the current state,
and these moves can be determined.
Self-driving cars – the outcomes of a self-driving car's actions are not unique; they vary
from time to time.

3. Competitive vs Collaborative
An agent is said to be in a competitive environment when it competes against another agent to
optimize the output.
The game of chess is competitive as the agents compete with each other to win the game which
is the output.
An agent is said to be in a collaborative environment when multiple agents cooperate to
produce the desired output.
When multiple self-driving cars are on the road, they cooperate with each other to avoid
collisions and reach their destinations, which is the desired output.

4. Single-agent vs Multi-agent
An environment consisting of only one agent is said to be a single-agent environment.
A person left alone in a maze is an example of the single-agent system.
An environment involving more than one agent is a multi-agent environment.
The game of football is multi-agent as it involves 11 players in each team.

5. Dynamic vs Static
An environment that keeps changing while the agent is deliberating or acting is said to be
dynamic.
A roller coaster ride is dynamic as it is set in motion and the environment keeps changing
every instant.
An idle environment with no change in its state is called a static environment.
An empty house is static as there’s no change in the surroundings when an agent enters.

6. Discrete vs Continuous
If an environment consists of a finite number of actions that can be performed in it to
obtain the output, it is said to be a discrete environment.

The game of chess is discrete as it has only a finite number of moves. The number of moves
might vary with every game, but still, it’s finite.
An environment in which the actions cannot be counted, i.e., one that is not discrete, is
said to be continuous.
Self-driving cars are an example of continuous environments as their actions are driving,
parking, etc. which cannot be numbered.

7. Episodic vs Sequential
In an Episodic task environment, each of the agent’s actions is divided into atomic incidents or
episodes. There is no dependency between current and previous incidents. In each incident, an
agent receives input from the environment and then performs the corresponding action.
Example: Consider a pick-and-place robot used to detect defective parts on a conveyor
belt. Each time, the robot (agent) makes its decision on the current part alone, i.e., there
is no dependency between current and previous decisions.
In a sequential environment, previous decisions can affect all future decisions. The next
action of the agent depends on what actions it has taken previously and what actions it is
supposed to take in the future.
Example:
Checkers- Where the previous move can affect all the following moves.

8. Known vs Unknown
In a known environment, the outcomes of all probable actions are given. In an unknown
environment, by contrast, the agent has to gain knowledge about how the environment
works before it can make good decisions.
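
As a closing sketch, the classifications above can be recorded for the two recurring
examples as plain Python dictionaries; the property values follow the examples in this
section and are indicative only:

    chess = {
        "observable": "fully",     "deterministic": True,  "agents": "multi",
        "competitive": True,       "static": True,         "discrete": True,
        "episodic": False,
    }

    self_driving_car = {
        "observable": "partially", "deterministic": False, "agents": "multi",
        "competitive": False,      "static": False,        "discrete": False,
        "episodic": False,
    }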
