Artificial Intelligence
UNIT – 1
Artificial intelligence is the science of making machines that can think like humans. It
can do things that are considered "smart."
AI technology can process large amounts of data in ways that humans cannot.
The goal of AI is to be able to do things such as recognize patterns, make decisions, and exercise judgment the way humans do.
Example: Digital assistants, Maps & Navigation, Facial Recognition, Self-driving cars.
Approaches to AI are organized into four categories:
Systems that think like humans.
Systems that act like humans.
Systems that think rationally.
Systems that act rationally.
3) Thinking rationally:
In Artificial Intelligence, thinking rationally means thinking "rightly": if the premises are true, the conclusion must be true and cannot be false.
If an agent always thinks rightly in given circumstances with a given amount of information, we call this the "laws of thought" approach.
The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking" through structured argument. His syllogisms always yield a correct conclusion when given correct premises.
For example: "Socrates is a man; all men are mortal; therefore, Socrates is mortal."
Two main obstacles exist to implementing this approach:
It requires 100% certain knowledge.
Too many computations are required.
INTELLIGENT AGENTS
An agent is anything that can be viewed as perceiving its environment through
sensors and acting upon that environment through effectors.
A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth,
and other body parts for effectors.
A robotic agent substitutes cameras and infrared range finders for the sensors and
various motors for the effectors.
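A minimal sketch of this perceive-act loop in Python. The environment, percept strings, and action names below are assumptions made purely for illustration:

# An agent maps what it perceives through its sensors to an
# action carried out by its effectors.
def agent_program(percept):
    # Trivial mapping from percept to action, for illustration.
    return "move" if percept == "path-clear" else "stop"

def run(agent_program, percepts):
    # The agent perceives its environment and acts upon it in a loop.
    return [agent_program(p) for p in percepts]

print(run(agent_program, ["path-clear", "obstacle", "path-clear"]))
# -> ['move', 'stop', 'move']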
HOW AGENTS SHOULD ACT
A rational agent is one that does the right thing. Obviously, this is better than doing
the wrong thing, but what does it mean? As a first approximation, we will say that the
right action is the one that will cause the agent to be most successful.
That leaves us with the problem of deciding how and when to evaluate the agent's
success.
PERFORMANCE MEASURE
We use the term performance measure for the "how": the criteria that determine how successful an agent is.
Obviously, there is not one fixed measure suitable for all agents. We could ask the agent for a subjective opinion of how happy it is with its own performance, but some agents would be unable to answer, and others would delude themselves.
Goals of Agent:
- High performance
- Optimized results
- Rational actions
1. Simple reflex agents
The simple reflex agents are the simplest agents. These agents take decisions on the basis of the current percept and ignore the rest of the percept history.
These agents succeed only in a fully observable environment.
The simple reflex agent does not consider any part of the percept history during its decision and action process.
The simple reflex agent works on the condition-action rule, which means it maps the current state directly to an action. For example, a room-cleaner agent acts only if there is dirt in the room.
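A minimal sketch of this rule in Python. The two-location vacuum world, the location names "A"/"B", and the percept format (location, status) are assumptions made for illustration:

# A simple reflex agent for a two-location vacuum world.
# It looks only at the current percept and applies a
# condition-action rule; it keeps no percept history.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"          # condition: dirt -> action: suck
    elif location == "A":
        return "Right"         # move toward the other square
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))  # -> Left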
Problems with the simple reflex agent design approach:
They have very limited intelligence.
They have no knowledge of the non-perceptual parts of the current state.
Their rule tables are mostly too big to generate and to store.
They are not adaptive to changes in the environment.
2. Model-based agents
The model-based agent can work in a partially observable environment and track the situation.
A model-based agent has two important factors:
Model: It is knowledge about "how things happen in the world," so it is called a
Model-based agent.
Internal State: It is a representation of the current state based on percept history.
These agents have the model, i.e., knowledge of the world, and they perform actions based on that model.
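A minimal sketch of these two factors in Python. The state representation, the update function, and the rule below are placeholder assumptions for illustration:

# Sketch of a model-based reflex agent: it keeps an internal state,
# updates it using a model of how the world works, and only then
# chooses an action from condition-action rules over that state.
class ModelBasedAgent:
    def __init__(self, update_state, rules):
        self.state = {}                   # internal state: best guess at the world
        self.update_state = update_state  # the model: how percepts change the state
        self.rules = rules                # condition-action rules over the state

    def act(self, percept):
        # Fold the new percept into the internal state, so the agent
        # can act sensibly even when the world is partially observable.
        self.state = self.update_state(self.state, percept)
        for condition, action in self.rules:
            if condition(self.state):
                return action
        return "NoOp"

# Example: remember that dirt was seen, even if the sensor misses it later.
agent = ModelBasedAgent(
    update_state=lambda s, p: {**s, "dirty": s.get("dirty", False) or p == "Dirty"},
    rules=[(lambda s: s["dirty"], "Suck")],
)
print(agent.act("Dirty"))  # -> Suck
print(agent.act("Clean"))  # -> Suck (internal state still records the dirt)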
UNIT 1 Artificial Intelligence and applications
3. Goal-based agents
Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
The agent needs to know its goal which describes desirable situations.
Goal-based agents expand the capabilities of the model-based agent by having the
"goal" information.
They choose their actions so as to achieve the goal.
These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.
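A minimal sketch of such searching in Python: breadth-first search over a toy state graph (the graph and the action names are assumptions for illustration), returning a sequence of actions that reaches the goal:

from collections import deque

# Breadth-first search: consider sequences of actions and return
# the first one found that reaches the goal state.
def search_plan(start, goal, successors):
    frontier = deque([(start, [])])   # (state, actions taken so far)
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None  # goal unreachable

# Toy state graph: from A the agent can go to B, from B to C or back to A.
graph = {"A": [("go-B", "B")], "B": [("go-C", "C"), ("go-A", "A")], "C": []}
print(search_plan("A", "C", lambda s: graph[s]))  # -> ['go-B', 'go-C']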
4. Utility-based agents
These agents are similar to goal-based agents but add an extra component, utility measurement, which provides a measure of success at a given state.
The utility-based agent acts based not only on goals but also on the best way to achieve the goal.
The utility-based agent is useful when there are multiple possible alternatives and the agent has to choose among them in order to perform the best action.
The utility function maps each state to a real number to check how efficiently each
action achieves the goals.
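A minimal sketch in Python of choosing among multiple alternatives by utility. The states, actions, outcome mapping, and utility numbers are all assumptions for illustration:

# The utility function maps each state to a real number; the agent
# picks the action whose resulting state has the highest utility.
utility = {"fast-route": 0.9, "slow-route": 0.4, "blocked-road": 0.0}

# Assumed deterministic outcome of each action, for illustration.
outcomes = {"take-highway": "fast-route",
            "take-side-street": "slow-route",
            "take-closed-road": "blocked-road"}

def best_action(actions):
    return max(actions, key=lambda a: utility[outcomes[a]])

print(best_action(["take-highway", "take-side-street", "take-closed-road"]))
# -> take-highway (utility 0.9 beats 0.4 and 0.0)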
5. Learning Agents
A learning agent in AI is a type of agent that can learn from its past experiences; it has learning capabilities.
It starts acting with basic knowledge and then adapts automatically through learning.
A learning agent has mainly four conceptual components, which are:
Learning element: It is responsible for making improvements by learning from the environment.
Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
Performance element: It is responsible for selecting the external action.
Problem generator: This component is responsible for suggesting actions that will
lead to new and informative experiences.
Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
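A minimal skeleton in Python wiring the four components together. Everything inside each component (the performance standard, the rule update, and the exploration rate) is a placeholder assumption:

import random

class LearningAgent:
    def __init__(self):
        self.rules = {}          # knowledge used by the performance element
        self.standard = 1.0      # fixed performance standard used by the critic

    def critic(self, reward):
        # Feedback: how well the agent is doing against the fixed standard.
        return reward - self.standard

    def learning_element(self, percept, feedback):
        # Improves the rules using the critic's feedback.
        if feedback < 0:
            self.rules[percept] = "try-something-else"

    def performance_element(self, percept):
        # Selects the external action from the current rules.
        return self.rules.get(percept, "default-action")

    def problem_generator(self):
        # Occasionally suggests actions that lead to new, informative experiences.
        return random.random() < 0.1

    def step(self, percept, reward):
        self.learning_element(percept, self.critic(reward))
        if self.problem_generator():
            return "explore"
        return self.performance_element(percept)

agent = LearningAgent()
print(agent.step("dirty-floor", reward=0.0))  # usually 'try-something-else'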
TYPES OF ENVIRONMENTS
1. Fully Observable vs Partially Observable
An environment is fully observable when the agent's sensors give it access to the complete state of the environment at each point in time; it is partially observable when parts of the state are hidden from the agent's sensors.
2. Deterministic vs Stochastic
When the agent's current state and selected action completely determine the next state of the environment, the environment is said to be deterministic.
A stochastic environment is random in nature: the next state is not unique and cannot be completely determined by the agent.
Examples:
Chess – there are only a limited number of possible moves for a piece in the current state, and these moves can be determined exactly.
Self-driving cars – the outcomes of a self-driving car's actions are not unique; they vary from time to time.
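A minimal sketch of the contrast in Python, using an assumed toy state and transition functions:

import random

def deterministic_step(state, action):
    # The same (state, action) pair always yields the same next state.
    return state + action

def stochastic_step(state, action):
    # The same (state, action) pair can yield different next states.
    return state + action + random.choice([-1, 0, 1])

print(deterministic_step(0, 2), deterministic_step(0, 2))  # always: 2 2
print(stochastic_step(0, 2), stochastic_step(0, 2))        # e.g.: 1 3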
3. Competitive vs Collaborative
An agent is said to be in a competitive environment when it competes against another agent to
optimize the output.
The game of chess is competitive as the agents compete with each other to win the game which
is the output.
An agent is said to be in a collaborative environment when multiple agents cooperate to
produce the desired output.
When multiple self-driving cars are found on the roads, they cooperate with each other to avoid
collisions and reach their destination which is the output desired.
4. Single-agent vs Multi-agent
An environment consisting of only one agent is said to be a single-agent environment.
A person left alone in a maze is an example of a single-agent system.
An environment involving more than one agent is a multi-agent environment.
The game of football is multi-agent as it involves 11 players in each team.
5. Dynamic vs Static
An environment that keeps changing while the agent is deciding on or performing an action is said to be dynamic.
A roller coaster ride is dynamic as it is set in motion and the environment keeps changing
every instant.
An idle environment with no change in its state is called a static environment.
An empty house is static as there’s no change in the surroundings when an agent enters.
6. Discrete vs Continuous
If an environment consists of a finite number of actions that can be performed in it to obtain the output, it is said to be a discrete environment.
UNIT 1 Artificial Intelligence and applications
The game of chess is discrete as it has only a finite number of moves. The number of moves
might vary with every game, but still, it’s finite.
An environment in which the possible actions cannot be enumerated, i.e., one that is not discrete, is said to be continuous.
Self-driving cars are an example of continuous environments, as their actions (driving, parking, etc.) cannot be enumerated.
7. Episodic vs Sequential
In an Episodic task environment, each of the agent’s actions is divided into atomic incidents or
episodes. There is no dependency between current and previous incidents. In each incident, an
agent receives input from the environment and then performs the corresponding action.
Example: Consider a pick-and-place robot, which is used to detect defective parts on a conveyor belt. Each time, the robot (agent) makes its decision based on the current part alone, i.e., there is no dependency between current and previous decisions.
In a sequential environment, previous decisions can affect all future decisions. The next action of the agent depends on what actions it has taken previously and what action it is supposed to take in the future.
Example:
Checkers – where a previous move can affect all the following moves.
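A small sketch of the contrast in Python. The quality threshold and the move history below are assumptions for illustration:

# Episodic: each decision depends only on the current input.
def episodic_decision(part_quality):
    return "reject" if part_quality < 0.5 else "accept"

# Sequential: the decision also depends on what happened before.
def sequential_decision(move, history):
    history.append(move)              # earlier moves shape later choices
    return f"move-{len(history)}: responding to {history[-1]}"

print(episodic_decision(0.3))              # -> reject
h = []
print(sequential_decision("e2-e4", h))     # -> move-1: responding to e2-e4
print(sequential_decision("d2-d4", h))     # -> move-2: responding to d2-d4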
8. Known vs Unknown
In a known environment, the outcomes of all probable actions are given. Obviously, in the case of an unknown environment, the agent has to gain knowledge about how the environment works in order to make a decision.