
Unit-1

Introduction: What is Artificial Intelligence; The AI Problems; The Underlying Assumption; What is an AI
Technique?; Foundations of AI and History of AI; Intelligent agents: Agents and Environments; the concept of
rationality; the nature of environments; structure of agents; problem-solving agents; problem formulation.

Introduction to Artificial Intelligence


➢ Artificial Intelligence is concerned with the design of intelligence in an artificial device. The term was
coined by John McCarthy in 1956.
➢ Intelligence is the ability to acquire, understand and apply knowledge to achieve goals in the
world.
➢ Artificial intelligence is a machine’s ability to perform the cognitive (relating to the mental process
involved in knowing, learning, and understanding things) functions we usually associate with human
minds.
➢ Artificial intelligence, the ability of a digital computer or computer-controlled robot to perform tasks
commonly associated with intelligent beings.
➢ AI is unique, sharing borders with Mathematics, Computer Science, Philosophy, Psychology, Biology,
Cognitive Science and many others.
➢ Artificial Intelligence refers to the simulation of human intelligence in machines that are programmed
to think and act like humans. It involves the development of algorithms and computer programs that
can perform tasks that typically require human intelligence such as visual perception, speech
recognition, decision-making, and language translation.
➢ Although there is no clear definition of AI, or even of intelligence, it can be described as an attempt to
build machines that, like humans, can think and act, and that are able to learn and use knowledge to
solve problems on their own.

A SIMPLE DEFINITION
Artificial Intelligence (AI) is a branch of computer science that deals with the creation of intelligent
systems which can reason, learn, and act autonomously.

The AI Problems
What then are some of the problems contained within AI? Much of the early work in the field focused on
formal tasks, such as game playing and theorem proving. Samuel wrote a checkers-playing program that not
only played games with opponents but also used its experience at those games to improve its later performance.
Chess also received a good deal of attention. The Logic Theorist was an early attempt to prove mathematical
theorems. It was able to prove several theorems from the first chapter of Whitehead and Russell's Principia
Mathematica. Gelernter's theorem prover explored another area of mathematics: geometry. Game playing and
theorem proving share the property that people who do them well are considered to be displaying intelligence.
Despite this, it appeared initially that computers could perform well at those tasks simply by being fast at
exploring a large number of solution paths and then selecting the best one. It was thought that this process
required very little knowledge and could therefore be programmed easily. As we will see later, this assumption
turned out to be false since no computer is fast enough to overcome the combinatorial explosion generated by
most problems.

Another early foray into AI focused on the sort of problem solving that we do every day when we decide how
to get to work in the morning, often called commonsense reasoning. It includes reasoning about physical
objects and their relationships to each other (e.g., an object can be in only one place at a time), as well as
reasoning about actions and their consequences (e.g., if you let go of something, it will fall to the floor and
maybe break). To investigate this sort of reasoning, Newell, Shaw, and Simon built the General Problem Solver
(GPS), which they applied to several commonsense tasks as well as to the problem of performing symbolic

Prasad.S.Merwade Page | 1
manipulations of logical expressions. Again, no attempt was made to create a program with a large amount of
knowledge about a particular problem domain. Only simple tasks were selected.

As AI research progressed and techniques for handling larger amounts of world knowledge were developed,
some progress was made on the tasks just described and new tasks could reasonably be attempted. These
include perception (vision and speech), natural language understanding, and problem solving in specialized
domains such as medical diagnosis and chemical analysis.

A person who knows how to perform tasks from several of the categories shown in the figure learns the
necessary skills in a standard order. First, perceptual, linguistic, and commonsense skills are learned. Later
(and of course for some people, never) expert skills such as engineering, medicine, or finance are acquired. It
might seem to make sense then that the earlier skills are easier and thus more amenable to computerized
duplication than are the later, more specialized ones. For this reason, much of the initial Al work was
concentrated in those early areas. But it turns out that this naive assumption is not right. Although expert skills
require knowledge that many of us do not have, they often require much less knowledge than do the more
mundane (simpler) skills, and that knowledge is usually easier to represent and deal with inside programs.

Or

Early AI problems focused on formal tasks, such as game playing and proving mathematical theorems.


• Another early effort focused on the sort of everyday problem solving called commonsense reasoning.
• Later tasks include perception (vision and speech), natural language understanding, and problem solving
in specialized domains such as medical diagnosis and chemical analysis.
To discuss today's AI problems and solution techniques, it is important to answer the following questions:
➢ What are the underlying assumptions about intelligence?
➢ What kinds of techniques will be useful for solving AI problems?
➢ At what level can human intelligence be modelled?
➢ How will we know when an intelligent program has been built?

The Underlying Assumption


➢ The underlying assumption of AI is the physical symbol system hypothesis.
➢ A physical symbol system consists of entities called symbols, which can be combined to form symbol structures.
➢ A physical symbol system is a machine that produces, through time, an evolving collection of symbol structures.
➢ Computers provide the perfect medium for experimenting with this hypothesis, since they can be programmed
to manipulate physical symbols.
➢ Some evidence, for example from visual perception, suggests the influence of sub-symbolic processes.
➢ Sub-symbolic models are beginning to challenge symbolic ones at such low-level tasks.
➢ The physical symbol system hypothesis is important because it is a significant theory of the nature of human
intelligence, and so of great interest to psychologists.
➢ It also implies that it is possible to build programs that can perform the intelligent tasks performed by people.
AI Techniques:
An AI technique is a method that exploits knowledge. Knowledge has some awkward properties,
including:
➢ It is voluminous.
➢ It is hard to characterize accurately.
➢ It is constantly changing.
➢ It differs from data by being organized in a way that corresponds to how it will be used.
So an AI technique is a method that exploits knowledge that is represented so that:
➢ The knowledge captures generalizations; otherwise, unreasonable amounts of memory and updating would
be required, and we would call the property 'data' rather than knowledge.
➢ The bulk of the data can be acquired automatically, and the knowledge can be understood by the people
who provide it.
➢ It can easily be modified to correct errors and to reflect changes in the world.
➢ It can be used in a great many situations even if it is not totally accurate or complete.

Foundation of AI and History of AI


AI has come a long way since its inception in the mid-20th century. Here’s a brief history of artificial
intelligence.
Mid-20th century
The origins of artificial intelligence may be dated to the middle of the 20th century, when computer scientists
started to create algorithms and software that could carry out tasks that ordinarily need human intelligence,
like problem-solving, pattern recognition and judgment.
One of the earliest pioneers (a person who is the first to study some new subject, or use or develop a new
technique) of AI was Alan Turing, who proposed the concept of a machine that could simulate any human
intelligence task, which is now known as the Turing Test.
1956 Dartmouth conference

The 1956 Dartmouth conference gathered academics from various professions to examine the prospect of
constructing robots that can “think.” The conference officially introduced the field of artificial intelligence.
During this time, rule-based systems and symbolic thinking were the main topics of AI study.

1960s and 1970s


In the 1960s and 1970s, the focus of AI research shifted to developing expert systems designed to mimic the
decisions made by human specialists in specific fields. These methods were frequently employed in industries
such as engineering, finance and medicine.
1980s
However, when the drawbacks of rule-based systems became evident in the 1980s, AI research began to focus
on machine learning, which is a branch of the discipline that employs statistical methods to let computers learn
from data. As a result, neural networks were created and modelled after the human brain’s structure and
operation.
1990s and 2000s
AI research made substantial strides in the 1990s in robotics, computer vision and natural language processing.
In the early 2000s, advances in speech recognition, image recognition and natural language processing were
made possible by the advent of deep learning — a branch of machine learning that uses deep neural networks.

Intelligent agents
An AI agent is a computer program that can perform tasks autonomously. AI agents use sensors to perceive
their surroundings and actuators to act on those perceptions. They can make decisions based on their
environment, inputs, and predefined goals.
Or
An agent in artificial intelligence (AI) is essentially a smart system or a program that can perceive its
surroundings, decide on the best action to take, and act to achieve specific goals. These agents can work on
their own, making decisions and learning from their interactions. They’re key to making our technology
smarter, helping with everything from simple tasks at home to solving complex problems in various industries.
Or
An agent can be anything that perceives its environment through sensors and acts upon that environment through
actuators. An agent runs in a cycle of perceiving, thinking, and acting.

Sensor: Sensor is a device which detects the change in the environment and sends the information to other
electronic devices. An agent observes its environment through sensors.

Actuators: Actuators are the component of machines that converts energy into motion. The actuators are only
responsible for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.
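The perceive-think-act cycle described above can be sketched in code. This is a minimal illustrative sketch, not a real agent framework; the environment dictionary and the rule inside `think` are invented for the example:

```python
# Minimal sketch of an agent's perceive-think-act cycle.
# The environment, sensor, and actuator below are hypothetical stand-ins.

def sense(environment):
    """Sensor: read the current state of the environment."""
    return environment["dirt_present"]

def think(percept):
    """Decide on an action from the current percept."""
    return "suck" if percept else "move"

def act(environment, action):
    """Actuator: apply the chosen action back to the environment."""
    if action == "suck":
        environment["dirt_present"] = False
    return action

env = {"dirt_present": True}
percept = sense(env)          # perceive
action = think(percept)       # think
act(env, action)              # act: the dirt is now gone
```

After one pass through the cycle, `env["dirt_present"]` is `False`, and the next percept would lead the agent to choose "move" instead.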

Types of AI Agents

Agents can be grouped into five classes based on their degree of perceived intelligence and capability. All of
these agents can improve their performance and generate better actions over time. These are given below:

o Simple Reflex Agent

o Model-based reflex agent
o Goal-based agents
o Utility-based agent
o Learning agent

1. Simple Reflex agent:

o The Simple reflex agents are the simplest agents. These agents take decisions on the basis of the current
percepts and ignore the rest of the percept history.
o These agents only succeed in the fully observable environment.
o The Simple reflex agent does not consider any part of the percept history during its decision and action
process.
o The Simple reflex agent works on Condition-action rule, which means it maps the current state to
action. Such as a Room Cleaner agent, it works only if there is dirt in the room.
o Problems for the simple reflex agent design approach:
o They have very limited intelligence
o They do not have knowledge of non-perceptual parts of the current state
o Mostly too big to generate and to store.
o Not adaptive to changes in the environment.
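The condition-action rule above can be sketched for the two-location Room Cleaner world mentioned in the text. The percept format and the rule table are assumptions made for illustration:

```python
# Simple reflex agent for a two-location vacuum world (illustrative
# sketch; the percept format and rule table are assumptions).

def simple_reflex_agent(percept):
    """Map the current percept directly to an action, ignoring history."""
    location, status = percept
    if status == "dirty":
        return "suck"                 # condition-action rule: dirt -> suck
    return "right" if location == "A" else "left"

print(simple_reflex_agent(("A", "dirty")))  # suck
print(simple_reflex_agent(("A", "clean")))  # right
```

Note that the agent keeps no state at all: given the same percept it always returns the same action, which is exactly why it fails outside fully observable environments.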

2. Model-based reflex agent

o The Model-based agent can work in a partially observable environment, and track the situation.
o A model-based agent has two important factors:
o Model: It is knowledge about "how things happen in the world," so it is called a Model-based
agent.
o Internal State: It is a representation of the current state based on percept history.
o These agents have the model, "which is knowledge of the world" and based on the model they perform
actions.
o Updating the agent state requires information about:
a. How the world evolves
b. How the agent's action affects the world.
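The model and internal state described above can be sketched as follows. The tiny world model (room statuses persist until changed) and the room names are deliberate simplifying assumptions:

```python
# Model-based reflex agent sketch: an internal state tracks the last
# known status of each room, so the agent can act under partial
# observability. The two-room world model is an invented assumption.

class ModelBasedAgent:
    def __init__(self):
        self.state = {}  # internal state: last known status per room

    def update_state(self, percept):
        """Model of how the world evolves: statuses persist until changed."""
        location, status = percept
        self.state[location] = status

    def choose_action(self, percept):
        self.update_state(percept)
        location, status = percept
        if status == "dirty":
            return "suck"
        # Prefer a room not yet known to be clean.
        for room in ("A", "B"):
            if self.state.get(room) != "clean":
                return f"go_{room}"
        return "idle"

agent = ModelBasedAgent()
print(agent.choose_action(("A", "dirty")))  # suck
print(agent.choose_action(("A", "clean")))  # go_B
```

Unlike the simple reflex agent, the same percept can produce different actions here, because the internal state remembers which rooms were already seen clean.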

3. Goal-based agents

o Knowledge of the current state of the environment is not always sufficient for an agent to decide what
to do.
o The agent needs to know its goal which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by having the "goal" information.
o They choose an action, so that they can achieve the goal.
o These agents may have to consider a long sequence of possible actions before deciding whether the
goal is achieved or not. Such consideration of different scenarios is called searching and planning,
which makes an agent proactive.
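The searching just mentioned can be sketched with a breadth-first search over action sequences; the tiny state graph below is an invented example, not part of any standard agent API:

```python
# Goal-based agent sketch: search ahead through action sequences until
# one reaches the goal state. The state graph is invented for illustration.
from collections import deque

GRAPH = {"start": ["a", "b"], "a": ["goal"], "b": [], "goal": []}

def search_for_goal(start, goal):
    """Breadth-first search for a state sequence reaching the goal."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path                    # goal achieved: return the plan
        for nxt in GRAPH[path[-1]]:
            frontier.append(path + [nxt])  # extend the sequence
    return None                            # goal unreachable

print(search_for_goal("start", "goal"))  # ['start', 'a', 'goal']
```

The agent commits to an action only after finding a whole sequence that reaches the goal, which is what distinguishes it from the reflex agents above.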

4. Utility-based agents
o These agents are similar to the goal-based agent but provide an extra component of utility measurement
which makes them different by providing a measure of success at a given state.
o Utility-based agents act based not only on goals but also on the best way to achieve the goal.
o The Utility-based agent is useful when there are multiple possible alternatives, and an agent has to
choose in order to perform the best action.
o The utility function maps each state to a real number to check how efficiently each action achieves the
goals.
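The utility function mapping states to real numbers can be sketched as below. The state encoding and the particular weights in the utility function are assumptions made up for the example:

```python
# Utility-based agent sketch: the utility function maps each candidate
# next state to a real number, and the agent picks the action whose
# outcome scores highest. States and weights are illustrative only.

def utility(state):
    """Higher is better: fewer dirty rooms, less battery used."""
    return -2 * state["dirty_rooms"] - state["battery_used"]

def transition(state, action):
    """Assumed model of what each action does to the state."""
    nxt = dict(state)
    if action == "clean":
        nxt["dirty_rooms"] = max(0, nxt["dirty_rooms"] - 1)
        nxt["battery_used"] += 1
    elif action == "move":
        nxt["battery_used"] += 1
    return nxt

def best_action(current, actions, model):
    """Choose the action leading to the highest-utility next state."""
    return max(actions, key=lambda a: utility(model(current, a)))

start = {"dirty_rooms": 2, "battery_used": 0}
print(best_action(start, ["clean", "move", "idle"], transition))  # clean
```

With these weights, cleaning (utility -3) beats idling (-4) and moving (-5), so among the alternatives the agent selects the one that best trades dirt removed against battery spent.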

5. Learning Agents
o A learning agent in AI is the type of agent which can learn from its past experiences; that is, it has
learning capabilities.
o It starts with basic knowledge and is then able to act and adapt automatically through learning.
o A learning agent has mainly four conceptual components, which are:
a. Learning element: It is responsible for making improvements by learning from environment
b. Critic: The learning element takes feedback from the critic, which describes how well the agent is
doing with respect to a fixed performance standard.
c. Performance element: It is responsible for selecting external action
d. Problem generator: This component is responsible for suggesting actions that will lead to new
and informative experiences.
o Hence, learning agents are able to learn, analyze performance, and look for new ways to improve the
performance.
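The four components above can be wired together in a small sketch. The reward scheme, the rule table, and the percepts are all invented for illustration:

```python
# Learning-agent sketch with the four components named above: a
# performance element that acts, a critic that scores the action against
# a fixed standard, a learning element that updates the rule table, and
# a problem generator that proposes exploratory actions. Illustrative only.
import random

class LearningAgent:
    def __init__(self):
        self.rules = {}  # learned percept -> action table

    def performance_element(self, percept):
        """Select an external action, falling back to exploration."""
        return self.rules.get(percept, self.problem_generator())

    def problem_generator(self):
        """Suggest an exploratory action when no learned rule applies."""
        return random.choice(["suck", "left", "right"])

    def critic(self, percept, action):
        """Fixed performance standard: sucking on dirt is rewarded."""
        return 1 if (percept == "dirty" and action == "suck") else 0

    def learning_element(self, percept, action, reward):
        """Improve the rule table from the critic's feedback."""
        if reward > 0:
            self.rules[percept] = action  # keep actions that paid off

agent = LearningAgent()
agent.learning_element("dirty", "suck", agent.critic("dirty", "suck"))
print(agent.performance_element("dirty"))  # suck
```

After one rewarded experience the agent no longer needs to explore on the "dirty" percept; the learned rule takes over.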

Agent Environment in AI
An environment is everything surrounding the agent, but it is not a part of the agent itself. An
environment can be referred to as the situation in which an agent is present.

The nature of environments
To design an agent, the first step should always be defining the task environment as completely as possible.
Task environment is the description of Performance, Environment, Actuators and Sensors of an agent. Let us
take an example task environment description of an automated taxi.

What are the features of an Environment? (Properties of the task environment)


According to Russell and Norvig, an environment might have a variety of characteristics from the perspective
of an agent:

1. Fully observable vs Partially Observable.


2. Static vs Dynamic.
3. Discrete vs Continuous.
4. Deterministic vs Stochastic.
5. Single-agent vs multi-agent.
6. Episodic vs sequential.
7. Known vs Unknown.
8. Accessible vs Inaccessible.

1. Fully observable vs Partially Observable:

• A fully observable environment is one in which an agent sensor may perceive or access the entire state
of an environment at any given time; otherwise, it is partially observable.
• It's simple to create a completely observable environment because there's no need to keep track of the
world's past.
• An environment in which the agent has no sensors at all is called unobservable.

2. Deterministic vs Stochastic:

• If an agent's current state and selected action can perfectly predict the forthcoming state of the
environment, then such environment is a deterministic environment.
• An agent cannot entirely control a stochastic environment because it is unpredictable in nature.
• Agents do not need to be concerned about uncertainty in a deterministic, fully observable world.

3. Episodic vs Sequential:

• In an episodic environment, there are a succession of one-shot actions that just require the present
percept.
• In a Sequential context, however, an agent must remember previous acts in order to decide the next
best action.

4. Single-agent vs Multi-agent:

• The term "single agent environment" refers to an environment in which just one agent is present and
operates independently.
• A multi-agent environment, on the other hand, is one in which numerous agents are functioning in the
same space.
• The issues of agent design in a multi-agent environment differ from those in a single-agent
environment.

5. Static vs Dynamic:

• If the environment can change while an agent is deliberating, it is referred to as a dynamic environment;
otherwise, it is referred to as a static environment.
• Static settings are simple to deal with because an agent does not need to keep looking around while
making a decision.
• Agents in a dynamic environment, on the other hand, must keep looking at the world before each action.
• Taxi driving is an example of a dynamic environment, whereas a crossword puzzle is a static one.


6. Discrete vs Continuous:

• A discrete environment is one in which there are a finite number of percepts and actions that can be
performed within it, whereas a continuous environment is one in which there are an infinite number of
percepts and actions that may be performed within it.
• A chess game takes place in a discrete context since there are only so many moves that may be made.
• A continuous environment is exemplified by a self-driving automobile.

7. Known vs Unknown:

• The terms "known" and "unknown" do not refer to features of the environment, but rather to an agent's
state of knowledge when performing an action.
• The effects of all actions are known to the agent in a known environment. In order to perform an action
in an unfamiliar environment, the agent must first learn how it operates.
• A known environment could be partially observable whereas an unknown environment could be
entirely observable.

8. Accessible vs Inaccessible:

• If an agent can acquire complete and correct knowledge about the state's environment, it is referred to
as an accessible environment; otherwise, it is referred to as an inaccessible environment.
• An accessible environment is an empty room whose condition may be characterized by its temperature.
• An example of an inaccessible environment is information regarding a global incident.


Concept of Rationality
An agent should act as a rational agent. A rational agent is one that does the right thing; that is, the right
actions will cause the agent to be most successful in its environment. A rational agent is capable of taking
the best possible action in any situation. Example of rational action performed by an intelligent agent:
Automated Taxi Driver:
Performance Measure: Safe, fast, legal, comfortable trip, maximize profits.
Environment: Roads, other traffic, customers.
Actuators: Steering wheel, accelerator, brake, signal, horn.
Sensors: Cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard.

Performance measures: A performance measure embodies the criterion for success of an agent's behavior.
When an agent is plunked down in an environment, it generates a sequence of actions according to the
percepts it receives.
There are three basic types of environment in AI: physical, virtual, and simulated. Each category serves a
particular function and raises particular issues for AI system development.

Nature of Environments
The surroundings or circumstances in which an AI system functions are referred to as the environment in AI. It
includes the physical environment, digital platforms, and virtualized worlds where AI models and algorithms
are used. The environment gives AI systems the context they need to see, think, and decide in an informed
manner.

Here are the 3 types:

The Physical Environment

• The tangible reality in which AI systems function is referred to as the physical environment.
• It features authentic environments including houses, workplaces, factories, and outdoor areas.
• When used in real situations, AI systems must use sensors to sense their surroundings and interact with
people and objects in a useful way.
• These surroundings frequently provide difficulties like noise, shifting weather patterns, and significant
safety risks.

Virtual Environment

• Computer-generated settings that resemble real-world scenes are called virtual environments.
• They make it possible for AI systems to communicate with made-up things and entities.
• Before deploying AI models and testing algorithms in the actual world, virtual environments are
frequently used.
• They offer engineers a safe space for testing and allow them to make adjustments to AI systems without
worrying about the impact on the real world.

Simulated Environment

• Realistic and highly specialized virtual places are called simulated environments.
• They generate intricate situations that could be risky or impossible to recreate in the real world.
• Simulated environments are especially useful for teaching AI systems in fields like robotics, aerospace,
and autonomous vehicles.
• Developers can improve AI systems' adaptability and get them ready for difficulties in the real world
by exposing them to a variety of simulated settings.

The Structure of Agents


An intelligent agent is a combination of Agent Program and Architecture.
Intelligent Agent = Agent Program + Architecture.
Agent Program is a function that implements the agent mapping from percepts to actions. There exists a
variety of basic agent program designs, reflecting the kind of information made explicit and used in the
decision process. The designs vary in efficiency, compactness, and flexibility. The appropriate design of the
agent program depends on the nature of the environment.
PROBLEM-SOLVING APPROACH IN ARTIFICIAL INTELLIGENCE PROBLEMS

The reflex agents are known as the simplest agents because they directly map states into actions.
Unfortunately, these agents fail to operate in an environment where the mapping is too large to store and
learn. Goal-based agent, on the other hand, considers future actions and the desired outcomes.
Here, we will discuss one type of goal-based agent known as a problem-solving agent, which uses atomic
representation with no internal states visible to the problem-solving algorithms.
Problem-solving agent
The problem-solving agent performs precisely by defining problems and their solutions.
According to psychology, “a problem-solving refers to a state where we wish to reach to a definite goal
from a present state or condition.”
According to computer science, problem solving is a part of artificial intelligence that encompasses a
number of techniques, such as algorithms and heuristics, to solve a problem.
Therefore, a problem-solving agent is a goal-driven agent and focuses on satisfying the goal.
PROBLEM DEFINITION: To build a system to solve a particular problem, we need to do four things:
(i) Define the problem precisely. This definition must include specifications of the initial
situations and also the final situations which constitute acceptable solutions to the
problem.
(ii) Analyze the problem, since a few important features can have an immense (i.e., huge) impact on
the appropriateness of various techniques for solving the problem.
(iii) Isolate and represent the knowledge needed to solve the problem.
(iv) Choose the best problem-solving technique and apply it to the particular problem.
Steps performed by Problem-solving agent

Goal Formulation: It is the first and simplest step in problem-solving. It organizes the steps/sequence
required to formulate one goal out of multiple goals, as well as the actions to achieve that goal.
Goal formulation is based on the current situation and the agent's performance measure.
Problem Formulation: It is the most important step of problem-solving; it decides what actions should
be taken to achieve the formulated goal. The following five components are involved in problem
formulation:
Initial State: It is the starting state or initial step of the agent towards its goal.
Actions: It is the description of the possible actions available to the agent.
Transition Model: It describes what each action does.
Goal Test: It determines if the given state is a goal state.
Path Cost: It assigns a numeric cost to each path. The problem-solving agent selects a cost
function which reflects its performance measure. Remember, an optimal solution has the lowest path
cost among all the solutions.
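The five components above can be collected into one small problem definition, here for a toy route-finding task. The road map and the step costs are invented purely for illustration:

```python
# Toy route-finding problem showing the five formulation components:
# initial state, actions, transition model, goal test, and path cost.
# The city map and step costs below are invented for illustration.

ROADS = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"D": 8},
    "D": {},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial            # initial state
        self.goal = goal

    def actions(self, state):             # actions available in a state
        return list(ROADS[state])

    def result(self, state, action):      # transition model
        return action                     # driving to X puts us in X

    def goal_test(self, state):           # goal test
        return state == self.goal

    def step_cost(self, state, action):   # per-step cost used below
        return ROADS[state][action]

    def path_cost(self, path):            # path cost: sum of step costs
        return sum(self.step_cost(s, a) for s, a in zip(path, path[1:]))

problem = RouteProblem("A", "D")
print(problem.actions("A"))                # ['B', 'C']
print(problem.path_cost(["A", "B", "D"]))  # 9
print(problem.goal_test("D"))              # True
```

An optimal solution is then simply the goal-reaching path with the lowest `path_cost`; here A-B-D (cost 9) beats A-C-D (cost 10).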

