UNIT I (B)
• An agent is anything that can be viewed as perceiving its environment through sensors and acting
upon that environment through actuators.
• For example, consider a human as an agent. A human has eyes, ears and other organs which act as sensors; hands, legs, mouth and other body parts work as actuators.
• Let us consider another example of an agent - a robot. A robotic agent might have cameras and infrared rangefinders as sensors, and various motors as actuators.
The AI Terminology
1) Percept
The term percept refers to the agent's perceptual inputs at any given instant. Examples -
1) A human agent perceives "a bird flying in the sky" through the eyes and takes a photograph.
2) A robotic agent perceives the temperature of a boiler through its sensors and takes a control action.
2) Percept Sequence
An agent's percept sequence is the complete history of everything the agent has ever perceived.
The agent has a choice of action at any given instant, and that choice can depend on the entire percept sequence the agent has recorded. The changes in perception form a historical record.
For example -
A robotic agent monitoring the temperature of a boiler will keep sensing it and maintaining the percept sequence. This percept sequence helps the robotic agent know how the temperature fluctuates, and the control action for regulating the temperature is taken depending on the percept sequence.
3) Agent Function
It is defined as a mathematical function which maps each and every possible percept sequence to a possible action; formally, f : P* → A, where P* is the set of all percept sequences and A is the set of actions.
This function takes a percept sequence as input and gives an action as output.
Example -
An ATM machine is an agent: it displays a menu for withdrawing money when an ATM card is inserted. Only when it is provided with the percept sequence (1) a transaction type and (2) a PIN number does the user get cash.
4) Agent Program
When we want to develop an agent program we need to tabulate all the agent functions that describe the given agent. This can practically lead to an infinite table, hence we need to put a bound on the length of the percept sequences we consider. This table of percept sequences and actions is the external characterization of the agent, whereas internally the agent function for an intelligent agent will be implemented by an agent program. A minimal sketch of such a table-driven program is given below.
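This Python sketch assumes a small hand-made lookup table; the table entries and the names (percepts, lookup_table) are illustrative, not taken from the text.

percepts = []  # the percept sequence recorded so far

# Illustrative lookup table: maps a percept sequence (as a tuple) to an action.
lookup_table = {
    ("card_inserted",): "show_menu",
    ("card_inserted", "pin_ok"): "dispense_cash",
}

def table_driven_agent(percept):
    # Append the new percept and look up the action for the whole sequence.
    percepts.append(percept)
    return lookup_table.get(tuple(percepts), "no_op")

Note that the table grows with the length of the percept sequence, which is why a bound on that length is needed in practice.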
Architecture of Agent
• The agent program runs on some sort of computing device, which is called the architecture. The
program we choose has to be one that the architecture will accept and run. The architecture makes
the percepts from the sensors available to the program, runs the program and feeds the program's
action choices to the effectors as they are generated. The relationship among agents, architectures and programs can be summed up as follows: agent = architecture + program.
• The following diagram illustrates the agent's action process, as specified by the architecture. This can also be termed the agent's structure.
Role of an Agent Program
• An agent program takes as input the current percept from the sensors and returns an action to the effectors (actuators).
Weak Agent
• A weak notion says that an agent is a hardware or software based computer system that has the
following properties:
1] Autonomy
Agents operate without direct intervention of humans and have control over their actions and
internal state.
2] Social ability
Agents interact with other agents (and possibly humans) via an agent communication language.
3] Reactivity
Agents perceive their environment and respond in a timely and rational fashion to changes that occur in it.
4] Pro-activeness
Agents do not simply act in response to their environment; they are capable of taking the initiative, generating their own goals and acting to achieve them.
Strong Agent
A stronger notion says that an agent has mental properties, such as knowledge, belief, intention and obligation. In addition, an agent has other properties such as:
1. Mobility: Agents can move around from one machine to another and across different system
architectures and platforms.
3. Rationality: Agents will try to achieve their goals and will not act in a way that would prevent their goals from being achieved.
Strong AI is associated with human traits such as consciousness, sentience, sapience and self-awareness.
Rational Agent
• If every entry in the agent function is filled correctly then the agent will always do the right thing. Such an agent is called a rational agent. Doing the right thing makes the agent most successful, so we now need certain methods to measure the success of a rational agent.
• When an agent is working in the environment, it generates a sequence of actions according to the percepts it receives. This sequence of actions leads to various states of the environment. If this sequence of environment-state changes is desirable, then we can say that the agent has performed well. So if the task and environment change, the measuring conditions automatically change, and hence there is no fixed measure suitable for all agents.
• As a general rule, it is better to design performance measures according to what one wants in the
environment, rather than according to how one thinks the agent should behave.
For each possible percept sequence, a rational agent should select an action that is expected to
maximize its performance measure given the evidence provided by the percept sequence and
whatever built-in knowledge the agent has. Fig. 2.8.1 depicts the performance measure metric.
• The concept of rational behaviour leads to two types of agents: good agents and bad agents. Most of the time the good or bad behaviour (that is, performance) of the agent depends completely on the environment.
• If the environment is completely known then we get the agent's good behaviour, as depicted in Fig. 2.8.2.
• If the environment is unknown then the agent can act badly, as depicted in Fig. 2.8.3.
• An omniscient agent knows the actual outcome of its actions and can act accordingly, but in reality
omniscience is impossible.
• For increasing performance the agent must do some actions in order to modify future percepts.
• This is called information gathering, which is an important part of rationality. Also the agent should explore (understand) the environment to increase performance, i.e. to do more correct actions.
• Learning is another important activity the agent should do so as to gather information. The agent may know the environment completely (which is practically not possible) in certain cases, but if it is not known, the agent needs to learn on its own.
• To the extent that an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy. A rational agent should be autonomous - it should learn what it can to compensate for partial or incorrect prior knowledge.
Agent Environment
Agent Description
It has two buckets at two locations, L1 and L2 (for simplicity consider a square area for each location), full of BLACK and WHITE colour balls.
The picker perceives which location it is at. It can also perceive whether there is a BLACK ball at the given location.
Agent Actions
The picker can choose to MOVE LEFT or MOVE RIGHT, PICK UP BLACK BALL, or be idle, that is, do nothing.
A function can be devised as follows: if the current location's bucket has BLACK BALLS then PICK, otherwise MOVE to the other square.
Following is the partial tabulation of a simple agent function for the black ball picker; a Python sketch of this function is given below.
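The sketch assumes a percept is encoded as a pair (location, black_ball_present) and the locations are labelled "L1" and "L2"; these encodings are illustrative choices, not from the text.

def ball_picker_agent(percept):
    location, black_ball_present = percept
    if black_ball_present:            # the current bucket still holds a black ball
        return "PICK UP BLACK BALL"
    elif location == "L1":            # otherwise move to the other square
        return "MOVE RIGHT"
    else:
        return "MOVE LEFT"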
Environments
Nature of Environment
• In the previous section we saw various types of agents; now let us see the details of the environment in which the agent is going to work. A task environment is essentially a problem to which the agent is a solution.
• The range of task environments that might arise in AI is obviously vast. We can, however, identify a
fairly small number of dimensions along which task environments can be categorized. These
dimensions determine, to a large extent, the appropriate agent design and the applicability of each of the principal families of techniques for agent implementation.
1. Fully Observable Vs Partially Observable
• If an agent's sensors give it access to the complete state of the environment at each point in time, then the environment is fully observable.
• In some environments, if there is noise, or the agent has inaccurate sensors, or some states of the environment are missing from the percepts, then the environment is partially observable.
Example-
Fully Observable
The puzzle game environment is fully observable: the agent can see all the aspects surrounding it, that is, all the squares of the puzzle game along with the values (if any) added in them.
More examples -
Partially Observable
The poker game environment is partially observable. Poker is a card game that shares betting rules and usually (but not always) hand rankings. In this game the agent is not able to perceive the other players' betting intentions.
Also the agent cannot see the other players' cards. It has to play with reference to its own cards and the current betting knowledge.
More examples-
2) Military planning.
2. Deterministic Vs Stochastic
• If from the current state of the environment and the action the agent can deduce the next state of the environment, then it is a deterministic environment; otherwise it is a stochastic environment.
• If the environment is deterministic except for the actions of other agents, we say that the
environment is strategic.
Examples -
Deterministic: In image analysis, whatever the current percept of the image is, the agent can take the next action or process the remaining part of the image based on current knowledge. Finally it can produce all the detailed aspects of the image.
Strategic: An agent playing the tic-tac-toe game is in a strategic environment, as from the current state the agent decides the next action, except for the actions of the other agent.
More examples -
1) Video analysis.
2) Trading agent.
Stochastic: A boat-driving agent is in a stochastic environment, as the next state does not depend only on the current state. In fact it has to see the goal, and from all current and previous percepts the agent needs to take action.
More examples-
1) Car driving.
3. Episodic Vs Sequential
• In an episodic environment the agent's experience is divided into atomic episodes, such that each episode consists of the agent perceiving and then performing a single action. In this environment the choice of action depends only on the episode itself; previous episodes do not affect current actions.
• In a sequential environment, on the other hand, the current decision could affect all future decisions.
• Episodic environments are much simpler than sequential environments because the agent does not need to think ahead.
Example -
Episodic Environment: An agent finding the defective part of an assembled computer machine. Here the agent will inspect the current part and take an action which does not depend on previous decisions (previously checked parts).
More Examples -
2) Card games.
Sequential Environment: A game of chess is a sequential environment, where the agent takes an action based on all previous decisions.
More examples -
2) Refinery controller.
4. Static Vs Dynamic
• If the environment can change while the agent is deliberating, then we say the environment is dynamic for that agent; otherwise it is static.
• Static environments are easy to tackle, as the agent need not worry about changes around it (as it will not change) while taking actions.
• Dynamic environments keep changing continuously, which requires the agent to be more attentive when making decisions to act.
• If the environment itself does not change with time but the agent's performance score does, then we say the environment is semidynamic.
Examples -
Static: In a crossword puzzle game, the environment (that is, the values held in the squares) can change only by the action of the agent.
More examples -
1) 8 queen puzzle
2) Semidynamic: chess with a clock.
Dynamic: An agent driving a boat is in a dynamic environment, because the environment can change (a big wave can come, it can become more windy) without any action of the agent.
More examples -
1) Car driving
2) Tutor.
5. Discrete Vs Continuous
• In a discrete environment the environment has a fixed, finite set of discrete states over time, and each state has associated percepts and actions.
• A continuous environment, in contrast, is not stable at any given point of time and changes continuously, thereby requiring the agent to perceive and learn continuously so as to make decisions.
Example:
Discrete: A game of tic-tac-toe depicts a discrete environment, where every state is stable, has its associated percept, and is the outcome of some action.
More examples -
1) 8 - queen puzzle
2) Crossword puzzle.
Continuous: A boat-driving environment is continuous, where the state changes are continuous and the agent needs to perceive continuously.
More examples -
6. Single Agent Vs Multiagent
• In a single-agent environment we have a well-defined single agent which takes decisions and acts.
• In a multiagent environment there can be various agents, or various groups of agents, working together to take decisions and act. In a multiagent environment we can have a competitive multiagent environment, in which many agents work in parallel to maximize the performance of the individual, or a co-operative multiagent environment, wherein all the agents have a single goal and work to achieve high performance for all of them together.
Example:
Multiagent co-operative environment
• Fantasy football. [Here many agents work together to achieve the same goal.]
Multiagent competitive environment
• Trading agents. [Here many agents are working, but in opposition to each other.]
• Wargames. [Here multiple agents are working against each other, but one side (agent/agent team) has a negative goal.]
Based on specific problem domains we can further classify task environments as follows.
Example: A chess-with-a-clock environment, where the move should be made in a specified amount of time.
Example: An executive agent which monitors the profit of an organization can help top-level management to take decisions.
Example: An image processing agent which can take an input and process it to produce the required output and details about the image.
Example: A small-scale agent which can be used as a personal assistant, helping to remember daily tasks, giving notifications about work, etc.
6) Buying Environment
Example: An online book-shopping bot (agent) which buys books online as per user requirements.
Example: A Cadbury manufacturing firm can use an agent which automates the complete procedure of chocolate making.
Example: We can have an agent which learns some act or theory presented to it and can later play it back, which will be helpful for others learning that act or theory.
Example: We can have an agent which solves different types of problems from mathematics or statistics, or any general-purpose problem like the travelling salesman problem.
Example: An agent doing scientific calculations for aeronautics purposes, or an agent developed to design road maps or overbridge structures.
Example: An agent working on the design of some chemical component helpful for medicine.
Example: An agent working in a research lab where it is made to grasp (learn) knowledge, represent it and draw conclusions from it, which will help researchers in further study.
Example: An agent developed to automatically carry data over a computer network based on certain conditions like a time limit or data-size limit (the same type of agent can be developed for physically transferring items or mail over the same network).
Example: If a data repository is to be maintained, then an agent can be developed to arrange data based on criteria which will be helpful for searching later on.
Intelligent Agent
"Intelligent agent is an intelligent actor, who observe and act upon an environment".
The term 'Intelligent thinker' is different from intelligent agent. Fig. 2.11.1 shows intelligent agent's
behaviour.
Characteristics:
1) The IA must learn and improve through interaction with the agent environment.
5) The IA must have memory which must exhibit storage and retrieval capacities.
6) The IA should be able to analyze itself in terms of behaviour, error and success.
In artificial intelligence, there are different forms of intelligent agent and sub-agents.
As the degree of perceived intelligence and capability varies, it is possible to frame agents into four categories.
1. Agent Type 1 : Simple Reflex Agent
These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
Property:
4) If a simple reflex agent works in a partially observable environment then it can get into infinite loops.
5) Infinite loops can be avoided if the simple reflex agent can try out possible actions, i.e. can randomize its actions.
6) A randomized simple reflex agent will perform better than a deterministic reflex agent.
Example:
In an ATM agent system, if the PIN matches with the given account number then the customer gets money.
Procedure: SIMPLE-REFLEX-AGENT
Input: Percept.
Output: An action.
Static: rules, a set of condition-action rules.
1. state ← INTERPRET-INPUT (percept)
2. rule ← RULE-MATCH (state, rules)
3. action ← RULE-ACTION (rule)
4. return action.
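A minimal Python sketch of this procedure follows, using the ATM rule above as the single condition-action rule; the rule encoding and names are illustrative assumptions.

# Condition (interpreted state) -> action.
rules = {
    "pin_matches": "dispense_cash",
    "pin_mismatch": "reject_card",
}

def interpret_input(percept):
    # Reduce the raw percept to a state description the rules understand.
    entered_pin, stored_pin = percept
    return "pin_matches" if entered_pin == stored_pin else "pin_mismatch"

def simple_reflex_agent(percept):
    state = interpret_input(percept)   # step 1
    return rules[state]                # steps 2-4: match the rule, return its action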
2. Agent Type 2 : Model-based Reflex Agent
Property:
1) It has the ability to handle partially observable environments.
2) Its internal state is updated continuously, which can be shown as:
For example:
A car-driving agent which maintains its own internal state and then takes action as the environment appears to it.
Procedure: REFLEX-AGENT-WITH-STATE
Input: Percept.
Output: An action.
Static: state, a description of the current world state; rules, a set of condition-action rules; action, the most recent action, initially none.
1. state ← UPDATE-STATE (state, action, percept)
2. rule ← RULE-MATCH (state, rules)
3. action ← RULE-ACTION (rule)
4. return action.
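A Python sketch of this state-keeping agent for the car-driving example follows; update_state and the single rule used here are illustrative assumptions, not a real driving model.

state = {}       # description of the current world state
action = None    # most recent action, initially none

def update_state(state, action, percept):
    # Fold the latest percept and the effect of the last action into the model.
    new_state = dict(state)
    new_state.update(percept)          # e.g. {"car_ahead": True}
    new_state["last_action"] = action
    return new_state

def rule_match(state):
    return "brake" if state.get("car_ahead") else "accelerate"

def reflex_agent_with_state(percept):
    global state, action
    state = update_state(state, action, percept)   # step 1
    action = rule_match(state)                     # steps 2-3
    return action                                  # step 4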
3. Agent Type 3 : Goal-based Agent
A goal-based agent stores state descriptions as well as goal-state information.
Property
3) They are flexible in nature because the information that supports their decisions is represented in a proper and explicit manner, and can be modified.
4) We can quickly change a goal-based agent's behaviour for a new/unknown goal.
For example:
4. Agent Type 4 : Utility-based Agent
In complex environments goals alone are not enough for agent design. In addition to goals we can have a utility function.
Property:
1) The utility function maps a state onto a real number, which describes the associated degree of performance.
2) Goals give us only two outcomes, achieved or not achieved, but utility-based agents provide a way in which the likelihood of success can be weighed against the importance of the goals.
3) A rational agent which is utility based can maximize the expected value of the utility function, i.e. more perfection can be achieved.
4) Goals give only two discrete states,
a) Happy b) Unhappy.
For example-
A military planning robot which provides a certain plan of action to be taken. Its environment is too complex, and the expected performance is also high.
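The following Python sketch shows the idea of utility-based action selection: each action's predicted outcome is scored by a real-valued utility function and the best-scoring action is chosen. The functions result and utility are assumed placeholders supplied by the designer, not part of any real planning system.

def utility_based_agent(state, actions, result, utility):
    # Pick the action whose predicted resulting state has maximum utility.
    return max(actions, key=lambda a: utility(result(state, a)))

For the military planning example, utility(s) might combine the expected probability of a plan's success with its cost into a single number.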
Learning Agent
If the agent is to operate in initially unknown environments then the agent should be a self-learner. It should observe, gain and store information. A learning agent can be divided into four conceptual components.
3) Critic - It tells how the agent is doing and determines how the performance element should be modified to do better in the future.
4) Problem Generator - It is responsible for suggesting actions that will lead to new and informative experiences for the agent. The agent can ask the problem generator for suggestions.
The performance standard distinguishes part of the incoming percept as a reward (success) or penalty (failure), which provides direct feedback on the quality of the agent's behaviour.
All four types of agents we have seen can improve their performance through learning and thereby become learning agents.
For example:
An aeroplane-piloting agent which continuously learns from the environment and then flies the plane safely.
1) Base/Learner/Learning element - It holds basic knowledge and learns new things from the unfamiliar environment.
3) Fault reflector element - It gives feedback; it reflects faults and analyzes corrective actions in order to get maximum success.
4) New problem generator element - It generates new and informative experiences; it suggests new actions.
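The following Python skeleton shows how the four components fit together structurally; the class and method names are illustrative, not from the text.

class LearningAgent:
    def __init__(self, performance_element, learning_element, critic, problem_generator):
        self.performance_element = performance_element  # selects actions
        self.learning_element = learning_element        # improves the performance element
        self.critic = critic                            # judges behaviour against the standard
        self.problem_generator = problem_generator      # suggests exploratory actions

    def step(self, percept):
        feedback = self.critic(percept)                 # reward or penalty
        self.learning_element(self.performance_element, feedback)
        suggestion = self.problem_generator(percept)    # exploratory action, or None
        return suggestion or self.performance_element(percept)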
• Agents can also be classified depending on the sensitivity of their sensors, the effectiveness of their actions and the internal states they possess.
2. Temporal Agents - A temporal agent may use time-based stored information to offer instructions or data to a computer program or human being, and takes percepts as program inputs to adjust its next behaviour.
5. Input Agents - Agents that process and make sense of sensor inputs, e.g. neural-network-based agents.
7. Believable Agents - An agent exhibiting a personality via the use of an artificial character (in which the agent is embedded) for the interaction.
8. Computational Agents - Agents that can do complex, lengthy scientific computations as per problem requirements.
9. Information Gathering Agents - Agents which can collect (perceive) and store data.
10. Entertaining Agents - Agents which can perform something that entertains humans, like gaming agents.
11. Biological Agents - Their reasoning engine works almost identically to the human brain.
12. World Agents - Agents that incorporate a combination of all the other classes of agents to allow autonomous behaviours.
13. Life-Like Agents - Combinations of other classes of agents which behave like real-world characters. (For example - a robotic dog.)
When we specify an agent we need to specify the performance measure, the environment, and the agent's actuators and sensors. We group all these under the heading of the task environment.
As an acronym, we call this the PEAS ([P]erformance, [E]nvironment, [A]ctuators, [S]ensors) description.
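A PEAS description can be held as a simple record, as in the Python sketch below; the automated taxi driver entries are the standard illustrative ones.

from dataclasses import dataclass, field

@dataclass
class PEAS:
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "horn", "display"],
    sensors=["cameras", "speedometer", "GPS", "odometer"],
)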
1) Define the problem area (i.e. the task environment) in a complete manner. Examples: vacuum world, automated face recognition, automated taxi driver.
3) Define or tabulate the agent functions (i.e. the percept sequence and action columns).
Examples of Agent Types and their PEAS Description According to their Uses
V) Educational Purpose
The Interactive English Tutor agent system must achieve the following performance measures:
1) All the students must get maximum knowledge of the English subject, such as vocabulary, verbal soft skills (i.e. communication skills), reading and writing skills.
2) All the students must score good marks in the English test.
1) All the students have different grasping power and IQ (Intelligence Quotient).
The software model (agent program) will be executed on the agent architecture (i.e. the operating system). The actions performed by the interactive English tutor are:
2) Practical assignments on verbal and written skills, report generation, letter writing, etc.
3) Monitoring and inspection (i.e. checking) of the practical assignments, with suggestions and corrections provided to students.
Sensors play a crucial role in the interactive English tutor agent system. The following sensors are required to support the sequence of perception:
AU May-14, Dec.-14
Problems are the issues which come across any system. A solution is needed to solve each particular problem, and strong intelligence is required to solve such problems. Traditionally people think that a person who can solve more and more problems is more intelligent than others. It is always said that problem-solving skill demonstrates intelligence, hence solving problems becomes a major aspect of artificial intelligence. In order to understand how exactly problem solving contributes to intelligence, one needs to find out how intelligent species solve problems.
The classical approach to solving a problem is quite simple: given a problem at hand, a hit-and-trial method is used to check various solutions to that problem. This hit-and-trial approach usually works well for trivial problems and is referred to as the classical approach to problem solving.
Problem Defining and Solving Problem
AU: Dec.-10
For solving any type of problem (task) in the real world, one needs a formal description of the problem.
Goals help to organize the behaviour of a system by limiting the objectives that the agent is trying to achieve. Goal formulation is based on the current situation and the agent's performance measure. It is the first step towards problem solving.
That is how success is defined: the goal will be the ultimate thing the system needs to achieve, which is the output of the problem's solution.
4. Ability to Perform
It tells how the agent transforms from one situation to another, and how the operations and rules which change the states of the problem during the solution process are specified.
Problem formulation is the process of deciding what actions and states to consider, given a goal.
For example –
Consider an agent program, Indian Traveller, developed for travelling from Pune to Chennai through different states. The initial state for this agent can be described as In (Pune).
2) A description of the possible actions available to the agent
The most common formulation uses a successor function. Given a particular state x, SUCCESSOR (x) returns a set of <action, successor> ordered pairs, where each action is one of the legal actions in state x and each successor is a state that can be reached from x by applying that action.
For example:
From the state In (Pune), the successor function for the Indian Traveller problem would return the corresponding set of <action, successor> pairs.
Together, the initial state and successor function implicitly define the state space of the problem -
which is the set of all states reachable from the initial state.
The state space forms a graph in which the nodes are states and the arcs between nodes are actions.
3) The goal test, which determines whether a given state is a goal (final) state. In some problems we can explicitly specify a set of goals: when a particular state is reached we can check it against the set of goals, and if a match is found, success can be announced.
For example:
In the Indian Traveller problem the goal is to reach Chennai, i.e. it is the singleton set {In (Chennai)}.
In certain types of problems we cannot specify goals explicitly. Instead, the goal is specified by an abstract property rather than an explicitly enumerated set of states.
For example:
In chess, the goal is to reach a state called "checkmate", where the opponent's king is under attack and cannot escape. This "checkmate" situation can be represented by many different states.
4) A path cost function that assigns a numeric cost (value) to each path. The problem-solving agent is
expected to choose a cost-function that reflects its own performance measure.
For the Indian Traveller agent we can take the time required as the cost for the path-cost function. It should consider the length of each road being travelled.
In general, the step cost of taking action a in state x to reach state y is denoted by c (x, a, y).
The above four elements define a problem and can be put together in a single data structure which can be given as input to a problem-solving algorithm.
A solution to the problem is a path from the initial state to a goal state.
We can measure the quality of a solution by the path cost function, and a problem may have multiple solutions. The optimal solution is the one with the lowest path cost among all the solutions. A sketch of the problem structure described above is given below.
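The sketch is in Python for the Indian Traveller agent; the road map below is a hypothetical fragment used only for illustration, not the real route network.

# Hypothetical road map: state -> {action: successor state}.
road_map = {
    "Pune": {"Go(Solapur)": "Solapur"},
    "Solapur": {"Go(Hyderabad)": "Hyderabad"},
    "Hyderabad": {"Go(Chennai)": "Chennai"},
}

problem = {
    "initial_state": "Pune",
    # Successor function: returns the set of <action, successor> pairs.
    "successors": lambda s: set(road_map.get(s, {}).items()),
    "goal_test": lambda s: s == "Chennai",
    # Step cost c(x, a, y); a uniform cost of 1 per road segment here.
    "step_cost": lambda x, a, y: 1,
}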
Problems can be formulated in two ways:
1) Incremental formulation
2) Complete-state formulation.
Depending upon the problem requirements and specification, one can decide which one to go for.
1) Incremental formulation
• It involves operators that augment the state description, starting with an empty state.
For example-
For the 8-queens problem, the incremental formulation states that each action adds a queen to the state. In this formulation we have 64 × 63 × ... × 57 ≈ 3 × 10^14 possible sequences to investigate.
2) Complete-state formulation
• In this formulation we initially have some basic configuration represented in the initial state.
• Here, while performing any action, the conditions on the action are first checked, so that the configuration state after the action is still a legal state.
• It takes up a large amount of memory, as the complete state space is generated, but this formulation reduces the number of sequences generated.
For example-
In the 8-queens problem, initially all the queens are arranged on the board. The action is 'move a queen to another square such that it is not attacked'.
This complete-state formulation reduces the state space from 3 × 10^14 (which is for the incremental formulation) to just 2,057, and solutions are easy to find.
• Incremental formulation
1) States - Any arrangement of 0 to 8 queens on the board.
2) Initial state - Empty board.
4) Goal test - No queen attacked.
• Complete-state formulation
1) States - Any arrangement of all 8 queens on the board.
2) Initial state - All 8 queens on board.
4) Goal test - No queen attacked.
Properties: Good strategies can reduce the number of possible sequences considerably. A Python sketch of the incremental formulation is given below.
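In this sketch a state is a tuple of column positions, one per placed queen (the queen in row i sits in column state[i]); this encoding is an illustrative choice, not from the text.

def attacks(c1, r1, c2, r2):
    # Two queens attack each other if they share a column or a diagonal
    # (rows always differ here, since we place one queen per row).
    return c1 == c2 or abs(c1 - c2) == abs(r1 - r2)

def successors(state):
    # Incremental formulation: each action adds a queen to the next row.
    return [state + (col,) for col in range(8)]

def goal_test(state):
    # All 8 queens placed and no pair attacking.
    return len(state) == 8 and not any(
        attacks(state[i], i, state[j], j)
        for i in range(8) for j in range(i + 1, 8))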
1) Problem definition: Wherein a detailed specification of the inputs and of what constitutes an acceptable solution is described.
2) Problem analysis: Wherein the problem is studied from various viewpoints, such as the inputs to the problem, the environment of the problem and the expected outputs.
3) Knowledge representation: Wherein the known data about the problem and the various expected stimuli from the environment are represented in a particular format which is helpful for taking actions.
4) Problem solving: Wherein the best-suited techniques for the problem's solution are thought of and finalized.
A problem-solving agent adapts to the task environment, understands the goal and achieves success.
Problem-solving agents determine a sequence of actions which generates a successful state. A problem-solving agent can be aimed at maximizing the performance measure, thereby becoming an intelligent problem-solving agent.
A problem-solving agent achieves success by taking the following approach to a problem's solution:
After formulating the goal, it is required to find out what sequence of actions will generate the goal state.
Problem formulation is a way of looking at the actions and the states generated because of those actions, which leads to success.
If the task environment is unknown then the agent first tries different sequences of actions and gathers knowledge (i.e. learning). The agent then obtains a known set of actions which leads to the goal state. Thus the agent searches for a desirable sequence of actions; this process is called the searching process.
With knowledge of the environment and the goal state we can design a search algorithm. A search algorithm is a procedure which takes a problem as input and returns its solution, represented in the form of an action sequence.
Once the solution is given by the search algorithm, the actions suggested by the algorithm are executed. This is the execution phase. The solution guides the agent in performing the actions. After executing the actions, the agent again formulates a new goal.
3. Algorithm
Procedure: SIMPLE-PROBLEM-SOLVING-AGENT
Input: Percept.
Output: An action.
Static: seq, an action sequence, initially empty; state, a description of the current world state; goal, a goal, initially null; problem, a problem formulation.
1. state ← UPDATE-STATE (state, percept)
2. if seq is empty then
goal ← FORMULATE-GOAL (state)
problem ← FORMULATE-PROBLEM (state, goal)
seq ← SEARCH (problem)
3. action ← FIRST (seq)
4. seq ← REST (seq)
5. return action.
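A Python rendering of the above procedure is sketched below; update_state, formulate_goal, formulate_problem and search are assumed to be supplied by the designer for the task at hand.

seq = []        # action sequence, initially empty
state = None    # current world-state description

def simple_problem_solving_agent(percept, update_state, formulate_goal,
                                 formulate_problem, search):
    global seq, state
    state = update_state(state, percept)
    if not seq:                          # no plan left, so formulate and search again
        goal = formulate_goal(state)
        problem = formulate_problem(state, goal)
        seq = search(problem)            # returns a list of actions
    action = seq[0]                      # action <- FIRST(seq)
    seq = seq[1:]                        # seq <- REST(seq)
    return action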
Procedure
Consider the following simple problem-solving agent working in an open-loop system. An open-loop system means the agent is assumed to be working in the following environment:
1) Static environment: Wherein problem formulation and solution are done by ignoring the changes that can occur in the environment.
• Points to Note
1) The above kind of working system is called an open-loop system, because ignoring the percepts breaks the loop between the agent and the environment.
2) In open-loop systems the solution to a problem is a single sequence of actions, so such systems cannot handle unexpected events.
There are six major components of an artificial intelligence system. They are solely responsible for generating the desired results for a particular problem. These components are as follows:
2. Heuristic Searching Techniques: Usually, while dealing with problems, the knowledge base keeps growing, making it difficult to search. To tackle this challenge, heuristic searching techniques can be used, which can provide results (because of certain criteria) efficiently in terms of time and memory usage.
3. Artificial Intelligence Hardware: Hardware compatibility is a major concern when it comes to deploying software on machines. Hardware must be efficient enough to accommodate and produce the desired results. Hardware components include all the required machinery, spanning from memory to processor to communicating devices. AI systems are incomplete without AI hardware.
4. Computer Vision and Pattern Recognition: With the help of this component, AI programs capture inputs on their own by sensing the real-world scenario. Sufficient and compatible hardware enables better pattern gathering, which makes for a useful knowledge base.
5. Natural Language Processing: This component processes or analyses written or spoken language. Speech recognition alone is not sufficient to capture real-world data, and acquiring the word sequence and parsing sentences into the computer is not by itself sufficient to gain knowledge about the environment. Natural language processing plays a vital role in making the domain of a text understandable to AI systems.
6. Artificial Intelligence Languages and Support Tools: Artificial intelligence languages are broadly similar to traditional software development programming languages, with additional features to capture human brain processes and logic as much as possible.
Ans.: An agent is anything (a program, a machine assembly) that can be viewed as perceiving its
environment through sensors and acting upon that environment through actuators.
Ans.: A rational agent is an agent which always works as per the expected performance. It is an agent which always acts as per expectation, and it tries to maximize its expected behavioural performance.
Ans.: The agent program is an important and central part of an agent system. It drives the agent, which means that it analyzes data and provides the probable actions the agent could take.
An agent program takes as input the current percept from the sensor and returns an action to the effectors (actuators).
Ans.: 1) The IA must learn and improve through interaction with the environment.
4) The IA must have memory which must exhibit storage and retrieval capacities.
Ans.: In artificial intelligence, abstraction is commonly used to account for the use of various levels of detail in a given representation language, or for the ability to change from one level to another while preserving useful properties. Abstraction has been studied mainly in problem solving, theorem proving, knowledge representation and machine learning. In such contexts, abstraction is defined as a mapping between formalisms that reduces the computational complexity of the task in question.
Ans.: Rationality is the capacity to generate maximally successful behaviour given the available information. Rationality also indicates the capacity to compute the perfectly rational decision given the initially available information. The capacity to select the optimal combination of computation sequence plus action, under the constraint that the action must be selected by the computation, is also rationality.
Perfect rationality constrains an agent's actions to provide the maximum expectation of success given the information available.
Ans.: The agent function is a mathematical function which maps each and every possible percept sequence to a possible action.
The major functionality of the agent function is to generate a possible action for each and every percept sequence. It helps the agent to get the list of possible actions it can take. The agent function can be represented in tabular form.
The agent program takes as input the current percept from the sensor and returns an action to the effectors (actuators).
Q.10 What are the four components to define a problem? Define them. AU: May-13
Ans.: 1. The initial state - It is the state in which the agent starts (for example, In (Pune) for the Indian Traveller agent).
2. A description of possible actions - It is the description of the possible actions which are available to the agent.
3. The goal test - It is the test that determines whether a given state is a goal (final) state.
4. A path cost function - It is the function that assigns a numeric cost (value) to each path. The problem-solving agent is expected to choose a cost function that reflects its own performance measure.
Dec. 2009
Dec. 2010
Q.2 How a problem is formally defined? List down the components of it. (Refer section 2.14)[4]
May 2012
Q.3 Explain the goal and model based reflex agents with example. (Refer section 2.4)[8]
Dec. 2012
Q.4 Explain in detail, the structure of different intelligent agents. (Refer section 2.4)[16]
May 2014
Q.5 Explain the components of problem definition with an example. (Refer section 2.13) [8]
Dec. 2014
Q.6 Explain the components necessary to define a problem. (Refer section 2.13)
Q.7 What is an agent? Explain the basic kinds of agent program. (Refer section 2.4.2) [10]
Dec. 2016
Q.8 Define agents. Specify the PEAS descriptions for intelligent agent design with examples and
explain basic types of agents. (Refer section 2.5)[16]