CS8691 Unit 4
UNIT IV
SOFTWARE AGENTS
Architecture for Intelligent Agents – Agent communication – Negotiation and Bargaining –
Argumentation among Agents – Trust and Reputation in Multi-agent systems.
Figure 4.2: Vacuum Cleaning World
Figure 4.3: Simple Agent Architecture
To illustrate these ideas, let us consider a small example based on the vacuum cleaning
world example. The idea is that we have a small robotic agent that will clean up a house. The
robot is equipped with a sensor that will tell it whether it is over any dirt, and a vacuum cleaner
that can be used to suck up dirt. In addition, the robot always has a definite orientation (North,
East, West, and South) and can turn right by 90°. The agent moves around a room, which is divided grid-like into a number of equally sized squares. We will assume that our agent does nothing but clean – it never leaves the room.
To summarize, our agent can receive a percept: dirt or null. It can perform any one of three possible actions: forward, suck, or turn. The robot will always move from (0,0) to (0,1) to (0,2) and then to (1,2), to (1,1), and so on. The goal is to traverse the room, continually searching for and removing dirt. First, we make use of three simple domain predicates:
In(x, y) agent is at (x,y)
Dirt(x, y) there is dirt at (x,y)
Facing(d) the agent is facing direction d
The rules that govern our agent’s behavior are:
In(x, y) ∧ Dirt(x, y) → Do(suck) ------(4.1)
In(0,0) ∧ Facing(north) ∧ ¬Dirt(0,0) → Do(forward) ------(4.2)
In(0,1) ∧ Facing(north) ∧ ¬Dirt(0,1) → Do(forward) ------(4.3)
In(0,2) ∧ Facing(north) ∧ ¬Dirt(0,2) → Do(turn) ------(4.4)
In(0,2) ∧ Facing(east) → Do(forward) ------(4.5)
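As a minimal illustrative sketch (not from the original text; the function signature and state representation are assumptions), rules (4.1)–(4.5) might be encoded as follows:

```python
# Hypothetical sketch of the logic-based vacuum agent's decision rules.
# State names (pos, facing, dirt_here) and the default case are assumptions.

def decide(pos, facing, dirt_here):
    """Return 'suck', 'forward', or 'turn' following rules (4.1)-(4.5)."""
    if dirt_here:                                # rule (4.1): In(x,y) ∧ Dirt(x,y)
        return "suck"
    if pos == (0, 0) and facing == "north":      # rule (4.2)
        return "forward"
    if pos == (0, 1) and facing == "north":      # rule (4.3)
        return "forward"
    if pos == (0, 2) and facing == "north":      # rule (4.4)
        return "turn"
    if pos == (0, 2) and facing == "east":       # rule (4.5)
        return "forward"
    return "turn"                                # no rule shown above covers this case

print(decide((0, 0), "north", dirt_here=True))   # -> suck
```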
Notice that in each rule we must explicitly check whether the antecedent of rule (4.1) fires, so that the agent only ever prescribes a single action. The problems with this logic-based approach are:
1. An agent is said to enjoy the property of calculative rationality if and only if its decision-
making apparatus will suggest an action that was optimal when the decision-making
process began. Calculative rationality is clearly not acceptable in environments that
change faster than the agent can make decisions.
2. Representing and reasoning about temporal information, that is, how a situation changes over time, turns out to be extraordinarily difficult.
3. The problems associated with representing and reasoning about complex, dynamic,
possibly physical environments are also essentially unsolved.
4.2.2 Reactive Architectures (or) Subsumption Architecture
The subsumption architecture is arguably the best-known reactive agent architecture. It has two defining characteristics. The first is a set of task-accomplishing behaviors; each behavior may be thought of as an individual action-selection process that continually takes perceptual input and maps it to an action to perform. These behaviors are implemented as rules of the form situation → action.
The second defining characteristic of the subsumption architecture is that many behaviors can "fire" simultaneously, so there must be a mechanism to choose between the different actions selected by these multiple behaviors. In a subsumption hierarchy, behaviors are arranged into layers. Lower layers are able to inhibit higher layers: the lower a layer is, the higher its priority, while higher layers represent more abstract behaviors. For example, in a mobile robot it makes sense to give the behavior "avoid obstacles" a high priority.
The lowest-level behavior is obstacle avoidance, which can be represented in the rule:
if detect an obstacle then change direction ------(4.6)
Other behaviors ensure that any samples carried by agents are dropped back at the mothership:
if carrying samples and at the base then drop samples. ------(4.7)
if carrying samples and not at the base then travel up gradient. ------(4.8)
if detect a sample then pick sample up ------(4.9)
if true then move randomly. ------(4.10)
The precondition of rule (4.10) is always true, so the rule always fires when no higher-priority behavior does. These behaviors are arranged into
the following hierarchy:
(4.6) ≺ (4.7) ≺ (4.8) ≺ (4.9) ≺ (4.10)
However, rule (4.8), which determines the action when carrying samples and not at the base, is modified as follows.
if carrying samples and not at the base then drop 2 crumbs and travel up gradient. ------(4.11)
However, an additional behavior is required for dealing with crumbs.
if sense crumbs then pick up 1 crumb and travel down gradient. ------(4.12)
These behaviors are then arranged into the following subsumption hierarchy:
(4.6) ≺ (4.7) ≺ (4.11) ≺ (4.9) ≺ (4.12) ≺ (4.10)
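A minimal sketch (the percept fields and function names are illustrative assumptions) of how this subsumption hierarchy could be implemented as an ordered list of situation → action rules, where the first rule whose condition holds wins:

```python
# Hypothetical sketch of the Mars-explorer subsumption hierarchy above.
# Behaviours are tried in priority order; the first whose condition holds fires.

def select_action(p):
    """p is a dict of percepts: obstacle, carrying, at_base, sample, crumbs (booleans)."""
    behaviours = [
        (lambda: p["obstacle"],                      "change direction"),                         # (4.6)
        (lambda: p["carrying"] and p["at_base"],     "drop samples"),                             # (4.7)
        (lambda: p["carrying"] and not p["at_base"], "drop 2 crumbs and travel up gradient"),     # (4.11)
        (lambda: p["sample"],                        "pick sample up"),                           # (4.9)
        (lambda: p["crumbs"],                        "pick up 1 crumb and travel down gradient"), # (4.12)
        (lambda: True,                               "move randomly"),                            # (4.10)
    ]
    for condition, action in behaviours:
        if condition():
            return action

percepts = {"obstacle": False, "carrying": True, "at_base": False,
            "sample": False, "crumbs": False}
print(select_action(percepts))   # -> drop 2 crumbs and travel up gradient
```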
Figure 1.4: The value iteration algorithm for Markov decision processes.
4.2.4 Belief-Desire-Intention Architectures
Belief Desire Intention (BDI) architectures have their roots in the philosophical tradition
of understanding practical reasoning — the process of deciding, moment by moment, which
action to perform in the furtherance of our goals. Practical reasoning involves two important
processes:
1. Deliberation - deciding what goals we want to achieve, and
2. Means-ends reasoning - how we are going to achieve these goals.
The decision process begins by trying to understand what options are available. After generating this set of alternatives, you must choose between them and commit to some. These chosen options become intentions, which then constrain future practical reasoning.
4.2.4.1 Intention
Having adopted an intention, an agent will make a reasonable attempt to achieve it. Moreover, if a course of action fails to achieve the intention, then you would expect the agent to try again – you would not expect it to simply give up. The intention will also constrain future practical reasoning.
From this discussion, we can see that intentions play a number of important roles in
practical reasoning:
Intentions drive means-ends reasoning.
Intentions constrain future deliberation.
Intentions persist.
Intentions influence beliefs upon which future practical reasoning is based.
A key problem in the design of practical reasoning agents is that of achieving a good balance between these different concerns. Specifically, it seems clear that an agent should at times drop some intentions. It follows that, from time to time, it is worth an agent stopping to reconsider its intentions. Reconsideration, however, has a cost in terms of both time and computational resources, and this presents us with a dilemma, which is essentially the problem of balancing proactive (goal-directed) and reactive (event-driven) behavior:
An agent that does not stop to reconsider sufficiently often will continue attempting to
achieve its intentions even after it is clear that they cannot be achieved, or that there is no
longer any reason for achieving them;
An agent that constantly reconsiders its intentions may spend insufficient time actually
working to achieve them, and hence runs the risk of never actually achieving them.
Let us investigate how bold agents (those that never stop to reconsider) and cautious agents (those that constantly stop to reconsider) perform. Let γ denote the rate of world change.
If γ is low (i.e., the environment does not change quickly) then bold agents do well
compared to cautious ones, because cautious ones waste time reconsidering their
commitments while bold agents are busy working towards – and achieving – their goals.
If γ is high (i.e., the environment changes frequently) then cautious agents tend to
outperform bold agents, because they are able to recognize when intentions are doomed,
and also to take advantage of serendipitous situations and new opportunities.
The lesson is that different types of environments require different types of decision
strategies. In static, unchanging environments, purely proactive, goal directed behavior is
adequate. But in more dynamic environments, the ability to react to changes by modifying
intentions becomes more important.
4.2.4.2 Practical Reasoning
There are seven main components to a BDI agent:
i. A set of current beliefs, representing information the agent has about its current environment;
ii. A belief revision function (brf), which takes a perceptual input and the agent’s current beliefs, and on the basis of these, determines a new set of beliefs;
iii. An option generation function (options), which determines the options available to the agent (its desires), on the basis of its current beliefs and its current intentions;
iv. A set of current options, representing possible courses of actions available to the agent;
v. A filter function (filter), which represents the agent’s deliberation process, and which
determines the agent’s intentions on the basis of its current beliefs, desires, and
intentions;
vi. A set of current intentions, representing the agent’s current focus – those states of
affairs that it has committed to trying to bring about;
vii. An action selection function (execute), which determines an action to perform on the
basis of current intentions.
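A minimal sketch (the placeholder behaviour bodies are assumptions; only the structure follows the seven components above) of how these components fit together in a BDI interpreter:

```python
# Hypothetical skeleton of a BDI agent built from components (i)-(vii) above.
# The concrete bodies of brf, options, filter, and execute are illustrative assumptions.

class BDIAgent:
    def __init__(self):
        self.beliefs = set()      # (i) current beliefs
        self.intentions = []      # (vi) current intentions

    def brf(self, percept):       # (ii) belief revision function
        self.beliefs |= {percept}

    def options(self):            # (iii)-(iv) generate desires from beliefs and intentions
        return [b for b in self.beliefs if b.startswith("dirt")]

    def filter(self, desires):    # (v) deliberation: commit to a desire as an intention
        if desires and not self.intentions:
            self.intentions.append(desires[0])

    def execute(self):            # (vii) select an action on the basis of current intentions
        return ("clean", self.intentions.pop()) if self.intentions else ("wait", None)

    def step(self, percept):
        self.brf(percept)
        self.filter(self.options())
        return self.execute()

agent = BDIAgent()
print(agent.step("dirt@(0,1)"))   # -> ('clean', 'dirt@(0,1)')
```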
Figure 1.6: Information and control flow in three types of layered agent architectures
Vertically layered architectures are subdivided into one-pass architectures (Figure 1.6(b)) and two-pass architectures (Figure 1.6(c)). The complexity of interactions between layers is reduced in vertically layered architectures: since there are n−1 interfaces between n layers, if each layer is capable of suggesting m actions, there are at most m²(n−1) interactions to be considered
between layers. This is clearly much simpler than the horizontally layered case, where up to mⁿ interactions between layers are possible. However, this simplicity comes at the cost of some flexibility.
4.2.5.1 Touring Machines
The Touring Machines architecture consists of perception and action subsystems, which
interface directly with the agent’s environment, and control layers embedded in a control
framework, which mediates between the layers. TOURING MACHINES consists of three
activity producing layers.
Reactive layer: provides an immediate response.
Planning layer: handles "day-to-day" running under normal circumstances.
Modelling layer: predicts conflicts and generates goals to be achieved in order to resolve these conflicts.
Control subsystem: decides which of the layers should have control over the agent.
4.2.5.2 INTERRAP
INTERRAP defines an agent architecture that supports situated behavior: the agents are able to recognize unexpected events and react to them in a timely and appropriate manner. It supports goal-directed behavior in that the agent decides which goals to pursue and how. The agents can act under real-time constraints and act efficiently with limited resources, and they can interact with other agents to achieve common goals.
State Machines - A state machine modeling a purchase interaction between a customer and a merchant supports two executions. One execution represents the scenario where the customer rejects the merchant’s offer. The other represents the scenario where the customer accepts the offer, following which the merchant and the customer exchange the item and the payment for the item. A state machine does not, however, reflect the internal policies based upon which the customer accepts an offer.
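A minimal sketch of such a state machine (state and message names are assumptions inferred from the description above):

```python
# Hypothetical encoding of the offer/accept-or-reject state machine described above.
# State and message names are illustrative assumptions.

TRANSITIONS = {
    ("start",     "merchant:offer"):   "offered",
    ("offered",   "customer:reject"):  "done",       # execution 1: offer rejected
    ("offered",   "customer:accept"):  "accepted",   # execution 2: offer accepted,
    ("accepted",  "merchant:deliver"): "delivered",  # then the item and the payment
    ("delivered", "customer:pay"):     "done",       # are exchanged
}

def run(messages):
    state = "start"
    for m in messages:
        state = TRANSITIONS[(state, m)]
    return state

print(run(["merchant:offer", "customer:reject"]))  # -> done
print(run(["merchant:offer", "customer:accept",
           "merchant:deliver", "customer:pay"]))   # -> done
```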
Drawbacks:
1. If one developer implements all the interacting agents, the developer can be assured that an agent would send a particular message only in a particular internal state. This is not the case in a multi-agent system, where the agents are typically implemented independently of one another.
2. The FIPA specifications have ended up with a split personality.
CS8691 – ARTIFICIAL INTELLIGENCE UNIT IV
the lack of control: no agent has control over another agent. To get things done, agents set up the
appropriate commitments by interacting. Any expectation from an agent beyond what the agent
has explicitly committed would cause hidden coupling. Explicit meanings - The meaning ought
to be made public, not hidden within agent implementations.
Communication Based Methodologies
A number of methodologies for designing and implementing multi-agent systems are
based on communications. The common idea behind these methodologies is to identify the
communications involved in the system being specified and to state the meanings of such
communications. The main protocol concepts are roles, messages, and message meanings. The
high-level considerations involved in designing a protocol are:
Identify stakeholder requirements.
Identify the roles involved in the interaction.
If a suitable protocol is available in a repository, then choose it and we are done.
Often the required protocol may be obtained by composing existing protocols.
Sometimes the protocol, or parts of it, may need to be written from scratch. To write a new protocol, identify the communications among the roles and how each message would affect the commitments of its sender and receiver (a sketch follows this list).
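As a hypothetical illustration of stating a message's meaning in terms of the commitments it affects (the Offer message, the field names, and the commitment form are assumptions, not taken from any specific methodology described above):

```python
# Hypothetical sketch: the meaning of a protocol message stated as its effect
# on commitments between sender and receiver. All names are assumptions.

from dataclasses import dataclass

@dataclass
class Commitment:
    debtor: str      # who is committed
    creditor: str    # to whom
    antecedent: str  # condition under which ...
    consequent: str  # ... the debtor must bring this about

def meaning_of_offer(sender, receiver, item, price):
    """Offer(item, price): sender commits to deliver the item if the receiver pays the price."""
    return Commitment(debtor=sender, creditor=receiver,
                      antecedent=f"pay({price})", consequent=f"deliver({item})")

print(meaning_of_offer("merchant", "customer", "book", 12))
```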
4.3.8 Advanced Topics and Challenges
Primacy of Meaning – Adopting a meaning-based protocol protects one’s models from
unplanned dependencies and yields the highest flexibility for the participating agents
while maintaining correctness.
Verifying Compliance - When two agents talk to one another, they must agree sufficiently on what they are talking about and be able to verify whether the others are complying with the expectations placed on them.
Protocol Refinement and Aggregation - Refinement deals with how a concept refines
another in the sense of the is-a hierarchy. Aggregation deals with how concepts are put
together into composites in the sense of the part-whole hierarchy.
Role Conformance – Ensure the role as published by a vendor conforms with the role as
derived from a protocol.
There are two ways to model such bilateral negotiations: using cooperative game theory
and using non-cooperative game theory. In cooperative games, agreements are binding or
enforceable, possibly by law. When agreements are binding, it is possible for the players to
negotiate outcomes that are mutually beneficial. In non-cooperative games, agreements are not
binding. Here, the players are self-interested and their focus is on individually beneficial
outcomes. So a player may have an incentive to deviate from an agreement in order to improve
its utility.
                                    B: Quiet/Cooperate               B: Defect/Testify against partner
A: Quiet/Cooperate                  Both serve 1 month               A serves 1 year, B goes free
A: Defect/Testify against partner   A goes free, B serves 1 year     Both serve 3 months
Table 4.1: Prisoner’s Dilemma game
Consider the Prisoner’s Dilemma game given in Table 4.1. Assume that this game is non-cooperative. Then the dominant strategy for both players is to defect (testify against the partner), and the resulting outcome is not Pareto optimal. In contrast, if the same game were played as a cooperative game and the players agreed to stay quiet, then both players would benefit. The agreement (quiet, quiet) would be binding and the resulting outcome would be Pareto optimal.
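A minimal sketch checking Table 4.1 (payoffs are encoded as months served, so lower is better; the helper names are assumptions):

```python
# Hypothetical check of Table 4.1. Payoffs are months served (lower is better).
MONTHS = {  # (A's move, B's move) -> (A's months, B's months)
    ("quiet",  "quiet"):  (1, 1),
    ("quiet",  "defect"): (12, 0),
    ("defect", "quiet"):  (0, 12),
    ("defect", "defect"): (3, 3),
}

def defect_dominates_for_A():
    # "defect" dominates if it gives A fewer months whatever B does
    return all(MONTHS[("defect", b)][0] < MONTHS[("quiet", b)][0]
               for b in ("quiet", "defect"))

print(defect_dominates_for_A())              # -> True: both players defect in equilibrium
print(MONTHS[("defect", "defect")],          # (3, 3) is Pareto-dominated by
      MONTHS[("quiet", "quiet")])            # (1, 1), the cooperative agreement
```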
Cooperative Models of Single-Issue Negotiation
Consider a two-person bargaining situation with two individuals who have the
opportunity to collaborate for mutual benefit. Thus, situations such as trading between a buyer
and a seller, or negotiation between an employer and a labor union, may be regarded as bargaining problems. There will be
more than one way of collaborating, and how much an individual benefits depends on the actions
taken by both agents. Nash analyzed the bargaining problem and defined a solution/outcome, by
determining how much each individual should expect to benefit from the situation, using an
axiomatic approach.
There are two players (say a and b) who want to come to an agreement over the alternatives in an arbitrary set A. Failure to reach an agreement, i.e., disagreement, is represented by a designated outcome denoted D. Each agent i ∈ {a, b} has a utility function over A ∪ {D}. The set of all utility pairs that result from an agreement is called the bargaining set S, and d denotes the pair of disagreement utilities.
Definition: A bargaining problem is defined as a pair (S, d). A bargaining solution is a function f that maps every bargaining problem (S, d) to an outcome in S, i.e., f : (S, d) → S with f(S, d) ∈ S.
The payoff allocations that two players ultimately get should depend only on the
following two factors:
1. The set of payoff allocations that are jointly feasible for the two players in the process
of negotiation or arbitration, and
2. The payoffs they would expect if negotiation or arbitration were to fail to reach a
settlement.
Axiom 1 (Individual Rationality) - The bargaining solution should give neither player less than what it would get from disagreement, i.e., fi(S, d) ≥ di for each player i.
Axiom 2 (Symmetry) - When the players’ utility functions and their disagreement utilities are
the same, they receive equal shares.
Axiom 3 (Strong Efficiency) - The bargaining solution should be feasible and Pareto optimal.
Axiom 4 (Invariance) - The solution should not change as a result of linear changes to the utility
of either player.
Axiom 5 (Independence of Irrelevant Alternatives) - Eliminating feasible alternatives that
would not have been chosen should not affect the solution.
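These axioms jointly characterize the Nash bargaining solution, which (a standard result, not spelled out in the text above) selects the feasible, individually rational utility pair that maximizes the product of the players' gains over their disagreement payoffs:

```latex
% Nash bargaining solution (a sketch; d = (d_a, d_b) are the disagreement utilities)
f(S, d) \;=\; \underset{(u_a,\,u_b)\,\in\,S,\;\; u_a \ge d_a,\; u_b \ge d_b}{\arg\max}\; (u_a - d_a)\,(u_b - d_b)
```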
Non-Cooperative Models of Single-Issue Negotiation
A key difference between the cooperative and non-cooperative models is that the former
does not specify a negotiation procedure, whereas the latter does. There are two players and a unit of good, an apple, to be split between them. If player a gets a share xa ∈ [0, 1], then player b gets xb = 1 − xa. Neither player receives anything unless the two players come to an agreement. Here the apple can be split between the players, so the issue is said to be divisible.
In each time period, one player proposes a split of the good; in the first period, player a makes an offer, which player b can accept or reject. If player b accepts, the game ends successfully with the good being split as per player a’s proposal. Otherwise, the game continues to the next time period, in which player b proposes a counteroffer to player a. This process of making offers and counteroffers continues until one of the players accepts the other’s offer. Since the players take turns in making offers, this is known as the alternating offers protocol.
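A minimal sketch of the alternating offers protocol (the players' concession schedules and acceptance thresholds are illustrative assumptions):

```python
# Hypothetical simulation of the alternating offers protocol. The players'
# offers and acceptance thresholds per round are assumptions.

def alternating_offers(offers_a, offers_b, accept_a, accept_b, max_rounds=10):
    """offers_x[t] is the share player x proposes to keep in round t;
    accept_x is the minimum share player x is willing to accept."""
    for t in range(max_rounds):
        if t % 2 == 0:                       # player a proposes, b responds
            xa = offers_a[t]
            if 1 - xa >= accept_b:
                return ("agreement", xa, 1 - xa, t)
        else:                                # player b proposes, a responds
            xb = offers_b[t]
            if 1 - xb >= accept_a:
                return ("agreement", 1 - xb, xb, t)
    return ("disagreement", 0, 0, max_rounds)

# Both players concede over time until an offer crosses the other's threshold.
print(alternating_offers(offers_a=[0.9, 0.8, 0.7, 0.6, 0.5] * 2,
                         offers_b=[0.9, 0.8, 0.7, 0.6, 0.5] * 2,
                         accept_a=0.35, accept_b=0.35))
# -> ('agreement', 0.4, 0.6, 3): b's round-3 counteroffer is accepted by a
```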
4.4.2 Game-Theoretic Approaches for Multi-Issue Negotiation
There are four key procedures for bargaining over multiple issues:
1. Global bargaining: Here, the bargaining agents directly tackle the global problem in
which all the issues are addressed at once.
1. A time-dependent strategy takes "time" as an input and returns an offer such that concessions are small at the beginning of negotiation but increase as the deadline approaches (a sketch follows this list).
2. Resource-dependent strategies are similar to time-dependent strategies, except that the domain of the function is the quantity of resources available rather than the remaining time.
3. A behavior-dependent strategy simply imitates its opponent’s strategy in order to protect itself from being exploited by the opponent.
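A minimal sketch of a time-dependent concession strategy (the functional form and parameter names are illustrative assumptions, not a specific published strategy):

```python
# Hypothetical time-dependent concession strategy: offers concede slowly at first
# and faster as the deadline approaches. Parameter names are assumptions.

def time_dependent_offer(t, deadline, reserve, aspiration, exponent=3.0):
    """Offer at time t in [0, deadline]. With exponent > 1, the concession
    fraction stays small early and grows quickly near the deadline.
    Returns a value between aspiration (best for us) and reserve (worst acceptable)."""
    alpha = (t / deadline) ** exponent       # concession fraction in [0, 1]
    return aspiration + alpha * (reserve - aspiration)

for t in (0, 5, 9, 10):
    print(t, round(time_dependent_offer(t, deadline=10, reserve=40, aspiration=100), 1))
# prints: 0 100.0 / 5 92.5 / 9 56.3 / 10 40.0
```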
Heuristics can also be used to predict the opponent’s preferences for the issues. This
prediction is relevant to situations where the negotiators have limited information about their
opponents. Here, any information gained from the opponent’s offers in the past rounds is used to
heuristically form a prediction of the future. An agent’s preferences will be revealed when it
makes offers to the mediator, because an agent will only propose those offers that are optimal
from its individual perspective.
Argumentation can be seen as a reasoning process consisting of the following four steps:
1. Constructing arguments (in favor of/against a ―statement‖) from available information.
2. Determining the different conflicts among the arguments.
3. Evaluating the acceptability of the different arguments.
4. Concluding, or defining, the justified conclusions.
A number of recent attempts have been made to provide a general, unifying definition.
Prakken’s recent unifying framework is quite general and highlights the most important concepts. Prakken defines an argumentation system as a tuple (L, ¯, Rs, Rd, ≤), consisting of a logical language L, a contrariness function ¯ : L → 2^L mapping each formula to its set of contraries, two disjoint sets of strict rules Rs and defeasible rules Rd, and a partial order ≤ over Rd. Classical negation is captured by requiring that ¬ϕ is a contrary of ϕ and ϕ is a contrary of ¬ϕ. A particular knowledge base is a pair (K, ≤′), with K ⊆ L divided into the following disjoint sets: Kn, the necessary axioms (which cannot be attacked); Kp, the ordinary premises; Ka, the assumptions; and Ki, the issues. Finally, ≤′ is a partial order on K \ Kn.
From a knowledge base, arguments are built by applying inference rules to subsets of K.
In an inference rule ϕ1, ..., ϕn → ϕ, the left-hand side ϕ1, ..., ϕn is called the premises (or antecedents), while the right-hand side ϕ is called the conclusion (or consequent). An argument A is, intuitively, any of the following structures:
ϕ, where ϕ ∈ K, with Prem(A) = {ϕ}, Conc(A) = ϕ, and Sub(A) = {ϕ};
A1, ..., An → ψ, where A1, ..., An are arguments and there exists in Rs a strict rule Conc(A1), ..., Conc(An) → ψ;
A1, ..., An ⇒ ψ, where A1, ..., An are arguments and there exists in Rd a defeasible rule Conc(A1), ..., Conc(An) ⇒ ψ.
Argument as an Instance of a Scheme
Argumentation schemes are forms (or categories) of argument, representing stereotypical
ways of drawing inferences from particular patterns of premises to conclusions in a particular
domain. Walton’s "sufficient condition scheme for practical reasoning" may be described as follows [1]:
In the current circumstances R,
we should perform action A,
which will result in new circumstances S,
which will realize goal G,
which will promote some value V.
Abstract Arguments
An argument can be seen as a node in an argument graph, in which directed arcs capture defeat between arguments. Despite its simplicity, this model is surprisingly powerful. For example, consider a graph in which argument α1 has two defeaters (i.e., counterarguments) α2 and α4, which are themselves defeated by arguments α3 and α5, respectively.
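A minimal sketch of defeat between abstract arguments, using the five-argument example above; the iterative labelling below is one standard way of computing the grounded labelling (in/out/undec), not part of the original text:

```python
# Hypothetical sketch: abstract arguments as nodes, defeat as directed edges,
# with a simple grounded labelling (in / out / undec) computed by iteration.

defeats = {          # attacker -> set of attacked arguments (example above)
    "a2": {"a1"}, "a4": {"a1"}, "a3": {"a2"}, "a5": {"a4"},
}
args = {"a1", "a2", "a3", "a4", "a5"}
attackers = {a: {b for b in args if a in defeats.get(b, set())} for a in args}

label = {a: "undec" for a in args}
changed = True
while changed:
    changed = False
    for a in args:
        if label[a] == "undec":
            if all(label[b] == "out" for b in attackers[a]):
                label[a] = "in"; changed = True      # all attackers defeated
            elif any(label[b] == "in" for b in attackers[a]):
                label[a] = "out"; changed = True     # some accepted attacker

print(label)  # a3, a5 are in; a2, a4 are out; hence a1 is in (it is defended)
```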
The dialogue game involves two players, one called PRO (the proponent) and the other OPP (the opponent). The dialogue begins with PRO asserting an argument x. Then PRO and OPP take turns, in a sequence of moves called a dispute, where each player makes an argument that attacks its counterpart’s last move. A player wins a dispute if its counterpart cannot make a counterattack, but the counterpart may try a different line of attack, creating a new dispute. This results in a dispute tree structure that represents the dialogue.
Figure 5.3: (i) An argumentation framework, (ii) the dispute tree induced by argument a, and (iii) the dispute tree induced by argument a under protocol G, with the winning strategy encircled.
Definition 5.10 (Protocol G) Given an argument graph and a dispute D whose tail is argument x, let PRO(D) be the set of arguments uttered by PRO.
If the dispute length is odd (the next move is by OPP), then the possible next moves are {y | y → x}.
If the dispute length is even (the next move is by PRO), then the possible next moves are {y | y → x and y ∉ PRO(D)}.
Theorem 5.1 Let ⟨A, →⟩ be an argument graph. There exists a winning strategy T for x under protocol G such that the set of arguments uttered by PRO in T is conflict-free, if and only if x is in the grounded extension of ⟨A, →⟩.
A dialogue system relies on having the explicit contents of the arguments presented. An example exchange:
P1[−]: claim r
O2[P1]: why r
P3[O2]: r since q, q ⇒ r
O4[P3]: why q
P5[O4]: q since p, p ⇒ q
O6[P5]: concede p ⇒ q
O7[P5]: why p
Note that at this point, player P has many possible moves: it can retract its claim or the premises of its argument, or give an argument in support of p. It was also possible for player O to raise other challenges against P’s original claim.
Glazer and Rubinstein’s Model - This model explores the mechanism design problem of constructing rules of debate that maximize the probability that a listener reaches the right conclusion, given arguments presented by two debaters.
Game Theory Background - The field of game theory studies the strategic interactions of self-interested agents. There is a set of self-interested agents whose preferences are defined over the set of all possible outcomes. When agents interact, we say that they are playing strategies. A strategy for an agent is a plan that describes what actions the agent will take for every decision the opponent agent might make. A strategy profile is a combination of strategies, one per agent; it determines the outcome that results when each agent plays its strategy, and hence each agent’s utility.
A solution concept called Nash equilibrium is a strategy profile in which each agent is following a strategy that maximizes its own utility, given its type and the strategies of the other agents.
Mechanism Design - Mechanism design ensures that a desirable system-wide outcome
or decision is made when there is a group of self-interested agents who have preferences
over the outcomes.
The Revelation Principle - The revelation principle states that we can limit our search to a special class of mechanisms. A social choice function is a rule f : Θ1 × ... × ΘI → O that selects some outcome f(θ) ∈ O, given the types θ = (θ1, ..., θI) of the agents. Theorem 5.2 (Revelation principle): If there exists some mechanism that implements social choice function f in dominant strategies, then there exists a direct mechanism that implements f in dominant strategies and is truthful.
4.5.4 The Argument Interchange Format
The Argument Interchange Format (AIF) provides a common language for argument representation, to facilitate argumentation among agents in an open system. The core AIF has two types of nodes: information nodes (or I-nodes) and scheme nodes (or S-nodes). Information nodes are used to represent passive information contained in an argument, such as a claim, premise, data, etc. Scheme nodes capture the application of schemes (i.e., patterns of reasoning). The present ontology has three different types of scheme nodes: rule of inference application nodes (RA-nodes), preference application nodes (PA-nodes), and conflict application nodes (CA-nodes).
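A minimal sketch of the two kinds of AIF nodes (the field names and the modus ponens example are illustrative assumptions):

```python
# Hypothetical sketch of core AIF concepts: I-nodes hold passive information,
# S-nodes (here an RA-node) capture the application of an inference scheme.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class INode:            # information node: claim, premise, data, ...
    content: str

@dataclass
class RANode:           # rule-of-inference application node
    scheme: str
    premises: List[INode] = field(default_factory=list)
    conclusion: Optional[INode] = None

p1 = INode("The streets are wet")
p2 = INode("If the streets are wet, it rained")
c  = INode("It rained")
arg = RANode(scheme="modus ponens", premises=[p1, p2], conclusion=c)
print(arg.scheme, "->", arg.conclusion.content)
```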
Decentralized Approaches
The decentralized approach relies on the individual information that each agent can
obtain about the society. This information can be images and reputation values coming from
third-party agents but also can be information about social relations, roles, etc. Decentralized
approaches have the following advantages:
No trust of an external central entity is necessary.
They are suited to build scalable systems as they do not introduce any bottleneck.
Each agent can decide the method that it wants to follow to calculate reputation.
But they imply the following drawbacks:
It can take some time for the agent to obtain enough information from the rest of the
society to calculate a reliable reputation value.
It is not so easy for newcomers to start using reputation in a society that does not have a
centralized reputation service.
It demands more complex and "intelligent" agents, as they need to encapsulate processes for reasoning about reputation messages received, calculating reputation, and deciding when to communicate reputation to others.
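A minimal sketch of how an individual agent might aggregate reputation information in a decentralized way (the weighting of direct experience against witness reports is an illustrative assumption):

```python
# Hypothetical decentralized reputation estimate: each agent combines its own
# direct experiences with third-party reports, discounted by trust in each witness.

def reputation(own_experiences, witness_reports, own_weight=0.7):
    """own_experiences: list of outcomes in [0, 1] observed directly.
    witness_reports: list of (reported_value, trust_in_witness) pairs."""
    direct = sum(own_experiences) / len(own_experiences) if own_experiences else 0.5
    if witness_reports:
        total_trust = sum(t for _, t in witness_reports)
        witness = sum(v * t for v, t in witness_reports) / total_trust
    else:
        witness = 0.5                      # no third-party information: neutral prior
    return own_weight * direct + (1 - own_weight) * witness

print(round(reputation([1.0, 0.8, 1.0], [(0.4, 0.9), (0.9, 0.3)]), 3))  # -> 0.811
```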
1.Define an agent?
Agents are systems that can decide for themselves what they need to do in order to
achieve the objectives that we delegate to them. Intelligent agents, or sometimes
autonomous agents are agents that must operate robustly in rapidly changing,
unpredictable, or open environments, where there is a significant possibility that actions
can fail.
2. Give the classification of environment properties.
Accessible vs. inaccessible
Deterministic vs. non-deterministic
Episodic vs. non-episodic
Static vs. dynamic
Discrete vs. continuous
3.List down the four important classes of agent architecture?
Logic-based agents
Reactive agents
Belief-desire-intention agents
Layered architectures
4. What are the rules that govern the vacuum cleaning agent’s behavior?
In(x, y) ∧ Dirt(x, y) → Do(suck) ------(4.1)
In(0,0) ∧ Facing(north) ∧ ¬Dirt(0,0) → Do(forward) ------(4.2)
In(0,1) ∧ Facing(north) ∧ ¬Dirt(0,1) → Do(forward) ------(4.3)
In(0,2) ∧ Facing(north) ∧ ¬Dirt(0,2) → Do(turn) ------(4.4)
In(0,2) ∧ Facing(east) → Do(forward) ------(4.5)
5.Give the disadvantages of Reactive Architecture
• Agents need sufficient information available in their local environment.
• It is difficult to see how decision making could take into account non-local
information.
The set of payoff allocations that are jointly feasible for the two players in the
process of negotiation or arbitration, and
The payoffs they would expect if negotiation or arbitration were to fail to reach a
settlement.
16.What is ABN?
Argumentation-based negotiation (ABN) approaches, on the other hand, enable agents to
exchange additional meta-information (i.e., arguments) during negotiation.
17.State the steps in argumentation?
Argumentation can be seen as a reasoning process consisting of the following four steps:
1. Constructing arguments (in favor of/against a ―statement‖) from available information.
2. Determining the different conflicts among the arguments.
3. Evaluating the acceptability of the different arguments.
4. Concluding, or defining, the justified conclusions.
18.Define Argument labeling?
Let AF = ⟨A, →⟩ be an argumentation framework. A labeling is a total function L : A → {in, out, undec} such that:
∀α ∈ A : L(α) = out ≡ ∃β ∈ A such that β → α and L(β) = in
∀α ∈ A : L(α) = in ≡ ∀β ∈ A : if β → α then L(β) = out
Otherwise, L(α) = undec (since L is a total function).
19. What is the idea of Glazer and Rubinstein’s model?
Glazer and Rubinstein’s model explores the mechanism design problem of constructing rules of debate that maximize the probability that a listener reaches the right conclusion, given arguments presented by two debaters.
20.Define Trust and give trust evaluation process?
Trust is the outcome of observation leading to the belief that the actions of another may be relied upon, without explicit guarantee, to attain the goal. The trust evaluation process involves:
1. Filtering the Inputs
2. Statistical Aggregation
3. Logical Beliefs Generation