AI Unit-4 Software Agents Communication

This document discusses intelligent agents and agent communication. It defines an intelligent agent as an autonomous entity that acts using sensors and actuators to achieve goals by learning from its environment. The four main rules for an AI agent are to perceive the environment, use observations to make decisions, have decisions result in actions, and take rational actions. The document then discusses agent interaction, communication using speech act theory, agent communication languages like KQML and FIPA-ACL, and defines the semantics of request and inform speech acts in FIPA-ACL.


Intelligent Agents:

An intelligent agent is an autonomous entity that perceives its environment through sensors and acts upon it through actuators in order to achieve its goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is a simple example of an intelligent agent.

Following are the four main rules for an AI agent (a brief code sketch follows the list):

o Rule 1: An AI agent must have the ability to perceive the environment.
o Rule 2: The observations must be used to make decisions.
o Rule 3: Decisions should result in actions.
o Rule 4: The actions taken by an AI agent must be rational.
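
To make the perceive-decide-act cycle concrete, here is a minimal sketch in Python of a thermostat agent following the four rules. It is purely illustrative: the class, the method names and the crude temperature model are our own, not part of any agent framework.

# Illustrative sketch of the perceive -> decide -> act cycle for a thermostat agent.
# All names (ThermostatAgent, perceive, decide, act) are hypothetical.

class ThermostatAgent:
    def __init__(self, target_temp: float):
        self.target_temp = target_temp          # the agent's goal
        self.heater_on = False                  # actuator state

    def perceive(self, environment: dict) -> float:
        # Rule 1: sense the environment (here, read the current temperature).
        return environment["temperature"]

    def decide(self, temperature: float) -> bool:
        # Rule 2 and Rule 4: use the observation to make a rational decision.
        return temperature < self.target_temp

    def act(self, should_heat: bool, environment: dict) -> None:
        # Rule 3: the decision results in an action that changes the environment.
        self.heater_on = should_heat
        if should_heat:
            environment["temperature"] += 0.5   # crude model of heating

env = {"temperature": 18.0}
agent = ThermostatAgent(target_temp=21.0)
for _ in range(10):                             # simple agent loop
    obs = agent.perceive(env)
    agent.act(agent.decide(obs), env)
print(env["temperature"], agent.heater_on)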

Coherence. An intelligent agent acts as a single purposeful entity. It attempts to construct a consistent interpretation of the world, acquire information relevant to its ongoing activities, notice important unanticipated events, and take actions that advance its goals. It establishes and modifies its own goals and allocates its own limited resources among competing tasks in a purposeful, coordinated manner, in accordance with its current goals and constraints and its current interpretation of the environment.
Agent interaction and communication

• So far, we have dealt exclusively with single agents
• Today's lecture marks the beginning of the second block of the course syllabus: foundations of multiagent systems
• We will be talking about agents interacting in a common environment
• Focus will be on different forms of interaction
• Interaction does not always imply action
• Coordination does not always imply communication
Basic typology of interaction
[Diagram: basic typology of interaction, relating interaction, coordination (competition vs. cooperation), collaboration and communication]

• Non-/Quasi-communicative interaction:
• Shared environment (interaction via resource/capability sharing)
• "Pheromone" communication (ant algorithms)

• Communication:
• Information exchange: sharing knowledge, exchanging views
• Collaboration, distributed planning: optimising use of resources and distribution of tasks, coordinating execution
• Negotiation: reaching agreement in the presence of conflict (e.g. human-machine dialogue, reporting errors)

Speech act theory

• Most multiagent approaches to communication are based on speech act theory (started by Austin (1962))
• Underlying idea: treat communication in a similar way to non-communicative action
• Pragmatic theory of language, concerned with how communication is used in the context of agent activity
• Austin (1962): utterances are produced like "physical" actions to change the state of the world
• Speech act theory is a theory of how utterances are used to achieve one's intentions
• A speech act can be conceptualised to consist of:
1) Locution (physical utterance)
2) Illocution (intended meaning)
3) Perlocution (resulting action)
• Two parts of a speech act:
• Performative = communicative verb used to distinguish between different "illocutionary forces"
• Examples: promise, request, purport, insist, demand, etc.
• Propositional content = what the speech act is about

• Example (a minimal representation is sketched below):
• Performative: request/inform/enquire
• Propositional content: "the window is open"
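
A speech act can thus be represented minimally as a performative paired with propositional content. The following sketch (Python; the dataclass and field names are our own, not part of any ACL standard) shows the same content under three different illocutionary forces.

# Minimal representation of a speech act: performative + propositional content.
from dataclasses import dataclass

@dataclass
class SpeechAct:
    performative: str      # e.g. "request", "inform", "enquire"
    content: str           # propositional content, e.g. "the window is open"

# Same propositional content, three different illocutionary forces:
acts = [SpeechAct("request", "the window is open"),
        SpeechAct("inform",  "the window is open"),
        SpeechAct("enquire", "the window is open")]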
• Searle (1972) identified the following categories of performatives:
• assertives/representatives (informing, making a claim)
• directives (requesting, commanding)
• commissives (promising, refusing)
• declaratives (effecting change to state of the world)
• expressives (expressing mental states)

• Ambiguity problems:
• “Please open the window!”
• “The window is open.”
• “I will open the window.”

• Debate as to whether this (or any!) typology is appropriate (and innate to human thinking)
• Austin and Searle also analysed the conditions under which speech acts can be successfully completed
• Austin's felicity conditions:
1. There must be an accepted conventional procedure for the performative
2. The procedure must be executed correctly and completely
3. The act must be sincere, and any uptake must be completed as far as possible
• Searle's properties for success of (e.g.) a request:
1. I/O conditions (ability to hear the request, normal situation)
2. Preparatory conditions must hold (the requested action can be performed, the speaker must believe this, the hearer will not perform the action anyway)
3. Sincerity conditions (wanting the action to be performed)

Speech Acts as Rational Action

• If communication is like action, what should agents say?
• Cohen and Perrault (1979) proposed applying planning techniques to speech acts (STRIPS-style)
• Pre- and post-conditions would describe beliefs, abilities and wants of participants
• Distinction between "can-do" and "want" preconditions
• Identified the necessity of mediating acts, since speech acts say nothing about the perlocutionary effect
• Cohen and Levesque later integrated this into their model of intentions (as previously discussed)
• Example of the Cohen-Perrault model (a code sketch follows):

Request(S, H, α)
pre-can: (S BEL (H CAN α)) ∧ (S BEL (H BEL (H CAN α)))
pre-want: (S BEL (S WANT requestInstance))
effect: (H BEL (S BEL (S WANT α)))

CauseToWant(A1, A2, α)
pre-can: (A1 BEL (A2 BEL (A2 WANT α)))
effect: (A1 BEL (A1 WANT α))
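
Read as a STRIPS-style operator, Request can be paraphrased in code. The sketch below is only an illustration of the idea: beliefs and wants are modelled as plain sets of strings, and all names are our own rather than Cohen and Perrault's notation.

# Illustrative STRIPS-style sketch of the Cohen-Perrault Request operator.
# Beliefs/wants are plain sets of strings; all identifiers are hypothetical.

def request(speaker: dict, hearer: dict, action: str) -> bool:
    # pre-can: S believes H can do the action, and believes H believes so too
    pre_can = (f"CAN({hearer['name']},{action})" in speaker["beliefs"] and
               f"BEL({hearer['name']},CAN({hearer['name']},{action}))" in speaker["beliefs"])
    # pre-want: S wants this request instance to be made
    pre_want = f"WANT(request,{action})" in speaker["wants"]
    if not (pre_can and pre_want):
        return False
    # effect: H comes to believe that S wants the action done
    hearer["beliefs"].add(f"BEL({speaker['name']},WANT({speaker['name']},{action}))")
    return True

S = {"name": "S", "beliefs": {"CAN(H,open_window)", "BEL(H,CAN(H,open_window))"},
     "wants": {"WANT(request,open_window)"}}
H = {"name": "H", "beliefs": set(), "wants": set()}
print(request(S, H, "open_window"))   # True: preconditions hold, effect added to H's beliefs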

• This has been the most influential approach to using communication in multiagent systems!

Agent Communication Languages

• Agent communication languages (ACLs) define standards for messages exchanged among agents
• Usually based on speech act theory; messages are specified by:
• Sender/receiver(s) of the message
• Performative to describe intended actions
• Propositional content in some content language

• Most commonly used languages:
• KQML/KIF
• FIPA-ACL (today the de-facto standard)
• FIPA = Foundation for Intelligent Physical Agents

KQML/KIF
• KQML – Knowledge Query and Manipulation Language
• An “outer” language, defines various acceptable performatives
• Example performatives:
• ask-if(‘is it true that...’)
• perform (‘please perform the following action...’)
• tell(‘it is true that...’)
• reply(‘the answer is ...’)
• Message format:
(performative
:sender <word> :receiver <word>
:in-reply-to <word> :reply-with <word>
:language <word> :ontology <word>
:content <expression>)
Example

(advertise
:sender Agent1
:receiver Agent2
:in-reply-to ID1
:reply-with ID2
:language KQML
:ontology kqml-ontology
:content (ask
:sender Agent1
:receiver Agent3
:language Prolog
:ontology blocks-world
:content "on(X,Y)"))

• KQML does not say anything about the content of messages → need content languages
• KIF – Knowledge Interchange Format: a logical language to describe knowledge
• Essentially first-order logic with some extensions/restrictions
• Examples:
• (=> (and (real-num ?x) (even-num ?n)) (> (expt ?x ?n) 0))
• (interested joe '(salary ,?x ,?y ,?z))

• Can also be used to describe the ontology referred to by interacting agents

• KQML/KIF were very successful, but there were also some problems:
• The list of performatives (up to 41!) was not fixed → interoperability problems
• No formal semantics, only informal descriptions of meaning
• KQML completely lacks commissives, which is a massive restriction!
• The performative set of KQML is rather ad hoc, not theoretically clear or very elegant
• These problems led to the development of FIPA ACL
FIPA ACL
• In recent years, FIPA started work on a program of agent standards – the centrepiece is an ACL called FIPA-ACL
• Basic structure is quite similar to KQML, but semantics are expressed in a formal language called SL
(inform :sender agent1 :receiver agent5
:content (price good200 150)
:language sl :ontology hpl-auction)
• "Inform" and "Request" are the basic performatives; all others (about 20) are macro definitions (defined in terms of these)
• The meaning of inform and request is defined in two parts:
• "Feasibility precondition", i.e. what must be true in order for the speech act to succeed
• "Rational effect", i.e. what the sender of the message hopes to bring about

FIPA ACL Semantics

• Assume Bi ϕ means that i believes ϕ, and Bifi ϕ / Uifi ϕ mean that i knows / is uncertain about the truth value of ϕ
• Basic definitions of the semantics of inform/request in FIPA-ACL (a code paraphrase of the inform case follows below):

⟨i, inform(j, ϕ)⟩
feasibility precondition: Bi ϕ ∧ ¬Bi (Bifj ϕ ∨ Uifj ϕ)
rational effect: Bj ϕ

⟨i, request(j, α)⟩
feasibility precondition: Bi Agent(α, j) ∧ ¬Bi Ij Done(α)
rational effect: Done(α)
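
The inform precondition can be paraphrased in code. The sketch below is purely illustrative (belief stores are modelled as sets of strings; this is not the formal SL semantics): agent i may send inform(j, ϕ) only if i believes ϕ and does not already believe that j knows or is uncertain about ϕ.

# Illustrative check of the FIPA-ACL inform feasibility precondition.
# Belief stores are sets of strings; all naming conventions are our own.

def may_inform(i_beliefs: set, phi: str) -> bool:
    believes_phi = phi in i_beliefs
    # i must not believe that j already knows (Bif_j) or is uncertain about (Uif_j) phi
    believes_j_settled = (f"Bif_j({phi})" in i_beliefs) or (f"Uif_j({phi})" in i_beliefs)
    return believes_phi and not believes_j_settled

i_beliefs = {"price(good200,150)"}
print(may_inform(i_beliefs, "price(good200,150)"))   # True: i believes phi, no belief about j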

PROBLEM
• Here, Agent(α, j) means that j can perform α, and Done(α) means that the action has been done
• Impossible for the speaker to enforce those beliefs on the hearer!
• More generally: No way to verify mental state of agent on the grounds
of its (communicative) behaviour
• Alternative approaches use the notion of social commitments
• "A debtor a is indebted to a creditor b to perform action c (before d)"
• Often public commitment stores are used to track the status of generated commitments (sketched below)
• At least the (non)fulfilment of commitments can be verified
• This is a fundamental problem of all mentalistic approaches to communication semantics!
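
A minimal sketch of a public commitment store (illustrative only; the class and method names are our own) that records commitments of the form "debtor a owes creditor b action c" and lets any observer check whether they have been fulfilled:

# Illustrative public commitment store: commitments are publicly observable, so their
# (non)fulfilment can be verified without inspecting any agent's mental state.

class CommitmentStore:
    def __init__(self):
        self.commitments = []   # list of dicts: debtor, creditor, action, fulfilled

    def add(self, debtor: str, creditor: str, action: str) -> None:
        self.commitments.append({"debtor": debtor, "creditor": creditor,
                                 "action": action, "fulfilled": False})

    def discharge(self, debtor: str, action: str) -> None:
        for c in self.commitments:
            if c["debtor"] == debtor and c["action"] == action:
                c["fulfilled"] = True

    def unfulfilled(self):
        return [c for c in self.commitments if not c["fulfilled"]]

store = CommitmentStore()
store.add("a", "b", "deliver_goods")
store.discharge("a", "deliver_goods")
print(store.unfulfilled())   # [] -> the commitment was honoured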

Ontologies
• One aspect we have not discussed so far: how can agents ensure the terminology they use is commonly understood?
• What are ontologies?
• Philosophically speaking: a theory of the nature of being or existence
• Practically speaking: a formal specification of a shared conceptualisation
• Ontologies have become a prominent area of research, in particular with the rise of the Semantic Web
• Many interesting problems: ontology matching and mapping, ontology negotiation, ontology learning, etc.
• For our purposes it is sufficient to know that agreement on terminology is a prerequisite for meaningful communication

Interaction protocols
• ACLs define the syntax and semantics of individual utterances
• But they don't specify what agent conversations look like
• This is done by interaction protocols for different types of agent dialogues
• Interaction protocols govern the exchange of a series of messages among agents
• They restrict the range and ordering of possible messages (effectively defining patterns of admissible sequences of messages), as in the sketch below
• Often formalised using finite-state diagrams or "interaction diagrams" in FIPA-AgentUML
• They define agent roles, message patterns, and semantic constraints
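
As an illustration of how a protocol restricts the range and ordering of messages, the sketch below encodes a tiny request-style conversation as a finite-state machine. The states and transitions are a simplification of our own, not an official FIPA protocol definition.

# Illustrative finite-state encoding of a small interaction protocol:
# after a request the responder may agree or refuse; an agreement is followed by an inform.
PROTOCOL = {
    ("start",     "request"): "requested",
    ("requested", "agree"):   "agreed",
    ("requested", "refuse"):  "done",
    ("agreed",    "inform"):  "done",
}

def run(messages):
    state = "start"
    for performative in messages:
        key = (state, performative)
        if key not in PROTOCOL:
            raise ValueError(f"'{performative}' not admissible in state '{state}'")
        state = PROTOCOL[key]
    return state

print(run(["request", "agree", "inform"]))   # "done": an admissible conversation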

Contract-net Protocol

• One of the oldest, most widely used agent interaction protocols
• A manager agent announces one or several tasks, and agents place bids for performing them
• The task is assigned by the manager according to an evaluation function applied to the agents' bids (e.g. choose the cheapest agent)
• Idea of exploiting local cost functions (agents' private knowledge) for distributed optimal task allocation
• Even in purely cooperative settings, decentralisation can improve global performance
• A typical example of "how it can make sense to agentify a system"
• Successfully applied to different domains (e.g. transport logistics); a minimal sketch of one round follows
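
The sketch below is illustrative only (the data structures and cost functions are invented for the example): the manager announces a task, contractors bid using their private cost functions, and the task is awarded to the cheapest bidder.

# Illustrative contract-net round: announce -> bid -> award.

def contract_net(task: str, contractors: dict) -> tuple:
    bids = {name: cost_fn(task) for name, cost_fn in contractors.items()}  # call for proposals
    winner = min(bids, key=bids.get)                                       # evaluation function
    return winner, bids[winner]                                            # award the task

contractors = {
    "truck1": lambda task: 40.0,   # private cost estimates (hypothetical values)
    "truck2": lambda task: 25.0,
    "truck3": lambda task: 60.0,
}
print(contract_net("deliver pallet to depot B", contractors))   # ('truck2', 25.0)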
