Week 1
CST3118
Week #1: Introduction to Multi-Agent Systems
Welcome
• Course Outline
• Weekly schedule
• Professor Bio & Contact
• What to expect in the course: what it is about, expected readings before class,
practical activities (in class and in the lab), and the project.
• Class etiquette and weekly work. Late penalties for labs and extensions; for any
emergency, notify the professor before the deadline.
• End-of-class quiz.
• Feedback expectations; oral feedback for labs.
Lesson Overview (Agenda)
In the following lesson, we will explore:
1. Reasons for the interest in multiagent systems
2. Notion of an Agent
3. Agent Architecture
4. Agent Reasoning
Reasons for the interest in multiagent systems
• A main reason for the vast interest and attention multiagent systems are receiving is that
they are seen as an enabling technology for applications that rely on distributed and parallel
processing of data, information, and knowledge in complex computing environments. With
advancing technology, such applications are becoming standard in a variety of domains
such as e-commerce, logistics, supply chain management, telecommunication, health care,
and manufacturing. More generally, such applications are characteristic of several widely
recognized computing paradigms known as grid computing, peer-to-peer computing,
pervasive computing, ubiquitous computing, autonomic computing, service-oriented
computing, and cloud computing.
• Another reason for the broad interest in multiagent systems is that these systems are
seen as a technology and tool that helps to analyze and develop models and theories of
interactivity in large-scale human-centered systems.
Mike Wooldridge
What is an Agent?
• What are the characteristics of an agent?
• This is a question that gets a lot of discussion! (Compare the notion of an object.)
• For our purposes, we find it useful to introduce two increasingly strong
notions of agency:
– weak agency (primarily used in the software agents community);
– strong agency (primarily used in AI).
Mike Wooldridge
A Weak Notion of Agency
• An agent is a hardware or (more usually) software-based computer system that
enjoys the following properties:
• autonomy
agents operate without the direct intervention of humans or others, and
have some kind of control over their actions and internal state;
• social ability
agents interact with other agents (and possibly humans) via some kind
of agent communication language;
Mike Wooldridge
A Weak Notion of Agency (Cont.)
• reactivity
agents perceive their environment (which may be the physical world, a
user via a graphical user interface, a collection of other agents, the
Internet, or perhaps all of these combined), and respond in a timely
fashion to changes that occur in it;
• pro-activeness
agents do not simply act in response to their environment; they are able to
exhibit goal-directed behavior by taking the initiative.
Mike Wooldridge
A Weak Notion of Agency (Cont.)
• (A simple way of conceptualizing an agent is as a kind of UNIX-like software
daemon.)
• Think of (weak) agents as human-like ‘assistants’ or ‘drones’ that are limited in
their abilities:
• you can give them tasks to do, and they can go away and cooperate with
other agents to achieve these tasks;
• also, they are capable of taking the initiative in a limited way, like a
human secretary would.
• The weak notion of agency buys us something: a useful computational
metaphor and abstraction tool.
Mike Wooldridge
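The four weak-agency properties map naturally onto a minimal software interface. The Python
sketch below is illustrative only; the class and method names are assumptions made for this
course, not part of any standard agent toolkit. Autonomy shows up as internal state the agent
itself controls, reactivity as a perceive/act cycle, pro-activeness as goal-driven action
selection, and social ability as message passing.

```python
# Minimal, illustrative sketch of a "weak" agent interface (hypothetical names).
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    content: str

@dataclass
class WeakAgent:
    name: str
    goals: list = field(default_factory=list)   # pro-activeness: goal-directed behaviour
    state: dict = field(default_factory=dict)   # autonomy: internal state under the agent's control
    inbox: list = field(default_factory=list)   # social ability: messages from other agents

    def perceive(self, environment) -> dict:
        # Reactivity: observe the environment (stubbed here).
        return {"observation": environment}

    def act(self, percept: dict) -> str:
        # Respond to the percept, but also take the initiative toward goals.
        if self.goals:
            return f"work-towards:{self.goals[0]}"
        return "idle"

    def send(self, other: "WeakAgent", content: str) -> None:
        # Social ability: communicate via (very simplified) message passing.
        other.inbox.append(Message(sender=self.name, content=content))

# Example: two agents cooperating on a task.
a = WeakAgent("assistant", goals=["book-meeting"])
b = WeakAgent("calendar-agent")
a.send(b, "request: free slots on Friday?")
print(a.act(a.perceive("office")), "|", b.inbox[0].content)
```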
A Strong Notion of Agency
• For some researchers —particularly those in AI—the term ‘agent’ has a stronger
meaning.
• These researchers generally mean an agent to be a computer system that, in
addition to having the properties identified above, is either conceptualized or
implemented using concepts usually applied to people:
• mentalistic notions (belief, desire, obligation, choice, . . . );
• rationality;
• veracity;
• adaptability/learning.
Mike Wooldridge
An Agent
Mike Wooldridge
Micro versus Macro Issues
• In building an agent-based system, there are two sorts of issues to be addressed:
– micro issues:
how do we design and build an agent that is capable of acting
autonomously, reactively, and pro-actively in a time-constrained domain?
– macro issues:
how do we get a society of agents to cooperate effectively (i.e.,
maximizing their coherence and coordination)?
Mike Wooldridge
Agent Architectures
• We want to build agents that enjoy the properties of autonomy, reactivity,
pro-activeness, and social ability that we talked about earlier.
– How do we do this?
– What software/hardware structures are appropriate?
– What is an appropriate separation of concerns?
Mike Wooldridge
Agent Architectures (Cont.)
• Maes defines an agent architecture as:
‘[A] particular methodology for building [agents]. It specifies how . . . the agent can
be decomposed into the construction of a set of component modules and how
these modules should be made to interact. The total set of modules and their
interactions has to provide an answer to the question of how the sensor data
and the current internal state of the agent determine the actions . . . and future
internal state of the agent. An architecture encompasses techniques and
algorithms that support this methodology.’
Mike Wooldridge
Agent Architectures (Cont.)
• Kaelbling characterizes an agent architecture in similar terms: as a collection of interacting
modules, together with a methodology for decomposing an agent into such modules for
particular tasks.
• Most of the rest of this presentation will be taken up with a discussion of the
various kinds of agent architecture that have been developed (primarily in AI).
Mike Wooldridge
Reasoning Agents
• The classical approach to building agents is to view them as a particular type of
knowledge-based system, and bring all the associated methodologies of such
systems to bear.
• This paradigm is known as symbolic AI.
• We define a deliberative agent or agent architecture to be one that:
– contains an explicitly represented, symbolic model of the world;
– makes decisions (for example about what actions to perform) via
symbolic reasoning.
Mike Wooldridge
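As a deliberately simplified illustration of the deliberative idea, the Python sketch below keeps
an explicit symbolic world model as a set of facts and chooses an action by matching symbolic
rules against that model. The facts, rules, and action names are invented for illustration; they
are not taken from any particular system.

```python
# Illustrative deliberative agent: explicit symbolic world model + rule-based decision making.
# All facts, rules, and actions here are hypothetical examples.

world_model = {("dirty", "room1"), ("at", "robot", "room2")}

# Each rule: (preconditions that must all hold, action to perform).
rules = [
    ({("dirty", "room1"), ("at", "robot", "room1")}, "vacuum(room1)"),
    ({("dirty", "room1"), ("at", "robot", "room2")}, "move(room2, room1)"),
]

def decide(model: set) -> str:
    """Symbolic reasoning step: return the action of the first rule whose
    preconditions are all present in the world model."""
    for preconditions, action in rules:
        if preconditions <= model:   # subset test = all preconditions hold
            return action
    return "no-op"

print(decide(world_model))   # -> move(room2, room1)
```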
Reasoning Agents (Cont.)
Mike Wooldridge
Reasoning Agents (Cont.)
• Now if one aims to build an agent in this way, then there are at least two
important problems to be solved:
• 1. The transduction problem: that of translating the real world into an
accurate, adequate symbolic description, in time for that description to be
useful.
• 2. The representation/reasoning problem: that of how to symbolically
represent information about complex real-world entities and processes,
and how to get agents to reason with this information in time for the
results to be useful.
Mike Wooldridge
Reasoning Agents (Cont.)
• The former problem has led to work on vision, speech understanding, learning,
etc.
• The latter has led to work on knowledge representation, automated reasoning,
automatic planning, etc.
• Despite the immense volume of work that these problems have generated, most
researchers would accept that neither is anywhere near solved.
• Even seemingly trivial problems, such as commonsense reasoning, have turned
out to be extremely difficult.
Mike Wooldridge
Reasoning Agents (Cont.)
Mike Wooldridge
AGENT0 and PLACA
• Much of the interest in agents from the AI community has arisen from Shoham’s notion of
agent oriented programming (AOP)
• AOP is a ‘new programming paradigm, based on a societal view of computation’.
• AOP embodies an unashamedly strong notion of agency!
• The key idea that informs AOP is that of directly programming agents in terms of intentional
notions like belief, commitment, and intention.
• The motivation behind such a proposal is that we humans use the intentional stance as an
abstraction mechanism for representing the properties of complex systems.
• In the same way that we use the intentional stance to describe humans, it might be useful
to use it to program machines.
Mike Wooldridge
AGENT0 and PLACA (Cont.)
• Shoham suggested that a complete AOP system will have 3 components:
a logic for specifying agents and describing their mental states;
an interpreted programming language for programming agents;
an ‘agentification’ process, for converting ‘neutral applications’ (e.g., databases)
into agents.
• Results were reported only for the first two components.
• The relationship between the logic and the programming language is one of semantics: the
logic is intended to give meaning to agent programs.
• We will skip over the logic(!), and consider the first AOP language, AGENT0.
Mike Wooldridge
AGENT0 and PLACA (Cont.)
• AGENT0 is implemented as an extension to LISP.
• Each agent in AGENT0 has four components:
a set of capabilities (things the agent can do);
a set of initial beliefs;
a set of initial commitments (things the agent will do); and
a set of commitment rules.
• The key component, which determines how the agent acts, is the commitment rule set.
Mike Wooldridge
AGENT0 and PLACA (Cont.)
• Each commitment rule contains
a message condition;
a mental condition; and
an action.
• On each ‘agent cycle’ . . .
The message condition is matched against the messages the agent has
received;
The mental condition is matched against the beliefs of the agent.
If the rule fires, then the agent becomes committed to the action (the
action gets added to the agent’s commitment set).
Mike Wooldridge
AGENT0 and PLACA (Cont.)
• Actions may be
• private— corresponding to an internally executed subroutine, or
• communicative, i.e., sending messages.
• Messages are constrained to be one of three types:
• ‘requests’ to commit to action;
• ‘unrequests’ to refrain from actions;
• ‘informs’, which pass on information.
Mike Wooldridge
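AGENT0 itself is an extension of LISP; the Python sketch below only mimics the flavour of its
commitment rules and agent cycle. The rule contents, message format, and helper names are
assumptions made for illustration, not Shoham's actual syntax.

```python
# Illustrative AGENT0-style agent cycle in Python (not Shoham's LISP syntax).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CommitmentRule:
    message_condition: Callable[[dict], bool]   # matched against incoming messages
    mental_condition: Callable[[set], bool]     # matched against current beliefs
    action: str                                 # committed to if the rule fires

@dataclass
class Agent0Like:
    beliefs: set = field(default_factory=set)
    commitments: list = field(default_factory=list)
    rules: list = field(default_factory=list)

    def cycle(self, messages: list) -> None:
        # One 'agent cycle': match each rule against messages and beliefs.
        for msg in messages:
            for rule in self.rules:
                if rule.message_condition(msg) and rule.mental_condition(self.beliefs):
                    self.commitments.append(rule.action)   # rule fires: adopt the commitment

# Example rule: commit to shipping if asked to ship AND we believe the item is in stock.
rule = CommitmentRule(
    message_condition=lambda m: m.get("type") == "request" and m.get("content") == "ship-order",
    mental_condition=lambda beliefs: "in-stock" in beliefs,
    action="ship-order",
)

agent = Agent0Like(beliefs={"in-stock"}, rules=[rule])
agent.cycle([{"type": "request", "content": "ship-order"}])
print(agent.commitments)   # -> ['ship-order']
```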
Practical Reasoning
Mike Wooldridge
Intentions in Practical Reasoning
1. Intentions pose problems for agents, who need to determine ways of achieving
them.
If I have an intention to achieve φ, you would expect me to devote resources to deciding
how to bring about φ.
2. Intentions provide a “filter” for adopting other intentions, which must not conflict.
If I have an intention to achieve φ, you would not expect me to adopt an intention ψ such
that φ and ψ are mutually exclusive.
3. Agents track the success of their intentions, and are inclined to try again if their
attempts fail.
If an agent’s first attempt to achieve φ fails, then all other things being equal, it
will try an alternative plan to achieve φ.
Mike Wooldridge
Intentions in Practical Reasoning
4. Agents believe their intentions are possible.
That is, they believe there is at least some way that the intentions could be brought about.
5. Agents do not believe they will not bring about their intentions.
It would not be rational of me to adopt an intention to achieve φ if I believed φ was not possible.
6. Under certain circumstances, agents believe they will bring about their intentions.
It would not normally be rational of me to believe that I would never bring my intentions about;
intentions can fail. Moreover, it does not make sense that if I believe φ is inevitable that I
would adopt it as an intention.
Mike Wooldridge
Intentions in Practical Reasoning
7. Agents need not intend all the expected side effects of their
intentions.
If I believe that φ implies ψ and I intend that φ, I do not necessarily
intend ψ also. (Intentions are not closed under implication: for example, I may intend to
visit the dentist believing that this implies pain, without thereby intending to suffer pain.)
Mike Wooldridge
Intentions in Practical Reasoning
Notice that intentions are much stronger than mere desires:
“My desire to play basketball this afternoon is merely a potential
influencer of my conduct this afternoon. It must vie with my
other relevant desires [. . . ] before it is settled what I will do. In
contrast, once I intend to play basketball this afternoon, the
matter is settled: I normally need not continue to weigh the
pros and cons. When the afternoon arrives, I will normally just
proceed to execute my intentions.” (Bratman, 1990)
Mike Wooldridge
Belief-Desire-Intention Architectures
• These architectures have their roots in the philosophical tradition of understanding practical
reasoning—the process of deciding, moment by moment, which action to perform in the
furtherance of our goals.
• Practical reasoning involves two important processes: deciding what goals we want to
achieve, and how we are going to achieve these goals. The former process is known as
deliberation, the latter as means-ends reasoning.
• Example: having finished your degree, you have a number of options for what to do next. After
generating this set of alternatives, you must choose between them and commit to some. These
chosen options become intentions, which then determine the agent's actions. Intentions lead to
action.
Mike Wooldridge
Belief-Desire-Intention Architectures (Cont.)
• A key problem in the design of practical reasoning agents is that of achieving a good
balance between these different concerns. Specifically, it seems clear that an agent should
at times drop some intentions (because it comes to believe that either they will never be
achieved, they are achieved, or else because the reason for having the intention is no
longer present). It follows that, from time to time, it is worth an agent stopping to reconsider
its intentions. But reconsideration has a cost, in terms of both time and computational
resources. This presents us with a dilemma:
• an agent that does not stop to reconsider sufficiently often will continue attempting
to achieve its intentions even after it is clear that they cannot be achieved, or that
there is no longer any reason for achieving them;
• an agent that constantly reconsiders its intentions may spend insufficient time
actually working to achieve them, and hence runs the risk of never actually
achieving them.
Mike Wooldridge
Belief-Desire-Intention Architectures (Cont.)
• The lesson is that different types of environment require different types of decision
strategies. In a static, unchanging environment, purely pro-active, goal-directed behavior is
adequate. But in more dynamic environments, the ability to react to changes by modifying
intentions becomes more important.
Mike Wooldridge
Belief-Desire-Intention Architectures (Cont.)
Mike Wooldridge
Belief-Desire-Intention Architectures (Cont.)
• As the BDI architecture figure (previous slide) illustrates, there are seven main components to a
BDI agent; a sketch of how they fit together in a control loop follows this list:
• a set of current beliefs, representing information the agent has about its current environment;
• a belief revision function (brf), which takes a perceptual input and the agent's current beliefs,
and on the basis of these, determines a new set of beliefs;
• an option generation function (options), which determines the options available to the agent
(its desires), on the basis of its current beliefs about its environment and its current
intentions;
• a set of current options, representing possible courses of action available to the agent;
• a filter function (filter), which represents the agent's deliberation process, and which
determines the agent's intentions on the basis of its current beliefs, desires, and intentions;
• a set of current intentions, representing the agent's current focus—those states of affairs that
it has committed to trying to bring about;
• an action selection function (execute), which determines an action to perform on the basis of
current intentions.
Mike Wooldridge
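Putting the seven components above together gives the classic BDI interpreter loop. The
following is a schematic Python rendering under simplifying assumptions: brf, options, filter,
and execute are toy placeholders to be replaced by real implementations, and the
reconsideration test gestures at the deliberation dilemma discussed on the earlier slide.

```python
# Schematic BDI control loop (placeholder implementations; the structure follows the
# seven components listed above, not any particular BDI system).

def brf(beliefs, percept):
    """Belief revision: fold the new percept into the belief set."""
    return beliefs | {percept}

def options(beliefs, intentions):
    """Option generation: derive candidate desires from beliefs and intentions."""
    return {"tidy-room"} if "room-dirty" in beliefs else set()

def filter_(beliefs, desires, intentions):
    """Deliberation (the 'filter' function): choose which desires become intentions."""
    return intentions | desires

def execute(intentions):
    """Action selection: pick an action serving the current intentions."""
    return f"act-on:{sorted(intentions)[0]}" if intentions else "no-op"

def should_reconsider(beliefs, intentions):
    """Meta-level control: a cautious agent reconsiders often, a bold one rarely.
    Here we reconsider only when something relevant holds (toy test)."""
    return "room-dirty" in beliefs

def bdi_loop(percepts):
    beliefs, intentions = set(), set()
    for percept in percepts:
        beliefs = brf(beliefs, percept)
        if should_reconsider(beliefs, intentions):
            desires = options(beliefs, intentions)
            intentions = filter_(beliefs, desires, intentions)
        print(execute(intentions))

bdi_loop(["room-dirty", "doorbell"])   # -> act-on:tidy-room (printed twice)
```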
Environments (1)
Russell and Norvig suggest the following classification of environment properties:
1. Accessible versus inaccessible. An accessible environment is one in which the
agent can obtain complete, accurate, up-to-date information about the
environment's state. Most real-world environments (including, for example, the
everyday physical world and the Internet) are not accessible in this sense.
2. Deterministic versus non-deterministic. A deterministic environment is one in
which any action has a single guaranteed effect - there is no uncertainty about
the state that will result from performing an action.
Environments (2)
Russell and Norvig suggest the following classification of environment properties:
3. Static versus dynamic. A static environment is one that can be assumed to
remain unchanged except by the performance of actions by the agent. In
contrast, a dynamic environment is one that has other processes operating on
it, and which hence changes in ways beyond the agent's control. The physical
world is a highly dynamic environment, as is the Internet.
4. Discrete versus continuous. An environment is discrete if there are a fixed,
finite number of actions and percepts in it.
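Russell and Norvig's four distinctions can be recorded in a small data structure, which is handy
when comparing environments (for instance in the exercise on the next slide). The Python sketch
and the example classifications below are illustrative judgement calls, not definitions taken
from their text.

```python
# Illustrative record of Russell & Norvig's environment properties.
from dataclasses import dataclass

@dataclass(frozen=True)
class EnvironmentProperties:
    accessible: bool       # can the agent obtain complete, accurate, up-to-date state?
    deterministic: bool    # does every action have a single guaranteed effect?
    static: bool           # does the environment change only through the agent's own actions?
    discrete: bool         # is there a fixed, finite set of actions and percepts?

# Example classifications (judgement calls, offered only as illustrations):
chess_with_clock = EnvironmentProperties(accessible=True, deterministic=True,
                                         static=False, discrete=True)
physical_world = EnvironmentProperties(accessible=False, deterministic=False,
                                       static=False, discrete=False)
print(chess_with_clock, physical_world, sep="\n")
```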
Time to check your learning!
Give other examples of agents (not necessarily intelligent) that you know of. For
each, define as precisely as possible:
(a) the environment that the agent occupies (physical, software, . . . ), the states
that this environment can be in, and whether the environment is: accessible or
inaccessible; deterministic or non-deterministic; episodic or non-episodic; static or
dynamic; discrete or continuous.
(b) the action repertoire available to the agent, and any pre-conditions associated
with these actions;
(c) the goal, or design objectives of the agent—what it is intended to achieve.
Conclusion
In this lesson, you learned…
1. Reasons for the interest in multiagent systems
2. Notion of an Agent
3. Agent Architecture
4. Agent Reasoning