
Mechanism Design: Some Basic Concepts1

A “principal” faces multiple “agents” who hold private information. The principal
would like to condition his actions on this information. He could simply ask the
agents for their information, but they will not report it truthfully unless the principal
gives them an incentive to do so, either by monetary payments or with some other
instruments he controls. Since providing these incentives is costly, the principal faces
a tradeoff that often results in an inefficient allocation.
The principal could be the social planner (or government) who acts on behalf of
the society (to pursue the interest of the society as a whole, e.g., efficiency):
(1) Regulating monopoly with unknown cost
(2) Collecting tax to finance public project when he does not know the agents’
(citizens’) valuations of the project
The principal can also be someone who is pursuing his own interest (self-interested):
(1) A seller (principal), not knowing the buyers’ willingness to pay, needs to
design an auction mechanism to determine who purchases the good and the sale
price
(2) In second-degree price discrimination, a monopolist (principal), who has in-
complete information about the willingness to pay of the consumers, designs
a price schedule that determines the price to be paid by each consumer as a
function of the quantity purchased.
(3) Insurance companies design a menu of contracts to screen customers.
Lastly, the principal can also be a mediator of two parties:
In the problem of bilateral exchange, a mediator designs a trading mechanism
between a seller who has private information about the production cost and a
buyer who has private information about his willingness to pay for the good.
In an ideal world where the principal has all relevant information (including indi-
viduals’ preferences), Mechanism Design would not be necessary. The lack of sufficient
information makes it necessary for the principal to search for means to achieve, if
possible, certain social goals or private objectives. To achieve certain social goals, for
example, voting has been used in many cases to aggregate individuals’ preferences.
Voting, however, often fails to deliver efficiency, even in the absence
of any strategic behavior (for example, the median voter and the average voter may
1 Lecture notes prepared by Licun Xue. For details, see Mas-Colell, Whinston, and Green (1995).

have different preferences). What is more, individuals may have incentive to misrep-
resent their preferences in order to manipulate the final outcome. Strategic voting is
a typical example of noncooperative manipulation. Mechanism design deals with the
following problem: Given the principal’s objective (again the principal could be the
social planner who pursues some desirable social ethics such as equity or efficiency),
is it possible, if so how, to decentralize the decision power among individual agents
in such a way that by freely exercising this decision power (e.g. utility-maximizing
agents can misrepresent themselves) the agents eventually select the very outcome(s)
that the principal considers as a priori desirable? In other words, Mechanism De-
sign tries to answer whether or not, if so how, a particular objective of the principal
(represented by a social choice function if the principal is a social planner) can be
realized in a world of selfish agents acting according to some behavioral pattern (i.e.
some equilibrium concept).
Mechanism Design is typically studied as a three-step game of incomplete information,
where the agents’ types – e.g., willingness to pay – are private information.
Step 1: The principal designs a “mechanism”, or “contract”, or “incentive scheme”.
A mechanism is a game in which the agents send costless messages, and the
principal chooses outcome or allocation based on the messages received.
Step 2: The agents choose to accept or reject the mechanism (this step is omitted if the
principal is the government, say.)
Step 3: The agents who accept the mechanism play the game specified by the mecha-
nism.
The Revelation Principle shows that to obtain the highest expected payoff, the
principal can restrict himself to direct mechanisms where agents report directly their
types (i.e., messages are types), all agents accept the mechanism in step 2, and in
step 3 the agents simultaneously reveal their true types. Thus, we need only analyze
a static Bayesian game.
In some cases (mainly when the principal is the government), step 2 is omitted since
the agents must participate. Thus, individual rationality or participation constraints
(PCs) are not imposed. In other cases, however, agents can freely choose whether to
participate, that is, participation is voluntary: bidders are free not to participate in
an auction; buyers can refrain from buying from a firm; regulated firms can refuse to
produce at all.
An important focus of the Mechanism Design literature is how the combination
of incomplete information and binding participation constraints can prevent efficient
outcomes. Coase (1960) argues that in the absence of transaction costs and with
symmetric information, bargaining among parties concerned by a decision leads to

an efficient decision. But this is not true in general under asymmetric information.
A constant theme of the literature is that the private information of the agents leads
to inefficiency when participation constraints are binding.

Formalization

Consider the following environment:


(1) There are n + 1 players: a “principal” and n agents N = {1, 2, . . . , n}.
(2) The set of outcomes is given by X.
(3) The principal does not have private information, but each agent i ∈ N has
private information about his type θi that determines his preferences. The set
of possible types of agent i is Θi . Thus, Θ = ∏_{i∈N} Θi denotes the set of all
possible type profiles (or combinations).
(4) The agents’ types θ = (θ1 , . . . , θn ) are drawn from Θ according to some com-
monly known distribution (the p.d.f. is φ(θ), say).
(5) The utility function of agent i ∈ N when he is type θi is ui (x, θi ) where x ∈ X.
(6) The principal’s utility function is u0 (x, θ). [If the principal is the social plan-
ner, however, we may not use u0 (x, θ) directly. Instead, we specify the social
planner’s objective (e.g., efficiency) directly.]
(7) Suppose that if the principal knew θ, the types of the agents, he would choose
f (θ) ∈ X. That is,
f : Θ → X,

where f is called a (social) choice function (if the principal is the social planner).
Thus, for each type profile θ ∈ Θ, f (θ) specifies an (desirable) outcome in X.
(8) Since the principal does not know the true types of the agents, he can only
rely on the “messages” collected from the agents. Let Mi denote the set of all
messages agent i ∈ N could send and let M = ∏_{i=1}^n Mi .
(9) A mechanism is a message space M together with a (decision rule) mapping g : M → X,
i.e., for each message profile m ∈ M , g selects an outcome g(m) ∈ X.
(10) Note that the mechanism (along with the other information) defines a Bayesian game.
Let m∗ = (m∗1 , . . . , m∗n ) denote an “equilibrium” of this game. (Here different
equilibrium concepts arise.) Let m∗i (θi ) denote the “equilibrium” message of
agent i with true type θi for all i ∈ N .
(11) The social choice function f is implementable if

g(m∗1 (θ1 ), . . . , m∗n (θn )) = f (θ) ∀θ ∈ Θ.

(12) A direct mechanism is one where each agent is asked to report his individual
preferences, in which case M = Θ (and f = g). In an indirect mechanism,
agents are asked to send messages other than preferences.
(13) The Revelation Principle states that if a social choice function can be imple-
mented by an indirect mechanism then it can also be implemented by a truth-
telling direct mechanism.

Implementation in dominant strategies


The strongest notion of implementation is implementation in dominant strategies.
That is, the equilibrium concept in (10) is “equilibrium in dominant strategies”. In
this case, m∗i (θi ) is the best message of agent i whatever the messages of other agents
are. That is, for all i ∈ N and θi ∈ Θi ,

ui (g(m∗i (θi ), m−i ), θi ) ≥ ui (g(m′i , m−i ), θi ) ∀m′i ∈ Mi and m−i ∈ M−i .

By the Revelation Principle, if f can be implemented in dominant strategies, then


it can be implemented by a direct mechanism where truth-telling is a dominant strategy
for every agent. A direct mechanism is said to be strategy-proof if revealing the true
preferences is a dominant strategy for each agent and for any type θi , i.e., each agent
i cannot benefit from reporting θ′i whenever his true type is θi :

ui (f (θi , θ−i ), θi ) ≥ ui (f (θ′i , θ−i ), θi ) ∀θ′i ∈ Θi and θ−i ∈ Θ−i .

A strategy-proof mechanism is said to be incentive compatible in Hurwicz’s (1972)


terminology. (The notion of incentive compatibility should be very familiar to you
by now.)
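On finite examples, the strategy-proofness condition above can be checked by brute force. The sketch below is illustrative (the type sets, the rule f, and the utilities u are invented for the example); it shows that picking the outcome with the highest reported total valuation, with no transfers, is not strategy-proof:

```python
from itertools import product

def is_strategy_proof(types, f, u):
    """Check: for every agent, every true type, and every profile of the
    others' reports, truthful reporting is at least as good as any misreport."""
    n = len(types)
    for i in range(n):
        others = [types[j] for j in range(n) if j != i]
        for theta_i in types[i]:
            for rest in product(*others):
                def outcome(report_i):
                    profile = list(rest)
                    profile.insert(i, report_i)   # reassemble the full report profile
                    return f(tuple(profile))
                truthful = u(i, outcome(theta_i), theta_i)
                if any(u(i, outcome(d), theta_i) > truthful + 1e-12
                       for d in types[i]):
                    return False
    return True

# Illustrative example: a type is a tuple of valuations over three outcomes;
# f picks the outcome with the highest reported total (ties to the lowest index).
def f(reports):
    totals = [sum(r[k] for r in reports) for k in range(3)]
    return max(range(3), key=lambda k: totals[k])

def u(i, x, theta_i):
    return theta_i[x]

types = [((1.0, 0.9, 0.0), (1.0, 0.0, 0.0)), ((0.0, 1.0, 0.5),)]
print(is_strategy_proof(types, f, u))   # False: agent 0 gains by misreporting
```

Here agent 0 with true type (1, 0.9, 0) misreports (1, 0, 0) to shift the chosen outcome from his second-best to his best, so the rule is manipulable without transfers.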
Gibbard (1973) and Satterthwaite (1975) show that

Theorem. If the preferences of the agents are unrestricted, N is finite, and |X| ≥ 3,
then there is no strategy-proof mechanism that is non-dictatorial and Pareto optimal.

The two definitions needed in the above impossibility result are:

Definition. f is dictatorial if there exists an agent i ∈ N such that for all θ ∈ Θ,

f (θ) ∈ {x ∈ X | ui (x, θi ) ≥ ui (y, θi ), ∀y ∈ X}.

That is, f is dictatorial if there is some agent i such that f always chooses i’s top-
ranked outcome.

Definition. f is (ex post) efficient if there do not exist θ ∈ Θ and x ∈ X such that

ui (x, θi ) ≥ ui (f (θ), θi ), ∀i ∈ N , with strict inequality for some i.

Weaker notions of implementation involve different equilibrium concepts: Nash


equilibrium and Bayesian Nash equilibrium. Implementation in Bayesian Nash equi-
librium stipulates that each agent has prior beliefs about other agents’ preferences and
this is common knowledge. A mechanism that implements a social choice function in
Bayesian Nash equilibrium is called Bayes incentive compatible. Implementation in
Nash equilibrium requires that agents know one another’s preferences. A mechanism
that implements a social choice function in Nash equilibrium is called Nash incentive
compatible. In both cases, however, each agent must “predict” the others’ strategies in
order to decide his own optimal strategy.

Theorem. If N is finite, there is no Bayes incentive compatible mechanism which


possesses the no-trade option2 and is Pareto efficient for all utility profiles.

Theorem. If N is finite, there are many Nash incentive compatible mechanisms


which possess the no-trade option3 and are Pareto efficient.

Quasi-linear Environments: The Vickrey Auction and the Clarke-Groves Mechanisms

The simplest example of a direct mechanism is the Vickrey auction. The Vickrey
auction is a sealed-bid auction in which the highest bidder wins the object but he
pays only the second highest bid. In this case, each bidder will bid his true valuation
of the object since truth-telling is a weakly dominant strategy. The reasoning is
as follows: First, an individual would not under-bid, since doing so only lowers the
probability of winning but does not change the amount he pays if he wins. Secondly,
an individual would not over-bid since if over-bidding does not ensure his winning, he
does not become better off by over-bidding; and if it does ensure his winning, there
must be a bid (of someone else) above his true valuation, in which case he is worse
off by over-bidding.
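This argument can be checked numerically. A minimal sketch (the payoff function and the grid are illustrative; ties are awarded against the bidder, which preserves the weak dominance):

```python
from itertools import product

def vickrey_payoff(bid, value, other_bids):
    """A bidder's payoff in a sealed-bid second-price auction:
    win iff the bid is strictly highest; the winner pays the second-highest bid."""
    if bid > max(other_bids):
        return value - max(other_bids)
    return 0.0

# Truthful bidding is never beaten by any alternative bid, for any rival bid.
grid = [x / 2 for x in range(21)]              # values/bids 0.0 .. 10.0
for value, rival, alt in product(grid, repeat=3):
    assert vickrey_payoff(value, value, [rival]) >= vickrey_payoff(alt, value, [rival])
print("truth-telling is weakly dominant on this grid")
```

The assertion mirrors the two cases in the text: over-bidding can only win the object at a price above the true valuation, and under-bidding can only forfeit a profitable win.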
Clarke (1971) and Groves and Ledyard (1977) generalize this idea to implement
the efficient provision of public goods. Consider a simple economy with n individuals
who are deciding upon an indivisible public project G with cost c. Each agent
i’s utility is quasi-linear in money income yi :

ui (G, yi ) = vi · G + yi ,   G ∈ {0, 1},
2 A mechanism is said to have the no-trade option if there is an allocation at which each participant
can remain. In an exchange economy, the initial endowment is such an allocation.

where vi is agent i’s true valuation (in dollar terms) of the public project. The problem
is to produce the public good at an efficient level, i.e., the project is approved if and
only if ∑_{i=1}^n vi ≥ c (ex post efficiency). The difficulty is to induce the agents to reveal
their true willingness to pay. The Clarke-Groves mechanism is as follows:
(1) Each agent i reports his willingness to pay wi . It is possible that wi ≠ vi .
(2) The public project is approved iff ∑_{i=1}^n wi ≥ c (decision rule).
(3) Each agent i receives a side-payment equal to the sum of the other agents’ reported
willingness to pay minus the cost of the project, ∑_{j≠i} wj − c, if the project
is approved. (If this amount is positive, agent i receives it; if it is negative,
agent i must pay this amount.)
In this case, it is a (weakly) dominant strategy for each agent to report his true
willingness to pay. To prove this, w.l.o.g., assume c = 0. Then agent i’s payoff
(utility) takes the form
ui = vi + ∑_{j≠i} wj   if ∑_{i=1}^n wi ≥ 0,   and   ui = 0 otherwise.
Suppose that vi + ∑_{j≠i} wj > 0. Then agent i can ensure that the public good is
provided by reporting wi = vi . Over-reporting cannot make him better off and under-
reporting, if it reverses the decision (i.e., agent i is pivotal), will make him worse off.
Suppose, on the other hand, vi + ∑_{j≠i} wj < 0. Then agent i can ensure the public
good is not provided by reporting wi = vi . Under-reporting cannot possibly make
him better off and over-reporting, if it reverses the decision, will make him worse off.
Thus, there is never an incentive for any agent to misrepresent his preferences. Hence
the first best public decision is implemented.
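The mechanism above can be sketched in a few lines (the function names are illustrative), and the dominant-strategy argument can be checked by brute force for a general cost c:

```python
from itertools import product

def clarke_groves(reports, c):
    """Approve the project iff the reports cover the cost; if approved,
    agent i's side-payment is the sum of the others' reports minus c."""
    approve = sum(reports) >= c
    side = [sum(reports) - w - c if approve else 0.0 for w in reports]
    return approve, side

def payoff(i, true_value, reports, c):
    """Quasi-linear payoff: valuation if the project is built, plus the side-payment."""
    approve, side = clarke_groves(reports, c)
    return (true_value if approve else 0.0) + side[i]

# Truthful reporting weakly dominates any misreport on this grid.
grid, c = [0.0, 1.0, 2.0, 3.0], 2.0
for v, o1, o2 in product(grid, repeat=3):
    truthful = payoff(0, v, [v, o1, o2], c)
    assert all(truthful >= payoff(0, v, [d, o1, o2], c) for d in grid)
print("truthful reporting is weakly dominant on this grid")
```

The key design feature is visible in `payoff`: conditional on approval, agent i’s payoff does not depend on his own report, so his report only controls whether the project is approved, and truth-telling aligns that decision with his own interest.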
The above mechanism, however, fails to balance the budget: the total side-payments
may be potentially very large. That is, it may be very costly to induce the agents to
tell the truth. Ideally, we would like to have a mechanism where the side-payments
sum up to zero. However, it turns out that this is not possible in general. Green and
Laffont (1979) prove the following impossibility theorem.

Theorem. There is no strategy-proof, efficient, and budget-balanced mechanism.


It is possible, however, to design a mechanism where the side-payments are always
non-positive. Thus the agents may be required to pay a “tax”, but they will never
receive payments. Because of these “wasted” taxes, the allocation of public and
private goods will not be Pareto efficient.
To achieve non-positive side-payments, we can “shift” agent i’s payoff by the
amount hi without affecting his incentive. Agent i’s payoff function is given by
ui = vi + ∑_{j≠i} wj + hi   if ∑_{i=1}^n wi ≥ 0,   and   ui = hi otherwise,

where

hi = −∑_{j≠i} wj   if ∑_{j≠i} wj ≥ 0,   and   hi = 0 otherwise.
Thus, the payoff function of agent i takes the form

ui = vi                 if ∑_{i=1}^n wi ≥ 0 and ∑_{j≠i} wj ≥ 0,
ui = vi + ∑_{j≠i} wj    if ∑_{i=1}^n wi ≥ 0 and ∑_{j≠i} wj < 0,
ui = −∑_{j≠i} wj        if ∑_{i=1}^n wi < 0 and ∑_{j≠i} wj ≥ 0,
ui = 0                  if ∑_{i=1}^n wi < 0 and ∑_{j≠i} wj < 0.
That is, agent i must pay a transfer |∑_{j≠i} wj | if he is pivotal, i.e., if his report changes
the sign of the sum. In other words, he must pay the cost he imposes on the rest of
society. Such a scheme is known as the Clarke tax or pivotal mechanism.
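The four-case payoff can be implemented directly. A minimal sketch with c = 0 (the function name is illustrative); the returned taxes are the transfers, and they are never positive:

```python
def pivotal(reports):
    """Clarke (pivotal) mechanism with c = 0: approve iff the reports sum to >= 0;
    agent i pays |sum of the others' reports| only when his report flips the decision."""
    total = sum(reports)
    approve = total >= 0
    taxes = []
    for w in reports:
        others = total - w
        # agent i is pivotal when the decision without his report would differ
        is_pivotal = (approve and others < 0) or (not approve and others >= 0)
        taxes.append(-abs(others) if is_pivotal else 0.0)
    return approve, taxes

print(pivotal([3.0, -2.0, -2.0]))   # (False, [0.0, -1.0, -1.0]): agents 1 and 2 are pivotal
```

Each tax matches the corresponding case of the payoff function above: for instance, when the project is approved and agent i is pivotal (∑_{j≠i} wj < 0), his transfer is −|∑_{j≠i} wj| = ∑_{j≠i} wj, so his payoff is vi + ∑_{j≠i} wj.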
