Mechanism Design: Some Basic Concepts
A “principal” faces multiple “agents” who hold private information. The principal
would like to condition his actions on this information. He could simply ask the
agents for their information, but they will not report it truthfully unless the principal
gives them an incentive to do so, either by monetary payments or with some other
instruments he controls. Since providing these incentives is costly, the principal faces
a tradeoff that often results in an inefficient allocation.
The principal could be the social planner (or government) who acts on behalf of
the society (to pursue the interest of the society as a whole, e.g., efficiency):
(1) Regulating monopoly with unknown cost
(2) Collecting tax to finance public project when he does not know the agents’
(citizens’) valuations of the project
The principal can also be someone who is pursuing his own interest (self-interested):
(1) A seller (principal), not knowing the buyers' willingness to pay, needs to
design an auction mechanism to determine who purchases the good and at
what price.
(2) In second-degree price discrimination, the monopolist (principal), who has in-
complete information about the willingness to pay of the consumers, designs
a price schedule that determines the price to be paid by the consumer as a
function of the quantity purchased.
(3) Insurance companies design a menu of contracts to screen customers.
Lastly, the principal can also be a mediator of two parties:
In the problem of bilateral exchange, a mediator designs a trading mechanism
between a seller who has private information about the production cost and a
buyer who has private information about his willingness to pay for the good.
In an ideal world where the principal has all relevant information (including indi-
viduals’ preferences), Mechanism Design would not be necessary. The lack of sufficient
information makes it necessary for the principal to search for means to achieve, if
possible, certain social goals or private objectives. To achieve certain social goals, for
example, voting has been used in many cases to aggregate individuals' preferences.
Yet voting in many cases does not lead to efficiency, even in the absence
of any strategic behavior (for example, the median voter and the average voter may
1 Lecture notes prepared by Licun Xue. For details, see Mas-Colell, Whinston, and Green (1995).
have different preferences). What is more, individuals may have an incentive to misrepresent
their preferences in order to manipulate the final outcome. Strategic voting is
a typical example of noncooperative manipulation. Mechanism design deals with the
following problem: Given the principal’s objective (again the principal could be the
social planner who pursues some desirable social ethics such as equity or efficiency),
is it possible, and if so how, to decentralize the decision power among individual agents
in such a way that, by freely exercising this decision power (e.g., utility-maximizing
agents may misrepresent their preferences), the agents eventually select the very outcome(s)
that the principal considers a priori desirable? In other words, Mechanism Design
tries to answer whether, and if so how, a particular objective of the principal
(represented by a social choice function if the principal is a social planner) can be
realized in a world of selfish agents acting according to some behavioral pattern (i.e.,
some equilibrium concept).
Mechanism Design is typically studied as a three-step game of incomplete information,
where the agents' types – e.g., willingness to pay – are private information.
Step 1: The principal designs a “mechanism”, or “contract”, or “incentive scheme”.
A mechanism is a game in which the agents send costless messages, and the
principal chooses an outcome or allocation based on the messages received.
Step 2: The agents choose to accept or reject the mechanism (this step is omitted if the
principal is the government, say).
Step 3: The agents who accept the mechanism play the game specified by the mecha-
nism.
The Revelation Principle shows that to obtain the highest expected payoff, the
principal can restrict himself to direct mechanisms where agents report directly their
types (i.e., messages are types), all agents accept the mechanism in step 2, and in
step 3 the agents simultaneously reveal their true types. Thus, we need only analyze
a static Bayesian game.
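A classic example of such a truth-telling direct mechanism is the sealed-bid second-price auction, in which each agent's message is a reported valuation and truthful reporting is a weakly dominant strategy. A minimal sketch (the bidder names and numbers below are purely illustrative, not from the notes):

```python
# Sketch of a direct mechanism: a sealed-bid second-price auction.
# Messages are reported valuations; the decision rule awards the good
# to the highest reporter at the second-highest report.

def second_price_auction(reports):
    """reports: dict mapping bidder name -> reported valuation."""
    ranked = sorted(reports, key=reports.get, reverse=True)
    winner = ranked[0]
    price = reports[ranked[1]]  # winner pays the second-highest report
    return winner, price

def payoff(true_value, my_report, others):
    """Payoff of agent "me" from reporting my_report against the others."""
    reports = dict(others)
    reports["me"] = my_report
    winner, price = second_price_auction(reports)
    return true_value - price if winner == "me" else 0.0

# Deviating from truthful reporting never helps:
others = {"b1": 6.0, "b2": 9.0}      # hypothetical rival reports
truthful = payoff(8.0, 8.0, others)  # lose to b2: payoff 0
overbid = payoff(8.0, 10.0, others)  # win at price 9 > value 8: payoff -1
assert truthful >= overbid
```

Truth-telling is weakly dominant here because the winner's payment does not depend on his own report, only on the others' reports.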
In some cases (mainly when the principal is the government), step 2 is omitted since
the agents must participate. Thus, individual rationality (participation) constraints
are not imposed. In other cases, however, agents can freely choose whether to
participate, that is, participation is voluntary: bidders are free not to participate in
an auction; buyers can refrain from buying from a firm; regulated firms can refuse to
produce at all.
An important focus of the Mechanism Design literature is how the combination
of incomplete information and binding participation constraints can prevent efficient
outcomes. Coase (1960) argues that in the absence of transaction costs and with
symmetric information, bargaining among parties concerned by a decision leads to
an efficient decision. But this is not true in general under asymmetric information.
A constant theme of the literature is that the private information of the agents leads
to inefficiency when participation constraints are binding.
Formalization
The principal's objective is summarized by a mapping f : Θ → X, called a (social)
choice function (if the principal is the social planner). Thus, for each type profile
θ ∈ Θ, f (θ) specifies a (desirable) outcome in X.
(8) Since the principal does not know the true types of the agents, he can only
rely on the “messages” collected from the agents. Let Mi denote the set of all
messages agent i ∈ N could send and let M = M1 × · · · × Mn .
(9) A mechanism is a message space M and a (decision rule) mapping g : M → X,
i.e., for each message profile m ∈ M , g specifies an outcome g(m) ∈ X.
(10) Note that the mechanism (as well as the other information) defines a Bayesian game.
Let m∗ = (m∗1 , . . . , m∗n ) denote an “equilibrium” of this game. (Here different
equilibrium concepts arise.) Let m∗i (θi ) denote the “equilibrium” message of
agent i with true type θi for all i ∈ N .
(11) The social choice function f is implementable if
g(m∗1 (θ1 ), . . . , m∗n (θn )) = f (θ) for all θ = (θ1 , . . . , θn ) ∈ Θ.
(12) A direct mechanism is one where each agent is asked to report his individual
preferences, in which case M = Θ (and f = g). In an indirect mechanism,
agents are asked to send messages other than preferences.
(13) The Revelation Principle states that if a social choice function can be imple-
mented by an indirect mechanism then it can also be implemented by a truth-
telling direct mechanism.
Theorem. If the preferences of the agents are unrestricted, N is finite, and |X| ≥ 3,
then there is no strategy-proof mechanism that is non-dictatorial and Pareto optimal.
Here, f is dictatorial if there is some agent i such that f always chooses i’s top-ranked
outcome.
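The assumption |X| ≥ 3 is essential: with only two outcomes, majority rule is strategy-proof, Pareto optimal, and non-dictatorial. A brute-force check for three agents (illustrative, not from the notes):

```python
# With two outcomes {"x", "y"}, majority rule cannot be manipulated:
# voting against one's true favorite can only lower the favorite's tally.
# We verify this exhaustively for 3 agents (no ties are possible).

from itertools import product

def majority(votes):
    # returns "x" iff a strict majority votes for "x"
    return "x" if votes.count("x") > len(votes) / 2 else "y"

manipulable = False
for profile in product("xy", repeat=3):   # every profile of true favorites
    for i, favorite in enumerate(profile):
        honest = majority(list(profile))
        lie = list(profile)
        lie[i] = "y" if favorite == "x" else "x"
        # a deviation helps only if it flips the outcome to i's favorite
        if majority(lie) == favorite != honest:
            manipulable = True
assert not manipulable
```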
Definition. f is (ex post) efficient if there do not exist θ ∈ Θ and x ∈ X such that
ui (x, θi ) ≥ ui (f (θ), θi ) for all i ∈ N , with strict inequality for some i.
Example (public good provision): a group of agents must decide whether to undertake
a public project at cost c. Agent i’s utility is quasilinear in his monetary transfer yi :
ui (G, yi ) = vi G + yi , G ∈ {0, 1},
2 A mechanism is said to have a no-trade option if there is an allocation at which each participant
can remain. In an exchange economy, the initial endowment is such an allocation.
where vi is agent i’s true valuation (in dollar terms) of the public project. The problem
is to produce the public good at an efficient level, i.e., the project is approved if and
only if ∑i∈N vi ≥ c (ex post efficiency). The difficulty is to induce the agents to reveal
their true willingness to pay. The Clarke-Groves mechanism is as follows:
(1) Each agent i reports his willingness to pay wi . It is possible that wi ≠ vi .
(2) The public project is approved iff ∑i∈N wi ≥ c (decision rule).
(3) Each agent i receives a side-payment equal to the sum of the other agents’ reported
willingness to pay minus the cost of the project, ∑j≠i wj − c, if the project
is approved. (If this amount is positive, agent i receives it; if it is negative,
agent i must pay this amount.)
In this case, it is a (weakly) dominant strategy for each agent to report his true
willingness to pay. To prove this, w.l.o.g., assume c = 0. Then agent i’s payoff
(utility) takes the form
ui = vi + ∑j≠i wj , if ∑i∈N wi ≥ 0;
ui = 0, otherwise.
Suppose that vi + ∑j≠i wj > 0. Then agent i can ensure that the public good is
provided by reporting wi = vi . Over-reporting cannot make him better off and under-reporting,
if it reverses the decision (i.e., agent i is pivotal), will make him worse off.
Suppose, on the other hand, that vi + ∑j≠i wj < 0. Then agent i can ensure that the public
good is not provided by reporting wi = vi . Under-reporting cannot possibly make
him better off and over-reporting, if it reverses the decision, will make him worse off.
Thus, there is never an incentive for any agent to misrepresent his preferences. Hence
the first best public decision is implemented.
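The dominance argument above can also be checked numerically. A small sketch (with c = 0 as in the proof; the reports of the other agents are hypothetical numbers chosen for illustration):

```python
# Groves scheme with c = 0: agent i's utility is v_i plus the sum of the
# others' reports if the project is approved, and 0 otherwise.

def groves_payoff(v_i, w_i, w_others):
    """Agent i's utility when reporting w_i against reports w_others."""
    if w_i + sum(w_others) >= 0:   # decision rule: approve iff sum >= 0
        return v_i + sum(w_others)
    return 0.0

v_i = 3.0
w_others = [-5.0, 1.0]             # hypothetical reports of the other agents
truthful = groves_payoff(v_i, v_i, w_others)

# No misreport on this grid beats truth-telling:
for w_i in [-10.0, -1.0, 0.0, 2.0, 10.0]:
    assert truthful >= groves_payoff(v_i, w_i, w_others)
```

With these numbers, v_i + ∑j≠i wj = −1 < 0, so truthful reporting keeps the project unbuilt (payoff 0), while a large over-report forces approval and yields −1.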
The above mechanism, however, fails to balance the budget: the total side-payments
may be potentially very large. That is, it may be very costly to induce the agents to
tell the truth. Ideally, we would like to have a mechanism where the side-payments
sum up to zero. However, it turns out that this is not possible in general. Green and
Laffont (1979) prove the following impossibility theorem.
where
hi = −∑j≠i wj , if ∑j≠i wj ≥ 0;
hi = 0, otherwise.
Thus, the payoff function of agent i takes the form
ui = vi , if ∑i∈N wi ≥ 0 and ∑j≠i wj ≥ 0;
ui = vi + ∑j≠i wj , if ∑i∈N wi ≥ 0 and ∑j≠i wj < 0;
ui = −∑j≠i wj , if ∑i∈N wi < 0 and ∑j≠i wj ≥ 0;
ui = 0, if ∑i∈N wi < 0 and ∑j≠i wj < 0.
That is, agent i must pay a transfer |∑j≠i wj | if he is pivotal, i.e., if his report changes
the sign of the sum. In other words, he must pay the cost he imposes on the rest of
society. Such a scheme is known as the Clarke tax or pivotal mechanism.
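The pivotal scheme just described can be sketched directly (again with c = 0; the reported valuations below are illustrative):

```python
# Pivotal (Clarke tax) mechanism with c = 0: agent i pays |sum of the
# others' reports| exactly when his own report changes the sign of the total.

def clarke_outcome(reports):
    """Return (approved, taxes) given a list of reported valuations."""
    total = sum(reports)
    approved = total >= 0
    taxes = []
    for w in reports:
        rest = total - w                   # sum over j != i
        pivotal = (rest >= 0) != approved  # i's report flips the decision
        taxes.append(abs(rest) if pivotal else 0.0)
    return approved, taxes

approved, taxes = clarke_outcome([4.0, -1.0, -2.0])
# Total = 1 >= 0, so the project is approved. Only the first agent is
# pivotal (without him the sum is -3 < 0), so he pays the Clarke tax of 3.
assert approved and taxes == [3.0, 0.0, 0.0]
```

Note that the taxes are collected but not redistributed to the agents, which is exactly why the budget fails to balance.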