
Combinatorial Markets for Efficient Energy Management

Yoseba K. Penya
Department of Information Systems
Vienna University of Economics and Business Administration, Austria
[email protected]

Nicholas R. Jennings
School of Electronics and Computer Science
University of Southampton, Southampton, UK
[email protected]

Abstract

The deregulation of the electricity industry in many countries has created a number of marketplaces in which producers and consumers can operate in order to more effectively manage and meet their energy needs. To this end, this paper develops a new model for electricity retail where end-use customers choose their supplier from competing electricity retailers. The model is based on simultaneous reverse combinatorial auctions, designed as a second-price sealed-bid multi-item auction with supply function bidding. This model prevents strategic bidding and allows the auctioneer to maximise its payoff. Furthermore, we develop optimal single-item and multi-item algorithms for winner determination in such auctions that are significantly less complex than those currently available in the literature.

1. Introduction

The deregulation of electricity markets began in the early nineties when the UK Government privatised the electricity supply industry in England and Wales. This process was then subsequently followed in many other countries. In most cases, this restructuring involves separating the electricity generation and retail from the natural monopoly functions of transmission and distribution. This, in turn, leads to the establishment of a wholesale electricity market for electricity generation and a retail electricity market for electricity retailing. In the former case, competing generators offer their electricity output to retailers and in the latter case end-use customers choose their supplier from competing electricity retailers. Here we focus on retail markets, which differ from their more traditional counterparts because energy cannot be stored or held in stock (as tangible goods can). Consequently, retailers are forced to work with consumption prognoses, which, in turn, creates a number of risks. First, producing more than is consumed is not economical. Moreover, the price of the energy mainly depends on the production cost and this typically rises with the amount of energy produced. Second, if the demand exceeds the prediction, suppliers must find additional energy to avoid a blackout. Finally, there are non-negligible costs stemming from the variation in the electricity production volume that most of the traditional types of energy generators (e.g. hydroelectric, thermoelectric, nuclear) have to face. Against this background, the desideratum is to achieve a market model where retailers have the most accurate possible prognosis and the capability of influencing or guiding customers' consumption. To this end, there have been a number of initiatives, grouped under the general banner of Demand-Side Management (DSM), whose main objective is to distribute the demand over time to avoid peak loads. Now, the easiest way to achieve this goal is by setting the price of the energy depending on the actual demand load. Thus, the higher the demand, the more expensive the price, and vice versa. Based upon these premises, many utility companies (UCs) already present a basic form of DSM by offering a cheaper night tariff.

Our aim in this work is to improve and extend this simple market model to permit UCs to express more complex aims and, thus, increase their influence on customers. For instance, in order to lighten the peak-time load, the supplier can offer a discount for consuming a small amount of energy at 8 am (peak time) and a larger amount at midnight (off peak). This incentivises the customer to reschedule some tasks to midnight (e.g. the dishwasher or the washing machine). If many clients accept this compromise offer, the UC will have achieved a double goal. It will have a more accurate prognosis for 8 am and midnight and it will also have shifted some of the peak-time consumption to off peak. In e-commerce terms, this process can be seen as a reverse combinatorial auction. It is "reverse" because the customers pick one of the available companies and tariffs to supply their future consumption. And it is "combinatorial" because bidding for a bundle of items is typically valued differently from bidding separately for each of the constituent items (e.g. the combination of consuming at 8 am and midnight is more appreciated, and thus rewarded, than, for instance, the combination of consuming at 10 am and 11 am).

While combinatorial auctions provide very efficient allocations that can maximise the revenue for the auctioneer, their main drawback is the complexity of the clearing process in which buyers and sellers are matched and the quantities of items traded between them are determined. Specifically, clearing combinatorial auctions is NP-hard [4]. Moreover, most work in this area deals with clearing combinatorial auctions with atomic propositions [6]. Thus, bids are either accepted or rejected in their entirety, which may limit the profit for the auctioneer. A more efficient solution is to allow bidding with demand/supply functions [7, 2], in which bidders submit a function to calculate the cost of the units to be bought or sold. This allows the customer to accept parts of different bids and constitutes a powerful way of expressing complex pricing policies. In our case, production costs can be easily reflected in the supply function and, if bids are accepted partially, there may be more than one winner for the same auction and item. This enables customers to accept different parts of bids from different bidders so they can get energy simultaneously from several suppliers. Since the transmission and distribution grids are shared and the path followed by the electricity cannot be tracked down, it is impossible to determine the producer of the energy being consumed. Therefore, the hypothesis of customers being simultaneously supplied by several UCs does not pose any technical problems.

Against this background, this paper advances the state of the art in two main ways. First, we present, for the first time, an energy retail market designed as a system of reverse combinatorial auctions with supply function bidding. This novel market allows customers to increase their profit and provides UCs with a mechanism to influence customers' behaviour. Second, we develop new optimal clearing algorithms tailored to electricity supply functions that perform better than the existing, more general clearing algorithms. The remainder of the paper is organized as follows. Section 2 details the overall market design. Section 3 presents the single-item and the multi-item clearing algorithms and analyses their complexity and optimality. Section 4 examines the results of comparing the multi-item algorithm with the only other optimal multi-item one defined in the literature. Section 5 discusses related work. Finally, Section 6 concludes and outlines the avenues of future work.

2. Electricity Retail Markets

This section discusses the nature of current electricity retail markets and outlines the design of our solution.

2.1. Requirements

Currently, most customers only partially enjoy the benefits of a deregulated market. They typically sign mid-term contracts with a single supplier and the tariffs do not reflect the pressure of competition. Moreover, whereas classical capitalist pricing policies encourage demand by applying discounts on quantity (the more you buy, the cheaper the unit price becomes), actual electricity contracts often include a threshold above which the consumption becomes more expensive. To move to a more dynamic environment where the benefits of competition can be more fully realised, we put forward the following requirements for our market design. The arrangement of customers' electricity supply from multiple UCs should be achieved by having contracts that specify the provision of an amount of energy for a certain period of time (say one hour). These contracts should not necessarily be exclusive and, thus, customers may have agreements with different companies for the same hour if this is the best thing to do. Finally, we assume customers auction, on a daily basis, their next 24 hours' consumption divided into 24 items (representing one hour each). They subsequently receive bids from the UCs and make their decision for the next 24 hours, which is a trade-off between the very static situations of today and the possibility of auctioning on a per-minute basis for the coming minute.

2.2. Market Design

The requirements detailed above can be best met by structuring the market as a reverse auction. Furthermore, we assume customers don't issue any bids but simply choose among those offered by the UCs. An exchange (in which multiple buyers and sellers submit their bids and offers to an independent auctioneer that decides the winners [6]) was rejected because it scales poorly. In practice, the number of customers may be up to tens of thousands, each of which is selling 24 items, and with combinatorial bidding, clearing such an exchange becomes intractable very fast. Unlike exchanges, reverse auctions have the advantage that they may be performed in parallel. This means the complexity can be divided between the number of customers because, instead of one big auction, many smaller ones are carried out at the same time. For these reasons, we have designed our system as a series of simultaneous reverse auctions despite the risk of overbooking. That is, although the UCs issue their own tariffs, they cannot control the number of customers that choose them, so the demand could exceed their capability. However, in this case, we allow overbooked UCs to buy the additional energy from non-overbooked ones (see also section 6). As combinatorial bidding is permitted, UCs submit their special discounts together with the usual hour tariffs. In this case, having 24 hours (or items) means that there may be up to 2^24 different combinations of discounts. This is obviously a worst-case scenario because, in practice, our experience in the domain indicates that UCs are highly unlikely to issue a different discount for each possible combination. Moreover, we decided that the auctions should be sealed (to reveal the least possible information) and single-round (to minimise communication and other delays). The auctions also need to be both multi-item and multi-unit. As each item is the supply of electricity in one hour, there are 24 items to allocate in an auction. In addition, each bidder may not allocate the whole consumption within an hour but rather just a portion of it (i.e. some units).
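
To make this structure concrete, the following is a minimal Python sketch (ours, with illustrative names; it is not part of the original design) of one customer's daily reverse auction: 24 hourly items with a demanded amount of energy each, and one sealed bid per UC consisting of hourly tariffs plus optional combinatorial discounts. The supply-function form of the tariffs, which replaces the flat prices used here, is introduced in section 3.

    # Illustrative data model for one customer's daily reverse auction.
    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    HOURS = range(24)  # one item per hour of the next day

    @dataclass
    class Bid:
        # flat unit price per hourly item (section 3 refines this into a
        # piece-wise linear supply function)
        unit_price: Dict[int, float]
        # combinatorial discounts: bundle of hours -> multiplicative factor
        discounts: Dict[Tuple[int, ...], float] = field(default_factory=dict)

    @dataclass
    class CustomerAuction:
        demand: Dict[int, float]                             # units demanded per hour
        bids: Dict[str, Bid] = field(default_factory=dict)   # sealed bids, keyed by UC

    # Example: a UC offers a 5% discount for consuming both at 8 am and at midnight.
    auction = CustomerAuction(demand={h: 1.0 for h in HOURS})
    auction.bids["UC-1"] = Bid(unit_price={h: 0.20 for h in HOURS},
                               discounts={(8, 0): 0.95})
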

Another important component to set is the price paid by the winner. We do not want to have a first-price auction because it offers incentives for strategic behaviour (i.e. the participants act according to beliefs formed about others' values and types, which does not assure them of maximising their payoff). To circumvent this, we choose a uniform second price for combinatorial auctions (Vickrey-Clarke-Groves), since this has the dominant strategy of bidders bidding their true valuations of the goods [5, 3]. The price paid by the winner is not directly specified in the bid because bidders submit a supply function. Thus, the customer must calculate the energy it wants to consume within a time slot (i.e. the units of that item to be auctioned) and then decide the cheapest combination with the supply functions submitted (i.e. the number of units to be allocated to each bidder). Therefore, the bids are accepted partially. To this end, we use the compact notation introduced in [2], where bidders submit for a certain item a piece-wise linear supply function P composed of n linear segments. Each segment l, 1 ≤ l ≤ n, is described by a starting quantity sl, an ending quantity el, a unit price πl, and a fixed price Cl. Thus, if a customer wants to buy q units of that item from the supplier, it will pay Pl = πl · q + Cl if sl ≤ q ≤ el. Additionally, bidders submit a correlation function, ω, which shows the reward or penalty of buying a number of items together (it is this that makes the bidding truly combinatorial). For instance, ω1(A, B) = 0.95 would mean that if buying x units of item A and y units of item B (i.e. consuming x kW at time A and y kW at time B), the price paid will have a 5% discount. Thus, if the unit price of item A is pa and the unit price of item B is pb, the total price would be 0.95 · ((x · pa) + (y · pb)).
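
As a concrete illustration of this notation, the short Python sketch below (ours; the segment values are invented) evaluates a piece-wise linear supply function and applies the correlation discount from the example above:

    # Piece-wise linear supply function: a list of segments (s_l, e_l, pi_l, C_l).
    from typing import List, Tuple

    Segment = Tuple[float, float, float, float]

    def price(fn: List[Segment], q: float) -> float:
        """P_l = pi_l * q + C_l for the segment with s_l <= q <= e_l."""
        for s, e, unit, fixed in fn:
            if s <= q <= e:
                return unit * q + fixed
        raise ValueError("q is not covered by any segment of this bid")

    supply = [(0, 10, 0.20, 1.0), (10, 30, 0.18, 1.5)]   # two segments for one item
    print(price(supply, 12))                             # 0.18 * 12 + 1.5 = 3.66

    # Correlation function omega_1(A, B) = 0.95: buying x units of A and y of B
    # at unit prices pa and pb costs 0.95 * ((x * pa) + (y * pb)).
    x, y, pa, pb = 5.0, 8.0, 0.20, 0.25
    print(0.95 * ((x * pa) + (y * pb)))                  # = 2.85
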
Currently, there is only one optimal algorithm to solve this problem, namely the one presented by Dang and Jennings in [2] (described in more detail in section 5). However, we believe this is inapplicable in our scenario because it scales poorly (as we show in section 4). Therefore, with the market described above in place, the next step is to design a clearing algorithm that solves the winner determination problem more efficiently and allows it to be actually applied in realistic contexts.

3. Optimal Clearing Algorithms

This section details the optimal single-item (sPJ) and the optimal multi-item (mPJ) clearing algorithms that we have developed for the electricity retail market described in section 2. Furthermore, we analyse their complexity, prove their optimality, and analyse strategies to keep them tractable. First of all, let us introduce some basic definitions that will be used thereafter:

Definition 1 A single allocation is a set <time-slot t, supplier s, amount q, price p> meaning that s wants to pay p to buy q units of energy to be consumed at time t.

Definition 2 An allocation is a list containing a number (between one and the number of suppliers) of single allocations that detail the supply of electricity to be provided to the customer at a given time-slot.

Definition 3 A more profitable allocation from two alternatives is the one that, for a given total demand q, has the lower total price p.

Definition 4 An optimal allocation is one in which the demand constraint is satisfied and there is no more profitable allocation.

Definition 5 An optimal day allocation is a set of 24 optimal allocations, each of which corresponds to a different item (i.e. there is an optimal allocation for each hour).

The clearing algorithms we present in this section are related in that the multi-item one is a consecutive and iterative processing of the single-item one (i.e. the result of the multi-item algorithm is obtained by executing the single-item one with different values). Specifically, clearing a single-item case implies finding the optimal allocation for that item, so this enterprise deals only with the supply functions submitted to one item. The multi-item case has a broader remit (an optimal day allocation) and, thus, it also takes into account the relationships between the different items of the optimal allocations (i.e. the correlation functions). Let us start with the explanation of the single-item case.

3.1. The Optimal Single-Item Clearing Algorithm

Clearing a single-item auction with piece-wise supply function bids involves determining the amount to be allocated to each submitted bid function. In essence, in each loop the algorithm selects one segment of each supply function (the one corresponding to the already allocated demand) and allocates k units to the segment with the best price (i.e. the lowest price for k units after applying any relevant discount on the amount). The loop is repeated until the demand is satisfied. Note that the value of k is dynamically assigned in each loop to guarantee the optimality of the algorithm. Specifically, it always has the ending quantity value (el) of the shortest segment being evaluated at that moment.

Let us now illustrate this procedure with the example of Figure 1. Assume there are three potential buyers 1, 2, and 3 that submit their supply functions s1, s2, and s3 for a certain item (i.e. the consumption in one hour). In the first loop, the algorithm processes the segments s11, s21, and s31. Since the shortest of the three is segment s11 (i.e. e11 < e21 < e31), k = e11 and the algorithm compares s11(e11), s21(e11), and s31(e11). Suppose the price of s31(e11) is less than the price of s11(e11) and s21(e11); then, it selects s31 to supply these first e11 units. In the second loop, the algorithm processes the segments s11, s21, and s31 (but starting from e11) and gives k the value of e31 − e11 because it is less than e11 and e21. Then, it compares s11(e31 − e11), s21(e31 − e11), and s31(e11), and so on. The algorithm continues until the amount of allocated units is equal to the demand.

Figure 1. Linear piece-wise supply functions submitted to a single item (price against units consumed for the supply functions of bidders 1, 2 and 3).

As we can see, the algorithm evaluates one function per bidder in each step so it has a complexity O(m) per loop, where m is the number of bidders. As the loop is repeated k times, where k is the number of segments of the function with the highest number of them, the overall complexity is O(km). A safe way to reach an optimal allocation is to select for each unit the segment that offers the best price (i.e. k = 1). However, it is not necessary to repeat the process for each single unit since price and discount are constant in each segment. So, as long as the segments evaluated in each loop are the same (unit price and fixed price remain unchanged), the winner will also be the same. Thus, in each loop it is only necessary to compare the price of allocating the lowest ending quantity of the segments being processed, repeating this process until the demand is satisfied. Therefore, sPJ (detailed in Figure 2) always finds the most profitable optimal allocation.

Input: m supply functions f and demand.
• Pre-loop: initialise the needed variables: allocated to keep the total allocated demand, the list allocation showing the allocated demand per bidder, and the temporary storage variable k.
• Loop: in each loop, until the demand is satisfied, select the segment with the lowest gradient and allocate the minimum ending quantity units.

    While (allocated < demand) do
        k = select the minimum ending quantity
        if (demand - allocated < k) then
            winner = select the fm with the lowest gradient
            allocation[winner] += demand - allocated
            allocated = demand
        else
            winner = select the minimum fm(k)
            allocated += k
            allocation[winner] += k

Output: allocation, the variable detailing the amount allocated to each bidder.

Figure 2. The sPJ clearing algorithm.
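
A compact executable sketch of this loop follows (our reading of Figure 2, not the paper's own code; names are illustrative, segments are assumed contiguous from zero, and the winner of each block is picked by its incremental price, which is one way to realise "the best price for k units"):

    # Sketch of the sPJ single-item clearing loop. A bid is a list of contiguous
    # segments (s_l, e_l, pi_l, C_l), as in the earlier notation sketch.
    from typing import Dict, List, Tuple

    Segment = Tuple[float, float, float, float]

    def seg_price(fn: List[Segment], q: float) -> float:
        if q <= 0:
            return 0.0
        for s, e, unit, fixed in fn:
            if s <= q <= e:
                return unit * q + fixed
        raise ValueError("q is not covered by this bid")

    def spj_clear(bids: Dict[str, List[Segment]], demand: float) -> Dict[str, float]:
        """Allocate the demand block by block to the bidder that prices the next
        block lowest; the block size k is the smallest distance to a segment end
        among the bidders' current segments."""
        allocation = {b: 0.0 for b in bids}
        allocated = 0.0
        while allocated < demand:
            ahead = {}
            for b, fn in bids.items():
                left = [e - allocation[b] for _, e, _, _ in fn if e > allocation[b]]
                if left:
                    ahead[b] = min(left)
            if not ahead:
                raise ValueError("total supply is smaller than the demand")
            k = min(min(ahead.values()), demand - allocated)
            winner = min(ahead, key=lambda b: seg_price(bids[b], allocation[b] + k)
                                              - seg_price(bids[b], allocation[b]))
            allocation[winner] += k
            allocated += k
        return allocation

    # Example: two bidders and a demand of 12 units.
    print(spj_clear({"UC-1": [(0, 10, 0.20, 1.0), (10, 30, 0.18, 1.5)],
                     "UC-2": [(0, 20, 0.25, 0.5)]}, 12.0))
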
3.2. The Optimal Multi-Item Clearing Algorithm

This algorithm, detailed in Figure 3, is more complex since it cannot simply be generalised from the single-item one. If there were no correlations, it would be sufficient to run the sPJ case once for each item. However, the existence of correlations poses the problem of the inconsistent application of discounts. First, if a supplier bids for two items and offers a reduction if both bids get accepted, no reduction should be applied if only one of them succeeds. Second, functions become different after applying a discount. For example, assume Pl is a piece-wise supply function for the item l and it is included in the correlation ω(l, ...) = x. Then, P'l is the new supply function with the value P'l = xPl. Thus, the optimal allocation of a set of functions in which P'l is included may not be the same as the one in which everything else is the same but with Pl instead of P'l.

In this way, mPJ must process all possible combinations of discounted and non-discounted functions and check that discounts are applied consistently. To this end, we use a brute-force strategy for identifying all the possibilities. Here, all possible bids from each bidder are combined with all possible bids from the rest of the bidders. However, it is not necessary to evaluate all the combinations since some of them are repeated. For instance, Table 1 shows an auction with two suppliers (1 and 2) and two items (a and b). In this case, there is one possible correlation for each bidder, ω1(a, b) = x and ω2(a, b) = y. Thus, clearing the multi-item case implies evaluating the combinations where supplier 1 and 2 bid normally for item a (so the single-item clearing algorithm is run with supply functions Pa1 − Pa2); supplier 1 bids for items a and b with discount and supplier 2 bids normally for item b (so the single-item algorithm clears item a with supply function xPa1 and item b with xPb1 and Pb2), and so on.

Table 1. Single-item evaluations with two items and two bidders; repeated evaluations are shown in bold in the original.

    (bidder 1 \ bidder 2) | Pa2              | yPa2, yPb2               | Pb2             | -
    Pa1                   | Pa1 − Pa2        | Pa1 − yPa2, yPb2         | Pa1, Pb2        | Pa1
    xPa1, xPb1            | xPa1 − Pa2, xPb1 | xPa1 − yPa2, xPb1 − yPb2 | xPa1, xPb1 − Pb2 | xPa1, xPb1
    Pb1                   | Pa2, Pb1         | yPa2, Pb1 − yPb2         | Pb1 − Pb2       | Pb1
    -                     | Pa2              | yPa2, yPb2               | Pb2             | -

Input: j supply functions sj, j correlation functions ωj and demand qi for each item i.
• Pre-loop: Initialise the variable day-set to keep the optimal allocation for each item, item-set to keep a group of supply functions to be evaluated by the single-item clearing algorithm, all-item-sets to keep already processed sets of supply functions, and a boolean variable ok.
• Loop: For each item calculate the optimal allocation of a possible set of supply functions and then check whether the selected discounts are applicable.

    Do
        for each item i
            for each supplier sj
                add next sij to item-set
            if item-set not in all-item-sets then
                optimal-allocation = single item algorithm(item-set)
                store item-set in all-item-sets
            add optimal-allocation to day-set
        ok = check constraints(day-set, ωj)
        If ok then compare day-set with best so far
    until all the combinations are explored

Output: day-set, a set of i optimal allocations (one for each item) with the lowest total price.

Figure 3. The mPJ clearing algorithm.
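
The following Python sketch (ours, not the paper's code) captures this enumerate-and-check structure under two simplifying assumptions: it reuses seg_price and spj_clear from the sPJ sketch above, and each bidder either applies its single discount bundle to all of its functions or to none of them (the full algorithm enumerates discounted and non-discounted functions item by item):

    # Simplified sketch of the mPJ structure of Figure 3.
    # functions: bidder -> item -> segment list;
    # bundles: bidder -> (tuple of items in the discount, multiplicative factor).
    from itertools import product
    from typing import Dict, List, Tuple

    def mpj_clear(items: List[int], demand: Dict[int, float],
                  functions: Dict[str, Dict[int, list]],
                  bundles: Dict[str, Tuple[Tuple[int, ...], float]]):
        bidders = list(functions)
        best_day, best_total = None, float("inf")
        for choice in product([False, True], repeat=len(bidders)):
            use_discount = dict(zip(bidders, choice))
            day, total = {}, 0.0
            for i in items:
                item_bids = {}
                for b in bidders:
                    fn = functions[b].get(i)
                    if fn is None:
                        continue
                    bundle, factor = bundles[b]
                    f = factor if use_discount[b] and i in bundle else 1.0
                    # apply the discount by scaling the whole supply function
                    item_bids[b] = [(s, e, f * u, f * c) for (s, e, u, c) in fn]
                day[i] = spj_clear(item_bids, demand[i])
                total += sum(seg_price(item_bids[b], q)
                             for b, q in day[i].items() if q > 0)
            # check constraints: a discount only counts if the bidder actually
            # supplies part of every item in its bundle (one reading of the paper)
            consistent = all(not use_discount[b]
                             or all(day[i].get(b, 0.0) > 0 for i in bundles[b][0])
                             for b in bidders)
            if consistent and total < best_total:
                best_day, best_total = day, total
        return best_day, best_total
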
tion depends much more on the specific discount combina-
tions chosen. For instance, if the combinations include many
items (i.e. i is bigger), the single-item algorithm will be ex-
ecuted more often than if the combinations only include two
This brute-force strategy evaluates all possible bid com- items each. In short, there is no way to accurately deter-
binations (without repeating some of them) and, therefore, mine it a priori. Similarly, restricting the available amounts
it always finds the most profitable optimal day allocation. assigned to the discount increases the number of repeated
However, it also scales poorly. First, the number of possible combinations. Thus, if a supplier offers the same reduction
combinations depends on the number of items. In our case, for accepting two different items (e.g. ω(a, b) = ω(c, d)), the
with 24 items, there are 224 different combinations. Second, number of repeated combinations would increase further and
it also rises exponentially as the number of bidders grows: the complexity would continue decreasing.
with n items, and two bidders, 22n ; with three bidders 23n ,
4. Evaluation of the mPJ clearing Algorithm
and so on. In the extreme situation with two bidders submit-
ting a different supply function for each one of the 24 items In this section we present the results of comparing the
and 224 correlations, there are 2 · 248 possible combinations. performance of our mPJ algorithm (introduced in section
This is, n · (2n )m , where n is the number of items and m 3.2) with the only other optimal algorithm for this class of
the number of bidders. For instance, in the example of Ta- problem. Specifically, our benchmarks are the algorithm
ble 1, there are 2 · (22 )2 = 32 possible combinations, but mDJ presented by Dang and Jennings in [2] (described in
half of the combinations do not need to be re-calculated (in more detail in section 5, hereafter referred to as “sDJ” for
bold format in Table 1). Thus, if bidders bid for all items the single-item one and “mDJ” for multi-item) and the con-
and submit all possible correlations, the number of times strained bidding variant of our algorithm (as detailed in sec-
that the multi-item algorithm clears the single-item one is tion 3.3). In this later algorithm, we have set the maximum
n · (2n − 2n−1 )m = n · (2n−1 )m . Therefore, the complex- number of bids to be issued as half of the maximum possible
ity is O(kmn · 2(n−1)·m ), where n is the number of items, (c = 2n−1 ) and the maximum number of items included in a
m the number of suppliers and k the number of segments correlation as the number of items (i = n).
of the supply function with more segments. Note, how- The comparison shown in Figure 4 details how the com-
ever, that this is a pathological worst-case scenario, which plexity (defined in terms of X, the number of bids) scales
is highly unlikely to happen in practice. Furthermore, as we when the number of items n increases for a constant number
discuss bellow, it can be mitigated against by constraining of bidders m. As can be seen, mDJ soon becomes intractable
the agent’s bidding behaviours. (i.e. prohibitively high complexity), mPJ scales better, and

The comparison shown in Figure 4 details how the complexity (defined in terms of X, the number of bids) scales when the number of items n increases for a constant number of bidders m. As can be seen, mDJ soon becomes intractable (i.e. prohibitively high complexity), mPJ scales better, and the constrained variant presents the best profile for our purposes. This would have been even clearer if we had not set the values of c and i depending on the number of items n (as detailed above). With a fixed c and i, the constrained variant would have presented a flat line, whereas mDJ and mPJ would have grown exponentially, because, in contrast to mDJ and mPJ, the constrained variant does not depend directly on the number of items being auctioned.

Figure 4. Complexity evolution with n increasing and m steady (m = 2).

Figure 5 tests how the algorithms react to the increment of m (bidders) when n (items) remains steady. Again, mDJ becomes intractable as soon as it did in Fig. 4, whereas mPJ and its constrained variant present a significantly better performance profile. The main reason for this behaviour is the sensitivity of mDJ to the increment of both n and m (while mPJ is only sensitive to the increment of n, as seen in Fig. 4). For mDJ, a larger number of items and clients means a larger number of single allocations to form the set from which the allocations will be formed, whereas for mPJ, more clients means more correlations to clear, half of which need not be processed since they are repeated.

Figure 5. Complexity evolution with m increasing and n steady (n = 2).

Similarly, Figure 6 illustrates the behaviour of the algorithms when both n (items) and m (bidders) increase. Again, mDJ performs worse than the others. Its n = 2 series is almost equivalent to the n = 3 series of our multi-item algorithm. The best results are again achieved by the constrained variant (as we would expect).

Figure 6. Complexity evolution with n and m increasing.

Finally, Figure 7 depicts the dependence of each algorithm on k, the number of units allocated in each iteration of the single-item algorithm. In this dimension both our algorithm and its constrained variant again perform well. For mDJ, increasing k implies increasing the number of single allocations that may be combined with each other (therefore the algorithm grows exponentially, with k as the base). In contrast, for mPJ, increasing k just implies that the single-item algorithm is going to process more steps (therefore the algorithm grows linearly, with k as the factor).

Figure 7. Complexity evolution with n and m steady and k increasing (n, m = 2).

Note that the complexity of the constrained variant can be further reduced depending on the values of i and c. With the values we assigned to i and c for these comparisons, it is only m times less complex than mPJ (since c = 2^(n−1), i = n and the variant is O(ki · c^m), the complexity after substitution of c and i is O(kn · 2^((n−1)m))). The genuine advantage of the constrained variant can be found when there are higher values of n and m. Thus, based on our beliefs about the likely operation of the retail energy market, some "typical" values might be to have 24 items (e.g. 24 hours) and around 20 bidders (e.g. 20 UCs trying to sell their energy). Therefore, if we set k = 1 and restrict the number of possible correlations to 10, each one with 5 items (which experience indicates will provide UCs with enough persuasive power), the results are clear: mDJ presents a complexity of 1.498E+147, our mPJ 1.429E+141 and the constrained variant 5E+20. In our opinion, this means the constrained variant is sufficiently close to the optimal to be useful, but is still sufficiently tractable to be practicable.
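
These figures follow directly from the complexity expressions: O(mn · (k + 1)^(mn)) for mDJ (quoted in section 5), O(kmn · 2^((n−1)·m)) for mPJ and O(ki · c^m) for the constrained variant. A short Python check (ours) with n = 24, m = 20, k = 1, c = 10 and i = 5 reproduces them:

    # Reproducing the quoted worst-case figures from the complexity expressions.
    n, m, k, c, i = 24, 20, 1, 10, 5
    mdj = m * n * (k + 1) ** (m * n)       # ~ 1.498e+147
    mpj = k * m * n * 2 ** ((n - 1) * m)   # ~ 1.429e+141
    constrained = k * i * c ** m           # = 5e+20
    print(f"{mdj:.3e} {mpj:.3e} {constrained:.3e}")
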
5. Related Work

There has been comparatively little previous work in combinatorial energy markets, but there is a much larger literature on clearing algorithms for combinatorial auctions. However, these two strands of work have not been brought together before.

The work of Ygge [8] is seminal in the area of agents and energy management. Specifically, he combines power load management with market-oriented programming. He introduces a hierarchical structure of HomeBots, intelligent agents that represent every load in the system and buy the energy in a system of forward non-combinatorial auctions. With only one energy supplier, his approach places all the initiative on the HomeBots, so the UCs cannot express their preferences for having more or less demand at a certain time. We address this shortcoming by allowing combinatorial bidding.

Recently, there has been an enormous amount of research in combinatorial auctions [6], but most of this has focused on atomic propositions, which may limit the choice (and hence the profit) of the auctioneer. Addressing this limitation, several authors have developed algorithms that deal with demand/supply bidding [7]. Moreover, [1] developed a single-item and a multi-item algorithm for multi-unit combinatorial reverse auctions with demand/supply functions that run in polynomial time (but that are not guaranteed to find the optimal solution). In [2] the same authors present another two algorithms for the same environment, but they are optimal. The strategy they use consists in defining a dominant set containing an increasingly sorted group of single allocations and searching within this dominant set for the combinations that form the most profitable day allocation. The complexity in a worst-case scenario is O(n · (k + 1)^n) in the single-item case and O(mn · (k + 1)^(mn)) in the multi-item case (where n, m, and k have the same meaning as in the previous section). In comparison to this work, sPJ is less general than sDJ (because it only clears continuous piece-wise supply functions), but both our algorithms present significantly lower computational complexity even in a worst-case scenario (O(km) in the single-item case and O(kmn · 2^((n−1)·m)) in the multi-item case). That is, even with, for instance, k = 1, our mPJ algorithm is still 2^(n−1) times less complex than mDJ.

6. Conclusions and Future Work

The deregulation of the electricity industry offers new opportunities for providers and consumers. In this environment, customers can choose their suppliers to get cheaper energy and suppliers can compete to increase the number of their customers and, subsequently, their profits. To make this happen in practice, however, efficient electricity markets need to be developed. To this end, traditionally, energy management techniques have presented the two different sides with their own purposes and measures. On one hand, suppliers and retailers aim to smooth the overall energy consumption to avoid sudden peak loads. On the other hand, customers intend to reduce their energy bills without giving up freedom (meaning they can use energy at any time). Our system addresses both needs. It helps to reduce peak loads and to distribute them amongst less-loaded time slots. Specifically, by including off-peak hours in the discounts, UCs reward customers that consume electricity off-peak. Thus, they have an additional tool for energy management besides setting off-peak prices lower than peak ones. Moreover, the use of combinatorial auctions helps to produce efficient allocations of goods because combinatorial bidding allows the expression of more complex synergies between auctioned items [4]. Together with the use of supply functions and non-atomic propositions, consumers are able to accept energy from diverse UCs simultaneously, which, in turn, helps them to maximise their benefits.

Against this background, this paper presents, for the first time, an electricity retail market as a system of simultaneous reverse combinatorial auctions with supply-function bidding. Furthermore, we have developed the novel single- and multi-item clearing algorithms sPJ and mPJ, which are optimal, as well as a strategy to keep the multi-item algorithm within tractable ranges for the real-world problem we face. Future work will focus on evaluating the whole electricity market system and on reducing the complexity of the multi-item clearing algorithm with additional restrictions on combinatorial bidding. Further, we will focus on failure scenarios and how to keep the demand in secure ranges to avoid blackouts or massive overbooking of the system. Finally, the likely pricing strategies of the suppliers need a more detailed study to determine how to maximise their revenue.

Acknowledgments

The authors would like to thank Viet Dang, Raj Dash and Alex Rogers from the IAM Group at the University of Southampton for their support and assistance.

References

[1] V. D. Dang and N. R. Jennings. Polynomial algorithms for clearing multi-unit single item and multi-unit combinatorial reverse auctions. In Proceedings of ECAI'02, pages 23-27, Lyon, France, 2002.
[2] V. D. Dang and N. R. Jennings. Optimal clearing algorithms for multi-unit single item and multi-unit combinatorial auctions with demand/supply function bidding. In Proceedings of ICEC'03, pages 25-30, Pittsburgh, PA, 2003.
[3] R. K. Dash, D. C. Parkes, and N. R. Jennings. Computational mechanism design: A call to arms. IEEE Intelligent Systems, 18(6):20-47, 2003.
[4] Y. Fujishima, K. Leyton-Brown, and Y. Shoham. Taming the computational complexity of combinatorial auctions: Optimal and approximate approaches. In Proceedings of IJCAI'99, pages 548-553, Stockholm, Sweden, 1999.
[5] T. Groves. Incentives in teams. Econometrica, 41:617-631, 1973.
[6] T. Sandholm. Algorithm for optimal winner determination in combinatorial auctions. Artificial Intelligence, 135:1-54, 2002.
[7] T. Sandholm and S. Suri. Market clearability. In Proceedings of IJCAI'01, pages 1145-1151, Seattle, WA, 2001.
[8] F. Ygge. Market-oriented programming and its application to power load management. PhD thesis, Department of Computer Science, Lund University, 1998.

Proceedings of the 2005 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT’05)
0-7695-2416-8/05 $20.00 © 2005 IEEE
