This document summarizes an optimal strategy for a 2-player card game where players take turns taking cards from the sides of a face-up deck to maximize their score. For an even number of cards, the strategy is to sum the values of even and odd indexed cards and take the higher sum. For odd numbers of cards, the strategy simulates taking a side card and playing on the remaining even set. Dynamic programming is used to optimally play against a sister who greedily takes the higher side card, and also against an optimal opponent.


APP2 Report

Y. BERMUDEZ, C. DANILUK, C. FIORENTINO, J. MAKOWSKI,


M. MUSIIENKO, P. ROHOZA
October 20, 2022

In this APP, we were provided with a 2-player card game: a set of n cards lies face up on a table, arranged in a line, and on his turn each player takes a card from one of the two ends of the line. The players alternate turns. The final score of each player is calculated by summing the values of the cards he picked; the player with the greater score wins. We have to find a strategy that maximizes our points, whether we go first or not.

1 Formalization
Denote the initial number of cards by n.
Since at each moment the cards lie next to each other on the table, we index them: the first card has index 0, the last one index n − 1. (i, j) denotes the consecutive subsequence of cards starting at index i and ending at index j, inclusive.
Denote the values of the cards by C : N → N, i.e., given an index, C returns how many points that
card at that index is worth.

2 General Observations
In this section, we explain the strategy that allows us to win with an even number of cards or an odd
one.
Theorem 2.1. If there is an even number of cards and we start, we never lose.
Proof. The strategy is to sum all even and odd-index cards. We obtain two sums for both sequences:

E = ∑_{k=0}^{⌊(n−1)/2⌋} C(2k)

O = ∑_{k=0}^{⌊(n−1)/2⌋} C(2k + 1)

Note that the even- and odd-indexed cards partition the set of all cards, so E + O is the sum of all card values. Hence, if we obtain all cards of the larger of the two sums, we cannot lose the game.
Without loss of generality, assume E ≥ O, so we want to collect all even-indexed cards. Since we start, we take the leftmost card (index 0). Only odd-indexed cards are then available to the opponent, so after his move we can again choose an even-indexed card. It can be seen inductively that we can obtain the sum E this way.
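The parity argument can be sketched in a few lines of Python (the function name and return format here are ours, not part of the original report):

```python
def parity_strategy_choice(cards):
    """For an even number of cards, compare the sums of even- and
    odd-indexed cards and report which parity class to collect."""
    assert len(cards) % 2 == 0
    even_sum = sum(cards[0::2])  # E: cards at indices 0, 2, 4, ...
    odd_sum = sum(cards[1::2])   # O: cards at indices 1, 3, 5, ...
    # Taking the leftmost card first locks us onto even indices;
    # taking the rightmost card first locks us onto odd indices.
    return ("even", even_sum) if even_sum >= odd_sum else ("odd", odd_sum)
```

Whichever parity class wins, the first-move rule follows directly: leftmost card for the even class, rightmost card for the odd class.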

Figure 1: Simulation of the strategy on a random set with an even number of cards. Player 1 chooses
the black 2 because the sum containing this 2 is greater than the other sum.

Theorem 2.2. If there is an odd number of cards and we choose who starts, we never lose.
Proof. The strategy is to simulate picking one of the outermost cards, for example the leftmost one. We set it aside and sum the even- and odd-indexed cards of the remaining set, as we did for an even number of cards. To decide whether we want to start, we add the chosen card (here the left one) to the smaller of the two sums: after our first pick, the opponent moves first on an even-sized set, so he can secure the larger sum, leaving us the smaller one plus the card we took. If this total is greater than the other sum, we choose to start on that side. If not, we repeat the computation with the rightmost card. If neither side card added to the smaller sum exceeds the other sum, we let the opponent start.

Figure 2: Simulation of how we choose whether to start on a random set with an odd number of cards.

As we can see in the figure above, 3 + 12 < 18 and 13 + 2 < 18, so here we decide to let player 2 start. Since player 2 starts, player 1 always plays on an even number of cards, so we apply the strategy for the even case: the other player takes whichever card he wants, and we respond with the parity strategy until there are no cards left.
This strategy only works if we can choose whether to start.
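The decision procedure above can be sketched as follows (a minimal Python sketch; the function and helper names are ours):

```python
def should_start(cards):
    """For an odd number of cards, decide whether to take the first
    move, and if so from which side."""
    assert len(cards) % 2 == 1

    def guaranteed_after_taking(rest, taken):
        # After our first pick the opponent moves first on an even-sized
        # set, so he can secure the larger parity sum; we are left with
        # the smaller one plus the card we already took.
        even_sum, odd_sum = sum(rest[0::2]), sum(rest[1::2])
        return taken + min(even_sum, odd_sum), max(even_sum, odd_sum)

    ours, theirs = guaranteed_after_taking(cards[1:], cards[0])
    if ours > theirs:
        return "start-left"
    ours, theirs = guaranteed_after_taking(cards[:-1], cards[-1])
    if ours > theirs:
        return "start-right"
    return "let-opponent-start"
```

For example, on [3, 1, 5] taking the 5 first guarantees 6 points against at most 3, so we start from the right; on [1, 10, 1] neither side works and we let the opponent start.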

Figure 3: Simulation of the strategy on a random set with an odd number of cards

3 Playing Optimally Against the Sister


The sister employs the greedy strategy of choosing the greater of the two outermost cards. Therefore, we can assume without loss of generality that we go first: if we do not, we can immediately tell which card the sister would choose first (the greater outermost one) and execute our strategy on the remaining set of cards. Note that maximizing points against her strategy has optimal substructure: given cards (i, j), we can solve the problem on the subsequences left after our move and the sister's forced reply, add the card we took, and choose the option that yields the greater value. This allows for a recursive formulation of the problem:
The base cases are

f(i, j) = 0 if i > j
f(i, i) = C(i)
f(i, i + 1) = max(C(i), C(i + 1))

and the recursive formula is

f(i, j) = max(L(i, j), R(i, j)), where

L(i, j) = C(i) + f(i + 2, j)        if C(i + 1) ≥ C(j)
          C(i) + f(i + 1, j − 1)    if C(i + 1) < C(j)

R(i, j) = C(j) + f(i + 1, j − 1)    if C(i) ≥ C(j − 1)
          C(j) + f(i, j − 2)        if C(i) < C(j − 1)

Each invocation of f plays our own move and that of the sister: we choose either the left or the right card, and the sister then takes the greater of the two new outermost cards. A recursive algorithm can be constructed from this recurrence in a straightforward manner. The plain recursive algorithm takes time O(2^(n/2)), however: every call removes two cards and calls itself two times.
To remedy this, a dynamic programming solution can be used, since there are overlapping subproblems:
some subproblems are solved multiple times as can be seen from drawing the tree of recursive calls, so
storing them in a table and reusing their results after solving them once yields a speed-up.
To see how a bottom-up dynamic programming solution could work, classify all subproblems (i, j) into sets S∆:

S∆ = {(i, j) | j − i = ∆}

Note that the base cases solve all problems in S0 and S1 . Now, note that for solving all in S2 , we solve
subproblems in S0 and for S3 , we solve subproblems in S1 . Generally, for solving all problems in S∆ ,
we solve subproblems in S∆−2 .
Therefore, we can solve all subproblems (i, j) with 0 ≤ i ≤ j ≤ n − 1 by solving S0 and S1 via the base cases, then S2, then S3, and so on. This is not recursive, since the required subproblems at any moment have already been solved, and it solves each subproblem exactly once. In the last iteration, (0, n − 1) is solved, whose solution is the maximum number of points when playing against the sister.
The running time of this is O(n²), since there are ∑_{i=1}^{n} i = O(n²) subproblems. All are solved exactly once, and solving one takes a constant amount of time.

In the following algorithm, subproblems are stored in a table T of dimensions n · n. T (i, j) corresponds
to the optimal solution for the subarray (i, j).

Algorithm 1 Bottom-up dynamic programming implementation to play optimally against the sister
procedure SisterDynProg
    for ∆ = 0; ∆ < n; ∆ = ∆ + 1 do
        for i = 0; i < n − ∆; i = i + 1 do
            j ← i + ∆
            if ∆ = 0 then
                T(i, j) ← C(i)
            else if ∆ = 1 then
                T(i, j) ← max(C(i), C(j))
            else
                if C(i + 1) ≥ C(j) then
                    takingLeft ← C(i) + T(i + 2, j)
                else
                    takingLeft ← C(i) + T(i + 1, j − 1)
                if C(i) ≥ C(j − 1) then
                    takingRight ← C(j) + T(i + 1, j − 1)
                else
                    takingRight ← C(j) + T(i, j − 2)
                T(i, j) ← max(takingLeft, takingRight)
    return T(0, n − 1)
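For reference, the bottom-up scheme can be sketched in runnable Python (the function name is ours; T is a plain 2D list):

```python
def sister_dyn_prog(C):
    """Bottom-up DP for our maximum score against the greedy sister
    when we move first. C is the list of card values."""
    n = len(C)
    # T[i][j] = our best score on the subarray (i, j)
    T = [[0] * n for _ in range(n)]
    for delta in range(n):              # solve S_0, S_1, S_2, ... in order
        for i in range(n - delta):
            j = i + delta
            if delta == 0:
                T[i][j] = C[i]
            elif delta == 1:
                T[i][j] = max(C[i], C[j])
            else:
                # we take the left card; the sister then greedily takes
                # the larger of the two new outermost cards
                if C[i + 1] >= C[j]:
                    taking_left = C[i] + T[i + 2][j]
                else:
                    taking_left = C[i] + T[i + 1][j - 1]
                # we take the right card; same greedy response
                if C[i] >= C[j - 1]:
                    taking_right = C[j] + T[i + 1][j - 1]
                else:
                    taking_right = C[j] + T[i][j - 2]
                T[i][j] = max(taking_left, taking_right)
    return T[0][n - 1]
```

On [8, 15, 3, 7], for instance, taking the 7 first makes the sister take the 8 and frees the 15 for us, for a total of 22.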

To not only get the maximum number of points, but also the cards taken to get these points, one
could simply store the first card taken for each subproblem in a separate table called S for side. S(i, j)
is either left or right, depending on whether we would take the card at i or at j first for an optimal
solution.
To find the solution, start with (0, n − 1) and look at S(0, n − 1). Depending on whether it is left or right, we remove the corresponding card, replay the sister's greedy reply, and look up S for the remaining range next. Repeating this until we arrive at a range (i, j) with i > j, we can traverse all choices taken and store the respective card in each iteration.
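This reconstruction could be sketched as follows (a Python sketch; the names and the return format are ours):

```python
def sister_dyn_prog_with_moves(C):
    """Same DP as before, but also records in S which side is taken
    first for each subproblem, so the cards we take can be recovered."""
    n = len(C)
    T = [[0] * n for _ in range(n)]
    S = [[None] * n for _ in range(n)]
    for delta in range(n):
        for i in range(n - delta):
            j = i + delta
            if delta == 0:
                T[i][j], S[i][j] = C[i], "left"
            elif delta == 1:
                T[i][j] = max(C[i], C[j])
                S[i][j] = "left" if C[i] >= C[j] else "right"
            else:
                left = C[i] + (T[i + 2][j] if C[i + 1] >= C[j] else T[i + 1][j - 1])
                right = C[j] + (T[i + 1][j - 1] if C[i] >= C[j - 1] else T[i][j - 2])
                T[i][j] = max(left, right)
                S[i][j] = "left" if left >= right else "right"
    # walk S from (0, n - 1), replaying the sister's greedy reply
    moves, i, j = [], 0, n - 1
    while i <= j:
        if S[i][j] == "left":
            moves.append(C[i]); i += 1
        else:
            moves.append(C[j]); j -= 1
        if i <= j:  # sister greedily removes the larger outermost card
            if C[i] >= C[j]:
                i += 1
            else:
                j -= 1
    return T[0][n - 1], moves
```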

4 Playing Optimally Against an Optimal Opponent


Instead of the sister, consider now an opponent who plays optimally. This is harder because the opponent does not make a local (greedy) decision but a globally optimal one.
The way we play does not change; we only need to adjust the decision-making of the opponent. The key observation is that the opponent makes an optimal move by minimizing our best continuation: the points we do not get, he gets instead, so minimizing our score maximizes his. Look at the following recursive scheme:

f (i, j) = max (C(i) + min(f (i + 2, j), f (i + 1, j − 1)), C(j) + min(f (i, j − 2), f (i + 1, j − 1)))

As before, we choose either left or right depending on which move maximizes our points. This time,
however, the card taken away by the opponent depends on which next move would give us less points.
To know the specific cards taken, we can keep the first card taken for each subproblem in a table as
we did before.
The running time, when implemented in a bottom-up dynamic-programming manner, would be O(n2 ),
as before: since the recursive call structure is the same, we can proceed by grouping all subproblems
(i, j) by their differences j − i. Only the constant amount of work in the loops changes, but does not
affect the asymptotic behavior.
We decide whether to start in a similar manner as before, by comparing f(1, n − 1) and f(0, n − 2). Since the opponent plays optimally by minimizing our points, if he starts he can force us into the smaller of the two subproblems by taking the appropriate card. Hence, if we do not start, our optimal score is min(f(1, n − 1), f(0, n − 2)).
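The recurrence for the optimal opponent can be sketched with memoized recursion, which gives the same O(n²) behavior as the bottom-up table (a Python sketch; the function names are ours):

```python
from functools import lru_cache

def optimal_vs_optimal(C):
    """Our best score on cards C when both players play optimally
    and we move first."""
    @lru_cache(maxsize=None)
    def f(i, j):
        if i > j:
            return 0
        if i == j:
            return C[i]
        # the opponent replies so as to leave us the worse subproblem
        take_left = C[i] + min(f(i + 2, j), f(i + 1, j - 1))
        take_right = C[j] + min(f(i, j - 2), f(i + 1, j - 1))
        return max(take_left, take_right)

    return f(0, len(C) - 1)
```

If we do not move first, the value described above would instead be min(f(1, n − 1), f(0, n − 2)) over the same table of subproblems.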
