EE/Econ 458 Integer Programming: J. McCalley

This document provides an overview of integer programming and of techniques for solving integer programs (IPs), including mixed integer linear programs (MIPs). It explains why IPs cannot be solved practically via exhaustive enumeration or via linear programming relaxation with rounding. It introduces cutting-plane methods and branch and bound (tree search) as approaches, focusing on the branch-and-bound method. Branch and bound conceptualizes the problem as a tree of solutions and tries to avoid searching the entire tree. An example problem illustrates how branch and bound works by creating predecessor and successor problems that add integer constraints.


EE/Econ 458

Integer Programming

J. McCalley

Day-Ahead Market

SCUC enforces a limited number of transmission constraints on the commitment solution.

Each hourly SCED performs SFT, which tests all contingencies in a list and, for any violations, imposes appropriate constraints in SCED and re-solves it.

Ref: Xingwang Ma, Haili Song, Mingguo Hong, Jie Wan, Yonghong Chen, Eugene Zak, "The Security-constrained Commitment and Dispatch For Midwest ISO Day-ahead Co-optimized Energy and Ancillary Service Market," Proc. of the 2009 IEEE PES General Meeting.
Simplified versions of SCED and SCUC

They are both tools to solve optimization problems, but different optimization problems. Here are some observations.

SCED objective:
  min Σ_{k ∈ generator buses} s_k·P_gk

SCUC objective:
  min Σ_t Σ_i [ F_it·z_it + C_it·g_it + S_it·y_it + H_it·x_it ]
  where the four terms are fixed (no-load) costs, production costs, startup costs, and shutdown costs.

SCED:
• Decision variables are the P_gk.
• Objective & constraints are linear.
• The P_gk are continuous valued.
• It is a linear program (LP).
• It is a convex programming problem.
• It is solved by simplex, very efficiently.
• It covers a single time period (1 hr or 5 min).
• It provides LMPs.

SCUC:
• Decision variables are z_it, g_it, y_it, x_it.
• Objective & constraints are linear.
• z_it, y_it, x_it are discrete; g_it is continuous.
• It is a mixed integer linear program (MIP).
• It is a non-convex programming problem.
• It is solved by branch and bound.
• It covers multiple time periods (2-24 hrs or more).
• It does not provide LMPs.

SCUC is a mixed integer linear program, typically called a MIP. Before discussing MIPs, let's investigate integer programs (IPs).
Solving IPs

Minimize f(x1, x2, x3)
subject to
  h(x1, x2, x3) = c
  g(x1, x2, x3) ≤ b
  x1, x2, x3 binary (1 or 0)

How to solve it? Two immediate ideas:
1. Check every possible solution (exhaustive enumeration).
2. Solve it as an LP and then round to the closest binary value.
Solving IPs: Exhaustive enumeration

Minimize f(x1, x2, x3)
subject to
  h(x1, x2, x3) = c
  g(x1, x2, x3) ≤ b
  x1, x2, x3 binary (1 or 0)

Possible solutions are:
(0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0), (1,1,1)

There are 2³ = 8 solutions. But what if there were 30 or 300 variables?

2³⁰ = 1,073,741,824 ≈ 1.07×10⁹ (over a billion possible solutions)
2³⁰⁰ ≈ 2.037×10⁹⁰

A typical ISO SCUC might use 3000 binary variables! This method will not work for us.
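For small problems, enumeration is trivial to code. Below is a minimal Python sketch; the particular f, h, g, c, and b are hypothetical placeholders (the lecture leaves them abstract):

```python
from itertools import product

# Hypothetical objective and constraints, for illustration only.
f = lambda x1, x2, x3: x1 + 2*x2 + 3*x3   # objective to minimize
h = lambda x1, x2, x3: x1 + x2 + x3       # equality constraint h(x) = c
g = lambda x1, x2, x3: x1 + x3            # inequality constraint g(x) <= b
c, b = 2, 1

best_x, best_f = None, float("inf")
for x in product((0, 1), repeat=3):       # all 2**3 = 8 binary points
    if h(*x) == c and g(*x) <= b and f(*x) < best_f:
        best_x, best_f = x, f(*x)

print(best_x, best_f)   # (1, 1, 0) 3
```

The loop body is cheap, but the iteration count doubles with every added binary variable, which is exactly why this approach collapses at ISO scale.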
Solving IPs: LP relaxation with rounding

• "Relax" the requirement that the decision variables be integer.
• The problem then becomes a standard LP, and we solve it using simplex.
• This gives a solution where some or all of the variables are non-integer. The non-integer variables are then rounded to the nearest integer.

Two problems:
1. The solution may not be feasible (illustrated in the notes). Let's fix this by requiring that we round to the nearest feasible solution.
2. The solution, even if feasible, may not be optimal (illustrated on the next slide).
Solving IPs: LP relaxation with rounding

max Z = x1 + 5·x2
s.t.  x1 + 10·x2 ≤ 20
      x1 ≤ 2
      x1, x2 ≥ 0
      x1, x2 integers

[Figure: the integer points (dots) and the shaded LP feasible region in the (x1, x2) plane, with the line Z = 10 = x1 + 5·x2 passing through the actual integer solution (0, 2) and the LP-relaxed solution marked at (2, 1.8).]

• The dots are the possible integer solutions, and the shaded region is the feasible region.
• The LP-relaxed solution is (2, 1.8), where Z* = 11. If we round, we get (2, 1), where Z* = 7.
• But the point (0, 2) lies on the Z = 10 line. Because (0, 2) is integer and feasible, and its objective value is higher than Z* = 7, we see that the rounding approach has failed: the true integer optimum is (0, 2) with Z* = 10.
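The LP relaxation on this slide can be reproduced with an off-the-shelf solver. A sketch using scipy.optimize.linprog (assumed available; linprog minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

# LP relaxation of: max Z = x1 + 5*x2
#   s.t. x1 + 10*x2 <= 20,  x1 <= 2,  x1, x2 >= 0
res = linprog(c=[-1, -5],                    # negated to maximize
              A_ub=[[1, 10], [1, 0]],
              b_ub=[20, 2],
              bounds=[(0, None), (0, None)],
              method="highs")
x1, x2 = res.x
print(round(x1, 3), round(x2, 3), -res.fun)  # 2.0 1.8 11.0
```

Rounding this relaxed solution gives (2, 1) with Z = 7, while the integer optimum is (0, 2) with Z = 10: exactly the failure the slide illustrates.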
Solving IPs: Cutting-plane methods

[Figure: the same lattice of integer points, showing the original constraint and the shrunk feasible region after cuts are added.]

• Cutting-plane methods generate additional constraints that eliminate non-integer solutions but not integer solutions.
• The idea is to shrink the feasible region "just enough" so that all corner points are integer.
Solving IPs: Tree-search methods

[Figure: a binary tree of solutions.]

• Tree-search methods conceptualize the problem as a huge tree of solutions, and then they try to do some smart things to avoid searching the entire tree.
• The most popular IP (and MIP) solver today is a tree-search method called branch and bound. CPLEX uses this method, in combination with cutting planes.
• We will study the branch-and-bound method.
Homework

[Slide graphic marking what we are about to do, and what we will do at the end of this class.]
Solving IPs: Branch & Bound

Definitions: predecessor problem, successor problem.

Problem Pj is predecessor to problem Pk, and problem Pk is successor to problem Pj, if they are identical with the exception that one variable that was continuous-valued in Pj's solution is pushed toward an integer value in Pk by an added bound constraint (e.g., x ≤ 1 or x ≥ 2).
Example

("Zeta," ζ, denotes the objective.)

P0:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1, x2 ≥ 0

Solution (as an LP) using CPLEX yields:
P0 solution: x1 = 1.667, x2 = 3.333, ζ = 68.333

Now pose a problem P1 to be exactly like P0 except that we constrain x1 ≤ 1.

P1:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1 ≤ 1
      x1, x2 ≥ 0

What do you expect the value of x1 to be in the optimal solution? Because the solution without the constraint x1 ≤ 1 wanted 1.667 of x1, we can be sure that the solution with the constraint x1 ≤ 1 will want as much of x1 as it can get, i.e., it will want x1 = 1.

Solution (as an LP) using CPLEX yields:
P1 solution: x1 = 1.0, x2 = 4.0, ζ = 65.0

It worked: x1 did in fact become integer. In fact, x2 became integer as well, but this is by coincidence.
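These two LP solves can be reproduced without CPLEX; a sketch using scipy.optimize.linprog (assumed available; linprog minimizes, so the objective is negated):

```python
from scipy.optimize import linprog

c = [-17, -12]                 # maximize 17*x1 + 12*x2
A_ub = [[10, 7], [1, 1]]       # 10*x1 + 7*x2 <= 40,  x1 + x2 <= 5
b_ub = [40, 5]

# P0: x1, x2 >= 0
p0 = linprog(c, A_ub=A_ub, b_ub=b_ub,
             bounds=[(0, None), (0, None)], method="highs")
# P1: same problem plus the branching constraint x1 <= 1
p1 = linprog(c, A_ub=A_ub, b_ub=b_ub,
             bounds=[(0, 1), (0, None)], method="highs")

print([round(v, 3) for v in p0.x], round(-p0.fun, 3))  # [1.667, 3.333] 68.333
print([round(v, 3) for v in p1.x], round(-p1.fun, 3))  # [1.0, 4.0] 65.0
```

Note that the branching constraint x1 ≤ 1 is imposed simply by tightening that variable's bound, which is how B&B implementations typically do it.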
Example

Let's consider the same problem but with (x1, x2) constrained to be integers.

IP1:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1, x2 ≥ 0
      x1, x2 integers

Recall:
P0 solution: x1 = 1.667, x2 = 3.333, ζ = 68.333
P1 solution (x1 ≤ 1): x1 = 1.0, x2 = 4.0, ζ = 65.0

Is the P1 solution we obtained, which by chance is a feasible solution to IP1, also optimal to IP1?

IP optimality criterion: a solution to an IP is optimal if the corresponding objective value is better than the objective value for every other feasible solution to that IP.

P1 has to be as good as or better than its successors, because these will be more heavily constrained, so we need not look at P1's successors. But what about P2? What is P2?
Example

P2 is the P0 problem with x1 ≥ 2. It is the remaining part of the space we need to search.

P2:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1 ≥ 2
      x2 ≥ 0

Use CPLEX to solve P2:
P2 solution: x1 = 2.0, x2 = 2.857, ζ = 68.286

Observe that the P2 solution is not integer (x2 = 2.857), so it is infeasible for IP1. So can we draw any useful conclusion from this observation? YES! But why?
1. The P2 objective, 68.286, is better than our best feasible solution so far, 65.0 (65.0 establishes a lower bound: the IP solution is no less than 65).
2. Although the P2 solution is not integer-feasible, we can add constraints and find feasible solutions in successor problems.
3. We are not sure any of those successor problems will be "better than our best" (of 65), but because 68.286 > 65, we know it is worth trying.
Yes, why? (In more detail...)

Compare the objective function value of P2, 68.286, with the objective function value of P1, 65. Since we are maximizing, the objective function of P2 is better. But the P2 solution is not integer-feasible. We can, however, constrain x2 so that we get a feasible solution. Whether that feasible solution will have a better objective function value, we do not know.

What we do know is: because the objective function value of P2 (68.286) is better than the objective function value of P1 (65), it is worthwhile to check. Although the objective function values of successor problems to P2 can only get worse (lower), they might still be better than P1's, and if we can find a successor (or a successor's successor, ...) that is feasible, it might be better than our best current feasible solution, which is P1.
Example

Question: What if the P2 solution had been 64? Would you have searched its successor nodes?

NO! Why not? Because successor nodes, even if feasible, would necessarily be more constrained than P2 and therefore no better than its solution. Since we already have a feasible solution of 65, and P2's successors could be no better than 64, there would be no use searching them. Again, the value of 65 establishes a lower bound on the problem solution.
Example

Because the P2 solution is better than our current best feasible solution, we should pursue P2's successor problems. So what constraint should we add to P2? Since x2 = 2.857 in the P2 solution, our choices are x2 ≤ 2 and x2 ≥ 3. Let's try x2 ≤ 2, using P3.

P3:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1 ≥ 2
      x2 ≤ 2
      x1, x2 ≥ 0

P3 solution: x1 = 2.6, x2 = 2.0, ζ = 68.2

The P3 solution is not integer-feasible. Should we branch further? YES!
1. The P3 objective, 68.2, is better than our best feasible solution so far, 65.0.
2. Although the P3 solution is not integer-feasible, we can add constraints and find feasible solutions in successor problems.
3. We are not sure any of those successor problems will be "better than our best" (of 65), but because 68.2 > 65, we know it is worth trying.
Example

So what constraint should we add to P3? Since x1 = 2.6 in the P3 solution, our choices are x1 ≤ 2 and x1 ≥ 3. Let's try x1 ≤ 2, using P4.

P4:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1 ≥ 2
      x1 ≤ 2
      x2 ≤ 2
      x1, x2 ≥ 0

P4 solution: x1 = 2.0, x2 = 2.0, ζ = 58.0

Should we branch further? No! Two reasons, either one of which is enough:
• The P4 solution is feasible (integer)! So we will not find a better feasible solution that is a successor to P4.
• The P4 objective is 58, worse than our best (65). So P4 and any further successor nodes are of no interest.
Now what? We have to decide on going back to P3 or P2. Choose P3. We impose x1 ≥ 3, using P5.

P5:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1 ≥ 3
      x2 ≤ 2
      x1, x2 ≥ 0

P5 solution: x1 = 3.0, x2 = 1.4286, ζ = 68.1429

Should we branch further? Yes!
1. The P5 objective, 68.1429, is better than our best feasible solution so far, 65.0.
2. Although the P5 solution is not integer-feasible, we can add constraints and find feasible solutions in successor problems.
3. We are not sure any of those successor problems will be "better than our best" (of 65), but because 68.1429 > 65, we know it is worth trying.
So what constraint should we add to P5? Since x2 = 1.4286 in the P5 solution, our choices are x2 ≤ 1 and x2 ≥ 2. Let's try x2 ≤ 1, using P6.

P6:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1 ≥ 3
      x2 ≤ 1
      x1, x2 ≥ 0

P6 solution: x1 = 3.3, x2 = 1.0, ζ = 68.1

Should we branch further? Yes!
1. The P6 objective, 68.1, is better than our best feasible solution so far, 65.0.
2. Although the P6 solution is not integer-feasible, we can add constraints and find feasible solutions in successor problems.
3. We are not sure any of those successor problems will be "better than our best" (of 65), but because 68.1 > 65, we know it is worth trying.
So what constraint should we add to P6? Since x1 = 3.3 in the P6 solution, our choices are x1 ≤ 3 and x1 ≥ 4. Let's try x1 ≤ 3, using P7.

P7:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1 ≥ 3
      x1 ≤ 3
      x2 ≤ 1
      x1, x2 ≥ 0

P7 solution: x1 = 3.0, x2 = 1.0, ζ = 63.0

Should we branch further? No! Two reasons, either one of which is enough:
• The P7 solution is feasible (integer)! We will not find a better feasible successor to P7.
• The P7 objective is 63, worse than our best (65), so P7's successor nodes are of no interest.

Now let's try the other branch, x1 ≥ 4.
So now add the x1 ≥ 4 constraint to P6, to obtain P8.

P8:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1 ≥ 4
      x2 ≤ 1
      x1, x2 ≥ 0

P8 solution: x1 = 4.0, x2 = 0, ζ = 68.0

Should we branch further? No!
• The P8 solution is feasible (integer)! We will not find a better feasible successor to P8.

Note the P8 objective is 68, which is better than our best (65)! So the P8 solution becomes our new best, i.e., it becomes our new lower bound on the solution. That is, the objective value of the IP optimum is at least 68.
Example

Question: Do we need to check the other branch at P5 and at P2?

Answer: Yes! Why? Because the objective values for P5 (68.1429) and P2 (68.286) are greater than our bound of 68, so a successor node could be better than 68 as well.
The other branch at P2 adds x2 ≥ 3, giving P9; the other branch at P5 adds x2 ≥ 2, giving P10.

P9:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1 ≥ 2
      x2 ≥ 3
      x1, x2 ≥ 0

P9 is infeasible (x1 ≥ 2 and x2 ≥ 3 force 10·x1 + 7·x2 ≥ 41 > 40).

P10:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1 ≥ 3
      x2 ≥ 2
      x1, x2 ≥ 0

P10 is infeasible (x1 ≥ 3 and x2 ≥ 2 force 10·x1 + 7·x2 ≥ 44 > 40).

The search is now complete. Final tree:
• P0 (ζ = 68.333) branched on x1: P1 (x1 ≤ 1: ζ = 65.0, integer) and P2 (x1 ≥ 2: ζ = 68.286).
• P2 branched on x2: P3 (x2 ≤ 2: ζ = 68.2) and P9 (x2 ≥ 3: infeasible).
• P3 branched on x1: P4 (x1 ≤ 2: ζ = 58.0, integer) and P5 (x1 ≥ 3: ζ = 68.1429).
• P5 branched on x2: P6 (x2 ≤ 1: ζ = 68.1) and P10 (x2 ≥ 2: infeasible).
• P6 branched on x1: P7 (x1 ≤ 3: ζ = 63.0, integer) and P8 (x1 ≥ 4: ζ = 68.0, integer).
The optimal solution to IP1 is therefore x1 = 4, x2 = 0, with ζ = 68.
Central ideas in branch & bound

Branch: force integrality on one variable by adding a constraint to an LP relaxation. Use the LP relaxation to decide how to branch: each branch adds a constraint to the previous LP relaxation to enforce integrality on one variable that was not integer in the predecessor's solution.

Bound: continue branching from a node only if the objective function value of its solution is better than the objective function value of the best feasible solution obtained so far. Maintain the best feasible solution obtained so far as a bound on the tree paths that should still be searched.
• If a tree node has an objective value less optimal than the identified bound, no further searching from that node is necessary, since adding constraints can never improve an objective.
• If a tree node has an objective value more optimal than the identified bound, then additional searching from that node is necessary.
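The branch and bound ideas above can be sketched in a few dozen lines of Python. This is a teaching sketch, not CPLEX's algorithm: it solves each LP relaxation with scipy.optimize.linprog (assumed available), branches on the first fractional variable by tightening that variable's bounds, and prunes by the incumbent bound.

```python
from math import floor, ceil
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds):
    """Maximize c @ x subject to A_ub @ x <= b_ub with all x integer,
    via LP relaxations plus depth-first branch and bound."""
    best_obj, best_x = float("-inf"), None
    stack = [bounds]                         # each entry: per-variable (lo, hi) bounds
    while stack:
        bnds = stack.pop()
        res = linprog([-ci for ci in c], A_ub=A_ub, b_ub=b_ub,
                      bounds=bnds, method="highs")
        if not res.success:                  # LP relaxation infeasible: prune
            continue
        obj = -res.fun
        if obj <= best_obj:                  # bound: node cannot beat incumbent
            continue
        # branch on the first fractional variable, if any
        frac = next((i for i, v in enumerate(res.x)
                     if abs(v - round(v)) > 1e-6), None)
        if frac is None:                     # integer solution: new incumbent
            best_obj, best_x = obj, [int(round(v)) for v in res.x]
            continue
        v = res.x[frac]
        lo, hi = bnds[frac]
        left, right = list(bnds), list(bnds)
        left[frac] = (lo, floor(v))          # successor with x_frac <= floor(v)
        right[frac] = (ceil(v), hi)          # successor with x_frac >= ceil(v)
        stack += [left, right]
    return best_x, best_obj

# The lecture's example: max 17*x1 + 12*x2
#   s.t. 10*x1 + 7*x2 <= 40, x1 + x2 <= 5, x1, x2 >= 0 and integer
x, zeta = branch_and_bound([17, 12], [[10, 7], [1, 1]], [40, 5],
                           [(0, None), (0, None)])
print(x, zeta)   # [4, 0] 68.0
```

Run on the lecture's example, the sketch returns x = [4, 0] with ζ = 68, matching the hand search above; the exact set of nodes it visits depends on the order successors are popped from the stack.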
Homework

[Slide graphic marking what we just did, and what we are about to do.]
Using CPLEX to solve MIPs directly

1. Created CPLEX source within a text file called mip.lp as follows:

   maximize
    17 x1 + 12 x2
   subject to
    10 x1 + 7 x2 <= 40
    x1 + x2 <= 5
   Bounds
    0 <= x1 <= 1000
    0 <= x2 <= 1000
   Integer
    x1 x2
   end

2. Used WinSCP to port the file to the server (linux-7.ece.iastate.edu).
4. Typed cplex122 to call CPLEX.
5. Typed read mip.lp to read in the problem statement.
6. Typed mipopt to call the MIP solver. The result was as follows...
Using CPLEX to solve MIPs directly

7. Typed display solution variables - . The result was:

   Variable Name    Solution Value
   x1               4.000000
   All other variables in the range 1-2 are 0.

That is, the MIP solution is x1 = 4, x2 = 0 (ζ = 68), matching the branch-and-bound result.
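If CPLEX is not available, the same pure-IP solve can be reproduced with scipy.optimize.milp (available in SciPy 1.9+, which also uses an LP-relaxation branch-and-bound solver, HiGHS). A sketch:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Same problem as mip.lp: max 17*x1 + 12*x2
#   s.t. 10*x1 + 7*x2 <= 40, x1 + x2 <= 5,
#        0 <= x1, x2 <= 1000, both integer
res = milp(c=[-17, -12],                                 # milp minimizes
           constraints=LinearConstraint([[10, 7], [1, 1]], ub=[40, 5]),
           integrality=[1, 1],                           # both variables integer
           bounds=Bounds(0, 1000))
print([int(round(v)) for v in res.x], round(-res.fun, 3))   # [4, 0] 68.0
```

The integrality array plays the role of the "Integer" section of the CPLEX LP file.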
Depth vs. breadth

Recall the point in our procedure just after solving P3. The question can be posed like this:
• Depth: do we continue from P3, requiring x1 ≤ 2, for example?
• Breadth: do we go back to P2 to examine its other branch, x2 ≥ 3?

For high-dimensional IPs, it is usually the case that feasible solutions are more likely to occur deep in a tree than at nodes near the root. Finding multiple feasible solutions early in B&B is important because it tightens the bound (in the above example, it increases the bound), and therefore enables termination of branching at more nodes (and therefore decreases computation). One can see that this will be the case if we consider the bound before we find a feasible solution: the bound is infinite! (-∞ for maximization problems and +∞ for minimization problems.)
Branching variable selection

For high-dimensional IPs, is there any benefit to selecting one non-integer variable over another when branching?

In our problem, except initially, there was never a decision to make in this way, because there was never more than one non-integer variable.

This is a rich research question. For specific problems, you can pre-specify an ordering of the variables required to be integer. Good orderings become apparent based on experience with running the algorithm or based on physical understanding, e.g., branch on the largest unit first.
Mixed integer problems

Our example was a pure integer problem. What about a MIP?

IP2:
max ζ = 17·x1 + 12·x2
s.t.  10·x1 + 7·x2 ≤ 40
      x1 + x2 ≤ 5
      x1, x2 ≥ 0
      x1 integer

The solution to IP2 is obtained as soon as we solve P1 and P2: x1 is integer in both, and the better of the two is the P2 solution, x1 = 2.0, x2 = 2.857, ζ = 68.286.

Conclusion: we can easily solve a MIP within our LP-relaxation branch-and-bound scheme by simply allowing the non-integer (continuous) variables to remain relaxed.
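A sketch of the MIP variant using scipy.optimize.milp (assumed available, SciPy 1.9+): marking only x1 as integer in the integrality array reproduces the IP2 solution, which is exactly the P2 LP solution.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# IP2: max 17*x1 + 12*x2
#   s.t. 10*x1 + 7*x2 <= 40, x1 + x2 <= 5, x1, x2 >= 0, only x1 integer
res = milp(c=[-17, -12],                                 # milp minimizes
           constraints=LinearConstraint([[10, 7], [1, 1]], ub=[40, 5]),
           integrality=[1, 0],                           # x1 integer, x2 continuous
           bounds=Bounds(0, np.inf))
x1, x2 = res.x
print(round(x1, 3), round(x2, 3), round(-res.fun, 3))   # 2.0 2.857 68.286
```

Relaxed (continuous) variables are simply those with a 0 in the integrality array, mirroring the conclusion above.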
