
The document summarizes the Alternating Direction Method of Multipliers (ADMM) algorithm. It discusses how ADMM combines benefits of dual decomposition and augmented Lagrangian methods. It also describes how ADMM can solve large optimization problems by decomposing them into smaller subproblems that can be solved in parallel. The document outlines the ADMM algorithm and states that under certain conditions, ADMM is guaranteed to converge to the optimal solution.


Alternating Direction Method of Multipliers

Simone Graffione

October 2022
Goals

▶ Robust control for large-scale optimization problems

▶ Decentralized control of multiple systems with little information exchange

ADMM blends the benefits of dual decomposition and the augmented Lagrangian method
Dual decomposition

▶ Suppose f is separable:

  f(x) = f_1(x_1) + ··· + f_N(x_N),    x = (x_1, ..., x_N)

▶ the Lagrangian is then separable in x:

  L(x, y) = f(x) + y^T (Ax − b) = L_1(x_1, y) + ··· + L_N(x_N, y)

  with  L_i(x_i, y) = f_i(x_i) + y^T A_i x_i − (1/N) y^T b
▶ x-minimization in dual ascent therefore splits into N separate minimizations

  x_i^{k+1} := argmin_{x_i} L_i(x_i, y^k)
Dual decomposition algorithm

  x_i^{k+1} := argmin_{x_i} L_i(x_i, y^k),    i = 1, ..., N

  y^{k+1} := y^k + α^k ( Σ_{i=1}^N A_i x_i^{k+1} − b )

▶ Solves a large problem:

  - by solving smaller subproblems (possibly in parallel)
  - the dual-variable update provides the coordination

▶ Cons: may be slow and requires strong assumptions on the problem
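The two steps above can be sketched numerically; the toy problem, the closed-form subproblem solutions, and the step size α below are illustrative assumptions, not from the slides.

```python
import numpy as np

# Toy separable problem (illustrative assumption):
#   minimize (x1 - 1)^2 + (x2 - 2)^2   subject to  x1 + x2 = 5
# Here A_i = 1 for each block, so each x_i-subproblem
#   argmin_{x_i} (x_i - c_i)^2 + y * x_i
# has the closed form x_i = c_i - y/2.
c = np.array([1.0, 2.0])
b = 5.0
y = 0.0
alpha = 0.5          # dual step size, assumed small enough to converge

for k in range(200):
    x = c - y / 2.0                  # N independent (parallelizable) x-updates
    y = y + alpha * (x.sum() - b)    # dual update coordinates the subproblems

print(x, x.sum())  # x -> (2, 3), so the constraint x1 + x2 = 5 is met
```

Each x-update touches only its own block of variables, which is what permits the parallel, decentralized implementation mentioned above.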
Method of Multipliers

▶ a method to robustify dual ascent


▶ use the augmented Lagrangian, with ρ > 0:

  L_ρ(x, y) = f(x) + y^T (Ax − b) + (ρ/2) ||Ax − b||_2^2

▶ method of multipliers:

  x^{k+1} := argmin_x L_ρ(x, y^k)
  y^{k+1} := y^k + ρ (A x^{k+1} − b)

  (note the specific dual update step length ρ)
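A minimal method-of-multipliers sketch follows; the problem data and the choice ρ = 1 are illustrative assumptions, not from the slides.

```python
import numpy as np

# Toy equality-constrained problem (illustrative assumption):
#   minimize ||x - c||^2   subject to  A x = b
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([5.0])
rho = 1.0
y = np.zeros(1)

for k in range(100):
    # x-update: set grad_x L_rho = 2(x - c) + A^T y + rho A^T (A x - b) = 0,
    # a linear system because f is quadratic
    x = np.linalg.solve(2 * np.eye(2) + rho * A.T @ A,
                        2 * c - A.T @ y + rho * A.T @ b)
    # dual update with the specific step length rho
    y = y + rho * (A @ x - b)

print(x)  # -> (2, 3), the constrained optimum
```

Unlike plain dual ascent, the quadratic penalty keeps the x-update well conditioned even when f alone is not strictly convex.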


Alternating direction method of multipliers

▶ a method:
  - with the good robustness of the method of multipliers
  - which can support decomposition
▶ "robust dual decomposition" or "decomposable method of multipliers"
Alternating direction method of multipliers

▶ ADMM problem form (with f, g convex):

  minimize    f(x) + g(z)
  subject to  Ax + Bz = c

  - two sets of variables, with separable objective

▶ Augmented Lagrangian:

  L_ρ(x, z, y) = f(x) + g(z) + y^T (Ax + Bz − c) + (ρ/2) ||Ax + Bz − c||_2^2

  where ρ > 0 is the penalty parameter

A more convenient, easier-to-read form, called the scaled form, is:

  L_ρ(x, z, y) = f(x) + g(z) + (ρ/2) ||Ax + Bz − c + (1/ρ) y||_2^2 − (1/(2ρ)) ||y||_2^2

The last term does not depend on x or z, so it can be dropped in the x- and z-minimization steps.
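The scaled form follows by completing the square in the residual r = Ax + Bz − c:

```latex
y^T r + \frac{\rho}{2}\|r\|_2^2
  = \frac{\rho}{2}\Big(\|r\|_2^2 + \tfrac{2}{\rho}\, y^T r\Big)
  = \frac{\rho}{2}\Big\| r + \tfrac{1}{\rho}\, y \Big\|_2^2 - \frac{1}{2\rho}\|y\|_2^2
```

so minimizing over x or z with the scaled penalty term gives exactly the same updates as the unscaled augmented Lagrangian.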
Alternating direction method of multipliers

▶ ADMM algorithm:

  x^{k+1} := argmin_x L_ρ(x, z^k, y^k)
  z^{k+1} := argmin_z L_ρ(x^{k+1}, z, y^k)
  y^{k+1} := y^k + ρ (A x^{k+1} + B z^{k+1} − c)
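The three updates can be sketched on a tiny consensus problem; the problem data and ρ = 1 are illustrative assumptions, and the closed forms come from setting the gradients of L_ρ to zero.

```python
# Toy ADMM instance (illustrative assumption, not from the slides):
#   minimize (x - 1)^2 + (z - 2)^2   subject to  x - z = 0
# i.e. f(x) = (x - 1)^2, g(z) = (z - 2)^2, A = 1, B = -1, c = 0.
rho = 1.0
x = z = y = 0.0

for k in range(200):
    # x-update: argmin_x (x-1)^2 + y*(x - z) + (rho/2)*(x - z)^2
    x = (2.0 + rho * z - y) / (2.0 + rho)
    # z-update: argmin_z (z-2)^2 - y*(x - z) + (rho/2)*(x - z)^2  (x fixed)
    z = (4.0 + y + rho * x) / (2.0 + rho)
    # dual update with step length rho
    y = y + rho * (x - z)

print(x, z)  # both -> 1.5, the minimizer of (w-1)^2 + (w-2)^2
```

Note that x and z are updated sequentially (alternating directions), not jointly as in the method of multipliers; this is what lets each step use only its own objective term.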
Convergence of ADMM

▶ Assume:
  - f, g convex, closed, proper
  - L_0 has a saddle point
▶ then ADMM converges:
  - iterates approach feasibility: A x^k + B z^k − c → 0
  - the objective approaches the optimal value: f(x^k) + g(z^k) → p*
Exercise

▶ Minimization of two cost functions with a common variable:

  minimize  x_1^2 + x_2^2 + 2 x_1 − 3      minimize  x_3^2 − x_3 + 10

  subject to  x_1 + 2 x_2 = 10
              2 x_1 + x_3 = 0
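One possible ADMM treatment of this exercise, sketched in Python: take x = (x_1, x_2) and z = x_3, so f(x) = x_1^2 + x_2^2 + 2 x_1 − 3, g(z) = z^2 − z + 10, and the constraints read A x + B z = c. The choice ρ = 1 and the iteration count are illustrative assumptions; both minimization steps reduce to linear solves because f and g are quadratic.

```python
import numpy as np

# Exercise data: x = (x1, x2), z = x3, constraints A x + B z = c
A = np.array([[1.0, 2.0],
              [2.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
c = np.array([10.0, 0.0])
q = np.array([2.0, 0.0])   # linear term of f(x) = x^T x + q^T x - 3
rho = 1.0

x = np.zeros(2)
z = np.zeros(1)
y = np.zeros(2)

for k in range(3000):
    # x-update: grad_x L_rho = 2x + q + A^T y + rho A^T (A x + B z - c) = 0
    x = np.linalg.solve(2 * np.eye(2) + rho * A.T @ A,
                        -q - A.T @ y + rho * A.T @ (c - B @ z))
    # z-update: grad_z L_rho = 2z - 1 + B^T y + rho B^T (A x + B z - c) = 0
    z = np.linalg.solve(2 * np.eye(1) + rho * B.T @ B,
                        np.array([1.0]) - B.T @ y + rho * B.T @ (c - A @ x))
    # dual update
    y = y + rho * (A @ x + B @ z - c)

print(x, z)  # x1 -> 2/21, x2 -> 104/21, x3 -> -4/21
```

The limit point agrees with the KKT solution of the coupled problem, which can be checked by hand from the stationarity and constraint equations.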
