\begin{itemize}
  \item The Lagrangian is:
  \[ \mathcal{L}(x, y, \lambda) = x + y + \lambda(x^2 + y^2 - 1) \]
  \item Taking partial derivatives and setting them equal to zero:
  \[ \frac{\partial \mathcal{L}}{\partial x} = 1 + 2\lambda x = 0 \]
  \[ \frac{\partial \mathcal{L}}{\partial y} = 1 + 2\lambda y = 0 \]
  \[ \frac{\partial \mathcal{L}}{\partial \lambda} = x^2 + y^2 - 1 = 0 \]
  \item The first two equations give $x = y = -\frac{1}{2\lambda}$; substituting into the constraint gives $2x^2 = 1$, so the stationary points are $x = y = \pm\frac{1}{\sqrt{2}}$, and the solution is $x = \frac{1}{\sqrt{2}}, y = \frac{1}{\sqrt{2}}$.
\end{itemize}
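A quick symbolic check of this stationarity system (a minimal sketch, assuming SymPy is available):
\begin{verbatim}
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
L = x + y + lam*(x**2 + y**2 - 1)           # the Lagrangian above
eqs = [sp.diff(L, v) for v in (x, y, lam)]  # stationarity conditions
print(sp.solve(eqs, [x, y, lam], dict=True))
# two stationary points: x = y = -1/sqrt(2) and x = y = 1/sqrt(2)
\end{verbatim}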
\section{Karush-Kuhn-Tucker (KKT) Conditions (Nonlinear Constrained Optimization)}
\textbf{Formula}: For a constrained optimization problem with inequality constraints $g_i(x) \leq 0$, the KKT conditions are:
\begin{enumerate}
  \item \textbf{Primal feasibility}: $g_i(x) \leq 0$
  \item \textbf{Dual feasibility}: $\lambda_i \geq 0$
  \item \textbf{Stationarity}: $\nabla f(x) + \sum_i \lambda_i \nabla g_i(x) = 0$
  \item \textbf{Complementary slackness}: $\lambda_i g_i(x) = 0$
\end{enumerate}
\textbf{Example}: Minimize $f(x) = x^2$ subject to $x \geq 1$, i.e. $g(x) = 1 - x \leq 0$.
\begin{itemize}
  \item The KKT conditions give:
  \begin{itemize}
    \item Primal feasibility: $1 - x \leq 0$
    \item Dual feasibility: $\lambda \geq 0$
    \item Stationarity: $2x - \lambda = 0$
    \item Complementary slackness: $\lambda(1 - x) = 0$
  \end{itemize}
  \item Stationarity forces $\lambda = 2x$; since $x \geq 1$, we have $\lambda > 0$, so complementary slackness requires $x = 1$. The optimal solution is $x = 1$ with $\lambda = 2$.
\end{itemize}
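The same system can be solved mechanically by combining stationarity with complementary slackness and filtering by feasibility (a minimal sketch, assuming SymPy is available):
\begin{verbatim}
import sympy as sp

x, lam = sp.symbols('x lambda', real=True)
stationarity = sp.Eq(2*x - lam, 0)      # d/dx [x^2 + lam*(1 - x)] = 0
comp_slack   = sp.Eq(lam*(1 - x), 0)
sols = sp.solve([stationarity, comp_slack], [x, lam], dict=True)
# keep only primal-feasible (x >= 1) and dual-feasible (lam >= 0) points
print([s for s in sols if s[x] >= 1 and s[lam] >= 0])
# [{x: 1, lambda: 2}]
\end{verbatim}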
\section{Convex Optimization Theorem}
\textbf{Formula}: If $f(x)$ is a convex function and $X$ is a convex feasible region, then any local minimum is also a global minimum.
\textbf{Example}: Minimize $f(x) = x^2$ subject to $x \in \mathbb{R}$.
\begin{itemize}
  \item Since $f(x) = x^2$ is convex (the second derivative $f''(x) = 2 > 0$), and the feasible region is $\mathbb{R}$, the global minimum occurs at $x = 0$.
\end{itemize}
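Both facts can be checked symbolically (a minimal sketch, assuming SymPy is available):
\begin{verbatim}
import sympy as sp

x = sp.symbols('x', real=True)
f = x**2
print(sp.diff(f, x, 2))            # 2: positive everywhere, so f is convex
print(sp.solve(sp.diff(f, x), x))  # [0]: the unique stationary point,
                                   # hence the global minimum
\end{verbatim}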
\section{Duality Theorem (Strong Duality)}
\textbf{Formula}: For a convex optimization problem, the dual problem provides a lower bound on the primal problem. \textbf{Strong duality} means that the optimal value of the dual problem equals the optimal value of the primal problem.
If the primal problem is:
\[ \min_x f(x) \quad \text{subject to} \quad g(x) \leq 0 \]
then the dual problem is:
\[ \max_{\lambda \geq 0} \, \min_x \, L(x, \lambda) \]
where $L(x, \lambda) = f(x) + \lambda^\top g(x)$ is the Lagrangian.
\textbf{Example}: For a linear programming problem, the duality theorem
states that solving the dual problem gives the same optimal value as solving the
primal problem.
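The following sketch illustrates strong duality numerically on a small LP (the problem data here are made up for illustration; SciPy's \texttt{linprog} is assumed to be available):
\begin{verbatim}
from scipy.optimize import linprog

# Primal: min x1 + 2*x2  s.t.  x1 + x2 >= 3,  x1, x2 >= 0
primal = linprog(c=[1, 2], A_ub=[[-1, -1]], b_ub=[-3])

# Dual:   max 3*y  s.t.  y <= 1,  y <= 2,  y >= 0
# (linprog minimizes, so we minimize -3*y)
dual = linprog(c=[-3], A_ub=[[1], [1]], b_ub=[1, 2])

print(primal.fun, -dual.fun)  # both 3.0: primal and dual optima coincide
\end{verbatim}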
\section{Subdifferential Theorem (Subgradient)}
\textbf{Formula}: For a non-smooth function $f(x)$, the subdifferential at a point $x$ is the set of all vectors $g$ such that:
\[ f(y) \geq f(x) + g^\top (y - x) \quad \forall y \]
where each such $g$ is called a \textbf{subgradient} of $f(x)$ at $x$.
\textbf{Example}: For the function $f(x) = |x|$, the subdifferential at $x = 0$ is the interval $[-1, 1]$, because the function has a sharp corner at $x = 0$: every line through the origin with slope in $[-1, 1]$ lies below $|x|$.
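A subgradient can stand in for the gradient when minimizing non-smooth functions; a minimal subgradient-method sketch for $f(x) = |x|$ (the starting point and step-size schedule are illustrative choices):
\begin{verbatim}
def subgrad_abs(x):
    if x > 0: return 1.0
    if x < 0: return -1.0
    return 0.0   # at the kink x = 0, any value in [-1, 1] is valid

x = 2.0
for k in range(1, 101):
    x -= (1.0 / k) * subgrad_abs(x)  # diminishing step size 1/k
print(x)  # oscillates with shrinking amplitude toward the minimizer x = 0
\end{verbatim}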
\section{Frank-Wolfe Algorithm (Conditional Gradient Method)}
\textbf{Formula}: At each step, the Frank-Wolfe algorithm minimizes a linear approximation of the objective over the feasible region and then moves toward that minimizer:
\[ s_k = \operatorname{argmin}_{s \in X} \, \nabla f(x_k)^\top s, \qquad x_{k+1} = x_k + \gamma_k (s_k - x_k) \]
where $X$ is the feasible region and $\gamma_k \in [0, 1]$ is a step size (commonly $\gamma_k = \frac{2}{k+2}$).
\textbf{Example}: For a convex problem like minimizing $f(x) = x^2$ over the interval $[0, 1]$, the Frank-Wolfe method iteratively moves toward the optimal solution by solving linearized versions of the problem, as in the sketch below.
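A minimal Frank-Wolfe sketch for this interval example (the starting point, step-size schedule, and tolerance are illustrative choices):
\begin{verbatim}
def grad(x):                        # f(x) = x^2
    return 2.0 * x

x = 0.8                             # any starting point in [0, 1]
for k in range(100):
    g = grad(x)
    s = 0.0 if g >= 0 else 1.0      # linear minimization oracle over [0, 1]
    if g * (x - s) < 1e-8:          # Frank-Wolfe duality gap as stopping test
        break
    x += (2.0 / (k + 2)) * (s - x)  # step size gamma_k = 2/(k+2)
print(x)  # 0.0, the constrained minimizer
\end{verbatim}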
\section{Gradient Descent and Convergence Theorem}
\textbf{Formula}: The gradient descent algorithm updates the solution by moving in the direction of the negative gradient:
\[ x_{k+1} = x_k - \alpha \nabla f(x_k) \]
where $\alpha$ is the step size.
\textbf{Example}: Minimize $f(x) = x^2$ using gradient descent:
\begin{itemize}
  \item Start with $x_0 = 2$ and choose $\alpha = 0.1$.
  \item Iteratively update $x_k$ until convergence; each step gives $x_{k+1} = x_k - 0.1 \cdot 2x_k = 0.8\, x_k$, so the iterates shrink geometrically toward $x = 0$ (see the sketch below).
\end{itemize}
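A direct implementation of this run (a minimal sketch; the fixed iteration count stands in for a convergence test):
\begin{verbatim}
alpha = 0.1
x = 2.0                      # x_0 = 2
for k in range(50):
    x -= alpha * 2.0 * x     # x_{k+1} = x_k - alpha * f'(x_k), f'(x) = 2x
print(x)                     # about 2.9e-5, converging to the minimum x = 0
\end{verbatim}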
\section{No-Free-Lunch Theorem (Optimization)}
\textbf{Formula}: The No-Free-Lunch Theorem states that, averaged over all possible problems, every optimization algorithm performs equally well: there is no universally best algorithm.
\textbf{Example}: For simple problems, methods like gradient descent work
well, but for non-smooth or discrete problems, algorithms like genetic algorithms
or simulated annealing may perform better.
\end{document}