The problem \eqref{e:fac} covers a variety of problems in semidefinite programming \cite{Burer2005,Burer2003,tu2014practical}.
This is a convex optimization problem. However, convex optimization methods are not a good choice when $d$ is very large. To
overcome this issue, the factorization technique \cite{Burer2005,Burer2003} $x = uu^\top$, which maps the high-dimensional $x$ to a low-dimensional $u$, is useful.
Moreover, the constraint $x\succeq 0$ disappears in the variable $u$. Suppose that one can find $C \subset \RR^{n\times r}$ such that
$x\in C'$ whenever $u\in C$. By defining $L\colon u\mapsto Muu^\top$ and $h\colon u\mapsto h_0(uu^\top)$, we can
write \eqref{e:fac} as a nonconvex optimization problem:
\begin{equation}
\label{e:fac1}
\underset{u\in C, Lu=b} {\text{min}} \; h(u).
\end{equation}
Then it is not hard to check that \eqref{e:fac1} fits into our framework.
\end{example}
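As a quick numerical illustration of the factorization in the example above, the following minimal sketch (with placeholder dimensions; the matrix $M$ and the cost $h_0$ are omitted) checks that $x=uu^\top$ is automatically symmetric, positive semidefinite, and of rank at most $r$:

```python
import numpy as np

# Minimal sketch of the factorization x = u u^T: instead of optimizing
# over an n x n PSD matrix x, one optimizes over an n x r factor u with
# r << n. The constraint x >= 0 (PSD) then holds automatically.
rng = np.random.default_rng(0)
n, r = 50, 3

u = rng.standard_normal((n, r))   # low-dimensional variable
x = u @ u.T                       # induced high-dimensional variable

# x is symmetric and PSD by construction, with rank at most r.
print(np.allclose(x, x.T))                      # symmetric
print(np.linalg.eigvalsh(x).min() >= -1e-8)     # no negative eigenvalue
print(np.linalg.matrix_rank(x))                 # at most r = 3
```

The price paid for this dimension reduction is that the resulting problem \eqref{e:fac1} is nonconvex in $u$, even when \eqref{e:fac} is convex in $x$.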
Various standard numerical methods have been proposed in the literature for problem \eqref{prob:01}, such as
the penalty method, the augmented Lagrangian method, the barrier method, Newton's method, and modified Newton methods.
Let us briefly recall the basic idea of the augmented Lagrangian method \cite{luenberger1984linear}. For simplicity, take $C=\RR^d$. Define
where $u^+ = P_{C}(u-\gamma \nabla F_{\beta}(u;\dot{y}))$.
\end{lemma}
\begin{proof} For a fixed $\dot{y}$, the function $u\mapsto h(u)+F_{\beta}(u;\dot{y})$ is $(\mathbb{L}_h+ \mathbb{L}_{\beta})$-Lipschitz. Hence, the result follows from
which, together with \eqref{conkhi}, shows that \eqref{e:inclu1} is satisfied.
\end{proof}
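The update $u^+ = P_{C}(u-\gamma \nabla F_{\beta}(u;\dot{y}))$ appearing in the lemma above can be sketched numerically. The following is a minimal illustration only: the penalty $F_{\beta}$ is given an assumed quadratic form, and $C$ is taken to be a box so that $P_C$ is a simple clipping; all data are placeholders.

```python
import numpy as np

# Sketch of the projected-gradient step  u+ = P_C(u - gamma * grad F_beta(u; y)),
# with the placeholder choice
#   F_beta(u; y) = 0.5||u||^2 + <y, Lu - b> + (beta/2)||Lu - b||^2
# and C = [-1, 1]^d, so that P_C is coordinate-wise clipping.
rng = np.random.default_rng(1)
d, m = 5, 3
L = rng.standard_normal((m, d))
b = rng.standard_normal(m)
beta = 10.0
y = np.zeros(m)

# Step size below 1 / Lipschitz(grad F_beta) to keep the iteration stable.
gamma = 0.9 / (1.0 + beta * np.linalg.norm(L, 2) ** 2)

def grad_F(u):
    return u + L.T @ y + beta * L.T @ (L @ u - b)

def project_box(v, lo=-1.0, hi=1.0):  # P_C for the box C
    return np.clip(v, lo, hi)

u = np.zeros(d)
for _ in range(500):
    u = project_box(u - gamma * grad_F(u))

print(np.all(np.abs(u) <= 1.0))  # every iterate remains feasible for C
```

Since $P_C$ maps into $C$ by definition, every iterate is feasible regardless of the step size; the step-size bound only governs stability of the gradient step itself.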
\subsection{Sufficient conditions}
Sufficient conditions for a local minimum are well studied in the literature \cite{luenberger1984linear,rockafellar2009variational,mordukhovich2006variational,gfrerer2015complete}.
For simplicity, let us assume that $C = \menge{u}{g(u)\leq c}$ for some $\mathcal{C}^2$-function $g\colon\RR^d\to\RR$. In this section we also assume that $h$ and $L$ are $\mathcal{C}^2$-functions.
Let $\overline{u}$ be a point such that $g(\overline{u})=c$ and $\nabla g(\overline{u})\neq 0$. Then, \cite[Proposition 10.3]{rockafellar2009variational} implies that
Now suppose that the first-order optimality condition \eqref{e:inclu1} is satisfied for $\overline{u}$. Then there exist $\mu \geq 0$ and $\dot{y} \in\RR^m$ such that
which implies that $b_0>-\infty$ whenever $(u_k)_{k\in\NN}$ or $\dom(f)$ is bounded.
\end{remark}
\subsection{Convergence}
In view of Lemma \ref{l:solv}, we need to estimate the gradient mapping $\|G_{\beta_{k},\gamma_{k}}(u_k,\dot{y}_{k})\|$ as well as the feasibility gap $\|Lu_{k+1}-b\|^2$.
\begin{theorem}
\label{t:1}
Suppose that $b_0 = \inf_{k\in\NN} \mathcal{L}_{\beta_{k}}(u_{k+1},\dot{y}_{k}) > -\infty$ and set $z_0=\sum_{k=1}^\infty \frac{d_k}{(1+k)^\alpha}$. Then the following hold.
which proves \eqref{e:mapp1}. Moreover, \eqref{e:feas1} follows directly from \eqref{e:mc}.
\end{proof}
\begin{corollary} Under the same conditions as in Theorem \ref{t:1}, suppose that
$N\gamma_{\min, N}\to \infty$ and $N/\beta_{\max, N}\to \infty$, where $\gamma_{\min,N}= \min_{1\leq k\leq N}\gamma_k$ and $\beta_{\max,N} = \max_{1\leq k\leq N}\beta_k$. Then
\eqref{e:mapp2} and \eqref{e:feas2} follow directly
from \eqref{e:mapp1} and \eqref{e:feas1}, respectively.
\end{proof}
\subsection{Related works}
To the best of our knowledge, the proposed method is new and differs from existing methods in the literature.
The connection to the augmented Lagrangian method was already discussed in the Introduction.
In the case $h=0$, a modification of
the Chambolle-Pock method is investigated in \cite{Valkonen14}, and a preconditioned ADMM in \cite{Matin17},
where convergence of the iterates is proved under strong assumptions that are not fulfilled in our setting.
%\noindent{\bf Connection to Linearized Alternating Direction Method}.\\
ADMM is a classic method for solving problem \eqref{prob:01} in the case where $L$ is a linear operator and $h$ is zero \cite{gabay1976dual}.
This method is an application of the Douglas-Rachford method to the dual problem \cite{Gabay83}.
One of the main drawbacks of ADMM is the appearance of the term $Lu$ in the update rule of $u_{k+1}$. To overcome this issue, several strategies have been suggested.
The first, proposed in \cite{shefi2014rate} and refined in \cite{banert2016fixing}, is known as the alternating direction proximal method of multipliers.
The second is the linearization technique \cite{lin2011linearized}.
We show here that the update rule of our proposed method
is closely related to that of the linearized alternating direction method \cite{lin2011linearized}. Assume that $h\equiv 0$ and $L$ is a linear operator.
which is a variant of the linearized ADMM \cite{lin2011linearized}.
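To make the connection concrete, the following sketch implements a linearized-ADMM-type iteration for $\min f(u)$ subject to $Lu=b$, with the assumed choice $f = \|\cdot\|_1$; the data, step size, and penalty parameter are placeholders. The point of the linearization is that each primal step applies only $L$ and $L^\top$, avoiding any inversion of $L^\top L$.

```python
import numpy as np

# Linearized-ADMM-type sketch (assumed form) for  min ||u||_1  s.t.  Lu = b.
# The primal step linearizes the augmented-Lagrangian penalty around u_k,
# so it reduces to a prox (soft-threshold) of a gradient step.
rng = np.random.default_rng(2)
m, d = 10, 30
L = rng.standard_normal((m, d))
u_true = np.zeros(d); u_true[:3] = [1.0, -2.0, 0.5]
b = L @ u_true                                    # feasible right-hand side

beta = 1.0
gamma = 0.9 / (beta * np.linalg.norm(L, 2) ** 2)  # step below 1/(beta ||L||^2)

def soft_threshold(v, t):                         # prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

u, y = np.zeros(d), np.zeros(m)
for _ in range(5000):
    grad = L.T @ y + beta * L.T @ (L @ u - b)     # gradient of the smooth part
    u = soft_threshold(u - gamma * grad, gamma)   # linearized primal step
    y = y + beta * (L @ u - b)                    # dual ascent step

print(np.linalg.norm(L @ u - b))  # feasibility gap shrinks toward 0
```

The step-size condition $\gamma\beta\|L\|^2 < 1$ is the standard requirement for this linearized scheme in the convex case \cite{lin2011linearized}.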
%\noindent
%{\bf Connection to ALBUM3 in \cite{bolte2018nonconvex}}\\
Very recently, \cite{bolte2018nonconvex} proposed a framework for solving problem \eqref{prob:01} with $C=\RR^d$.
In particular, a special case of this work, ALBUM3 (Proximal Linearized Alternating Minimization),
is closely related to our method; however, their conditions are verifiable only when $L$ is linear. Moreover,
the update of $\beta_{k}$ in \cite{bolte2018nonconvex} depends on the smallest eigenvalue of $L^*L$. For nonlinear $L$, the application of their method remains a challenge.
%\noindent{\bf Connection to the deflected subgradient method}\\
The deflected subgradient method investigated in \cite{burachik2010deflected} can be used to solve a special case of problem \eqref{prob:01} for
a compact subset $\mathcal{C}$ of $\mathcal{X}$. The basic step of the deflected subgradient method is to solve: given $\beta, v$,
where $\boldsymbol{\sigma}$ is a continuous penalty function, such as $\|\cdot\|$, and $K$ is a bounded linear operator. In general, there is no closed-form expression for $u^*$, since the subproblem does not split $f$, $h$, and $L$ individually. Hence, the deflected subgradient method is hard to implement. This is also a common drawback of the classic penalty method and related works
\cite{gasimov2002augmented,burachik2010primal}.
\section{Numerical experiments}