To solve the equivalent formulation of \eqref{prob:01} presented in \eqref{eq:minmax}, we propose the inexact ALM (iALM), detailed in Algorithm~\ref{Algo:2}.
At the $k^{\text{th}}$ iteration, Step 2 of Algorithm~\ref{Algo:2} calls a solver that finds an approximate stationary point of the augmented Lagrangian $\L_{\b_k}(\cdot,y_k)$ with accuracy $\epsilon_{k+1}$, and this accuracy gradually improves in a controlled fashion, i.e., the tolerances $\{\epsilon_k\}_k$ decrease as $k$ grows.
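For concreteness, one standard way to formalize this approximate stationarity (stated here as an illustration; the precise criterion is the one specified in Algorithm~\ref{Algo:2}) is the condition
\begin{equation*}
\operatorname{dist}\left( -\nabla_x \L_{\b_k}(x_{k+1},y_k),\, \partial g(x_{k+1}) \right) \le \epsilon_{k+1},
\end{equation*}
where $\operatorname{dist}(\cdot,S)$ denotes the Euclidean distance to the set $S$ and $\partial g$ is the subdifferential of $g$; when $g=0$, this reduces to $\|\nabla_x \L_{\b_k}(x_{k+1},y_k)\| \le \epsilon_{k+1}$.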
The increasing sequence of penalty weights $\{\b_k\}_k$ and the dual update (Steps 4 and 5) are responsible for gradually enforcing the constraints in~\eqref{prob:01}. The appropriate choice for $\{\b_k\}_k$ will be specified in Sections \ref{sec:first-o-opt} and \ref{sec:second-o-opt}.
The particular choice of the dual step sizes $\{\s_k\}_k$ in Algorithm~\ref{Algo:2} ensures that the dual variable $y_k$ remains bounded; see~\cite{bertsekas1976penalty} for a precedent in the ALM literature where a similar dual step size is considered.
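As a rough illustration of the overall loop (inexact primal solve, growing penalty, damped dual update), the following self-contained Python sketch runs iALM on a hypothetical toy instance with $g=0$. The schedules for $\b_k$, $\epsilon_k$, and $\s_k$ below are assumptions chosen for illustration, not the specific choices analyzed in Sections \ref{sec:first-o-opt} and \ref{sec:second-o-opt}.

```python
import numpy as np

# Hypothetical toy instance (for illustration only): with g = 0, minimize
#   f(x) = ||x - c||^2 / 2   subject to   A(x) = a^T x - 1 = 0.
# The exact solution is x* = (1.5, -0.5) with multiplier y* = 0.5.
c = np.array([2.0, 0.0])
a = np.array([1.0, 1.0])

def A(x):
    return np.array([a @ x - 1.0])

def aug_lag_grad(x, y, beta):
    # Gradient of L_beta(x, y) = f(x) + <A(x), y> + (beta/2) ||A(x)||^2.
    return (x - c) + (y[0] + beta * A(x)[0]) * a

def ialm(x, y, beta1=1.0, sigma1=1.0, outer_iters=30):
    for k in range(1, outer_iters + 1):
        beta = beta1 * np.sqrt(k)   # growing penalty weight (assumed schedule)
        eps = 1.0 / k ** 2          # shrinking inner tolerance (assumed schedule)
        # Step 2: inner solver, here plain gradient descent run until
        # the gradient norm drops below eps (eps-stationarity, since g = 0).
        step = 1.0 / (1.0 + beta * (a @ a))  # ~ inverse Lipschitz constant
        for _ in range(100_000):
            g = aug_lag_grad(x, y, beta)
            if np.linalg.norm(g) <= eps:
                break
            x = x - step * g
        # Steps 4-5: dual ascent with a step size damped by the feasibility
        # residual, so the dual increments (and hence y) remain bounded.
        r = A(x)
        sigma = sigma1 * min(1.0, 1.0 / ((k + 1) * max(np.linalg.norm(r), 1e-12)))
        y = y + sigma * r
    return x, y

x, y = ialm(np.zeros(2), np.zeros(1))
```

On this strongly convex instance the iterates approach the constrained minimizer and the multiplier estimate approaches $y^*$; the damping in `sigma` only becomes active when the feasibility residual is large.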
%Step 3 of Algorithm~\ref{Algo:2} removes pathological cases with divergent iterates. As an example, suppose that $g=\delta_\mathcal{X}$ in \eqref{prob:01} is the indicator function for a bounded convex set $\mathcal{X}\subset \RR^d$ and take $\rho' > \max_{x\in \mathcal{X}} \|x\|$. Then every iterate of Algorithm~\ref{Algo:2} belongs to $\mathcal{X}$, and hence automatically satisfies $\|x_k\|\le \rho'$, without the need to execute Step 3.
Step 3 of Algorithm~\ref{Algo:2} is needed only for second-order stationarity, namely, if $g=0$ in \eqref{prob:01}.
%\item \textbf{(Control)} If necessary, project $x_{k+1}$ to ensure that $\|x_{k+1}\|\le \rho'$.\\