To solve the formulation presented in \eqref{eq:minmax}, we propose the inexact ALM (iALM), detailed in Algorithm~\ref{Algo:2}.
At iteration $k$, Step 2 of Algorithm~\ref{Algo:2} calls a solver that finds an approximate stationary point of the augmented Lagrangian $\L_{\b_k}(\cdot,y_k)$ to an accuracy of $\epsilon_{k+1}$, and this accuracy gradually improves (that is, $\epsilon_{k+1}$ decreases) in a controlled fashion.
The increasing sequence of penalty weights $\{\b_k\}_k$ and the dual update (Steps 4 and 5) gradually enforce the constraints in~\eqref{prob:01}. As we will see in the convergence analysis, the particular choice of the dual step size $\s_k$ in Algorithm~\ref{Algo:2} ensures that the dual variable $y_k$ remains bounded; see~\cite{bertsekas1976penalty} for a precedent in the ALM literature where a similar choice for $\s_k$ is considered.
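To make this concrete, note that the dual update takes the standard ALM form $y_{k+1} = y_k + \s_{k+1} A(x_{k+1})$. One simple sufficient condition for the boundedness of $\{y_k\}_k$, given here only for illustration, is to damp the step size so that the increments are summable:
\begin{equation*}
\s_{k+1}\,\|A(x_{k+1})\| \le \frac{c}{k^2} \quad \text{for some } c>0
\quad\Longrightarrow\quad
\|y_k\| \le \|y_0\| + c\sum_{j\ge 1}\frac{1}{j^2} < \infty.
\end{equation*}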
Step 3 of Algorithm~\ref{Algo:2} removes pathological cases with divergent iterates. As an example, suppose that $g=\delta_\mathcal{X}$ in \eqref{prob:01} is the indicator function for a bounded convex set $\mathcal{X}\subset\RR^d$ and take $\rho' > \max_{x\in\mathcal{X}} \|x\|$. Then, for sufficiently large $k$, it is not difficult to verify that all the iterates of Algorithm~\ref{Algo:2} automatically satisfy $\|x_k\|\le\rho'$ without the need to execute Step 3.
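The outer loop described above can be sketched in a few lines. In the sketch below, the toy objective and constraint, the gradient-descent subsolver, the tolerance schedule, and the particular damping rule for the dual step size are all illustrative assumptions, not the exact components of Algorithm~\ref{Algo:2}:

```python
# Illustrative sketch of the inexact ALM outer loop on a toy problem:
#   min ||x||^2  subject to  a(x) = x1 + x2 - 1 = 0.
# The subsolver, tolerance schedule, and sigma damping rule below are
# placeholder choices, not the exact rules of Algorithm 2.

def a(x):                        # scalar equality constraint
    return x[0] + x[1] - 1.0

def grad_aug_lag(x, y, beta):    # gradient of L_beta(x,y) = f + y*a + (beta/2)*a^2
    r = y + beta * a(x)
    return [2.0 * x[0] + r, 2.0 * x[1] + r]

def subsolver(x, y, beta, eps, max_iter=100000):
    """Gradient descent until the gradient norm drops below eps."""
    step = 1.0 / (2.0 + 2.0 * beta)          # 1 / Lipschitz constant of the gradient
    for _ in range(max_iter):
        g = grad_aug_lag(x, y, beta)
        if (g[0] ** 2 + g[1] ** 2) ** 0.5 <= eps:
            break
        x = [x[0] - step * g[0], x[1] - step * g[1]]
    return x

def ialm(x, y=0.0, sigma1=1.0, outer_iters=15):
    beta = 1.0
    for k in range(1, outer_iters + 1):
        eps = 1.0 / beta                      # Step 2: tolerance eps_{k+1} -> 0
        x = subsolver(x, y, beta, eps)        # inexact primal solve
        # Steps 4-5: dual ascent, with the step damped so that y stays
        # bounded even when beta * |a(x)| is large (one possible rule).
        sigma = min(sigma1, sigma1 / (beta * abs(a(x)) + 1e-12))
        y = y + sigma * a(x)
        beta *= 2.0                           # increasing penalty sequence beta_k
    return x, y

x, y = ialm([0.0, 0.0])
print(x)  # approaches the solution (0.5, 0.5)
```

As the penalty $\b_k$ grows, the constraint violation shrinks while the damped dual step keeps $y_k$ bounded, mirroring the roles of Steps 2, 4, and 5 in the discussion above.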
\begin{algorithm}[h!]
\begin{algorithmic}
\STATE\textbf{Input:} $\rho,\rho',\rho''>0$, a non-decreasing, positive, unbounded sequence $\{\b_k\}_{k\ge1}$, and stopping thresholds $\tau_f$ and $\tau_s$.
\STATE\textbf{Initialization:} $x_{1}\in\RR^d$ such that $\|A(x_1)\|\le\rho$ and $\|x_1\|\le\rho'$, $y_0\in\RR^m$, and an initial dual step size $\s_1$.