Since $\gamma_k$ is determined by the line search subroutine in Lemma \ref{lem:eval Lipsc cte}, we may now apply Lemma \ref{lem:11} for every iteration in this interval to find that
Note that \eqref{eq:long chain} bounds the gradient mapping with the feasibility gap. We next find a converse, thus bounding the feasibility gap with the gradient mapping. To that end, let $T_C(u)$ and $P_{T_C(u)}$ denote the tangent cone of $C$ at $u\in C$ and the orthogonal projection onto this cone, respectively. Likewise, let $N_C(u)$ and $P_{N_{C}(u)}$ denote the normal cone of $C$ at $u$ and the corresponding orthogonal projection.
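It is worth recording the standard fact that underlies such converse bounds. When $C$ is closed and convex (an assumption made only for this remark), $N_C(u)$ is the polar cone of $T_C(u)$, and the Moreau decomposition applies: every vector $z$ splits orthogonally as
\begin{align*}
z = P_{T_C(u)}(z) + P_{N_C(u)}(z),
\qquad
\left\langle P_{T_C(u)}(z), P_{N_C(u)}(z) \right\rangle = 0,
\end{align*}
so that in particular $\|z\|^2 = \|P_{T_C(u)}(z)\|^2 + \|P_{N_C(u)}(z)\|^2$.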
Roughly speaking, \eqref{eq:bnd on Ak final} states that the feasibility gap is itself bounded by the gradient mapping. To apply Lemma \ref{lem:bnd bnd Ak}, let us assume that \eqref{eq:good neighb} holds. Lemma \ref{lem:bnd bnd Ak} is then in force, and we may substitute \eqref{eq:bnd on Ak final} back into \eqref{eq:long chain} to find that
In order to interpret (\ref{eq:final bnd on G},\ref{eq:feas bnd semi final},\ref{eq:suff cnd on rho}), we next estimate $B_{K}$ in \eqref{eq:defn of B}. To that end, let us first control the growth of the dual sequence $\{y_k\}_k$. Recalling \eqref{eq:y update recall}, for every $k\in [k_0:k_1+1]$ we write that
In order to interpret (\ref{eq:final bnd on G},\ref{eq:feas bnd semi final},\ref{eq:suff cnd on rho}), it remains to estimate $\mu'$ in \eqref{eq:defn of muprime}. To that end, we first derive a lower bound on the step sizes $\{\gamma_k\}_k$. To invoke \eqref{eq:moreover}, we in turn need to gauge how smooth the augmented Lagrangian $\mathcal{L}_{\beta_k}(\cdot,y_k)$ is. For every $k\in [k_0:k_1+1]$, note that
\qquad \text{(see (\ref{eq:Bk evaluated},\ref{eq:mup to mupp}))}
\end{align}
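For intuition, suppose the augmented Lagrangian takes the standard form
$\mathcal{L}_\beta(u,y) = f(u) + \langle Au - b, y\rangle + \tfrac{\beta}{2}\|Au-b\|^2$
(an assumption made only in this remark; the precise definition appears earlier in the paper) and that $\nabla f$ is $\lambda_f$-Lipschitz. Then
\begin{align*}
\nabla_u \mathcal{L}_\beta(u,y) = \nabla f(u) + A^\top y + \beta A^\top (Au - b),
\end{align*}
so $\nabla_u \mathcal{L}_\beta(\cdot,y)$ is Lipschitz with constant at most $\lambda_f + \beta \|A\|^2$. A backtracking line search then returns step sizes $\gamma_k \gtrsim (\lambda_f + \beta_k \|A\|^2)^{-1}$, which is the mechanism behind the lower bound on $\{\gamma_k\}_k$ obtained below.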
Moreover, we can simplify the assumption in \eqref{eq:suff cnd on rho}. To be specific, thanks to (\ref{eq:Bk evaluated},\ref{eq:mup to mupp}), we can replace \eqref{eq:suff cnd on rho} with the assumption
The lower bound on the step sizes in \eqref{eq:low bnd on gammas} has two other consequences. First, we find that \eqref{eq:to be used later on} automatically holds if $k_0$ is sufficiently large. Second, it allows us to improve \eqref{eq:bnd on gamma G} by writing that
where the last line holds if there exists $c>0$ for which $k_1-k_0+1 \ge c k_1$.
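As a quick sanity check on the last condition, note that for any fixed $k_0$ the requirement $k_1-k_0+1 \ge c k_1$ holds with $c=1/2$ as soon as $k_1 \ge 2(k_0-1)$. The following minimal script (the function name is ours, purely illustrative) verifies this numerically:

```python
# Illustrative check: for fixed k0, the window condition
#   k1 - k0 + 1 >= c * k1
# holds with c = 1/2 once k1 >= 2 * (k0 - 1).
def window_condition_holds(k0, k1, c=0.5):
    """Return True if the interval [k0, k1] is a c-fraction of [1, k1]."""
    return k1 - k0 + 1 >= c * k1

k0 = 10
threshold = 2 * (k0 - 1)  # = 18 here
assert all(window_condition_holds(k0, k1) for k1 in range(threshold, 1000))
assert not window_condition_holds(k0, threshold - 1)
```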
Note that the $\log$ factors in the convergence proof might be shaved off (up to a constant factor) by choosing a more conservative step size for the dual variable, which automatically bounds its norm. A possible choice is
\begin{align}\label{eq:yk bounded}
\sigma_k \geq \max\left(\beta \sqrt{k},\; \beta k \log^2(k+1) \|Au_k - b\|\right),