<spanid="quad-forms"></span><h2>Eliminating quadratic forms<aclass="headerlink"href="#eliminating-quadratic-forms"title="Permalink to this headline">¶</a></h2>
<p>One particular reformulation that we <em>strongly</em> encourage is to eliminate quadratic
forms—that is, functions like <ttclass="docutils literal"><spanclass="pre">sum_square</span></tt>, <ttclass="docutils literal"><spanclass="pre">sum(square(.))</span></tt> or <ttclass="docutils literal"><spanclass="pre">quad_form</span></tt>—whenever
it is possible to construct equivalent models using <ttclass="docutils literal"><spanclass="pre">norm</span></tt> instead.
Our experience tells us that quadratic forms often pose a numerical challenge for
the underlying solvers that CVX uses.</p>
<p>We acknowledge that this advice goes against conventional wisdom: quadratic forms
are the prototypical smooth convex function, while norms are nonsmooth and therefore
unwieldy. But with the <em>conic</em> solvers that CVX uses, this wisdom is <em>exactly backwards</em>.
It is the <em>norm</em> that is best suited for conic formulation and solution. Quadratic forms
are handled by <em>converting</em> them to a conic form—using norms, in fact! This conversion
process poses some interesting scaling challenges. It is better if the modeler can eliminate
the need to perform this conversion.</p>
<p>For a simple example of such a change, consider the objective</p>
<divclass="highlight-none"><divclass="highlight"><pre>minimize( sum_square( A * x - b ) )
</pre></div>
</div>
<p>In exact arithmetic, this is precisely equivalent to</p>
<divclass="highlight-none"><divclass="highlight"><pre>minimize( square_pos( norm( A * x - b ) ) )
</pre></div>
</div>
<p>But equivalence is also preserved if we eliminate the square altogether:</p>
<divclass="highlight-none"><divclass="highlight"><pre>minimize( norm( A * x - b ) )
</pre></div>
</div>
<p>The optimal value of <ttclass="docutils literal"><spanclass="pre">x</span></tt> is identical in all three cases, but this last version is
likely to produce more accurate results. Of course, if you <em>need</em> the value of the
squared norm, you can always recover it by squaring the norm after the fact.</p>
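<p>For instance, a sketch of the recovery step (the data <tt class="docutils literal"><span class="pre">A</span></tt> and <tt class="docutils literal"><span class="pre">b</span></tt> are assumed given; <tt class="docutils literal"><span class="pre">cvx_optval</span></tt> genuinely holds the optimal objective value of the most recently solved model):</p>
<div class="highlight-none"><div class="highlight"><pre>cvx_begin
    variable x( n )
    minimize( norm( A * x - b ) )
cvx_end
sqnorm = cvx_optval ^ 2;   % squared norm, recovered after the fact
</pre></div>
</div>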
<p>Conversions using <ttclass="docutils literal"><spanclass="pre">quad_form</span></tt> can sometimes be a bit more difficult. For instance, consider</p>
<divclass="highlight-none"><divclass="highlight"><pre>quad_form( A * x - b, Q ) <= 1
</pre></div>
</div>
<p>where <ttclass="docutils literal"><spanclass="pre">Q</span></tt> is a positive definite matrix. The equivalent <ttclass="docutils literal"><spanclass="pre">norm</span></tt> version is</p>
<divclass="highlight-none"><divclass="highlight"><pre>norm( Qsqrt * ( A * x - b ) ) <= 1
</pre></div>
</div>
<p>where <ttclass="docutils literal"><spanclass="pre">Qsqrt</span></tt> is an appropriate matrix square root of <ttclass="docutils literal"><spanclass="pre">Q</span></tt>. One option is to compute
the symmetric square root <ttclass="docutils literal"><spanclass="pre">Qsqrt</span><spanclass="pre">=</span><spanclass="pre">sqrtm(Q)</span></tt>, but this computation destroys sparsity.
If <ttclass="docutils literal"><spanclass="pre">Q</span></tt> is sparse, it is likely worth the effort to compute a sparse Cholesky-based
square root:</p>
<div class="highlight-none"><div class="highlight"><pre>[ Qsqrt, p, S ] = chol( Q );   % for sparse Q: Qsqrt' * Qsqrt == S' * Q * S
Qsqrt = Qsqrt * S';            % permute back so that Qsqrt' * Qsqrt == Q
</pre></div>
</div>
<p>Sometimes an effective reformulation requires a practical understanding of what it
means for problems to be equivalent. For instance, suppose we wanted to add an
<span class="math">\(\ell_1\)</span> regularization term to the objective above, weighted by some fixed,
positive weight <tt class="docutils literal"><span class="pre">lambda</span></tt>:</p>
<divclass="highlight-none"><divclass="highlight"><pre>minimize( sum_square( A * x - b ) + lambda * norm( x, 1 ) )
</pre></div>
</div>
<p>In this case, we typically do not care about the <em>specific</em> values of <ttclass="docutils literal"><spanclass="pre">lambda</span></tt>; rather
we are varying it over a range to study the tradeoff between the residual of <ttclass="docutils literal"><spanclass="pre">A*x-b</span></tt>
and the 1-norm of <ttclass="docutils literal"><spanclass="pre">x</span></tt>. The same tradeoff can be studied by examining this modified model:</p>
<divclass="highlight-none"><divclass="highlight"><pre>minimize( norm( A * x - b ) + lambda2 * norm( x, 1 ) )
</pre></div>
</div>
<p>This is not precisely the same model; setting <ttclass="docutils literal"><spanclass="pre">lambda</span></tt> and <ttclass="docutils literal"><spanclass="pre">lambda2</span></tt> to the same value
will not yield identical values of <ttclass="docutils literal"><spanclass="pre">x</span></tt>. But both models <em>do</em> trace the same tradeoff
curve—only the second form is likely to produce more accurate results.</p>
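<p>A sketch of such a tradeoff study follows; the range of weights and the number of sample points are illustrative choices only:</p>
<div class="highlight-none"><div class="highlight"><pre>lambdas = logspace( -3, 3, 25 );
resid   = zeros( size( lambdas ) );
xnorm   = zeros( size( lambdas ) );
for k = 1 : length( lambdas ),
    lambda2 = lambdas( k );
    cvx_begin quiet
        variable x( n )
        minimize( norm( A * x - b ) + lambda2 * norm( x, 1 ) )
    cvx_end
    resid( k ) = norm( A * x - b );   % residual at this weight
    xnorm( k ) = norm( x, 1 );        % 1-norm at this weight
end
% plot( resid, xnorm ) traces out the tradeoff curve
</pre></div>
</div>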
</div>
<divclass="section"id="indexed-dual-variables">
<spanid="indexed-dual"></span><h2>Indexed dual variables<aclass="headerlink"href="#indexed-dual-variables"title="Permalink to this headline">¶</a></h2>
<p>In some models, the <em>number</em> of constraints depends on the model
parameters—not just their sizes. It is straightforward to build such
models in CVX using, say, a Matlab <ttclass="docutils literal"><spanclass="pre">for</span></tt> loop. In order to assign
each of these constraints a separate dual variable, we must find a way
to adjust the number of dual variables as well. For this reason, CVX
supports <em>indexed dual variables</em>. In reality, they are simply standard
Matlab cell arrays whose entries are CVX dual variable objects.</p>
<p>Let us illustrate by example how to declare and use indexed dual
variables. Consider a semidefinite program in which each of <span class="math">\(n\)</span> equality constraints is entered inside a Matlab <tt class="docutils literal"><span class="pre">for</span></tt> loop:</p>
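<p>A minimal sketch of one such model follows; the objective and the right-hand side <tt class="docutils literal"><span class="pre">b</span></tt> are illustrative assumptions, but the indexed dual-variable syntax is genuine CVX:</p>
<div class="highlight-none"><div class="highlight"><pre>cvx_begin
    variable X( n, n ) symmetric
    dual variables y{n}
    minimize( ( n - 1 : -1 : 0 ) * diag( X ) );
    for k = 0 : n - 1,
        sum( diag( X, k ) ) == b( k + 1 ) : y{k+1};
    end
    X == semidefinite( n );
cvx_end
</pre></div>
</div>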
<p>The statement <tt class="docutils literal"><span class="pre">dual</span> <span class="pre">variables</span> <span class="pre">y{n}</span></tt> allocates a cell array of
<span class="math">\(n\)</span> dual variables, and stores the result in the Matlab variable
<tt class="docutils literal"><span class="pre">y</span></tt>. The equality constraint in the <tt class="docutils literal"><span class="pre">for</span></tt> loop has been augmented
with a reference to <ttclass="docutils literal"><spanclass="pre">y{k+1}</span></tt>, so that each constraint is assigned a
separate dual variable. When the <ttclass="docutils literal"><spanclass="pre">cvx_end</span></tt> command is issued, CVX
will compute the optimal values of these dual variables, and deposit
them into an <spanclass="math">\(n\)</span>-element cell array <ttclass="docutils literal"><spanclass="pre">y</span></tt>.</p>
<p>This example admittedly is a bit simplistic. With a bit of careful
arrangement, it is possible to rewrite this model so that the <spanclass="math">\(n\)</span>
equality constraints can be combined into a single vector constraint,
which in turn would require only a single vector dual variable. <aclass="footnote-reference"href="#id4"id="id1">[3]</a>
For a more complex example that is not amenable to such a
simplification, indexed dual variables remain the only option.</p>
<span id="successive"></span><h2>The successive approximation method<a class="headerlink" href="#the-successive-approximation-method" title="Permalink to this headline">¶</a></h2>
<divclass="admonition note">
<pclass="first admonition-title">Note</p>
<pclass="last">If you were referred to this web page by CVX’s warning message: welcome!
Please read this section carefully to fully understand why using
functions like <ttclass="docutils literal"><spanclass="pre">log</span></tt>, <ttclass="docutils literal"><spanclass="pre">exp</span></tt>, etc. within CVX models requires special care.</p>
</div>
<p>Prior to version 1.2, the functions <ttclass="docutils literal"><spanclass="pre">exp</span></tt>, <ttclass="docutils literal"><spanclass="pre">log</span></tt>, <ttclass="docutils literal"><spanclass="pre">log_det</span></tt>,
and other functions from the exponential family could not be used within
CVX. Unfortunately, CVX utilizes symmetric primal/dual solvers that
simply cannot support those functions natively <aclass="footnote-reference"href="#id5"id="id2">[4]</a>, and a variety of factors
prevent us from incorporating other types of solvers into CVX.</p>
<p>Nevertheless, support for these functions was requested quite frequently.
For this reason, we constructed a <em>successive approximation</em> heuristic that
allows the symmetric primal/dual solvers to support the exponential
family of functions. A precise description of the approach is beyond the
scope of this text, but roughly speaking, the method proceeds as follows:</p>
<olclass="arabic simple">
<li>Choose an initial approximation centerpoint <spanclass="math">\(x_c=0\)</span>.</li>
<li>Construct a polynomial approximation for each log/exp/entropy term
which is accurate in the neighborhood of <spanclass="math">\(x_c\)</span>.</li>
<li>Solve this approximate model to obtain its optimal point <spanclass="math">\(\bar{x}\)</span>.</li>
<li>If <span class="math">\(\bar{x}\)</span> satisfies the optimality conditions for
the <em>original</em> model to sufficient precision, exit.</li>
<li>Otherwise, shift <spanclass="math">\(x_c\)</span> towards <spanclass="math">\(\bar{x}\)</span>, and repeat steps 2-5.</li>
</ol>
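<p>In schematic Matlab-like pseudocode (the helper functions, tolerance, and step rule here are hypothetical placeholders, not the actual CVX internals):</p>
<div class="highlight-none"><div class="highlight"><pre>xc = 0;                                   % 1. initial centerpoint
while true,
    % 2. approximate each log/exp/entropy term near xc,
    % 3. then solve the resulting conic model
    xbar = solve_approximate_model( xc ); % hypothetical helper
    % 4. exit if xbar nearly satisfies the original optimality conditions
    if optimality_error( xbar ) <= tol, break; end
    xc = xc + theta * ( xbar - xc );      % 5. shift the centerpoint
end
</pre></div>
</div>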
<p>Again, this is a highly simplified description of the
approach; for instance, we actually employ both the primal and dual
solutions to guide our judgements for shifting <spanclass="math">\(x_c\)</span> and
terminating.</p>
<p>This approach has proven surprisingly effective for many problems.
<em>However, as with many heuristic approaches, it
is not perfect.</em> It will sometimes fail to converge even for problems known to have solutions.
Even when it does converge, it is several times slower than the standard solver,
due to its iterative approach. Therefore, it is best to use it sparingly and carefully.
Here are some specific usage tips:</p>
<ul>
<li><p class="first">First, confirm that the log/exp/entropy terms are truly necessary for your model. In
many cases, an exactly equivalent model can be constructed without them, and that should
always be preferred. For instance, a constraint such as <tt class="docutils literal"><span class="pre">log(x)</span> <span class="pre">&gt;=</span> <span class="pre">c</span></tt>, where <tt class="docutils literal"><span class="pre">c</span></tt> is constant, can be replaced by the linear constraint <tt class="docutils literal"><span class="pre">x</span> <span class="pre">&gt;=</span> <span class="pre">exp(c)</span></tt>, which avoids the exponential family entirely.</p>
</li>
</ul>
<span id="powerfunc"></span><h2>Power functions and p-norms<a class="headerlink" href="#power-functions-and-p-norms" title="Permalink to this headline">¶</a></h2>
<p>In order to implement the convex or concave branches of the power
function <spanclass="math">\(x^p\)</span> and <spanclass="math">\(p\)</span>-norms <spanclass="math">\(\|x\|_p\)</span> for general
values of <spanclass="math">\(p\)</span>, CVX uses an enhanced version of the SDP/SOCP
conversion method described by <aclass="reference internal"href="credits.html#ag00"id="id3">[AG00]</a>.
This approach is exact—as long as the exponent <spanclass="math">\(p\)</span> is rational.
To determine integral values <spanclass="math">\(p_n,p_d\)</span> such that
<spanclass="math">\(p_n/p_d=p\)</span>, CVX uses Matlab’s <ttclass="docutils literal"><spanclass="pre">rat</span></tt> function with its
default tolerance of <span class="math">\(10^{-6}\)</span>. There is currently no way to
change this tolerance; see the Matlab documentation
for the <tt class="docutils literal"><span class="pre">rat</span></tt> function for more details.</p>
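<p>For example, Matlab's <tt class="docutils literal"><span class="pre">rat</span></tt> recovers the rational approximation directly:</p>
<div class="highlight-none"><div class="highlight"><pre>[ pn, pd ] = rat( 1.4 );   % pn = 7, pd = 5; so norm( x, 1.4 ) is treated as the 7/5-norm
</pre></div>
</div>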
<p>The complexity of the resulting model depends roughly on the size of the
values <spanclass="math">\(p_n\)</span> and <spanclass="math">\(p_d\)</span>. Let us introduce a more precise
measure of this complexity. For <spanclass="math">\(p=2\)</span>, a constraint
<spanclass="math">\(x^p\leq y\)</span> can be represented with exactly one <spanclass="math">\(2\times 2\)</span>
LMI:</p>
<divclass="math">
\[\begin{split}x^2 \leq y \quad\Longleftrightarrow\quad \begin{bmatrix} y & x \\ x & 1 \end{bmatrix} \succeq 0.\end{split}\]</div>
<p>For other values of <spanclass="math">\(p=p_n/p_d\)</span>, CVX generates a number of
<spanclass="math">\(2\times 2\)</span> LMIs that depends on both <spanclass="math">\(p_n\)</span> and <spanclass="math">\(p_d\)</span>;
we denote this number by <spanclass="math">\(k(p_n,p_d)\)</span>. (In some cases additional
linear constraints are also generated, but we ignore them for this
analysis.) For instance, for <spanclass="math">\(p=3/1\)</span>, we have</p>
<divclass="math">
\[\begin{split}x^3\leq y,~ x\geq 0 \quad\Longleftrightarrow\quad \exists z ~
\begin{bmatrix} z &amp; x \\ x &amp; 1 \end{bmatrix} \succeq 0, ~
\begin{bmatrix} y &amp; z \\ z &amp; x \end{bmatrix} \succeq 0.\end{split}\]</div>
<p>So <spanclass="math">\(k(3,1)=2\)</span>. An empirical study has shown that for
<spanclass="math">\(p=p_n/p_d>1\)</span>, we have</p>
<divclass="math">
\[k(p_n,p_d)\leq\log_2 p_n+\alpha(p_n)\]</div>
<p>where the <spanclass="math">\(\alpha(p_n)\)</span> term grows very slowly compared to the
<spanclass="math">\(\log_2\)</span> term. Indeed, for <spanclass="math">\(p_n\leq 4096\)</span>, we have verified
that <spanclass="math">\(\alpha(p_n)\)</span> is usually 1 or 2, but occasionally 0 or 3.
Similar results are obtained for <spanclass="math">\(0 < p < 1\)</span> and <spanclass="math">\(p < 0\)</span>.</p>
<p>The cost of this SDP representation is relatively small for nearly all
useful values of <spanclass="math">\(p\)</span>. Nevertheless, CVX issues a warning
whenever <span class="math">\(k(p_n,p_d)>10\)</span> to ensure that the user is not surprised
by any unexpected slowdown. In the event that this threshold does not
suit you, you may change it using the command
<ttclass="samp docutils literal"><spanclass="pre">cvx_power_warning(</span><em><spanclass="pre">thresh</span></em><spanclass="pre">)</span></tt>, where <ttclass="samp docutils literal"><em><spanclass="pre">thresh</span></em></tt> is the desired
cutoff value. Setting the threshold to <ttclass="docutils literal"><spanclass="pre">Inf</span></tt> disables it completely.
As with the command <ttclass="docutils literal"><spanclass="pre">cvx_precision</span></tt>, you can place a call to
<ttclass="docutils literal"><spanclass="pre">cvx_power_warning</span></tt> within a model to change the threshold for a
single model; or outside of a model to make a global change. The command
always returns the <em>previous</em> value of the threshold, so you can save it
and restore it upon completion of your model, if you wish. You can query
the current value by calling <ttclass="docutils literal"><spanclass="pre">cvx_power_warning</span></tt> with no arguments.</p>
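<p>For example, to raise the threshold for one model and restore it afterward (the model shown is illustrative only):</p>
<div class="highlight-none"><div class="highlight"><pre>othresh = cvx_power_warning( 25 );   % set a new threshold, saving the old one
cvx_begin
    variable x( n )
    minimize( norm( x - b, 9/7 ) )
cvx_end
cvx_power_warning( othresh );        % restore the previous threshold
</pre></div>
</div>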
</div>
<divclass="section"id="overdetermined-problems">
<spanid="overdetermined"></span><h2>Overdetermined problems<aclass="headerlink"href="#overdetermined-problems"title="Permalink to this headline">¶</a></h2>
<p>The status message <ttclass="docutils literal"><spanclass="pre">Overdetermined</span></tt> commonly occurs when structure
in a variable or set is not properly recognized. For example, consider
the problem of finding the smallest diagonal addition to a matrix
<spanclass="math">\(W\in\mathbf{R}^{n\times n}\)</span> to make it positive semidefinite:</p>
<divclass="math">
\[\begin{split}\begin{array}{ll}
\text{minimize} & \operatorname*{\textrm{Tr}}(D) \\
\text{subject to} & W + D \succeq 0 \\
& D ~ \text{diagonal}
\end{array}\end{split}\]</div>
<p>In CVX, this problem might be expressed as follows:</p>
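<p>A sketch of one such specification (the data <tt class="docutils literal"><span class="pre">W</span></tt> here is an arbitrary unsymmetric matrix; the equality constraint takes the form corrected later in this section):</p>
<div class="highlight-none"><div class="highlight"><pre>n = 4;
W = randn( n, n );                % fixed, unsymmetric data
cvx_begin
    variable D( n, n ) diagonal;
    minimize( trace( D ) );
    subject to
        W + D == semidefinite( n );
cvx_end
</pre></div>
</div>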
<p>When this model is run, CVX cannot solve it, and the variable <tt class="docutils literal"><span class="pre">cvx_status</span></tt> is set to <tt class="docutils literal"><span class="pre">Overdetermined</span></tt>.</p>
<p>What has happened here is that the unnamed variable returned by
statement <ttclass="docutils literal"><spanclass="pre">semidefinite(n)</span></tt> is <em>symmetric</em>, but <spanclass="math">\(W\)</span> is fixed and
<em>unsymmetric</em>. Thus the problem, as stated, is infeasible. But there are
also <span class="math">\(n^2\)</span> equality constraints here, and only <span class="math">\(n+n(n+1)/2\)</span>
unique degrees of freedom—thus the problem is overdetermined. We can
correct this problem by replacing the equality constraint with</p>
<divclass="highlight-none"><divclass="highlight"><pre>sym( W ) + D == semidefinite(n);
</pre></div>
</div>
<p><ttclass="docutils literal"><spanclass="pre">sym</span></tt> is a function we have provided that extracts the symmetric part
of its argument; that is, <ttclass="docutils literal"><spanclass="pre">sym(W)</span></tt> equals <ttclass="docutils literal"><spanclass="pre">0.5</span><spanclass="pre">*</span><spanclass="pre">(</span><spanclass="pre">W</span><spanclass="pre">+</span><spanclass="pre">W'</span><spanclass="pre">)</span></tt>.</p>
<spanid="newfunc"></span><h2>Adding new functions to the atom library<aclass="headerlink"href="#adding-new-functions-to-the-atom-library"title="Permalink to this headline">¶</a></h2>
<p>CVX allows new convex and concave functions to be defined and added
to the atom library, in two ways, described in this section. The first
method is simple, and can (and should) be used by many users of CVX,
since it requires only a knowledge of the basic DCP ruleset. The second
method is very powerful, but a bit complicated, and should be considered
an advanced technique, to be attempted only by those who are truly
comfortable with convex analysis, disciplined convex programming, and
CVX in its current state.</p>
<p>Please let us know if you have implemented a convex or concave
function that you think would be useful to other users; we will be happy
to incorporate it in a future release.</p>
<span id="newfunc-psp"></span><h3>New functions via partially specified problems<a class="headerlink" href="#new-functions-via-partially-specified-problems" title="Permalink to this headline">¶</a></h3>
<p>A more advanced method for defining new functions in CVX relies on
the following basic result of convex analysis. Suppose that
<span class="math">\(S\subset\mathbf{R}^n\times\mathbf{R}^m\)</span> is a convex set and
<span class="math">\(g:S\rightarrow(\mathbf{R}\cup+\infty)\)</span> is a convex function. Then</p>
<div class="math">
\[f:\mathbf{R}^n\rightarrow(\mathbf{R}\cup+\infty), \quad f(x) \triangleq \inf\left\{\, g(x,y) \,~|~\, (x,y)\in S \,\right\}\]</div>
<p>is also a convex function; that is, partial minimization over <span class="math">\(y\)</span> preserves convexity.</p>
<p>In CVX you can define a convex function in this very manner, that
is, as the optimal value of a parameterized family of disciplined convex
programs. We call the underlying convex program in such cases an
<em>incomplete specification</em>—so named because the parameters (that is,
the function inputs) are unknown when the specification is constructed.
The concept of incomplete specifications can at first seem a bit
complicated, but it is a very powerful mechanism that allows CVX to
support a wide variety of functions.</p>
<p>Let us look at an example to see how this works. Consider the
unit-halfwidth Huber penalty function <spanclass="math">\(h(x)\)</span>:</p>
<divclass="math">
\[\begin{split}h:\mathbf{R}\rightarrow\mathbf{R}, \quad h(x) \triangleq \begin{cases} x^2 & |x| \leq 1 \\ 2|x|-1 & |x| \geq 1 \end{cases}.\end{split}\]</div>
<p>We can express the Huber function in terms of the following family of
convex QPs, parameterized by <spanclass="math">\(x\)</span>:</p>
<divclass="math">
\[\begin{split}\begin{array}{ll}
\text{minimize} & 2 v + w^2 \\
\text{subject to} & | x | \leq v + w \\
& w \leq 1, ~ v \geq 0
\end{array}\end{split}\]</div>
<p>with scalar variables <spanclass="math">\(v\)</span> and <spanclass="math">\(w\)</span>. The optimal value of this
simple QP is equal to the Huber penalty function of <spanclass="math">\(x\)</span>. We note
that the objective and constraint functions in this QP are (jointly)
convex in <span class="math">\(v\)</span>, <span class="math">\(w\)</span>, <em>and</em> <span class="math">\(x\)</span>.</p>
<p>We can implement the Huber penalty function in CVX as follows:</p>
<divclass="highlight-none"><divclass="highlight"><pre>function cvx_optval = huber( x )
cvx_begin
variables w v;
minimize( w^2 + 2 * v );
subject to
abs( x ) <= w + v;
w <= 1; v >= 0;
cvx_end
</pre></div>
</div>
<p>If <ttclass="docutils literal"><spanclass="pre">huber</span></tt> is called with a numeric value of <ttclass="docutils literal"><spanclass="pre">x</span></tt>, then upon reaching
the <ttclass="docutils literal"><spanclass="pre">cvx_end</span></tt> statement, CVX will find a complete specification,
and solve the problem to compute the result. CVX places the optimal
objective function value into the variable <tt class="docutils literal"><span class="pre">cvx_optval</span></tt>, and the function
returns that value as its output. Of course, it’s very inefficient to
compute the Huber function of a numeric value <spanclass="math">\(x\)</span> by solving a QP.
But it does give the correct value (up to the core solver accuracy).</p>
<p>What is most important, however, is that if <ttclass="docutils literal"><spanclass="pre">huber</span></tt> is used within a
CVX specification, with an affine CVX expression for its
argument, then CVX will do the right thing. In particular, CVX
will recognize the Huber function, called with affine argument, as a
valid convex expression. In this case, the function <ttclass="docutils literal"><spanclass="pre">huber</span></tt> will
contain a special Matlab object that represents the function call in
constraints and objectives. Thus the function <ttclass="docutils literal"><spanclass="pre">huber</span></tt> can be used
anywhere a traditional convex function can be used, in constraints or
objective functions, in accordance with the DCP ruleset.</p>
<p>There is a corresponding development for concave functions as well.
Given a convex set <span class="math">\(S\)</span> as above, and a concave function
<span class="math">\(g:S\rightarrow(\mathbf{R}\cup-\infty)\)</span>, the function <span class="math">\(f(x)\triangleq\sup\left\{\,g(x,y)\,~|~\,(x,y)\in S\,\right\}\)</span> is concave, and the analogous construction</p>
<p>gives the <em>hypograph</em> representation of <span class="math">\(f\)</span>:</p>
<divclass="math">
\[\operatorname{\textbf{hypo}}f = S - \mathbf{R}_+^n.\]</div>
<p>In CVX, a concave incomplete specification is simply one that uses a
<ttclass="docutils literal"><spanclass="pre">maximize</span></tt> objective instead of a <ttclass="docutils literal"><spanclass="pre">minimize</span></tt> objective; and if
properly constructed, it can be used anywhere a traditional concave
function can be used within a CVX specification.</p>
<p>For an example of a concave incomplete specification, consider the
function <span class="math">\(f(X)\triangleq\lambda_{\min}(X+X^T)\)</span>, where <span class="math">\(X\in\mathbf{R}^{n\times n}\)</span>.</p>
<p>Its hypograph can be represented using a single linear matrix
inequality:</p>
<divclass="math">
\[\operatorname{\textbf{hypo}}f = \left\{\, (X,t) \,~|~\, f(X) \geq t \,\right\} = \left\{\, (X,t) \,~|~\, X + X^T - t I \succeq 0 \,\right\}\]</div>
<p>So we can implement this function in CVX as follows:</p>
<divclass="highlight-none"><divclass="highlight"><pre>function cvx_optval = lambda_min_symm( X )
n = size( X, 1 );
cvx_begin
variable y;
maximize( y );
subject to
X + X' - y * eye( n ) == semidefinite( n );
cvx_end
</pre></div>
</div>
<p>If a numeric value of <ttclass="docutils literal"><spanclass="pre">X</span></tt> is supplied, this function will return
<ttclass="docutils literal"><spanclass="pre">min(eig(X+X'))</span></tt> (to within numerical tolerances). However, this
function can also be used in CVX constraints and objectives, just
like any other concave function in the atom library.</p>
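<p>For instance, a usage sketch (the surrounding model and its constraint are illustrative assumptions only):</p>
<div class="highlight-none"><div class="highlight"><pre>cvx_begin
    variable X( n, n )
    maximize( lambda_min_symm( X ) );
    subject to
        diag( X ) == 1;
cvx_end
</pre></div>
</div>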
<p>There are two practical issues that arise when defining functions using
incomplete specifications, both of which we will illustrate using our
<ttclass="docutils literal"><spanclass="pre">huber</span></tt> example above. First of all, as written the function works
only with scalar values. To apply it (elementwise) to a vector requires
that we iterate through the elements in a <ttclass="docutils literal"><spanclass="pre">for</span></tt> loop—a <em>very</em>
inefficient enterprise, particularly in CVX. A far better approach
is to extend the <ttclass="docutils literal"><spanclass="pre">huber</span></tt> function to handle vector inputs. This is, in
fact, rather simple to do: we simply create a <em>multiobjective</em> version
of the problem:</p>
<divclass="highlight-none"><divclass="highlight"><pre>function cvx_optval = huber( x )
sx = size( x );
cvx_begin
variables w( sx ) v( sx );
minimize( w .^ 2 + 2 * v );
subject to
abs( x ) <= w + v;
w <= 1; v >= 0;
cvx_end
</pre></div>
</div>
<p>This version of <tt class="docutils literal"><span class="pre">huber</span></tt> will in effect create <tt class="docutils literal"><span class="pre">sx</span></tt> “instances” of
the problem in parallel; and when used in a CVX specification, will
be handled correctly.</p>
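<p>With this extension in place, the function can be applied elementwise to an affine vector expression; for instance (a sketch, with data <tt class="docutils literal"><span class="pre">A</span></tt> and <tt class="docutils literal"><span class="pre">b</span></tt> assumed given):</p>
<div class="highlight-none"><div class="highlight"><pre>cvx_begin
    variable x( n )
    minimize( sum( huber( A * x - b ) ) );
cvx_end
</pre></div>
</div>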
<p>The second issue is that if the input to <ttclass="docutils literal"><spanclass="pre">huber</span></tt> is numeric, then
direct computation is a far more efficient way to compute the result
than solving a QP. (What is more, the multiobjective version cannot be
used with numeric inputs.) One solution is to place both versions in one
file, with an appropriate test to select the proper version to use:</p>
<divclass="highlight-none"><divclass="highlight"><pre>function cvx_optval = huber( x )
if isnumeric( x ),
xa = abs( x );
flag = xa < 1;
cvx_optval = flag .* xa.^2 + (~flag) .* (2*xa-1);
else,
sx = size( x );
cvx_begin
variables w( sx ) v( sx );
minimize( w .^ 2 + 2 * v );
subject to
abs( x ) <= w + v;
w <= 1; v >= 0;
cvx_end
end
</pre></div>
</div>
<p>Alternatively, you can create two separate versions of the function, one
for numeric input and one for CVX expressions, and place the CVX
version in a subdirectory called <ttclass="docutils literal"><spanclass="pre">@cvx</span></tt>. (Do not include this
directory in your Matlab <ttclass="docutils literal"><spanclass="pre">path</span></tt>; only include its parent.) Matlab will
automatically call the version in the <ttclass="docutils literal"><spanclass="pre">@cvx</span></tt> directory when one of the
arguments is a CVX variable. This is the approach taken for the
version of <ttclass="docutils literal"><spanclass="pre">huber</span></tt> found in the CVX atom library.</p>
<p>One good way to learn more about using incomplete specifications is to
examine some of the examples already in the CVX atom library. Good
choices include <ttclass="docutils literal"><spanclass="pre">huber</span></tt>, <ttclass="docutils literal"><spanclass="pre">inv_pos</span></tt>, <ttclass="docutils literal"><spanclass="pre">lambda_min</span></tt>, <ttclass="docutils literal"><spanclass="pre">lambda_max</span></tt>,
<ttclass="docutils literal"><spanclass="pre">matrix_frac</span></tt>, <ttclass="docutils literal"><spanclass="pre">quad_over_lin</span></tt>, <ttclass="docutils literal"><spanclass="pre">sum_largest</span></tt>, and others. Some
are a bit difficult to read because of diagnostic or error-checking