Same as {\bf time \ } but computes the inverse amplification $\mu^{-1}$ in the image plane.\\
If \textit{int1}=1, compute $\mu^{-1}$; if \textit{int1}=2, compute its absolute value; and if \textit{int1}=3, compute its absolute value in magnitudes.\\
If \textit{int1}=5, compute the convergence map in the image plane.\\
If \textit{int1}=6, compute the shear map in the image plane.\\
If \textit{int1}=-1, compute the absolute value of $\mu^{-1}$ in the source plane, considering all the images (prototype).\\
The accuracy of the result obviously depends on the pixel size. As for the \textbf{time} function, the area covered by the image is defined in the \textbf{frame} section.
In Bayesian optimisation mode, you can obtain the error on the projected mass density (\textit{int1}=5) in $10^{12} M_{\odot}$ per pixel. The map size in arcsec is defined with the \verb+dmax+ keyword in the \verb+champ+ section. In the rectangular field case, the image size is (X,Y) = (scaling $\times$ \textit{int2}, \textit{int2}).
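For illustration, a map of $|\mu^{-1}|$ in magnitudes (\textit{int1}=3) on a $1000\times1000$ pixel grid could be requested with a line of the following form (a sketch only: the argument order, the meaning of \textit{float} as a source redshift and the output file name \verb+ampli.fits+ are assumptions to be checked against the syntax summary of your Lenstool version):\\
{\bf ampli \ 3 \ 1000 \ 1.2 \ ampli.fits}\\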
If true, the program will create a grid of \textit{int2} points. \\
If \textit{int1}=1, it considers this grid as the Source Plane at redshift \textit{float} and computes the corresponding grid in the Image Plane.\\
If \textit{int1}=2, it considers this grid as the Image Plane and
computes the corresponding grid in the Source Plane at
redshift \textit{float}.\\
The grid coordinates, either in the source or in the image plane, are defined in the \textbf{frame} section by the \textit{dmax} keyword or by the set of keywords (\textit{xmin}, \textit{xmax}, \textit{ymin}, \textit{ymax}).\\
Results are written in the files gi1.dat (image vertical grid),
gi2.dat (image horizontal grid), gs1.dat (source vertical grid)
and gs2.dat (source horizontal grid).\\
All these files have the following format:\\
\{ $i$ $x_i$ $y_i$ \}\\
where $i$ is an index and $x_i$, $y_i$ are the pixel coordinates.\\
PICT: here is an example of a PICT input file that draws the image grid:\\
{\bf
\begin{tabular}{lllll}
curve& & & & \\
&nfile &2& & \\
&namein &1&0&gi1.dat\\
&column &1&3& \\
& &2&3& \\
&namein &2&0&gi2.dat\\
&column &2&3& \\
& &2&3& \\
&end & & & \\
fini& & & & \\
\end{tabular}
}\\
The grid may also be displayed on ds9 by using the \textit{grid} perl script.\\
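For instance, to map a grid of the image plane back to the source plane at redshift $1.2$ (\textit{int1}=2), a hypothetical line could be (the keyword name \textbf{grille} and the argument order \textit{int1 int2 float} are assumed here):\\
{\bf grille \ 2 \ 30 \ 1.2}\\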
If \textit{int1}=1, the optimisation method is the parabolic method.\\
If \textit{int1}=2, the optimisation looks for the galaxy scaling parameters sigma and cut radius that give the best lens model. According to the gridding stated in the \verb+potfile+ section, it runs over the grid and computes the best $\chi^2$ in each node with the parabolic optimisation method (see the \verb+potfile+ Section). \\
If \textit{int1}=3, the optimisation method is the Bayesian method. This method is very slow but is less sensitive to local $\chi^2$ minima than the parabolic method.\\
If \textit{int1}=4, the optimisation method is a maximum-likelihood method based on the BayeSys algorithm. The cooling factor is not limited to 1.\\
For inverse method 1 and 2, \textit{float1} gives the maximum number of iterations for the parabolic method. \\
For inverse method 3, \textit{float1} controls the speed of the Bayesian optimisation and \textit{float2} sets the number of sampling iterations. As we use $10$ Markov chains at the same time, each iteration produces $10$ samples. By default, \textit{float2} is the number of iterations needed to complete the burn-in phase. The samples are saved in the \verb+bayes.dat+ ASCII file. \\
\textit{float1} sets the rate $\delta\lambda$ by which the likelihood is raised at each step of the Markov chain. At the beginning, $\lambda = 0$, and at the end of the Markov chain, $\lambda = 1$. Default: $\delta\lambda = 0.5$.\\
The optimisation process will create a \verb+best.par+ and a \verb+bestopt.par+ file. They contain the best model and the best model plus the optimisation limits, respectively. Additionally, useful information related to the optimisation is provided in their headers. \\
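For example, a Bayesian optimisation (\textit{int1}=3) with the default rate and 1000 sampling iterations corresponds to a line of the following form (argument order as described above):\\
{\bf inverse \ 3 \ 0.5 \ 1000}\\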
\item{{\bf minchi0 \ } \textit{float} }\\
\textit{float\ } value of the $\chi^2$ at which the optimisation program
will stop in the case of a parabolic optimisation. Default is 0; this is not a problem in case of slow convergence, or even no convergence at all, since the number of iterations is also controlled by the {\bf inverse} qualifier.\\
which is actually not the central velocity dispersion but a characteristic one.
The ellipticity is introduced in the lens potential by replacing $r$ with $r\sqrt{1-\epsilon_\varphi \cos(2(\theta-\theta_0))}$, where $\epsilon_\varphi$ is the ellipticity of the potential. In the small-ellipticity approximation, the potential ellipticity is proportional to the surface-density ellipticity, with $\epsilon_\varphi \simeq \epsilon_\Sigma / 3$ (cf. Golse \& Kneib 2002).
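For instance, a surface-density ellipticity $\epsilon_\Sigma = 0.3$ corresponds to a potential ellipticity $\epsilon_\varphi \simeq 0.1$ in this approximation.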
\vspace{1cm}
\item{{\bf x\_centre \ } \textit{float} }\\
Set the x position of the center $x_c$. In arcseconds.\\
\item{{\bf x\_centre\_wcs \ } \textit{float} }\\
Same as \textbf{x\_centre} but gives the position in degrees (WCS). This keyword
needs the presence of the \textit{reference} keyword in the \textbf{runmode} Section. \\
\item{{\bf y\_centre \ } \textit{float} }\\
Set the y position of the center $y_c$. In arcseconds.\\
\item{{\bf y\_centre\_wcs \ } \textit{float} }\\
Same as \textbf{x\_centre\_wcs} but for the y position. \\
\item{{\bf masse \ } \textit{float} }\\
Set the point mass $M_0$ expressed in $10^{12}$ solar masses,
only if {\bf profil}=7.\\
\item{{\bf pmass \ } \textit{float} }\\
Set the surface mass density $\Sigma_0$ expressed in $\rm g\,cm^{-2}$,
If \textit{int}=3, $x_c$ and $y_c$ are given in degrees in the World Coordinate System. \\
If \textit{int}=1, $x_c$ and $y_c$ are given in arcseconds relative to the reference point given in the \textbf{runmode} section.\\
The ellipticity ($\varepsilon$) parameter is linked to $a$ and $b$ by :
$$
\varepsilon = (a^2-b^2)/(a^2+b^2)\ .
$$
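For instance, an axis ratio $b/a = 1/2$ gives $\varepsilon = (4-1)/(4+1) = 0.6$.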
The ellipticity ($\varepsilon$) of the galaxies is then converted according to their potential type \textbf{ftype} (cf. \ref{sec.potential}).\\
\item{{\bf type \ } \textit{int} }\\
All the potfile galaxies have the same mass profile set by \textit{int}. (See \textbf{potential} section). Default value : $81$, PIEMD.
In the current version, the $Lum$ value is not used.\\
The dynamical parameters ($r_{core}$,$r_{cut}$,$\sigma$) of the potfile galaxies are scaled from the Faber-Jackson and Tully-Fisher scaling relations for elliptical and spiral galaxies, respectively. These scaling laws conserve the mass-to-light ratio of the galaxies. The scaling factors are defined below.
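For reference, these scaling relations are commonly written as (a sketch assuming a constant mass-to-light ratio; the exact exponents are controlled by the slope keywords described below):
$$
\sigma_0 = \sigma_0^\star \left(\frac{L}{L^\star}\right)^{1/4}, \qquad
r_{core} = r_{core}^\star \left(\frac{L}{L^\star}\right)^{1/2}, \qquad
r_{cut} = r_{cut}^\star \left(\frac{L}{L^\star}\right)^{1/2},
$$
with $L/L^\star = 10^{\,0.4\,(m^\star - m)}$, where $\sigma_0^\star$, $r_{core}^\star$, $r_{cut}^\star$ and $m^\star$ are set by the keywords below.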
\item{{\bf mag0 \ } \textit{float} }\\
\textit{float} is $m^\star$ in the scaling relations below. It can be given in absolute or in apparent magnitudes, as long as it is consistent with the magnitudes given in your potfile. $m^\star$ default value is $17\ \rm mag$.
\item{{\bf zlens \ } \textit{float} }\\
All the potfile galaxies with no specified redshift (Catalog format $3$) have the same redshift \textit{float}. This is used to compute the angular diameter distance $D_{OL}$.
In the \textbf{inverse} 2 optimisation method, \textit{int} sets the number of bins for the potfile optimisation in the range (min, max) = (\textit{float1}, \textit{float2}) (see the \textbf{inverse} section). \\
In the \textbf{inverse} 3 Bayesian optimisation method, \textit{int} can be 1 or 3 for a uniform or a Gaussian prior, respectively. For the uniform prior, (\textit{float1}, \textit{float2}) are the (min, max) limits. For the Gaussian prior, (\textit{float1}, \textit{float2}) are the (mean, stddev) parameters of the Gaussian pdf. \\
$\sigma_0^\star$ default value : $200\ \rm km/s$.
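For example, in Bayesian mode, a Gaussian prior on $\sigma_0^\star$ centred on $200\ \rm km/s$ with a standard deviation of $30\ \rm km/s$ could be written as the following sketch:\\
{\bf sigma \ 3 \ 200. \ 30.}\\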
\item{{\bf core \ } \textit{float}}\\
\textit{float} is $r_{core}^\star$ in arc seconds. It is used to compute the core radius of the galaxies.
\textit{float} is $r_{core}^\star$ in kpc. It is used to compute the core radius of the galaxies in kpc. The cosmological parameters defined in the \textit{cosmology} Section are used to convert from kpc to arcseconds.
If \textit{int} is true, \textit{float1} is $r_{cut}^\star$ in kpc and is used to compute the cut radius of the galaxies in kpc. The cosmological parameters defined in the \textit{cosmology} Section are then used to convert from kpc to arc seconds.
\textit{float1} is the slope value used in the $r_{cut}$ computation. \\
\textit{int} and \textit{float2} are used in the potfile optimisation (see the \textit{sigma} keyword and the \textit{inverse} section). [Not yet implemented for the Bayesian optimisation.] \\
\textit{float1} is the velocity dispersion slope value used in the $\sigma_0$ computation. \\
\textit{int} and \textit{float2} are used in the potfile optimisation (see the \textit{sigma} keyword and the \textit{inverse} section). [Not yet implemented for the Bayesian optimisation.] \\
$\sigma_{slope}$ default value is 4. \\
\end{description}
\subsection{cline}
\textit{under this identifier are defined the
parameters to compute the critical and the caustic lines.}\\
Select one of the two algorithms for the computation of the critical and caustic
lines.\\
The snake algorithm is the original algorithm implemented in \lenstool.
For each of the first \textit{nlens\_crit} clumps of the lens model, the algorithm
starts from the centre of the clump and looks for a point on a surrounding critical
line (the locus where the amplification is infinite). Then, it tries to follow
this critical line back to its original starting point.\\
The marching squares algorithm defines a first square with the \textit{dmax} or
the \textit{champ} keywords. Then, it divides this first square into 4 smaller squares.
According to their size and their amplification values at the centre and the 4 corners, each square is divided or not into 4 further smaller squares.\\
If the field is rectangular, the larger of the width and the height of the field is chosen as the initial square size.\\
As long as the size of a square is greater than \textit{limitHigh}, it is
automatically divided into 4 smaller squares. The size of a square cannot be smaller than \textit{limitLow}.\\
The marching squares algorithm is slower than the line-following snake algorithm but always gives the full contour of the critical lines, and it is less sensitive to small irregularities in the contour. The snake algorithm always returns a connected contour.\\
Default algorithm is \verb+marchingsquares+. A minimal example of a \textbf{cline} section is given at the end of this subsection.\\
\item{{\bf pas \ } \textit{float}}\\
For the line-following algorithm :\\
Defines the step between each search, {\em i.e.} the typical
distance in arcsec between two points of the external critical lines.
For internal critical lines, half this value is used.\\
To improve the definition of the critical and caustic lines, use smaller
values such as 0.5" or even 0.2".\\
Default is 1".\\
\item{{\bf limitLow \ } \textit{float}}\\
For the marching squares algorithm :\\
Defines the smallest size of a square, {\em i.e.} the size
of a square is compared to this value to decide between
dividing it again into 4 squares or stopping the division.\\
To improve the definition of the critical and caustic lines, use smaller
values such as 0.5" or even 0.2". This implies more computation time.\\
Default is 1".\\
\item{{\bf limitHigh \ } \textit{float}}\\
For the marching squares algorithm :\\
Defines the largest size of a square, {\em i.e.} the size of a square cannot
be larger than this value. A square
with a size above this value is automatically divided into 4 squares.\\
Decrease this value to remove holes in the critical lines and improve the
detection of critical lines around isolated galaxies. This implies more
computation time.\\
Default is 10".\\
\end{description}
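As a minimal sketch, a \textbf{cline} section combining the keywords described above could look like this (numerical values are only illustrative):\\
{\bf
\begin{tabular}{lll}
cline& & \\
&algorithm &marchingsquares\\
&limitHigh &5.\\
&limitLow &0.5\\
&end & \\
\end{tabular}
}\\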
\subsection{grande}
\textit{under this identifier is defined the way to represent the
computed deformation of objects.}\\
\begin{description}
\item{{\bf large\_dist \ } \textit{float} }\\
\textit{float \ } sets the value above which the deformation is considered strong
(it corresponds to a minimum value of $\tau_I$). Typical value
\item{{\bf cleanset \ } \textit{int} \textit{float} }\\
If true, the program will compute from the image pixel-frame
{\bf imframe \ } the corresponding
source frame {\bf sframe \ } at redshift \textit{float}.\\
If the inverse mode is selected (see {\bf runmode inverse}),
an optimisation of the 'ring cycle' form will be done.
In this case,
the optimisation will work only if multiple images or giant arcs
are present in the image pixel-frame. The user is strongly advised to have a
good a priori guess of the mass model parameters.\\
Moreover, it will create the following pixel-frames:\\
- \textit{erreur.ipx} : this frame is relevant only in the case of giant
or multiple arcs; it contains the reconstruction error of the source pixel-frame
for each pixel.\\
- \textit{imult.ipx} : this pixel-frame shows the multiplicity of each pixel of the
reconstructed source pixel-frame.\\
A sketch of a complete section combining these keywords is given at the end of this subsection.\\
\item{{\bf c\_image \ } \textit{filename} }\\
If \textit{int} = 1 for the \textbf{cleanset} keyword, \textit{filename} is an ASCII file that contains a list of points \{$i$, $x_i$, $y_i$\} in the image plane given in arcsec relative to the
image reference points (see \textit{reference} keyword in \textit{runmode} section)
that delimit the
center of the observed image. The barycenter of those points is sent to the source
plane and defines the source center. This value is used to compute the WCS keywords
Define contours in the image plane for one or several images of the source whose
shape you want to reconstruct in the source plane.
\textit{int} is the index of the contour for one image. The first contour must be indexed by 1, the second by 2, etc.\\
\textit{filename} is the name of the contour ASCII file for one image; it contains a list of points \{$i$, $x_i$, $y_i$\} in the image plane given in arcsec relative to the
image reference points (see \textit{reference} keyword in \textit{runmode} section).
\item{{\bf echant \ } \textit{int} }\\
It is possible to subsample the CCD frame to calculate with greater
accuracy the source frame in the source plane. \textit{int} is the
subsampling parameter: if \textit{int}=2, each pixel is
cut into 4 smaller pixels. Default value : 2.\\
\item{{\bf s\_echant \ } \textit{int} }\\
It is possible to subsample the frame in the source plane as you can do
in the image plane with the \textit{echant} keyword. If \textit{int} is too
high, you can get an image with undefined pixel values. Default value : 1.\\
\item{{\bf s\_n \ } \textit{int} }\\
Define the width and height of the resulting source image in pixels. Default
value : 50 pixels.\\
\end{description}
The following identifiers should be set if the format of the
{\bf imframe} is 0 (ASCII format) or 1 (ipx format without scaling).
It is strongly advised to use
the ipx format (2) or the FITS format (3), to avoid having to define these identifiers.\\
\begin{description}
\item{{\bf pixel \ } \textit{float} }\\
sets the size of the pixel in x and y in arcseconds.\\
\item{{\bf column \ } \textit{int} }\\
column to be selected (in a multi-column ascii pixel-frame file).\\
\item{{\bf header \ } \textit{int} }\\
number of lines to skip before the real beginning of the data.\\
\item{{\bf pixelx \ } \textit{float} }\\
pixel size in x in arcseconds (if the pixel is not a square).\\
\item{{\bf pixely \ } \textit{float} }\\
pixel size in y in arcseconds (if the pixel is not a square).\\
\item{{\bf xmin \ } \textit{float} }\\
x position of the bottom-left pixel (expressed in pixel units).\\
\item{{\bf ymin \ } \textit{float} }\\
y position of the bottom-left pixel (expressed in pixel units).\\
\end{description}
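As a sketch, the keywords above could be combined into a section of the following form (the section identifier \verb+cleanlens+, the file names and the exact argument lists are assumptions to be checked against your Lenstool version):\\
{\bf
\begin{tabular}{llll}
cleanlens& & & \\
&cleanset &1 &0.8\\
&imframe &3 &arc.fits\\
&c\_image &contour.dat & \\
&echant &2 & \\
&s\_echant &1 & \\
&s\_n &50 & \\
&end & & \\
\end{tabular}
}\\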
\subsection{image}
\textit{ under this identifier are defined some characteristics of
where $i$ is an identifier for all the multiple images
of a single source at redshift $z_i$ (see the datafile chapter). $\theta_{ij}$ is
$90^{\circ}$ relative to the PA. $mag$ is the magnitude of the image; it is used
in the flux optimisation mode.\\
In the same file the user can put different families of multiple images.
Here is an example of 2 families of multiple images:\\
{
\bf
C1a 20.5 30.4 1.2 0.6 40. 0.65 0.\\
C1b 30.8 20.3 1.3 0.6 50. 0.65 0.\\
C1c 25.1 25.2 1.2 0.5 60. 0.65 0.\\
C2a 10.3 -5.2 1.2 0.9 10. 0. 0.\\
C2b -9.7 -4.2 1.4 0.5 20. 0. 0.\\
}\\
The redshift of the second family of multiple images in this example
is not known; therefore it is set to 0, and it can be
constrained with the {\bf z\_m\_limit} identifier (see below).\\
\item{{\bf mult\_wcs \ } \textit{int} }\\
If true, the multiple-image coordinates are considered as absolute WCS coordinates. They are converted to relative coordinates using the position of the \textit{reference} keyword given in the \textbf{runmode} section.\\
If false, the coordinates are considered as relative coordinates in arcsec.\\
\item{{\bf forme \ } \textit{int} }\\
\textit{int} = 0: false; \textit{int} = 1: true.\\
If true, both the positions and the ellipticities of
the \textbf{multfile} images are used as constraints. If false, only
the positions of the \textbf{multfile} images are used. In both cases, the optimisation
is done in the source plane.\\
Default value is $-1$, for an optimisation in the image plane.\\
If true, the arclets in \textit{filename} are used to optimise the lens
parameters, assuming all the sources are at the redshift defined
by the first identifier of the {\bf source} section (see Sect. 2.2.12).
The format of the catalogue supplied in \textit{filename} depends on the value of {\it int2}. If {\it int2} = 0, it is the same as the
arclet catalogue referred to
in section~\ref{sect:secondid:runmode} (i.e.\ an ASCII column
format of the form:
\{ $i$ $x_i$ $y_i$ $a_i$ $b_i$ $\theta_i$ $z_i$ \}, where
$\theta_i$ is associated with the ellipses' major axis, is in degrees and is
measured anticlockwise from West, and $x$ and $y$ are RA
and Dec in decimal degrees). If {\it int2} = 2, it needs 2 extra columns \{ $vare1_i$ $vare2_i$ \}, with the shape measurement variance on $e_1$ and $e_2$. Note that the redshifts of the sources are defined
in this catalogue -- it is up to the user to provide sensible estimates for
them. \\
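For illustration, a few made-up entries of such a catalogue in the \textit{int2}=0 format could look like:\\
{\bf
A1 150.1123 2.2045 1.2 0.7 35. 0.9\\
A2 150.1206 2.1987 0.9 0.5 112. 1.3\\
A3 150.0987 2.2101 1.4 0.8 78. 1.1\\
}\\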
\textit{int1} is actually more than just a switch: it defines
the form of the likelihood used in the optimisation.
The recommended mode is 6 -- which is to
use a likelihood based on the assumed distribution of intrinsic source plane