In the \code{.blackdynamite} folder (in your home directory) you should list the servers hosting your databases, with the options and information of your choice. For each database, add a \code{.db} file named after the server (or after an alias, in which case you specify the host inside: \code{host = yourHost.domain.countryID}). It is also recommended to store the database password there, to avoid having to type it when using autocompletion.
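For instance, a server file could look like the following (the file name, host and the password key name are placeholders, not prescribed values):

```text
# ~/.blackdynamite/yourServer.db  (hypothetical example)
host = yourHost.domain.countryID
password = yourPassword
```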
\chapter{Introduction and philosophy}
BlackDynamite is simply a tool that helps
achieve a few things:
\begin{enumerate}
\item Launching a program repeatedly with varying parameters, to explore the
chosen parametric space.
\item Collecting and sorting results of \textbf{\color{red}small size}, benefiting
from the power of modern databases.
\item Analysing the results by making requests to the associated database.
\end{enumerate}
\paragraph{Launching} is made simple: any executable can be launched.
The set of directories will be generated and managed
by BlackDynamite to prevent errors. Requests of any kind can then
be made to the underlying database through BlackDynamite's user-friendly commands.
\paragraph{Collecting} the results is possible thanks to the BlackDynamite C/C++ and Python
APIs, which let you send results directly to the database, where they are automatically sorted.
This is extremely useful. However, heavy data such as Paraview files, or any other large output,
should not be pushed to the database, for obvious performance reasons.
\paragraph{Analysis} of the results is made easy by BlackDynamite, which
can retrieve data in the form of Numpy arrays to be used, analyzed or plotted
with the powerful and vast Python libraries such as Matplotlib and Scipy.
\chapter{Setting up a parametric study}
\section{Choosing the parameters of the study}
The first thing to do is to set up the table in the database associated
with the study we want to perform. To do this,
you need, first of all, to list all the parameters that define
a specific case/computation. These parameters can be of simple types
such as strings, integers, floats, etc. At present, no vectorial
quantity can be used as an input parameter.
Once this list is done, you need to create a script, usually named \underline{\code{createDB.py}},
that will do this task. Let us examine an example of such a script.
\subsection{Setting up the BlackDynamite Python modules}
First we need to set the Python header and import the \blackdynamite modules:
\begin{command}
#!/usr/bin/env python
import BlackDynamite as BD
\end{command}
Then you have to create a generic BlackDynamite parser
and parse the command line (including the connection parameters and credentials):
\begin{command}
parser = BD.bdparser.BDParser()
params = parser.parseBDParameters()
\end{command}
This mechanism allows you to easily inherit from the parsing mechanism
of BlackDynamite, including tab completion (if activated: see the installation instructions).
Then you can connect to the BlackDynamite database:
\begin{command}
base = BD.base.Base(**params)
\end{command}
\subsection{Setting up the parametric space: the job pattern}
Then you have to define the parametric space (at present,
the parametric space cannot be changed once the study has started:
be careful with your choices).
Any particular job is defined as a point in the parametric space.
For instance, to create a job description and add parameters
of int, float or string type, you can use the following Python sequence:
\begin{command}
myjob_desc = BD.job.Job(base)
myjob_desc.types["param1"] = int
myjob_desc.types["param2"] = float
myjob_desc.types["param3"] = str
\end{command}
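Note that BlackDynamite itself takes care of creating the jobs. Purely to fix ideas, the parametric space spanned by the three declared parameters is the Cartesian product of the chosen value lists; a self-contained sketch, with invented placeholder values, follows:

```python
import itertools

# Hypothetical value lists for the three parameters declared above
space = {
    "param1": [1, 2],
    "param2": [0.5, 1.0],
    "param3": ["gcc", "icc"],
}

# Each job is one point of the Cartesian product of the value lists
jobs = [dict(zip(space, combo))
        for combo in itertools.product(*space.values())]
print(len(jobs))  # 2 * 2 * 2 = 8 jobs
```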
\subsection{Setting up the run space}
Aside from the jobs, a run represents a particular realisation (computation)
of a job. To be clearer, a run will contain information on the machine it was run on,
the executable version, or the number of processors employed.
For instance, creating the run pattern can be done with:
\begin{command}
myruns_desc = BD.run.Run(base)
myruns_desc.types["compiler"] = str
\end{command}
There are default entries in the description of runs.
These are:
\begin{itemize}
\item machine\_name (string): the name of the machine where the run must be executed
\item job\_id (int): the ID of the job being run
\item has\_started (bool): flag indicating whether the run has already started
\item has\_finished (bool): flag indicating whether the run has already finished
\item run\_name (string): the name of the run
\item wait\_id (int): the ID of a run to wait for before starting
\item start\_time (timestamp): the start time of the run
\end{itemize}
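As a purely illustrative sketch (a plain Python dict, not the BlackDynamite API; the values are placeholders), a freshly created run carrying the default entries above could be pictured as:

```python
# Hypothetical illustration: the default entries of a run record,
# before the run has been started by the scheduler.
new_run = {
    "machine_name": "lsmspc41",
    "job_id": 1,
    "has_started": False,
    "has_finished": False,
    "run_name": "toto",
    "wait_id": None,     # no dependency on another run
    "start_time": None,  # filled in when the run actually starts
}
print(sorted(new_run))
```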
\subsection{Committing the changes to the database}
Then you have to request the creation of the database:
\begin{command}
base.createBase(myjob_desc, myruns_desc, **params)
\end{command}
You then have to launch the script. As mentioned, all BlackDynamite
scripts inherit from the same parsing system, so whenever you launch
one of them you can always ask for the list of valid keywords.

Runs are created in a second script, usually named \underline{\code{createRuns.py}} (as in the example below). Then, one has to specify which file is the entry point of the run:
\begin{command}
myrun.setExecFile("launch.sh")
\end{command}
Finally, we have to create Run objects and attach them to jobs.
The very first task is to retrieve the jobs from the database.
To that end, the JobSelector object shall be your friend:
\begin{command}
jobSelector = BD.JobSelector(base)
job_list = jobSelector.selectJobs()
\end{command}
This will return a job list that you can loop through and
attach the runs to:
\begin{command}
for j in job_list:
    myrun['compiler'] = 'gcc'
    myrun.attachToJob(j)
\end{command}
Everything should then be committed to the database (the \code{truerun} parameter acts as a safeguard: when it is not set, nothing is actually written):
\begin{command}
if params["truerun"] is True: base.commit()
\end{command}
To create the runs, one finally launches the script by typing:
\begin{command}
./createRuns.py --host lsmssrv1.epfl.ch --study test --machine_name lsmspc41 --run_name toto --nproc int --truerun
\end{command}
The runs are then launched using the tool \code{launchRuns.py}:
\begin{command}
./launchRuns.py --host lsmssrv1.epfl.ch --study test --outpath /home/user/ --truerun (--nruns int)
\end{command}
\section{Accessing and manipulating the database}
The runs can be monitored in the database with the tool \code{getRunInfo.py}, and one can enter a run's folder with \code{enterRun.py}:
\begin{command}
./getRunInfo.py --host lsmssrv1.epfl.ch --study test
./enterRun.py --host lsmssrv1.epfl.ch --study test --run_id ID
\end{command}
The status of a run can be manually modified using the command \code{cleanRuns.py}; the default status is CREATED (runs can also be deleted with the \code{--delete} flag):
\begin{command}
./cleanRuns.py --host lsmssrv1.epfl.ch --study test (--runid ID) --truerun (--delete)
\end{command}
The status and the other run parameters (e.g.\ the compiler in the example above) can also be modified with \code{updateRuns.py}. This can be done in the executed script to set the selected parameter automatically:
\begin{command}
updateRuns.py --host lsmssrv1.epfl.ch --study test --updates 'state = toto' --truerun
\end{command}
The script \code{canYouDigIt.py} is an example of how to collect data from the runs to draw graphs, for example to plot the crack length as a function of time for different values of sigma\_c (the study parameter):
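The exact API used by \code{canYouDigIt.py} is not reproduced here. As a purely illustrative, self-contained sketch (the data and structure below are invented placeholders, not BlackDynamite calls), grouping per-run results by sigma\_c before plotting could look like:

```python
# Hypothetical illustration: each run yields a
# (sigma_c, time_series, crack_length_series) triple; we group the
# curves by sigma_c so each parameter value becomes one plotted line.
runs = [
    (1.0, [0.0, 1.0, 2.0], [0.0, 0.1, 0.3]),
    (2.0, [0.0, 1.0, 2.0], [0.0, 0.2, 0.5]),
]

curves = {}
for sigma_c, time, crack_length in runs:
    curves.setdefault(sigma_c, []).append((time, crack_length))

# With matplotlib one would then call, for each curve:
#   plt.plot(time, crack_length, label="sigma_c = %g" % sigma_c)
print(sorted(curves))  # one curve family per sigma_c value
```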