# Welcome to Gbids: a GC3Pie app to parallelize BIDS apps

## Install the standalone gbids app

### Install required packages (GC3Pie and system)

#### Install GC3Pie

* Grab the latest master branch:

  ```
  $ wget https://raw.githubusercontent.com/uzh/gc3pie/master/install.py
  ```

* Run the script:

  ```
  $ python ~/install.py --develop -y
  ```

* Activate the virtualenv and generate a gc3pie.conf file:

  ```
  $ source ~/gc3pie/bin/activate
  $ gservers  # this will generate a config file ~/.gc3/gc3pie.conf
  ```

* Install additional Python packages:

  ```
  $ source ~/gc3pie/bin/activate && pip install -r requirements.txt
  ```

#### Make sure the required Debian packages are installed

```
$ sudo apt-get update && sudo apt-get install -y \
    libffi-dev \
    libssl-dev \
    python-dev
```

### Install gbids

```
$ cd ~ && git clone https://c4science.ch/source/gbids.git
```

## Run gbids

* Configure your gc3pie.conf file.
* Identify yourself with your cloud provider.
* Activate your virtualenv.
* Export the gbids directory to the PYTHONPATH, e.g.:

  ```
  $ cd path/to/gbids && export PYTHONPATH=$PWD
  ```

### Examples

Run gbids at different levels:

* Participant level:

  ```
  $ python gbids.py {docker image} path/to/local/input_dir/ path/to/local/output_dir/ participant -s {session name} -N [options] -vvvv
  ```

* Group level:

  ```
  $ python gbids.py {docker image} path/to/local/input_dir/ path/to/local/output_dir/ group -s {session name} -N [options] -vvvv
  ```

Note: the current gbids app only supports BIDS apps that require a FreeSurfer license. Use the -FL option to pass the license file:

```
$ python gbids.py {docker image} path/to/local/input_dir/ path/to/local/output_dir/ -FL path/to/freesurfer_license.txt participant -s {session name} -N [options] -vvvv
```

### Gbids modes

Gbids can be used in TRANSFER or FILE-SERVER mode.

* TRANSFER mode
  * The data are located on the control node (where the gbids app runs) and will be transferred to the job-running nodes.
  * *Example*: to use TRANSFER mode, add the -F flag and call the gbids app as follows:

    ```
    $ python gbids.py {docker image} path/to/local/input_dir/ path/to/local/output_dir/ participant -s gs -N -F -vvvv
    ```

* FILE-SERVER mode
  * The data are located on a file server. The file server (e.g. an NFS server) needs to export the directory containing the data to all nodes, i.e. the control node as well as the job-running nodes. For those data to be usable by the running nodes, the gc3pie.conf file needs to be modified accordingly.
  * *Example*: assuming you are using an Ubuntu image and the NFS share is mounted at "/home/ubuntu/mnt/" on your control machine, you can add the following code at the end of the gc3pie.conf file:

    ```
    gbids_user_data = !/bin/bash
    apt-get update
    apt-get install -y nfs-common
    mkdir -p /home/ubuntu/mnt/
    chown -R 1000:1000 /home/ubuntu/mnt/
    mount -t nfs "CHANGE_WITH_YOUR_NFS_IP_ADDRESS":/data /home/ubuntu/mnt/
    ```

  * This script assumes that the data are located under the "/data" directory on the NFS server. You can use the local filesystem as a reference, i.e. mirror your local mount-point path on the control and job-running machines.
  * To use FILE-SERVER mode, call the gbids app as follows:

    ```
    $ python gbids.py {docker image} path/to/local/input_dir/ path/to/local/output_dir/ participant -s gs -N -vvvv
    ```

## Automatic deployment of gbids

It is possible to set up a ready-to-go environment with multiple virtual machines (VMs), e.g. a gbids control VM, an NFS server to store your data, and a private Docker registry to store your own private Docker images. Currently this solution has been tested on ScienceCloud. Please contact us if you are interested in this solution.
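## Appendix: scripting gbids invocations

If you launch gbids from other scripts, it can help to assemble the command line programmatically rather than by string concatenation. The sketch below is illustrative only: `build_gbids_command` is a hypothetical helper (not part of gbids) that mirrors the flags shown in the Examples section (`-s`, `-N`, `-F`, `-FL`, `-vvvv`).

```python
def build_gbids_command(docker_image, input_dir, output_dir, level,
                        session, transfer=False, freesurfer_license=None):
    """Assemble the argv list for a gbids run.

    Hypothetical helper mirroring the CLI shown in the Examples section;
    it only builds the argument list, it does not run gbids.
    """
    if level not in ("participant", "group"):
        raise ValueError("level must be 'participant' or 'group'")
    cmd = ["python", "gbids.py", docker_image, input_dir, output_dir]
    if freesurfer_license:
        # -FL passes the FreeSurfer license file required by some BIDS apps
        cmd += ["-FL", freesurfer_license]
    cmd += [level, "-s", session, "-N"]
    if transfer:
        cmd.append("-F")  # TRANSFER mode: ship local data to the job nodes
    cmd.append("-vvvv")
    return cmd
```

The resulting list can be passed directly to `subprocess.run`, which avoids shell-quoting issues with paths containing spaces.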
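When deploying FILE-SERVER mode across several environments, templating the `gbids_user_data` snippet keeps the NFS address, export directory, and mount point in one place. This is a minimal sketch, assuming the same Ubuntu/NFS layout as the gc3pie.conf example above; `render_user_data` is a hypothetical helper, not part of gbids or GC3Pie.

```python
# Template mirroring the gbids_user_data snippet from the FILE-SERVER example.
NFS_USER_DATA = """\
gbids_user_data = !/bin/bash
apt-get update
apt-get install -y nfs-common
mkdir -p {mount_point}
chown -R 1000:1000 {mount_point}
mount -t nfs {nfs_ip}:{export_dir} {mount_point}
"""

def render_user_data(nfs_ip, export_dir="/data",
                     mount_point="/home/ubuntu/mnt/"):
    """Fill in the NFS placeholders; append the result to gc3pie.conf."""
    return NFS_USER_DATA.format(nfs_ip=nfs_ip, export_dir=export_dir,
                                mount_point=mount_point)
```

For example, `render_user_data("10.0.0.5")` produces the snippet with `mount -t nfs 10.0.0.5:/data /home/ubuntu/mnt/` as its last line, ready to paste at the end of gc3pie.conf.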