c4science.ch
============

* Ansible playbook for git infrastructure on OpenStack

INSTALL
-------

* Dependencies. You need Ansible >= 2.0

```
cd ~
git clone https://github.com/ansible/ansible.git
cd ansible
git checkout v2.6.9
git submodule update --init --recursive
sudo python setup.py install
sudo pip install shade python-novaclient rfc3986 argcomplete
sudo activate-global-python-argcomplete
```

* Repo

```
git clone repourl c4science.ch
cd c4science.ch
git submodule update --init --recursive
```

USAGE
-----

* Ansible is driven by the `deploy.py` wrapper script

```
./deploy.py create       # Create instance and update local inventory
./deploy.py init         # Apply common recipes
./deploy.py update       # Apply all recipes but common
./deploy.py update-phab  # Update Phabricator on app/phd to the latest stable
```

* After you run `./deploy.py create` for the first time, you have to manually
  change the default SSH port on the jump server to 222 in the
  /etc/ssh/sshd_config file and reload SSH with `service sshd reload`
* You must configure SSH so that connections go through the jump server

~/.ssh/config
```
Host EXTERNAL_IP
  HostName c4science.ch-jump01
  User centos
  Port 222
  StrictHostKeyChecking no
  UserKnownHostsFile=/dev/null

Host 10.0.*
  User centos
  ProxyCommand ssh c4science.ch-jump01 -p 222 -W %h:%p
  StrictHostKeyChecking no
  UserKnownHostsFile=/dev/null
```

```
echo 'EXTERNAL_IP c4science.ch-jump01' >> /etc/hosts
```

* You must create floating IPs, as sketched below
  * One on region_main, and put it in external_ip in vars/main.yml
  * One on region_back, and put it in backup_ip in vars/main.yml
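For example, the floating IPs can be allocated from the OpenStack CLI before being written to vars/main.yml. This is only a sketch: the region and network names below are placeholders, adapt them to your project.

```
# Allocate one floating IP per region (region/network names are placeholders)
openstack --os-region-name <main-region> floating ip create <external-network>
openstack --os-region-name <backup-region> floating ip create <external-network>

# Record the returned addresses in vars/main.yml:
#   external_ip: <ip from the main region>
#   backup_ip:   <ip from the backup region>
```
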
* It's better to copy the SSH host keys across the app servers, so they are all the same

```
rsync -av c4science-app00:/etc/ssh/ssh_host_*_key /tmp/
rsync -av /tmp/ssh_host_*_key c4science-app0X:/etc/ssh/
ssh c4science-app0X 'service sshd_phabricator restart'
```

* You have to copy the Shibboleth certificates across instances from app00

```
rsync -av c4science-app00:/etc/shibboleth/sp-*.pem /tmp/.
rsync -av /tmp/sp-*.pem c4science-app01:/etc/shibboleth/.
ssh c4science-app01 'service shibd restart'
ssh c4science-app00 'openssl x509 -noout \
  -fingerprint -sha1 -in /etc/shibboleth/sp-cert.pem'
ssh c4science-app01 'openssl x509 -noout \
  -fingerprint -sha1 -in /etc/shibboleth/sp-cert.pem'
rm /tmp/sp-*.pem
```

* Create SSH keys without a password on the app and phd servers, then
  copy the public keys to the backup server (root user)
* Install the public dashboard
  * Create a new dashboard
  * Every panel must be accessible to Public
  * The dashboard must be accessible to Public
  * Install the dashboard for all users
* Install the logged-in dashboard
  * Create a new dashboard
  * Get the PHID of the new dashboard

```
mysql> use phabricator_dashboard;
mysql> select name,phid from dashboard;
```

  * Install the dashboard

```
mysql> insert into dashboard_install
  (installerPHID, objectPHID, applicationClass, dashboardPHID, dateCreated, dateModified)
  values ('PHID-USER-wwnpcpwveuiz7uts3oin', 'dashboard:default_loggedin',
  'PhabricatorHomeApplication', 'PHID-DSHB-j64cvog4impmcgb7e3sa', 0, 0);
```

Jenkins installation and configuration
--------------------------------------

* See docker/README
* You'll also have to create an OAuth server in Phabricator with the Redirect URI
  http://jenkins.c4science.ch/securityRealm/finishLogin and the Client ID and
  Secret configured in the Jenkins config.xml file

SCALING UP
----------

### Database

* Add a database node in tasks/create-instances.yml by adding a numbered item
  to both the os_server and add_host actions, as in the patch example below

```
diff --git a/tasks/create-instances.yml b/tasks/create-instances.yml
index 3037cc0..a6ac097 100644
--- a/tasks/create-instances.yml
+++ b/tasks/create-instances.yml
@@ -79,6 +79,7 @@
     - 0
     - 1
     - 2
+    - 3

 - add_host:
     name: "{{ openstackdb.results[item].openstack.private_v4 }}"
@@ -89,6 +90,7 @@
     - 0
     - 1
     - 2
+    - 3

 - name: Create Monitoring instance
   os_server:
```

* Create the instance: `./deploy.py create -t conf-dbs`
* Run init playbook: `./deploy.py init`
* Check that the node joined mysql replication: `mysql -e "SHOW SLAVE STATUS\G" | grep Running`

### App (Phabricator)

* Add an app node in tasks/create-instances.yml by adding a numbered item to
  both the os_server and add_host actions
* Create the instance: `./deploy.py create -t conf-app`
* Run init playbook: `./deploy.py init -t conf-lbs -t conf-app`

### Scaling down

* Remove the instance from the configuration file tasks/create-instances.yml
* Remove the instance from the inventory file manually
* Run init playbook: `./deploy.py init`
* Check that all services are running correctly
* Stop the instance with: `nova stop <instance>`
* Optionally delete the instance: `nova delete <instance>`
* The volume is still available, and can be reused

TEST
----

* Some benchmarking examples:

```
## GIT Read
cd /tmp
parallel -j 10 git clone ssh://git@c4science.ch:2222/diffusion/TEST/test.git \
  -- $(for i in $(seq 20); do echo test$i; done) 1> /dev/null
```

```
## GIT Write sequential
cd /tmp
git clone ssh://git@c4science.ch:2222/diffusion/TEST/test.git
for i in {1..10}; do
  time sh -c "echo 'test' >> README.md; git commit -am 'test'; git push" &>/dev/null
done
```

```
## Test multiple push to same repo
REPO=$(echo {A..Z})

# Clone
cd /tmp
for i in $REPO; do
  git clone ssh://git@c4science.ch:2222/diffusion/TEST$i/test.git test$i
done

# GIT test lock
parallel -i -j 5 sh -c '
  cd test{}; git pull --no-edit; echo "* TEST" >> README.md; git commit -am "test"; git push' \
  -- $(echo $REPO)
```

```
## HTTP
ab -C phsid:COOK -C phusr:admin -n 1000 \
  -c 10 https://c4science.ch/diffusion/TEST/repository/master/
```
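In the HTTP example above, `COOK` and `admin` stand for a real Phabricator session cookie and username. A minimal sketch of filling them in (the values are assumptions, take them from a logged-in browser session):

```
## HTTP benchmark with a real session (placeholder values)
PHSID='<value of the phsid cookie from your browser>'
PHUSR='<your Phabricator username>'
ab -C "phsid:${PHSID}" -C "phusr:${PHUSR}" -n 1000 \
  -c 10 https://c4science.ch/diffusion/TEST/repository/master/
```
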
DEV
---

* You can use Vagrant to develop on a single VirtualBox instance locally

```
cd utils
vagrant up
vagrant provision
```

* NB: you need Vagrant >= 1.8.0
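A typical edit-test loop with the Vagrant box might look like the sketch below; re-provisioning simply re-applies the playbook to the running instance.

```
cd utils
vagrant up            # boot and provision the VirtualBox instance
# ...edit roles/tasks locally...
vagrant provision     # re-apply the playbook to the running box
vagrant ssh           # inspect the result inside the instance
vagrant destroy -f    # throw the box away when done
```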