# c4science.ch

Ansible playbook for git infrastructure on OpenStack.
## INSTALL
- Dependencies: you need Ansible >= 2.0
```
cd ~
git clone https://github.com/ansible/ansible.git
cd ansible
git submodule update --init --recursive
sudo python setup.py install
sudo pip install shade python-novaclient
```
- Repo
```
git clone repourl c4science.ch
cd c4science.ch
git submodule update --init --recursive
```
## USAGE
- How to use:

```
make status  # list instances
make up      # create instances
make clean   # destroy instances
```
- You must configure SSH so the connections go through the jump server:
`~/.ssh/config`:

```
Host EXTERNAL_IP
  HostName c4science.ch-jump01
  User centos
  StrictHostKeyChecking no
  UserKnownHostsFile=/dev/null

Host 10.0.*
  User centos
  ProxyCommand ssh c4science.ch-jump01 -W %h:%p
  StrictHostKeyChecking no
  UserKnownHostsFile=/dev/null
```
```
echo 'EXTERNAL_IP c4science.ch-jump01' >> /etc/hosts
```
- You must create floating IPs (see the sketch below):
  - One on region_main and put it in `external_ip` in vars/main.yml
  - One on region_back and put it in `backup_ip` in vars/main.yml
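
A possible way to allocate them (a sketch; the `nova` CLI, the pool name `public`, and the region name placeholders are assumptions to adapt to your OpenStack project):

```
# Allocate one floating IP in each region (pool name "public" is an assumption)
nova --os-region-name <region_main> floating-ip-create public
nova --os-region-name <region_back> floating-ip-create public
```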
- You must create a SWITCHengines bucket:

```
./s3cmd mb s3://phabricator
```
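
If s3cmd has no credentials yet, set it up interactively first (a sketch; use the S3 access key, secret, and endpoint your SWITCHengines account provides):

```
# One-time interactive setup; writes ~/.s3cfg
./s3cmd --configure
```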
- You have to copy the SSH host keys to all app servers, so they are all the same:
```
rsync -av c4science-app01:/etc/ssh/ssh_host_*_key /tmp/
rsync -av /tmp/ssh_host_*_key c4science-app0X:/etc/ssh/
ssh c4science-app0X 'service sshd_phabricator restart'
```
- You have to copy the Shibboleth certificate across instances from app00:
```
rsync -av c4science-app00:/etc/shibboleth/sp-*.pem /tmp/.
rsync -av /tmp/sp-*.pem c4science-app01:/etc/shibboleth/.
ssh c4science-app01 'service shibd restart'
ssh c4science-app00 'openssl x509 -noout -fingerprint -sha1 -in /etc/shibboleth/sp-cert.pem'
ssh c4science-app01 'openssl x509 -noout -fingerprint -sha1 -in /etc/shibboleth/sp-cert.pem'
rm /tmp/sp-*.pem
```
- Create an SSH key without a password on app00 and copy the public key to the backup server (root user)
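
A minimal sketch of that step (the `<backup-server>` name is a placeholder, not from this repo):

```
# Generate a passwordless key on app00...
ssh c4science-app00 "ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa"
# ...and append its public half to root's authorized_keys on the backup server
ssh c4science-app00 'cat ~/.ssh/id_rsa.pub' | \
  ssh root@<backup-server> 'mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys'
```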
- Install the public dashboard:
  - Create a new dashboard
  - Every panel must be accessible to "Public"
  - The dashboard must be accessible to "Public"
  - Install the dashboard for all users
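
The "install for all users" step can presumably also be done in MySQL, mirroring the logged-in dashboard install shown below; the `dashboard:default` objectPHID and the placeholder PHIDs are assumptions to adapt:

```
mysql> use phabricator_dashboard;
mysql> insert into dashboard_install
         (installerPHID, objectPHID, applicationClass, dashboardPHID, dateCreated, dateModified)
       values ('PHID-USER-...', 'dashboard:default',
               'PhabricatorHomeApplication', 'PHID-DSHB-...', 0, 0);
```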
- Install the logged-in dashboard:
  - Create a new dashboard
  - Get the PHID of the new dashboard:
```
mysql> use phabricator_dashboard;
mysql> select name, phid from dashboard;
```
  - Install the dashboard:
```
mysql> insert into dashboard_install
         (installerPHID, objectPHID, applicationClass, dashboardPHID, dateCreated, dateModified)
       values ('PHID-USER-wwnpcpwveuiz7uts3oin', 'dashboard:default_loggedin',
               'PhabricatorHomeApplication', 'PHID-DSHB-j64cvog4impmcgb7e3sa', 0, 0);
```
### Build the Jenkins slave docker images
- Build the image on your local machine
```
mkdir /tmp/docker
cp roles/ci/templates/jenkins-slave-centos.docker /tmp/docker/Dockerfile
cd /tmp/docker
docker build --rm=true -t jenkins-centos:7 .
docker save jenkins-centos:7 > ../jenkins-centos7.tar
```
- Do it for every Dockerfile in roles/ci/templates/ (see the loop sketch below)
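
A possible loop to automate that (a sketch; deriving the image name from the template filename is an assumption, the manual build above uses its own tag):

```
# Build and export one image per *.docker template in roles/ci/templates/
for f in roles/ci/templates/*.docker; do
  name=$(basename "$f" .docker)
  mkdir -p "/tmp/docker-$name"
  cp "$f" "/tmp/docker-$name/Dockerfile"
  (cd "/tmp/docker-$name" && \
    docker build --rm=true -t "$name" . && \
    docker save "$name" > "/tmp/$name.tar")
done
```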
- Copy the tar to the CoreOS machine and import the image
```
docker load < jenkins-centos7.tar
docker images
docker run -i -t jenkins-centos:7 /bin/bash
```
### Nagios monitoring of CoreOS
- Build the image
```
mkdir /tmp/docker
cp roles/ci/templates/jenkins-nagios.docker /tmp/docker/Dockerfile
cp roles/ci/templates/*nrpe* /tmp/docker/
cp roles/ci/templates/gmond.conf /tmp/docker/
cp roles/common/templates/check_mem.sh /tmp/docker/
cd /tmp/docker
docker build --rm=true -t jenkins-nagios .
docker save jenkins-nagios > ../jenkins-nagios.tar
```
- Install and run the Nagios image after copying it to the server
```
docker load < jenkins-nagios.tar
docker run --restart=always --pid=host --net=host \
  --privileged=true -d -i -t jenkins-nagios
```
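
A quick sanity check from the monitoring host (a sketch; the plugin path is the usual CentOS 64-bit location, and it assumes the container exposes NRPE on its default port 5666):

```
# Prints the NRPE version if the daemon in the container is reachable
/usr/lib64/nagios/plugins/check_nrpe -H <coreos-ip>
```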
## SCALING UP
### Database
- Add a database node in tasks/create-instances.yml by adding a numbered item to both the os_server and add_host actions
- Patch example:

```
diff --git a/tasks/create-instances.yml b/tasks/create-instances.yml
index 3037cc0..a6ac097 100644
--- a/tasks/create-instances.yml
+++ b/tasks/create-instances.yml
@@ -79,6 +79,7 @@
       - 0
       - 1
       - 2
+      - 3
 
 - add_host:
     name: "{{ openstackdb.results[item].openstack.private_v4 }}"
@@ -89,6 +90,7 @@
       - 0
       - 1
       - 2
+      - 3
 
 - name: Create Monitoring instance
   os_server:
```
- Run the init playbook: `make init`
- Check that the node joined the Galera replication: `mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"`
- An even number of DB instances is not recommended; you can use the arbitrator to have one more node by running `make arbitrator` on the monit node (see the sketch below)
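
For reference, a Galera arbitrator is a `garbd` process that joins the cluster and votes in the quorum without storing data; a sketch of what the make target presumably runs (the node address and cluster name are placeholders):

```
# Join the cluster as an arbitrator; adapt the address and group name
garbd --address gcomm://<db-node-ip>:4567 --group <wsrep_cluster_name> --daemon
```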
### Web/storage
- Add an app node in tasks/create-instances.yml by adding a numbered item to both the os_server and add_host actions
- Run the init playbook: `make init`
- Check that gluster is running: `gluster volume info`
- Probe the new node from another running instance before adding the brick:

```
gluster peer probe <new ip>
```
- Add the brick, from another instance:

```
gluster volume add-brick c4science replica <n> <new ip>:/var/brick/gv0 force
```
- Run the init playbook again: `make init`
### Scaling down
- Stop the instance: `nova stop <instanceid>`
- Remove the instance from the configuration file tasks/create-instances.yml
- Run the init playbook: `make init`
- Optionally, delete the instance: `nova delete <instanceid>`
- The volume is still available and can be reused (see the sketch below)
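
A possible way to reattach it (a sketch using the legacy nova volume subcommands of that era; the IDs are placeholders):

```
# List volumes, then attach a detached one to another instance
nova volume-list
nova volume-attach <instanceid> <volumeid>
```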
## TODO
- Shibboleth auth
- HAProxy redundancy using keepalived: https://raymii.org/s/articles/Building_HA_Clusters_With_Ansible_and_Openstack.html
## TEST
- Replication information:

```
mysql -e "SHOW STATUS LIKE 'wsrep_cluster%';"
```
- Some benchmarking examples:
```
## GIT Read
cd /tmp
parallel -j 10 git clone ssh://git@c4science.ch:2222/diffusion/TEST/test.git \
  -- $(for i in $(seq 20); do echo test$i; done) 1> /dev/null
```
```
## GIT Write sequential
cd /tmp
git clone ssh://git@c4science.ch:2222/diffusion/TEST/test.git
cd test  # work inside the fresh clone
for i in {1..10}; do
  time sh -c "echo 'test' >> README.md; git commit -am 'test'; git push" &>/dev/null
done
```
```
## Conduit API (create repo from remote)
REPO=$(echo {A..Z})

# Create some repositories
for i in $REPO; do
  echo "{\"name\":\"test\", \"callsign\": \"TEST$i\", \"vcs\": \"git\", \"uri\": \"https://git.epfl.ch/repo/repo-test.git\"}" \
    | arc call-conduit repository.create
done

# Clone them (doesn't work)
#cd /tmp
#for i in $REPO; do
#  git clone ssh://git@c4science.ch:2222/diffusion/TEST$i/test.git test$i
#done

# Test commit and push
#parallel -i -j 10 sh -c 'cd test{};
#  echo "TEST" > README.md;
#  git commit -am "test";
#  git push' -- $(echo $REPO)
```
```
## GIT test lock
parallel -i -j 5 sh -c 'cd test{};
  git pull --no-edit;
  git commit -am "merge conflicts";
  echo "* TEST" >> README.md;
  git commit -am "test";
  git push || git pull --no-edit;
  git push' -- $(seq 50)
```
```
## HTTP
ab -C phsid=COOK -C phusr=admin -n 1000 \
  -c 10 https://c4science.ch/diffusion/TEST/repository/master/
```