diff --git a/README.md b/README.md
index 57321ed..4897435 100644
--- a/README.md
+++ b/README.md
@@ -1,274 +1,285 @@
c4science.ch
=========
* Ansible playbook for git infrastructure on OpenStack

INSTALL
-------
* Dependencies: you need Ansible >= 2.0
```
cd ~
git clone https://github.com/ansible/ansible.git
cd ansible
git submodule update --init --recursive
sudo python setup.py install
sudo pip install shade python-novaclient
```
* Repo
```
git clone repourl c4science.ch
cd c4science.ch
git submodule update --init --recursive
```

USAGE
-----
* How to use:
```
make status  # list instances
make up      # create instances
make clean   # destroy instances
```
* You must configure SSH so that connections go through the jump server, in `~/.ssh/config`:
```
Host EXTERNAL_IP
  HostName c4science.ch-jump01
  User centos
  StrictHostKeyChecking no
  UserKnownHostsFile=/dev/null
Host 10.0.*
  User centos
  ProxyCommand ssh c4science.ch-jump01 -W %h:%p
  StrictHostKeyChecking no
  UserKnownHostsFile=/dev/null
```
```
echo 'EXTERNAL_IP c4science.ch-jump01' >> /etc/hosts
```
* You must create floating IPs
  * One on region_main and put it in external_ip in vars/main.yml
  * One on region_back and put it in backup_ip in vars/main.yml
* You must create a Switch Engines bucket
  * see https://help.switch.ch/engines/documentation/s3-like-object-storage/
```
./s3cmd mb s3://phabricator
```
* You have to copy the SSH host keys for the app servers, so they are all the same
```
rsync -av c4science-app01:/etc/ssh/ssh_host_*_key /tmp/
rsync -av /tmp/ssh_host_*_key c4science-app0X:/etc/ssh/
ssh c4science-app0X 'service sshd_phabricator restart'
```
* You have to copy the Shibboleth certificate across instances from app00
```
rsync -av c4science-app00:/etc/shibboleth/sp-*.pem /tmp/.
rsync -av /tmp/sp-*.pem c4science-app01:/etc/shibboleth/.
ssh c4science-app01 'service shibd restart'
ssh c4science-app00 'openssl x509 -noout -fingerprint -sha1 -in /etc/shibboleth/sp-cert.pem'
ssh c4science-app01 'openssl x509 -noout -fingerprint -sha1 -in /etc/shibboleth/sp-cert.pem'
rm /tmp/sp-*.pem
```
* Create an SSH key without a passphrase on app00 and copy the public key to the backup server (root user)
* Install the public dashboard
  * Create a new dashboard
  * Every panel must be accessible to Public
  * The dashboard must be accessible to Public
  * Install the dashboard for all users
* Install the logged-in dashboard
  * Create a new dashboard
  * Get the PHID of the new dashboard
```
mysql> use phabricator_dashboard;
mysql> select name,phid from dashboard;
```
  * Install the dashboard
```
mysql> insert into dashboard_install (installerPHID, objectPHID, applicationClass, dashboardPHID, dateCreated, dateModified) values ('PHID-USER-wwnpcpwveuiz7uts3oin', 'dashboard:default_loggedin', 'PhabricatorHomeApplication', 'PHID-DSHB-j64cvog4impmcgb7e3sa', 0, 0);
```
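* Note: the `installerPHID` and `dashboardPHID` values above are instance-specific. As a rough sketch (assuming the stock Phabricator schema and an account named `admin`), the user PHID can be looked up the same way as the dashboard PHID:
```
mysql> use phabricator_user;
mysql> select userName,phid from user where userName = 'admin';
```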
Build the Jenkins slave Docker images
-------------------------------------
* Build the image on your local machine
```
mkdir /tmp/docker
cp roles/ci/templates/jenkins-slave-centos.docker /tmp/docker/Dockerfile
cd /tmp/docker
docker build --rm=true -t jenkins-centos:7 .
docker save jenkins-centos:7 > ../jenkins-centos7.tar
```
* Do it for every Dockerfile in roles/ci/templates/, as sketched below
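* A minimal sketch of that loop (it assumes each `*.docker` template builds on its own and that naming the image after the file is acceptable; the Nagios image below also needs extra files next to its Dockerfile):
```
for f in roles/ci/templates/*.docker; do
  name=$(basename "$f" .docker)    # e.g. jenkins-slave-centos
  builddir=$(mktemp -d)
  cp "$f" "$builddir/Dockerfile"
  docker build --rm=true -t "$name" "$builddir"
  docker save "$name" > "/tmp/$name.tar"
done
```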
* Copy the tar to the CoreOS machine and import the image
```
docker load < jenkins-centos7.tar
docker images
docker run -i -t jenkins-centos:7 /bin/bash
```

## Nagios monitoring of CoreOS
* Build the image
```
mkdir /tmp/docker
cp roles/ci/templates/jenkins-nagios.docker /tmp/docker/Dockerfile
cp roles/ci/templates/*nrpe* /tmp/docker/
cp roles/ci/templates/gmond.conf /tmp/docker/
cp roles/common/templates/check_mem.sh /tmp/docker
cd /tmp/docker
docker build --rm=true -t jenkins-nagios .
docker save jenkins-nagios > ../jenkins-nagios.tar
```
* Install and run the Nagios image after copying it to the server
```
docker load < jenkins-nagios.tar
docker run --restart=always --pid=host --net=host \
  --privileged=true -d -i -t jenkins-nagios
```

SCALING UP
----------
### Database
* Add a database node in tasks/create-instances.yml by adding a numbered item to both the os_server and add_host actions. Patch example:
```
diff --git a/tasks/create-instances.yml b/tasks/create-instances.yml
index 3037cc0..a6ac097 100644
--- a/tasks/create-instances.yml
+++ b/tasks/create-instances.yml
@@ -79,6 +79,7 @@
     - 0
     - 1
     - 2
+    - 3

 - add_host:
     name: "{{ openstackdb.results[item].openstack.private_v4 }}"
@@ -89,6 +90,7 @@
     - 0
     - 1
     - 2
+    - 3

 - name: Create Monitoring instance
   os_server:
```
* Run init playbook: `make init`
* Check that the node joined the Galera cluster: `mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"`
* An even number of DB instances is not recommended; you can use the Galera arbitrator to get one more node by running `make arbitrator` on the monit node

### Web/storage
* Add an app node in tasks/create-instances.yml by adding a numbered item to both the os_server and add_host actions
* Run init playbook: `make init`
* Check that Gluster is running: `gluster volume info`
* Probe the new node from another running instance before adding the brick
```
gluster peer probe 
```
* Add the brick, from another instance (a concrete sketch follows this list)
```
gluster volume add-brick c4science replica :/var/brick/gv0 force
```
* Run init playbook again: `make init`
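* For illustration only, a sketch of the two commands above with the blanks filled in (the host name `c4science-app02` and the `replica 3` count are assumptions; adjust them to your deployment):
```
gluster peer probe c4science-app02
gluster volume add-brick c4science replica 3 c4science-app02:/var/brick/gv0 force
```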
### Scaling down
* Stop the instance with: `nova stop `
* Remove the instance from the configuration file tasks/create-instances.yml
* Run init playbook: `make init`
* Optionally delete the instance: `nova delete `
* The volume is still available, and can be reused

TODO
----
* Shibboleth auth
* HAProxy redundancy using keepalived, see https://raymii.org/s/articles/Building_HA_Clusters_With_Ansible_and_Openstack.html

TEST
----
* Replication information
```
mysql -e "SHOW STATUS LIKE 'wsrep_cluster%';"
```
* Some benchmarking examples:
```
## GIT Read
cd /tmp
parallel -j 10 git clone ssh://git@c4science.ch:2222/diffusion/TEST/test.git \
  -- $(for i in $(seq 20); do echo test$i; done) 1> /dev/null
```

## GIT Write sequential
```
cd /tmp
git clone ssh://git@c4science.ch:2222/diffusion/TEST/test.git
cd test
for i in {1..10}; do
  time sh -c "echo 'test' >> README.md; git commit -am 'test'; git push" &>/dev/null
done
```

```
## Conduit API (create repo from remote)
REPO=$(echo {A..Z})

# Create some repositories
for i in $REPO; do
  echo "{\"name\":\"test\", \"callsign\": \"TEST$i\", \"vcs\": \"git\", \"uri\": \"https://git.epfl.ch/repo/repo-test.git\"}" \
    | arc call-conduit repository.create
done

# Clone them (doesn't work)
#cd /tmp
#for i in $REPO; do
#  git clone ssh://git@c4science.ch:2222/diffusion/TEST$i/test.git test$i
#done

# Test commit and push
#parallel -i -j 10 sh -c 'cd test{};
#  echo "TEST" > README.md;
#  git commit -am "test";
#  git push' -- $(echo $REPO)
```

```
## GIT test lock
parallel -i -j 5 sh -c 'cd test{};
  git pull --no-edit;
  git commit -am "merge conflicts";
  echo "* TEST" >> README.md;
  git commit -am "test";
  git push || git pull --no-edit;
  git push' -- $(seq 50)
```

```
## HTTP
ab -C phsid:COOK -C phusr:admin -n 1000 \
  -c 10 https://c4science.ch/diffusion/TEST/repository/master/
```
+
+DEV
+---
+
+* You can use Vagrant to develop on a single VirtualBox instance locally
+```
+cd utils
+vagrant up
+vagrant provision
+```
+* NB: You need Vagrant >= 1.8.0
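+* For example, to re-run only the provisioning step or open a shell on the dev box (the machine name is the IP defined in utils/Vagrantfile):
+```
+vagrant provision 10.10.0.2
+vagrant ssh 10.10.0.2
+```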
diff --git a/books/main_dev.yml b/books/main_dev.yml
new file mode 100644
index 0000000..6174989
--- /dev/null
+++ b/books/main_dev.yml
@@ -0,0 +1,36 @@
+---
+#- name: debug
+#  hosts: 10.10.0.2
+#  tasks:
+#    - debug: var=hostvars['10.10.0.2']
+
+- name: Configure dev box
+  hosts: 10.10.0.2
+  vars_files:
+    - "../vars/test.yml"
+  vars:
+    proxy: no
+    user: vagrant
+  sudo: yes
+  roles:
+    - role: ../roles/common
+    - role: ../roles/swap/roles/swap
+    - role: ../roles/haproxy
+    - role: ../roles/jump
+    - role: ../roles/galera
+    - role: ../roles/apache
+      apache_config: phabricator.conf
+    - role: ../roles/glusterfs
+    - role: ../roles/fs
+    - role: ../roles/phabricator
+    - role: ../roles/shibboleth
+  tasks:
+    - include: ../roles/phabricator/tasks/packages.yml
+    - include: ../roles/phabricator/tasks/users.yml
+    - include: ../roles/phabricator/tasks/glusterfs.yml
+    - include: ../roles/phabricator/tasks/install.yml myconfig=../roles/phabricator/templates/myconfig.conf.php
+    - include: ../roles/phabricator/tasks/daemons.yml
+      phd_init: ../roles/phabricator/templates/phd_init
+  handlers:
+    - include: ../handlers/main.yml
+
diff --git a/roles/common/templates/nrpe_local.cfg b/roles/common/templates/nrpe_local.cfg
index dba780e..d2598c2 100644
--- a/roles/common/templates/nrpe_local.cfg
+++ b/roles/common/templates/nrpe_local.cfg
@@ -1,23 +1,25 @@
 command[check_ssh]=/usr/lib64/nagios/plugins/check_ssh -H 127.0.0.1
 command[check_ssh_phab]=/usr/lib64/nagios/plugins/check_ssh -H 127.0.0.1 -p {{ vcs_port }}
 command[check_disk_vda]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /dev/vda1
 command[check_disk_vdb]=/usr/lib64/nagios/plugins/check_disk -w 20% -c 10% -p /dev/vdb
 command[check_disk_repo]=/usr/lib64/nagios/plugins/check_disk -X ext4 {{ repositories_path }}
 command[check_http_phab]=/usr/lib64/nagios/plugins/check_http -I {{ inventory_hostname }} -H {{ domain }} -u /status/ -r ALIVE
 command[check_http_ex_phab]=/usr/lib64/nagios/plugins/check_http -H {{ domain }} -e 'HTTP/1.1 302 Found'
 command[check_http_ex_phab_ssl]=/usr/lib64/nagios/plugins/check_http -H {{ domain }} --ssl -u /status/ -r ALIVE
 command[check_http_jenkins]=/usr/lib64/nagios/plugins/check_http -H jenkins.{{ domain }} --ssl
+{% if hostvars['127.0.0.1']['openstackjump'] is defined %}
 command[check_mysql_remote]=/usr/lib64/nagios/plugins/check_mysql -H {{ hostvars['127.0.0.1']['openstackjump'].results[0]['openstack']['private_v4'] }} -u {{ mysql_app_user }} -p {{ mysql_app_pass }}
+{% endif %}
 command[check_mysql_local]=/usr/lib64/nagios/plugins/check_mysql -u root
 command[check_phd]=/usr/lib64/nagios/plugins/check_procs -C 'php' -a {{ phabricator_path }}phabricator/scripts/daemon/phd-daemon
 command[check_gluster]=/usr/lib64/nagios/plugins/check_procs -C 'glusterd' -c 1
 command[check_gmond]=/usr/lib64/nagios/plugins/check_procs -C 'gmond' -c 1
 command[check_gmetad]=/usr/lib64/nagios/plugins/check_procs -C 'gmetad' -c 1
 command[check_httpd]=/usr/lib64/nagios/plugins/check_procs -C 'httpd' -c 1:
 command[check_java_jenkins]=/usr/lib64/nagios/plugins/check_procs -C 'java' -c 1
 command[check_shibd]=/usr/lib64/nagios/plugins/check_procs -C 'shibd' -c 1
 command[check_shib_status]=/usr/lib64/nagios/plugins/check_http -H localhost -u /Shibboleth.sso/Status -R ''
 command[check_postfix_master]=/usr/lib64/nagios/plugins/check_procs -C master -a '-w' -c 1
 command[check_postfix_pickup]=/usr/lib64/nagios/plugins/check_procs -C pickup -c 1
 command[check_postfix_qmgr]=/usr/lib64/nagios/plugins/check_procs -C qmgr -c 1
 command[check_mem]=/usr/local/bin/check_mem.sh -w 95 -c 98 -W 50 -C 90
diff --git a/utils/Vagrantfile b/utils/Vagrantfile
index 5c06094..1950bbf 100644
--- a/utils/Vagrantfile
+++ b/utils/Vagrantfile
@@ -1,84 +1,50 @@
 Vagrant.configure(2) do |config|
   config.vm.box_url = "http://cloud.centos.org/centos/7/vagrant/x86_64/images/CentOS-7-x86_64-Vagrant-1603_01.VirtualBox.box"
   config.vm.box = "vagrant-centos-7.1"
-
-  config.vm.define "172.17.177.21" do |machine|
-    machine.vm.hostname = "c4science-jump00"
-    machine.vm.network "private_network", ip: "172.17.177.21"
-  end
-
-  config.vm.define "172.17.177.22" do |machine|
-    machine.vm.hostname = "c4science-app00"
-    machine.vm.network "private_network", ip: "172.17.177.22"
+  config.vm.provider "virtualbox" do |v|
+    v.memory = 2048
+    v.cpus = 2
   end
-
-  config.vm.define "172.17.177.24" do |machine|
-    machine.vm.hostname = "c4science-db00"
-    machine.vm.network "private_network", ip: "172.17.177.24"
-  end
-
-  #config.vm.define "172.17.177.26" do |machine|
-  #  machine.vm.hostname = "c4science-monit"
-  #  machine.vm.network "private_network", ip: "172.17.177.26"
-  #end
-
-  config.vm.define "172.17.177.27" do |machine|
-    machine.vm.hostname = "c4science-ci00"
-    machine.vm.network "private_network", ip: "172.17.177.27"
+  config.vm.define "10.10.0.2" do |machine|
+    machine.vm.hostname = "c4science-dev"
+    machine.vm.network "private_network", ip: "10.10.0.2"
     machine.vm.provision :ansible do |ansible|
       ansible.groups = {
-        "lbs" => ["172.17.177.21"],
-        "dbs" => ["172.17.177.24"],
-        "app" => ["172.17.177.22"],
-        "ci" => ["172.17.177.27"],
-        #"monit" => ["172.17.177.26"]
+        "lbs" => ["10.10.0.2"],
+        "dbs" => ["10.10.0.2"],
+        "app" => ["10.10.0.2"],
+        "phd" => ["10.10.0.2"],
+        "fs" => ["10.10.0.2"],
+        "monit" => ["10.10.0.2"],
+        "ci" => ["10.10.0.2"],
+        "ci-slave" => ["10.10.0.2"],
       }
-      ansible.playbook = "main.yml"
+      ansible.playbook = "../books/main_dev.yml"
       ansible.sudo = true
+      ansible.verbose = 'v'
       ansible.extra_vars = {
+        proxy: 'no',
+        http_proxy: '',
        ansible_ssh_user: 'vagrant',
-        domain: "172.17.177.21",
-        external_ip: "172.17.177.21",
-        backup_ip: "127.0.0.1"
+        domain: "10.10.0.2",
+        external_ip: "10.10.0.2",
+        backup_ip: "127.0.0.1",
+        openstackjump: {
+          results: [{
+            openstack: {
+              private_v4: "10.10.0.2"
+            }
+          }]
+        }
       }
       ansible.host_vars = {
-        "172.17.177.21" => {
-          "host_name": "c4science-jump00",
-          "private_ip": "172.17.177.21",
-        },
-        "172.17.177.24" => {
-          "host_name": "c4science-db00",
-          "private_ip": "172.17.177.24",
-        },
-        "172.17.177.22" => {
-          "host_name": "c4science-app00",
-          "private_ip": "172.17.177.22",
+        "10.10.0.2" => {
+          "host_name" => "c4science-dev",
+          "private_ip" => "10.10.0.2"
         },
-        "172.17.177.27" => {
-          "host_name": "c4science-ci00",
-          "private_ip": "172.17.177.27",
-        },
-        "127.0.0.1" => {
-          "openstackjump" => {
-            "results" => [{
-              "openstack" => {
-                "private_v4" => "172.17.177.21"
-              }
-            }]
-          }
-        }
       }
     end
   end
-  #config.vm.define "c4science-ci-slave00" do |machine|
-  #  machine.vm.hostname = "c4science-ci-slave00"
-  #  machine.vm.network "private_network", ip: "172.17.177.28"
-  #  machine.vm.provision :ansible do |ansible|
-  #    ansible.playbook = "main.yml"
-  #    ansible.groups = { "ci-slave" => ["c4science-ci-slave00"] }
-  #  end
-  #end
-
 end