Phabricator deployment using Ansible for c4science.
README.md
c4science.ch
- Ansible playbook for git infrastructure on openstack
INSTALL
- Dependencies. You need ansible >= 2.0
```
cd ~
git clone https://github.com/ansible/ansible.git
cd ansible
git checkout v2.6.9
git submodule update --init --recursive
sudo python setup.py install
sudo pip install shade python-novaclient rfc3986 argcomplete
sudo activate-global-python-argcomplete
```
- Repo
```
git clone repourl c4science.ch
cd c4science.ch
git submodule update --init --recursive
```
USAGE
- Ansible is instrumented using the deploy.py script
```
./deploy.py create       # Create instance and update local inventory
./deploy.py init         # Apply common recipes
./deploy.py update       # Apply all recipes but common
./deploy.py update-phab  # Update Phabricator on app/phd to the latest stable
```
- After running `./deploy.py create` for the first time, disable SELinux with `setenforce 0`, manually change the default SSH port to 222 in `/etc/ssh/sshd_config` on the jump server, and reload SSH with `systemctl reload sshd`
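The port change above can be scripted with `sed`. A minimal sketch, shown here against a scratch copy so it is safe to run anywhere; on the real jump server you would target `/etc/ssh/sshd_config` with `sudo` and then reload `sshd`:

```shell
# Demonstrate the sshd_config port rewrite on a scratch copy.
# /tmp/sshd_config.demo is a stand-in for the real /etc/ssh/sshd_config.
printf '#Port 22\n' > /tmp/sshd_config.demo
# Uncomment the Port directive (if commented) and set the new port.
sed -i 's/^#\?Port .*/Port 222/' /tmp/sshd_config.demo
grep '^Port' /tmp/sshd_config.demo
```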
- You must configure SSH so that connections go through the jump server
`~/.ssh/config`:

```
Host EXTERNAL_IP
    HostName c4science.ch-jump01
    User centos
    Port 222
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null

Host 10.0.*
    User centos
    ProxyCommand ssh c4science.ch-jump01 -p 222 -W %h:%p
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
```

Make the jump host name resolvable:

```
echo 'EXTERNAL_IP c4science.ch-jump01' >> /etc/hosts
```
- You must create floating IPs
- One on region_main and put it in external_ip in vars/main.yml
- One on region_back and put it in backup_ip in vars/main.yml
- It's better to copy the SSH host keys from app00 to the other app servers, so they are all identical
```
rsync -av c4science-app00:/etc/ssh/ssh_host_*_key /tmp/
rsync -av /tmp/ssh_host_*_key c4science-app0X:/etc/ssh/
ssh c4science-app0X 'service sshd_phabricator restart'
```
- You have to copy the Shibboleth certificate across instances from app00
```
rsync -av c4science-app00:/etc/shibboleth/sp-*.pem /tmp/.
rsync -av /tmp/sp-*.pem c4science-app01:/etc/shibboleth/.
ssh c4science-app01 'service shibd restart'
ssh c4science-app00 'openssl x509 -noout \
  -fingerprint -sha1 -in /etc/shibboleth/sp-cert.pem'
ssh c4science-app01 'openssl x509 -noout \
  -fingerprint -sha1 -in /etc/shibboleth/sp-cert.pem'
rm /tmp/sp-*.pem
```
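The two `openssl x509 -fingerprint` calls are the check that matters: both app servers must report the same SHA-1 fingerprint after the copy. A local demo of that comparison, using a throwaway self-signed certificate as a stand-in for `sp-cert.pem` (all `/tmp/*.demo` filenames are examples):

```shell
# Generate a throwaway cert standing in for the Shibboleth SP cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/sp-key.demo \
  -out /tmp/sp-cert.demo -days 1 -subj '/CN=demo' 2>/dev/null
# Simulate the rsync to the second app server.
cp /tmp/sp-cert.demo /tmp/sp-cert-copy.demo
# Fingerprints must be identical on both "servers".
fp0=$(openssl x509 -noout -fingerprint -sha1 -in /tmp/sp-cert.demo)
fp1=$(openssl x509 -noout -fingerprint -sha1 -in /tmp/sp-cert-copy.demo)
[ "$fp0" = "$fp1" ] && echo 'fingerprints match'
```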
- Create SSH keys without a passphrase on the app and phd servers, then copy the public keys to the backup server (root user)
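A sketch of that key-generation step (the path and comment are examples, and ed25519 is one reasonable key type; run this on each app/phd server, then install the `.pub` file into root's `authorized_keys` on the backup server, e.g. with `ssh-copy-id`):

```shell
# Generate a passphrase-less key pair (-N '' means no passphrase).
rm -f /tmp/backup_key.demo /tmp/backup_key.demo.pub
ssh-keygen -q -t ed25519 -N '' -f /tmp/backup_key.demo -C 'app-to-backup'
# This is the public key to append to root's authorized_keys on the backup server.
cat /tmp/backup_key.demo.pub
```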
- Install the public dashboard
- Create a new dashboard
- Every panel must be accessible to Public
- The dashboard must be accessible to Public
- Install the dashboard for all users
- Install the logged dashboard
- Create a new dashboard
- Get the PHID of the new dashboard
```
mysql> use phabricator_dashboard;
mysql> select name,phid from dashboard;
```
- Install the dashboard
```
mysql> insert into dashboard_install
         (installerPHID, objectPHID, applicationClass,
          dashboardPHID, dateCreated, dateModified)
       values
         ('PHID-USER-wwnpcpwveuiz7uts3oin', 'dashboard:default_loggedin',
          'PhabricatorHomeApplication', 'PHID-DSHB-j64cvog4impmcgb7e3sa', 0, 0);
```
- Configure Almanac (from the web ui)
- Add devices for the phd server (with SSH keys and SSH/HTTP interfaces on ports 2222/80)
- Trust the keys using: `/srv/phabricator/bin/almanac trust-key --id XXX`
- Copy the keys to `/srv/phabricator/conf/keys/` ??? (shouldn't Almanac do that?)
Jenkins installation and configuration
- See docker/README
- You'll also have to create an OAuth server in Phabricator with the Redirect URI http://jenkins.c4science.ch/securityRealm/finishLogin, and configure the Client ID and Secret in the Jenkins config.xml file
SCALING UP
Database
- Add a database node in tasks/create-instances.yml by adding a numbered item to both the os_server and add_host actions
Patch example
```
diff --git a/tasks/create-instances.yml b/tasks/create-instances.yml
index 3037cc0..a6ac097 100644
--- a/tasks/create-instances.yml
+++ b/tasks/create-instances.yml
@@ -79,6 +79,7 @@
         - 0
         - 1
         - 2
+        - 3
 - add_host:
     name: "{{ openstackdb.results[item].openstack.private_v4 }}"
@@ -89,6 +90,7 @@
         - 0
         - 1
         - 2
+        - 3
 - name: Create Monitoring instance
   os_server:
```
- Create the instance: ./deploy.py create -t conf-dbs
- Run init playbook: ./deploy.py init
- Check that the node joined MySQL replication: `mysql -e "SHOW SLAVE STATUS\G" | grep Running`
App (Phabricator)
- Add an app node in tasks/create-instances.yml by adding a numbered item to both the os_server and add_host actions
- Create the instance: ./deploy.py create -t conf-app
- Run init playbook: ./deploy.py init -t conf-lbs -t conf-app
SCALING DOWN
- Remove the instance from the configuration file tasks/create-instances.yml
- Remove the instance from the inventory file manually
- Run init playbook: ./deploy.py init
- Check that all services are running correctly
- Stop the instance with: nova stop <instanceid>
- Optionally, delete the instance: nova delete <instanceid>
- The volume is still available, and can be reused
TEST
- Some benchmarking examples:
GIT Read

```
cd /tmp
parallel -j 10 git clone ssh://git@c4science.ch:2222/diffusion/TEST/test.git \
  -- $(for i in $(seq 20); do echo test$i; done) 1> /dev/null
```
GIT Write sequential
```
cd /tmp
git clone ssh://git@c4science.ch:2222/diffusion/TEST/test.git
for i in {1..10}; do
  time sh -c "echo 'test' >> README.md; git commit -am 'test'; git push" &>/dev/null
done
```
Test multiple push to same repo

```
REPO=$(echo {A..Z})
# Clone
cd /tmp
for i in $REPO; do
  git clone ssh://git@c4science.ch:2222/diffusion/TEST$i/test.git test$i
done
# GIT test lock
parallel -i -j 5 sh -c '
  cd test{};
  git pull --no-edit;
  echo "* TEST" >> README.md;
  git commit -am "test";
  git push' \
  -- $(echo $REPO)
```
HTTP

```
ab -C phsid:COOK -C phusr:admin -n 1000 \
  -c 10 https://c4science.ch/diffusion/TEST/repository/master/
```
DEV
- You can use Vagrant to develop on a single virtualbox instance locally
```
cd utils
vagrant up
vagrant provision
```
- NB: You need Vagrant >= 1.8.0