# Ansible Taler Playbooks

## Installing dependencies

Depending on your local installation, you might need
to install the following Ansible collection:

```
$ ansible-galaxy collection install community.postgresql
```

## Running the main Playbooks

The canonical playbooks are run via shell scripts in the top-level
directory.

### Main setup (restore.sh, deploy.sh)

The "restore.sh" script extracts the latest database backup from the
backup server. It should be run before the "deploy.sh" script to obtain
the latest version of the database to be restored, unless you are
literally setting up a service from scratch (which should be ultra-rare
in production).

The "deploy.sh" script deploys the latest version of a system on a host.
If you are root@rusty.taler-ops.ch, you may be able to run:

```
$ ./deploy.sh rusty
```

For TOPS production, replace "rusty" with "spec" to use the actual secrets
for the deployment. For this, you first need to decrypt them:

```
$ ./contrib/decrypt inventories/host_vars/spec/prod-secrets.yml.gpg
```

Make sure to NEVER commit the decrypted production secrets to Git.
Instead, if you had to edit them, re-encrypt them to all admins:

```
$ ./contrib/encrypt inventories/host_vars/spec/prod-secrets.yml
```

### sanction-check.sh

This command imports and checks the latest sanction lists:

```
$ ./sanction-check.sh $DEPLOYMENT $LIST
```

where "$DEPLOYMENT" specifies the name of the deployment to
use ("test" or "tops") and $LIST is the name of the sanction
list file on the local disk. Note that this script currently always
uses the "tops" inventory.

NOTE: this should still be further automated.

### Setting up backups (TOPS-only for now)

First run:

```
$ ./extract-borg-key.sh
```

The resulting SSH public key should be added to the borg-account
of the host storing the backup. The playbook contains the target
hostname!

Once the SSH key is deployed and the backup has been initialized
server-side (see admin-logs/pixel/03-borg.txt), start the daily
backups via:

```
$ ./start-borg-backups.sh $DEPLOYMENT
```

This will back up basically everything relevant to the
deployment, **except** the exchange online signing keys. The
backup will in particular include the system configuration
and a full (gzip-compressed) snapshot of the database. Thus,
the backups should also suffice to diagnose problems.

Backups are set to retain daily snapshots of the last 7 days,
weekly snapshots for the last 4 weeks, and monthly snapshots
for the last 6 months.

### Backup (right now)

To run a backup "immediately" (instead of waiting for the regular
daily backups), use:

```
$ ./backup.sh $DEPLOYMENT
```

### Rebooting (into a new kernel)

This should be done via the 'reboot' playbook, which can be
invoked via the reboot.sh script:

```
$ ./reboot.sh $DEPLOYMENT
```

The reboot playbook first stops all Taler services,
then makes a backup, and only then reboots. This should help us
restore to another system in case the host does not come back
online cleanly.
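As a minimal sketch of that ordering only (assuming the stop_services
and backup roles described under "Roles" below; the actual 'reboot'
playbook in this repository is authoritative and may differ in detail),
such a playbook looks roughly like:

```
- hosts: all
  become: true
  tasks:
    # Stop all Taler services before anything else.
    - name: Stop Taler services
      ansible.builtin.include_role:
        name: stop_services

    # Take a final backup so we can restore elsewhere if the reboot fails.
    - name: Back up before rebooting
      ansible.builtin.include_role:
        name: backup

    # Reboot and wait for the host to come back (timeout is illustrative).
    - name: Reboot the host
      ansible.builtin.reboot:
        reboot_timeout: 600
```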
### Testing Locally

With podman and Ansible installed locally, one can run:

```
$ ./test.sh
```

This will begin by building the Containerfile in this repo, which produces
a Debian base with systemd and a passwordless SSH server configured. Then
the container will start, binding port 8022 to 127.0.0.1 on the host.
Finally, the setup playbook will be run against the container via SSH.

## Playbooks

### backup

Runs a backup "right now".

### borg-ssh-export

Exports the SSH public keys needed at the remote host for backups.

### borg-start

Enables the borg backup. Should be run after the SSH public keys
exported via borg-ssh-export have been deployed on the receiving
host.

### pixel-borg

Enables receiving (!) backups from pixel. Adds the public key from
pixel so we accept receiving borg backups from pixel. Note that
pixel still needs to be set up to send the backups.

### reboot

Safely reboots the system by first stopping all Taler services,
then making a backup and only then actually rebooting.

### sanctionlist-check

Imports the latest sanction list and checks all records against
it.

### setup

The main playbook that deploys our entire setup.

## Roles

### ansible_pull

This role sets up an ansible-pull script on the host, as well as a
cron job which runs the ansible-pull script at a regular interval
(see the illustrative sketch at the end of this README).

NOTE: requires local.yml to exist in the root of this repo.

### auditor

Deploys the auditor.

### backup

Runs the backup script, making a borg backup of the database and other
key parts of the system.

### borg-ssh-export

Exports the SSH public key that must be deployed on the host that is
to receive the backup.

### borg-start

Enables the borg backup (see the borg-start playbook above).

### challenger

Deploys the various challenger services for address verification.

### common_packages

Installs the base system packages we need on all hosts.
Sets up the Taler package repository and installs the Taler packages.

### database

Installs the PostgreSQL database.

### exchange

Deploys the Taler exchange.

### exchange-sanctionlist-import

Imports a new sanction list and checks all existing records against it.

### libeufin-nexus

Deploys libeufin-nexus, which connects us to the bank.

### monitoring

Deploys Alloy and Prometheus exporters for host monitoring.

### pixel_borg

Configures the host to receive backups *from* pixel.

### reboot

Reboots the system.

### stop_services

Stops all Taler-related services. Useful for an emergency stop and
used as part of the reboot playbook.

### webserver

Configures the Nginx reverse proxy (main service, not individual subdomains).
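As a rough illustration of what the ansible_pull role installs (the
script path and schedule shown here are placeholders; the role defines
the real ones), the resulting cron job could be described by a task
like:

```
# Illustrative only: the ansible_pull role defines the actual script and schedule.
- name: Run the ansible-pull wrapper script twice per hour
  ansible.builtin.cron:
    name: ansible-pull
    minute: "*/30"
    job: /usr/local/bin/ansible-pull-taler.sh
```

The wrapper script would typically invoke ansible-pull against this
repository, which is why local.yml must exist at the root of the repo.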