commit a882c8087ad27152e5c510109934fc2b835998e3
parent 6f02617337012b0fd75ef3d271660bbee126a56f
Author: Christian Grothoff <christian@grothoff.org>
Date: Sun, 20 Apr 2025 11:40:03 +0200
work on playbooks and documentation
Diffstat:
9 files changed, 219 insertions(+), 37 deletions(-)
diff --git a/README b/README
@@ -5,18 +5,19 @@
Depending on your local installation, you might need
to install the following ansible collection:
+```
$ ansible-galaxy collection install community.postgresql
+```
-## Running the main Playbook
+## Running the main Playbooks
-To run the main playbook (playbooks/setup.yml):
+The canonical playbooks are run via shell scripts in the top-level
+directory.
-```
-$ ansible-playbook --verbose --inventory inventories/default --limit <host> playbooks/setup.yml
-```
+### deploy.sh (playbooks/setup.yml)
-The ./deploy.sh script is an abbreviation for the above command. For example,
-if you are root.rusty.taler-ops.ch, you may be able to:
+This script deploys the latest version of a system onto a host.
+If you are root@rusty.taler-ops.ch, you may be able to run:
```
$ ./deploy.sh rusty
@@ -25,34 +26,43 @@ $ ./deploy.sh rusty
For TOPS production, replace the "rusty" with "spec" to use the actual secrets
for the deployment. For this, you first need to decrypt them:
+```
$ gpg -d inventories/host_vars/spec/prod-secrets.yml.gpg > inventories/host_vars/spec/prod-secrets.yml
+```
Make sure to NEVER commit the decrypted production secrets to Git.
Instead, if you had to edit them, re-encrypt them to all admins:
+```
$ cat inventories/host_vars/spec/prod-secrets.yml | gpg --encrypt \
--recipient grothoff@gnunet.org \
--recipient devan@taler.net \
--recipient me@fdold.eu > inventories/host_vars/spec/prod-secrets.yml.gpg
+```
+### sanction-check.sh
-## Checking sanction lists
-
-Run
+This script imports the latest sanction list and checks all records
+against it:
+```
$ ./sanction-check.sh $DEPLOYMENT $LIST
+```
where "$DEPLOYMENT" specifies the name of the deployment to
use ("test" or "tops") and $LIST is the name of the sanction
list file on the local disk. This script currently always
uses the "tops" inventory.
+NOTE: this should still be further automated.
+
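+For example, a hypothetical invocation (the list file path here is
+purely illustrative):
+
+```
+$ ./sanction-check.sh tops /tmp/sanctions-20250420.xml
+```
+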
-## Setting up backups (TOPS-only for now)
+### Setting up backups (TOPS-only for now)
First run:
-./extract-borg-key.sh
+```
+$ ./extract-borg-key.sh
+```
The resulting SSH public key should be added to the borg-account
of the host storing the backup. The playbook contains the target
@@ -62,7 +72,9 @@ Once the SSH key is deployed and the backup has been initialized
server-side (see admin-logs/pixel/03-borg.txt), start the daily
backups via:
-./start-borg-backups.sh
+```
+$ ./start-borg-backups.sh
+```
This will make a backup of basically everything relevant to the
deployment, **except** the exchange online signing keys. The
@@ -75,7 +87,25 @@ weekly snapshots for the last 4 weeks, and monthly snapshots
for the last 6 months.
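
For reference, such a retention policy corresponds roughly to borg
prune flags like the following (a sketch only; the daily count and the
repository argument are placeholders, and the actual borg-backup.sh
invocation may differ):

```
$ borg prune --keep-daily <N> --keep-weekly 4 --keep-monthly 6 <repository>
```
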
-## Running the import/export Playbooks (TOPS-only)
+### Rebooting (into a new kernel)
+
+This should be done via the 'reboot' playbook, which can
+be invoked via:
+
+```
+$ ./reboot.sh $HOSTNAME
+```
+
+The reboot playbook first stops all Taler services,
+then makes a backup, and then reboots. This should help us
+restore onto another system in case the host does not come back
+online cleanly.
+
+### Running the import/export Playbooks (TOPS-only)
+
+These playbooks are used to manually import/export wire transfers
+from/to the bank:
```
$ ./export.sh
@@ -83,14 +113,56 @@ $ ./import.sh $FILE.xml
```
-## Testing Locally
+### Testing Locally
+
+With podman and ansible installed locally one can run:
+
+```
+$ ./test.sh
+```
-With podman and ansible installed locally one can run `./test.sh`.
This will begin building the Containerfile in this repo, which is a Debian
base with systemd and a passwordless ssh server configured. Then the
container will start, binding port 8022 to 127.0.0.1 on the host. Finally,
the setup playbook will be run on the container via ssh.
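+
+The rough shape of that flow is sketched below (the image/container
+name, inventory, and limit are illustrative; test.sh is authoritative):
+
+```
+$ podman build -t taler-test -f Containerfile .
+$ podman run -d --name taler-test -p 127.0.0.1:8022:22 taler-test
+$ ansible-playbook --verbose --inventory inventories/default \
+    --limit <container> playbooks/setup.yml
+```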
+
+## Playbooks
+
+### backup
+
+Runs a backup "right now".
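+
+A playbook can also be invoked directly, following the same pattern as
+the wrapper scripts (shown here for the backup playbook):
+
+```
+$ ansible-playbook --verbose --inventory inventories/default \
+    --limit <host> playbooks/backup.yml
+```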
+
+### borg-ssh-export
+
+Exports the SSH public keys needed at the remote host for backups.
+
+### borg-start
+
+Enables the borg backup. Should be run after the SSH public keys
+exported via borg-ssh-export have been deployed on the receiving
+host.
+
+### pixel-borg
+
+Enables receiving (!) backups from pixel. Adds the public key from
+pixel so that we accept borg backups from pixel. Note that
+pixel still needs to be set up to send the backups.
+
+### reboot
+
+Safely reboots the system by first stopping all Taler services,
+then making a backup and only then actually rebooting it.
+
+### sanctionlist-check
+
+Imports the latest sanction list and checks all records against
+it.
+
+### setup
+
+The main playbook that deploys our entire setup.
+
## Roles
### ansible_pull
@@ -100,14 +172,63 @@ which runs the ansible-pull script on regular interval.
NOTE: requires local.yml to exist in the root of this repo.
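+
+For illustration, the periodic run presumably boils down to something
+like the following (the repository URL is a placeholder; see the role
+for the actual invocation):
+
+```
+$ ansible-pull --url <repo-url> local.yml
+```
+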
+### auditor
+
+Deploys the auditor.
+
+### backup
+
+Runs the backup script, making a borg backup of the database and other key parts of the system.
+
+### borg-ssh-export
+
+Exports the SSH public key that must be deployed on the host that is
+to receive the backup.
+
+### borg-start
+
+Enables the daily borg backups; to be run after the SSH public key
+exported via borg-ssh-export has been deployed on the receiving host.
+
+### challenger
+
+Deploys the various challenger services for address verification.
+
### common_packages
Installs the base system packages we need on all hosts.
+Also sets up the Taler package repository and installs the Taler
+packages.
-### taler-packages
+### database
-Sets up Taler package repo and installs Taler packages.
+Installs the PostgreSQL database.
+
+### exchange
+
+Deploys the Taler exchange.
+
+### exchange-sanctionlist-import
+
+Imports a new sanction list and checks all existing records against it.
+
+### libeufin-nexus
+
+Deploys libeufin-nexus which connects us to the bank.
+
+### monitoring
+
+Deploys Alloy and Prometheus exporters for host monitoring.
+
+### pixel_borg
+
+Configures the host to receive backups *from* pixel.
+
+### reboot
+
+Reboots the system.
+
+### stop_services
+
+Stops all Taler-related services. Useful for an emergency stop and
+used as part of the reboot playbook.
-### configuration
+### webserver
-Configures Taler services. Installs configuration files, etc.
+Configures the Nginx reverse proxy (the main service, not the
+individual subdomains).
diff --git a/playbooks/backup.yml b/playbooks/backup.yml
@@ -2,4 +2,4 @@
- name: Backup GNU Taler Databases
hosts: all
roles:
- - role: db_backup
+ - role: backup
diff --git a/playbooks/reboot.yml b/playbooks/reboot.yml
@@ -0,0 +1,7 @@
+---
+- name: Reboot the system after stopping all services and backing up the databases
+ hosts: all
+ roles:
+ - role: stop_services
+ - role: backup
+ - role: reboot
diff --git a/reboot.sh b/reboot.sh
@@ -0,0 +1,15 @@
+#!/bin/sh
+set -eu
+
+if [ -z "${1:-}" ]
+then
+ echo "Call with 'spec' or another host/group to select target"
+ exit 1
+fi
+
+ansible-playbook \
+ --inventory inventories/default \
+ --limit "$1" \
+ playbooks/reboot.yml
+
+exit 0
diff --git a/roles/db_backup/handlers/main.yml b/roles/backup/handlers/main.yml
diff --git a/roles/backup/tasks/main.yml b/roles/backup/tasks/main.yml
@@ -0,0 +1,12 @@
+---
+# Backup role: run the borg backup script
+
+- name: Make sure PostgreSQL is started
+ systemd:
+ name: postgresql
+ state: started
+
+- name: Run backup script
+ ansible.builtin.command:
+ cmd: /root/bin/borg-backup.sh
+ chdir: /root
diff --git a/roles/db_backup/tasks/main.yml b/roles/db_backup/tasks/main.yml
@@ -1,16 +0,0 @@
----
-# Database backup role
-
-- name: Make sure PostgreSQL is started
- systemd:
- name: postgresql
- state: started
-
-- name: Export databases
- become: true
- become_user: postgres
- community.postgresql.postgresql_db:
- login_user: postgres
- db: taler-exchange
- state: dump
- target: /tmp/taler-exchange-backup.sql.xz
diff --git a/roles/reboot/tasks/main.yml b/roles/reboot/tasks/main.yml
@@ -0,0 +1,5 @@
+---
+# Reboot system
+
+- name: Reboot system
+ reboot:
diff --git a/roles/stop_services/tasks/main.yml b/roles/stop_services/tasks/main.yml
@@ -0,0 +1,38 @@
+---
+# Stop all Taler services
+
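+# NOTE: the conditionals below assume a 'services' variable or fact
+# listing the unit names present on the host; it must be defined
+# before this role runs.
+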
+- name: Stop exchange
+ systemd:
+ name: taler-exchange.target
+ state: stopped
+ when: "'taler-exchange.target' in services"
+
+- name: Stop merchant
+ systemd:
+ name: taler-merchant.target
+ state: stopped
+ when: "'taler-merchant.target' in services"
+
+- name: Stop postal-challenger
+ systemd:
+ name: postal-challenger-httpd.target
+ state: stopped
+ when: "'postal-chalelnger-httpd.target' in services"
+
+- name: Stop sms-challenger
+ systemd:
+ name: sms-challenger-httpd.target
+ state: stopped
+ when: "'sms-chalelnger-httpd.target' in services"
+
+- name: Stop email-challenger
+ systemd:
+ name: email-challenger-httpd.target
+ state: stopped
+ when: "'email-chalelnger-httpd.target' in services"
+
+- name: Stop auditor
+ systemd:
+ name: taler-auditor.target
+ state: stopped
+ when: "'taler-auditor.target' in services"