authorFlorian Dold <florian@dold.me>2022-02-10 11:17:30 +0100
committerFlorian Dold <florian@dold.me>2022-02-10 11:17:30 +0100
commitf324098803807019b72c10175390c8ffcfa8490e (patch)
treed4da9537d4899d8d2dde78107d94f2fe5b84acec /design-documents
parent57059e151574497b53aa280ab4abf91ee3d08a28 (diff)
sandbox DD
Diffstat (limited to 'design-documents')
-rw-r--r--design-documents/027-sandboxing-taler.rst107
1 file changed, 107 insertions, 0 deletions
diff --git a/design-documents/027-sandboxing-taler.rst b/design-documents/027-sandboxing-taler.rst
index b082ae55..c1e52939 100644
--- a/design-documents/027-sandboxing-taler.rst
+++ b/design-documents/027-sandboxing-taler.rst
@@ -12,6 +12,39 @@ Summary
This document presents a method of deploying all the Taler
services via one Docker container.
+Motivation
+==========
+
+It is very difficult to build GNU Taler from scratch. It is even more difficult
+to install, configure and launch it correctly.
+
+The purpose of the sandbox is to have a demonstration system that can be both
+built and launched, ideally with a single command.
+
+Requirements
+============
+
+- No external services should be required, the only dependencies should be:
+
+ - podman/docker
+ - optionally: configuration files to further customize the setup
+
+- All services should be installed from package repositories
+  rather than built from scratch (e.g. Debian repos or PyPI)
+
+- There should be some "admin page" for the whole sandbox that:
+
+  - Shows an overview of all deployed services, a link to their documentation
+    and the endpoints they expose
+  - Shows very simple statistics (e.g. number of transactions / withdrawals)
+  - Allows generating and downloading the auditor report
+
+- Developers should be able to launch the sandbox on their own machine
+
+ - Possibly using nightly repos instead of the official stable repos
+
+- We should be able to deploy it on $NAME.sandbox.taler.net
+
Design
======
@@ -46,13 +79,87 @@ Open questions
- How to collect the static configuration values?
+ - => Via a configuration file that you pass to the container via
+ a mounted directory (=> `-v $MYCONFIG:/sandboxconfig`)
+ - If we don't pass any config, the container should have
+ sane defaults
+ - This is effectively a "meta configuration", because it will
+ be used to generate the actual configuration files
+ and do RESTful configuration at launch time.
+
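As a concrete illustration, launching the container with a mounted meta-configuration directory might look roughly like this. Only the ``/sandboxconfig`` mount point comes from the notes above; the image name ``taler/sandbox`` and the configuration keys are placeholders, not a decided interface:

```shell
# Sketch: pass the "meta configuration" to the container via a mounted
# directory. Image name and config keys are hypothetical placeholders.
mkdir -p "$MYCONFIG"
cat > "$MYCONFIG/sandbox.conf" <<'EOF'
# hypothetical meta-configuration; real keys are not yet defined
CURRENCY = KUDOS
EOF
podman run --rm -v "$MYCONFIG:/sandboxconfig" taler/sandbox
```

If no directory is mounted, the container would fall back to the sane defaults mentioned above.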
- How to persist, at build time, the information
needed later at launch time to create the RESTful
resources?
+ - => The configuration should be done at launch-time of the container.
+
- Should we at this iteration hard-code passwords too?
With generated passwords, (1) it won't be possible to
manually log-in to services, (2) it won't be possible
to write the exchange password for Nexus in the conf.
Clearly, that's a problem when the sandbox is served
to the outside.
+
+- How is data persisted? (i.e. where do we store stuff)
+
+  - By allowing a data directory on the host to be mounted into the container
+ (This stores the DB files, config files, key files, etc.)
+ - ... even for data like the postgresql database
+ - future/optional: we *might* allow connection to an external postgresql database as well
+
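Mounting a host data directory for persistence could look like the following sketch (the mount point ``/var/lib/taler-sandbox`` and the image name are assumptions):

```shell
# Sketch: persist DB files, config files and key material on the host.
# Mount point and image name are placeholders, not an agreed interface.
mkdir -p "$DATADIR"
podman run --rm -v "$DATADIR:/var/lib/taler-sandbox" taler/sandbox
```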
+- How are services supervised?
+
+  - systemd? gnunet-arm? supervisord? something else?
+
+  - systemd does not work well inside containers
+
+ - alternative: one container per service, use (docker/podman)-compose
+
+    - Either one Dockerfile per service, *or* one base container that
+      can be launched as different services via command line arg
+
+ - Advantage: It's easy to see the whole architecture from the compose yaml file
+ - Advantage: It would be easy to later deploy this on kubernetes etc.
+
+ - list of containers:
+
+ - DB container (postgres)
+ - Exchange container (contains all exchange services, for now)
+ - Split this up further?
+ - Merchant container
+
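The compose-based alternative with the container list above might be sketched as follows. Everything here (image names, the ``command`` convention for the one-base-image variant, volume paths) is illustrative only, not a decision:

```yaml
# Hypothetical docker-compose.yml for the sandbox.
version: "3"
services:
  db:
    image: postgres:14
    volumes:
      - sandbox-data:/var/lib/postgresql/data   # persisted DB files
  exchange:
    image: taler/sandbox
    command: exchange        # one base image, service chosen via argument
    depends_on: [db]
  merchant:
    image: taler/sandbox
    command: merchant
    depends_on: [db, exchange]
volumes:
  sandbox-data:
```

This is the "whole architecture visible in one YAML file" advantage noted above.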
+- Do we have multi-tenancy for the sandbox? (I.e. do we allow multiple
+ currencies/exchanges/merchants/auditors per sandbox)
+
+ - Might be simpler if we disallow this
+
+- How do we handle TLS?
+
+  - Do we always do HTTPS in the sandbox container?
+ - We need to think about external and internal requests
+ to the sandbox
+
+- How do we handle (external vs internal) URLs?
+
+ - If we use http://localhost:$PORT for everything, we can't expose
+ the services externally
+ - Example 1: Sandbox should run on sb1.sandbox.taler.net.
+
+ - What will be the base URL for the exchange in the merchant config?
+ - If it's https://sb1.sandbox.taler.net/exchange, we need some /etc/hosts entry
+ inside the container
+    - Once you want to expose the sandbox externally, you need a proper TLS cert (i.e. letsencrypt)
+ - Inside the container, you can get away with self-signed certificates
+ - Other solution: Just require the external nginx (e.g. at gv) to reverse proxy
+ sb1.sandbox.taler.net back to the container. This means that all communication
+ between services inside the sandbox container goes through gv
+
+    - Not great, but probably fine for a first iteration
+    - Disadvantage: to test the container in the non-localhost mode, you need the external proxy running
+
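The external reverse-proxy option could be sketched like this. The hostname comes from the example above; the upstream host/port and certificate paths are assumptions:

```nginx
# Sketch of the reverse proxy on the external host (e.g. at gv):
# forwards sb1.sandbox.taler.net to the sandbox container.
server {
    listen 443 ssl;
    server_name sb1.sandbox.taler.net;
    # certificate obtained via letsencrypt, as noted above

    location / {
        proxy_pass http://sandbox-host:8080;   # container port is an assumption
        proxy_set_header Host $host;
    }
}
```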
+- Where do we take packages from?
+
+ - By default, from the stable taler-systems.com repos and PyPI
+ - Alternatively, via the nightly gv debian repo
+ - Since we install packages at container build time, this setting (stable vs nightly)
+ results in different container base images
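The stable-vs-nightly choice could be expressed as a build argument, which naturally yields different base images per setting. The repository URL and package names below are placeholders, not the real apt lines:

```dockerfile
# Hypothetical Dockerfile fragment: choose stable vs nightly at build time.
FROM debian:bullseye
ARG REPO=stable   # or "nightly" for the gv debian repo
# Placeholder repository URL; the real apt source line would differ.
RUN echo "deb https://example.org/taler-${REPO} bullseye main" \
      > /etc/apt/sources.list.d/taler.list \
 && apt-get update \
 && apt-get install -y taler-exchange taler-merchant
```

Building with ``--build-arg REPO=nightly`` would then produce the nightly image variant.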