Design Doc 027: Sandboxing all the Taler services. 
##################################################

.. note::

  This design document is currently a draft, it
  does not reflect any implementation decisions yet.

Summary
=======

This document presents a method of deploying all the Taler
services via one Docker container.

Motivation
==========

It is very difficult to build GNU Taler from scratch.  It is even more difficult
to install, configure and launch it correctly.

The purpose of the sandbox is to have a demonstration system that can be both
built and launched with, ideally, a single command.

Requirements
============

- No external services should be required, the only dependencies should be:

  - podman/docker
  - optionally: configuration files to further customize the setup

- All services that are used should be installed from repositories
  and not built from scratch (i.e. Debian repositories or PyPI)

- There should be some "admin page" for the whole sandbox that:

  - Shows an overview of all deployed services, with links to their
    documentation and the endpoints they expose
  - Shows very simple statistics (e.g. number of transactions / withdrawals)
  - Allows generating and downloading the auditor report

- Developers should be able to launch the sandbox on their own machine

  - Possibly using nightly repos instead of the official stable repos

- We should be able to deploy it on $NAME.sandbox.taler.net
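The requirements above suggest that a single command should suffice to launch
the sandbox.  A hypothetical invocation might look as follows; the image name
``taler-sandbox`` and the port are placeholders, only the ``/sandboxconfig``
mount point is taken from this document:

.. code-block:: shell

   # "taler-sandbox" is a placeholder image name, not a published image.
   # $MYCONFIG is an optional directory with meta-configuration files.
   podman run \
       -v "$MYCONFIG:/sandboxconfig" \
       -p 127.0.0.1:8080:80 \
       taler-sandbox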

Design
======

The container is based on Debian Sid and installs all the
services from their Debian packages.  During the build
process, it creates all of the 'static' configuration,
which includes all the .conf files, the database setup and
the key material.
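
The build-time steps could be sketched as a Dockerfile; the package and
script names here are assumptions for illustration, not decisions:

.. code-block:: docker

   # Base image and packages as described above.
   FROM debian:sid
   RUN apt-get update && \
       apt-get install -y taler-exchange taler-merchant postgresql
   # Build-time ('static') configuration: .conf files, database setup,
   # key material.  The script name is hypothetical.
   COPY generate-static-config.sh /usr/local/bin/
   RUN /usr/local/bin/generate-static-config.sh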

Subsequently, at the launch step, the system creates all the
remaining RESTful resources.  Such RESTful resources include
the merchant instances and all the euFin accounts, both at Sandbox
and at Nexus.

The sandbox will serve one HTTP base URL and make any service
reachable at $baseUrl/$service.  For example, the exchange base
URL will be "$baseUrl/exchange".
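
A reverse proxy inside the container could implement this path scheme;
an nginx sketch (the internal port numbers are invented for illustration):

.. code-block:: nginx

   # Map $baseUrl/$service to the internal services.
   location /exchange/ {
       proxy_pass http://127.0.0.1:8081/;
   }
   location /merchant/ {
       proxy_pass http://127.0.0.1:8082/;
   }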

The sandbox allows configuring:

- which host it binds to, typically localhost plus a port.
- which host is being reverse proxied to the sandbox.  This
  helps to generate valid service URIs.

All the other values are hard-coded during preparation.
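
The two configurable values might be collected in a small file inside the
mounted configuration directory; the file name and keys below are
illustrative only:

.. code-block:: ini

   # /sandboxconfig/sandbox.conf (hypothetical)
   [sandbox]
   # Host and port the sandbox binds to.
   BIND_TO = localhost:8080
   # External host that is reverse proxied to the sandbox;
   # used to generate valid service URIs.
   EXTERNAL_HOST = sb1.sandbox.taler.net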

The database is launched *in* the same container, alongside
the other services.

Open questions
==============

- How to collect the static configuration values?

  - => Via a configuration file that you pass to the container via
    a mounted directory (=> `-v $MYCONFIG:/sandboxconfig`)
  - If we don't pass any config, the container should have
    sane defaults
  - This is effectively a "meta configuration", because it will
    be used to generate the actual configuration files
    and do RESTful configuration at launch time.

- How to persist, at build time, the information
  needed later at launch time to create the RESTful
  resources?

  - => The configuration should be done at launch-time of the container.

- Should we, at this iteration, hard-code passwords too?
  With generated passwords, (1) it is not possible to
  log in to services manually, and (2) it is not possible
  to write the exchange's password for Nexus into the conf.
  Clearly, that's a problem when the sandbox is exposed
  to the outside.

- How is data persisted? (i.e. where do we store stuff)

  - By allowing a data directory on the host to be mounted into the container
    (this stores the DB files, config files, key files, etc.)
  - ... even for data like the postgresql database
  - future/optional: we *might* allow connection to an external postgresql database as well

- How are services supervised?
  
  - SystemD? gnunet-arm? supervisord? something else?

    - SystemD does not work well inside containers

  - alternative: one container per service, use (docker/podman)-compose

    - Either one docker file per service, *or* one base container that
      can be launched as different services via command line arg

    - Advantage: It's easy to see the whole architecture from the compose yaml file
    - Advantage: It would be easy to later deploy this on kubernetes etc.

    - list of containers:

      - DB container (postgres)
      - Exchange container (contains all exchange services, for now)

        - Split this up further?

      - Merchant container

- Do we have multi-tenancy for the sandbox? (I.e. do we allow multiple
  currencies/exchanges/merchants/auditors per sandbox)

  - Might be simpler if we disallow this

- How do we handle TLS?

  - Do we always do HTTPS in the sandbox container?
  - We need to think about external and internal requests
    to the sandbox

- How do we handle (external vs internal) URLs?

  - If we use http://localhost:$PORT for everything, we can't expose
    the services externally
  - Example 1: Sandbox should run on sb1.sandbox.taler.net.

    - What will be the base URL for the exchange in the merchant config?
    - If it's https://sb1.sandbox.taler.net/exchange, we need some /etc/hosts entry
      inside the container
    - Once you want to expose the sandbox externally, you need a proper TLS cert (i.e. letsencrypt)
    - Inside the container, you can get away with self-signed certificates
    - Other solution: Just require the external nginx (e.g. at gv) to reverse proxy
      sb1.sandbox.taler.net back to the container. This means that all communication
      between services inside the sandbox container goes through gv

      - Not great, but probably fine for first iteration
      - Disadvantage: To test the container in the non-localhost mode, you need the external proxy running

- Where do we take packages from?

  - By default, from the stable taler-systems.com repos and PyPI
  - Alternatively, via the nightly gv debian repo
  - Since we install packages at container build time, this setting (stable vs nightly)
    results in different container base images
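
The one-container-per-service alternative from the supervision question
above could be sketched as a compose file; the image names, commands and
credentials are placeholders, not decisions:

.. code-block:: yaml

   # Sketch: one base image, launched as different services via a
   # command-line argument.
   services:
     db:
       image: postgres
       environment:
         POSTGRES_PASSWORD: sandbox   # placeholder; see the password question
       volumes:
         - ./data/db:/var/lib/postgresql/data
     exchange:
       image: taler-sandbox
       command: exchange
       depends_on: [db]
     merchant:
       image: taler-sandbox
       command: merchant
       depends_on: [exchange]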