DD 27: Sandboxing all the Taler services
########################################

.. note::

   This design document is currently a draft; it
   does not reflect any implementation decisions yet.

Summary
=======

This document presents a method of deploying all the Taler
services via one Docker container.

Motivation
==========

It is very difficult to build GNU Taler from scratch. It is even more
difficult to install, configure and launch it correctly.

The purpose of the sandbox is to have a demonstration system that can be
built and launched, ideally with a single command each.

Requirements
============

- No external services should be required; the only dependencies should be:

  - podman/docker
  - optionally: configuration files to further customize the setup

- All services that are used should be installed from repositories
  and not built from scratch (i.e. Debian repos or PyPI).

- There should be some "admin page" for the whole sandbox that:

  - Shows an overview of all deployed services, a link to their documentation
    and the endpoints they expose
  - Shows very simple statistics (e.g. number of transactions / withdrawals)
  - Allows generating and downloading the auditor report

- Developers should be able to launch the sandbox on their own machine

  - Possibly using nightly repos instead of the official stable repos

- We should be able to deploy it on $NAME.sandbox.taler.net

Design
======

The container is based on Debian Sid, and it installs all
the services from their Debian packages. During the build
process, it creates all the 'static' configuration, which
includes all the .conf files, the database setup and the
key material.

Subsequently, at the launch step, the system creates all
the remaining RESTful resources. Such RESTful resources include
the merchant instances and all the euFin accounts, both at the
euFin Sandbox and at Nexus.

The sandbox will serve one HTTP base URL and make every service
reachable at $baseUrl/$service. For example, the exchange base
URL will be "$baseUrl/exchange".

The sandbox allows configuring:

- which host it binds to, typically localhost plus a port.
- which host is being reverse proxied to the sandbox. This
  helps to generate valid URIs for the services.

All the other values will be hard-coded in the preparation.

The database is launched *in* the same container, alongside the
other services.
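As a sketch of what building and launching could look like under this
design (the image name ``taler/sandbox``, the port and the mount points
are assumptions, not decisions):

.. code-block:: shell

   # Hypothetical sketch: build once, then launch with a single command.
   podman build -t taler/sandbox .

   # Bind to localhost:8080; optionally mount a meta configuration and a
   # persistent data directory (both directory names are assumptions).
   podman run --rm -p 127.0.0.1:8080:80 \
       -v "$PWD/sandboxconfig:/sandboxconfig" \
       -v "$PWD/sandboxdata:/sandboxdata" \
       taler/sandbox

   # Every service then hangs off the single base URL, e.g.:
   #   http://localhost:8080/exchange
   #   http://localhost:8080/merchant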
Open questions
==============

- How to collect the static configuration values?

  - => Via a configuration file that you pass to the container via
    a mounted directory (=> ``-v $MYCONFIG:/sandboxconfig``)
  - If we don't pass any config, the container should have
    sane defaults
  - This is effectively a "meta configuration", because it will
    be used to generate the actual configuration files
    and do RESTful configuration at launch time.

- How to persist, at build time, the information
  needed later at launch time to create the RESTful
  resources?

  - => The configuration should be done at launch time of the container.

- Should we, at this iteration, hard-code passwords too?
  With generated passwords, (1) it won't be possible to
  manually log in to services, and (2) it won't be possible
  to write the exchange password for Nexus in the conf.
  Clearly, that's a problem when the sandbox is served
  to the outside.

- How is data persisted? (i.e. where do we store stuff)

  - By allowing a data directory on the host to be mounted into the
    container (this stores the DB files, config files, key files, etc.)
  - ... even for data like the PostgreSQL database
  - future/optional: we *might* allow connecting to an external
    PostgreSQL database as well

- How are services supervised?

  - systemd? gnunet-arm? supervisord? something else?

    - systemd does not work well inside containers

  - alternative: one container per service, using (docker/podman)-compose
    (a sketch appears at the end of this document)

    - Either one Dockerfile per service, *or* one base container that
      can be launched as different services via a command line argument
    - Advantage: it's easy to see the whole architecture from the
      compose YAML file
    - Advantage: it would be easy to later deploy this on Kubernetes etc.
    - list of containers:

      - DB container (postgres)
      - Exchange container (contains all exchange services, for now)

        - Split this up further?

      - Merchant container

- Do we have multi-tenancy for the sandbox? (I.e. do we allow multiple
  currencies/exchanges/merchants/auditors per sandbox?)

  - Might be simpler if we disallow this

- How do we handle TLS?

  - Do we always do HTTPS in the sandbox container?
  - We need to think about external and internal requests
    to the sandbox

- How do we handle (external vs internal) URLs?

  - If we use http://localhost:$PORT for everything, we can't expose
    the services externally
  - Example 1: the sandbox should run on sb1.sandbox.taler.net.

    - What will be the base URL for the exchange in the merchant config?
    - If it's https://sb1.sandbox.taler.net/exchange, we need some /etc/hosts
      entry inside the container (sketched at the end of this document)
    - Once you want to expose the sandbox externally, you need a proper
      TLS cert (e.g. letsencrypt)
    - Inside the container, you can get away with self-signed certificates
    - Other solution: just require the external nginx (e.g. at gv) to
      reverse proxy sb1.sandbox.taler.net back to the container. This means
      that all communication between services inside the sandbox container
      goes through gv.

      - Not great, but probably fine for a first iteration
      - Disadvantage: to test the container in the non-localhost mode,
        you need the external proxy running

- Where do we take packages from?

  - By default, from the stable taler-systems.com repos and PyPI
  - Alternatively, via the nightly gv Debian repo
  - Since we install packages at container build time, this setting
    (stable vs nightly) results in different container base images
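A minimal sketch of how the stable-vs-nightly choice above could surface
at build time; the ``PKG_CHANNEL`` build argument is an assumption:

.. code-block:: shell

   # Hypothetical: the Dockerfile would use PKG_CHANNEL to pick the apt
   # repository (stable taler-systems.com repos vs the nightly gv repo).
   podman build --build-arg PKG_CHANNEL=stable  -t taler/sandbox:stable  .
   podman build --build-arg PKG_CHANNEL=nightly -t taler/sandbox:nightly .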
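For the URL question above, a sketch of the /etc/hosts workaround that
lets services inside the container reach each other under the external
base URL (the hostname is the example from that question):

.. code-block:: shell

   # Hypothetical: run inside the container at launch time, so that e.g.
   # the merchant resolves the exchange's external hostname locally.
   echo "127.0.0.1 sb1.sandbox.taler.net" >> /etc/hosts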
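Finally, a sketch of the one-container-per-service alternative mentioned
under service supervision, using a shared network and one base image that
takes the service name as an argument (all image and container names are
assumptions):

.. code-block:: shell

   # Hypothetical: roughly what a compose file would declare.
   podman network create taler-sandbox
   podman run -d --name db --network taler-sandbox \
       -e POSTGRES_PASSWORD=sandbox docker.io/library/postgres
   podman run -d --name exchange --network taler-sandbox \
       taler/sandbox-base exchange
   podman run -d --name merchant --network taler-sandbox \
       taler/sandbox-base merchant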