Description
===========

This setup orchestrates the following containers:

1.  Banking (libEufin)
2.  Shop(s)
3.  Payment service provider (Taler exchange and helpers)
4.  Database

FIXME (#7463): the current version requires the user to manually
point the bank SPA to any backend not being served at bank.demo.taler.net.

How to compile
==============

The base image (not managed by the docker compose setup) and
all the other images must be compiled.

Base image
----------

This image contains a minimal Debian distribution
with ALL the Taler software and its dependencies.

From this directory, run:

  $ ./build_base.sh [--help] [tags-file]

Composed containers
-------------------

From this directory, run:

  $ docker compose build

Hotfixes
--------

Start a container from the base image first:

  # $HOTFIX is arbitrary; helps avoid copying and pasting alphanumeric IDs
  $ docker run --name $HOTFIX -it taler_local/taler_base /bin/bash

From inside the container, navigate to "/$REPO", issue
"git pull" and install the software as usual.  Exit the
container thereafter.

Commit the container that contains the hotfix:

  $ docker commit $HOTFIX

That outputs a new ID ($RETVAL): the ID of the modified
image.  Tag it so that the other images can build on it:

  $ docker tag $RETVAL taler_local/taler_base:latest

Now build all the images with docker compose, as described
in the 'How to compile' section.
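The hotfix steps above can be sketched as one non-interactive helper.
Everything below is illustrative: the repository name, the use of
"make install", and the container naming are assumptions to adapt
per component.

```shell
#!/bin/sh
# Sketch of the hotfix flow above, wrapped in functions
# (hypothetical: adjust the repository path and the install
# command to the actual component).
strip_image_id() {
    # 'docker commit' prints "sha256:<hex>"; keep only the hex part.
    printf '%s\n' "${1#sha256:}"
}

hotfix() {
    repo=$1                       # e.g. "exchange" (assumption)
    name=hotfix-$(date +%s)       # arbitrary container name
    docker run --name "$name" taler_local/taler_base \
        /bin/sh -c "cd /$repo && git pull && make install"
    id=$(strip_image_id "$(docker commit "$name")")
    docker tag "$id" taler_local/taler_base:latest
}
```

After running such a helper, rebuild the composed images as described
in this section.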

How to run only one image
=========================

The following commands run only one image from those
belonging to the compose file.  Note that such an image may
easily fail, because it likely relies on other images that
are not running.

  $ docker compose build $image-name   # only if new changes need to be tested
  $ docker compose up $image-name

'bank', 'exchange', 'merchant', 'talerdb' are valid values
for $image-name.

Enabling rewards
================

The following command (executed from the container CLI) manually creates a rewards reserve:

$ taler-harness deployment tip-topup --merchant-url https://backend.demo.taler.net/instances/survey/ --merchant-apikey=$MERCHANT_APIKEY --bank-access-url https://bank.demo.taler.net/demobanks/default/access-api/ --wire-method=iban --amount=KUDOS:5000 --bank-account=survey-at-sandbox --bank-password=$SURVEY_SECRET --exchange-url https://exchange.demo.taler.net/

The status of the rewards reserves can be checked via:

$ taler-harness deployment tip-status --merchant-url https://backend.demo.taler.net/instances/survey/ --merchant-apikey=$MERCHANT_APIKEY

To purge all non-funded rewards reserves, run:

$ taler-harness deployment tip-cleanup --merchant-url https://backend.demo.taler.net/instances/survey/ --merchant-apikey=$MERCHANT_APIKEY

[*] - To enable the "rewards balance checking" script, run the
following command from the "deployment/sandcastle" directory
after "docker compose up":

  $ ./utils/enable-services.sh

This starts a systemd service that checks the rewards balance
once per week (the interval can be changed by editing
systemd/fund-rewards.timer).

How to run
==========

Configuration
-------------

Set the environment variable TALER_SANDCASTLE_CONFIG to the
absolute path of the configuration directory.  See config/
for an example configuration directory.
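For instance (the path below is purely illustrative):

```shell
# Point the setup at a custom configuration directory; the path
# is an illustrative placeholder, not a real location.
export TALER_SANDCASTLE_CONFIG=/home/user/my-sandcastle-config
```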

Run
---

The following command starts all the services in the background,
and manages all the restarts.  Run it from this directory:

  $ docker compose up --remove-orphans -d

The ports exposed on the host by each service can be changed
via the following environment variables:

- TALER_MERCHANT_PORT
- TALER_BLOG_PORT
- TALER_DONATIONS_PORT
- TALER_SURVEY_PORT
- TALER_LANDING_PORT
- TALER_SYNC_PORT
- LIBEUFIN_SANDBOX_PORT
- LIBEUFIN_NEXUS_PORT
- LIBEUFIN_FRONTEND_PORT
- TALER_DB_PORT

TALER_DB_PORT is not used by the contained services, but
allows a 'psql' instance to attach to the contained database
for debugging.
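The overrides can be passed per invocation.  The wrapper function
below is hypothetical, and the port values are arbitrary examples:

```shell
#!/bin/sh
# Hypothetical wrapper: start the sandbox with two of the ports
# listed above overridden (8888 and 5433 are arbitrary examples).
# Variables set on the command line apply to that invocation only.
start_with_ports() {
    TALER_MERCHANT_PORT=${1:-8888} \
    TALER_DB_PORT=${2:-5433} \
    docker compose up --remove-orphans -d
}
```

With TALER_DB_PORT exposed this way, a host-side 'psql' can attach
to the contained database, as noted above.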

On a daemonized setup, live logs can still be seen by running
the following command from this directory:

  $ docker compose logs --tail=$NUM --follow [container-name]

To stop the services, run the following command from this directory:

  $ docker compose stop

To start the services in the foreground, run the following command
from this directory (no restart is provided):

  $ docker compose up --remove-orphans --abort-on-container-exit

Volumes
-------

Data is kept in Docker volumes.  To export the database, key
material, and logs, run the following command from this directory:

  $ ./backup.sh

The following command imports the TAR backup from
the previous step into the Docker volumes.  From this directory:

  $ ./import-backup.sh $PATH_TO_THE_TAR_FILE

The following command gives a shell to inspect the data volume:

  $ docker run -v demo_talerdata:/data -it taler_local/taler_base /bin/bash

The data is available under /data.

How to save and restore Docker images
=====================================

When a certain deployment is fully working on test.taler.net, and is
therefore going to be deployed on demo.taler.net, you should save the
working Docker images as a means of backup.

How to save working Docker images
---------------------------------

To save the current good image of each component, execute the script
"save-good.sh" without any arguments.  The script creates tagged
Docker images carrying the current timestamp.

This way, if something goes wrong with newly created images, you can
fall back to these previous good images to re-deploy GNU Taler.

How to recover saved images
---------------------------

To use the saved images after broken ones have been created, execute
the script "restore-good.sh", passing the timestamp of the backup as
its only argument:

  $ ./restore-good.sh 1693812987

To find the timestamp (previously generated by save-good.sh), list
the images:

  $ docker images   # look for "taler_local/taler_base:good-$TIMESTAMP"
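A small helper can list the available timestamps directly; it assumes
the "good-$TIMESTAMP" tag format mentioned above:

```shell
#!/bin/sh
# List the timestamps of all saved "good" images, assuming the
# "good-$TIMESTAMP" tag format produced by save-good.sh.
good_timestamps() {
    docker images --format '{{.Repository}}:{{.Tag}}' \
        | grep ':good-' \
        | sed 's/.*:good-//' \
        | sort -u
}
```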

[*] - Warning

This method of saving and restoring Docker images does not work if,
after executing save-good.sh, you clean up the server with
"docker system prune -a" (or "--all").

A plain "docker system prune" without the "-a" option is safe.

As a future improvement, we might configure a Docker registry to
safely store all these good and stable images.

Data removal
------------

Data can be classified as Taler-specific (DBs, keys, logs) or
Docker-specific (dangling images, volumes, stopped containers).  Most
of the Taler data lives in volumes, and can be removed in the
following way:

  # From this directory.
  $ docker compose down -v

Note: the current version does not store config files in volumes,
but in the services' containers.

Use the following command to remove stopped containers, dangling
images, build cache, and unused networks.  After it returns, the
Taler sandbox can be run again without rebuilding it.

  $ docker system prune

Disk usage can be monitored by the command:

  $ docker system df

Logs
----

The newest rotated logs can be inspected with the following command,
run from any directory:

  $ docker run -v demo_talerlogs:/logs -it taler_local/taler_base /bin/bash

The started container should now have all the logs under /logs.

How to test on localhost
========================

From this directory:
  
  $ ./test-docker-localhost.sh

The above test registers a new bank account with libEufin,
withdraws coins, and spends them directly at the merchant backend.

NOTE: localhost works only with the default ports exposed.

How to deploy to online sites
=============================

Before deploying the sandcastle setup, you need to make the following
replacements in the configuration file "config/deployment.conf":

currency = KUDOS (or the name of your currency)
merchant-url = https://backend.domain.tld
landing-url = https://domain.tld/
blog-url = https://shop.domain.tld/
donations-url = https://donations.domain.tld/
survey-url = https://survey.domain.tld/
sync-url = https://sync.domain.tld/
bank-url = https://bank.domain.tld/
bank-backend-url = https://bank.domain.tld/demobanks/default/
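Since all the example URL values share the "domain.tld" placeholder,
the substitution can be sketched with sed; the helper function name
is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: substitute your own domain for the
# "domain.tld" placeholder in the URL values listed above.
rewrite_domain() {
    conf=$1      # path to config/deployment.conf
    domain=$2    # e.g. "example.org"
    sed -i "s/domain\.tld/$domain/g" "$conf"
}
```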

After doing this, and assuming that TLS is already configured, you
can use the file "nginx-example.conf" in the sandcastle directory as
an NGINX virtual host, replacing the domain name "example.com" with
your own domain name.

You can perform this replacement automatically with sed, from within
the sandcastle directory:

  $ sed -i "s/example.com/yourdomain.com/g" nginx-example.conf

TLS Configuration
=================

For the sake of simplicity, we recommend Certbot as a means to obtain
Let's Encrypt certificates.

First, install Certbot following the instructions at
https://certbot.eff.org/

After Certbot is correctly installed, execute "certbot --nginx" to
obtain the certificates needed for HTTPS and have them renewed
automatically every 90 days.