DD 44: CI System
################

Summary
=======

This document describes Taler's CI system, which is based on Buildbot.
This document uses `RFC 2119 <https://tools.ietf.org/html/rfc2119>`_
keywords throughout.

Motivation
==========

The current CI system has an array of issues:

- All jobs live in one central place.
- The central config is poorly organized.
- We should prefer to keep as much CI logic as possible in the respective
  project source repos.
- Jobs should be split up further to allow for more granular control and
  insight.
- Job triggers are unclear.
- The build environments are mutable, which makes keeping track of
  environment state non-trivial and error-prone.
- It is hard to see at a quick glance which repo is causing a failure,
  which is bad for the development workflow on a single project when you
  are getting false negatives all the time.

Proposed Solution
=================

General
-------

Jobs shall be executed inside containers.
There is one build pipeline (a.k.a. "builder") per repo.
Build steps are generated from the directory structure within a given repo.

Example directory structure:

::

    ci
    ├── ci.sh
    ├── Containerfile
    └── jobs
        ├── 0-codespell
        │   ├── config.ini
        │   ├── dictionary.txt
        │   └── job.sh
        ├── 1-build
        │   ├── build.sh
        │   └── job.sh
        └── 2-docs
            ├── docs.sh
            └── job.sh

Job directories **MUST** follow the pattern
``<repo_root>/ci/jobs/<n-job_name>/``, where ``n`` is an integer used to
order the build steps.

Job directories **MUST** contain a script named ``job.sh``, which **MAY**
execute other scripts.

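
The ordering can be sketched with a small helper: a plain lexical glob
over ``ci/jobs/`` already yields the build order for single-digit ``n``
(``list_jobs`` is a hypothetical name for illustration, not part of the
spec):

::

    list_jobs() {
        # Print each job's script under <repo_root>/ci/jobs/ in build
        # order; the shell glob sorts the <n-job_name> prefixes.
        for dir in "$1"/ci/jobs/*/; do
            if [ -f "${dir}job.sh" ]; then
                echo "${dir}job.sh"
            fi
        done
    }
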
A config file **MAY** be created per job; it **MUST** be named
``config.ini`` and placed in the job directory.

Available config options:

::

    [build]
    HALT_ON_FAILURE = True|False
    WARN_ON_FAILURE = True|False
    CONTAINER_BUILD = True|False
    CONTAINER_NAME = <string>

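
For illustration, a hypothetical ``config.ini`` for the ``2-docs`` job
from the example tree above (the values are assumptions, not
recommendations):

::

    [build]
    HALT_ON_FAILURE = False
    WARN_ON_FAILURE = True
    CONTAINER_BUILD = True
    CONTAINER_NAME = taler-docs-builder
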
Unless *all* jobs specify a ``CONTAINER_NAME`` in their custom config, a
container file **MUST** be present at ``<repo_root>/ci/Containerfile``.
By default, this container file is built and used to run all of a repo's
jobs.

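
A minimal sketch of such a ``Containerfile``; the base image and package
list are illustrative, not prescribed by this design:

::

    # Hypothetical <repo_root>/ci/Containerfile.
    FROM debian:bookworm-slim
    RUN apt-get update && \
        apt-get install -y --no-install-recommends \
            autoconf automake build-essential codespell python3-sphinx && \
        rm -rf /var/lib/apt/lists/*
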
All projects **SHOULD** have a ``build`` step and a ``test`` step, at a
minimum.

Running CI Locally
------------------

Running the CI scripts locally can be useful for development and testing.
*Be aware that the custom config for a given job may specify an alternate
container.*

Example of building the environment and running a job locally:

::

    # From the root of the repo, build the container:
    docker build -t <local_name_for_container> -f ci/Containerfile .  # <- don't forget the "."

    # Then run one of the job scripts, for example:
    docker run --rm --volume $PWD:/workdir --workdir /workdir <local_name_for_container> ci/jobs/1-build/job.sh

    # Or get an interactive shell in the container, with the repo mounted at /workdir:
    docker run -ti --rm --volume $PWD:/workdir --workdir /workdir <local_name_for_container> /bin/bash

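
The per-job invocation can be wrapped in a small loop to run a repo's
whole pipeline locally. ``run_all_jobs`` is a hypothetical helper, not
part of the spec; note that it ignores per-job ``config.ini`` settings
such as ``CONTAINER_NAME``:

::

    run_all_jobs() {
        # Run every job script under <repo_root>/ci/jobs/ in build
        # order, using whatever runner command follows the repo root
        # argument (e.g. a "docker run" invocation, or plain "sh" to
        # run directly on the host).
        root=$1
        shift
        for job in "$root"/ci/jobs/*/job.sh; do
            echo "=== $job"
            "$@" "$job" || return 1
        done
    }

For example, ``run_all_jobs . docker run --rm --volume $PWD:/workdir
--workdir /workdir <local_name_for_container>`` would run every job
inside the container built above.
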
Additional Builders
-------------------

Some tests need the source code of many or most projects to be available
in the same environment. This will be a separate builder/pipeline from
the per-repo builders. Triggers for this builder are yet to be
determined.