DD 44: CI System
################

Summary
=======

This document describes Taler's CI system, which is based on Buildbot.
This document uses `RFC 2119 <https://www.rfc-editor.org/rfc/rfc2119>`_
keywords throughout.

Motivation
==========

The current CI system has an array of issues:

- There is a single central place for all the jobs.

  - The central config is poorly organized.
  - We should prefer to keep as much CI logic in the respective project
    source repos as possible.

- Jobs should be split up further to allow for more granular control and
  insight.
- Job triggers are unclear.
- The build environments are mutable.

  - It is non-trivial and error-prone to keep track of environment state.

- It is hard to get an overview, at a quick glance, of which repo is causing
  a failure.

  - This is bad for the development workflow on a single project when you are
    getting false negatives all the time.

Proposed Solution
=================

General
-------

Jobs shall be executed inside containers.

There is one build pipeline (a.k.a. "builder") per repo.

Build steps are generated from the directory structure within a given repo.

Example directory structure::

   ci
   ├── ci.sh
   ├── Containerfile
   └── jobs
       ├── 0-codespell
       │   ├── config.ini
       │   ├── dictionary.txt
       │   └── job.sh
       ├── 1-build
       │   ├── build.sh
       │   └── job.sh
       └── 2-docs
           ├── docs.sh
           └── job.sh

Job directories **MUST** follow this pattern: ``<repo>/ci/jobs/<n>-<job-name>/``

``n`` is an integer used for ordering the build steps.

Job directories **MUST** contain a script named ``job.sh``, which **MAY**
execute other scripts.

A config file **MAY** be created for a job; if present, it **MUST** be named
``config.ini`` and placed in the job directory.

Available config options::

   [build]
   HALT_ON_FAILURE = True|False
   WARN_ON_FAILURE = True|False
   CONTAINER_BUILD = True|False
   CONTAINER_NAME = <container-name>

Unless *all* jobs specify a ``CONTAINER_NAME`` in their custom config, a
container file **MUST** be present at ``<repo>/ci/Containerfile``.
By default, this container file will be built and used to run all of a repo's
jobs.

All projects **SHOULD** have a ``build`` step and a ``test`` step, at a
minimum.

Running CI Locally
------------------

Running the CI scripts locally can be useful for development and testing.

*Be aware that custom configs for a given job may specify an alternate
container.*

Example of building the environment and running a job locally::

   # From the root of the repo directory, build the container:
   docker build -t <container-name> -f ci/Containerfile .  # <- don't forget the "."

   # Then run one of the job scripts. For example:
   docker run --rm --volume $PWD:/workdir --workdir /workdir <container-name> ci/jobs/1-build/job.sh

   # Or, to get an interactive shell in the container with the repo mounted at /workdir:
   docker run -ti --rm --volume $PWD:/workdir --workdir /workdir <container-name> /bin/bash

Additional Builders
-------------------

To run some tests, the source code of many or most projects needs to be
available in the same environment. This will be handled by a separate
builder/pipeline, distinct from the per-repo builders. Triggers for this
builder are yet to be determined.
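
Illustrative Example
--------------------

For illustration only, the contents of a minimal ``job.sh`` for the
``1-build`` job from the example directory structure might look as follows.
The path and file names follow the pattern prescribed above; the actual build
commands are hypothetical and will differ from project to project::

   #!/bin/bash
   # ci/jobs/1-build/job.sh (hypothetical example, not taken from any actual repo)
   set -exuo pipefail

   # job.sh MAY delegate to other scripts in the same job directory,
   # e.g. the build.sh shown in the example tree above. Paths are relative
   # to the repo root, which the "Running CI Locally" example mounts at
   # /workdir.
   bash ci/jobs/1-build/build.sh

Since this hypothetical job directory contains no ``config.ini``, the job
would run with the default behavior, i.e. inside the container built from
``ci/Containerfile``.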