Commit f95791d4 authored by Mark Pundsack

Make Achilleas' suggested changes

parent a7caea9e
@@ -4,12 +4,12 @@ GitLab CI allows you to use Docker Engine to build and test docker-based project
**This also allows you to use `docker-compose` and other docker-enabled tools.**
One of the new trends in Continuous Integration/Deployment is to:

1. create an application image,
1. run tests against the created image,
1. push image to a remote registry, and
1. deploy to a server from the pushed image.
It's also useful when your application already has a `Dockerfile` that can be used to create and test an image:
```bash
@@ -19,9 +19,13 @@ $ docker tag my-image my-registry:5000/my-image
$ docker push my-registry:5000/my-image
```
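For orientation, the same commands can be run from a CI job once a Docker-capable Runner is configured (see the options below). This is only a sketch: `my-image` and `my-registry:5000` are placeholders, and everything is kept in a single job because, with the docker-in-docker approach described later, images built in one job do not survive into the next.

```yaml
build-test-release:
  script:
    # Build, test, tag, and push in one job so the image is still present
    # on the daemon for each step.
    - docker build -t my-image .
    - docker run my-image /script/to/run/tests
    - docker tag my-image my-registry:5000/my-image
    - docker push my-registry:5000/my-image
```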
This requires special configuration of GitLab Runner to enable `docker` support during builds.
## Runner Configuration

There are three methods to enable the use of `docker build` and `docker run` during builds, each with its own tradeoffs.

### Use shell executor
The simplest approach is to install GitLab Runner in `shell` execution mode.
GitLab Runner then executes build scripts as the `gitlab-runner` user.
@@ -67,11 +71,11 @@ GitLab Runner then executes build scripts as the `gitlab-runner` user.
5. You can now use the `docker` command and install `docker-compose` if needed; a short `.gitlab-ci.yml` sketch follows the note below.
> **Note:**
* By adding `gitlab-runner` to the `docker` group you are effectively granting `gitlab-runner` full root permissions.
For more information please check out [On Docker security: `docker` group considered harmful](https://www.andreas-jung.com/contents/on-docker-security-docker-group-considered-harmful).
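With the shell executor set up this way, a job can call Docker directly on the host; no `image:` keyword is needed because scripts run as the `gitlab-runner` user. A minimal sketch (the image name is a placeholder):

```yaml
build:
  script:
    # Runs on the host's Docker daemon via the gitlab-runner user.
    - docker build -t my-docker-image .
    - docker run my-docker-image /script/to/run/tests
```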
### Use docker-in-docker executor
The second approach is to use the special docker-in-docker (dind)
[Docker image](https://hub.docker.com/_/docker/) with all tools installed
@@ -118,7 +122,7 @@ In order to do that, follow the steps:
Insecure = false
```
1. You can now use `docker` in the build script:
```yaml
image: docker:latest
@@ -136,21 +140,19 @@ In order to do that, follow the steps:
- docker run my-docker-image /script/to/run/tests
```
> **Notes:**
> * By enabling `--docker-privileged`, you are effectively disabling all
the security mechanisms of containers and exposing your host to privilege
escalation, which can lead to container breakout. For more information, check out the official Docker documentation on
[Runtime privilege and Linux capabilities][docker-cap].
> * Using docker-in-docker, each build is in a clean environment without the past
history. Concurrent builds work fine because every build gets its own instance of the Docker Engine, so they won't conflict with each other. But this also means builds can be slower because there's no caching of layers.
> * By default, `docker:dind` uses `--storage-driver vfs`, which is the slowest form
offered; a possible workaround is sketched below.
An example project using this approach can be found here: https://gitlab.com/gitlab-examples/docker.
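If the `vfs` default makes builds too slow, one possible workaround is to request a faster storage driver through the `DOCKER_DRIVER` variable. This is only a sketch: it assumes the `docker:dind` entrypoint honors `DOCKER_DRIVER` (falling back to `vfs` when unset) and that the host kernel supports the chosen driver (`overlay2` here).

```yaml
image: docker:latest

services:
  - docker:dind

variables:
  # Assumption: docker:dind passes this value to dockerd as --storage-driver.
  DOCKER_DRIVER: overlay2

build:
  script:
    - docker info   # the "Storage Driver" line shows which driver is active
    - docker build -t my-docker-image .
```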
### Use Docker socket binding
The third approach is to bind-mount `/var/run/docker.sock` into the container so that docker is available in the context of that image.
@@ -172,14 +174,14 @@ In order to do that, follow the steps:
The above command will register a new Runner to use the special
`docker:latest` image which is provided by Docker. **Notice that it's using
the Docker daemon of the Runner itself, and any containers spawned by docker commands will be siblings of the Runner rather than children of the Runner.** This may have complications and limitations that are unsuitable for your workflow.
The above command will create a `config.toml` entry similar to this:
```
[[runners]]
url = "https://gitlab.com/ci"
token = REGISTRATION_TOKEN
executor = "docker"
[runners.docker]
tls_verify = false
@@ -191,7 +193,7 @@ In order to do that, follow the steps:
Insecure = false
```
1. You can now use `docker` in the build script (note that you don't need to include the `docker:dind` service as when using the docker-in-docker executor):
```yaml
image: docker:latest
@@ -206,16 +208,14 @@ In order to do that, follow the steps:
- docker run my-docker-image /script/to/run/tests
```
While the above method avoids using Docker in privileged mode, you should be aware of the following implications:
* By sharing the docker daemon, you are effectively disabling all
the security mechanisms of containers and exposing your host to privilege
escalation, which can lead to container breakout. For example, if a project
ran `docker rm -f $(docker ps -a -q)`, it would remove the GitLab Runner
containers.
* Concurrent builds may not work; if your tests
create containers with specific names, they may conflict with each other (one mitigation is sketched after this list).
* Sharing files and directories from the source repo into containers may not
work as expected since volume mounting is done in the context of the host
machine, not the build container.
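Since all jobs share the host's Docker daemon with this approach, one way to reduce the risk of naming conflicts between concurrent builds is to derive container names from a predefined variable; `CI_BUILD_ID` (unique per build) is used below purely as an illustration, and the image name is a placeholder.

```yaml
image: docker:latest

test:
  script:
    # The build ID keeps the name unique across concurrent builds on the
    # shared daemon; --rm removes the container once the tests finish.
    - docker run --rm --name "tests-$CI_BUILD_ID" my-docker-image /script/to/run/tests
```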
@@ -306,11 +306,11 @@ deploy:
- master
```
Some things you should be aware of when using the Container Registry (a combined example follows these notes):
* You must log in to the container registry before running commands. Putting this in `before_script` will run it before each build job.
* Using `docker build --pull` makes sure that Docker fetches any changes to base images before building, in case your cache is stale. It takes slightly longer, but means you don't get stuck without security patches to base images.
* Doing an explicit `docker pull` before each `docker run` makes sure to fetch the latest image that was just built. This is especially important if you are using multiple Runners that cache images locally. Using the Git SHA in your image tag makes this less necessary since each build will be unique and you shouldn't ever have a stale image, but it's still possible if you rebuild a given commit after a dependency has changed.
* You don't want to build directly to `latest` in case there are multiple builds happening simultaneously.
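Putting those notes together, a `.gitlab-ci.yml` for the Container Registry might look like the sketch below. The registry address `registry.example.com` and the `group/project` path are placeholders for your own registry and project; `CI_BUILD_TOKEN` and `CI_BUILD_REF` are the predefined build token and commit SHA variables.

```yaml
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - release

before_script:
  # Log in before every job that talks to the registry.
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.example.com

build:
  stage: build
  script:
    # --pull refreshes base images; tagging with the commit SHA keeps every build unique.
    - docker build --pull -t registry.example.com/group/project:$CI_BUILD_REF .
    - docker push registry.example.com/group/project:$CI_BUILD_REF

release:
  stage: release
  script:
    # Promote the already-built, SHA-tagged image to latest rather than
    # building latest directly, so simultaneous builds cannot race each other.
    - docker pull registry.example.com/group/project:$CI_BUILD_REF
    - docker tag registry.example.com/group/project:$CI_BUILD_REF registry.example.com/group/project:latest
    - docker push registry.example.com/group/project:latest
  only:
    - master
```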
[docker-in-docker]: https://blog.docker.com/2013/09/docker-can-now-run-within-docker/
[docker-cap]: https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities