- Using Windows containers
- Networking
- Build directory in service
- The builds and cache storage
- The persistent storage
- The privileged mode
- How pull policies work
GitLab Runner can use Docker to run jobs on user-provided images. This is possible with the use of the Docker executor.
- GitLab Runner uses Docker Engine API v1.25 to talk to the Docker Engine. This means the minimum supported version of Docker on a Linux server is 1.13.0; on Windows Server, it needs to be more recent to identify the Windows Server version.
The Docker executor, when used with GitLab CI, connects to Docker Engine and runs each build in a separate and isolated container using the predefined image that is set up in `.gitlab-ci.yml` and in accordance with `config.toml`.

That way you can have a simple and reproducible build environment that can also run on your workstation. The added benefit is that you can test all the commands that we will explore later from your shell, rather than having to test them on a dedicated CI server.
The following configurations are supported:

| Runner is installed on: | Executor is:     | Container is running: |
|-------------------------|------------------|------------------------|
| Windows                 | `docker-windows` | Windows                |
| Windows                 | `docker`         | Linux                  |
| Linux                   | `docker`         | Linux                  |

These configurations are not supported:

| Runner is installed on: | Executor is:     | Container is running: |
|-------------------------|------------------|------------------------|
| Linux                   | `docker-windows` | Linux                  |
| Linux                   | `docker`         | Windows                |
| Linux                   | `docker-windows` | Windows                |
| Windows                 | `docker`         | Windows                |
| Windows                 | `docker-windows` | Linux                  |
Using Windows containers
To use Windows containers with the Docker executor, note the following information about limitations, supported Windows versions, and configuring a Windows Docker executor.
Nanoserver support
Introduced in GitLab Runner 13.6.
With the support for PowerShell Core introduced in the Windows helper image, it is now possible to leverage the `nanoserver` variants for the helper image.
Limitations
The following are some limitations of using Windows containers with the Docker executor:
- Docker-in-Docker is not supported, since it's not supported by Docker itself.
- Interactive web terminals are not supported.
- Host device mounting is not supported.
- When mounting a volume directory it has to exist, or Docker will fail to start the container. See #3754 for additional detail.
- The `docker-windows` executor can be run only using GitLab Runner running on Windows.
- Linux containers on Windows are not supported, since they are still experimental. Read the relevant issue for more details.
Because of a limitation in Docker, if the destination path drive letter is not `c:`, paths are not supported for the `builds_dir`, `cache_dir`, and `volumes` settings. This means values such as `f:\cache_dir` are not supported, but `f:\` is supported. However, if the destination path is on the `c:` drive, paths are also supported (for example `c:\cache_dir`).
Supported Windows versions
GitLab Runner only supports the following versions of Windows, which follow our support lifecycle for Windows:
- Windows Server 2004.
- Windows Server 1909.
- Windows Server 1903.
- Windows Server 1809.
For future Windows Server versions, we have a future version support policy.
You can only run containers based on the same OS version that the Docker daemon is running on. For example, the following Windows Server Core images can be used:

- `mcr.microsoft.com/windows/servercore:2004`
- `mcr.microsoft.com/windows/servercore:2004-amd64`
- `mcr.microsoft.com/windows/servercore:1909`
- `mcr.microsoft.com/windows/servercore:1909-amd64`
- `mcr.microsoft.com/windows/servercore:1903`
- `mcr.microsoft.com/windows/servercore:1903-amd64`
- `mcr.microsoft.com/windows/servercore:1809`
- `mcr.microsoft.com/windows/servercore:1809-amd64`
- `mcr.microsoft.com/windows/servercore:ltsc2019`
Supported Docker versions
A Windows Server running GitLab Runner must be running a recent version of Docker because GitLab Runner uses Docker to detect what version of Windows Server is running.

A combination known not to work with GitLab Runner is Docker 17.06 and Windows Server 1909. Docker does not identify the version of Windows Server, resulting in the following error:

This error should contain the Windows Server version. If you get this error with no version specified, upgrade Docker. Try a Docker version of a similar age to, or later than, the Windows Server release.
Read more about troubleshooting this.
Configuring a Windows Docker executor
There is a known issue when a runner is registered with `c:\cache` as a source directory when passing the `--docker-volumes` or `DOCKER_VOLUMES` environment variable.

Below is an example of the configuration for a simple Docker executor running Windows.
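A minimal sketch of such a `config.toml` (the name, URL, token, and image tag are placeholders; pick a Windows Server Core tag that matches the host OS version):

```toml
[[runners]]
  name = "windows-docker-runner"        # placeholder name
  url = "https://gitlab.example.com/"   # placeholder GitLab URL
  token = "RUNNER_TOKEN"                # placeholder runner token
  executor = "docker-windows"
  [runners.docker]
    image = "mcr.microsoft.com/windows/servercore:1809"
    volumes = ["c:\\cache"]
```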
For other configuration options for the Docker executor, see the advanced configuration section.
Services
You can use services by enabling the network per-build networking mode. Available since GitLab Runner 12.9.
Workflow
The Docker executor divides the job into multiple steps:
- Prepare: Create and start the services.
- Pre-job: Clone, restore cache and download artifacts from previous stages. This is run on a special Docker image.
- Job: User build. This is run on the user-provided Docker image.
- Post-job: Create cache, upload artifacts to GitLab. This is run on a special Docker image.

The special Docker image is based on Alpine Linux and contains all the tools required to run the prepare, pre-job, and post-job steps, such as Git and the GitLab Runner binaries for supporting caching and artifacts. You can find the definition of this special image in the official GitLab Runner repository.
The `image` keyword

The `image` keyword is the name of the Docker image that is present in the local Docker Engine (list all images with `docker images`) or any image that can be found at Docker Hub. For more information about images and Docker Hub, please read the Docker Fundamentals documentation.
In short, with `image` we refer to the Docker image, which will be used to create a container on which your build will run.

If you don't specify the namespace, Docker implies `library`, which includes all official images. That's why you'll often see the `library` part omitted in `.gitlab-ci.yml` and `config.toml`. For example, you can define an image like `image: ruby:2.6`, which is a shortcut for `image: library/ruby:2.6`.
Then, for each Docker image there are tags, denoting the version of the image. These are defined with a colon (`:`) after the image name. For example, for Ruby you can see the supported tags at https://hub.docker.com/_/ruby/. If you don't specify a tag (like `image: ruby`), `latest` is implied.
The image you choose to run your build in via the `image` directive must have a working shell in its operating system `PATH`. Supported shells are `sh`, `bash`, and `pwsh` (since 13.9) for Linux, and PowerShell for Windows. GitLab Runner cannot execute a command using the underlying OS system calls (such as `exec`).
The `services` keyword

The `services` keyword defines just another Docker image that is run during your build and is linked to the Docker image that the `image` keyword defines. This allows you to access the service image during build time.
The service image can run any application, but the most common use case is to run a database container, for example `mysql`. It's easier and faster to use an existing image and run it as an additional container than to install `mysql` every time the project is built.
You can see some widely used services examples in the relevant documentation of CI services examples.

If needed, you can assign an alias to each service.
Networking
Networking is required to connect services to the build job and may also be used to run build jobs in user-defined networks. Either legacy `network_mode` or per-build networking may be used.
Legacy container links
The default network mode uses legacy container links with the default Docker `bridge` mode to link the job container with the services.
`network_mode` can be used to configure how the networking stack is set up for the containers, using one of the following values:

- One of the standard Docker networking modes:
  - `bridge`: use the bridge network (default)
  - `host`: use the host's network stack inside the container
  - `none`: no networking (not recommended)
- Any other `network_mode` value is taken as the name of an already existing Docker network, which the build container should connect to.
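As a minimal sketch, selecting one of these modes in `config.toml` could look like this (the `host` value is only illustrative):

```toml
[runners.docker]
  network_mode = "host"
```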
For name resolution to work, Docker will manipulate the `/etc/hosts` file in the build job container to include the service container hostname (and alias). However, the service container will not be able to resolve the build job container name. To achieve that, use the per-build network mode.
Linked containers share their environment variables.
Network per-build
Introduced in GitLab Runner 12.9.
This mode will create and use a new user-defined Docker bridge network per build. User-defined bridge networks are covered in detail in the Docker documentation.
Unlike legacy container links used in other network modes, Docker environment variables are not shared across the containers.

Docker networks may conflict with other networks on the host, including other Docker networks, if the CIDR ranges are already in use. The default Docker address pool can be configured via `default-address-pool` in `dockerd`.
To enable this mode you need to enable the `FF_NETWORK_PER_BUILD` feature flag.
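For example, the flag can be set as a runner-level environment variable in `config.toml` (it can also be set as a CI/CD variable); a minimal sketch:

```toml
[[runners]]
  environment = ["FF_NETWORK_PER_BUILD=1"]
```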
When a job starts, a bridge network is created (similarly to `docker network create <network>`). Upon creation, the service container(s) and the build job container are connected to this network.

Both the build job container and the service container(s) will be able to resolve each other's hostnames (and aliases). This functionality is provided by Docker.

The build container is resolvable via the `build` alias as well as its GitLab-assigned hostname.
The network is removed at the end of the build job.
Define image and services from .gitlab-ci.yml
You can simply define an image that will be used for all jobs and a list of services that you want to use during build time.
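A minimal sketch in `.gitlab-ci.yml` (the image and service tags here are only illustrative):

```yaml
image: ruby:2.6

services:
  - postgres:11.9
  - mysql:5.7
```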
It is also possible to define different images and services per job:
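For example, a hedged sketch with two jobs using different images and services (job names, images, and tags are illustrative):

```yaml
test-ruby26:
  image: ruby:2.6
  services:
    - postgres:11.9
  script:
    - bundle exec rake spec

test-ruby27:
  image: ruby:2.7
  services:
    - postgres:12.2
  script:
    - bundle exec rake spec
```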
Define image and services in config.toml
Look for the `[runners.docker]` section:
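A sketch of what that section might look like (the image and service names are illustrative):

```toml
[runners.docker]
  image = "ruby:2.6"

  [[runners.docker.services]]
  name = "mysql:latest"
  alias = "db"

  [[runners.docker.services]]
  name = "redis:latest"
  alias = "cache"
```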
The example above uses the array of tables syntax.
The image and services defined this way will be added to all builds run by that runner, so even if you don't define an `image` inside `.gitlab-ci.yml`, the one defined in `config.toml` will be used.
Define an image from a private Docker registry
Starting with GitLab Runner 0.6.0, you are able to define images located in private registries that could also require authentication.

All you have to do is be explicit about the image definition in `.gitlab-ci.yml`.
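For example, the image can be referenced by its full registry address, as described in the next paragraph:

```yaml
image: my.registry.tld:5000/namespace/image:tag
```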
In the example above, GitLab Runner will look at `my.registry.tld:5000` for the image `namespace/image:tag`.
If the repository is private you need to authenticate your GitLab Runner in the registry. Read more on using a private Docker registry.
Accessing the services
Let's say that you need a WordPress instance to test some API integration with your application.

You can then use, for example, the tutum/wordpress image as a service image in your `.gitlab-ci.yml`:
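A minimal sketch of the `services` entry (the `latest` tag is just an example):

```yaml
services:
  - tutum/wordpress:latest
```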
When the build is run, `tutum/wordpress` will be started first and you will have access to it from your build container under the hostnames `tutum__wordpress` and `tutum-wordpress`.
GitLab Runner creates two alias hostnames for the service that you can use alternatively. The aliases are taken from the image name following these rules:

- Everything after `:` is stripped.
- For the first alias, the slash (`/`) is replaced with double underscores (`__`).
- For the second alias, the slash (`/`) is replaced with a single dash (`-`).
Using a private service image will strip any port given and apply the rules as described above. A service `registry.gitlab-wp.com:4999/tutum/wordpress` will result in the hostnames `registry.gitlab-wp.com__tutum__wordpress` and `registry.gitlab-wp.com-tutum-wordpress`.
Configuring services
Many services accept environment variables, which allow you to easily change database names or set account names depending on the environment.

GitLab Runner 0.5.0 and up passes all YAML-defined variables to the created service containers.

For all possible configuration variables, check the documentation of each image provided on its corresponding Docker Hub page.
All variables are passed to all service containers. It's not designed to distinguish which variable should go where. Secure variables are only passed to the build container.
Mounting a directory in RAM
You can mount a path in RAM using tmpfs. This can speed up the time required to test if there is a lot of I/O related work, such as with databases. If you use the `tmpfs` and `services_tmpfs` options in the runner configuration, you can specify multiple paths, each with its own options. See the Docker reference for details. This is an example `config.toml` to mount the data directory for the official MySQL container in RAM.
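A hedged sketch, assuming MySQL keeps its data in `/var/lib/mysql` (adjust the path and mount options to your image, and use whichever of the two options matches where MySQL runs in your setup):

```toml
[runners.docker]
  # tmpfs mounts for the build container
  [runners.docker.tmpfs]
      "/var/lib/mysql" = "rw,noexec"

  # tmpfs mounts for service containers (such as a MySQL service)
  [runners.docker.services_tmpfs]
      "/var/lib/mysql" = "rw,noexec"
```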
Build directory in service
Since version 1.5, GitLab Runner mounts a `/builds` directory to all shared services.
See an issue: https://gitlab.com/gitlab-org/gitlab-runner/-/issues/1520.
PostgreSQL service example
See the specific documentation for using PostgreSQL as a service.
MySQL service example
See the specific documentation for using MySQL as a service.
The services health check
After the service is started, GitLab Runner waits some time for the service to be responsive. Currently, the Docker executor tries to open a TCP connection to the first exposed service in the service container.
You can see how it is implemented by checking this Go command.
The builds and cache storage
The Docker executor by default stores all builds in `/builds/<namespace>/<project-name>` and all caches in `/cache` (inside the container). You can overwrite the `/builds` and `/cache` directories by defining the `builds_dir` and `cache_dir` options under the `[[runners]]` section in `config.toml`. This will modify where the data are stored inside the container.
If you modify the `/cache` storage path, you also need to make sure to mark this directory as persistent by defining it in `volumes = ["/my/cache/"]` under the `[runners.docker]` section in `config.toml`.
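A hedged sketch combining both settings (the `/my/...` paths are placeholders):

```toml
[[runners]]
  builds_dir = "/my/builds"
  cache_dir = "/my/cache"
  [runners.docker]
    volumes = ["/my/cache/"]
```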
Clearing Docker cache
Introduced in GitLab Runner 13.9, all created runner resources are cleaned up.
GitLab Runner provides the `clear-docker-cache` script to remove old containers and volumes that can unnecessarily consume disk space.
Run `clear-docker-cache` regularly (using `cron` once per week, for example), ensuring a balance is struck between:
- Maintaining some recent containers in the cache for performance.
- Reclaiming disk space.
`clear-docker-cache` can remove old or unused containers and volumes that are created by GitLab Runner. For a list of options, run the script with the `help` option:
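Assuming the script is on your `PATH` (it ships with the GitLab Runner package; otherwise call it by its installed path), the invocation looks like this:

```shell
clear-docker-cache help
```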
The default option is `prune-volumes`, which makes the script remove all unused containers (both dangling and unreferenced) and volumes.
Clearing old build images
The `clear-docker-cache` script will not remove Docker images, as they are not tagged by GitLab Runner. You can, however, confirm the space that can be reclaimed by running the script with the `space` option as illustrated below:
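For example (again assuming the script is on your `PATH`):

```shell
clear-docker-cache space
```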
Once you have confirmed the reclaimable space, run the `docker system prune` command, which removes all unused containers, networks, images (both dangling and unreferenced), and optionally volumes that are not tagged by GitLab Runner.
The persistent storage
The Docker executor can provide persistent storage when running the containers. All directories defined under `volumes =` will be persistent between builds.
The `volumes` directive supports two types of storage:

- `<path>` - the dynamic storage. The `<path>` is persistent between subsequent runs of the same concurrent job for that project. The data is attached to a custom cache volume: `runner-<short-token>-project-<id>-concurrent-<concurrency-id>-cache-<md5-of-path>`.
- `<host-path>:<path>[:<mode>]` - the host-bound storage. The `<path>` is bound to `<host-path>` on the host system. The optional `<mode>` can specify that this storage is read-only or read-write (default).
The persistent storage for builds
If you make the `/builds` directory a host-bound storage, your builds will be stored in `/builds/<short-token>/<concurrent-id>/<namespace>/<project-name>`, where:

- `<short-token>` is a shortened version of the Runner's token (first 8 letters)
- `<concurrent-id>` is a unique number, identifying the local job ID on the particular runner in the context of the project
The privileged mode
The Docker executor supports a number of options that allow fine-tuning of the build container. One of these options is the `privileged` mode.
Use Docker-in-Docker with privileged mode
The configured `privileged` flag is passed to the build container and all services, thus allowing you to easily use the Docker-in-Docker approach.
First, configure your runner (`config.toml`) to run in `privileged` mode:
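A minimal sketch of the relevant part of `config.toml`:

```toml
[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true
```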
Then, make your build script (`.gitlab-ci.yml`) use a Docker-in-Docker container:
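A hedged sketch (the image tags and the `my-image` name are placeholders; depending on your Docker and dind versions you may also need variables such as `DOCKER_HOST` or `DOCKER_TLS_CERTDIR`):

```yaml
image: docker:git

services:
  - docker:dind

build:
  script:
    - docker build -t my-image .
    - docker push my-image
```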
The ENTRYPOINT
The Docker executor doesn't overwrite the `ENTRYPOINT` of a Docker image.

That means that if your image defines the `ENTRYPOINT` and doesn't allow running scripts with `CMD`, the image will not work with the Docker executor.

With the use of `ENTRYPOINT` it is possible to create a special Docker image that would run the build script in a custom environment, or in secure mode.

You may think of creating a Docker image that uses an `ENTRYPOINT` that doesn't execute the build script, but does execute a predefined set of commands, for example to build the Docker image from your directory. In that case, you can run the build container in privileged mode, and make the build environment of the runner secure.
Consider the following example:
- Create a new Dockerfile (see the sketch after this list).
- Create a bash script (`entrypoint.sh`) that will be used as the `ENTRYPOINT`.
- Push the image to the Docker registry.
- Run the Docker executor in `privileged` mode, as shown in the `config.toml` example above.
- In your project, use the image in `.gitlab-ci.yml`.
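A minimal sketch of such a Dockerfile, assuming you supply an `entrypoint.sh` that ignores the job script and runs your own predefined commands instead (the base image is only an example):

```dockerfile
# Image whose ENTRYPOINT runs a fixed set of commands
# instead of the job script passed in by the runner.
FROM docker:20.10

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
```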
This is just one of the examples. With this approach the possibilities are limitless.
How pull policies work
When using the `docker` or `docker+machine` executors, you can set the `pull_policy` parameter in the runner `config.toml` file as described in the Docker section of the configuration docs.
This parameter defines how the runner works when pulling Docker images (for both the `image` and `services` keywords). You can set it to a single value, or a list of pull policies, which will be attempted in order until an image is pulled successfully.
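For example, a single-value setting in `config.toml` might look like this (the value shown is only illustrative):

```toml
[runners.docker]
  pull_policy = "if-not-present"
```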
If you don't set any value for the `pull_policy` parameter, then the runner will use the `always` pull policy as the default value.
Now let’s see how these policies work.
Using the `never` pull policy
The `never` pull policy disables image pulling completely. If you set the `pull_policy` parameter of a runner to `never`, then users will be able to use only the images that have been manually pulled on the Docker host the runner runs on.
If an image cannot be found locally, then the runner will fail the build with an error similar to:
When to use this pull policy?
This pull policy should be used if you want or need to have full control over which images are used by the runner's users. It is a good choice for private runners that are dedicated to a project where only specific images can be used (images not publicly available on any registries).
When not to use this pull policy?
This pull policy will not work properly with most auto-scaled Docker executor use cases. Because of how auto-scaling works, the `never` pull policy may be usable only when using pre-defined cloud instance images for the chosen cloud provider. The image needs to contain the installed Docker Engine and a local copy of the used images.
Using the `if-not-present` pull policy
When the `if-not-present` pull policy is used, the runner will first check if the image is present locally. If it is, then the local version of the image will be used. Otherwise, the runner will try to pull the image.
When to use this pull policy?
This pull policy is a good choice if you want to use images pulled from remote registries, but you want to reduce time spent on analyzing image layer differences when using heavy and rarely updated images. In that case, you will occasionally need to manually remove the image from the local Docker Engine store to force an update of the image.
It is also a good choice if you need to use images that are built and available only locally, but on the other hand also need to allow pulling images from remote registries.
When not to use this pull policy?
This pull policy should not be used if your builds use images that are updated frequently and need to be used in their most recent versions. In such a situation, the network load reduction created by this policy may be less valuable than the need to very frequently delete local copies of images.
This pull policy should also not be used if your runner can be used by different users who should not have access to private images used by each other. In particular, do not use this pull policy for shared runners.
To understand why the `if-not-present` pull policy creates security issues when used with private images, read the security considerations documentation.
Using the `always` pull policy
The `always` pull policy will ensure that the image is always pulled. When `always` is used, the runner will try to pull the image even if a local copy is available. The caching semantics of the underlying image provider make this policy efficient. The pull attempt is fast because all image layers are cached.
If the image is not found, then the build will fail with an error similar to:
When using the `always` pull policy in GitLab Runner versions older than `v1.8`, it could fall back to the local copy of an image and print a warning:
This was changed in GitLab Runner `v1.8`.
When to use this pull policy?
This pull policy should be used if your runner is publicly available and configured as a shared runner in your GitLab instance. It is the only pull policy that can be considered secure when the runner will be used with private images.
This is also a good choice if you want to force users to always use the newest images.
Also, this will be the best solution for an auto-scaled configuration of the runner.
When not to use this pull policy?
This pull policy will definitely not work if you need to use locally stored images. In this case, the runner will skip the local copy of the image and try to pull it from the remote registry. If the image was built locally and doesn't exist in any public registry (and especially in the default Docker registry), the build will fail with:
Using multiple pull policies
Introduced in GitLab Runner 13.8.
The `pull_policy` parameter allows you to specify a list of pull policies. The policies in the list will be attempted in order from left to right until a pull attempt is successful, or the list is exhausted.
When to use multiple pull policies?
This functionality can be useful when the Docker registry is not available and you need to increase job resiliency. If you use the `always` policy and the registry is not available, the job fails even if the desired image is cached locally.
To overcome that behavior, you can add additional fallback pull policies that execute in case of failure. By adding a second pull policy value of `if-not-present`, the runner finds any locally-cached Docker image layers:
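A sketch of such a `config.toml` setting:

```toml
[runners.docker]
  pull_policy = ["always", "if-not-present"]
```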
Any failure to fetch the Docker image causes the runner to attempt the following pull policy. Examples include an `HTTP 403 Forbidden` or an `HTTP 500 Internal Server Error` response from the repository.
Note that the security implications mentioned in the "When not to use this pull policy?" sub-section of the Using the if-not-present pull policy section still apply, so you should be aware of them and read the security considerations documentation.
Docker vs Docker-SSH (and Docker+Machine vs Docker-SSH+Machine)
We provided support for a special type of Docker executor, namely Docker-SSH (and the autoscaled version: Docker-SSH+Machine). Docker-SSH uses the same logic as the Docker executor, but instead of executing the script directly, it uses an SSH client to connect to the build container.

Docker-SSH then connects to the SSH server that is running inside the container using its internal IP.
This executor is no longer maintained and will be removed in the near future.