Run Docker commands in Bitbucket Pipelines
Bitbucket Pipelines allows you to build a Docker image from a Dockerfile in your repository and push that image to a Docker registry, by running Docker commands within your build pipeline. Dive straight in – the pipeline environment is provided by default and you don't need to customize it!
Enable access to Docker
To enable access to the Docker daemon, you can either add docker as a service on the step (recommended), or add the global docker: true option in your bitbucket-pipelines.yml file.
Add Docker as a service in your build step (recommended)
```yaml
pipelines:
  default:
    - step:
        script:
          - ...
        services:
          - docker
```
Note that Docker does not need to be declared as a service in the definitions section. It is a default service that Pipelines provides without a definition.
Add Docker to all build steps in your repository
```yaml
options:
  docker: true
```
Note that even if you declare Docker here, it still counts as a service for Pipelines, has a limit of 1 GB memory, and can only be run with two other services in your build step. This setting is provided for legacy support, and we recommend setting it on a step level so there's no confusion about how many services you can run in your pipeline.
Configuring Docker as a service will:
- mount the Docker CLI executable in your build container
- run and provide your build access to a Docker daemon
You can verify this by running docker version in your script:
```yaml
pipelines:
  default:
    - step:
        script:
          - docker version
        services:
          - docker
```
You can check your bitbucket-pipelines.yml file with our online validator.
Running Docker commands
Inside your Pipelines script you can run most Docker commands. The exceptions, blocked for security reasons on our shared build infrastructure, are Docker swarm-related commands, docker run --privileged, and mapping volumes with a source outside $BITBUCKET_CLONE_DIR.
See the Docker command line reference for information on how to use these commands.
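For instance, a step might build an image from the repository's Dockerfile and run a container from it. This is a sketch; the image name my-app and the data directory are placeholders:

```yaml
pipelines:
  default:
    - step:
        script:
          # Build an image from the Dockerfile in the repository root
          - docker build -t my-app .
          # Run a container; note the volume source stays inside $BITBUCKET_CLONE_DIR
          - docker run --rm -v $BITBUCKET_CLONE_DIR/data:/data my-app
        services:
          - docker
```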
Using Docker Compose
If you'd like to use Docker Compose in your container, you'll need to install a binary that is compatible with your specified build container.
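One common approach is to download a standalone docker-compose binary in your script. This is a sketch; pin a Compose release that suits your build image, as the version below is only illustrative:

```yaml
pipelines:
  default:
    - step:
        script:
          # Download a standalone docker-compose binary (pin a version that suits your image)
          - curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
          - chmod +x /usr/local/bin/docker-compose
          # Start the services defined in docker-compose.yml
          - docker-compose up -d
        services:
          - docker
```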
Using an external Docker daemon
If you have configured your build to run commands against your own Docker daemon hosted elsewhere, you can continue to do so. In this case, you should provide your own CLI executable as part of your build image (rather than enabling Docker in Pipelines), so the CLI version is compatible with the daemon version you are running.
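As a sketch, assuming your daemon is reachable at a host you control (the address below is a placeholder), you can point the Docker CLI at it with the DOCKER_HOST environment variable instead of enabling the docker service:

```yaml
pipelines:
  default:
    - step:
        script:
          # Point the CLI (bundled in your build image) at your own daemon;
          # the address is a placeholder for your daemon's endpoint
          - export DOCKER_HOST=tcp://docker.example.com:2376
          - docker version
```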
Docker layer caching
If you have added Docker as a service, you can also add a Docker cache to your steps. Adding the cache can speed up your build by reusing previously built layers and only creating new dynamic layers as required in the step.
```yaml
pipelines:
  default:
    - step:
        script:
          - docker build ...
        services:
          - docker
        caches:
          - docker  # adds docker layer caching
```
A common use case for the Docker cache is when you are building images. However, if you find that performance slows with the cache enabled, check that you are not invalidating the layers in your Dockerfile.
Docker layer caches have the same limitations and behaviors as regular caches as described on Caching Dependencies.
Docker memory limits
By default, the Docker daemon in Pipelines has a total memory limit of 1024 MB. This allocation includes all containers run via
docker run commands, as well as the memory needed to execute
docker build commands.
To increase this limit you can set a memory limit for the built-in
docker service. The
memory parameter is a whole number of megabytes greater than 128, and it must fit within the memory available to the step:
```yaml
pipelines:
  default:
    - step:
        script:
          - docker version
        services:
          - docker
definitions:
  services:
    docker:
      memory: 2048
```
Authenticate when pushing to a registry
To push images to a registry, you need to use docker login to authenticate prior to calling docker push.
For example, add this to your pipeline script:
```shell
docker login --username $DOCKER_USERNAME --password $DOCKER_PASSWORD
```
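Putting this together, a push step might look like the following sketch, where DOCKER_USERNAME and DOCKER_PASSWORD are stored as secured repository variables and the image name is a placeholder; piping the password to --password-stdin keeps it out of the command line:

```yaml
pipelines:
  default:
    - step:
        script:
          # Authenticate first; credentials come from secured repository variables
          - echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USERNAME" --password-stdin
          # Build and push the image (registry and image name are placeholders)
          - docker build -t my-registry/my-app .
          - docker push my-registry/my-app
        services:
          - docker
```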