Building Linux Docker Images on Windows 2008 R2 with Maven and TeamCity

This post describes how to use Maven to build a Docker image using a remote Docker host running on Linux. This means that the Maven build can run anywhere, for example in TeamCity on Windows. The assumption here is that we have a separate (virtual) machine running Linux (RHEL 7), and we use this machine both as a Docker host for building images, and also as a private Docker registry.

The background to all this is that an organization I’m working with has standardized on TeamCity running on Windows Server 2008 R2 for continuous integration. They are in the process of moving TeamCity to Windows Server 2012 R2, but the same setup can hopefully be used on the new build server.

The organization is mainly Windows-based, but there are some important Java services running on Linux (Red Hat, RHEL 7), with some more on the way. I have started experimenting with Docker for easier deployment of Java services. Since the organization is not running any very recent Windows servers with native Docker support, the focus here is on Docker for Linux.

The steps to get this build process up and running are as follows:

  1. Install Docker on the Linux machine.
  2. Allow remote access to the Docker daemon in a secure way.
  3. Configure the Maven pom.xml to create a Docker image using the Linux Docker host.
  4. Set up a private Docker registry on the Linux machine.
  5. Update the Maven pom.xml so that we can push images to the private registry.
  6. Configure TeamCity to run the build process.

Installing Docker

Docker has recently changed the packaging of distributions so that the free version is now called the Community Edition (CE), while the version where you pay for support is called the Enterprise Edition (EE).

On Red Hat, only the Enterprise Edition is supported. On CentOS, both editions are available, so to experiment with Docker for free, using CentOS is one obvious way to go. In this case, however, the organization I’m working with has standardized only on Red Hat, not CentOS, so the machine that is available for experimentation is running Red Hat 7. Since I am still only experimenting, I decided to give the free Docker version for CentOS a chance, even though the machine is running RHEL 7. These instructions should work on CentOS as well.

Follow the official installation instructions for Docker on CentOS:

$ sudo yum install -y yum-utils
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum makecache fast
$ sudo yum install -y docker-ce
$ sudo systemctl start docker
$ sudo docker run hello-world

The last command should print some text starting with “Hello from Docker!”. If so, you have successfully installed Docker on your machine.

Allowing Remote Access to the Docker Daemon

At the moment, Docker can only be used by root on the local machine. This is because the Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by root, and other users can only access it using sudo. The Docker daemon itself always runs as the root user.

We want to access Docker from another machine in order to build Docker images from a Windows machine, so we need to configure Docker to listen on a TCP socket. Since anyone who can access the Docker daemon gets root privileges, we want to limit access using TLS and certificates. We will set up our own certificate authority (CA). If you have access to certificates from some other CA, you can use those instead.

First of all we create the CA:

$ cd
$ mkdir -p docker/ca
$ cd docker/ca/
$ openssl genrsa -aes256 -out ca-key.pem 4096
$ openssl req -new -x509 -days 1825 -key ca-key.pem -sha256 \
    -out ca.pem

Then we create a key and certificate for the server:

### Set HOST to the DNS name of your Docker daemon’s host:
$ cd
$ mkdir -p docker/certs
$ cd docker/certs
$ ln -s ../ca/ca.pem .
$ openssl genrsa -out server-key.pem 4096
$ openssl req -subj "/CN=$HOST" -sha256 -new \
    -key server-key.pem -out server.csr
### Provide all DNS names and IP addresses that will be used
### to contact the Docker daemon (HOSTIP is the server’s IP):
$ echo subjectAltName = DNS:$HOST,IP:$HOSTIP,IP:127.0.0.1 \
    > extfile.cnf
$ openssl x509 -req -days 365 -sha256 -in server.csr \
    -CA ../ca/ca.pem -CAkey ../ca/ca-key.pem -CAcreateserial \
    -out server-cert.pem -extfile extfile.cnf

Now we create a key and certificate for the client:

$ openssl genrsa -out key.pem 4096
$ openssl req -subj '/CN=client' -new -key key.pem \
    -out client.csr
$ echo extendedKeyUsage = clientAuth > extfile.cnf
$ openssl x509 -req -days 365 -sha256 -in client.csr \
    -CA ../ca/ca.pem -CAkey ../ca/ca-key.pem -CAcreateserial \
    -out cert.pem -extfile extfile.cnf

Clean up the certificate directories:

$ rm client.csr server.csr extfile.cnf
$ chmod 0400 ../ca/ca-key.pem key.pem server-key.pem
$ chmod 0444 ../ca/ca.pem server-cert.pem cert.pem
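
If you want a quick sanity check that certificates created this way chain correctly, the following throwaway, non-interactive variant of the steps above can be run anywhere and verified with openssl verify. Note that the host name is a dummy and the CA key has no passphrase here, unlike the real setup:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
HOST=docker.example.com   # dummy host name for the test

# Throwaway CA (the real CA key should be protected with -aes256)
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -days 1 -key ca-key.pem -sha256 \
    -subj "/CN=test-ca" -out ca.pem

# Server key and certificate signed by the throwaway CA
openssl genrsa -out server-key.pem 2048
openssl req -subj "/CN=$HOST" -sha256 -new \
    -key server-key.pem -out server.csr
echo "subjectAltName = DNS:$HOST,IP:127.0.0.1" > extfile.cnf
openssl x509 -req -days 1 -sha256 -in server.csr \
    -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
    -out server-cert.pem -extfile extfile.cnf

# The server certificate should verify against the CA
openssl verify -CAfile ca.pem server-cert.pem   # prints "server-cert.pem: OK"
```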

We are finally ready to enable remote access to Docker:

$ cd
$ sudo mkdir /etc/systemd/system/docker.service.d
### Substitute $HOME/docker/certs with the directory where you
### created the certificates above:
### The empty ExecStart= clears the default command before
### replacing it:
$ cat > docker.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --tlsverify \
--tlscacert=$HOME/docker/certs/ca.pem \
--tlscert=$HOME/docker/certs/server-cert.pem \
--tlskey=$HOME/docker/certs/server-key.pem \
-H tcp://0.0.0.0:2376
EOF
$ sudo mv docker.conf /etc/systemd/system/docker.service.d/
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ sudo systemctl enable docker

We will now tell the docker client how to connect to the daemon:

### Set HOST to the DNS name used in the server certificate:
$ export DOCKER_HOST=tcp://$HOST:2376
$ export DOCKER_TLS_VERIFY=1
### Substitute ~/docker/certs with your certificate directory:
$ export DOCKER_CERT_PATH=~/docker/certs
$ docker run hello-world

If the last command printed some text starting with “Hello from Docker!”, congratulations, you have now configured the Docker daemon to allow remote access on port 2376, the standard port to use for Docker over TLS.

Please note that you did not have to use sudo to run the docker command as root. Anyone who has access to the client key docker/certs/key.pem and the client certificate docker/certs/cert.pem can now call the Docker daemon from a remote host, in practice getting root access to the machine Docker is running on. It is important to keep the client key safe!

Also note that Docker is very specific when it comes to the names used for keys and certificates. The files used for client authentication must be called key.pem, cert.pem and ca.pem, respectively.

Since we want other machines to be able to connect to the Docker daemon, we need to open port 2376 in the firewall:

$ sudo firewall-cmd --zone=public --add-port=2376/tcp
$ sudo firewall-cmd --zone=public --add-port=2376/tcp \
    --permanent

Configuring Maven to Create a Docker Image

The Docker configuration we have done so far has been on the Linux server. We now move to some other machine, for example your workstation, where we assume that Docker is not installed. In this example the workstation is running Windows, so the example paths will use the Windows format.

We will now configure the Maven POM to create a Docker image on the Linux server, using a Docker plugin for Maven. There are several to choose from, but in this example we use the one from Spotify.
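
As a sketch of what the plugin configuration in the pom.xml might look like — the plugin version matches the one in the error messages later in this post, while the image name and directory layout are examples, and we assume the build produces target/app.jar (for example via a finalName of app):

```xml
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.4.13</version>
  <configuration>
    <imageName>rld/rld-docker-sample</imageName>
    <dockerDirectory>src/main/docker</dockerDirectory>
    <resources>
      <resource>
        <!-- Copy the built jar next to the Dockerfile as app.jar -->
        <targetPath>/</targetPath>
        <directory>${project.build.directory}</directory>
        <include>app.jar</include>
      </resource>
    </resources>
  </configuration>
</plugin>
```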

The plugin builds the image from a Dockerfile. In this example we use a simple one for a Java service:

FROM frolvlad/alpine-oraclejdk8:slim
ADD app.jar /app.jar
RUN sh -c 'touch /app.jar'
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -jar /app.jar" ]

We can now try to build a Docker image:

mvn clean install docker:build

This fails with an error message saying that it cannot connect to localhost on port 2375:

[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.4.13:build (default-cli) on project rld-docker-sample: Exception caught: java.util.concurrent.ExecutionException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:2375 [localhost/, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect -> [Help 1]

The Docker Maven plugin expects Docker to be running on the same machine without TLS, so the default port 2375 is assumed. We need to set an environment variable to tell the plugin where Docker is running:

# Set the DOCKER_HOST variable to point to your Docker machine,
# where myhost is a placeholder for its DNS name:
> set DOCKER_HOST=tcp://myhost:2376

If we try to run mvn docker:build now, we get a different error message, saying that the server failed to respond with a valid HTTP response:

[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.4.13:build (default-cli) on project anmalan-service: Exception caught: java.util.concurrent.ExecutionException: org.apache.http.client.ClientProtocolException: The server failed to respond with a valid HTTP response -> [Help 1]

This is because the plugin is still trying to use plain HTTP and not HTTPS. To make the plugin understand that we want to use HTTPS, we need to provide the client key and certificate and the CA certificate that we created previously.

First of all, you need to copy the three files docker/certs/{key,cert,ca}.pem from the Docker machine to your workstation. In this example, we copy them to the directory D:\docker\certs.

We now need to point the Maven Docker plugin to the directory where the necessary certificates and key are by setting some more environment variables:


# Point to the directory with the key and certificates:
> set DOCKER_CERT_PATH=D:\docker\certs
> set DOCKER_TLS_VERIFY=1

The DOCKER_TLS_VERIFY environment variable supposedly tells the client to verify the certificate of the Docker daemon. I don’t actually think the Spotify Docker client uses this variable, but it doesn’t hurt to set it.

If we now run mvn docker:build we should be greeted with “BUILD SUCCESS”.

Setting up a Private Docker Registry

We are now in a position where we can build a Docker image on the Linux machine from a remote host. We can also already push the image to the central Docker registry, but in this case I decided to experiment with a private Docker registry for the images built for the organization I’m helping.

Luckily, it is very easy to start a private Docker registry, using Docker of course. On the Linux server running the Docker daemon, give the following commands:

$ docker run -d -p 5000:5000 --restart=always --name registry \
    -v ~/docker/certs:/certs \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/server-cert.pem \
    -e REGISTRY_HTTP_TLS_KEY=/certs/server-key.pem registry:2
$ sudo firewall-cmd --zone=public --add-port=5000/tcp
$ sudo firewall-cmd --zone=public --add-port=5000/tcp \
    --permanent
$ docker ps

As usual, you need to replace ~/docker/certs with the directory where you created the server key and certificate.

The docker ps command should show that the registry is running, and that port 5000 is mapped to port 5000 on the host machine. This means that we can now push Docker images to our registry by connecting to port 5000 on the Linux server. As you may have guessed from the environment variables provided when the registry was started, the client that wants to push an image also needs to use a key and certificate to identify itself.

Please note that who is the client and who is the server depends on your point of view. When we use the Docker Maven plugin to build an image, the plugin is the client communicating with the Docker daemon—the server—on port 2376. When we push an image to the registry, the Docker daemon is the client, communicating with the registry server on port 5000.

Configuring Maven to Push to Our Registry

You specify that you want to push to a certain registry by using the address of the registry as a prefix to the Docker image name. So instead of naming the image rld/rld-docker-sample, for example, you name it myhost:5000/rld/rld-docker-sample (where myhost stands for the DNS name of the Linux server) to push to the registry running on port 5000 on that machine.



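In the pom.xml, this means changing the imageName in the Docker plugin configuration. A sketch, where myhost:5000 is a placeholder for the address of your registry:

```xml
<configuration>
  <!-- myhost:5000 is a placeholder for your registry's DNS name and port -->
  <imageName>myhost:5000/rld/rld-docker-sample</imageName>
  <dockerDirectory>src/main/docker</dockerDirectory>
</configuration>
```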
We can now try to build and push an image to our private Docker registry:

$ mvn clean install docker:build -DpushImage

This will probably fail after trying to push five times, with a rather cryptic error message saying that the certificate is signed by an unknown authority:

[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.4.13:build (default-cli) on project rld-docker-sample: Exception caught: Get x509: certificate signed by unknown authority -> [Help 1]

The question is which certificate is signed by the unknown authority. The answer is that it is the certificate the registry presents (docker/certs/server-cert.pem) that the Docker daemon does not trust when it connects to push the image. The reason is that the daemon knows nothing about the CA we created to sign that certificate.

The solution is to add the CA certificate to a subdirectory of /etc/docker/certs.d with the same name as the registry (host name and port). The file must use the file extension .crt to be picked up as a CA certificate:

# Use the name and port of your registry:
$ sudo mkdir -p \
    /etc/docker/certs.d/$HOST:5000
# Replace ~/docker/ca with your CA directory:
$ sudo cp ~/docker/ca/ca.pem \
    /etc/docker/certs.d/$HOST:5000/ca.crt

When we now try to build, we hopefully get “BUILD SUCCESS”:

$ mvn clean install docker:build -DpushImage

You can use the registry API to find information about the images that are stored in your private registry. For example, if you want to see which images are available, use a command like this:

$ curl --cacert ~/docker/certs/ca.pem \
    https://$HOST:5000/v2/_catalog

To see what tags are available for a specific image, use a command like the following:

$ curl --cacert ~/docker/certs/ca.pem \
    https://$HOST:5000/v2/rld/rld-docker-sample/tags/list

In the command above, rld/rld-docker-sample is the name of an image, one that was included in the output of the previous _catalog command.

Configuring TeamCity

Luckily, configuring TeamCity to build the Docker image is easy, since the heavy lifting is done by Maven. We need to copy the key and certificate files docker/certs/{key,cert,ca}.pem to an appropriate location on the machine running TeamCity. Let’s assume we put them in E:\docker\certs.

We also need to set the environment variables that tell the Docker client how to connect to the Docker daemon:

# Set these as system environment variables on the TeamCity machine,
# with DOCKER_HOST pointing to your Docker machine:
DOCKER_HOST=tcp://myhost:2376
DOCKER_CERT_PATH=E:\docker\certs
DOCKER_TLS_VERIFY=1

You need to restart the TeamCity process for the changes to take effect.

Since I believe in the concept of continuous delivery, every commit is a release candidate, so the build process should create an artifact with a real version number, not a snapshot. It should also create a release branch and tag the version that was built. The rest of this section describes how to set up a TeamCity build appropriate for continuous integration—it is not limited to building Docker images but can be used in many different types of project.

The build steps necessary can be reused for different projects. In TeamCity, you can create a build configuration template that defines build parameters and build steps. It is then easy to create a build configuration using the template.

Start by creating a new TeamCity project. We will now define a few configuration parameters for the project, parameters that will be available to all sub-projects, build templates and build configurations that belong to the project.

Under Parameters, define the following configuration parameters:

  • development.branch=master
  • major.version.number= (left empty here; set in each build configuration)
  • version.number=%major.version.number%.%build.counter%
  • release.branch=release-%version.number%
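
To make the parameter expansion concrete, here is a small shell sketch of how TeamCity resolves these values for, say, build number 7 of major version 2.1 (the variable names mirror the TeamCity parameters):

```shell
# Hypothetical values for one build
major_version_number=2.1
build_counter=7

# version.number=%major.version.number%.%build.counter%
version_number="$major_version_number.$build_counter"
# release.branch=release-%version.number%
release_branch="release-$version_number"

echo "$version_number"    # prints 2.1.7
echo "$release_branch"    # prints release-2.1.7
```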

Now create a build configuration template called Maven Build with the following build steps:

  1. Create Release Branch (of type Command Line)
    git checkout -b %release.branch% %development.branch%
  2. Deploy Snapshots (of type Maven)
    mvn clean deploy -DskipTests
  3. Update Version Numbers (of type Maven)
    mvn versions:set -DnewVersion=%version.number%
  4. Build Docker Image (of type Maven)
    mvn clean install docker:build -DpushImage
  5. Commit and Tag Release (of type Command Line)
    git commit -a -m "New release candidate %version.number%"
    git push origin %release.branch%
    git tag %version.number%
    git push origin %version.number%
  6. Remove Local Branch (of type Command Line, execute always)
    git checkout %development.branch%
    git branch -D %release.branch%

For the project you want to build, go to VCS Roots and click on Create VCS Root to define a new Git VCS root pointing to the Git repository of your project.

We can now create a build configuration called Build that is based on the Maven Build template. The build parameters that you previously defined are displayed and you need to fill in the appropriate version number to use for major.version.number. If you use 2.1, for example, each build will create a version starting with 2.1 and with a build number starting at one as the third component, generating versions 2.1.1, 2.1.2, 2.1.3, and so on.

Under Version Control Settings, click Attach VCS Root and choose the Git VCS root you created for the project. Under Checkout Options, make sure to change VCS checkout mode to Automatically on agent (if supported by VCS roots).

Under Triggers, click Add New Trigger and add a VCS Trigger with the default settings.

Congratulations, you now have a TeamCity build that will create a new tagged release candidate every time you push changes to Git. A Docker image, tagged with the version number, will also be pushed to your private Docker registry.

Summary

By setting up a Docker host running on Linux and allowing remote access to it in a secure way using TLS and certificates, we can build and tag Docker images on it from other machines that do not run Docker. We can do this using a Docker Maven plugin, for example.

Creating a private Docker registry is easy, so that we can push images to a registry that we control instead of the central registry.

With a continuous integration server like TeamCity, we can make sure that every push to Git creates a tagged release candidate, and that the corresponding Docker image is pushed to our private Docker registry.

A Simple Git Branching Strategy

In a new project it is always necessary to choose a strategy for working with your version control system when it comes to branching and release management. Some of the things I look for in a branching strategy:

  • It should be as simple as possible.
  • It should maximize the benefits of continuous integration.
  • It should make it easy to create a release.

For Git, a strategy that has been used in many projects is GitFlow. This post will look at some aspects of GitFlow and propose a simpler branching strategy.

It is important to remember that GitFlow was initially described in 2010, when manual releases were common, and is based on the idea of merging changes that should go into a release into the master branch as preparation for a production release. This means that GitFlow is not well suited for continuous delivery. In my opinion, most projects should strive for being able to do continuous delivery, even if the system is actually released in long cycles.

Develop Branch

In GitFlow, all development is done on a develop branch, and the work is merged into the master branch as a part of the release process. The idea is that the master branch should always contain code in a production-ready state.

What is the benefit of always keeping the master branch ready for production? You should never deploy from the head of a branch anyway, you should always deploy from a tag. This means you could do development on the master branch instead, and tag it when it is ready for production.

Conclusion: Do not use a develop branch, do development on the master branch.

Feature Branches

Feature branches are used to let developers work on a feature without being disturbed by the work of others. But when we use continuous integration, isolating the work that different developers do from each other is exactly what we want to avoid! All work that is being done on a branch that is not continuously integrated brings us a step closer to a miniature “integration hell”.

The alternative is to do all work on the master branch. This requires a clean code base with high cohesion and low coupling, as well as constant communication between the developers, so that developers rarely have to work on the same bit of code, and know when they do.

If a feature is large, it can either be delivered incrementally or hidden from users until it is ready. If it is necessary to make a large-scale change that affects a large portion of the code, you can use the Branch by Abstraction pattern as an alternative to creating a Git branch.

It is often useful to keep track of the changes that have been made for a specific feature. Instead of using feature branches, this can be achieved by adding the ID of the feature to the commit message. If you are using JIRA, for example, a Git integration plugin makes it very easy to see all commits that belong to a certain issue.

Conclusion: Do not use feature branches, do development on the master branch, using small incremental commits. Every commit message should contain the ID of the feature, bug, improvement or similar being worked on.

Release and Hotfix Branches

In GitFlow, a release branch is created before each release, and any release preparation is done on the release branch, including updating version numbers to match the release. The release branch is only kept until the release is ready, when it is removed.

A hotfix branch is created if it is necessary to make a change in a system that is in production. It is created from the tag of the released system and used for making the fix, after which the hotfix branch is removed.

We do need a release branch to prepare our release, and we may also need a branch to make fixes to the release after it has been taken into production. However, it is not necessary to create separate branches for the two purposes. Instead, we can create a release branch where we do the release preparation, and let the branch live indefinitely in case we need to make any fixes to that release.

Conclusion: Create a release branch before each release and let the branch live indefinitely. If it is necessary to make changes to the release, do them on the release branch and make sure the changes are merged into the master branch.
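
As a concrete illustration of this strategy, the following self-contained script (using a throwaway repository and made-up file and issue names) creates a long-lived release branch and tag, fixes a production issue on the release branch, and merges the fix back into master:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.email "ci@example.com"
git config user.name "CI"

# Release 2.1.1: create a long-lived release branch and a tag
echo "feature A" > app.txt
git add app.txt
git commit -qm "PROJ-1: Feature A"
git branch release-2.1.1
git tag 2.1.1

# Development continues on master
echo "feature B" > feature-b.txt
git add feature-b.txt
git commit -qm "PROJ-2: Feature B"

# A bug is found in production: fix it on the release branch
# and tag a new version
git checkout -q release-2.1.1
echo "fixed" >> app.txt
git commit -qam "PROJ-3: Fix production issue"
git tag 2.1.2

# Merge the fix back into master so it is not lost
git checkout -q master
git merge -q --no-edit release-2.1.1
```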

Automating the Release Procedure

As an example of how the creation of the release branch can be automated, here is how the continuous integration system can be configured to support continuous delivery where each commit is a potential release.

First some build parameters:

  • major.version.number= (set per project, for example 2.1)
  • version.number=%major.version.number%.%build.counter%
  • release.branch=release-%version.number%

Now the build steps:

# Create a release branch
git checkout -b %release.branch% master
# Update version numbers
mvn versions:set -DnewVersion=%version.number%
# Build and run tests
mvn -P checkstyle,findbugs,integration-test -U clean install
# Commit and tag release
git commit -a -m "New release candidate %version.number%"
git push origin %release.branch%
git tag %version.number%
git push origin %version.number%
# Remove local branch
git checkout master
git branch -D %release.branch%

Summary

  • Avoid branching as much as possible. Do the development work on the master branch to get the most possible benefit from continuous integration.
  • Use small incremental commits, constant communication and a clean code base to avoid problems with developers working on the same piece of code.
  • For major changes, use incremental delivery, feature hiding, or Branch by Abstraction.
  • If there is a problem that needs to be fixed in a system that is in production, first of all investigate if it is possible to make the fix only in the master branch and release a new version into production. If not, do the fix in the release branch corresponding to the version in production and merge the fix into the master branch.