This post describes how to use Maven to build a Docker image using a remote Docker host running on Linux. This means that the Maven build can run anywhere, for example in TeamCity on Windows. The assumption here is that we have a separate (virtual) machine running Linux (RHEL 7), and we use this machine both as a Docker host for building images, and also as a private Docker registry.
The background to all this is that an organization I’m working with has standardized on TeamCity running on Windows Server 2008 R2 for continuous integration. They are in the process of moving TeamCity to Windows 2012 R2, but the same setup can hopefully be used on the new build server.
The organization is mainly Windows-based, but there are some important Java services running on Linux (Red Hat, RHEL 7), with some more on the way. I have started experimenting with Docker for easier deployment of Java services. Since the organization is not running any very recent Windows servers with native Docker support, the focus here is on Docker for Linux.
The steps to get this build process up and running are as follows:
- Install Docker on the Linux machine.
- Allow remote access to the Docker daemon in a secure way.
- Configure the Maven pom.xml to create a Docker image using the Linux Docker host.
- Set up a private Docker registry on the Linux machine.
- Update the Maven pom.xml so that we can push images to the private registry.
- Configure TeamCity to run the build process.
Installing Docker
Docker has recently changed the packaging of distributions so that the free version is now called the Community Edition (CE), while the version where you pay for support is called the Enterprise Edition (EE).
On Red Hat, only the Enterprise Edition is supported. On CentOS, both editions are available, so to experiment with Docker for free, using CentOS is one obvious way to go. In this case, however, the organization I’m working with has standardized only on Red Hat, not CentOS, so the machine that is available for experimentation is running Red Hat 7. Since I am still only experimenting, I decided to give the free Docker version for CentOS a chance, even though the machine is running RHEL 7. These instructions should work on CentOS as well.
Follow the official installation instructions for Docker on CentOS:
$ sudo yum install -y yum-utils
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum makecache fast
$ sudo yum install -y docker-ce
$ sudo systemctl start docker
$ sudo docker run hello-world
The last command should print some text starting with “Hello from Docker!”. If so, you have successfully installed Docker on your machine.
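If you want an extra sanity check that the daemon is up (purely optional), the following commands show the installed version and the status of the systemd service:

$ sudo docker version
$ sudo systemctl status docker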
Allowing Remote Access to the Docker Daemon
At this point, Docker can only be used when running as root on the local machine. This is because the Docker daemon binds to a Unix socket instead of a TCP port. By default, that Unix socket is owned by root, so other users can only access it using sudo. The Docker daemon itself always runs as the root user.
We want to access Docker from another machine in order to build Docker images from a Windows machine, so we need to configure the Docker daemon to listen on a TCP socket. Since anyone who can access the Docker daemon effectively gets root privileges, we limit access using TLS and certificates. We will set up our own certificate authority (CA). If you have access to certificates from some other CA, you can use those instead.
First of all we create the CA:
$ cd
$ mkdir -p docker/ca
$ cd docker/ca/
$ openssl genrsa -aes256 -out ca-key.pem 4096
$ openssl req -new -x509 -days 1825 -key ca-key.pem -sha256 \
    -out ca.pem
Then we create a key and certificate for the server:
### Set HOST to the DNS name of your Docker daemon’s host:
$ HOST=docker.reallifedeveloper.com
$ cd
$ mkdir -p docker/certs
$ cd docker/certs
$ ln -s ../ca/ca.pem .
$ openssl genrsa -out server-key.pem 4096
$ openssl req -subj "/CN=$HOST" -sha256 -new \
    -key server-key.pem -out server.csr
### Provide all DNS names and IP addresses that will be used
### to contact the Docker daemon:
$ echo subjectAltName = DNS:$HOST,IP:10.10.10.20,IP:127.0.0.1 \
    > extfile.cnf
$ openssl x509 -req -days 365 -sha256 -in server.csr \
    -CA ../ca/ca.pem -CAkey ../ca/ca-key.pem -CAcreateserial \
    -out server-cert.pem -extfile extfile.cnf
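If you want to double-check that the DNS names and IP addresses really ended up in the server certificate, you can inspect it with openssl (optional):

$ openssl x509 -noout -text -in server-cert.pem | grep -A 1 'Subject Alternative Name'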
Now we create a key and certificate for the client:
$ openssl genrsa -out key.pem 4096
$ openssl req -subj '/CN=client' -new -key key.pem \
    -out client.csr
$ echo extendedKeyUsage = clientAuth > extfile.cnf
$ openssl x509 -req -days 365 -sha256 -in client.csr \
    -CA ../ca/ca.pem -CAkey ../ca/ca-key.pem -CAcreateserial \
    -out cert.pem -extfile extfile.cnf
Clean up the certificate directories:
$ rm client.csr server.csr extfile.cnf
$ chmod 0400 ../ca/ca-key.pem key.pem server-key.pem
$ chmod 0444 ../ca/ca.pem server-cert.pem cert.pem
We are finally ready to enable remote access to Docker:
$ cd
$ sudo mkdir /etc/systemd/system/docker.service.d
### Substitute $HOME/docker/certs with the directory where you
### created the certificates above:
$ cat > docker.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --tlsverify \
    --tlscacert=$HOME/docker/certs/ca.pem \
    --tlscert=$HOME/docker/certs/server-cert.pem \
    --tlskey=$HOME/docker/certs/server-key.pem \
    -H tcp://0.0.0.0:2376
EOF
$ sudo mv docker.conf /etc/systemd/system/docker.service.d/
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ sudo systemctl enable docker
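After the restart, you can verify that the daemon is now listening on TCP port 2376. This check assumes the ss tool from the iproute package, which is available on RHEL 7:

$ sudo ss -tlnp | grep 2376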
We will now tell the docker client how to connect to the daemon:
$ export DOCKER_HOST=tcp://127.0.0.1:2376
### Substitute ~/docker/certs with your certificate directory:
$ export DOCKER_CERT_PATH=~/docker/certs
$ export DOCKER_TLS_VERIFY=1
$ docker run hello-world
If the last command printed some text starting with “Hello from Docker!”, congratulations, you have now configured the Docker daemon to allow remote access on port 2376, the standard port to use for Docker over TLS.
Please note that you did not have to use sudo to run the docker command as root. Anyone who has access to the client key docker/certs/key.pem and the client certificate docker/certs/cert.pem can now call the Docker daemon from a remote host, in practice getting root access to the machine Docker is running on. It is important to keep the client key safe!
Also note that Docker is very specific when it comes to the names used for keys and certificates. The files used for client authentication must be called key.pem, cert.pem and ca.pem, respectively.
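If you prefer not to rely on the environment variables, the same information can be passed as command-line flags. This is a sketch assuming the certificates are still under $HOME/docker/certs on the Linux machine:

$ docker --tlsverify \
    --tlscacert=$HOME/docker/certs/ca.pem \
    --tlscert=$HOME/docker/certs/cert.pem \
    --tlskey=$HOME/docker/certs/key.pem \
    -H tcp://127.0.0.1:2376 version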
Since we want other machines to be able to connect to the Docker daemon, we need to open port 2376 in the firewall:
$ sudo firewall-cmd --zone=public --add-port=2376/tcp
$ sudo firewall-cmd --zone=public --add-port=2376/tcp \
    --permanent
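The first command opens the port immediately, while the --permanent variant makes the change survive a reload or reboot. To check that the port is open, you can list the ports in the zone:

$ sudo firewall-cmd --zone=public --list-ports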
Configuring Maven to Create a Docker Image
The Docker configuration we have done so far has been on the Linux server. We now move to some other machine, for example your workstation, where we assume that Docker is not installed. In this example the workstation runs Windows, so the example paths use the Windows format.
We will now configure the Maven POM to create a Docker image on the Linux server, using a Docker plugin for Maven. There are several to choose from, but in this example we use the one from Spotify.
pom.xml
<project>
    ...
    <properties>
        <docker.image.prefix>rld</docker.image.prefix>
    </properties>
    ...
    <build>
        <resources>
            <resource>
                <directory>${basedir}/src/main/resources</directory>
            </resource>
            <resource>
                <directory>${basedir}/src/main/docker</directory>
                <filtering>true</filtering>
                <includes>
                    <include>**/Dockerfile</include>
                </includes>
            </resource>
        </resources>
        <plugins>
            ...
            <plugin>
                <groupId>com.spotify</groupId>
                <artifactId>docker-maven-plugin</artifactId>
                <version>0.4.13</version>
                <configuration>
                    <imageName>${docker.image.prefix}/${project.artifactId}</imageName>
                    <imageTags>
                        <imageTag>${project.version}</imageTag>
                    </imageTags>
                    <dockerDirectory>${project.build.outputDirectory}</dockerDirectory>
                    <resources>
                        <resource>
                            <targetPath>/</targetPath>
                            <directory>${project.build.directory}</directory>
                            <include>${project.build.finalName}.jar</include>
                        </resource>
                    </resources>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
src/main/docker/Dockerfile
FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
ADD @project.build.finalName@.jar app.jar
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS=""
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
We can now try to build a Docker image:
mvn clean install docker:build
This fails with an error message saying that it cannot connect to localhost on port 2375:
[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.4.13:build (default-cli) on project rld-docker-sample: Exception caught: java.util.concurrent.ExecutionException: com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:2375 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect -> [Help 1]
The Docker Maven plugin expects Docker to be running on the same machine without TLS, so the default port 2375 is assumed. We need to set an environment variable to tell the plugin where Docker is actually running:
# Set the DOCKER_HOST variable to point to your Docker machine:
DOCKER_HOST=tcp://docker.reallifedeveloper.com:2376
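How you set the variable depends on your shell. As a sketch, on a Windows workstation you could set it for the current session like this (for a permanent setting, use the System Properties dialog):

# In a Windows command prompt (cmd.exe):
set DOCKER_HOST=tcp://docker.reallifedeveloper.com:2376
# In PowerShell:
$env:DOCKER_HOST = "tcp://docker.reallifedeveloper.com:2376"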
If we try to run mvn docker:build now, we get a different error message, saying that the server failed to respond with a valid HTTP response:
[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.4.13:build (default-cli) on project anmalan-service: Exception caught: java.util.concurrent.ExecutionException: com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException: org.apache.http.client.ClientProtocolException: The server failed to respond with a valid HTTP response -> [Help 1]
This is because the plugin is still trying to use plain HTTP and not HTTPS. To make the plugin understand that we want to use HTTPS, we need to provide the client key and certificate and the CA certificate that we created previously.
First of all, you need to copy the three files docker/certs/{key,cert,ca}.pem from the Docker machine to your workstation. In this example, we copy them to the directory D:\docker\certs.
We now need to point the Maven Docker plugin to the directory where the necessary certificates and key are by setting some more environment variables:
DOCKER_CERT_PATH=D:/docker/certs
DOCKER_TLS_VERIFY=1
The DOCKER_TLS_VERIFY environment variable supposedly tells the client to verify the certificate of the Docker daemon. I don’t actually think the Spotify Docker client uses this variable, but it doesn’t hurt to set it.
If we now run mvn docker:build, we should be greeted with “BUILD SUCCESS”.
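To convince yourself that the image really was built on the Linux server, you can list the images on the Docker host, using the client environment variables set up earlier on that machine (rld is the docker.image.prefix from the POM):

$ docker images | grep rld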
Setting up a Private Docker Registry
We are now in a position where we can build a Docker image on the Linux machine from a remote host. We could already push the image to the central Docker registry (Docker Hub), but in this case I decided to experiment with a private Docker registry for the images built for the organization I’m helping.
Luckily, it is very easy to start a private Docker registry, using Docker of course. On the Linux server running the Docker daemon, give the following commands:
$ docker run -d -p 5000:5000 --restart=always --name registry \
    -v ~/docker/certs:/certs \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/server-cert.pem \
    -e REGISTRY_HTTP_TLS_KEY=/certs/server-key.pem \
    registry:2
$ sudo firewall-cmd --zone=public --add-port=5000/tcp
$ sudo firewall-cmd --zone=public --add-port=5000/tcp \
    --permanent
$ docker ps
As usual, you need to replace ~/docker/certs with the directory where you created the server key and certificate.
The docker ps command should show that the registry is running, and that port 5000 in the container is mapped to port 5000 on the host machine. This means that we can now push Docker images to our registry by connecting to port 5000 on the Linux server. As you may have guessed from the environment variables provided when the registry was started, communication with the registry uses TLS, so a client that wants to push an image needs to trust the certificate that the registry presents.
Please note that who is the client and who is the server depends on your point of view. When we use the Docker Maven plugin to build an image, the plugin is the client communicating with the Docker daemon—the server—on port 2376. When we push an image to the registry, the Docker daemon is the client, communicating with the registry server on port 5000.
Configuring Maven to Push to Our Repository
You specify that you want to push to a certain registry by using the address of the registry as a prefix to the Docker image name. Instead of naming the image rld/rld-docker-sample, for example, you name it docker.reallifedeveloper.com:5000/rld/rld-docker-sample to push to the registry running on docker.reallifedeveloper.com:5000.
pom.xml
<project>
    ...
    <properties>
        ...
        <docker.registry>docker.reallifedeveloper.com:5000/</docker.registry>
    </properties>
    ...
    <build>
        <plugins>
            ...
            <plugin>
                <groupId>com.spotify</groupId>
                <artifactId>docker-maven-plugin</artifactId>
                <version>0.4.13</version>
                <configuration>
                    <imageName>${docker.registry}${docker.image.prefix}/${project.artifactId}</imageName>
                    ...
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
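For reference, what the plugin does when pushing corresponds roughly to tagging and pushing by hand with the docker CLI on the Docker host. This is only a sketch, and the version 1.0.0 below is an assumed example version:

$ docker tag rld/rld-docker-sample:1.0.0 \
    docker.reallifedeveloper.com:5000/rld/rld-docker-sample:1.0.0
$ docker push docker.reallifedeveloper.com:5000/rld/rld-docker-sample:1.0.0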
We can now try to build and push an image to our private Docker registry:
$ mvn clean install docker:build -DpushImage
This will probably fail after trying to push five times, with a rather cryptic error message saying that the certificate is signed by an unknown authority:
[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.4.13:build (default-cli) on project rld-docker-sample: Exception caught: Get https://docker.reallifedeveloper.com:5000/v1/_ping: x509: certificate signed by unknown authority -> [Help 1]
The question is which certificate is signed by an unknown authority. The answer is that it is the certificate that the private Docker registry presents (docker/certs/server-cert.pem): it is signed by our own CA, and the Docker daemon connecting to the registry has not yet been told to trust that CA.
The solution is to add the CA certificate, on the machine running the Docker daemon, to a subdirectory of /etc/docker/certs.d with the same name as the registry. The file must use the file extension .crt to be picked up as a CA certificate:
# Use the name of your registry:
$ sudo mkdir -p \
    /etc/docker/certs.d/docker.reallifedeveloper.com:5000
# Replace ~/docker/ca with your CA directory:
$ sudo cp ~/docker/ca/ca.pem \
    /etc/docker/certs.d/docker.reallifedeveloper.com:5000/ca.crt
When we now try to build, we hopefully get “BUILD SUCCESS”:
$ mvn clean install docker:build -DpushImage
You can use the registry API to find information about the images that are stored in your private registry. For example, if you want to see which images are available, use a command like this:
$ curl --cacert ~/docker/certs/ca.pem \
    https://docker.reallifedeveloper.com:5000/v2/_catalog
To see what tags are available for a specific image, use a command like the following:
$ curl --cacert ~/docker/certs/ca.pem \
    https://docker.reallifedeveloper.com:5000/v2/rld/rld-docker-sample/tags/list
In the command above, rld/rld-docker-sample is the name of an image, one that was included in the output of the previous _catalog command.
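The version 2 registry API can also return the manifest for a specific tag, which is useful if you want to see, for example, the digest of an image. The tag 1.0.0 below is just an example value taken from a hypothetical tags/list response:

$ curl --cacert ~/docker/certs/ca.pem \
    -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
    https://docker.reallifedeveloper.com:5000/v2/rld/rld-docker-sample/manifests/1.0.0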
Configuring TeamCity
Luckily, configuring TeamCity to build the Docker image is easy, since the heavy lifting is done by Maven. We need to copy the key and certificate files docker/certs/{key,cert,ca}.pem to an appropriate location on the machine running TeamCity. Let’s assume we put them in E:\docker\certs.
We also need to set the environment variables that tell the Docker client how to connect to the Docker daemon:
# Set the DOCKER_HOST variable to point to your Docker machine:
DOCKER_HOST=tcp://docker.reallifedeveloper.com:2376
DOCKER_CERT_PATH=E:/docker/certs
DOCKER_TLS_VERIFY=1
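One way to make these system-wide on the TeamCity machine is setx from an administrator command prompt; this is a sketch, and you can of course also use the System Properties dialog. Note that setx only affects processes started after the change, which is one more reason for the restart mentioned below:

setx DOCKER_HOST tcp://docker.reallifedeveloper.com:2376 /M
setx DOCKER_CERT_PATH E:\docker\certs /M
setx DOCKER_TLS_VERIFY 1 /M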
You need to restart the TeamCity process for the changes to take effect.
Since I believe in the concept of continuous delivery, every commit is a release candidate, so the build process should create an artifact with a real version number, not a snapshot. It should also create a release branch and tag the version that was built. The rest of this section describes how to set up a TeamCity build appropriate for continuous integration—it is not limited to building Docker images but can be used in many different types of project.
The build steps necessary can be reused for different projects. In TeamCity, you can create a build configuration template that defines build parameters and build steps. It is then easy to create a build configuration using the template.
Start by creating a new TeamCity project. We will now define a few configuration parameters for the project, parameters that will be available to all sub-projects, build templates and build configurations that belong to the project.
Under Parameters, define the following configuration parameters:
development.branch=master
major.version.number=
version.number=%major.version.number%.%build.counter%
release.branch=release-%version.number%
Now create a build configuration template called Maven Build with the following build steps:
- Create Release Branch (of type Command Line):
  git checkout -b %release.branch% %development.branch%
- Deploy Snapshots (of type Maven):
  mvn clean deploy -DskipTests
- Update Version Numbers (of type Maven):
  mvn versions:set -DnewVersion=%version.number%
- Build Docker Image (of type Maven):
  mvn clean install docker:build -DpushImage
- Commit and Tag Release (of type Command Line):
  git commit -a -m "New release candidate %version.number%"
  git push origin %release.branch%
  git tag %version.number%
  git push origin %version.number%
- Remove Local Branch (of type Command Line, execute always):
  git checkout %development.branch%
  git branch -D %release.branch%
For the project you want to build, go to VCS Roots and click on Create VCS Root to define a new Git VCS root pointing to the Git repository of your project.
We can now create a build configuration called Build that is based on the Maven Build template. The build parameters that you previously defined are displayed, and you need to fill in the appropriate version number to use for major.version.number. If you use 2.1, for example, each build will create a version starting with 2.1 and with a build counter starting at one as the third component, generating versions 2.1.1, 2.1.2, 2.1.3, and so on.
Under Version Control Settings, click Attach VCS Root and choose the Git VCS root you created for the project. Under Checkout Options, make sure to change VCS checkout mode to Automatically on agent (if supported by VCS roots).
Under Triggers, click Add New Trigger and add a VCS Trigger with the default settings.
Congratulations, you now have a TeamCity build that will create a new tagged release candidate every time you push changes to Git. A Docker image, tagged with the version number, will also be pushed to your private Docker registry.
Conclusion
By setting up a Docker host running on Linux and allowing remote access to it in a secure way using TLS and certificates, we can build and tag Docker images on it from other machines that do not run Docker. We can do this using a Docker Maven plugin, for example.
Creating a private Docker registry is easy, so that we can push images to a registry that we control instead of the central registry.
With a continuous integration server like TeamCity, we can make sure that every push to Git creates a tagged release candidate, and that the corresponding Docker image is pushed to our private Docker registry.