Receiving Urgent Market Message Push Notifications from Nord Pool

This post is about using Nord Pool push notifications to receive urgent market messages, UMMs, in a Java application. It is mainly of interest for people in the energy sector, but could also be useful to someone looking to receive notifications using SignalR in a Java environment.

Nord Pool

Nord Pool is a European market for trading power (electricity), owned by Nasdaq.

Urgent Market Messages

An urgent market message, or UMM, is used in the energy sector to inform the market about planned and unplanned events that affect the available power in production, consumption or transmission units. For example, information about a planned maintenance of a nuclear power plant would be sent as a UMM, as would information about a power line connecting two countries being cut by mistake.

Nord Pool aggregates UMMs from European power companies and provides a REST API for retrieving them. It also provides push notifications, so that you can receive real-time UMM information asynchronously using SignalR.


SignalR is a Microsoft library for sending asynchronous notifications from servers to clients using standard web protocols.

There are two versions of SignalR that are not compatible: ASP.NET SignalR and ASP.NET Core SignalR. The Nord Pool push notifications use the older ASP.NET SignalR version, so it is important to use a client that supports that version.

In this case, we’re looking for a Java client, and luckily there is one available on GitHub. The readme for this project marks it as obsolete, and points to a version that supports the newer ASP.NET Core SignalR. However, since we want to connect to Nord Pool we will ignore the warning and use the old version.

There does not seem to be any version of the Java client uploaded to a Maven repository, so the first step is to clone the Git repository and build the code locally. The project comes with Gradle build scripts; since I did not have Gradle where I'm currently working, I added simple POM files and built using Maven. The only module that is required for what we are doing here is signalr-client-sdk.

Listening for Notifications Using the Java Client

Let’s develop a Java domain service that listens for UMM notifications and calls registered handlers when UMM events occur. We start by defining the service interface that allows you to register new handlers, and start listening:

public interface UmmNotificationService {
    void registerNotificationHandler(UmmNotificationHandler notificationHandler);
    void startListeningForNotifications();
}

The Nord Pool push notification documentation describes three types of events:

  • New message
  • Update message
  • Cancel / dismiss message

We therefore define a notification handler interface with three methods corresponding to the events:

public interface UmmNotificationHandler {
    void onNewMessage(String jsonUmm);
    void onUpdateMessage(String jsonUmm);
    void onDismissMessage(String jsonUmm);
}

Note that the methods are defined to take strings with the JSON representation of the UMMs. This is in order to make the example simpler; in a real system you would normally create a Umm entity and a factory class UmmFactory to create Umm objects from their JSON representation.
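As a sketch of what such a factory might look like, here is a deliberately simplified, JDK-only version. The field name messageId is a hypothetical example, and the regex-based parsing is for illustration only; a real UmmFactory would use a proper JSON library such as Jackson or Gson:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy sketch only: the field name "messageId" is a hypothetical example, and
// a real UmmFactory would use a JSON library instead of a regular expression.
final class Umm {
    private final String messageId;

    Umm(String messageId) {
        this.messageId = messageId;
    }

    String messageId() {
        return messageId;
    }
}

final class UmmFactory {

    private static final Pattern MESSAGE_ID = Pattern.compile("\"messageId\"\\s*:\\s*\"([^\"]+)\"");

    private UmmFactory() {
    }

    // Creates a Umm object from its JSON representation.
    static Umm fromJson(String jsonUmm) {
        Matcher matcher = MESSAGE_ID.matcher(jsonUmm);
        if (!matcher.find()) {
            throw new IllegalArgumentException("Not a UMM: " + jsonUmm);
        }
        return new Umm(matcher.group(1));
    }
}
```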

We can now create an empty implementation of the service interface, so that we can start writing an integration test (note that we anticipate that we will need to define which URL to connect to):

public class NordpoolUmmNotificationService implements UmmNotificationService {

    public NordpoolUmmNotificationService(String ummPushUrl) {
    }

    @Override
    public void registerNotificationHandler(UmmNotificationHandler notificationHandler) {
    }

    @Override
    public void startListeningForNotifications() {
    }
}

We are now in a position where we can create an integration test for our new service. Here I have to admit that I don’t really know how to create a solid, reliable and repeatable test. At the moment, the best I know how to do is to connect to Nord Pool, wait until a UMM push notification arrives and then verify that what we received was a correct UMM:

public class NordpoolUmmNotificationServiceIT {

    @Test
    public void receiveNotifications() throws Exception {
        NordpoolUmmNotificationService service =
                new NordpoolUmmNotificationService("");
        TestUmmNotificationHandler notificationHandler = new TestUmmNotificationHandler();
        service.registerNotificationHandler(notificationHandler);
        service.startListeningForNotifications();
        notificationHandler.latch.await();
        String umm = notificationHandler.umms.get(0);
        // Parse JSON and verify that it really is a UMM.
    }

    private static final class TestUmmNotificationHandler implements UmmNotificationHandler {
        private CountDownLatch latch = new CountDownLatch(1);
        private List<String> umms = new ArrayList<>();

        @Override
        public void onNewMessage(String umm) {
            umms.add(umm);
            latch.countDown();
        }

        @Override
        public void onUpdateMessage(String umm) {
            umms.add(umm);
            latch.countDown();
        }

        @Override
        public void onDismissMessage(String umm) {
            umms.add(umm);
            latch.countDown();
        }
    }
}
Note that only a few new UMMs are produced per hour, so the above test may take a very long time to run.
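To keep the test from hanging forever if no notification arrives, the wait on the CountDownLatch can use a generous timeout. A small sketch of the pattern (the two-hour timeout is a made-up value; tune it to how often you expect UMMs to arrive):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch: wait for an asynchronous notification with a timeout instead of
// blocking forever. The two-hour timeout is a made-up value.
public class LatchWaitExample {

    public static boolean waitForNotification(CountDownLatch latch) throws InterruptedException {
        // Returns true if the latch reached zero, false if the wait timed out.
        return latch.await(2, TimeUnit.HOURS);
    }
}
```

In the integration test, the returned boolean can be fed into an assertion so that the test fails with a clear message when it times out.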

After some trial and error, the following implementation turns out to make the test pass:

import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.gson.JsonElement;

import microsoft.aspnet.signalr.client.LogLevel;
import microsoft.aspnet.signalr.client.hubs.HubConnection;
import microsoft.aspnet.signalr.client.hubs.HubProxy;
import microsoft.aspnet.signalr.client.transport.ClientTransport;
import microsoft.aspnet.signalr.client.transport.ServerSentEventsTransport;

public class NordpoolUmmNotificationService implements UmmNotificationService {

    private static final Logger LOG = LoggerFactory.getLogger(NordpoolUmmNotificationService.class);

    private String ummPushUrl;

    private Set<UmmNotificationHandler> notificationHandlers = new CopyOnWriteArraySet<>();

    public NordpoolUmmNotificationService(String ummPushUrl) {
        LOG.info("Creating new NordpoolUmmNotificationService: ummPushUrl={}", ummPushUrl);
        this.ummPushUrl = ummPushUrl;
    }

    @Override
    public void registerNotificationHandler(UmmNotificationHandler notificationHandler) {
        LOG.info("Registering a new notification handler: {}", notificationHandler);
        notificationHandlers.add(notificationHandler);
    }

    @Override
    public void startListeningForNotifications() {
        LOG.info("Start listening for notifications");
        microsoft.aspnet.signalr.client.Logger logger = new Slf4jLogger(LOG);
        HubConnection connection = new HubConnection(ummPushUrl, "", true, logger);
        ClientTransport clientTransport = new ServerSentEventsTransport(logger);

        HubProxy proxy = connection.createHubProxy("MessageHub");
        proxy.on("newMessage", data -> onNewMessage(data.toString()), JsonElement.class);
        proxy.on("updateMessage", data -> onUpdateMessage(data.toString()), JsonElement.class);
        proxy.on("dismissMessage", data -> onDismissMessage(data.toString()), JsonElement.class);

        connection.start(clientTransport);
    }

    private void onNewMessage(String umm) {
        for (UmmNotificationHandler notificationHandler : notificationHandlers) {
            LOG.debug("Calling onNewMessage on notification handler: umm={}, notificationHandler={}", umm,
                    notificationHandler);
            notificationHandler.onNewMessage(umm);
        }
    }

    private void onUpdateMessage(String umm) {
        for (UmmNotificationHandler notificationHandler : notificationHandlers) {
            LOG.debug("Calling onUpdateMessage on notification handler: umm={}, notificationHandler={}", umm,
                    notificationHandler);
            notificationHandler.onUpdateMessage(umm);
        }
    }

    private void onDismissMessage(String umm) {
        for (UmmNotificationHandler notificationHandler : notificationHandlers) {
            LOG.debug("Calling onDismissMessage on notification handler: umm={}, notificationHandler={}", umm,
                    notificationHandler);
            notificationHandler.onDismissMessage(umm);
        }
    }

    /**
     * An adapter class that takes an Slf4j logger and turns it into a
     * <code>microsoft.aspnet.signalr.client.Logger</code>.
     */
    private static final class Slf4jLogger implements microsoft.aspnet.signalr.client.Logger {

        private final Logger slf4jLogger;

        Slf4jLogger(Logger slf4jLogger) {
            this.slf4jLogger = slf4jLogger;
        }

        @Override
        public void log(String message, LogLevel level) {
            switch (level) {
            case Critical:
                slf4jLogger.error(message);
                break;
            case Information:
                slf4jLogger.info(message);
                break;
            case Verbose:
                slf4jLogger.debug(message);
                break;
            default:
                throw new IllegalStateException("Unknown enum constant: " + level);
            }
        }
    }
}

A few notes regarding the code above:

  • The SignalR Java client adds /signalr to the URL you connect to, so in the case of Nord Pool you use the URL to connect.
  • The name used in the call connection.createHubProxy must be "MessageHub", otherwise you receive HTTP 500, Internal Server Error, from Nord Pool.
  • The default client transport will use a WebsocketTransport. I have not got this to work, possibly because we are behind an HTTP proxy. The ServerSentEventsTransport does work, however.

Zen and the Art of Computer Programming

Today’s Chautauqua is about software quality, and how it relates to the developer habits.

Programming as an Art

Let’s start by discussing why programming is an art and not a science.

It should be clear that programming is not a pure science, simply because quality is such an important aspect of programming. The pure sciences, logic and mathematics, are only interested in the truth or falsehood of statements; quality is not a factor.

So, perhaps programming is an applied science? People often use the term “software engineer” for someone who develops software in a “systematic” fashion. To me, that is giving the software industry too much credit. Calling software development “engineering” implies that it is possible to define the requirements of a program in an exact way, and then use some well-defined way or algorithm to create a program that meets those requirements.

There are cases where formal methods can be used to prove a program correct with respect to some definition. This has been used for proving the correctness of compilers and the soundness of type systems, for example. In most cases, however, there is no formal definition of what the program should do, only vaguely expressed requirements that change as the stakeholders learn more about the problem. This, combined with the fact that the tools used for formal correctness proofs are as yet far from mainstream, means that most of us will be writing computer programs in an ad hoc way for the foreseeable future.

The fact that we do not have a scientific method to produce programs with certain properties means that it is up to you to produce programs of high quality that satisfy all the stakeholders. Luckily, there are best practices, in the form of the developer habits, that will help you in this quest. And it also gives you the freedom to create programs that are not only useful but also beautiful.

Software Quality

When we talk about the quality of a program, what do we actually mean? Quality is a notoriously difficult term to pin down, and software quality perhaps even more so than other types of quality.

A computer program has no romantic quality: it is normally not possible to look at a program and immediately decide that it is “beautiful” or “ugly”. A program may have a beautiful user interface, but that says very little about the quality of the program itself. The quality of a program is classic quality: the underlying form of the program may be “beautiful” or “ugly”. For example, the domain model may be particularly well adapted to the problem at hand, or the algorithms used may be extraordinarily simple and efficient.

The quality of a program depends on the observer, or to put it slightly differently, different people are interested in different aspects of the program’s quality. Some examples:

  • A developer is interested in how easy it is to access and build the program, and how easy it is to make changes to it. Automating the build process, providing plenty of tests, and following a coherent architecture helps in this regard.
  • Someone working in operations is interested in how stable the program is, how easy it is to find instructions for starting and stopping the program, and how easy it is to find solutions to common problems. Automation, testing and good application logging can help with this.
  • The end-user wants the program to be easy to use, and to solve a real problem in a predictable way. Agile methods with recurring demos, and specification by example for clarifying the requirements, help make a program that solves the right problem in a useful way.
  • Managers want the system to be secure and cheap to maintain. Automation, testing and clean code created in a test-driven manner make this possible.

Quality and the Habits

Creating a program with certain quality aspects is difficult because these aspects are not obvious from the program’s external form. Recognizing, and being able to create, a program with specific quality attributes requires training and experience. The habits describe procedures you can use to help you create quality programs, but if you do not understand the reasons for the habits, for the procedures, you will still create mediocre programs.

When it comes to their effect on program quality, the habits form a hierarchy. At the top is Everybody, All Together, From Early On. It is necessary to talk to the right people to find out what to do, and to keep talking to get all the details right. This helps you do the right thing.

When you know what to do, it is time to Write a Test First. This can be a FitNesse test that documents part of the acceptance criteria for a user story, in a way that is understood by all stakeholders, and with a clarity that is indisputable since it is automated. It can be a unit or integration test that helps guide the design and shows that the code does what you believe it to do. This helps you do the thing right.

With the basics in place — you are doing the right thing, and you are doing it right — it is time to think about Publishing Your Results so that other people and other systems are made aware of what is going on in your system and can react accordingly. This is also a good time to Automate, Don’t Document, everything you find tedious to give you more time to do interesting work, and also make it possible to easily hand over what you have built just by saying “push this button to get started”.

You may wonder where the habit When in Doubt, Do Something fits in. All the habits have static components as well as a dynamic component. The static components are specific patterns, tools and techniques that you have learned and that you use regularly. Some of these static components are discussed in this blog.

The dynamic component is the drive to constantly improve by adapting the way you work to increase the quality of what you are doing. The Scrum sprint retrospective is an example of this: an attempt to continuously improve the development process. You have to do the same to try to find better ways to write tests, to automate, to publish results and so on. Even to find better habits. The habit When in Doubt, Do Something is a reminder to not only use the patterns that you already know, but to adapt and improve, and find new ways to work.

So, the question is, is it hard to create programs of high quality?

“Not if you have the right attitudes. It’s having the right attitudes that’s hard.”


  1. As most of you have already understood, this post was inspired by Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values by Robert M. Pirsig. The ending quote is taken directly from that book. Some of you may also find traces of the follow-up book by the same author, Lila: An Inquiry Into Morals. Both books are highly recommended reading for anyone interested in the nature of quality, or just looking for a good read.
  2. The quality that we discuss here is not identical to the quality attributes that are often called non-functional requirements. However, some non-functional requirements, such as maintainability, testability and reliability, are affected by what we call quality.

Reading JSON Files to Create Test Versions of REST Clients

This post describes a simple way to create a test version of a service that reads JSON or XML from a REST service or similar. The purpose is to easily create a fake service that reads from files instead and that can be used for testing other code that uses the service.

I believe in using production code as much as possible when running tests. Every time you use special test code, i.e., code that is only used for testing, you run the risk of the test code not behaving exactly the same as the production code. You also get a maintenance problem, where the test code must be kept up to date with respect to the production code.

Some types of code are inconvenient to use for testing, however. For example, database calls require setup and may be slow, and code calling REST services requires the service to be available and, again, may be slow. In a previous post, we saw a simple way to replace repositories calling a database with an in-memory version. In this post, we will see how to replace code calling a REST service with a version reading from file.

When creating a fake version of a piece of code, there are two things to keep in mind:

  • The less of the production code you replace with test code, the easier it is to keep the two in sync.
  • The test version should be tested using the same test suite as the production code, to verify that the two behave identically.

Getting Started

A sample project, rld-rest-sample, accompanying this post can be found on GitHub.

Use the following commands to download, build and run the sample project:

$ mkdir reallifedeveloper
$ cd reallifedeveloper
$ git clone
$ cd rld-rest-sample
$ mvn -DcheckAll clean install # Should end with BUILD SUCCESS
$ java -Dserver.port=8081 -jar target/rld-rest-sample-1.0.jar

You can now try the following URLs to see that everything is working:

Example Code

Assume that we want to create a REST service that can list the countries of the world, and also the states of a particular country. The main reason that this was chosen as an example is that there are free online services that we can use for testing.

First of all, we define the CountryService interface:

package com.reallifedeveloper.sample.domain;

import java.io.IOException;
import java.util.List;

public interface CountryService {

    List<Country> allCountries() throws IOException;

    List<State> statesOfCountry(String alpha3Code) throws IOException;
}

We now want to create an implementation of the CountryService interface that uses the free services mentioned above, from a site called GroupKT. We call this implementation GroupKTCountryService. To get started, we create integration tests that connect to the REST services and define the behavior we expect:

package com.reallifedeveloper.sample.infrastructure;

// imports...

public class GroupKTCountryServiceIT {

    @Rule
    public ExpectedException expectedException = ExpectedException.none();

    private GroupKTCountryService service = new GroupKTCountryService("");

    @Test
    public void allCountries() throws Exception {
        List<Country> allCountries = service().allCountries();
        assertThat(allCountries, notNullValue());
        assertThat(allCountries.size(), is(249));
    }

    @Test
    public void indiaShouldHave36States() throws Exception {
        List<State> statesOfIndia = service().statesOfCountry("IND");
        assertThat(statesOfIndia, notNullValue());
        assertThat(statesOfIndia.size(), is(36));
    }

    @Test
    public void unknownCountryShouldGiveEmptyList() throws Exception {
        List<State> statesOfUnknownCountry = service().statesOfCountry("foo");
        assertThat(statesOfUnknownCountry, notNullValue());
        assertThat(statesOfUnknownCountry.isEmpty(), is(true));
    }

    @Test
    public void nullCountryShouldThrowException() throws Exception {
        expectedException.expectMessage("alpha3Code must not be null");
        service().statesOfCountry(null);
    }

    @Test
    public void constructorNullBaseUrlShouldThrowException() {
        expectedException.expectMessage("baseUrl must not be null");
        new GroupKTCountryService(null);
    }

    protected CountryService service() {
        return service;
    }
}
Note the protected service method that will be used later when we test the file version of the service.

The GroupKTCountryService that is created together with the integration test is as follows:

package com.reallifedeveloper.sample.infrastructure;

// imports...

public class GroupKTCountryService implements CountryService {

    private final String baseUrl;

    public GroupKTCountryService(String baseUrl) {
        if (baseUrl == null) {
            throw new IllegalArgumentException("baseUrl must not be null");
        }
        this.baseUrl = baseUrl;
    }

    @Override
    public List<Country> allCountries() throws IOException {
        String jsonCountries = jsonAllCountries();
        ObjectMapper objectMapper = new ObjectMapper();
        RestResponseWrapper<Country> countriesResponse =
                objectMapper.readValue(jsonCountries, new TypeReference<RestResponseWrapper<Country>>() {});
        return countriesResponse.restResponse.result;
    }

    @Override
    public List<State> statesOfCountry(String alpha3Code) throws IOException {
        if (alpha3Code == null) {
            throw new IllegalArgumentException("alpha3Code must not be null");
        }
        String jsonStates = jsonStatesOfCountry(alpha3Code);
        ObjectMapper objectMapper = new ObjectMapper();
        RestResponseWrapper<State> statesResponse =
                objectMapper.readValue(jsonStates, new TypeReference<RestResponseWrapper<State>>() {});
        return statesResponse.restResponse.result;
    }

    protected String jsonAllCountries() throws IOException {
        RestTemplate restTemplate = new RestTemplate();
        return restTemplate.getForObject(baseUrl() + "/country/get/all", String.class);
    }

    protected String jsonStatesOfCountry(String alpha3Code) throws IOException {
        RestTemplate restTemplate = new RestTemplate();
        String stateUrl = baseUrl() + "/state/get/" + alpha3Code + "/all";
        return restTemplate.getForObject(stateUrl, String.class);
    }

    protected String baseUrl() {
        return baseUrl;
    }

    private static final class RestResponseWrapper<T> {
        private final RestResponse<T> restResponse;

        @JsonCreator
        RestResponseWrapper(@JsonProperty("RestResponse") RestResponse<T> restResponse) {
            this.restResponse = restResponse;
        }

        private static final class RestResponse<T> {
            private final List<String> messages;
            private final List<T> result;

            @JsonCreator
            RestResponse(@JsonProperty("messages") List<String> messages,
                    @JsonProperty("result") List<T> result) {
                this.messages = messages;
                this.result = result;
            }
        }
    }
}
Note the protected jsonAllCountries and jsonStatesOfCountry methods that return a JSON string representing the different types of information. These methods are overridden in the FileCountryService that reads JSON from files instead of over HTTP:

package com.reallifedeveloper.sample.infrastructure;



public class FileCountryService extends GroupKTCountryService {

    public FileCountryService(String baseUrl) {
        super(baseUrl);
    }

    @Override
    protected String jsonAllCountries() throws IOException {
        return TestUtil.readResource(baseUrl() + "/all_countries.json");
    }

    @Override
    protected String jsonStatesOfCountry(String alpha3Code) throws IOException {
        return TestUtil.readResource(baseUrl() + "/states_" + alpha3Code + ".json");
    }
}

The TestUtil.readResource method comes from rld-build-tools that is available from the central Maven repository and from GitHub. The method simply reads a file from the classpath and returns its contents as a string.
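If you prefer not to add the dependency, the same behavior can be sketched with the JDK alone. The class name ResourceReader is made up for this example, and the real TestUtil.readResource in rld-build-tools may differ in details such as error handling:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Sketch of a classpath-resource reader; the real TestUtil.readResource in
// rld-build-tools may differ in details such as error handling.
public final class ResourceReader {

    private ResourceReader() {
    }

    // Reads a resource from the classpath and returns its contents as a UTF-8 string.
    public static String readResource(String resourceName) throws IOException {
        try (InputStream in = ResourceReader.class.getClassLoader().getResourceAsStream(resourceName)) {
            if (in == null) {
                throw new IOException("Resource not found: " + resourceName);
            }
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```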

We also need to add a few JSON files under src/test/resources/json:

To test the FileCountryService, we use the same test cases as for the GroupKTCountryService, so we create a FileCountryServiceTest that inherits from GroupKTCountryServiceIT but plugs in a FileCountryService to test instead of a GroupKTCountryService:

package com.reallifedeveloper.sample.infrastructure;

import com.reallifedeveloper.sample.domain.CountryService;

public class FileCountryServiceTest extends GroupKTCountryServiceIT {

    private FileCountryService service = new FileCountryService("json");

    @Override
    protected CountryService service() {
        return service;
    }
}
We can now use the FileCountryService when testing other code, for example application services or REST resources that use the service. We can be sure that it behaves like the real service since we run the same test suite on the two.

Packaging the Code

The test versions of your services, and the JSON or XML response files that you provide, should normally be under src/test and will therefore not be available in the jar file created. If you need to use the test versions of services in other projects, you can easily configure Maven to create a jar file containing your test code:

<build>
    <plugins>
        <plugin>
            <!-- Always generate a *-tests.jar with all test code -->
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-jar-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>test-jar</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

This will create a file called something like rld-rest-sample-1.0-tests.jar containing the test code.

In other projects where you want to use the test versions of the services, add a dependency of type test-jar:

<dependency>
    <!-- The groupId here is an assumption; use the sample project's actual groupId. -->
    <groupId>com.reallifedeveloper</groupId>
    <artifactId>rld-rest-sample</artifactId>
    <version>1.0</version>
    <type>test-jar</type>
    <scope>test</scope>
</dependency>

You can now use the test services and the packaged JSON or XML files when testing your other projects.


When you create a service that reads JSON or XML, isolate the methods that read over the network. Create a test version of the service that substitutes the methods with methods that read from local files instead. Also provide a few JSON or XML files that contain the responses you currently need during testing. It is easy to add more response files later if you need to add new test cases. Make sure that you run the same test cases on the file version of the service that you run on the real version.

Following these simple recommendations gives you a test version of the service that runs quickly and reliably, and that is guaranteed to be kept up to date with respect to the real version.
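As a minimal, self-contained illustration of these recommendations, here is a hypothetical service (the class names and JSON format are made up for this sketch): the production class isolates the network read in a single protected method, and the test version overrides only that method to return canned JSON:

```java
// GreetingService is a hypothetical example, not part of the sample project.
class GreetingService {

    // The only method that touches the network; everything else is shared
    // between the production version and the test version.
    protected String fetchJson() {
        throw new UnsupportedOperationException("real HTTP call goes here");
    }

    // Production logic: extract the greeting value from the JSON response.
    // Simplified parsing for the sketch; real code would use a JSON library.
    public String greeting() {
        String json = fetchJson();
        int start = json.indexOf(':') + 2;
        int end = json.lastIndexOf('"');
        return json.substring(start, end);
    }
}

// Test version: substitutes the network read with a canned response. In a
// real project it would read the JSON from a file on the classpath instead.
class FileGreetingService extends GreetingService {
    @Override
    protected String fetchJson() {
        return "{\"greeting\":\"hello\"}";
    }
}
```

Running the same test suite against both classes is what guarantees that the test version stays in sync with the production version.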

Writing an Integration Test First for RabbitMQ

In a previous post, we saw some Java code for redelivering messages from a queue to an exchange in RabbitMQ. Obviously, a test was written before writing the actual code.

What kind of test is appropriate in this situation? What we want to test is that messages that are in a RabbitMQ queue are removed from that queue and instead available in the queues that are bound to the exchange we move the messages to.

In this case, the important thing is how RabbitMQ behaves as a result of executing our code, so an integration test that connects to a test instance of RabbitMQ running on the developer machine is the right solution. I strongly believe that every developer should have access to a personal instance of the systems that they need to integrate with, as far as possible. This means that the developers should have their own databases, message queues, web containers, and so on, that they can use for testing without disturbing or being disturbed by anyone else. The easiest way to achieve this, in my experience, is to install the systems on each developer machine.

The test declares two exchanges, foo.domain and foo.domain.dlx, and three queues, foo.domain.queue1, foo.domain.queue2 and foo.domain.dlq. The queue foo.domain.queue1 is bound to exchange foo.domain with routing key rk1, and foo.domain.queue2 is bound to the same exchange with routing key rk2. The exchange foo.domain.dlx is set as the dead letter exchange for both queues. We then put three messages, foo, bar and baz, in the dead letter queue with different routing keys:

public class MoveMessagesIT {

    private static final String EXCHANGE = "foo.domain";
    private static final String EXCHANGE_DLX = "foo.domain.dlx";
    private static final String QUEUE1 = "foo.domain.queue1";
    private static final String QUEUE2 = "foo.domain.queue2";
    private static final String QUEUE_DLX = "foo.domain.dlq";
    private static final String[] TEST_MESSAGES = { "foo", "bar", "baz" };
    private static final String ROUTING_KEY1 = "rk1";
    private static final String ROUTING_KEY2 = "rk2";
    private static final String[] ROUTING_KEYS = { ROUTING_KEY1, ROUTING_KEY2, ROUTING_KEY1 };

    @Before
    public void init() throws IOException {
        Connection connection = connectionFactory().newConnection();
        Channel channel = connection.createChannel();
        // Cleanup from previous test
        channel.queueDelete(QUEUE1);
        channel.queueDelete(QUEUE2);
        channel.queueDelete(QUEUE_DLX);
        channel.exchangeDelete(EXCHANGE);
        channel.exchangeDelete(EXCHANGE_DLX);
        // EXCHANGE/QUEUEs
        channel.exchangeDeclare(EXCHANGE, "topic");
        Map<String, Object> queueArgs = new HashMap<>();
        queueArgs.put("x-message-ttl", 10 * 1000);
        queueArgs.put("x-dead-letter-exchange", EXCHANGE_DLX);
        channel.queueDeclare(QUEUE1, true, false, false, queueArgs);
        channel.queueBind(QUEUE1, EXCHANGE, ROUTING_KEY1);
        channel.queueDeclare(QUEUE2, true, false, false, queueArgs);
        channel.queueBind(QUEUE2, EXCHANGE, ROUTING_KEY2);
        // DLX/DLQ
        channel.exchangeDeclare(EXCHANGE_DLX, "topic");
        channel.queueDeclare(QUEUE_DLX, true, false, false, null);
        channel.queueBind(QUEUE_DLX, EXCHANGE_DLX, "#");
        // Send test messages to DLQ
        for (int i = 0; i < TEST_MESSAGES.length; i++) {
            channel.basicPublish(EXCHANGE_DLX, ROUTING_KEYS[i], null, TEST_MESSAGES[i].getBytes());
        }
    }
We also want to verify that the contents of the three queues after moving the messages are as expected, so we create a helper method, verifyMessages, that reads messages from a queue and verifies that the message content and routing key are correct:

    private static void verifyMessages(String queue, String routingKey, String... messages) throws IOException {
        Connection connection = connectionFactory().newConnection();
        Channel channel = connection.createChannel();

        List<String> messagesRead = new ArrayList<>();
        while (true) {
            GetResponse response = channel.basicGet(queue, true);
            if (response == null) {
                break;
            }
            Envelope envelope = response.getEnvelope();
            assertThat(envelope.getRoutingKey(), is(routingKey));
            messagesRead.add(new String(response.getBody()));
        }
        assertThat(messagesRead, is(Arrays.asList(messages)));
    }

    private static ConnectionFactory connectionFactory() {
        ConnectionFactory factory = new ConnectionFactory();
        return factory;
    }


We are now ready to add the test method, which is very simple: move all messages from foo.domain.dlq to exchange foo.domain and then verify that the contents of the queues are as expected:

    @Test
    public void moveAllMessagesToExchange() throws Exception {
        MoveMessages moveMessages = new MoveMessages("localhost", "guest", "guest", "/");
        moveMessages.moveAllMessagesToExchange(QUEUE_DLX, EXCHANGE);
        verifyMessages(QUEUE1, ROUTING_KEY1, "foo", "baz");
        verifyMessages(QUEUE2, ROUTING_KEY2, "bar");
        verifyMessages(QUEUE_DLX, null);
    }
}


We have seen an example of how to write an integration test for RabbitMQ. A few things to note:

  • In this case, an integration test is exactly what is needed since we want to verify how an external system, RabbitMQ, behaves as an effect of running our code. No unit testing is necessary, and mocking the behavior of RabbitMQ in this case would only verify that our mock setup behaves the way that we believe RabbitMQ to behave.
  • Having easy access to your own instance of the systems you integrate with, for example by having them locally installed, makes integration testing much simpler.
  • You often learn useful things about the system you integrate with while you write the integration tests. In this case, the mechanics of getting a connection and channel, and for getting and publishing messages, were already in place before writing any production code.
  • Sometimes the test code is larger and more complex than the resulting code under test. This is OK, but remember to write the test code as cleanly as possible and refactor when necessary.

Redelivering Dead-Lettered Messages in RabbitMQ

Here we look at some options for redelivering messages from a RabbitMQ queue to an exchange. The queue can for example be a dead letter queue.

The background for this is that an organization I’m working with at the moment wants to automatically generate web pages based on information entered into an internal system, let’s call it system A. We decided on an architecture where system A publishes domain events using RabbitMQ, and a separate system, system B, listens to a particular kind of event and generates a web page based on information in the event.

The two systems are developed by different suppliers, and they have different release cycles. In this case, the new version of system A was released about a week before the new version of system B, and so domain events were published before anyone was ready to consume them.

The queue that system B listens to is configured with a time-to-live and a dead letter exchange. The reason for this is to avoid the problem of poison messages, i.e., messages that for some reason cause the receiving system to fail, returning the message to the queue to be redelivered, causing the system to fail again, and so on. The old book J2EE AntiPatterns called this the hot potato antipattern.

The result is that a number of messages ended up in the dead letter queue because their time-to-live was reached. When system B was ready to start consuming messages, we wanted to redeliver those messages to give system B a chance to handle them.

Options for Redelivering Messages in RabbitMQ

There are several different ways to redeliver messages in RabbitMQ:

  • Manually, using the admin GUI
  • Using the Shovel plugin
  • Using custom code

If there are just a few messages to redeliver, you can do it manually using the RabbitMQ admin GUI. In the dead letter queue, use “Get Message(s)” to get all messages, optionally setting “Requeue” to false. For each message, copy the payload and routing key and use the information to fill out the “Publish message” section for the exchange you want to redeliver the message to.

If you want to redeliver messages on a more permanent basis, e.g., to synchronize between different RabbitMQ hosts, then the Shovel plugin is probably the right way to go.

In this particular case, there were around 2,000 messages to redeliver, and we only needed to redeliver the messages once, so I decided to write some Java code to do it:

    public void moveAllMessagesToExchange(String fromQueue, String toExchange) throws IOException {
        Connection connection = connectionFactory().newConnection();
        Channel channel = connection.createChannel();
        while (true) {
            GetResponse response = channel.basicGet(fromQueue, false);
            if (response == null) {
                return;
            }
            Envelope envelope = response.getEnvelope();
            String routingKey = envelope.getRoutingKey();
            channel.basicPublish(toExchange, routingKey, response.getProps(), response.getBody());
            channel.basicAck(envelope.getDeliveryTag(), false);
        }
    }
To easily run the code, I added a dummy JUnit test and ran it using Eclipse:

    @Test
    public void foo() throws Exception {
        MoveMessages moveMessages = new MoveMessages("", "admin", "tops3cret", "/");
        moveMessages.moveAllMessagesToExchange("", "");
    }

A few things to note about this solution:
  • The code above redelivers messages from a queue to an exchange. This means that the messages will be routed to all appropriate queues that are bound to that exchange. It is easy to change the code to deliver the messages directly to a specific queue: just change the call to basicPublish to use the empty string as exchange name and the name of the queue as routing key.
  • When you redeliver the messages from a dead letter queue, temporarily disable the consuming systems, if at all possible, so that they do not consume messages that are being redelivered. If you do not, you run the risk of the consuming system nacking the messages, which causes them to be put in the dead letter queue again, where the redelivery code tries to redeliver them, and so on. If this happens fast enough, you may have an endless loop on your hands.
  • The code to redeliver messages can be made smarter if necessary, for example by looking at the contents of the messages and only redelivering those with certain properties, or delivering them to different exchanges based on the contents.


Redelivering messages from a queue to an exchange in RabbitMQ can be done in several ways. If you need to do this only once and for a lot of messages, the simple code shown above can be used.

Building Linux Docker Images on Windows 2008 R2 with Maven and TeamCity

This post describes how to use Maven to build a Docker image using a remote Docker host running on Linux. This means that the Maven build can run anywhere, for example in TeamCity on Windows. The assumption here is that we have a separate (virtual) machine running Linux (RHEL 7), and we use this machine both as a Docker host for building images, and also as a private Docker registry.

The background to all this is that an organization I’m working with has standardized on TeamCity running on Windows Server 2008 R2 for continuous integration. They are in the process of moving TeamCity to Windows 2012 R2, but the same setup can hopefully be used on the new build server.

The organization is mainly Windows-based, but there are some important Java services running on Linux (Red Hat, RHEL 7), with some more on the way. I have started experimenting with Docker for easier deployment of Java services. Since the organization is not running any very recent Windows servers with native Docker support, the focus here is on Docker for Linux.

The steps to get this build process up and running are as follows:

  1. Install Docker on the Linux machine.
  2. Allow remote access to the Docker daemon in a secure way.
  3. Configure the Maven pom.xml to create a Docker image using the Linux Docker host.
  4. Set up a private Docker registry on the Linux machine.
  5. Update the Maven pom.xml so that we can push images to the private registry.
  6. Configure TeamCity to run the build process.

Installing Docker

Docker has recently changed the packaging of distributions so that the free version is now called the Community Edition (CE), while the version where you pay for support is called the Enterprise Edition (EE).

On Red Hat, only the Enterprise Edition is supported. On CentOS, both editions are available, so to experiment with Docker for free, using CentOS is one obvious way to go. In this case, however, the organization I’m working with has standardized only on Red Hat, not CentOS, so the machine that is available for experimentation is running Red Hat 7. Since I am still only experimenting, I decided to give the free Docker version for CentOS a chance, even though the machine is running RHEL 7. These instructions should work on CentOS as well.

Follow the official installation instructions for Docker on CentOS:

$ sudo yum install -y yum-utils
$ sudo yum-config-manager \
    --add-repo \
$ sudo yum makecache fast
$ sudo yum install -y docker-ce
$ sudo systemctl start docker
$ sudo docker run hello-world

The last command should print some text starting with “Hello from Docker!”. If so, you have successfully installed Docker on your machine.

Allowing Remote Access to the Docker Daemon

At the moment, Docker is only available when you run as root on the local machine. This is because the Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by root, and other users can only access it using sudo. The Docker daemon always runs as the root user.

We want to access Docker from another machine in order to build Docker images from a Windows machine, so we need to configure Docker to listen on a TCP socket. Since anyone who can access the Docker daemon effectively gets root privileges, we want to limit access using TLS and certificates. We will set up our own certificate authority (CA). If you have access to certificates from some other CA, you can use those instead.

First of all we create the CA:

$ cd
$ mkdir -p docker/ca
$ cd docker/ca/
$ openssl genrsa -aes256 -out ca-key.pem 4096
$ openssl req -new -x509 -days 1825 -key ca-key.pem -sha256 \
    -out ca.pem

Then we create a key and certificate for the server:

### Set HOST to the DNS name of your Docker daemon’s host:
$ cd
$ mkdir -p docker/certs
$ cd docker/certs
$ ln -s ../ca/ca.pem .
$ openssl genrsa -out server-key.pem 4096
$ openssl req -subj "/CN=$HOST" -sha256 -new \
    -key server-key.pem -out server.csr
### Provide all DNS names and IP addresses that will be used
### to contact the Docker daemon:
$ echo subjectAltName = DNS:$HOST,IP:,IP: \
    > extfile.cnf
$ openssl x509 -req -days 365 -sha256 -in server.csr \
    -CA ../ca/ca.pem -CAkey ../ca/ca-key.pem -CAcreateserial \
    -out server-cert.pem -extfile extfile.cnf

Now we create a key and certificate for the client:

$ openssl genrsa -out key.pem 4096
$ openssl req -subj '/CN=client' -new -key key.pem \
    -out client.csr
$ echo extendedKeyUsage = clientAuth > extfile.cnf
$ openssl x509 -req -days 365 -sha256 -in client.csr \
    -CA ../ca/ca.pem -CAkey ../ca/ca-key.pem -CAcreateserial \
    -out cert.pem -extfile extfile.cnf

Clean up the certificate directories:

$ rm client.csr server.csr extfile.cnf
$ chmod 0400 ../ca/ca-key.pem key.pem server-key.pem
$ chmod 0444 ../ca/ca.pem server-cert.pem cert.pem

We are finally ready to enable remote access to Docker:

$ cd
$ sudo mkdir /etc/systemd/system/docker.service.d
### Substitute $HOME/docker/certs with the directory where you
### created the certificates above:
$ cat > docker.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --tlsverify \
--tlscacert=$HOME/docker/certs/ca.pem \
--tlscert=$HOME/docker/certs/server-cert.pem \
--tlskey=$HOME/docker/certs/server-key.pem \
-H tcp://
EOF
$ sudo mv docker.conf /etc/systemd/system/docker.service.d/
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ sudo systemctl enable docker

We will now tell the docker client how to connect to the daemon:

$ export DOCKER_HOST=
### Substitute ~/docker/certs with your certificate directory:
$ export DOCKER_CERT_PATH=~/docker/certs
$ docker run hello-world

If the last command printed some text starting with “Hello from Docker!”, congratulations, you have now configured the Docker daemon to allow remote access on port 2376, the standard port to use for Docker over TLS.

Please note that you did not have to use sudo to run the docker command as root. Anyone who has access to the client key docker/certs/key.pem and the client certificate docker/certs/cert.pem can now call the Docker daemon from a remote host, in practice getting root access to the machine Docker is running on. It is important to keep the client key safe!

Also note that Docker is very specific when it comes to the names used for keys and certificates. The files used for client authentication must be called key.pem, cert.pem and ca.pem, respectively.

Since we want other machines to be able to connect to the Docker daemon, we need to open port 2376 in the firewall:

$ sudo firewall-cmd --zone=public --add-port=2376/tcp
$ sudo firewall-cmd --zone=public --add-port=2376/tcp \
    --permanent

Configuring Maven to Create a Docker Image

The Docker configuration we have done so far has been on the Linux server. We now move to some other machine, for example your workstation, where we assume that Docker is not installed. In this example the workstation is running Windows so the example paths will be using the Windows format.

We will now configure the Maven POM to create a Docker image on the Linux server, using a Docker plugin for Maven. There are several to choose from, but in this example we use the one from Spotify.
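The plugin configuration in the POM could look something like the sketch below. The image name, the Dockerfile directory and the jar name are examples, and you should check the Spotify plugin documentation for the details:

```xml
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.4.13</version>
  <configuration>
    <imageName>rld/rld-docker-sample</imageName>
    <!-- Directory containing the Dockerfile -->
    <dockerDirectory>src/main/docker</dockerDirectory>
    <resources>
      <resource>
        <!-- Copy the built jar into the Docker build context;
             assumes the build produces app.jar -->
        <targetPath>/</targetPath>
        <directory>${project.build.directory}</directory>
        <include>app.jar</include>
      </resource>
    </resources>
  </configuration>
</plugin>
```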




The Docker image is built from a Dockerfile, in this example the following one:

FROM frolvlad/alpine-oraclejdk8:slim
ADD app.jar /app.jar
RUN sh -c 'touch /app.jar'
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -jar /app.jar" ]

We can now try to build a Docker image:

mvn clean install docker:build

This fails with an error message saying that it cannot connect to localhost on port 2375:

[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.4.13:build (default-cli) on project rld-docker-sample: Exception caught: java.util.concurrent.ExecutionException: org.apache.http.conn.HttpHostConnectException: Connect to localhost:2375 [localhost/, localhost/0:0:0:0:0:0:0:1] failed: Connection refused: connect -> [Help 1]

The Docker Maven plugin expects Docker to be running on the same machine without TLS, so the default port 2375 is assumed. We need to set an environment variable to tell the plugin where Docker is running:

# Set the DOCKER_HOST variable to point to your Docker machine:

If we try to run mvn docker:build now, we get a different error message, saying that the server failed to respond with a valid HTTP response:

[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.4.13:build (default-cli) on project anmalan-service: Exception caught: java.util.concurrent.ExecutionException: org.apache.http.client.ClientProtocolException: The server failed to respond with a valid HTTP response -> [Help 1]

This is because the plugin is still trying to use plain HTTP and not HTTPS. To make the plugin understand that we want to use HTTPS, we need to provide the client key and certificate and the CA certificate that we created previously.

First of all, you need to copy the three files docker/certs/{key,cert,ca}.pem from the Docker machine to your workstation. In this example, we copy them to the directory D:\docker\certs.

We now need to point the Maven Docker plugin to the directory where the necessary certificates and key are by setting two more environment variables. The values below assume the certificates were copied to D:\docker\certs:

DOCKER_CERT_PATH=D:\docker\certs
DOCKER_TLS_VERIFY=1

The DOCKER_TLS_VERIFY environment variable supposedly tells the client to verify the certificate of the Docker daemon. I don’t actually think the Spotify Docker client uses this variable, but it doesn’t hurt to set it.

If we now run mvn docker:build we should be greeted with “BUILD SUCCESS”.

Setting up a Private Docker Registry

We are now in a position where we can build a Docker image on the Linux machine from a remote host. We can also already push the image to the central Docker registry, but in this case I decided to experiment with a private Docker registry for the images built for the organization I’m helping.

Luckily, it is very easy to start a private Docker registry, using Docker of course. On the Linux server running the Docker daemon, give the following commands:

$ docker run -d -p 5000:5000 --restart=always --name registry \
    -v ~/docker/certs:/certs \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/server-cert.pem \
    -e REGISTRY_HTTP_TLS_KEY=/certs/server-key.pem registry:2
$ sudo firewall-cmd --zone=public --add-port=5000/tcp
$ sudo firewall-cmd --zone=public --add-port=5000/tcp \
    --permanent
$ docker ps

As usual, you need to replace ~/docker/certs with the directory where you created the server key and certificate.

The docker ps command should show that the registry is running, and that port 5000 is mapped to port 5000 on the host machine. This means that we can now push Docker images to our registry by connecting to port 5000 on the Linux server. As you may have guessed from the environment variables provided when the registry was started, the client that wants to push an image also needs to use a key and certificate to identify itself.

Please note that who is the client and who is the server depends on your point of view. When we use the Docker Maven plugin to build an image, the plugin is the client communicating with the Docker daemon—the server—on port 2376. When we push an image to the registry, the Docker daemon is the client, communicating with the registry server on port 5000.

Configuring Maven to Push to Our Registry

You specify that you want to push to a certain registry by using the address of the registry as a prefix to the Docker image name. So instead of naming the image rld/rld-docker-sample, for example, you prefix the name with the host and port of the registry to push to the registry running on that host.



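For example, if the registry runs on a host called docker.example.com (a placeholder name) on port 5000, the image name in the plugin configuration would look something like this:

```xml
<imageName>docker.example.com:5000/rld/rld-docker-sample</imageName>
```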
We can now try to build and push an image to our private Docker registry:

$ mvn clean install docker:build -DpushImage

This will probably fail after trying to push five times, with a rather cryptic error message saying that the certificate is signed by an unknown authority:

[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.4.13:build (default-cli) on project rld-docker-sample: Exception caught: Get x509: certificate signed by unknown authority -> [Help 1]

The question is which certificate is signed by an unknown authority. The answer is that when the Docker daemon connects to the private Docker registry, the registry presents a certificate (docker/certs/server-cert.pem) that the daemon does not recognize, since the daemon knows nothing about the CA we created and used to sign that certificate.

The solution is to add the CA certificate to a subdirectory of /etc/docker/certs.d with the same name as the registry. The file must use the file extension .crt to be picked up as a CA certificate:

# Use the name of your registry:
$ sudo mkdir -p /etc/docker/certs.d/<registry-host>:5000
# Replace ~/docker/ca with your CA directory:
$ sudo cp ~/docker/ca/ca.pem /etc/docker/certs.d/<registry-host>:5000/ca.crt

When we now try to build, we hopefully get “BUILD SUCCESS”:

$ mvn clean install docker:build -DpushImage

You can use the registry API to find information about the images that are stored in your private registry. For example, if you want to see which images are available, use a command like this:

$ curl --cacert ~/docker/certs/ca.pem \
    https://<registry-host>:5000/v2/_catalog

To see what tags are available for a specific image, use a command like the following:

$ curl --cacert ~/docker/certs/ca.pem \
    https://<registry-host>:5000/v2/rld/rld-docker-sample/tags/list

In the command above, rld/rld-docker-sample is the name of an image, one that was included in the output of the previous _catalog command.

Configuring TeamCity

Luckily, configuring TeamCity to build the Docker image is easy, since the heavy lifting is done by Maven. We need to copy the key and certificate files docker/certs/{key,cert,ca}.pem to an appropriate location on the machine running TeamCity. Let’s assume we put them in E:\docker\certs.

We also need to set the environment variables that tell the Docker client how to connect to the Docker daemon:

# Set the DOCKER_HOST variable to point to your Docker machine:

You need to restart the TeamCity process for the changes to take effect.

Since I believe in the concept of continuous delivery, every commit is a release candidate, so the build process should create an artifact with a real version number, not a snapshot. It should also create a release branch and tag the version that was built. The rest of this section describes how to set up a TeamCity build appropriate for continuous integration—it is not limited to building Docker images but can be used in many different types of project.

The build steps necessary can be reused for different projects. In TeamCity, you can create a build configuration template that defines build parameters and build steps. It is then easy to create a build configuration using the template.

Start by creating a new TeamCity project. We will now define a few configuration parameters for the project, parameters that will be available to all sub-projects, build templates and build configurations that belong to the project.

Under Parameters, define the following configuration parameters:

  • development.branch=master
  • major.version.number=
  • version.number=%major.version.number%.%build.counter%
  • release.branch=release-%version.number%

Now create a build configuration template called Maven Build with the following build steps:

  1. Create Release Branch (of type Command Line):
     git checkout -b %release.branch% %development.branch%
  2. Deploy Snapshots (of type Maven):
     mvn clean deploy -DskipTests
  3. Update Version Numbers (of type Maven):
     mvn versions:set -DnewVersion=%version.number%
  4. Build Docker Image (of type Maven):
     mvn clean install docker:build -DpushImage
  5. Commit and Tag Release (of type Command Line):
     git commit -a -m "New release candidate %version.number%"
     git push origin %release.branch%
     git tag %version.number%
     git push origin %version.number%
  6. Remove Local Branch (of type Command Line, execute always):
     git checkout %development.branch%
     git branch -D %release.branch%

For the project you want to build, go to VCS Roots and click on Create VCS Root to define a new Git VCS root pointing to the Git repository of your project.

We can now create a build configuration called Build that is based on the Maven Build template. The build parameters that you previously defined are displayed and you need to fill in the appropriate version number to use for major.version.number. If you use 2.1, for example, each build will create a version starting with 2.1 and with a build number starting at one as the third component, generating versions 2.1.1, 2.1.2, 2.1.3, and so on.
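The way these parameters combine can be sketched in a few lines of Java. This is a toy illustration of the expansion rules, not anything TeamCity actually runs:

```java
public class Main {

    // version.number = %major.version.number%.%build.counter%
    static String versionNumber(String majorVersion, int buildCounter) {
        return majorVersion + "." + buildCounter;
    }

    // release.branch = release-%version.number%
    static String releaseBranch(String versionNumber) {
        return "release-" + versionNumber;
    }

    public static void main(String[] args) {
        // With major.version.number 2.1 and build counter 3:
        System.out.println(versionNumber("2.1", 3));
        System.out.println(releaseBranch(versionNumber("2.1", 3)));
    }
}
```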

Under Version Control Settings, click Attach VCS Root and choose the Git VCS root you created for the project. Under Checkout Options, make sure to change VCS checkout mode to Automatically on agent (if supported by VCS roots).

Under Triggers, click Add New Trigger and add a VCS Trigger with the default settings.

Congratulations, you now have a TeamCity build that will create a new tagged release candidate every time you push changes to Git. A Docker image, tagged with the version number, will also be pushed to your private Docker registry.


By setting up a Docker host running on Linux and allowing remote access to it in a secure way using TLS and certificates, we can build and tag Docker images on it from other machines that do not run Docker. We can do this using a Docker Maven plugin, for example.

Creating a private Docker registry is easy, so that we can push images to a registry that we control instead of the central registry.

With a continuous integration server like TeamCity, we can make sure that every push to Git creates a tagged release candidate, and that the corresponding Docker image is pushed to our private Docker registry.

Books Every Software Developer Should Read

I try to read at least one book per month. Reading is a nice way to spend the time when you’re on a plane, a train or a bus. It is also a good example of the habit “when in doubt, do something”: when you have some time to spare, try spending it on something useful.

There are a lot of good books on software development, but every now and again you come across a book that gives you a completely new perspective on how development should be done. Here is a list of books that I think every software developer should read, in no particular order.

  • Domain-Driven Design: Tackling Complexity in the Heart of Software by Eric Evans
  • This is the book that coined the term domain-driven design as an approach to modeling the problem domain in a way understandable to domain experts, using the same terms all the way from verbal discussions to source code. Evans’ book is the reference on the theory of DDD. If you want to know what a bounded context is, or what the transactional properties of an aggregate are, this is the book to read.

  • Implementing Domain-Driven Design by Vaughn Vernon
  • If Evans’ book explains the theory of domain-driven design, this book explains the practice, and shows how to implement all the elements in DDD under different circumstances. This book also explains domain events in some detail, something that is mostly missing from Evans’ original book. Includes a lot of sample code, nicely interwoven with discussions of why the code is written the way it is.

  • Specification by Example: How Successful Teams Deliver the Right Software by Gojko Adzic
  • The author introduces the term specification by example for a process that subsumes acceptance test-driven development and behavior-driven development as a way to specify requirements using concrete executable examples of how the system should behave, with a tool such as FitNesse. Done right, this also provides live documentation that is kept up to date since it is constantly being executed.

  • Release It!: Design and Deploy Production-Ready Software by Michael T. Nygard
  • This is the book to turn to for tips on building robust systems that can be operated under real-life conditions. The author describes a number of patterns and anti-patterns for stability and capacity, and also devotes several chapters to practical operations. If you want to learn more about the circuit breaker pattern, for example, this is the book to read.

  • Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble and David Farley
  • This book explains how to set up a delivery pipeline that lowers stress in the development team by making the release of a new version of a system a non-event, using automation and automated tests to guarantee quality in the system being delivered. The book includes a chapter on version control where the authors explain the benefits of developing on the mainline and avoiding branching as much as possible.

Estimating User Stories Using T-Shirt Sizes

Estimating user stories may sometimes be useful to predict, for example, what can be included in an upcoming release. If you estimate using hours and days, it usually takes a long time to agree on the estimates. An alternative is to use relative story sizes for the estimates instead of absolute time. This often turns out to be quicker, and at least as reliable as using time estimates.

The traditional way to estimate using relative story sizes is to use story points. This post presents a simplified approach using T-shirt sizes: XS, S, M, L and XL.

When to Use Relative Estimates

The first question to ask is if you should estimate at all. If you see no benefit to the estimates, then don’t produce them. For example, in a short project it is usually more effective to prioritize the user stories and just work as quickly as possible.

If you decide to use estimates, the next question is if you can use relative story size. The approach described here works best in longer projects where the team does not change much over time. Under these conditions, you can use the estimates to measure how much work is actually finished during a sprint, a number called the velocity. The velocity can be used to limit what is included in a sprint, and to predict future releases.

The Estimation Process

It is important not to overspecify the meaning of the different sizes. Just define XS as being something that is very easy and quick, and XL as being something that is close to not fitting into a single sprint. The team will automatically create their own definition of what the sizes mean.

To do the actual estimation of a user story, a variant of planning poker can be used where you can use your hands instead of a deck of cards. Since there are only five sizes available, you can use your fingers to show your estimate, with one finger being XS and five fingers XL. If something is either too small or too large, just show your fist.

So for each user story that you want to estimate, discuss the story internally, and when everyone seems to agree on what the story means, ask the team to simultaneously give their estimates using their hands.

If there are different estimates for a story, the people with the highest and the lowest estimates explain why they think their estimate is correct. The process is repeated until a consensus is reached.

If the team agrees that a story is smaller than XS then it should be included in some other story. If the story is larger than XL, it needs to be broken down into separate stories, perhaps using an epic to group the stories together.

Calculating Velocity

Calculating the velocity of the team is simplified if you have numbers to work with instead of T-shirt sizes. Numbers are also easier to use with a tool like JIRA Agile. Here is a table translating the T-shirt sizes into numerical values:

  • XS = 2
  • S = 3
  • M = 5
  • L = 8
  • XL = 13

You probably recognize the numbers as part of the Fibonacci sequence. This number sequence is commonly used when estimating with story points, and has the appropriate property of being non-linear. This means that a story that is XL is several times larger than one that is XS.

In a team new to Scrum and relative estimates, you may want to avoid using the numerical values and only use the T-shirt sizes when discussing story size with the team.
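As a sketch, the translation table and the velocity calculation could look like this in Java, where the class and method names are made up for illustration:

```java
import java.util.List;
import java.util.Map;

public class Main {

    // Numerical values for the T-shirt sizes, from the table above
    static final Map<String, Integer> POINTS =
            Map.of("XS", 2, "S", 3, "M", 5, "L", 8, "XL", 13);

    // The velocity of a sprint is the sum of the sizes of all finished stories
    static int velocity(List<String> finishedStories) {
        return finishedStories.stream().mapToInt(POINTS::get).sum();
    }

    public static void main(String[] args) {
        // A sprint that finished two small stories, one medium and one large
        System.out.println(velocity(List.of("S", "S", "M", "L")));
    }
}
```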


If you have problems with estimates and find that the estimation process takes too long, try this simple approach using T-shirt sizes to estimate user story size.

Everybody, All Together, From Early On

In the book Lean Architecture: for Agile Software Development, the authors James O. Coplien and Gertrud Bjørnvig claim that the secret to Lean is Everybody, all together, from early on. I don’t know enough of the history of Lean to say if that is true, but I do know that the “Lean Secret” works in practice.

The more I work in different projects and with different teams, the more I see that bringing in all the people that will be affected by the project, as early as possible, is a key factor to project success. This is not to say that everybody needs to be working actively on the project on a daily basis, but they should all be kept in the loop.

How do you know which people should be included? Try a short brainstorming session very early in the project with all the stakeholders you have identified so far. You can probably identify a few more people that will be affected in some way by the project. Use your imagination.

The people to include differs from organization to organization, and from project to project, but here is a partial list:

  • The product owner and the team members, such as developers and testers, are obvious.
  • Architects will want a say in how the system is designed.
  • System administrators want to know how the system affects IT operations.
  • The support organization may need to learn a bit about the system to be able to answer questions.
  • If the system requires training, the people developing the training material need to be informed as soon as possible.
  • The sales department may need to know about the system or product being built.
  • Last but not least, the users of the system may be interested in what the future brings.

Make a habit of constantly trying to identify new stakeholders. The earlier you can include them, the better, but if you find new people late in the project who should be informed, do all you can to get them up to speed on what the project is doing.

How do you keep all the stakeholders informed of what is going on? One simple way is to invite people to the demos that end each sprint. Not everyone will be able to come to every demo, but keep reminding them of the opportunity. If some people are particularly affected by something being demonstrated, push extra hard for them to attend that particular demo. Including people in mail correspondence is an easy way to keep them informed—just don’t include everyone in every correspondence. You can also try to invite people as observers to the daily stand-up meetings, but these meetings are often a bit too technical for the majority of stakeholders.

Using the Lean Secret effectively is an art that requires practice, but it can make or break a project.

Why You Should Publish Domain Events

A domain event is a concept from domain-driven design that signals that the state of a system has changed in a way that may be interesting to others. For example, one type of domain event might show that a user has been added to the database, while another type of domain event could signal that an invoice has been approved, reduced or rejected.

In the highly recommended book Implementing Domain-Driven Design, the author Vaughn Vernon devotes a whole chapter to domain events and how to implement them. This post presents some ideas from the book. Please take the time to read the book for a more in-depth discussion of the topic.

Benefits of Domain Events

So why should you start publishing domain events? The simple answer is that it is an easy way to make your systems more loosely coupled. Domain events can be used to facilitate eventual consistency, which can eliminate the need for distributed transactions, and can improve scalability and performance.

Domain events also make it possible to create integrations that you originally did not plan for. The events provide a way to plug into the processing of a system, and to add new “modules” in the form of completely separate systems.

If you think of the times when you have had to poll a system for changes using complex queries in order to perform some processing, you can surely see how much simpler the task would have been if you had been informed of the changes you were interested in.

Taken together, the benefits of using domain events are great, and the investment in developer time very small. Once the infrastructure for working with events is in place, actually publishing the events is trivial.

Basics of Domain Events

Here are a few things to keep in mind when working with domain events:

  • A domain event shows that something has already happened. Domain events should therefore be named using past tense, for example, UserAdded, InvoiceApproved, InvoiceReduced or InvoiceRejected.
  • Since domain events represent something that has already happened, they are immutable.
  • The domain events should make sense to domain experts, and the events should become a part of the ubiquitous language used by both domain experts and developers.
  • The producer of a domain event should not have to care about which consumers, if any, there are. A domain event is simply published, and may be picked up by zero or more consumers that can be local, i.e., code in the same system, or external, i.e., separate systems.
  • An existing system can be incrementally updated to publish domain events to solve a new integration requirement.

Implementing Domain Events

A domain event should keep track of when it occurred. It may also be useful to keep a version number, e.g., starting at 1, in case we want to examine old events after the implementation has evolved.

Other than that, the domain event should have properties that make it possible to understand what happened. It may be helpful to think of what information would be necessary to trigger the event again, and to include all such information in the event.

Since the event should be useful to external consumers, it should contain only simple data types such as strings, numbers and dates. What we are saying is basically that the event should be serializable, even though we will probably use our own serialization format, such as JSON, and not the serialization mechanism built into the development language.
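As a sketch of these guidelines (the class and field names are illustrative, not taken from the book), an immutable domain event in Java could look like this: past-tense name, a timestamp, a schema version starting at 1, and only simple data types.

```java
import java.time.Instant;

// Immutable domain event: past-tense name, timestamp, schema version,
// and only simple data types so it serializes cleanly (e.g., to JSON).
final class InvoiceApproved {
    private final int eventVersion;   // schema version, starting at 1
    private final Instant occurredOn; // when the event happened
    private final String invoiceId;
    private final long approvedAmountCents;

    InvoiceApproved(String invoiceId, long approvedAmountCents) {
        this.eventVersion = 1;
        this.occurredOn = Instant.now();
        this.invoiceId = invoiceId;
        this.approvedAmountCents = approvedAmountCents;
    }

    int eventVersion() { return eventVersion; }
    Instant occurredOn() { return occurredOn; }
    String invoiceId() { return invoiceId; }
    long approvedAmountCents() { return approvedAmountCents; }
}
```

All fields are final and there are no setters, so the event cannot be changed after it has been created, matching its nature as a record of something that has already happened.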

Publishing Domain Events

A domain event signals that something has occurred. This means that the event must be transactionally coupled to the domain model. If the transaction is rolled back, the effect must be the same as if the event was never published.

If you use domain events internally, for example to let a domain entity publish an event that should be acted upon by other code in the same bounded context, this is easy. Just create a thread-local list of all internal subscribers, and call the event handling method of each subscriber when an event is published.
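A minimal sketch of this thread-local approach might look as follows (the names are illustrative; the book contains a fuller implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Thread-local publisher: subscribers registered on the current thread
// receive events published on that same thread, and therefore run
// inside the same transaction as the code that raised the event.
final class DomainEventPublisher {
    private static final ThreadLocal<List<Consumer<Object>>> subscribers =
            ThreadLocal.withInitial(ArrayList::new);

    static void subscribe(Consumer<Object> handler) {
        subscribers.get().add(handler);
    }

    static void publish(Object event) {
        for (Consumer<Object> handler : subscribers.get()) {
            handler.accept(event);
        }
    }

    // Call at the start of each request/transaction to drop stale subscribers.
    static void reset() {
        subscribers.get().clear();
    }
}
```

Because the subscriber list is per-thread, handlers run synchronously as part of the current unit of work: if the transaction rolls back, any changes the handlers made roll back with it.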

Normally, you also want to publish domain events externally, to other systems. A message queue is the perfect way to reach remote subscribers. Here we have to be careful not to send the message if the transaction in which the event was created is rolled back. This can be accomplished either by using distributed transactions, or, more simply, by using an event store.

An event store uses the same data store as the domain model, so it is automatically included in the same transaction that generates the domain event. A background thread is then used to regularly send any new events to the message queue.
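The forwarding step can be sketched like this. Both the store and the queue are in-memory stand-ins here; in a real system the store is a database table written in the same transaction as the domain change, and the queue is a message broker:

```java
import java.util.Queue;

// A row in the event store: an increasing id plus the serialized event.
final class StoredEvent {
    final long id;     // monotonically increasing store id
    final String body; // serialized event, e.g. JSON
    StoredEvent(long id, String body) { this.id = id; this.body = body; }
}

// Sketch of the background forwarder: periodically sends every stored
// event newer than the last one forwarded, then remembers where it stopped.
final class EventForwarder {
    private long lastForwardedId = 0;

    void forwardNewEvents(Iterable<StoredEvent> store, Queue<String> messageQueue) {
        for (StoredEvent event : store) {
            if (event.id > lastForwardedId) {
                messageQueue.add(event.body);
                lastForwardedId = event.id;
            }
        }
    }
}
```

Tracking the last forwarded id makes the job safe to run repeatedly: a second run over the same store sends nothing new, so events are not duplicated by the forwarder itself.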

The event store has the added benefit of being a complete history of all events produced by a system. This can provide a description of the state of the system, which can be used as an alternative to storing the current state of objects, a technique called event sourcing. Even if you do not use event sourcing, having a history of events is often useful when debugging problems in a live system.

When you publish events from a service, you can normally use dependency injection to access the publisher. When you publish events from entities or value objects, you will probably have to provide a way for the producer to statically access the event publisher.

An example implementation based on the ideas from the book Implementing Domain-Driven Design can be found on GitHub.


  • Practically any system that has a state that can change can benefit from domain events.
  • Publishing domain events requires some infrastructure, such as support code and a message queue. This is a one-time investment.
  • The publishing of domain events can be incrementally added to an existing system.