
A Simple Git Branching Strategy

In a new project, you always need to choose a branching and release-management strategy for your version control system. Some of the things I look for in a branching strategy:

  • It should be as simple as possible.
  • It should maximize the benefits of continuous integration.
  • It should make it easy to create a release.

For Git, a strategy that has been used in many projects is GitFlow. This post will look at some aspects of GitFlow and propose a simpler branching strategy.

It is important to remember that GitFlow was initially described in 2010, when manual releases were common, and is based on the idea of merging changes that should go into a release into the master branch as preparation for a production release. This means that GitFlow is not well suited for continuous delivery. In my opinion, most projects should strive for being able to do continuous delivery, even if the system is actually released in long cycles.

Develop Branch

In GitFlow, all development is done on a develop branch, and the work is merged into the master branch as a part of the release process. The idea is that the master branch should always contain code in a production-ready state.

What is the benefit of always keeping the master branch ready for production? You should never deploy from the head of a branch anyway; you should always deploy from a tag. This means you could do development on the master branch instead and tag it when it is ready for production.

Conclusion: Do not use a develop branch; do development on the master branch.
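For example, the release job can tag the commit on master that passed all tests, and the deployment job then checks out the tag (the version number below is hypothetical):

```shell
# Tag the commit on master that passed the build, and publish the tag
git tag -a 1.0.42 -m "Release 1.0.42"
git push origin 1.0.42

# The deployment job checks out the tag, never the head of a branch
git checkout 1.0.42
```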

Feature Branches

Feature branches are used to let developers work on a feature without being disturbed by the work of others. But when we use continuous integration, isolating the work that different developers do from each other is exactly what we want to avoid! All work that is being done on a branch that is not continuously integrated brings us a step closer to a miniature “integration hell”.

The alternative is to do all work on the master branch. This requires a clean code base with high cohesion and low coupling, as well as constant communication between the developers, so that developers rarely have to work on the same bit of code, and know when they do.

If a feature is large, it can either be delivered incrementally or hidden from users until it is ready. If it is necessary to make a large-scale change that affects a large portion of the code, you can use the Branch by Abstraction pattern as an alternative to creating a Git branch.
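As a sketch of the Branch by Abstraction pattern (all names below are hypothetical, not from any real project): an abstraction is introduced over the old implementation, all callers are moved to the abstraction, and the new implementation is then built up on the master branch behind it, unused until it is switched in.

```java
// Step 1: introduce an abstraction seam over the code being replaced.
interface Exporter {
    String export();
}

// Step 2: the old code lives behind the seam; all callers use Exporter.
class LegacyCsvExporter implements Exporter {
    @Override
    public String export() {
        return "id,name";
    }
}

// Step 3: the new implementation grows on master, unused until complete.
class JsonExporter implements Exporter {
    @Override
    public String export() {
        return "[{\"id\":1,\"name\":\"IT\"}]";
    }
}

class ReportService {
    private final Exporter exporter;

    ReportService(Exporter exporter) {
        this.exporter = exporter;
    }

    String report() {
        return exporter.export();
    }
}

public class BranchByAbstractionSketch {
    public static void main(String[] args) {
        // Step 4: switching implementations is a one-line change (or a
        // runtime toggle); the legacy class can then be deleted.
        ReportService service = new ReportService(new JsonExporter());
        System.out.println(service.report());
    }
}
```

Every step above is a small commit that keeps the build green, so the replacement can proceed on master without a long-lived branch.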

It is often useful to keep track of the changes that have been made for a specific feature. Instead of using feature branches, this can be achieved by adding the ID of the feature to the commit message. If you are using JIRA, for example, a Git integration plugin makes it very easy to see all commits that belong to a certain issue.

Conclusion: Do not use feature branches; do development on the master branch, using small, incremental commits. Every commit message should contain the ID of the feature, bug, improvement or similar being worked on.
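With a JIRA-style issue ID in each commit message (PROJ-123 below is hypothetical), the history of a feature can be recovered directly from Git, even without an integration plugin:

```shell
# Include the issue ID in every commit message
git commit -m "PROJ-123: Validate department names on creation"

# Later, list all commits that belong to the issue
git log --oneline --grep="PROJ-123"
```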

Release and Hotfix Branches

In GitFlow, a release branch is created before each release, and any release preparation is done on the release branch, including updating version numbers to match the release. The release branch is only kept until the release is ready, when it is removed.

A hotfix branch is created if it is necessary to make a change in a system that is in production. It is created from the tag of the released system and used for making the fix, after which the hotfix branch is removed.

We do need a release branch to prepare our release, and we may also need a branch for fixes to the release after it has been taken into production. It is not necessary to create separate branches for the two purposes, however. Instead, we can create a single release branch where we do the release preparation, and let the branch live indefinitely in case we need to make fixes to that release.

Conclusion: Create a release branch before each release and let the branch live indefinitely. If it is necessary to make changes to the release, do them on the release branch and make sure the changes are merged into the master branch.
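The flow might look like this (branch name, version and issue ID are hypothetical):

```shell
# At release time: create the release branch from master and keep it around
git checkout -b release-1.2 master

# Later, a fix is needed in production: commit it on the release branch...
git checkout release-1.2
git commit -a -m "PROJ-321: Fix rounding error in salary report"

# ...and make sure the fix also reaches master
git checkout master
git merge release-1.2
```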

Automating the Release Procedure

As an example of how release-branch creation can be automated, here is how a continuous integration server can be configured to support continuous delivery, where each commit is a potential release.

First some build parameters:

major.version.number=1.0
version.number=%major.version.number%.%build.counter%
release.branch=release-%version.number%

Now the build steps:

# Create a release branch
git checkout -b %release.branch% master
# Update version numbers
mvn versions:set -DnewVersion=%version.number%
# Build and run tests
mvn -P checkstyle,findbugs,integration-test -U clean install \
        enforcer:enforce
# Commit and tag release
git commit -a -m "New release candidate %version.number%"
git push origin %release.branch%
git tag %version.number%
git push origin %version.number%
# Remove local branch
git checkout master
git branch -D %release.branch%

Conclusion

  • Avoid branching as much as possible. Do the development work on the master branch to get the greatest possible benefit from continuous integration.
  • Use small incremental commits, constant communication and a clean code base to avoid problems with developers working on the same piece of code.
  • For major changes, use incremental delivery, feature hiding, or Branch by Abstraction.
  • If there is a problem that needs to be fixed in a system that is in production, first investigate whether it is possible to make the fix only in the master branch and release a new version into production. If not, make the fix in the release branch corresponding to the version in production and merge it into the master branch.

RealLifeDeveloper Artifacts Now Available From Central Maven Repository

The Java artifacts described so far, rld-parent and rld-build-tools, are now available from the central Maven repository.

This means that you do not have to build them manually to install them in the local repository; just add a dependency in your POM:

<dependency>
    <groupId>com.reallifedeveloper</groupId>
    <artifactId>rld-build-tools</artifactId>
    <version>1.4</version>
</dependency>

Testing Spring Data JPA and In-Memory Repositories Using DbUnit

In a previous post, we saw how to create Spring Data JPA repositories, and how to create in-memory versions of these repositories, useful for testing other code that uses repositories.

Obviously, the repositories themselves need to be tested. DbUnit can be used to populate a database with test data so that we know the state of the database when we run the tests. DbUnit reads XML files with the contents of database tables and inserts the data in the database before each test.

We also need to test the in-memory versions of the repositories, and we will see that we can use the same test cases, and the same DbUnit files, to test the in-memory repositories.

Getting Started

There is a sample project, rld-repositories-sample, that shows how the code here fits together, including Spring configuration. See the previous post for instructions.

Example Code

In the previous post, we created repositories for working with departments and employees. Now, let’s create DbUnit files containing test data:

department.xml

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dataset SYSTEM "rld-repositories-sample.dtd">
<dataset>

    <DEPARTMENT ID="1" NAME="IT" />
    <DEPARTMENT ID="2" NAME="Sales" />

</dataset>

employee.xml

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dataset SYSTEM "rld-repositories-sample.dtd">
<dataset>

    <EMPLOYEE ID="1" FIRST_NAME="Jack" LAST_NAME="Bauer" SALARY="100000" DEPARTMENT_ID="1" />
    <EMPLOYEE ID="2" FIRST_NAME="Chloe" LAST_NAME="O'Brian" SALARY="80000" DEPARTMENT_ID="2" />
    <EMPLOYEE ID="3" FIRST_NAME="Kim" LAST_NAME="Bauer" SALARY="120000" DEPARTMENT_ID="1" />
    <EMPLOYEE ID="4" FIRST_NAME="David" LAST_NAME="Palmer" SALARY="180000" DEPARTMENT_ID="2" />
    <EMPLOYEE ID="5" FIRST_NAME="Michelle" LAST_NAME="Dessler" SALARY="90000" DEPARTMENT_ID="1" />

</dataset>

The DbUnit XML files use a DTD that can be automatically created using the DbUnitDtdGenerator class from rld-build-tools:

rld-repositories-sample.dtd

<!ELEMENT dataset (
    DEPARTMENT*,
    EMPLOYEE*)>

<!ELEMENT DEPARTMENT EMPTY>
<!ATTLIST DEPARTMENT
    ID CDATA #REQUIRED
    NAME CDATA #REQUIRED
>

<!ELEMENT EMPLOYEE EMPTY>
<!ATTLIST EMPLOYEE
    ID CDATA #REQUIRED
    FIRST_NAME CDATA #REQUIRED
    LAST_NAME CDATA #REQUIRED
    SALARY CDATA #REQUIRED
    DEPARTMENT_ID CDATA #REQUIRED
>

We can now create integration tests that connect to a real database and use DbUnit to add the data from the XML files to make sure the database is in a known state before each test. This is simplified by using the base class AbstractDbTest from rld-build-tools. In the constructor, you provide the document type (DTD) and the XML files you want to read. You must also provide a data source by overriding the getDataSource() method.

Testing the JpaDepartmentRepository is straightforward:

JpaDepartmentRepositoryIT.java

package com.reallifedeveloper.sample.infrastructure.persistence;

// imports...

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:META-INF/spring-context-rld-repositories-sample-test.xml" })
public class JpaDepartmentRepositoryIT extends AbstractDbTest {

    @Autowired
    private JpaDepartmentRepository repository;

    @Autowired
    private DataSource ds;

    public JpaDepartmentRepositoryIT() {
        super(null, "/dbunit/rld-repositories-sample.dtd", "/dbunit/department.xml");
    }

    @Test
    public void findByExistingName() {
        Department department = repository().findByName("IT");
        assertThat(department, notNullValue());
        assertThat(department.name(), is("IT"));
        assertThat(department.id(), is(1L));
    }

    // Other test methods...

    protected JpaDepartmentRepository repository() {
        return repository;
    }

    @Override
    protected DataSource getDataSource() {
        return ds;
    }
}

Note the repository() method that is used to access the repository being tested. This will prove useful later, when we create a sub-class for testing the in-memory version of the repository.

Testing the JpaEmployeeRepository is similar, but in this case we use test data for both departments and employees:

JpaEmployeeRepositoryIT.java

package com.reallifedeveloper.sample.infrastructure.persistence;

// imports...

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:META-INF/spring-context-rld-repositories-sample-test.xml" })
public class JpaEmployeeRepositoryIT extends AbstractDbTest {

    private static final BigDecimal DELTA = new BigDecimal("0.0000001");

    @Autowired
    private JpaEmployeeRepository repository;

    @Autowired
    private JpaDepartmentRepository departmentRepository;

    @Autowired
    private DataSource ds;

    private Department departmentSales;

    public JpaEmployeeRepositoryIT() {
        super(null, "/dbunit/rld-repositories-sample.dtd", "/dbunit/department.xml", "/dbunit/employee.xml");
    }

    @Before
    public void init() {
        departmentSales = departmentRepository().findByName("Sales");
        if (departmentSales == null) {
            fail("Error in test data: department 'Sales' not found");
        }
    }

    @Test
    public void findByExistingId() {
        Employee employee = repository().findById(1L);
        assertThat(employee, notNullValue());
        assertThat(employee.firstName(), is("Jack"));
        assertThat(employee.lastName(), is("Bauer"));
        assertThat(employee.salary(), closeTo(new BigDecimal(100000), DELTA));
        assertThat(employee.department().name(), is("IT"));
    }

    // Other test methods...

    protected JpaEmployeeRepository repository() {
        return repository;
    }

    protected JpaDepartmentRepository departmentRepository() {
        return departmentRepository;
    }

    @Override
    protected DataSource getDataSource() {
        return ds;
    }
}

We previously created in-memory versions of the repositories, to use when testing other code that uses the repositories. We obviously need to test these implementations as well, so that we can trust them when they are used in other tests.

It is easy to create a sub-class of the integration tests and plug in in-memory implementations of the repositories; that is what the protected repository() method is for. The question is how to populate the repositories with the test data from the DbUnit XML files.

The DbUnitFlatXmlReader class from rld-build-tools can be used to read DbUnit XML files and populate Spring Data JPA repositories, including our in-memory implementations. With this, it is easy to create tests for our in-memory repositories:

InMemoryDepartmentRepositoryTest.java

package com.reallifedeveloper.sample.test;

// imports...

@RunWith(JUnit4.class)
public class InMemoryDepartmentRepositoryTest extends JpaDepartmentRepositoryIT {

    private InMemoryDepartmentRepository repository = new InMemoryDepartmentRepository();

    @Override
    public void setUpDatabase() throws Exception {
        DbUnitFlatXmlReader xmlReader = new DbUnitFlatXmlReader();
        xmlReader.read("/dbunit/department.xml", repository, Department.class, Long.class);
    }

    @Override
    public void tearDownDatabase() throws Exception {
        // Do nothing
    }

    @Override
    protected JpaDepartmentRepository repository() {
        return repository;
    }

}

InMemoryEmployeeRepositoryTest.java

package com.reallifedeveloper.sample.test;

// imports...

@RunWith(JUnit4.class)
public class InMemoryEmployeeRepositoryTest extends JpaEmployeeRepositoryIT {

    private InMemoryEmployeeRepository repository = new InMemoryEmployeeRepository();
    private InMemoryDepartmentRepository departmentRepository = new InMemoryDepartmentRepository();

    @Override
    public void setUpDatabase() throws Exception {
        DbUnitFlatXmlReader xmlReader = new DbUnitFlatXmlReader();
        xmlReader.read("/dbunit/department.xml", departmentRepository, Department.class, Long.class);
        xmlReader.read("/dbunit/employee.xml", repository, Employee.class, Long.class);
    }

    @Override
    public void tearDownDatabase() throws Exception {
        // Do nothing
    }

    @Override
    protected JpaEmployeeRepository repository() {
        return repository;
    }

    @Override
    protected JpaDepartmentRepository departmentRepository() {
        return departmentRepository;
    }
}

Notes

  • It is difficult to measure code coverage for the Spring Data JPA repositories since the actual classes are created dynamically. We can use the tests for the in-memory repositories as a substitute: run a code coverage tool on the InMemoryEmployeeRepositoryTest, for example, to see that the tests cover all of InMemoryEmployeeRepository. If they do, you know that the test JpaEmployeeRepositoryIT covers all the methods of EmployeeRepository.
  • Using DbUnitFlatXmlReader to populate repositories can be useful to insert reference data into in-memory repositories that are used when testing other code.

Conclusion

Using AbstractDbTest as a base class for your database tests is an easy way to use DbUnit. The document type for the XML files can be generated using DbUnitDtdGenerator.

With DbUnitFlatXmlReader, it is possible to use DbUnit XML files to populate Spring Data JPA repositories, including our in-memory implementations.

Creating In-Memory Versions of Spring Data JPA Repositories for Testing

This post shows how to easily create in-memory versions of your repositories. These in-memory repositories can then be injected into, for example, a service that you want to test.

I am not a big fan of mocking frameworks. The reason is that I believe it is far too easy to oversimplify the interactions between objects when defining the return values of the mock objects. This means that you miss bugs that depend on complex interactions occurring only for some combinations of input.

Instead of mocking the dependencies of an object, I prefer to use real code as far as possible, using fake implementations of objects that are inconvenient or too slow to use during a quick test cycle. One example of this is using in-memory repositories instead of repositories that connect to a real database. Note that this is not the same as using a normal repository with an in-memory database—the in-memory database takes much longer to start than the in-memory repository.

Spring Data JPA is great for easily creating repository implementations without having to write any boilerplate code. Your normal repository interfaces, which often live in the domain layer, define the methods needed by the business logic. You then define another interface, in the infrastructure layer, that extends the repository interface and org.springframework.data.jpa.repository.JpaRepository. The actual implementation of the interface is created dynamically by Spring, using a combination of naming conventions and annotations in the JPA interface.

Getting Started

A sample project, rld-repositories-sample, accompanying this post can be found on GitHub.

Use the following commands to download and build the sample project:

$ mkdir reallifedeveloper
$ cd reallifedeveloper
$ git clone https://github.com/reallifedeveloper/rld-repositories-sample.git
$ cd rld-repositories-sample
$ mvn -DcheckAll clean install # Should end with BUILD SUCCESS

The command-line option -DcheckAll activates Maven profiles for running code quality checks using Checkstyle, FindBugs and JaCoCo.

If you want to look at the source code for rld-build-tools, that is also available on GitHub.

Example Code

Assume that we are working with entities for departments and employees:

Department.java

package com.reallifedeveloper.sample.domain;

import java.util.HashSet;
import java.util.Set;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import javax.persistence.Table;

@Entity
@Table(name = "department")
public class Department {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Column(name = "name", unique = true, nullable = false)
    private String name;

    @OneToMany(mappedBy = "department")
    private Set<Employee> employees = new HashSet<>();

    public Department(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    // Required by JPA.
    Department() {
    }

    public Long id() {
        return id;
    }

    public String name() {
        return name;
    }

    public Set<Employee> employees() {
        return employees;
    }

    public void addEmployee(Employee employee) {
        employees.add(employee);
    }

}

Employee.java

package com.reallifedeveloper.sample.domain;

import java.math.BigDecimal;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.Table;

@Entity
@Table(name = "employee")
public class Employee {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    @Column(name = "first_name", nullable = false)
    private String firstName;

    @Column(name = "last_name", nullable = false)
    private String lastName;

    @Column(name = "salary", nullable = false)
    private BigDecimal salary;

    @ManyToOne
    @JoinColumn(name = "department_id", nullable = false)
    private Department department;

    public Employee(Long id, String firstName, String lastName, BigDecimal salary, Department department) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
        this.salary = salary;
        this.department = department;
    }

    // Required by JPA.
    Employee() {
    }

    public Long id() {
        return id;
    }

    public String firstName() {
        return firstName;
    }

    public String lastName() {
        return lastName;
    }

    public BigDecimal salary() {
        return salary;
    }

    public Department department() {
        return department;
    }

    @Override
    public String toString() {
        return "Employee{id=" + id + ", firstName=" + firstName + ", lastName=" + lastName + ", salary=" + salary
                + ", department=" + department.name() + "}";
    }

}

We create repository interfaces for working with the entities:

DepartmentRepository.java

package com.reallifedeveloper.sample.domain;

import java.util.List;

public interface DepartmentRepository {

    Department findByName(String name);

    List<Department> findAll();

    <T extends Department> T save(T department);

}

EmployeeRepository.java

package com.reallifedeveloper.sample.domain;

import java.math.BigDecimal;
import java.util.List;

public interface EmployeeRepository {

    Employee findById(Long id);

    List<Employee> findByLastName(String lastName);

    List<Employee> findEmployeesWithSalaryAtLeast(BigDecimal salary);

    <T extends Employee> T save(T employee);

}

The repository interfaces above define only the operations that are required by the business logic. For example, if you don’t need to delete employees in the system you are building, don’t add a delete method. This is the reason that the interfaces do not extend JpaRepository directly—that would mean that all methods from that interface would be available to code using our repositories.

Instead of having our repositories extend JpaRepository directly, we create subinterfaces that extend our repository interfaces as well as JpaRepository, and that contain all annotations specific to Spring Data JPA.

JpaDepartmentRepository.java

package com.reallifedeveloper.sample.infrastructure.persistence;

import org.springframework.data.jpa.repository.JpaRepository;

import com.reallifedeveloper.sample.domain.Department;
import com.reallifedeveloper.sample.domain.DepartmentRepository;

public interface JpaDepartmentRepository extends DepartmentRepository, JpaRepository<Department, Long> {

}

JpaEmployeeRepository.java

package com.reallifedeveloper.sample.infrastructure.persistence;

import java.math.BigDecimal;
import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

import com.reallifedeveloper.sample.domain.Employee;
import com.reallifedeveloper.sample.domain.EmployeeRepository;

public interface JpaEmployeeRepository extends EmployeeRepository, JpaRepository<Employee, Long> {

    @Override
    @Query("select emp from Employee emp where emp.salary >= :salary")
    List<Employee> findEmployeesWithSalaryAtLeast(@Param("salary") BigDecimal salary);

}

With proper configuration, Spring will automatically create classes that implement your repository interfaces to connect to a database. But what if you want to create in-memory implementations of the interfaces? That process is simplified by the base class InMemoryJpaRepository from rld-build-tools that implements the JpaRepository interface and also provides helper methods for finding entities based on a field’s value.

InMemoryDepartmentRepository.java

package com.reallifedeveloper.sample.test;

import com.reallifedeveloper.sample.domain.Department;
import com.reallifedeveloper.sample.infrastructure.persistence.JpaDepartmentRepository;
import com.reallifedeveloper.tools.test.database.inmemory.InMemoryJpaRepository;
import com.reallifedeveloper.tools.test.database.inmemory.LongPrimaryKeyGenerator;

public class InMemoryDepartmentRepository extends InMemoryJpaRepository<Department, Long>
        implements JpaDepartmentRepository {

    public InMemoryDepartmentRepository() {
        super(new LongPrimaryKeyGenerator());
    }

    @Override
    public Department findByName(String name) {
        return findByUniqueField("name", name);
    }

}

InMemoryEmployeeRepository.java

package com.reallifedeveloper.sample.test;

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

import com.reallifedeveloper.sample.domain.Employee;
import com.reallifedeveloper.sample.infrastructure.persistence.JpaEmployeeRepository;
import com.reallifedeveloper.tools.test.database.inmemory.InMemoryJpaRepository;
import com.reallifedeveloper.tools.test.database.inmemory.LongPrimaryKeyGenerator;

public class InMemoryEmployeeRepository extends InMemoryJpaRepository<Employee, Long>
        implements JpaEmployeeRepository {

    public InMemoryEmployeeRepository() {
        super(new LongPrimaryKeyGenerator());
    }

    @Override
    public Employee findById(Long id) {
        return findByUniqueField("id", id);
    }

    @Override
    public List<Employee> findByLastName(String lastName) {
        return findByField("lastName", lastName);
    }

    @Override
    public List<Employee> findEmployeesWithSalaryAtLeast(BigDecimal salary) {
        List<Employee> employeesWithSalaryAtLeast = new ArrayList<>();
        for (Employee employee : findAll()) {
            if (employee.salary().compareTo(salary) >= 0) {
                employeesWithSalaryAtLeast.add(employee);
            }
        }
        return employeesWithSalaryAtLeast;
    }

}

Notes

  • Methods from JpaRepository, such as findAll and save, are implemented by InMemoryJpaRepository, so you don’t have to implement them.
  • Methods that find entities based on the value of a single field, such as findById and findByLastName, are easily implemented using findByField or findByUniqueField. Use findByField if several entities can have the same value for the field in question, and findByUniqueField if there can be only one entity with a given value for the field.
  • Methods that do more complicated things, such as findEmployeesWithSalaryAtLeast, are implemented using custom code.
  • If you want to emulate the @GeneratedValue annotation for the id field of an entity, you need to provide a PrimaryKeyGenerator in the constructor of InMemoryJpaRepository. There are implementations of PrimaryKeyGenerator for working with integers and long integers, and it is easy to create other implementations.
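As a sketch of a custom generator for entities with String primary keys: note that the PrimaryKeyGenerator interface shown here is an assumption based on the description above, so check the rld-build-tools source for the exact signature before relying on it.

```java
import java.util.UUID;

// Assumed shape of the rld-build-tools interface; verify against the
// actual source, this is not the real declaration.
interface PrimaryKeyGenerator<ID> {
    ID nextPrimaryKey(ID previousMax);
}

// Hypothetical generator that emulates @GeneratedValue for String keys.
class UuidPrimaryKeyGenerator implements PrimaryKeyGenerator<String> {
    @Override
    public String nextPrimaryKey(String previousMax) {
        // UUIDs are unique, so the previous maximum can be ignored.
        return UUID.randomUUID().toString();
    }
}

public class UuidGeneratorDemo {
    public static void main(String[] args) {
        PrimaryKeyGenerator<String> generator = new UuidPrimaryKeyGenerator();
        String id1 = generator.nextPrimaryKey(null);
        String id2 = generator.nextPrimaryKey(id1);
        System.out.println(!id1.equals(id2)); // distinct keys on each call
    }
}
```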
Conclusion

The base class InMemoryJpaRepository makes it easy to create in-memory versions of Spring Data JPA repositories. The reason to use such repositories when testing is that they can be created and destroyed very quickly.

In a follow-up post, we will look at using DbUnit to test both normal repositories and in-memory repositories.

Habits Useful for Practising Software Developers

I sincerely believe that the micro decisions we make every day when developing software can have a huge impact on the quality of the systems that we build. This means that if we learn to make better decisions when faced with options, probably several times a day, the end result is a much better system.

In the book The 7 Habits of Highly Effective People, the author Stephen R. Covey defines a habit as the intersection of knowledge, skill and desire. Knowledge is what to do and why to do it. Skill is how to do something. Desire is the want to do it. Hopefully, this blog can provide the knowledge and skill necessary to adopt some new habits. The desire must obviously come from yourself.

The habits I believe to be useful for software developers are described here. Some are specific to software development while others are more general. None of the habits should come as any surprise; this does not mean they are easy to adopt on a day-to-day basis. Remember that knowledge alone is not enough to form a habit.

Write a Test First

Whenever you want to change the behavior of something, first try to write a test that describes the expected new behavior, and then make changes until the test passes.

Test-Driven Development (TDD) is an obvious example of using this habit when coding. Specification by Example is another example, where you specify functional or non-functional requirements as concrete examples using a tool such as FitNesse.
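As a minimal sketch of the habit (the Discount class and its business rule are hypothetical): the checks in main were written first, failed, and forOrderTotal was then implemented until they passed.

```java
import java.math.BigDecimal;

// Hypothetical rule, implemented only after the checks below existed:
// 10% discount on orders of 1000 or more, otherwise no discount.
class Discount {
    static BigDecimal forOrderTotal(BigDecimal total) {
        if (total.compareTo(new BigDecimal("1000")) >= 0) {
            return total.multiply(new BigDecimal("0.10"));
        }
        return BigDecimal.ZERO;
    }
}

public class DiscountTest {
    public static void main(String[] args) {
        // These checks describe the expected new behavior and were
        // written before Discount was implemented.
        check(Discount.forOrderTotal(new BigDecimal("1500")), new BigDecimal("150"));
        check(Discount.forOrderTotal(new BigDecimal("500")), BigDecimal.ZERO);
        System.out.println("All tests passed");
    }

    static void check(BigDecimal actual, BigDecimal expected) {
        if (actual.compareTo(expected) != 0) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }
}
```

In practice you would use JUnit rather than a main method, but the cycle is the same: red, green, refactor.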

Don’t Document, Automate

Documentation is great for describing why you do something, and for giving the big picture of how something works. However, if you find yourself writing a to-do list, your first reaction should be to automate the procedure instead of documenting it.

This has the benefits of being testable and repeatable. And you do get written documentation as well; the automation script is a concise description of what is being done. The difference is that the script is a correct description while written documentation may start out as being fairly correct, but usually soon deteriorates.

An obvious example of this habit is using Puppet to automate provisioning of machines.

Publish Your Results

Very few systems are islands; most need to communicate with other systems and with people. To do so, it is important that the systems provide information about what is going on. Whenever you make a substantial change to the state of a system, try to think of others who may be interested in that change.

Examples of this habit are publishing domain events when a state change occurs in a system, and making sure to use proper application logging to make it easier to find problems.

When In Doubt, Do Something

If you are not sure what to do next, for example because there are multiple possible solutions to a problem, it is important to not just sit still and wait. By doing something you learn more, and move closer to a good solution.

Examples of this habit include spike solutions to try out a new technology, and reading a book while between assignments.

Everybody, All Together, From Early On

Make a habit of trying to communicate what you are doing to everyone who might be interested in the result. During a project, constantly try to identify new stakeholders.

Examples of this habit are holding a brainstorming session at the start of the project to identify stakeholders, and inviting everyone interested to the sprint demos.