The image illustrates that valuable insights can be gained from recorded events through skilful processing and analysis, a metaphor for the value of event sourcing and automated testing in software development.

This article is based on a presentation I gave for the first time today at the PHP UK Conference in London.

In a previous article, I described how my understanding of software evolved from procedural programming to object-oriented programming to domain-driven design (DDD) and event sourcing. The realisation that not all objects are the same was important in this process. However, the step from "We only store the current state" to "We record all events" was revolutionary.

Event Storming creates a common understanding. DDD, CQRS, and event sourcing are powerful patterns. Together, they reveal an elegant truth: if we align our tests with the language of our domain and adapt our test framework to our architecture, testing becomes a bridge rather than a burden.

From requirements to code to documentation, the circle is thus complete: Our tests verify that the correct events are emitted when our commands are processed, and they generate visual documentation in Event Storming notation.

Testing in an event-based world

When I started working with event sourcing, I was faced with a fundamental question: How do we test software that does not overwrite its state changes but stores them as immutable events?

In traditional CRUD systems, tests above the unit level usually follow this pattern: we set the database to a defined state, perform an operation, and then check the resulting state. In event-driven systems, however, we need to rethink our approach. The focus is no longer on verifying a changed state, but on ensuring that the correct events have been emitted.

This shift in perspective is profound. Instead of asking "What is the current state of the database?", we ask "Which events were recorded?". In other words, we no longer test the "what is", but rather the "what happened".
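This shift can be sketched in a few lines of PHP. The Account and MoneyDeposited classes below are hypothetical and exist only to illustrate what we assert on in each mindset:

```php
<?php
declare(strict_types=1);

// Hypothetical example: the shift from asserting on state to asserting
// on recorded events; all class and method names are illustrative only

final class MoneyDeposited
{
    public function __construct(public readonly int $amount)
    {
    }
}

final class Account
{
    private int $balance = 0;

    /** @var list<object> */
    private array $recordedEvents = [];

    public function deposit(int $amount): void
    {
        $event = new MoneyDeposited($amount);

        // The state change is a consequence of the recorded event
        $this->balance += $event->amount;

        $this->recordedEvents[] = $event;
    }

    public function balance(): int
    {
        return $this->balance;
    }

    /** @return list<object> */
    public function recordedEvents(): array
    {
        return $this->recordedEvents;
    }
}

$account = new Account;
$account->deposit(100);

// CRUD mindset: "What is the current state?"
assert($account->balance() === 100);

// Event sourcing mindset: "Which events were recorded?"
assert($account->recordedEvents() == [new MoneyDeposited(100)]);
```

Both assertions pass here, but only the second one survives the move to an event-sourced system, where the current state is merely a projection of the recorded events.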

Domain, Code, Tests

Let us recall Event Storming, a methodology from collaborative modelling. Instead of drawing UML diagrams alone in a quiet room, Event Storming brings all participants together in one room. With the help of countless orange sticky notes, we jointly model the timeline of the domain based on events.

These orange stickies, known as domain events, are not only valuable communication aids. They also serve as a blueprint for our tests. Each domain event identified in the workshop later becomes an event in our code. With the help of our tests, we verify that the right events are emitted under the right circumstances.

The different coloured sticky notes used in event storming have their counterparts in the test code:

  • Events are used both as test fixture ("What happened?") and for assertions ("What should happen?")
  • Commands are what we exercise in our tests ("What should be executed?")
  • Read Models are tested by generating projections based on events in the test fixture
  • Hotspots represent problems or questions and can become tests for edge cases
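The read model case from the list above can be sketched as follows. The event and read model classes are hypothetical: a projection is fed the events from the test fixture, and we assert on the projected result:

```php
<?php
declare(strict_types=1);

// Hypothetical sketch: testing a read model by replaying fixture events
// through its projection; all names here are illustrative

final class MoneyDeposited
{
    public function __construct(public readonly int $amount)
    {
    }
}

final class MoneyWithdrawn
{
    public function __construct(public readonly int $amount)
    {
    }
}

final class BalanceReadModel
{
    private int $balance = 0;

    public function apply(object $event): void
    {
        $this->balance += match ($event::class) {
            MoneyDeposited::class => $event->amount,
            MoneyWithdrawn::class => -$event->amount,
            default               => 0,
        };
    }

    public function balance(): int
    {
        return $this->balance;
    }
}

// Given: the events in the test fixture ...
$events = [
    new MoneyDeposited(200),
    new MoneyWithdrawn(50),
];

// ... are projected into the read model ...
$readModel = new BalanceReadModel;

foreach ($events as $event) {
    $readModel->apply($event);
}

// ... and we assert on the projected result
assert($readModel->balance() === 150);
```

Note that no command is executed here: testing a read model only requires events and the projection, which keeps such tests fast and free of side effects.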

Given, When, Then

With the help of the "Given, When, Then" pattern, we can write our tests in a precise and understandable way:

  1. Given – Past: Events that have already occurred
  2. When – Present: Command that we execute
  3. Then – Expectation: Events that we expect or exceptions that should be raised

This example shows how testing event-based systems becomes simple and effective when we can use domain-specific language:

final class WithdrawMoneyCommandProcessorTest extends EventTestCase
{
    #[TestDox('Emits a MoneyWithdrawnEvent when money is withdrawn')]
    public function testEmitsMoneyWithdrawnEvent(): void
    {
        $amount      = Money::from(123, Currency::from('EUR'));
        $description = 'the-description';

        $this->given(
            $this->accountOpened('the-owner'),
        );

        $this->when(
            $this->withdrawMoney(
                $amount,
                $description,
            ),
        );

        $this->then(
            $this->moneyWithdrawn($amount, $description),
        );
    }
}

Running the test shown above yields the following output in TestDox format:

WithdrawMoneyCommandProcessor
 ✔ Emits a MoneyWithdrawnEvent when money is withdrawn

The project-specific base class EventTestCase provides the following methods, among others:

  • given() configures the events that define the initial situation for our test
  • accountOpened() is a helper method that creates an "Account Opened" event
  • when() triggers the command execution we want to test
  • withdrawMoney() is a helper method that creates a "Withdraw Money" command
  • then() configures the events we expect as a result of the command execution
  • moneyWithdrawn() is a helper method that creates a "Money Withdrawn" event

EventTestCase manages the event store behind the scenes and abstracts the delegation to the command processor responsible for a command.
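A minimal sketch of this plumbing might look as follows. This is my assumption of how such a base class could work, not the actual implementation from the project; apart from the method names given(), when(), and then(), all names are hypothetical, and a real base class would extend PHPUnit\Framework\TestCase and look up the command processor responsible for each command:

```php
<?php
declare(strict_types=1);

// Hypothetical sketch of the Given/When/Then plumbing behind a base class
// such as EventTestCase; not the project's actual implementation

final class AccountOpened
{
}

final class WithdrawMoney
{
    public function __construct(public readonly int $amount)
    {
    }
}

final class MoneyWithdrawn
{
    public function __construct(public readonly int $amount)
    {
    }
}

final class WithdrawMoneyCommandProcessor
{
    /**
     * @param list<object> $history
     *
     * @return list<object>
     */
    public function process(array $history, WithdrawMoney $command): array
    {
        // A real processor would rebuild the aggregate state from $history
        // and guard invariants (for instance, sufficient balance) here
        return [new MoneyWithdrawn($command->amount)];
    }
}

final class EventTestCaseSketch
{
    /** @var list<object> */
    private array $history = [];

    /** @var list<object> */
    private array $emitted = [];

    public function given(object ...$events): void
    {
        // Seed the in-memory event store with the initial situation
        $this->history = array_values($events);
    }

    public function when(object $command): void
    {
        // Delegate to the responsible command processor
        // (hard-wired here to keep the sketch short)
        $this->emitted = (new WithdrawMoneyCommandProcessor)
            ->process($this->history, $command);
    }

    public function then(object ...$expected): bool
    {
        // A real base class would use an assertion such as assertEquals()
        return $this->emitted == array_values($expected);
    }
}

$test = new EventTestCaseSketch;

$test->given(new AccountOpened);
$test->when(new WithdrawMoney(50));

assert($test->then(new MoneyWithdrawn(50)));
```

The key design point is that the test author never touches the event store or the command bus directly: given(), when(), and then() are the complete vocabulary, which is exactly what keeps the tests readable in domain language.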

Beyond pure verification, EventTestCase uses the information collected in the three phases (Given, When, Then) and passes structured data about the executed test to a PHPUnit extension using TestCase::provideAdditionalInformation(). This extension generates visual documentation in Event Storming notation from the data:

Visual documentation generated for a single test

When we aggregate this structured information across all tests, we can visualise all events, command processors, and read models in one overview:

Visual documentation for events, event processors, command processors, and read models

You can find the complete code example on GitHub. The code sections relevant to this article are highlighted in the material for my presentation.

From burden to bridge

When I started working with event sourcing, I initially found it to be an additional layer of complexity when testing. We have to think "in events", understand command handlers, test projections, and much more. But over time, I realised that event sourcing makes testing easier, more elegant, and more meaningful.

When I look back on my beginnings today, from "Hello World" in AmigaBASIC to event-sourced systems with automatically generated documentation in Event Storming notation by running the tests, I see a clear evolution. This applies not only to the way we design systems and structure code, but also to our way of thinking about testing.

Tests are no longer just verification. They are specification, documentation, and communication tools all in one. They close the circle from the ideas in the Event Storming workshop to the continuous development of the software.

And that is precisely the elegant truth that DDD, CQRS, and Event Sourcing reveal together: When we align our tests with the domain language and adapt our test framework to our architecture, testing becomes a bridge rather than a burden. A bridge that holds our team together, brings our documentation to life, and makes our software more understandable.