Event Sourcing: Reactivity Without the React Overhead

April 25, 2025

This is the fourth entry in a five-part series about event sourcing:

  1. Why I Finally Embraced Event Sourcing—And Why You Should Too
  2. What is event sourcing and why you should care
  3. Preventing painful coupling
  4. Reactivity Without the React Overhead (this page)
  5. Get started with event sourcing today

In this post, I’ll share some things I’ve enjoyed about event sourcing since adopting it.

I’ll start by saying that one of the ideas I love about some of today’s JavaScript frameworks is that the HTML of the page is a functional result of the data on the page. If that data changes, the JS framework will automatically change the affected HTML.

This feature enables some impressive user experiences, and it’s especially fun to see an edit in one part of the page immediately affect another.

I’m finding this helpful pattern to remember as I’ve been working with event sourcing. Any event can trigger updates to any number of database tables, caches, and UIs, and it’s fun to see that reactivity server-side.

Like React, but for the server

React is (mostly) a front-end framework. Its concern is updating the in-browser HTML after data changes.

In a way, you can say event-driven microservices are similar. One part of the system publishes an event without knowing who will listen, and other parts kick off their process with the data coming in from the event.

One of the things that has caught me by surprise about event sourcing is how I get similar benefits of an event-driven microservice architecture in a monolith, while keeping complexity low.

At one time, the project I’m working on was a microservice architecture with six different Python applications. With a vertically sliced, event-sourced architecture, we could make it one. (It’s currently two, since the architect feels better that way, but it could easily be one.)

This project processes files through several stages, and it’s so fun to watch the application work. As in an event-driven microservice architecture, a command creates an event when a document enters the system.

However, instead of going to an external PubSub queue, this event gets saved to the event store and then to an internal message bus. The code that created the event doesn’t know or care who’s listening, and any number of functions can listen in.

In this case, several functions listen for the document created event. One kicks off the first step of the processing. Another makes a new entry for the status page. A third creates an entry in a table for a slice that helps us keep track of our SLAs.
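A minimal sketch of that fan-out, using a hypothetical in-process message bus and simplified handlers (the bus, event class, and handler names here are illustrative, not the project’s actual code):

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class DocumentCreated:
    document_id: str


# A tiny in-process bus: publishers don't know who's listening.
_subscribers: dict[type, list[Callable]] = defaultdict(list)


def subscribe(event_type: type, handler: Callable) -> None:
    _subscribers[event_type].append(handler)


def publish(event) -> None:
    # In the real system, the event would be appended to the event
    # store first, then fanned out to every registered handler.
    for handler in _subscribers[type(event)]:
        handler(event)


# Three independent slices react to the same event.
processing_queue, status_rows, sla_entries = [], [], []

subscribe(DocumentCreated, lambda e: processing_queue.append(e.document_id))
subscribe(DocumentCreated, lambda e: status_rows.append((e.document_id, "0% done")))
subscribe(DocumentCreated, lambda e: sla_entries.append(e.document_id))

publish(DocumentCreated(document_id="1542-example-94834"))
```

One publish call, three independent reactions, and the publishing code never names its listeners.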

Once the first step finishes, another event is raised. In turn, a few other functions are run. One updates the status info, and another begins the next processing step.

If something went wrong with the first step, we’ll save a different event. In reaction to this event, we have a function that updates the status screen, another that adds info to an admin screen to help diagnose what went wrong, a third that notifies an external team that consumes the result of this workflow, and a fourth that will determine whether to retry the process.

Keeping the complexity in check

This sounds incredibly complicated, and in some ways it is. There are a lot of small moving parts. But they’re all visible either through looking at the event modeling diagram or leveraging the IDE to see where a type of event is used.

This is similar to having an event-driven microservice, but it all lives in a decoupled monolith (decoupled monolith?! who would have guessed those words would be used together?) and is easily deployable.

The most painful part of creating this app has been debugging issues that span the interaction between the two services. Adding more services would dramatically increase that complexity.

This is not to say that you shouldn’t use microservices. I love the idea of implementing slices in different languages to better meet specific slices' needs. But having most of the code in one code base and deploy target is nice.

I’m also thrilled that complexity doesn’t grow as the project ages. Because of the decoupled nature of the vertical slices, adding new functionality will not make the code much more complicated. Each slice is isolated, and there are only a few patterns to master.

When it’s time to start working on a new piece of functionality, I’ll create its folder and examine where my data comes from. Do I need to subscribe to events or pull from a read model? Then, I check to see what events my slice needs to publish. Once those are in place, it’s all about implementing the business logic.

Rinse and repeat.

But part of an excellent service is a great user experience, and I love how this reactivity is not just limited to the back end.

Bringing it to the browser

I value a great user experience, so early in the project, I looked for places where a live-updating view would greatly benefit the user.

The first one I did was the status view I discussed in previous posts. When a document enters our system, it appears in the table like this:

Document ID          Status     Last Updated     Duration
1542-example-94834   0% done    5 seconds ago    5 seconds

When the first step has finished, the UI looks like this:

Document ID          Status     Last Updated     Duration
1542-example-94834   25% done   0 seconds ago    10 seconds

The way I implemented this is to have a function that subscribes to the events that would change the UI and updates a database table. Something like this:

StatusEvent = typing.Union[
    DocumentCreated,
    Step1Finished,
    Step1Failed,
    ...
]

def on_status_updates(event: StatusEvent):
    if isinstance(event, DocumentCreated):
        ...
    elif isinstance(event, Step1Finished):
        db.document(event.document_id).update({
            'percent_done': 25,
            'last_updated': event.stored_at,
        })
        ...

This project uses Google’s Firestore as its primary database, which has a feature that lets you subscribe to database changes. (Believe it or not, I’m not using the internal bus to update the UI. That’ll wait until the next project.)

When a user loads this page, we use HTMX to open a server-sent events connection to code that subscribes to changes in the status database. Something like this (simplified for understandability; I’m working on making this aspect clearer for a future blog post):

def on_database_update(changes):
    now = datetime.now(tz=UTC)
    template = templates.template('document_status_row.jinja')
    return HTTPStream(
        template.render_async(
            document=changes,
            last_updated=now,
        )
    )

With that, any time an entry in the database changes, an updated table row gets sent to the browser as HTML, and HTMX either updates an existing row or inserts a new one into the table. (This isn’t unique to HTMX. Frameworks like data-star, unpoly, and fixi can do the same.) All this without setting up a JavaScript build pipeline or WebSockets infrastructure.

Reactivity, user experience, and history too

One final aspect of event sourcing I’ve enjoyed through this project is the ability to decide what to do based on an item’s history.

I mentioned above that an external team wants to be notified about specific conditions.

When I was tasked to implement this, the person handing it to me seemed a little sorry, as they suspected it had complexity hiding below the surface.

After talking with the external team, I learned they wanted up to two notifications for every document: one if the document completed every step, and one if the document failed any step twice.

I handled the first case similarly to this:

def document_just_completed_all_steps(event_names: list[str]) -> bool:
    return (
        event_names.count('Step4Finished') == 1
        and event_names[-1] == 'Step4Finished'
    )

def should_notify(event: DomainEvent, container: svcs.Container) -> bool:
    event_store = container.get(EventStore)
    event_names = [
        stored_event.name
        for stored_event in event_store.get_event_stream(event.entity_id)
    ]
    if document_just_completed_all_steps(event_names):
        return True
    return did_document_fail_retry_for_the_first_time(event_names)

Thankfully, with event sourcing and the excellent svcs framework (thanks, Hynek!), I have access to every event that happened to that document.

I used that list of events to ensure that there was only one instance of the final event, and that it was the last event in the sequence.
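The failure case can lean on the same list of event names. Here’s a sketch of what `did_document_fail_retry_for_the_first_time` might look like, assuming failure events follow a `StepNFailed` naming convention — the `FAILURE_EVENTS` set and the helper’s body are my guesses, not the post’s actual code:

```python
# Hypothetical set of failure event names; the real project's names may differ.
FAILURE_EVENTS = {'Step1Failed', 'Step2Failed', 'Step3Failed', 'Step4Failed'}


def did_document_fail_retry_for_the_first_time(event_names: list[str]) -> bool:
    # Notify exactly once: when the latest event is a failure and it is
    # the second time that same step has failed. Later failures of the
    # same step (count > 2) don't trigger another notification.
    if not event_names or event_names[-1] not in FAILURE_EVENTS:
        return False
    return event_names.count(event_names[-1]) == 2
```

Because the predicate sees the whole history, "has this step failed twice?" is a counting question rather than extra state to maintain.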

Next up

If this sounds like magic, it’s not. It’s just good design and a new way of thinking about change. In the next post, I’ll show you exactly how to dip your toe into event sourcing.

© 2025 Everyday Superpowers
