
Microsoft .NET Architecting Applications for the Enterprise, Second Edition (2014)

Part III: Supporting architectures

Chapter 12. Introducing event sourcing

Simplicity is prerequisite for reliability.

—Edsger Dijkstra

The driving force that led to Domain-Driven Design (DDD) was the need to fill the gap between the different views that software architects and domain experts often had about business domains. DDD is a breakthrough compared to relational modeling because it promotes the use of domain modeling over data modeling. Relational modeling focuses on data entities and their relationships. Domain modeling, instead, focuses on behavior observable in the domain.

As a concrete implementation of domain modeling, DDD originally pushed the idea of a whole, all-encompassing object model capable of supporting all aspects and processes of a given domain. But models, especially complex models for complex pieces of reality, face a few challenges. This has led people to investigate alternative approaches, such as CQRS. At the moment you separate commands from queries, though, you are led to think a lot more in terms of tasks. And tasks are tightly related to domain and application events, as you saw when we talked about the Domain Model pattern and CQRS.

Event sourcing (ES) goes beyond the use of events as a tool to model business logic. ES takes events to the next level by persisting them. In an ES scenario, your data source consists just of persisted events. You likely don’t have a classic relational data store; all you store are events, and you store them sequentially as they occur in the domain. As you can guess, persisting events instead of a domain model has a significant impact on the way you organize the back end of a system. Event persistence pairs well with a strategy of separating the command and query stacks, and with the idea of using distinct databases to save the state of the application and to expose it to the presentation layer.

In the end, ES is not a full-fledged, standalone architecture; rather, it’s a feature used to further characterize architectures such as Domain Model and CQRS. In fact, you can have a Domain Model and CQRS application full of domain and integration events that still uses plain relational databases as the data source. When you add ES to a system, you just change the structure and implementation of the data source.

The breakthrough of events

In the real world out there, we perform actions and see these actions generate a reaction. Sometimes the reaction is immediate; sometimes it is not. Sometimes the reaction becomes another action which, in turn, generates more reactions, and so forth. That’s how things go in the real world. That’s not, however, how we designed software for decades. We did it, instead, the other way around.

The next big thing (reloaded)

Overall, we think that events are the next big thing in software development and probably the biggest paradigm shift since object orientation, some 20 years ago. And, all in all, we can probably even drop the “next” in the previous sentence and simply say events are a big thing in software development, period. Altogether, bounded contexts as well as domain and integration events indicate a path architects can follow to deal more effectively with requirements, both in terms of understanding and implementation.

Treating observable business events as persistent data adds a new perspective to development. Without a doubt, events play a key role in today’s software development. But are events and event sourcing really a new idea? In fact, the modeling problems they help to address have confronted developers for decades.

If we look back, we find that approaches similar to what we today call ES have been used for more than 30 years in banking and other sectors of the software industry, even in languages like COBOL. So events and event sourcing are a big thing for developers today, but calling them a new big thing might be overstating the case. On the other hand, as discussed already in Chapter 3, “Principles of software design,” very few things are really new in software.

The real world has events, not just models

A model is an abstraction of reality that architects create from the results of interviews with domain experts, stakeholders, and end users. A model that faithfully renders a given domain might be quite complex to write; a less faithful model can be simpler to write but may fail to work well, especially in systems with a long life expectancy that need a bit of updating every now and then.

While facing this problem, many architects just decide not to have a domain model. In doing so, they lose a lot in flexibility and maintainability. The real world just can’t be restricted to models only—or to events only. The challenge is finding the right combination of models and events for a particular domain.

Modeling is a continuous activity, and the waterfall-like methodologies used in the past to handle projects apply now in quite a small number of scenarios. Models might need to be updated frequently as requirements change or are better understood. Whether projects start small or big, they will likely end up being much bigger than expected.

In a way, a model has the flavor of an architectural constraint and might force intense refactoring of the pillars of the system when changes occur. The work of the architect is just making choices that minimize the costs of future refactoring. The fact is, however, that for years DDD was seen, on one hand, as the correct and ideal approach and, on the other hand, as hard to apply successfully.

If you take the approach that every single application’s action occurs within a well-orchestrated model, you need to know everything about the model and the domain behind it. Otherwise, the model might not be the right approach. The advent of CQRS is, in this regard, the definitive evidence that a model in a bounded context is much more manageable if it is kept simple and small. Small models, though, might describe only a subset of the domain.

In the end, as the name suggests, the model is just a model; it’s not what we observe directly in the real world. In the real world, we observe events, and we sometimes build models on top of them.

Moving away from “last-known good state”

When we build a model out of observed events, a model is all we want to persist. A model is typically a cluster of objects. For both the simplicity and effectiveness of modeling and implementation, we identify some root objects in the model (called aggregate roots in Domain Model) and manage the storage and materialization of objects through them.

This is what we did for years in both relational data models and, more recently, in domain models.

Event sourcing pushes a form of lateral thinking about software architecture. In short, ES doesn’t lead you to focus primarily on a model that mimics the entire domain and captures its state to persistent storage. Instead, ES focuses primarily on the sequence of events you observe, and events are all that you want to capture for persistent storage.

Last-known good state

Think for a moment about the Order entity you encountered in past chapters, both in the context of Domain Model and CQRS architectures. The Order entity has an associated attribute that indicates its state: pending, shipped, canceled, or archived. When you read a given order from storage, all you have is the current state of the order; or, better yet, all you have is the last-known (because persisted) good state.

What if, instead, you need to track down the entire history of the order? What if you need to show when the order was created, when it started being processed, and when it was canceled or shipped?

In Chapter 8, “Introducing the domain model,” we introduced domain events. In doing so, we said that any occurrence of the adverb “when” in the requirements is likely the occurrence of a relevant event in the domain. In the previous sentence, the word “when” occurs quite a few times. The creation, processing, and cancelation of an order are all events that change the last-known good state of an order upon occurrence.

In a “last-known good state” approach, though, each new event overwrites the state and doesn’t track the previous state. This is hardly an acceptable compromise in most realistic business scenarios. Often, you need to keep track of the order history, and this need leads to adding a mechanism that maintains a history of the changes to orders and does not just maintain the current state of orders. Figure 12-1 shows a possible model.


FIGURE 12-1 Tracking the history of an order.
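To make the idea concrete, here is a minimal sketch of the model in Figure 12-1 as a list of state-change records. The example is in Python for brevity; the class and field names are illustrative and not taken from the book’s sample code.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class OrderStateChanged:
    # One record per change; previous states are never overwritten.
    order_id: int
    new_state: str      # e.g., "created", "approved", "shipped"
    when: datetime

history = [
    OrderStateChanged(123, "created",  datetime(2014, 1, 10)),
    OrderStateChanged(123, "approved", datetime(2014, 1, 11)),
    OrderStateChanged(123, "shipped",  datetime(2014, 1, 15)),
]

# The current state is simply the most recent change...
current_state = history[-1].new_state
# ...while the full history stays available for auditing.
```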


Note

The term New State in Figure 12-1 is specific to the example based on orders and refers to changes in the state of the order: approved, canceled, and so on. It doesn’t mean that the new state of the Order aggregate is being persisted whenever a new event occurs. Events refer to changes. In a banking scenario, for example, a transfer event will have the amount of the deposit but not the current balance of the account.


The model shown in Figure 12-1, however, just scratches the surface of the real problem we’re facing. It works great if your only problem is tracking the change of an order’s state. Generally speaking, in the history of an entity there might be events that also alter the structure of the entity. An example is when an item is added to an existing order before the order is processed or even before the order is shipped. Another example is when an order ships partially and missing items are either removed from the order because they are not available or they ship later. Yet another, probably more significant, example is when the entity just evolves over time according to the rules of the business. What if, all of a sudden, you find out that, say, the entity Invoice has a new PayableOrder attribute that gets a default value on existing instances but is mandatory on any new invoice?

On a relational model, solving the issue involves manipulating production databases and updating the logic across the entire codebase wherever the entity is used. With a nonrelational design, the impact of the changes is much less obtrusive on both data stores and code.

The point is that the “last-known good state” approach is good, but it’s not necessarily the ideal approach to represent effectively the item’s life as it unfolds in a business domain.

Tracking just what’s happened

In the real world, we only observe events, but for some reason we feel the need to build a model to capture the information in events and store it. Building models out of events is not a bad thing per se. Models are immensely helpful. However, entity-based models serve the purpose of queries exceptionally well, but they do not serve the purpose of commands quite as well.

As discussed in past chapters, separating commands from queries is vital for modern software. If nothing else, it’s vital because it makes building the software version of a business domain far easier, far more reliable and, especially, effective.

Event sourcing adds a new level of refinement to the CQRS architecture. According to event sourcing, events are the primary source of data in a system. When an event occurs, its associated collection of data is saved. In this way, the system tracks what happens, as it happens, and the information it brings.

Because the world has events and not models, this is by far a more natural way of modeling most business domains.


Note

When someone mentions complex software systems, many think of Healthcare.gov. Dino, in particular, is currently involved in the preliminary analysis of a system for the same domain space that never really took off because of the difficulty of designing an effective comprehensive model. Events are giving new lifeblood to an otherwise dead project.


The deep impact of events in software architecture

The “last-known good state” approach has been mainstream for decades. The “what’s happened” approach, which treats domain events as the core of the architecture, is relatively new. As mentioned, in some business sectors approaches similar to ES are even older than relational databases. ES is just the formalization in patterns of those old practices.

Having events play such a central role in software architecture poses some new challenges, and those attempting to implement that approach might even face some initial resistance to change. The following sections describe some reasons why events have a deep impact on software architecture.

You don’t miss a thing

The primary benefit of events is that any domain events that emerge out of analysis can be added to the system and saved to the store at nearly any time. By designing an event-based architecture, you give yourself the power to easily track nearly everything that takes place in the system.

Events are not rigidly schematized to a format or layout. An event is just a collection of properties to be persisted in some way, and not necessarily in a relational database.
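As a simple illustration of this point, here are two events with entirely different layouts persisted side by side as JSON. The snippet is a Python sketch; the event names and properties are hypothetical, not taken from the book’s sample code.

```python
import json

# Two event types with entirely different property sets; no shared schema required.
order_created = {
    "type": "OrderCreated",
    "order_id": 123,
    "customer": "Xyz",
    "items": [{"sku": "A-1", "qty": 2}],
    "when": "2014-01-10T09:30:00",
}
order_shipped = {
    "type": "OrderShipped",
    "order_id": 123,
    "when": "2014-01-15T14:00:00",
}

# JSON serialization is one easy way to persist heterogeneous events
# without forcing them into a rigid relational schema.
log_lines = [json.dumps(e) for e in (order_created, order_shipped)]
```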


Note

What’s the ideal granularity of the events you record? First and foremost, events exist because there’s a domain analysis to suggest them and a domain expert to approve them. As an architect, you can fine-tune the granularity of the events you record, for example by introducing coarse-grained events that group multiple events: say, a global Validate event instead of one event for each step of the business-validation process. Grouping events is a controversial practice that tends to be discouraged; the emerging analytical approach of event storming states the opposite: just deal with real domain events at the natural granularity they have in the domain.


Virtually infinite extensibility of the business scenario

Events tell you the full story of the business in a given domain. Having the story persisted as a sequence of events is beneficial because you are not restricted to a limited number of events. You can easily add new events at nearly any time—for example, when your knowledge of the domain improves and when new requirements and business needs show up.

Using a model to persist the story of the business limits you to whatever can be stored and represented within the boundaries of the model. Using events removes such limits at the root, and supporting new and different business scenarios becomes not just possible but also relatively inexpensive.

Dealing with event persistence requires new architectural elements such as the event store, where events are recorded.

Enabling what-if scenarios

By storing and processing events, you can build the current state of the story any time you need it. This concept is known as event replay. Event replay is a powerful concept because it enables what-if scenarios that turn out to be very helpful in some business domains (for example, in financial and scientific applications). The business and domain experts, for instance, might be interested to know what the state of the business was as of a particular date, such as December 31, 2011.

With event-based storage, you can easily replay events and change some of the run-time conditions to see different possible outcomes. The nice thing is that what-if scenarios don’t require a completely different architecture for the application. You design the system in accordance with the event-sourcing approach and get what-if scenarios enabled for free.

The possibility of using what-if scenarios is one of the main business reasons for using ES.

No mandated technologies

Nothing in event sourcing is explicitly bound to any technologies or products, whether they are Object/Relational Mapper (O/RM) tools, relational (or even nonrelational) database management systems, service buses, libraries for dependency injection, or other similar items.

Event sourcing raises the need for some software tools—primarily, an event store. However, having an event store means that you have essentially a log of what has happened. For query purposes, you probably need some ready-made data built from the log of events. This means that you probably also need a publish/subscribe mechanism to connect the command and query stacks, a relational store to make queries run faster, and probably some Inversion of Control (IoC) infrastructure for easier extensibility.

For any of these tools, you can pick the technology or product that suits you: no restrictions and no strings attached. Sure, there are recommendations, and some tools might work better than others in a given scenario. For example, a NoSQL document store is certainly well positioned to serve as an event store. Likewise, a service bus component might be well suited to enhancing scalability by keeping commands and queries separated but synchronized without paying the costs of synchronous operations.

Events and event sourcing are about architecture; technologies are identified by family. The details are up to you and can be determined case by case.

Are there also any drawbacks?

Is it “all good” with event sourcing? Are there issues and drawbacks hidden somewhere? Let’s split the question into two parts:

■ Are there drawbacks hidden somewhere in the event-sourcing approach to software architecture?

■ Does it work for all applications?

The event-sourcing architecture is general enough to accommodate nearly any type of software application, including CRUD (Create, Read, Update, Delete) applications. It doesn’t have dependencies on any technologies, and it can be successfully employed in any bounded context you wish and on top of any technology stack you prefer. In this regard, all the pluses listed earlier in the chapter stand. As for painful points, resistance to change is really the only serious drawback of event sourcing. Resistance to change is also fed by the fact that most popular IDEs and frameworks are currently designed from the ground up to scaffold entities, which suggests that scaffolding entities and thinking in terms of entities and their relationships is the ideal way of working.

Event sourcing pushes a paradigm shift that turns around a number of certainties that developers and architects have built over time. To many, it seems that using event sourcing means not saving any data at all. There might not be any classic relational data store around, but that doesn’t mean you’re not saving the state of the application. Quite the reverse: you’re saving every single piece of data that emerges from the business domain. Yet, there might be no classic database to demonstrate in meetings with management and customers.

In our opinion, event sourcing is an important resource for architects. Each architect, though, might need to have his own epiphany with it and use it only when he feels okay and really sees event sourcing as the ideal solution to the problem. Our best advice, in fact, is this: don’t use it if you don’t see clear and outstanding benefits. Whether you fail to see benefits because no real benefits exist or because you’re not seasoned enough to spot them is truly a secondary point.

While event sourcing is a powerful architecture that can be used to lay out nearly any system, it might not be ideal in all scenarios. Events are important in business scenarios where you deal with entities that have a long lifespan. Think, for example, of a booking system for resources like meeting rooms or tennis courts. It might be relevant to show the full history of the booking: when it was entered, who entered it, and when it was modified, postponed, or canceled. Events are also relevant in an accounting application that deals with invoices and job orders. It might be far easier to organize the business logic around events like an invoice issued as the first tranche payment of a given job order, an inbound invoice that needs to be registered, or a note of credit issued for a given invoice.

Events are probably not that useful in the sample online store scenario we discussed in past chapters. Is it really relevant to track events like searched orders, the creation of orders, and Gold-status customers? These are definitely domain events you want to identify and implement, but their role in the domain is not as central as it might be in the previous examples. You might not want to set up a completely different, and kind of revolutionary, architecture just because you have a few domain events.

Our quick rule for event sourcing might be summarized as follows: if you can find one or more domain experts who need the sequence of events you can produce, then event sourcing is an option to explore further. If events are only useful to concatenate pieces of logic and business rules, then those events are not first-class citizens in the domain and don’t need to be persisted as events. Consequently, you might not need to implement an event-sourcing architecture.

Event-sourcing architecture

Let’s now focus on what you do when you decide to use events as the primary data source of your layered system. There are a couple of fundamental aspects to look into: persisting events and laying the groundwork for queries.

Persisting events

Events should be persisted to form an audit log of what happened in the system. In the rest of the chapter, you’ll first see issues related to persisting events to a physical store. Then you’ll tackle reading events back to rebuild the state of the aggregates, or whatever business entities you use to serve both the needs of the query side of the system and commands.

An event store is a plain database, except that it is not used to persist a data model. Instead, it persists a list of event objects. As mentioned, an event is a plain collection of properties that could easily fit in a single relational table if it weren’t for the fact that an event store is expected to store several types of events. And each event will likely be a different collection of properties.

An event store has three main characteristics:

■ It contains event objects made of whatever information is useful to rebuild the state of the object the event refers to.

■ It must be able to return the stream of events associated with a given key.

■ It works as an append-only data store and doesn’t have to support updates and deletions.

Event objects must refer in some way to a business object. If you’re using the Domain Model pattern to organize the business logic of the command stack of the architecture, the event store must be able to return the stream of events associated with an aggregate instance. If you’re not using the Domain Model pattern, you have some relevant business objects in the model, anyway. It could be, for example, some OrderTableModule object if you’re using the Table Module pattern. In this case, the event data must contain some key value that uniquely identifies the relevant business object. For example, events that refer to orders should contain something like the ID of the specific order the event refers to.
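The three characteristics of an event store can be sketched with a toy in-memory container. The following Python snippet illustrates the contract only; it is not a production event store, and all names are ours:

```python
from collections import defaultdict

class InMemoryEventStore:
    """Append-only container of events, grouped by business-object key."""

    def __init__(self):
        self._streams = defaultdict(list)

    def append(self, aggregate_id, event):
        # Append-only: no update or delete operations are exposed.
        self._streams[aggregate_id].append(event)

    def stream_for(self, aggregate_id):
        # Returns the events for a given key, in the order they occurred.
        return list(self._streams[aggregate_id])

store = InMemoryEventStore()
store.append(123, {"type": "OrderCreated", "customer": "Xyz"})
store.append(123, {"type": "OrderShipped"})
store.append(456, {"type": "OrderCreated", "customer": "Abc"})

# The stream for order #123 comes back in insertion order.
types = [e["type"] for e in store.stream_for(123)]
```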

Events and the overall flow of business logic

Before we go any further with the details of event-sourcing architecture and the event store, we want to make an important point as clear as possible.

Event sourcing is essentially about capturing all changes to an application state as a sequence of events. Events, however, are atomic pieces of information. Imagine the following stream of recorded events:

■ Order #123 is created by customer Xyz. The event record contains all details about the order.

■ Customer Xyz changed the shipping address for order #123. The event record contains the new address.

■ Order #123 is processed.

■ Customer Xyz changed the default shipping address. The event record contains the new shipping address.

■ Order #123 is shipped.

Each event is a record in the event store, but each event reports about a specific fact that has happened. The current state of the application results from the combined effect of all events. Every time a command is placed, the current state of the system (or just the current state of the involved business entities) must be rebuilt from the stream of events.

Options for an event store

At the end of the day, an event store is a database. What kind of database, however? Generally speaking, there’s no restriction whatsoever: an event store can be a relational DBMS, any flavor of NoSQL database (document, key-value), or even an XML file. That said, there are probably only a couple of really viable options:

■ Relational DBMS An event store can be as easy to arrange as creating a single relational table where each row refers to an event object. The problem here is the different layout of event objects. There’s no guarantee that all the event classes you need to store can be massaged to fit a single schema. In situations where this can be done with acceptable performance, by all means we recommend you go with a classic relational DBMS. Unfortunately, we don’t expect this situation to be that common. In addition, consider that there’s no guarantee that the structure of an event won’t change over time. If that happens, you must proceed with an expensive restructuring of the storage. A possible way to neutralize schema differences is to use a sort of key-value layout within a relational table, where the key column identifies the aggregate and the value column holds a JSON-serialized collection of properties.

■ Document database When it comes to persistence, a document is defined as an object that has a variable number of properties. Some NoSQL products specialize in storing documents. You create a class, fill it with values, and just store it as is. The type of the class is the key information used to relate multiple objects to one another. This behavior fits perfectly with expectations set for an event store. If you use a document database such as RavenDB (which you can check out at http://ravendb.net), you have no need to massage event data into something else. You have an event object and just save it.

A side effect of most document databases is eventual consistency. Eventual consistency is when reads and writes are not aligned to the same version of the data. Most NoSQL systems are eventually consistent in the sense that they guarantee that if no updates are made to a given object for a sufficiently long period of time, a query actually returns what the last command has written. Otherwise, an old version of data is returned. Eventual consistency exists for performance reasons and is due to the combined effect of synchronous writes and asynchronous updates of indexes.

A third option you might want to consider is column stores—the most notable example of which is Microsoft SQL Server 2014. A column store is a regular relational table where data is stored using a columnar data format. Instead of having the table composed of rows sharing the same schema, in a column store you have columns treated like entities independent from rows. Columns are related to one another by some primary-key value. In SQL Server 2014, columnar storage is enabled by creating an ad hoc index on a regular table. In SQL Server 2014, the column store index is updatable.
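The key-value layout mentioned for the relational option can be sketched as follows, using SQLite and JSON serialization purely for illustration; the table and column names are ours, not a prescribed schema:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        id           INTEGER PRIMARY KEY AUTOINCREMENT,  -- preserves insertion order
        aggregate_id TEXT NOT NULL,                      -- key: which order, account, ...
        payload      TEXT NOT NULL                       -- value: JSON-serialized event
    )""")

def append_event(aggregate_id, event):
    conn.execute("INSERT INTO events (aggregate_id, payload) VALUES (?, ?)",
                 (aggregate_id, json.dumps(event)))

append_event("order/123", {"type": "OrderCreated", "customer": "Xyz"})
append_event("order/123", {"type": "OrderShipped"})

# Reading a stream back is a simple query filtered on the key column.
rows = conn.execute("SELECT payload FROM events WHERE aggregate_id = ? ORDER BY id",
                    ("order/123",)).fetchall()
stream = [json.loads(r[0]) for r in rows]
```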


Note

Event sourcing is a relatively young approach to architecture. Tools for helping with code don’t exist yet, meaning that the risk of reinventing the wheel is high and no ad hoc specialized tools are available. As far as the event store is concerned, a first attempt to create a highly specialized event store is the NEventStore project: http://neventstore.org.



Tales from the trenches

An event store is expected to grow to be a very large container of data, which poses new problems regarding performance. An event store is updated frequently and receives frequent reads for large chunks of data.

It’s really hard to say which of the three options just presented is ideal in the largest number of business scenarios. If we look only at technical aspects, we’d probably go for document databases. In this regard, RavenDB—a full .NET document database with a few interesting setup models—is worth a further look. Or, at least, RavenDB has worked just fine so far in all scenarios in which Andrea used it.

Dino, instead, has a different story to tell. A customer of his actually decided to go with a key-value relational table after considering both a SQL Server 2014 column store and RavenDB. After a week of performance measurement, the team didn’t find any significant throughput improvement in the tested scenario from either the column or the document store. Still, the customer had debated for a long time whether to go with a document database (RavenDB) or a plain SQL Server table. In the end, they opted for a plain SQL Server table because of the ecosystem of tools and documentation that exists for SQL Server and the existing skills of the IT department, and in spite of the additional costs of SQL Server licenses, which were a fraction of the overall cost of the product.


Replaying events

The main trait of event sourcing is the persistence of messages, which enables you to keep track of all the changes in the state of the application. By reading back the log of messages, you can rebuild the state of the system. This aspect is also known as the replay of events.

Building the state of business entities

Let’s consider the Checkout example again. A given user completes a purchase and, at the end of the saga, you might have records in the event store similar to those listed in Table 12-1.


TABLE 12-1. Event log for the Checkout saga

When the OrderCreated event is persisted, it saves all the details of the order—ID, date, shipping date, shipping address, customer name, ordered items, and invoice number. Later on, when the goods ship to the customer, the bounded context taking care of delivery does something that triggers an OrderShipped event persisted as shown here:

[Image: the OrderShipped event record as persisted in the event store]

The system contains a detailed log of what’s happened, but what about the following two points?

■ How could the saga accepting the shipment of the order verify that the operation was consistent and that, say, the order being marked as shipped was really in the in-delivery state?

■ How can an admin user know about the current state of a given order?

In a classic scenario where O/RM tools are used to persist a domain model, you probably have an Orders table with a row for each created order. Reading the state of the order is as easy as making a query. How does that work in an event-sourcing scenario?

To know about the state of an order (or a business entity in general), you need to go through the list of recorded events, retrieve those related to the specific order (or entity), and then in some way replay the events on an empty order to materialize an instance that faithfully represents the real state.

What it means to replay events

The replay of events to rebuild the state works in theory, but it is not always practical. The canonical example is a bank account.

How many years ago did you open your bank account? Probably, it was quite a few years ago. This means that some AccountCreated event exists in an event store, but it is a few years old. Since then, some hundreds of events might have occurred on the same account to alter its state. In the end, to get the current balance, it could be necessary to replay several hundred operations to rebuild the current state of the account.

Replaying operations poses two problems. One is performance concerns; the other is the actual mechanics of the replay. The event store contains events—namely, memories of facts that happened; it doesn’t contain the commands that generated those events. Persisting commands is a different matter; sometimes you do that; sometimes you don’t. The execution of a command might take several seconds to complete, and not everybody is willing to wait ages to know, say, the balance of the bank account. On the other hand, if you intend to repeat the same sequence of actions that led to a given state, then events—that is, memories of what happened—might not be sufficient. In a what-if scenario, for example, you might want to know about actual commands and change some parameters to measure the effects.

This means that the replay of events is an operation that must be coded by looking carefully at each event and the data it carries. For example, replaying the OrderCreated event requires creating a fresh instance of the Order aggregate filled with all the information associated with the stored event. Replaying the OrderShipped event, instead, requires just updating some State property on the Order class.
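The event-specific replay logic can be expressed as a dispatch on the event type. The following Python sketch is illustrative (the class names mirror the OrderCreated and OrderShipped events mentioned above, but the fields are assumptions): OrderCreated populates a fresh aggregate, while OrderShipped merely touches a state property.

```python
from dataclasses import dataclass

# Illustrative event types; the fields are assumptions, not from the book.
@dataclass
class OrderCreated:
    order_id: str
    items: tuple

@dataclass
class OrderShipped:
    order_id: str

class Order:
    """An empty aggregate whose state is materialized by replaying events."""
    def __init__(self):
        self.id, self.items, self.state = None, (), "Unknown"

    def apply(self, event):
        # Each event type carries different data and updates different parts
        # of the aggregate: creation fills everything, shipping just the state.
        if isinstance(event, OrderCreated):
            self.id, self.items, self.state = event.order_id, event.items, "Created"
        elif isinstance(event, OrderShipped):
            self.state = "Shipped"
        return self

def rehydrate(order_id, stream):
    """Filter the global stream down to one order; replay onto an empty one."""
    order = Order()
    for event in stream:
        if event.order_id == order_id:
            order.apply(event)
    return order

stream = [OrderCreated("o-1", ("book",)),
          OrderShipped("o-1"),
          OrderCreated("o-2", ("pen",))]
assert rehydrate("o-1", stream).state == "Shipped"
assert rehydrate("o-2", stream).state == "Created"
```

The `apply` method is where the "look carefully at events and the data they carry" advice becomes concrete: adding a new event type to the domain means adding a new branch (or handler method) here.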


Note

In many scenarios, however, performance is not an issue because events are replayed as part of asynchronous operations. Furthermore, often we're just talking about dozens of events, which can be processed quickly. When performance is an issue, use data snapshots.


Data snapshots

To make a long story short, projecting state from logged events might be expensive and impractical when the number of events is large. And the number of logged events can only grow. An effective workaround consists of saving a snapshot of the aggregate state (or whatever business entities you use) at some recent point in time.

This can be done in either of two ways. You can update the snapshot at the end of each operation, or you can save it periodically (say, every 100 events), but not every time. Either way, when you need to access an aggregate, you load the most recent snapshot and, if it is not already up to date, replay all subsequent events on top of it.
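The snapshot-plus-tail-replay strategy can be sketched as follows. This is a deliberately minimal Python illustration under assumed names (SnapshotStore, a dictionary-backed event store, numeric deltas standing in for real events); a real system would persist both stores durably:

```python
# Take a snapshot every N events; queries replay only the tail.
SNAPSHOT_EVERY = 100

class SnapshotStore:
    """Maps an entity id to (version, state): the state as of that event count."""
    def __init__(self):
        self.snapshots = {}

    def save(self, entity_id, version, state):
        self.snapshots[entity_id] = (version, state)

    def load(self, entity_id):
        # No snapshot yet: start from version 0 with an empty (zero) state.
        return self.snapshots.get(entity_id, (0, 0.0))

def append(event_store, snapshot_store, entity_id, delta):
    """Record an event; periodically fold the whole stream into a snapshot."""
    events = event_store.setdefault(entity_id, [])
    events.append(delta)
    if len(events) % SNAPSHOT_EVERY == 0:
        snapshot_store.save(entity_id, len(events), sum(events))

def current_state(event_store, snapshot_store, entity_id):
    version, state = snapshot_store.load(entity_id)
    # Replay only the events recorded after the snapshot was taken.
    for delta in event_store.get(entity_id, [])[version:]:
        state += delta
    return state

events, snaps = {}, SnapshotStore()
for _ in range(250):
    append(events, snaps, "acct-1", 1.0)

assert snaps.load("acct-1")[0] == 200          # last snapshot at event 200
assert current_state(events, snaps, "acct-1") == 250.0  # snapshot + 50 replayed
```

The query cost drops from replaying the whole history to replaying at most SNAPSHOT_EVERY events, at the price of the redundancy discussed next.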

Typically, a data snapshot is a relational table that contains the serialized version of an aggregate. More generally, a data snapshot is any sort of persistent cache you can think of that helps build the state quickly. There might be situations in which it can even grow very close to a classic relational database with cross-referenced tables. In this case, you end up with an event store and a relational database. There’s quite a bit of redundancy but, at a minimum, the event store keeps track of the actual sequence of events in the system.

Note, though, that you might need snapshots only for some aggregates. In this way, redundancy will be reduced to just the most critical aggregates.

Data snapshots are essentially a way to simplify development and wed events with the state of business entities. Snapshots are created by event handlers, typically (but not necessarily) within the context of a saga. With reference to our earlier Checkout saga example, the state of the newly created order can be saved to a snapshot right before the OrderCreated event is raised, or a distinct saga can simply capture events and maintain snapshots.

Summary

Events are bringing new ideas and new lifeblood to software architecture. Events promote a task-based approach to analysis and implementation. As obvious as it might sound, software that focuses on the tasks to accomplish rarely misses its goals. Yet architects neglected tasks for too long and focused on models instead.

Building a single, all-encompassing model is sometimes hard, and separation between commands and queries showed the way to go to build systems more effectively. Event sourcing is an approach essentially based on the idea that the state of the application is saved as a stream of events. As long as events are sufficiently granular, and regardless of the ubiquitous language, logging events allows you to record everything and, more importantly, start recording new events as they are discovered or introduced.

In event sourcing, the state of aggregates must be rebuilt every time by replaying events. Rebuilding state from scratch might be expensive; that's why data snapshots are a valuable resource that can't be neglected.

In this chapter, we focused on the theoretical aspects of the event-sourcing architecture but didn’t examine a working solution. That’s precisely what we are slated to do in the next chapter.

Finishing with a smile

To generate a smile or two at the end of this chapter, we selected a few quotes we remember reading somewhere:

If you find a bug in your salad, it's not a bug: it's a feature.

Computers are like men because once a woman commits to one, she finds out that if she waited just a little longer, she could have had a better model.

Computers are like women because even your smallest mistakes are stored in long-term memory for later retrieval.

In addition, here are a couple of funny quotes about computers and programming:

Programming today is a race between software engineers striving to build bigger and better idiot-proof programs and the universe trying to produce bigger and better idiots. So far, the universe is winning. (Rick Cook)

The trouble with programmers is that you can never tell what a programmer is doing until it's too late. (Seymour Cray)