
Part II: Devising the architecture

Chapter 7. The mythical business layer

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

—Martin Fowler

Since the first edition of this book, some five years ago, we have observed a noticeable change in the industry: a shift from data-centric, three-tier architecture to more model-centric, multilayer architecture. And another big wave of change is coming as event-driven architecture takes root and developers start experiencing its inherent benefits. As a result, the classic business layer, canonically placed between the presentation and data-access layers, morphs into something different depending on the overall system architecture. The one thing that does not change, though, is that in one way or another you still need to implement the core, business-specific logic of the system.

In Chapter 5, “Discovering the domain architecture,” we introduced DDD; after this chapter, we’ll embark on a thorough discussion of a few very popular supporting architectures for layered systems, such as Domain Model, Command/Query Responsibility Segregation (CQRS), and event sourcing.

We intend to dedicate this chapter to the facts that justify, and actually drive, the transition from a classic data-centric, three-tier world to a model-centric but still layered world. In doing so, we’ll touch on a few other, more basic patterns that can always be used to organize the business logic of a bounded context, as well as common issues you’ll face.

Patterns for organizing the business logic

There’s virtually no logic to implement in a simple archiving system that barely has any visual forms on top of a database. Conversely, there’s quite complex logic to deal with in a financial application or, more generally, in any application that is modeled after some real-world business process.

Where do you start designing the business logic of a real-world system?

In the past edition of the book, we dedicated a large chapter to the business logic and covered several patterns for organizing the logic of a system. Five years later, we find that some of those patterns have fallen out of mainstream use and might be seen as obsolete. Obsolete just means something like “no longer used”; it doesn’t refer to something that “no longer works.”

For sure, patterns like Table Module and Active Record are not what you should look for now, five years later. Today, the common practice is to use objects to model business entities and processes, but domain modeling might be difficult to do and expensive to implement, and these costs are not always justified by the real complexity of the problem. So, in the end there’s still room for something simpler than domain models.


Note

A moment ago, we hinted at Active Record becoming an obsolete, no-longer-used pattern. To be honest, the Active Record pattern is widely used in the Ruby on Rails community, where it is part of the foundation of data access. So the true sense of our previous statement is fully expressed by adding “in the .NET space.”


The fairytale of CRUD and an architecture Prince Charming

If you’re a developer eager to learn new things and compare your best practices with other people’s best practices, you might be searching the Internet for tutorials on architecture practices in your technical idiom. So assume you’re a .NET developer. You’re going to find a lot of resources that claim to teach architecture but use a plain CRUD (Create, Read, Update, Delete) sample application. Well, that’s surprising at the very least and misleading at worst. And, look, we didn’t say it’s wrong, just misleading. Except that in architecture, misleading is maybe even worse than wrong.

Forget entry-level tutorials that show how clean a CRUD system might look once you use objects instead of data rows. If it’s a music store that you need, that’s fine; but then you don’t need to waste more time learning about sophisticated things like event sourcing, CQRS, and Domain-Driven Design (DDD). Just write your ADO.NET code or let Entity Framework infer a data model from the database. If you want more thrills, turn on Entity Framework Code First and immerse yourself in the waters of unnecessary-but-terribly-hot complexity. If it’s really a CRUD, it has nearly no architecture.


Note

OK, just kidding. On a more serious note, our intention here is not to look like the only custodians of absolute truth. The point is you should use the right tool for the job without dragging in pet-technology dilemmas. (See http://c2.com/cgi/wiki?ArchitectureAsRequirements for more on the pet-technology anti-pattern.)


Paraphrasing a popular quote from a friend of ours, “Just because it’s a CRUD, you don’t have the right to write rubbish.” Our friend used to say that about JavaScript, but the same principle applies to CRUD as well. Separating the application layer and data layer in a CRUD is all you need to do. If you do so, the result is clean, readable, and possibly extensible. But this is just a matter of using the right tool for the job and using it right.

Through the lens of a CRUD application, you only see a section of the complexity of a real-world system. You might be led to think that all you need is a clean data layer that is well isolated from the rest of the universe. You might be led to think that, all in all, even the application layer is unnecessary and a controller (that is, the presentation layer) can coordinate any activity easily enough.

Again, through the lens of a CRUD, that’s probably OK, but it shouldn’t be generalized.

Complexity is an ugly beast—easy to recognize but nearly impossible to define abstractly. Our goal here is to present general patterns of architecture and software design. We assume a high level of complexity in order to introduce the most layers and functions. Not every application out there has the same level of complexity and, consequently, not every application requires the same complex architecture. We expect that you learn the internal mechanics of most architecture patterns—essentially why things are done in a certain way—and then apply Occam’s Razor to them when facing a real problem. If you can take out some of the pieces (for example, layers) without compromising your ability to control the solution, by all means do so.

That is what makes you a good architect and also a good candidate to play the role of the Prince Charming of architecture.

The Transaction Script pattern

When it comes to organizing the business logic, there’s just one key decision you have to make: whether to go with an object-oriented design or with a procedural approach. In any case, you are choosing a paradigm to design the business logic, so it’s primarily about architecture, not technology.

The Transaction Script (TS) pattern is probably the simplest possible pattern for business logic, and it is entirely procedural. TS owes its name and classification to Martin Fowler. For further reference, have a look at page 110 of his book Patterns of Enterprise Application Architecture, Addison-Wesley, 2002. We’ll refer to this book in the future as [P of EAA].

Generalities of the pattern

TS encourages you to skip any object-oriented design and map your business components directly onto required user actions. You focus on the operations the user can accomplish through the presentation layer and write a method for each request. The method is referred to as a transaction script. The word transaction here generically indicates a business transaction you want to carry out. The word script indicates that you logically associate a sequence of system-carried actions (namely, a script) with each user action.

TS has been used for years, and it’s not becoming obsolete any time soon for just one reason: it pushes a task-based vision of the business logic, and that vision is key to improving the user experience, as you saw in past chapters.

In TS, each required user action is implemented proceeding from start to finish within the boundaries of a physical transaction. Data access is usually encapsulated in a set of components distinct from those implementing the actual script and likely grouped into a data-access layer. By design, TS doesn’t have any flavor of object-oriented design. Any logic you model through a TS is expressed using language syntax elements such as IF, WHILE, and FOR. Figure 7-1 summarizes the spirit of the Transaction Script pattern.


FIGURE 7-1 A bird’s-eye view of the Transaction Script pattern.


Note

A system where the business logic is implemented using TS has a rather compact, layered architecture in which TS coincides with the application layer and connects directly to the data-access layer. The final schema, then, is the same as the classic three-tier architecture, proving once more that the four-layer architecture we introduced in Chapter 5, “Discovering the domain architecture,” is just a more modern transformation of the classic three-tier scheme.


When Transaction Script is an option

TS is suited for simple scenarios where the business logic is straightforward and, better yet, not likely to change and evolve. More generally, TS is suitable for all scenarios where—for whatever reason, whether it’s limited complexity or limited skills—you are not going to use any domain modeling.


Important

We’ll provide evidence and make the concept stand out even more in just a few moments, but we want to say right away that domain modeling is not the same as scripting object models quickly inferred from a database. That still might be called a domain model, but it is a rather anemic domain model.


For example, TS seems to be the perfect choice for a web portal that relies on an existing back office. In this case, you end up with some pages with interactive elements that trigger a server action. Each server action is resolved by performing some data validation, perhaps making some trivial calculations, and then forwarding the data to the existing back-office system.

Likewise, TS is an option for a plain CRUD system. Interestingly, it is also a valid option in situations where a lack of skills and guidance makes domain modeling a risky and dangerous route even for complex domains. In this case, by splitting the system into two parts—command and query—you find it easy to implement the command part via TS and maybe spend more time and effort arranging an effective model for the sole query section of the system.


Note

Simplicity and complexity are concepts that are not easy to measure. More importantly, the perception of what’s simple and what’s complex depends on a person’s attitude and skills. If you’ve been doing object-oriented design for years and know your stuff well, it might be easier and faster for you to arrange a simple domain model than to switch back to procedural coding. So TS doesn’t have to be the choice every time you feel the complexity is below a given universally recognized threshold.

More simply, TS is an option you should seriously consider once you feel you know all about the system, and when it doesn’t look overly complex in terms of requirements and business rules. Objects make code elegant, but elegance is a positive attribute only if code works and is done right. There’s nothing to be ashamed of in using TS—especially in using TS when it is appropriate.


The pattern in action

When it comes to implementation, you can see each transaction script as a standalone, possibly static, method on a class, or you can implement each transaction script in its own class. In doing so, you end up with a nice implementation of the Command pattern.

One of the most popular behavioral design patterns, the Command pattern uses objects to represent actions. The command object encapsulates an action and all of its parameters. Typically, the command object exposes a standard interface so that the caller can invoke any command without intimate knowledge of the command class that will actually do the job. Here’s a quick example:

public interface IApplicationCommand
{
    int Run();
}

public class BookHotelRoom : IApplicationCommand
{
    private Customer _guest;
    private DateTime _checkIn, _checkOut;
    private String _confirmationNumber;
    // Other internal members

    public BookHotelRoom(Customer guest, DateTime checkIn, DateTime checkOut)
    {
        _guest = guest;
        _checkIn = checkIn;
        _checkOut = checkOut;
    }

    public String ConfirmationNumber
    {
        get { return _confirmationNumber; }
    }

    public int Run()
    {
        // Start the transaction
        // Check room availability for requested stay
        // Check customer information (already guest, payment method, preferences)
        // Calculate room rate
        // Add a new record to the Bookings database table
        // Generate the confirmation number
        // Commit the transaction
        // E-mail the customer
        // Store the confirmation number to the local member _confirmationNumber
        return 0;   // Return an application-specific result code
    }

    ...
}

The TS pattern doesn’t mandate any data types or formats for data exchange. Providing data to scripts and retrieving data from scripts is up to you. So feel free to opt for the approach that best suits your preferences and design needs. A common practice is using plain data-transfer objects (DTOs).
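For example, the input to the booking script shown earlier could travel as a plain DTO. Here’s a minimal sketch; the RoomBookingRequest class is our own illustration, not something the pattern mandates:

public class RoomBookingRequest
{
    // No behavior, just the data the script needs to run.
    public Customer Guest { get; set; }
    public DateTime CheckIn { get; set; }
    public DateTime CheckOut { get; set; }
}

// The caller fills the DTO and hands its content over to the script.
var request = new RoomBookingRequest
{
    Guest = guest,   // guest obtained elsewhere, e.g., from the session
    CheckIn = DateTime.Today,
    CheckOut = DateTime.Today.AddDays(3)
};
var booking = new BookHotelRoom(request.Guest, request.CheckIn, request.CheckOut);
booking.Run();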

The Domain Model pattern

Often used in DDD solutions but not strictly bound to DDD, Domain Model is simply a general design pattern. The Domain Model (DM) pattern suggests that architects focus on the expected behavior of the system and on the data flows that make it work. You focus on the real system, ideally with some good help from domain experts, and just try to reproduce it in terms of classes.

Generalities of the pattern

A domain model is not the same as an object model made of a collection of related classes. A domain model is a model that faithfully represents a business domain and, especially, the processes within the domain and data flows. If you reduce domain modeling to the transposition of a database structure into C# classes, you’re missing the point completely. Anyway, consider that missing the point doesn’t mean that you’re not taking the project home.


Note

At the risk of looking overzealous and sounding pedantic, we want to repeat that you don’t need to use successful and popular patterns to write code that works. At the same time, you should not dismiss any popular pattern you hear about as generally useless or impractical. We’ll usually be the first in line to yell that some (and perhaps even many) object-oriented patterns are cumbersome and not very practical. However, here we’re talking about strategic and architecture-level patterns that might guide you through the twists and turns of the design maze. It’s not about being freakily cool; it’s about being effective and making the right choices.


When Domain Model is an option

Complexity is the driving force for the adoption of the Domain Model pattern. Complexity should be measured in terms of the current requirements, but to spot indicators of complexity you should also look at possible future enhancements or requirements churn. Working with the Domain Model pattern is generally more expensive in simple scenarios, but it is a savvy choice in larger systems because its startup and maintenance costs can be absorbed more easily.


Note

Often the major strength of an approach, when taken to the limit, becomes its most significant weakness. This fact also holds true for the Domain Model pattern. It’s really not easy to envision a full system in terms of an abstract model where entities and relationships describe the processes and their mechanics. The major difficulty we’ve encountered in our careers with domain modeling is not technical, but purely a matter of design. Sometimes we spent hours and even days trying to make sense of how a given aspect of the domain was best represented, whether through a method, a combination of properties, or a domain service. And sometimes we also spent considerable time speculating about the right modifier to use for the setter of a class property, whether it should be private, public, or just read-only.


The slippery point is that the domain model is a public programming interface you expose for the application layer and others to consume. In particular, the application layer is bound to a particular client application and new applications can always be written in the future, even outside your direct control. The domain model is just like an API. You never know how people are going to use it, and you should try to make any misuse just impossible. (Impractical is not enough here.) Worse yet, the domain model is not a simple API for some functionality; it’s an API for your business. It implies that the misuse of the domain model might lead to misusing the business and running inconsistent and incoherent processes.

Yes, a domain model is a tremendous responsibility and is complex. On the other hand, you need complexity to handle complexity.


Important

Sometimes, if you don’t feel completely confident or don’t get to know the domain very well, you might even want to avoid the domain model or resort to using far lighter forms of it, like the notorious anemic domain model.


The pattern in action

A domain model is a collection of plain old classes, each of which faithfully represents a significant entity in the business domain. These classes are data containers and can expose public properties, but they also are expected to expose methods. The term POCO (Plain Old CLR Object) is often used to refer to classes in a domain model.

More often than not, at this point we get the question, “Which methods?”

It’s a fair-enough question, but it has no answer other than, “Your understanding of the domain will tell you which methods each class needs.” When designing a domain model, you shouldn’t feel like God at work creating the world. You are not expected to build a model that represents all the ramifications of human life on planet Earth. Your purpose is simply to recognize and define software entities that replicate the processes (and related behaviors) you observe in the business domain. Therefore, the more you know about the domain, the easier it will be for you to copycat it in software.

In the real world, you seldom observe models; more likely, you’ll observe actions and events. Those actions will help you figure out the behavior to implement on classes. Keeping an eye on actions more than on properties of entities does help. On the other hand, it’s the Tell-Don’t-Ask principle we discussed in Chapter 3, “Principles of software design.”
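To make the point concrete, here’s a minimal sketch of a behavior-rich domain class built around the booking scenario used earlier in the chapter. The class, its invariants, and its method names are our own illustration of the approach, not a canonical model; the Customer type is assumed from the earlier example:

public class Booking
{
    public Customer Guest { get; private set; }
    public DateTime CheckIn { get; private set; }
    public DateTime CheckOut { get; private set; }
    public bool IsConfirmed { get; private set; }

    public Booking(Customer guest, DateTime checkIn, DateTime checkOut)
    {
        // Invariants live with the entity, not in the caller.
        if (checkOut <= checkIn)
            throw new ArgumentException("Check-out must follow check-in.");
        Guest = guest;
        CheckIn = checkIn;
        CheckOut = checkOut;
    }

    // Behavior observed in the domain: you extend a stay;
    // you don't set CheckOut from the outside (Tell-Don't-Ask).
    public void ExtendStay(int extraNights)
    {
        if (IsConfirmed)
            throw new InvalidOperationException("A confirmed booking cannot be extended.");
        CheckOut = CheckOut.AddDays(extraNights);
    }

    public void Confirm()
    {
        IsConfirmed = true;
    }
}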

At some point, the classes in the domain model need to be persisted. Persistence is not a responsibility of the domain model, and it happens outside of the domain model through repositories connected to the infrastructure layer. To the application’s eyes, the domain model is the logical database and doesn’t have any relational interface. At the same time, persistence most likely happens through a relational interface. The conversion between the model and the relational store is typically performed by ad hoc tools—specifically, Object/Relational Mapper (O/RM) tools, such as Microsoft’s Entity Framework or NHibernate. The unavoidable mismatch between the domain model and the relational model is the critical point of implementing the Domain Model pattern.


Note

In the next chapter, we’ll return to the pros, cons, and details of a domain model when viewed through the lens of a DDD approach.


The Anemic Domain Model (anti-)pattern

The hallmark of a domain model is the behavior associated with objects. And “which behavior exactly” is a common objection that boosters of the Domain Model pattern often hear. The Domain Model pattern is sometimes contrasted with another object-model pattern known as Anemic Domain Model (ADM).

Generalities of the pattern

In an anemic domain model, all objects still follow the naming convention of real-world domain entities, relationships between entities exist, and the overall structure of the model closely matches the real domain space. Yet, there’s no behavior in entities, just properties. From here comes the use of the adjective anemic in the pattern’s name.

The inspiring principle of ADM is that you don’t deliberately put any logic in the domain objects. All the required logic, instead, is placed in a set of service components that all together contain the whole domain logic. These services consume the domain model, access the storage, and orchestrate persistence.
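In code, the split might look like the following sketch, where both classes are hypothetical examples of the style rather than recommended designs:

// Anemic entity: properties only, no behavior.
public class Order
{
    public int OrderId { get; set; }
    public DateTime Date { get; set; }
    public decimal Total { get; set; }
}

// All the domain logic lives in a service component instead.
public class OrderDomainService
{
    public bool IsEligibleForFreeShipping(Order order)
    {
        // A business rule deliberately kept out of the (anemic) entity.
        return order.Total > 100;
    }
}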

Inferring models from the database might cause anemia

The canonical example of an anemic domain model is the set of objects you get when you infer a model from an existing database structure. If you do this with Entity Framework, you get a bunch of classes that map all the tables in the database. Each class is filled with properties that are a close adaptation of table columns. Foreign keys and constraints are also honored, yielding a graph of objects that faithfully represent the underlying relational data model.

Classes that Entity Framework generates are declared partial, meaning they can be extended at the source-code level (that is, not through compile-level inheritance). In other words, using Entity Framework in a database-first way doesn’t prevent you from having a behavior-rich model. What you get by default, though, is just an anemic domain model in which all classes are plain containers of properties with public getters and setters.

Is it a pattern or an anti-pattern?

There’s a general consensus about ADM: it is more of an anti-pattern than a pattern, and it’s a design rule you’d better not follow. We think there’s a conceptual view of it and a pragmatic view, and choosing either might be fine. Let’s see why it is almost unanimously considered an anti-pattern. The words of Martin Fowler are extremely clear in this regard, and you can read them here: http://www.martinfowler.com/bliki/AnemicDomainModel.html. (Note that the article was written over a decade ago.)

In brief, Fowler says that ADM is “contrary to the basic idea of object-oriented design; which is to combine data and process together. The anemic domain model is really just a procedural style design.” In addition, Fowler warns developers against considering anemic objects to be real objects. Anemic objects are, in the words of Fowler, “little more than bags of getters and setters.”

We couldn’t agree more. Additionally, we agree with Fowler when he says that object-oriented purism is good but we “need more fundamental arguments against this anemia.” Honestly, the arguments that Fowler brings to the table sound both promising and nebulous. He also says that ADM incurs all the costs of a domain model without yielding any of the benefits.

In summary, the key benefits referred to by Fowler can be summarized as listed here:

- Analysis leads to objects that closely model the domain space and are tailored to the real needs.

- The strong domain perspective of things reduces the risk of misunderstandings between the domain experts and the development team.

- Code is likely to become more readable and easily understandable.

- The final result has more chances to really be close to expectations.

How concrete are these declared benefits? What if the resulting domain model only looks like the real things but leaves a gap to fill? Benefits of a domain model are there if you catch them. In our (admittedly rather defensive and pragmatic) approach to software design, a good anemic model probably yields better results than a bad domain model. In the end, not surprisingly, it’s up to you and your vision of things. It depends on how you evaluate the skills of the team and whether you consider the team capable of coming up with an effective domain model.

The bottom line is that ADM is an anti-pattern, especially if it is used in complex domains where you face a lot of frequently changing business rules. In the context of data-driven applications and CRUD systems, an anemic model is more than fine.

Moving the focus from data to tasks

For decades, relational data modeling has been a more than effective way to model the business layer of software applications. In the .NET space, the turning point came in the early 2000s, when more and more companies that still had their core business logic carved in the rough stone of mainframes took advantage of the Microsoft .NET Framework and Internet breakthroughs to renovate and modernize their systems. In only a few years, this poured an incredible amount of complexity onto the shoulders of developers. RAD and relational modeling then—we’d say almost naturally—appeared worn out, and their limits were exposed. Figure 7-2 illustrates the evolution of data-modeling techniques over the past decades.


FIGURE 7-2 Trends in application modeling.

The figure is neither exhaustive nor comprehensive: it only aims to show changes in patterns and approaches over the years that have been used to produce faithful models.

The chart measures the faithfulness of the produced models to the observed processes. It comes as no surprise that Domain-Driven Design improved faithfulness. The advent of DDD also started moving the focus from data-centric design to task-based design. It was a natural move, essentially driven by the need to handle complexity in some way.

The transition to a task-based vision of the world is the ultimate message of the Tell-Don’t-Ask principle, which we find generically mentioned more often than concretely applied. Tasks are the workflows behind each use-case and each interaction between the user and the system. From a design perspective, a single layer that contains whatever might exist between presentation and data is a rather bloated layer. In this regard, isolating the orchestration of tasks into a separate layer leaves the sole domain logic to code. Domain logic fits more than naturally either into domain objects or, if an anemic design is chosen, into separate modules.

Let’s look at an example to give more concrete form to what we have so far generically called the application layer.

Task orchestration in ASP.NET MVC

In an ASP.NET MVC application, any user-interface action ends in a method invoked on a controller class. The controller, then, is the first place where you should consider orchestrating tasks. Abstractly speaking, there’s no difference at all between a controller method and a postback handler. In much the same way that every savvy developer avoids putting full orchestration logic in a Button1_Click event handler, you should avoid having full orchestration logic in a controller method.

Controller as a coordinator

Responsibility-Driven Design (RDD) is a design methodology introduced a decade ago by Rebecca Wirfs-Brock and Alan McKean in the book Object Design: Roles, Responsibilities, and Collaborations (Addison-Wesley, 2002). The essence of RDD consists of breaking down a system feature into a number of actions the system must perform. Next, each action is mapped to a component (mostly a class) being designed. Executing the action becomes a specific responsibility of the component. The role of the component depends on the responsibilities it assumes. RDD defines a few stereotypes to classify the possible role each component can have. We’re not loyal followers of the RDD approach, even though we find its practices useful for mechanically breaking hard-to-crack nuts into digestible pieces.

We think that one of the RDD stereotypes—the Coordinator—meticulously describes the ideal role of an ASP.NET MVC controller. The RDD Coordinator stereotype suggests you group all the steps that form the implementation of the action within a single application service. From within the controller method, therefore, you place a single call to the application service and use its output to feed the view-model object. That’s exactly the layout we illustrated earlier. Here’s the layout again:

public ActionResult PlaceOrder(OrderInputModel orderInfo)
{
    // Input data already mapped thanks to the model-binding
    // infrastructure of ASP.NET MVC

    // Perform the task by invoking an application service and
    // get a view model back from the application layer
    var service = new OrderService();
    var model = service.PlaceOrder(orderInfo);

    // Invoke next view
    return View(model);
}

The overall structure of the ASP.NET MVC controller method is quite simple. Solicited by an incoming HTTP request, the action method relays most of the job to another component that coordinates all further steps. The call into the OrderService class in the example is where the boundary between the presentation and application layers is crossed.

There’s an interesting alternative to the code layout just shown that might come in handy when you don’t much like the idea of having a brand new application layer for any new front end. So imagine you have a consolidated application layer and need to add a new front end. The application layer is mostly the same except for some view-model adaptation. In this case, it might be acceptable to add an extra step to the presentation layer, as shown here:

// Get the response from the existing application layer
var service = new OrderService();
var model = service.PlaceOrder(orderInfo);

// Adapt it to the new front end's view model
var newFrontendModel = someAdapter.NewViewModel(model);

In general, you shouldn’t expect the application layer to be fully reusable—the name says it all. However, any level of reusability you can achieve in a specific scenario is welcome. And the more you can reuse, the better. Instead, what is invariably shared and reusable is the domain logic.

Connecting the application and presentation layers

The junction point between the application layer and presentation is the controller. In the previous chapter, you saw how to use poor-man’s dependency injection to implement it. To use a full-fledged Inversion of Control (IoC) container like Microsoft Unity in ASP.NET MVC, you need to override the controller factory. Here’s a code snippet that shows most of it. In global.asax, at application startup, you register a custom controller factory:

var factory = new UnityControllerFactory();
ControllerBuilder.Current.SetControllerFactory(factory);

The class UnityControllerFactory can be as simple as shown here:

public class UnityControllerFactory : DefaultControllerFactory
{
    public static IUnityContainer Container { get; private set; }

    public UnityControllerFactory()
    {
        Container = new UnityContainer();   // Initialize the IoC container
        Container.LoadConfiguration();      // Configure it, reading details from web.config
    }

    protected override IController GetControllerInstance(RequestContext context, Type type)
    {
        if (type == null)
            return null;
        return Container.Resolve(type) as IController;
    }
}

With this code in place, any controller class can be created without requiring the default parameterless constructor:

public class HomeController : Controller
{
    private IHomeService _service;

    public HomeController(IHomeService service)
    {
        _service = service;
    }
    ...
}


Note

Writing your own controller factory to integrate Unity in ASP.NET MVC is not strictly necessary. You can use the Unity.Mvc library available as a NuGet package. For more information, see http://github.com/feedbackhound/Unity.Mvc5.


Connecting the application and data-access layers

A similar problem surfaces when connecting the application with the infrastructure layer where data-access code resides. You solve it in the same way—that is, via dependency injection. As you’ll see in more detail in the next chapter, a repository is a common name for containers of data-access logic in a domain model scenario.

public class HomeService
{
    private ISomeEntityRepository _someEntityRepo;

    public HomeService(ISomeEntityRepository repo)
    {
        _someEntityRepo = repo;
    }
    ...
}

Interestingly enough, you don’t need other changes in the code that initializes controllers and sets up factories. You only need to make sure that repository types and interfaces (one for each significant entity in the domain model) are mapped in the IoC container. All IoC libraries let you do type mapping both through configuration and via a fluent API. The reason you don’t need further code beyond configuration is that today all IoC containers can resolve chains of dependencies in a transparent way. So if you tell the IoC container to resolve the controller type, and the controller type has dependencies on some application service types, which in turn depend on one or more repositories, all of that happens in the context of a single line of code.
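With Unity, for example, the fluent-API counterpart of the web.config mapping might look like the snippet below. The SomeEntityRepository class is a hypothetical concrete type; the other names come from the earlier snippets:

// Map abstractions to concrete types once, at application startup.
var container = new UnityContainer();
container.RegisterType<IHomeService, HomeService>();
container.RegisterType<ISomeEntityRepository, SomeEntityRepository>();

// One resolution: Unity builds HomeController, the IHomeService it needs,
// and the repository the service depends on, all transparently.
var controller = container.Resolve(typeof(HomeController)) as IController;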

Using dependency injection for connecting layers is recommended, but it’s not the only solution. If you have layers living in the same process space, a simpler route is direct and local instantiation of objects, whether they are application layer services or repositories. The drawback in this case is tight coupling between the layers involved:

// Tight coupling between the class OrderService
// and the class that contains this code
var service = new OrderService();
var model = service.PlaceOrder();

At this level, the problem of tight coupling should be noted, but it is not a tragedy per se. It certainly makes the container class less testable, but it doesn’t change the substance of code. Also, refactoring to add loose coupling—namely, refactoring to the Dependency Inversion principle explained in Chapter 3—is a relatively easy task, and many code-assistant tools make it nearly a breeze.

If layers are deployed to distinct tiers, on the other hand, using HTTP interfaces is the common way to go. You can use, for example, the async/await C# pattern to place a call via HttpClient to a remote endpoint. The remote endpoint can be, for example, a Web API front end.
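Here’s a minimal sketch of such a cross-tier call. The endpoint URL and the OrderViewModel type are made up for illustration, and the PostAsJsonAsync/ReadAsAsync helpers assume the ASP.NET Web API client libraries are referenced:

public class RemoteOrderService
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task<OrderViewModel> PlaceOrderAsync(OrderInputModel orderInfo)
    {
        // Cross the tier boundary with a single HTTP roundtrip.
        var response = await Client.PostAsJsonAsync(
            "http://api.yourserver.com/orders", orderInfo);
        response.EnsureSuccessStatusCode();

        // Deserialize the payload returned by the Web API front end.
        return await response.Content.ReadAsAsync<OrderViewModel>();
    }
}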

Orchestrating tasks within the domain

In general terms, the business logic of an application contains the orchestration of tasks, domain logic, and a few extra things. The extra things are commonly referred to as domain services. In a nutshell, domain services contain any logic that doesn’t fit nicely in any domain entity. When you follow the guidelines of the Domain Model pattern, the business logic is for the most part split across domain entities. If some concepts can’t be mapped in this way, that’s the realm of a domain service.

Cross-entity domain logic

Domain services typically contain the share of business logic that consists of operations involving multiple domain entities. For the most part, domain services implement complex, multistep operations that are carried out within the boundaries of the domain and, perhaps, the infrastructure layer. Canonical examples of domain services are OrderProcessor, BestPriceFinder, or GoldCustomerEvaluator. The names assigned to services should reflect real operations and be easily understandable by both stakeholders and domain experts.


Note

The point of naming domain services so that they are immediately recognizable by experts is not simply to enhance the general readability of code. More importantly, it is to establish a ubiquitous language shared by the development team and domain experts. As you’ll see in the next chapter, ubiquitous language is a pillar of Domain-Driven Design.


Let’s expand on a sample domain service, such as the GoldCustomerEvaluator service. In an e-commerce system, after a customer has placed an order, you might need to check whether the new order changes the status of the customer because she has possibly exceeded a given minimum sales volume. For this check to be carried out, you need to read the current volume of sales for the customer, apply some of the logic of the rewarding system in place, and determine the new status. In a realistic implementation, this might require you to read the Orders table, query some reward-specific views, and perform some calculations. In the end, it’s a mix of persistence, orders, customers, and business rules. Not a single entity is involved, and the logic is cross-entity. A good place for it, then, is in a domain service.

Finally, operations defined as a domain service are for the most part stateless; you pass in some data and receive back some response. No state is maintained, except perhaps some cache.
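A bare-bones sketch of such a service might look like this; the repository interface, its method, the CustomerId property, and the threshold are illustrative assumptions:

public class GoldCustomerEvaluator
{
    private readonly IOrderRepository _orders;
    private const decimal GoldThreshold = 5000;   // Hypothetical business rule

    public GoldCustomerEvaluator(IOrderRepository orders)
    {
        _orders = orders;
    }

    // Stateless, cross-entity operation: data in, response out.
    public bool IsNowGoldCustomer(Customer customer)
    {
        var volume = _orders.GetTotalSalesVolume(customer.CustomerId);
        return volume >= GoldThreshold;
    }
}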

Where are the connection strings?

Entities in a domain model are expected to be plain old CLR objects (POCOs) and agnostic about persistence. In other words, an Invoice class just contains data like date, number, customer, items, tax information, and payment terms. In addition, it might contain a method like GetEstimatedDateOfPayment that works on the date and payment terms, adds a bit of calculation regarding holidays, and determines when the invoice might be paid.

The code that loads, say, an Invoice entity from storage and the code that saves it persistently to storage doesn’t go with the entity itself. Other components of the system will take care of that. These classes are repositories. They are for the most part concerned with CRUD functionality and any additional persistence logic you might need. Repositories are the only part of the system that deal with connection strings.
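As a sketch, an Entity Framework-based repository might look like the following; the InvoiceContext class and the connection-string name are assumptions made for the example:

public class InvoiceRepository
{
    // The only layer of the system that knows about connection strings;
    // "name=MyAppDb" points to an entry in web.config.
    public Invoice FindByNumber(string invoiceNumber)
    {
        using (var db = new InvoiceContext("name=MyAppDb"))
        {
            return db.Invoices.SingleOrDefault(i => i.Number == invoiceNumber);
        }
    }

    public void Save(Invoice invoice)
    {
        using (var db = new InvoiceContext("name=MyAppDb"))
        {
            db.Invoices.Add(invoice);
            db.SaveChanges();
        }
    }
}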


Should you care about tiers?

Multiple tiers are not generally a good feature to have in a system. A popular quote attributed to Martin Fowler says that the first rule of distributed programming is, “Do not use distributed objects; at least until it becomes completely necessary.” Tiers slow down the overall performance and increase the overall complexity. Both aspects, then, affect the overall cost to build and maintain the application. This said, tiers—some tiers—just can’t be avoided. So they’re sort of evil, but a necessary evil.

For example, an unavoidable barrier exists between the code that runs on the user’s machine and the server-side code. Another typical divide exists between the server-side code and the database server. Web applications mostly fall into this category.

In addition, each application might have its own good reasons to introduce multiple tiers. One reason could be the need to support an external product that requires its own process. Another good reason for multiple tiers is the quest for security. A module that runs in isolation in a process can be more easily protected and accessed if it’s used only by authorized callers.

Much less often than commonly reckoned, multiple tiers serve as a trick to increase the scalability of a system. More precisely, a system where certain components can be remoted and duplicated on additional tiers is inherently more scalable than systems that do not have this capability. But scalability is just one aspect of a system. And often the quest for scalability—that is, the need for stabilized performance under pressure—hurts everyday performance.

In summary, as a general rule, the number of physical tiers should be kept as low as possible. And the addition of a new tier should happen only after a careful cost-benefit analysis, where costs mostly lie in the area of increased complexity and benefits lie in the area of security, scalability, and, perhaps, fault tolerance. Surely you’ve seen system integrators create a crazy system with three or more tiers (especially in the early 2000s) and then wondered why the system was so slow even though the “right” architectural approach was employed.


Moving data across the boundaries

A tier represents a physical boundary to cross, whether it is a process boundary or machine boundary. Crossing a boundary is an expensive operation. The cost is higher if you have to reach a physically distant computer rather than just another process within the same machine. One rule of thumb to consider is that a call that crosses the boundaries of a process is about 100 times slower than an equivalent in-process call. And it is even slower if it has to travel over the network to reach the endpoint.

How does a call travel over the wire and across the boundaries? Does it travel light? Or should it bring along everything and travel overloaded? Choosing the most appropriate way of moving data across the boundaries (logical or physical) is another design problem to tackle in the business segment of an application.

Data flow in layered architecture

Figure 7-3 presents a rather abstract flow of data within a layered architecture. When a command is executed, data flows from the user interface in the form of an input model through the application layer. Depending on the requested action, the application layer might need to arrange instances of domain entities using content from the input model (for example, creating a new order from provided information). In a layered Domain Model system, persistence typically means turning domain entities into a physical—often relational—data model.


FIGURE 7-3 Abstract representation of data flows across the stack of a layered architecture.


Note

Especially in DDD scenarios, you sometimes find the term input model replaced by the term command. The term command is closer to the idea of executing a task.


On the way back, when a query is requested, content read from the data model is first turned into a graph of domain entities and then massaged into a view model for the user interface to render it.

Abstractly speaking, a layered system manages four different models of data. The four models shown in Figure 7-3 are all logically distinct, but they might coincide sometimes. The domain model and data model are often the same in the sense that the domain model is directly persisted to the infrastructure. In an ASP.NET MVC application, the input model and view model often coincide in the GET and POST implementation of a controller action. In a CRUD system, all models might coincide and be just one—that will be the “M” in the Model-View-Controller (MVC) pattern.


Important

As we see it, the model in the MVC pattern is one of the most misunderstood concepts in the entire history of software. Originally devised in the 1980s, MVC began as an application pattern and could be used to architect the entire application. That was during the time of monolithic systems created end to end as a single transaction script. The advent of multilayered and multitiered systems changed the role of the MVC but didn’t invalidate its significance. MVC remains a powerful pattern, except that the idea of a single model doesn’t work anymore. Model in MVC was defined as “data being worked on in the view.” This means that today MVC is essentially a presentation pattern.


Sharing the Domain Model entities

In layered architecture that follows the Domain Model pattern, domain entities are the most relevant containers of data. Is it recommended to let domain entities bubble up to the user interface and serialize them through tiers if required?

Also, in data-driven applications where you propagate the domain model up to the presentation layer (in the manner of self-tracking entities), an anemic domain model fits better.

When using a rich model (a domain model with methods within the classes), it doesn’t make sense to propagate the domain entities to the presentation layer, because the presentation layer would get access to the methods on those entities. In DDD applications, the presentation layer should have a different presentation model: a DTO-based model (in many cases, a view model) where the DTOs are designed around the needs of each screen or page rather than around the domain model.

Domain entities within layers

We don’t see any problem in passing domain model classes all around the components and modules orchestrated by the application layer. As long as the data transfer occurs between layers, there should be no problems, neither technical nor design-related. If the data transfer occurs between tiers, you might run into serialization issues if domain entities form a rather intricate graph with circular references. In this case, it might be easier to introduce some ad hoc data-transfer objects specifically designed to handle a scenario or two.


Note

To be precise, you might face problems even moving domain entities between layers when entities have lazy-loaded properties. In this case, when the layer that receives the entity attempts to read through lazy-loaded properties, an exception is raised because the data has not been read and the storage context is no longer available.


As for letting domain entities rise up to the presentation layer, all previous considerations hold, plus more. The view model behind the user interface doesn’t have to match the structure of the domain model used to express the business logic. Quite simply, if the content carried by a domain model entity fits the needs of the user interface, why not use it and save yourself a bunch of other data-transfer classes?

Having said that, using domain model entities throughout the various layers and tiers is in principle more than acceptable and, in some cases, even desirable; still, there are a few possible side effects to consider.

Dangers of using a single model for commands and queries

This is a deeper point we’ll properly address starting with the next chapter and even more in Chapter 10, “Introducing CQRS,” and Chapter 11, “Implementing CQRS.” For now, it suffices to say that referencing domain entities from the user interface might require writing presentation code that fully leverages the behavior built in domain entities. Because the behavior is ultimately domain logic, and domain logic must be constantly consistent with the business rules, we see the potential risk of writing presentation code that breaks up the consistency.

Possible constraints for future extensibility

Modern applications are constantly extended with new, mostly mobile, front ends. Especially with mobile apps and devices, the likelihood of presenting different use-cases than other front ends is high. This might create the need to bring into the user interface different aggregates of data that just don’t exist in the domain model. You don’t want to modify the domain model to meet the needs of a particular front end. Adding ad hoc data-transfer objects, then, becomes the most appropriate option.

Using data-transfer objects

It should be clear at this point that seldom does a single solution for moving data across boundaries work within a given system and for its entire lifetime. Sometimes, it might be quick and easy to use domain entities everywhere; sometimes, you want to use data-transfer objects (DTOs). It is important to know that no solution is always preferable over another. As usual, it depends.

Generalities of a data-transfer object

A data-transfer object is an object specifically designed to carry data between tiers. A DTO has no behavior and is a plain bag of getters and setters that is rather inexpensive to create (for example, it needs no unit testing). The reason for having a DTO is that, as a plain container, it allows you to pack multiple chunks of data and transfer all of that in a single roundtrip.

A DTO is an inherently serializable object. Its use is mostly recommended when remote components are involved. However, we like to offer a broader perspective of DTOs and consider also using them between layers.

DTOs vs. domain entities

The typical use of a DTO is when you need, say, to display or process order and customer information at the same time, but the amount of information required for actual processing needs only a few of the properties available on the order and customer entities. A DTO can flatten even complex hierarchies to simpler data containers that contain what’s necessary. (See Figure 7-4.)


FIGURE 7-4 DTOs vs. domain entities when moving data across tiers and layers.
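For example, a screen that lists orders might need just the order number, the date, and the customer’s display name. A flattened DTO picks exactly those pieces from the Order and Customer entities; the names below are illustrative:

// A screen-ready projection of two related domain entities.
public class OrderSummaryDto
{
    public int OrderNumber { get; set; }
    public DateTime Date { get; set; }
    public string CustomerName { get; set; }   // Flattened from Order.Customer
}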

Sharing domain entities mostly works across layers and is essentially a desirable shortcut to minimize the overall number of classes involved. From a pure design perspective, using DTOs is the “perfect” solution that guarantees the maximum decoupling between interfacing components and also extensibility.

However, a full DTO solution inevitably leads to the proliferation of many small classes in the various Visual Studio projects. On the one hand, you have the problem of managing and organizing those numerous classes in folders and namespaces. On the other hand, you also have to face the cost of loading data into, and extracting data from, DTOs.

AutoMapper and adapters

The real cost of DTOs is just in filling them with data and reading back from them. This is done by ad hoc components generically called adapters. An adapter is a plain C# class; more often, however, it is a repository of static methods. Typically, an adapter copies property values from domain entities to DTOs and vice versa.
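Hand-coded, an adapter is little more than property-by-property copy work, as in this sketch built on the OrderSummaryDto shown earlier (it assumes the Order entity exposes a Customer property):

public static class OrderAdapter
{
    public static OrderSummaryDto ToSummaryDto(Order order)
    {
        // Plain, repetitive copying: exactly the work AutoMapper automates.
        return new OrderSummaryDto
        {
            OrderNumber = order.OrderId,
            Date = order.Date,
            CustomerName = order.Customer.Name
        };
    }
}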

As you can see, it’s rather cumbersome and even dreary work, and it also takes several lines of code. Finally, you might even want to have some tests around to check whether mapping actually works. Building adapters, therefore, is conceptually simple but takes time and effort. Dealing with this reality led to a tool that largely automates the building of DTOs. The tool is AutoMapper, which you can read about at http://automapper.org.

With AutoMapper, you do two basic things. First, you create a map between a source type and a target type. Second, you invoke the mapping procedure to fill an instance of the target type with data in an instance of the source type:

Mapper.CreateMap<YourSourceType, YourDtoType>();

Once a mapping is defined, you invoke it via the Map method:

var dto = Mapper.Map<YourDtoType>(sourceObject);

The tool also has nongeneric versions of methods that come in handy in situations where you just don’t know the actual types involved. AutoMapper doesn’t do real magic here and doesn’t even read your mind at run time. All it does is use conventions (for example, matching property names) while not denying configuration (for example, for resolving names that don’t match or computed properties).
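For example, a property whose name doesn’t match, or whose value must be computed, can be configured through ForMember. Here’s a sketch based on the earlier OrderSummaryDto:

// Resolve the mismatch between OrderId and OrderNumber, and
// compute CustomerName by flattening the Customer reference.
Mapper.CreateMap<Order, OrderSummaryDto>()
    .ForMember(d => d.OrderNumber, opt => opt.MapFrom(s => s.OrderId))
    .ForMember(d => d.CustomerName, opt => opt.MapFrom(s => s.Customer.Name));

var dto = Mapper.Map<OrderSummaryDto>(someOrder);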

In addition, consider one downside of an automatic tool like AutoMapper. When you ask it to create a DTO from an entity, it can’t avoid navigating the entire graph of the entity, which must be available in memory and therefore materialized from storage. It would probably be much easier and faster to instruct the domain services to return ready-made DTOs.


Note

Another option for moving data around is using IQueryable objects. As you’ll see in more detail in Chapter 14, “The persistence layer,” which is dedicated to the infrastructure of the system, a highly debated but emerging practice is returning IQueryable from data repositories.

IQueryable is the core LINQ interface, and all it does is provide functionality to evaluate a query against a LINQ-enabled data source. One reason to return IQueryable from repositories is to enable upper layers to create different kinds of queries in an easier way. This ability keeps the repository interfaces thinner and also reduces the need to use DTOs, because some DTOs can be anonymous types. Even when DTOs are created out of queries, however, they belong to the specific layer, are isolated in the context of the layer, and can be easier to manage.
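In code, the practice might reduce to something like the sketch below, where the interface and the Product entity are purely illustrative:

public interface ICatalogRepository
{
    // Expose a composable LINQ query rather than many ad hoc methods.
    IQueryable<Product> Products { get; }
}

// An upper layer shapes its own query; the projection is an anonymous,
// layer-private DTO that never needs a named class.
var cheapest = catalog.Products
    .Where(p => p.Price < 50)
    .OrderBy(p => p.Price)
    .Select(p => new { p.Name, p.Price });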


Summary

In the previous chapter, we discussed the presentation layer and focused on the user experience. This included looking at how important it is to have user tasks organized around smooth and fluent processes, with no bottlenecks. In this chapter, we discussed the business layer and how important it is to have it faithfully match the real processes in the domain space.

This reminds us of the classic dichotomy between doing the right things vs. doing things right. As you might understand, one doesn’t stem from the other, but both are important achievements.

Doing things right is all that the presentation layer should be concerned with. Doing things right is the gist of efficiency: implementing tasks in an optimal way, fast and fluid. Doing the right things is, instead, all that the business layer should be concerned about. Doing the right things is about effectiveness and achieving goals. The ultimate goal of a software system is matching requirements and providing a faithful representation of the domain space.

To make domain modeling more effective, patterns like Domain Model and methodologies like DDD are fundamental. These methodologies sharpen the profile of what was generically referred to as the business layer for years. We introduced the application layer and domain layer, and we kept data access isolated in the infrastructure layer, along with other pieces of the surrounding infrastructure (such as email servers, file systems, and external services).

We’ve been quite generic about the Domain Model pattern here; in the next chapter, we focus on Domain-Driven Design.

Finishing with a smile

A successful business layer requires keen observation and modeling. It also requires an ability to do things in the simplest possible manner. “Do it simple but not simplistically” is one of our favorite mantras. And it is a wink at the first of Murphy’s laws that we list for the chapter: if you have anything complex that works, you can be sure that in the past it was something simple that worked.

See http://www.murphys-laws.com for an extensive listing of computer-related (and noncomputer-related) laws and corollaries. Here are a few to enjoy for now:

- A complex system that works is invariably found to have evolved from a simple system that works.

- Investment in software reliability will increase until it exceeds the probable cost of errors.

- In theory there is no difference between theory and practice, but in practice there is.