Udi Dahan – The Software Simplist
Enterprise Development Expert & SOA Specialist
 
  
  

Archive for the ‘Business Rules’ Category



Clarified CQRS

Wednesday, December 9th, 2009

After listening to how the community has interpreted Command-Query Responsibility Segregation, I think the time has come for some clarification. Some have been tying it together with Event Sourcing. Most have been overlaying their previous layered architecture assumptions on it. Here I hope to identify CQRS itself, and describe where it can connect to other patterns.

Download as PDF – this is quite a long post.

Why CQRS

Before describing the details of CQRS we need to understand the two main driving forces behind it: collaboration and staleness.

Collaboration refers to circumstances under which multiple actors will be using/modifying the same set of data – whether or not the intention of the actors is actually to collaborate with each other. There are often rules which indicate which user can perform which kind of modification and modifications that may have been acceptable in one case may not be acceptable in others. We’ll give some examples shortly. Actors can be human like normal users, or automated like software.

Staleness refers to the fact that in a collaborative environment, once data has been shown to a user, that same data may have been changed by another actor – it is stale. Almost any system which makes use of a cache is serving stale data – often for performance reasons. What this means is that we cannot entirely trust our users’ decisions, as they could have been made based on out-of-date information.

Standard layered architectures don’t explicitly deal with either of these issues. While putting everything in the same database may be one step in the direction of handling collaboration, staleness is usually exacerbated in those architectures by the use of caches as a performance-improving afterthought.

A picture for reference

I’ve given some talks about CQRS using this diagram to explain it:

[Diagram: CQRS architecture overview]

The boxes named AC are Autonomous Components. We’ll describe what makes them autonomous when discussing commands. But before we go into the complicated parts, let’s start with queries:

Queries

If the data we’re going to be showing users is stale anyway, is it really necessary to go to the master database and get it from there? Why transform those 3rd normal form structures to domain objects if we just want data – not any rule-preserving behaviors? Why transform those domain objects to DTOs to transfer them across a wire, and who said that wire has to be exactly there? Why transform those DTOs to view model objects?

In short, it looks like we’re doing a heck of a lot of unnecessary work based on the assumption that reusing code that has already been written will be easier than just solving the problem at hand. Let’s try a different approach:

How about we create an additional data store whose data can be a bit out of sync with the master database – I mean, the data we’re showing the user is stale anyway, so why not reflect that in the data store itself. We’ll come up with an approach later to keep this data store more or less in sync.

Now, what would be the correct structure for this data store? How about just like the view model? One table for each view. Then our client could simply SELECT * FROM MyViewTable (or possibly pass in an ID in a where clause), and bind the result to the screen. That would be just as simple as can be. You could wrap that up with a thin facade if you feel the need, or with stored procedures, or using AutoMapper which can simply map from a data reader to your view model class. The thing is that the view model structures are already wire-friendly, so you don’t need to transform them to anything else.
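For illustration, a query-side data access class can stay as thin as a single SELECT against the view-shaped table. Here is a minimal sketch under that assumption – the table, class, and column names are invented for this example, not taken from anything above:

    using System;
    using System.Data.SqlClient;

    public class CustomerInfoViewModel
    {
        public Guid CustomerId { get; set; }
        public string Name { get; set; }
        public bool IsPreferred { get; set; }
    }

    public class CustomerInfoQuery
    {
        private readonly string connectionString;

        public CustomerInfoQuery(string connectionString)
        {
            this.connectionString = connectionString;
        }

        public CustomerInfoViewModel GetById(Guid customerId)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT CustomerId, Name, IsPreferred FROM CustomerInfoView WHERE CustomerId = @id",
                connection))
            {
                command.Parameters.AddWithValue("@id", customerId);
                connection.Open();

                using (var reader = command.ExecuteReader())
                {
                    if (!reader.Read())
                        return null;

                    // One row maps one-to-one onto the view model - no domain objects, no DTOs.
                    return new CustomerInfoViewModel
                    {
                        CustomerId = reader.GetGuid(0),
                        Name = reader.GetString(1),
                        IsPreferred = reader.GetBoolean(2)
                    };
                }
            }
        }
    }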

You could even consider taking that data store and putting it in your web tier. It’s just as secure as an in-memory cache in your web tier. Give your web servers SELECT only permissions on those tables and you should be fine.

Query Data Storage

While you can use a regular database as your query data store it isn’t the only option. Consider that the query schema is in essence identical to your view model. You don’t have any relationships between your various view model classes, so you shouldn’t need any relationships between the tables in the query data store.

So do you actually need a relational database?

The answer is no, but for all practical purposes and due to organizational inertia, it is probably your best choice (for now).

Scaling Queries

Since your queries are now being performed off of a separate data store than your master database, and there is no assumption that the data that’s being served is 100% up to date, you can easily add more instances of these stores without worrying that they don’t contain the exact same data. The same mechanism that updates one instance can be used for many instances, as we’ll see later.

This gives you cheap horizontal scaling for your queries. Also, since you’re not doing nearly as much transformation, the latency per query goes down as well. Simple code is fast code.

Data modifications

Since our users are making decisions based on stale data, we need to be more discerning about which things we let through. Here’s a scenario explaining why:

Let’s say we have a customer service representative who is on the phone with a customer. This user is looking at the customer’s details on the screen and wants to make them a ‘preferred’ customer, as well as modifying their address, changing their title from Ms to Mrs, changing their last name, and indicating that they’re now married. What the user doesn’t know is that after opening the screen, an event arrived from the billing department indicating that this same customer doesn’t pay their bills – they’re delinquent. At this point, our user submits their changes.

Should we accept their changes?

Well, we should accept some of them, but not the change to ‘preferred’, since the customer is delinquent. But writing those kinds of checks is a pain – we need to do a diff on the data, infer what the changes mean, which ones are related to each other (name change, title change) and which are separate, identify which data to check against – not just compared to the data the user retrieved, but compared to the current state in the database, and then reject or accept.

Unfortunately for our users, we tend to reject the whole thing if any part of it is off. At that point, our users have to refresh their screen to get the up-to-date data, and retype all the previous changes, hoping that this time we won’t yell at them because of an optimistic concurrency conflict.

As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts.

If only there was some way for our users to provide us with the right level of granularity and intent when modifying data. That’s what commands are all about.

Commands

A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above.

We could even consider allowing our users to submit a new command even before they’ve received confirmation on the previous one. We could have a little widget on the side showing the user their pending commands, checking them off asynchronously as we receive confirmation from the server, or marking them with an X if they fail. The user could then double-click that failed task to find information about what happened.

Note that the client sends commands to the server – it doesn’t publish them. Publishing is reserved for events which state a fact – that something has happened, and that the publisher has no concern about what receivers of that event do with it.
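To make the distinction concrete, here is a minimal sketch of what such intent-capturing commands might look like – the names and the sender abstraction are assumptions made for this example only:

    using System;

    // Each command captures one unit of user intent,
    // rather than a generic "update customer" with a bag of changed fields.
    public class MakeCustomerPreferredCommand
    {
        public Guid CustomerId { get; set; }
    }

    public class CustomerMovedCommand
    {
        public Guid CustomerId { get; set; }
        public Guid NewStreetId { get; set; }  // an ID rather than free text - see the validation discussion below
        public string HouseNumber { get; set; }
    }

    // Commands are sent to a single logical endpoint that owns the behavior;
    // events, by contrast, would be published to whoever subscribed.
    public interface ICommandSender
    {
        void Send(object command);
    }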

Commands and Validation

In thinking through what could make a command fail, one topic that comes up is validation. Validation is different from business rules in that it states a context-independent fact about a command. Either a command is valid, or it isn’t. Business rules on the other hand are context dependent.

In the example we saw before, the data our customer service rep submitted was valid, it was only due to the billing event arriving earlier which required the command to be rejected. Had that billing event not arrived, the data would have been accepted.

Even though a command may be valid, there still may be reasons to reject it.

As such, validation can be performed on the client, checking that all fields required for that command are there, number and date ranges are OK, that kind of thing. The server would still validate all commands that arrive, not trusting clients to do the validation.
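As a rough illustration of that split, client-side validation can be expressed as simple, context-free checks on the command itself. This sketch reuses the hypothetical CustomerMovedCommand from the earlier example:

    using System;
    using System.Collections.Generic;

    public static class CustomerMovedCommandValidator
    {
        // Context-independent checks only: required fields, ranges, formats.
        // Business rules (such as "delinquent customers can't be made preferred")
        // are not checked here - they belong to the server-side processing.
        public static IList<string> Validate(CustomerMovedCommand command)
        {
            var errors = new List<string>();

            if (command.CustomerId == Guid.Empty)
                errors.Add("Customer must be specified.");

            if (command.NewStreetId == Guid.Empty)
                errors.Add("Street must be specified.");

            if (string.IsNullOrEmpty(command.HouseNumber))
                errors.Add("House number is required.");

            return errors;
        }
    }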

Rethinking UIs and commands in light of validation

The client can make use of the query data store when validating commands. For example, before submitting a command that the customer has moved, we can check that the street name exists in the query data store.

At that point, we may rethink the UI and have an auto-completing text box for the street name, thus ensuring that the street name we’ll pass in the command will be valid. But why not take things a step further? Why not pass in the street ID instead of its name? Have the command represent the street not as a string, but as an ID (int, guid, whatever).

On the server side, the only reason that such a command would fail would be due to concurrency – someone had deleted that street and the deletion hadn’t yet been reflected in the query store; a fairly exceptional set of circumstances.

Reasons valid commands fail and what to do about it

So we’ve got a well-behaved client that is sending valid commands, yet the server still decides to reject them. Often the circumstances for the rejection are related to other actors changing state relevant to the processing of that command.

In the CRM example above, it is only because the billing event arrived first. But “first” could be a millisecond before our command. What if our user pressed the button a millisecond earlier? Should that actually change the business outcome? Shouldn’t we expect our system to behave the same when observed from the outside?

So, if the billing event arrived second, shouldn’t that revert preferred customers to regular ones? Not only that, but shouldn’t the customer be notified of this, like by sending them an email? In which case, why not have this be the behavior for the case where the billing event arrives first? And if we’ve already got a notification model set up, do we really need to return an error to the customer service rep? I mean, it’s not like they can do anything about it other than notifying the customer.

So, if we’re not returning errors to the client (who is already sending us valid commands), maybe all we need to do on the client when sending a command is to tell the user “thank you, you will receive confirmation via email shortly”. We don’t even need the UI widget showing pending commands.

Commands and Autonomy

What we see is that in this model, commands don’t need to be processed immediately – they can be queued. How fast they get processed is a question of Service-Level Agreement (SLA) and not architecturally significant. This is one of the things that makes that node that processes commands autonomous from a runtime perspective – we don’t require an always-on connection to the client.

Also, we shouldn’t need to access the query store to process commands – any state that is needed should be managed by the autonomous component – that’s part of the meaning of autonomy.

Another part is the issue of failed message processing due to the database being down or hitting a deadlock. There is no reason that such errors should be returned to the client – we can just roll back and try again. When an administrator brings the database back up, all the messages waiting in the queue will then be processed successfully and our users will receive confirmation.
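Here’s a sketch of what such an autonomous command handler might look like. The handler interface is a placeholder for whatever bus contract you’re using, and the persistence details are deliberately left out:

    using System;
    using System.Transactions;

    // Placeholder - stands in for your service bus's message handler contract.
    public interface IHandleCommand<T>
    {
        void Handle(T command);
    }

    public class MakeCustomerPreferredHandler : IHandleCommand<MakeCustomerPreferredCommand>
    {
        public void Handle(MakeCustomerPreferredCommand command)
        {
            using (var scope = new TransactionScope())
            {
                // Load the customer from this component's own store,
                // let the entity enforce its business rules, and save.
                // (Persistence is omitted here - it belongs to this AC alone.)

                scope.Complete();
            }
            // If the database is down or we hit a deadlock, the exception rolls the
            // transaction back and the message stays in the queue to be retried later -
            // no error needs to be returned to the client.
        }
    }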

The system as a whole is quite a bit more robust to any error conditions.

Also, since we don’t have queries going through this database any more, the database itself is able to keep more rows/pages in memory which serve commands, improving performance. When both commands and queries were being served off of the same tables, the database server was always juggling rows between the two.

Autonomous Components

While in the picture above we see all commands going to the same AC, we could logically have each command processed by a different AC, each with its own queue. That would give us visibility into which queue was the longest, letting us see very easily which part of the system was the bottleneck. While this is interesting for developers, it is critical for system administrators.

Since commands wait in queues, we can now add more processing nodes behind those queues (using the distributor with NServiceBus) so that we’re only scaling the part of the system that’s slow. No need to waste servers on any other requests.

Service Layers

Our command processing objects in the various autonomous components actually make up our service layer. The reason you don’t see this layer explicitly represented in CQRS is that it isn’t really there, at least not as an identifiable logical collection of related objects – here’s why:

In the layered architecture (AKA 3-Tier) approach, there is no statement about dependencies between objects within a layer, or rather it is implied to be allowed. However, when taking a command-oriented view on the service layer, what we see are objects handling different types of commands. Each command is independent of the other, so why should we allow the objects which handle them to depend on each other?

Dependencies are things which should be avoided, unless there is good reason for them.

Keeping the command handling objects independent of each other will allow us to more easily version our system, one command at a time, not needing even to bring down the entire system, given that the new version is backwards compatible with the previous one.

Therefore, keep each command handler in its own VS project, or possibly even in its own solution, thus guiding developers away from introducing dependencies in the name of reuse (it’s a fallacy). If you do decide as a deployment concern, that you want to put them all in the same process feeding off of the same queue, you can ILMerge those assemblies and host them together, but understand that you will be undoing much of the benefits of your autonomous components.

Whither the domain model?

Although in the diagram above you can see the domain model beside the command-processing autonomous components, it’s actually an implementation detail. There is nothing that states that all commands must be processed by the same domain model. Arguably, you could have some commands be processed by transaction script, others using table module (AKA active record), as well as those using the domain model. Event-sourcing is another possible implementation.

Another thing to understand about the domain model is that it now isn’t used to serve queries. So the question is, why do you need to have so many relationships between entities in your domain model?

(You may want to take a second to let that sink in.)

Do we really need a collection of orders on the customer entity? In what command would we need to navigate that collection? In fact, what kind of command would need any one-to-many relationship? And if that’s the case for one-to-many, many-to-many would definitely be out as well. I mean, most commands only contain one or two IDs in them anyway.

Any aggregate operations that may have been calculated by looping over child entities could be pre-calculated and stored as properties on the parent entity. Following this process across all the entities in our domain would result in isolated entities needing nothing more than a couple of properties for the IDs of their related entities – “children” holding the parent ID, like in databases.

In this form, commands could be entirely processed by a single entity – voilà, an aggregate root that is a consistency boundary.
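A hypothetical sketch of what such an isolated entity might look like once the collections are gone – the names and the pre-calculated property are invented for illustration:

    using System;

    public class Customer
    {
        public Guid Id { get; private set; }
        public bool IsPreferred { get; private set; }
        public bool IsDelinquent { get; private set; }

        // Pre-calculated instead of looping over an Orders collection.
        public decimal UnbilledOrdersTotal { get; private set; }

        public void MakePreferred()
        {
            // The whole rule can be checked and applied within this one entity -
            // it is its own consistency boundary; no other aggregates are loaded.
            if (!IsDelinquent)
                IsPreferred = true;
        }

        public void MarkDelinquent()
        {
            IsDelinquent = true;
            IsPreferred = false; // revert preferred status, as discussed above
        }
    }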

Persistence for command processing

Given that the database used for command processing is not used for querying, and that most (if not all) commands contain the IDs of the rows they’re going to affect, do we really need to have a column for every single domain object property? What if we just serialized the domain entity and put it into a single column, and had another column containing the ID? This sounds quite similar to key-value storage that is available in the various cloud providers. In which case, would you really need an object-relational mapper to persist to this kind of storage?

You could also pull out an additional property per piece of data where you’d want the “database” to enforce uniqueness.
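One way to picture this kind of persistence – a deliberately naive sketch, with made-up table and type names, not a recommendation for production:

    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Runtime.Serialization.Formatters.Binary;

    // One column for the ID, one for the serialized entity.
    // Columns needing uniqueness enforcement could be pulled out alongside these.
    public class SerializedEntityStore<T> where T : class
    {
        private readonly string connectionString;
        private readonly string tableName;

        public SerializedEntityStore(string connectionString, string tableName)
        {
            this.connectionString = connectionString;
            this.tableName = tableName;
        }

        public void Save(Guid id, T entity)
        {
            byte[] blob;
            using (var stream = new MemoryStream())
            {
                new BinaryFormatter().Serialize(stream, entity);
                blob = stream.ToArray();
            }

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "UPDATE " + tableName + " SET Data = @data WHERE Id = @id", connection))
            {
                command.Parameters.AddWithValue("@id", id);
                command.Parameters.AddWithValue("@data", blob);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }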

I’m not suggesting that you do this in all cases – rather just trying to get you to rethink some basic assumptions.

Let me reiterate

How you process the commands is an implementation detail of CQRS.

Keeping the query store in sync

After the command-processing autonomous component has decided to accept a command, modifying its persistent store as needed, it publishes an event notifying the world about it. This event often is the “past tense” of the command submitted:

MakeCustomerPreferredCommand -> CustomerHasBeenMadePreferredEvent

The publishing of the event is done transactionally together with the processing of the command and the changes to its database. That way, any kind of failure on commit will result in the event not being sent. This is something that should be handled by default by your message bus, and if you’re using MSMQ as your underlying transport, requires the use of transactional queues.

The autonomous component which processes those events and updates the query data store is fairly simple, translating from the event structure to the persistent view model structure. I suggest having an event handler per view model class (AKA per table).
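For example, the event and its view-model-updating handler could be as small as this – again, the class and table names are invented for illustration:

    using System;
    using System.Data.SqlClient;

    // The "past tense" event published after the command was accepted.
    public class CustomerHasBeenMadePreferredEvent
    {
        public Guid CustomerId { get; set; }
    }

    // One handler per view model table: it only translates the event
    // into an update of the flat, view-shaped row.
    public class CustomerInfoViewUpdater
    {
        private readonly string connectionString;

        public CustomerInfoViewUpdater(string connectionString)
        {
            this.connectionString = connectionString;
        }

        public void Handle(CustomerHasBeenMadePreferredEvent e)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "UPDATE CustomerInfoView SET IsPreferred = 1 WHERE CustomerId = @id", connection))
            {
                command.Parameters.AddWithValue("@id", e.CustomerId);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }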

Here’s the picture of all the pieces again:

[Diagram: CQRS architecture – all the pieces]

Bounded Contexts

While CQRS touches on many pieces of software architecture, it is still not at the top of the food chain. CQRS, if used, is employed within a bounded context (DDD) or a business component (SOA) – a cohesive piece of the problem domain. The events published by one BC are subscribed to by other BCs, each updating their query and command data stores as needed.

The UIs of the CQRS-based BCs can be “mashed up” in a single application, providing users a single composite view of all parts of the problem domain. Composite UI frameworks are very useful for these cases.

Summary

CQRS is about coming up with an appropriate architecture for multi-user collaborative applications. It explicitly takes into account factors like data staleness and volatility and exploits those characteristics for creating simpler and more scalable constructs.

One cannot truly enjoy the benefits of CQRS without considering the user-interface, making it capture user intent explicitly. When taking into account client-side validation, command structures may be somewhat adjusted. Thinking through the order in which commands and events are processed can lead to notification patterns which make returning errors unnecessary.

While the result of applying CQRS to a given project is a more maintainable and performant code base, this simplicity and scalability require understanding the detailed business requirements and are not the result of any technical “best practice”. If anything, we can see a plethora of approaches to apparently similar problems being used together – data readers and domain models, one-way messaging and synchronous calls.

Although this blog post is over 3000 words (a record for this blog), I know that it doesn’t go into enough depth on the topic (it takes about 3 days out of the 5 of my Advanced Distributed Systems Design course to cover everything in enough depth). Still, I hope it has given you the understanding of why CQRS is the way it is and possibly opened your eyes to other ways of looking at the design of distributed systems.

Questions and comments are most welcome.



Progressive .NET Wrap-up

Monday, September 7th, 2009

So, I’ve gotten back from a most enjoyable couple of days in Sweden where I gave two half-day tutorials, the first being the SOA and UI composition talk I gave at the European Virtual ALT.NET meeting (which you can find online here) and the other on DDD in enterprise apps (the first time I’ve done this talk).

I’ve gotten some questions about my DDD presentation there based on Aaron Jensen’s pictures:

[Photo: presenting the CQS slide, from Aaron Jensen’s pictures]

Yes – I talk with my hands. All the time.

That slide is quite an important one – I talked about it for at least 2 hours.

Here it is again, this time in full:

[Image: the CQS slide, in full]

You may notice that the nice clean layered abstraction that the industry has gotten so comfortable with doesn’t quite sit right when looking at it from this perspective. The reason for that is that this perspective takes into account physical distribution while layers don’t.

I’ll have some more posts on this topic as well as giving a session in TechEd Europe this November.

Oh – and please do feel free to already send your questions in.



Don’t Delete – Just Don’t

Tuesday, September 1st, 2009


After reading Ayende’s post advocating against “soft deletes” I felt that I should add a bit more to the topic, as some important business semantics were missing. While developers debate the pertinence of using an IsDeleted column in the database to mark deletion, and weigh how this relates to reporting and auditing concerns, the core domain concepts rarely get a mention. Let’s first understand the business scenarios we’re modeling, the why behind them, before delving into the how of implementation.

The real world doesn’t cascade

Let’s say our marketing department decides to delete an item from the catalog. Should all previous orders containing that item just disappear? And cascading farther, should all invoices for those orders be deleted as well? Going on, would we have to redo the company’s profit and loss statements?

Heaven forbid.

So, is Ayende wrong? Do we really need soft deletes after all?

On the one hand, we don’t want to leave our database in an inconsistent state with invoices pointing to non-existent orders, but on the other hand, our users did ask us to delete an entity.

Or did they?

When all you have is a hammer…

We’ve been exposing users to entity-based interfaces with “create, read, update, delete” semantics in them for so long that they have started presenting us requirements using that same language, even though it’s an extremely poor fit.

Instead of accepting “delete” as a normal user action, let’s go into why users “delete” stuff, and what they actually intend to do.

The guys in marketing can’t actually make all physical instances of a product disappear – nor would they want to. In talking with these users, we might discover that their intent is quite different:

“What I mean by ‘delete’ is that the product should be discontinued. We don’t want to sell this line of product anymore. We want to get rid of the inventory we have, but not order any more from our supplier. The product shouldn’t appear any more when customers do a product search or category listing, but the guys in the warehouse will still need to manage these items in the interim. It’s much shorter to just say ‘delete’ though.”

There seem to be quite a few interesting business rules and processes there, but nothing that looks like it could be solved by a single database column.

Model the task, not the data

Looking back at the story our friend from marketing told us, his intent is to discontinue the product – not to delete it in any technical sense of the word. As such, we probably should provide a more explicit representation of this task in the user interface than just selecting a row in some grid and clicking the ‘delete’ button (and “Are you sure?” isn’t it).

As we broaden our perspective to more parts of the system, we see this same pattern repeating:

Orders aren’t deleted – they’re cancelled. There may also be fees incurred if the order is canceled too late.

Employees aren’t deleted – they’re fired (or possibly retired). A compensation package often needs to be handled.

Jobs aren’t deleted – they’re filled (or their requisition is revoked).

In all cases, the thing we should focus on is the task the user wishes to perform, rather than on the technical action to be performed on one entity or another. In almost all cases, more than one entity needs to be considered.

Statuses

In all the examples above, what we see is a replacement of the technical action ‘delete’ with a relevant business action. At the entity level, instead of having a (hidden) technical WasDeleted status, we see an explicit business status that users need to be aware of.

The manager of the warehouse needs to know that a product is discontinued so that they don’t order any more stock from the supplier. In today’s world of retail with Vendor Managed Inventory, this often happens together with a modification to an agreement with the vendor, or possibly a cancellation of that agreement.

This isn’t just a case of transactional or reporting boundaries – users in different contexts need to see different things at different times as the status changes to reflect the entity’s place in the business lifecycle. Customers shouldn’t see discontinued products at all. Warehouse workers should, that is, until the corresponding Stock Keeping Unit (SKU) has been revoked (another status) after we’ve sold all the inventory we wanted (and maybe returned the rest back to the supplier).
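A small sketch of what an explicit business status might look like in code – the names are invented for illustration and your domain will have its own:

    public enum ProductStatus
    {
        Active,
        Discontinued,  // hidden from customers, still visible to the warehouse
        Revoked        // SKU revoked - no longer managed anywhere
    }

    public class Product
    {
        public ProductStatus Status { get; private set; }

        // The explicit business task, instead of a technical "delete".
        public void Discontinue()
        {
            Status = ProductStatus.Discontinued;
            // In a fuller model this is also where supplier reordering would stop
            // and the vendor agreement would be adjusted or cancelled.
        }

        public void RevokeSku()
        {
            Status = ProductStatus.Revoked;
        }
    }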

Rules and Validation

When looking at the world through over-simplified-delete-glasses, we may consider the logic dictating when we can delete to be quite simple: do some role-based-security checks, check that the entity exists, delete. Piece of cake.

The real world is a bigger, more complicated cake.

Let’s consider deleting an order, or rather, canceling it. On top of the regular security checks, we’ve got some rules to consider:

If the order has already been delivered, check if the customer isn’t happy with what they got, and go about returning the order.

If the order contained products “made to order”, charge the customer for a portion (or all) of the order (based on other rules).

And more…

Deciding what the next status should be may very well depend on the current business status of the entity. Deciding if that change of state is allowed is context and time specific – at one point in time the task may have been allowed, but later not. The logic here is not necessarily entirely related to the entity being “deleted” – there may be other entities which need to be checked, and whose status may also need to be changed as well.

Summary

I know that some of you are thinking, “my system isn’t that complex – we can just delete and be done with it”.

My question to you would be, have you asked your users why they’re deleting things? Have you asked them about additional statuses and rules dictating how entities move as groups between them? You don’t want the success of your project to be undermined by that kind of unfounded assumption, do you?

The reason we’re given budgets to build business applications is because of the richness in business rules and statuses that ultimately provide value to users and a competitive advantage to the business. If that value wasn’t there, wouldn’t we be serving our users better by just giving them Microsoft Access?

In closing, given that you’re not giving your users MS Access, don’t think about deleting entities. Look for the reason why. Understand the different statuses that entities move between. Ask which users need to care about which status. I know it doesn’t show up as nicely on your resume as “3 years WCF”, but “saved the company $4 million in wasted inventory” does speak volumes.

One last sentence: Don’t delete. Just don’t.



Object Relational Mapping Sucks!

Wednesday, June 25th, 2008

For reporting, that is.

And doesn’t handle concurrency!

Unless you don’t expose setters.

I guess it depends, doesn’t it?

Well, that was Ted’s assertion in his recent Pragmatic Architecture column on data access.

But, “it depends” doesn’t get the system built, does it?

So, here are some rules for using o/r mapping that will get you 99% of the way there.

Yes, you heard me.

Rules.

They do not depend.

If you’re doing something significantly bigger than enterprise-scale development, and you are already doing this, and it isn’t enough, give me a call. Here we go.

  1. No reporting.

    I mean it. Don’t report off of live data.
    This isn’t just an o/r mapping thing.
    Users can tolerate some, if not quite a lot of latency.

    And it’s not like objects are even used. It’s just rolled up data. Not a single behaviour for miles.

  2. Don’t expose setters

    You want multiple users sharing and collaborating on data, right? Then don’t force them to either overwrite each other’s data, or throw away their own. There is one simple way to avoid that: get an object, call a method. Once the object has the most up-to-date data, pass all the client data in via a method call. The object will decide if it’s valid, from a business perspective as well, and then update the appropriate fields (a sketch appears after this list).

    Now your DBAs can vertically partition tables accordingly, and improve throughput. After that, you can increase the isolation level, to improve safety, without hurting throughput.

    This will also keep your logic encapsulated, bringing you closer to a true Domain Model.

    If your O/R mapping tool requires you to have setters on your domain classes, hide those from your service layer behind an interface.

  3. Grids are like reports.

    No o/r mapping required there either. While you probably won’t be showing grids of yesterday’s data to users in an interactive environment, it’s still just data – no behaviour.

    However, users should NOT update data in those grids. This gets back to rule 2. Have users select a specific task they want to perform, pop open a window, and have them do it there. Change customer address. Discount order. You get the picture. That way you’ll know what method to call on those objects you designed in rule 2.
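As promised in rule 2, here’s a rough sketch of the get-an-object-call-a-method style – the class and method names are made up for illustration:

    using System;

    public class Customer
    {
        // No public setters - all changes go through intention-revealing methods.
        public string Street { get; private set; }
        public string City { get; private set; }
        public string PostalCode { get; private set; }

        // The object receives the client's data in one call and decides what to update.
        public void ChangeAddress(string street, string city, string postalCode)
        {
            if (string.IsNullOrEmpty(street) || string.IsNullOrEmpty(city))
                throw new ArgumentException("Street and city are required.");

            Street = street;
            City = city;
            PostalCode = postalCode;
        }
    }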

Before wrapping up, one small thing.

You can use an O/R mapping tool to do reporting, just, for the love of Bill, don’t use the same classes you designed for your OLTP domain model. But, just because you can, doesn’t necessarily mean you should. DataSets and DataTables are probably just as viable a solution.



7 Simple Questions for Service Selection

Friday, May 16th, 2008

“So, which services do I need?”

This innocuous question comes up a lot. Usually I get this question after a short problem domain description. One of these came up on the nServiceBus discussion groups. Ayende took it and ran with it, turning it into a nice blog post, An exercise in designing SOA systems. I’ve been meaning to write something myself. Bill put up a response already in his Service Granularity Example. So, I’m late to the party, again, but here we go.

It’s almost impossible to know, right away, which services are appropriate.

So, I’m going to focus more on the process of getting there, rather than describing the solution itself.

The domain deals with a placement agency placing physicians in positions at hospitals.

1. So, what does it actually do?

In Ayende’s post, he describes several services, but I’d rather look at them as use cases: registering an open position, registering a candidate, verifying their credentials, etc. It’s worth going through this requirements process. It doesn’t necessarily translate immediately to services, but there’s value in it.

2. What does it do it to?

We should also be looking at the data model, an entity relationship diagram (ERD), where we see that we may have placed a certain physician at a number of positions. It’s also important for us to know under which circumstances a physician finished their employment at a previous position before, say, trying to place them at a position in the same hospital or chain of hospitals. Don’t go thinking that this is what the database schema will look like – it’s all about understanding connections between various bits of data.

3. When does that happen?

The next step is to map the use cases above to the entities in the ERD – which entity is used in which use case. It’s also important to identify which entities (or, even more importantly, which specific fields of entities) are used in a read-only fashion within a given use case. For instance, when registering a new position, we’ll want to check that against other open positions in the same hospital so we don’t end up registering the same position twice. Also, we might want to suggest verified physicians whose credentials match the position’s requirements. Data we wouldn’t be interested in might be which other physicians we placed at that hospital.

4. What just happened?

Another valuable perspective on the problem domain is the business process view – what are the interesting business events in the system and how they unfold over time. For instance, physician registered, position opened, physician’s credentials verified, and physician placed in position (or position filled by physician) are events that describe a different business perspective than use cases.

5. How do I decide?

Once we know what events there are, we can start looking at what kind of decisions we might want to make when those events occur and what data we’d need to make those decisions. These decisions may be as simple as updating a database or sending an email to a user. They also may include more advanced logic like when the profitability of an agreement with a specific hospital chain changes, prefer placing physicians in positions in that chain over others.

6. How do I deal with all this information?

After we have all of this information, we can start looking for cohesive bunching across all of these axes using these rules:

  • Data that is modified by a use case gets published as an event.
  • Data that is required by a use case for read-only purposes, arrives as the result of subscribing to some event.

Look for rules that differentiate behaviour based on the properties of data. Look for a correlation to some business concept. For instance, physicians probably won’t be changing their specialization, and open positions often deal with a certain specialization. Therefore, specific data instances tied to two different specializations can be said to be loosely coupled.
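As a rough illustration of those two rules – the message and interface names below are invented for this example:

    using System;

    // Published by the service that owns physician data when a use case modifies it.
    public class PhysicianCredentialsVerifiedEvent
    {
        public Guid PhysicianId { get; set; }
        public string Specialization { get; set; }
    }

    // The placement side only needs this data read-only, so it keeps its own copy,
    // updated by subscribing to the event rather than querying the other service.
    public class PhysicianCredentialsVerifiedHandler
    {
        private readonly IVerifiedPhysicianCache cache; // hypothetical local read-only store

        public PhysicianCredentialsVerifiedHandler(IVerifiedPhysicianCache cache)
        {
            this.cache = cache;
        }

        public void Handle(PhysicianCredentialsVerifiedEvent e)
        {
            cache.MarkVerified(e.PhysicianId, e.Specialization);
        }
    }

    public interface IVerifiedPhysicianCache
    {
        void MarkVerified(Guid physicianId, string specialization);
    }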

7. Which property slices across the domain?

Even though the ERD may not have made it clear, and the use cases didn’t show any particular break-down, nor did the events call out this point, the key to finding the way a business domain decomposes into services lies in decoupling specific data instances.

Actually, at this point we can clump autonomous components (mere technical bits) that handle a single message, into more granular business components.

If you think about it, it makes a lot of sense. The kind of credential checking you’d do for physicians specializing in brain surgery would likely be different than for general practitioners. The kind of information you’d store would, therefore, also be different.

But, which services do I need?

Quite frankly, I don’t have enough information to know.

But if we had continued this conversation, going through issues like transactional consistency, availability requirements, and other non-functional issues, we could have gotten there.

If there’s one thing that I hope you got out of this, it’s that the questions are what’s important. The iterative process of looking at the problem domain from various perspectives, incorporating the new-found knowledge, and asking more questions is what leads us to a solution. But we don’t stop there. We keep looking for characteristics which split services apart into business components, and for consistency requirements that bring autonomous components together into services.

It’s not easy, but by focusing on these simple questions, you can get to a coherent service oriented architecture.



How to create fully encapsulated Domain Models

Friday, February 29th, 2008

Update: The new and improved solution is now available: Domain Events, Take 2.

Most people getting started with DDD and the Domain Model pattern get stuck on this. For a while I tried answering this on the discussion groups, but here we have a nice example that I can point to next time.

The underlying problem I’ve noticed over the past few years is that developers are still thinking in terms of querying when they need more data. When moving to the Domain Model pattern, you have to “simply” represent the domain concepts in code – in other words, see things you aren’t used to seeing. I’ll highlight that part in the question below so that you can see where I’m going to go with this in my answer:

I have an instance where I believe I need access to a service or repository from my entity to evaluate a business rule but I’m using NHibernate for persistence so I don’t have a real good way to inject services into my entity. Can I get some viewpoints on just passing the services to my entity vs. using a facade?

Let me explain my problem to provide more context to the problem.

The core domain revolves around renting video games. I am working on a new feature to allow customers to trade in old video games. Customers can trade in multiple games at a time so we have a TradeInCart entity that works similar to most shopping carts that everybody is familiar with. However there are several rules that limit the items that can be placed into the TradeInCart. The core rules are:

1. Only 3 games of the same title can be added to the cart.
2. The total number of items in the cart cannot exceed 10.
3. No games can be added to the cart that the customer had previously reported lost with regards to their rental membership.
    a. If an attempt is made to add a previously reported lost game, then we need to log a BadQueueStatusAddAttempt to the persistence store.

So the first 2 rules are easily handled internally by the cart through an Add operation. Sample cart interface is below.

class TradeInCart{
    Account Account{get;}
    LineItem Add(Game game);
    ValidationResult CanAdd(Game game);
    IList<LineItems> LineItems{get;}
}

However the #3 rule is much more complicated and can’t be handled internally by the cart, so I have to depend on external services. Splitting up the validation logic for a cart add operation doesn’t seem very appealing to me at all. So I have the option of passing in a repository to get the previously reported lost games and a service to log bad attempts. This makes my cart interface ugly real quick.

class TradeInCart{
    Account Account{get;}
    LineItem Add(
        Game game,
        IRepository<QueueHistory> repository,
        LoggingService service);

    ValidationResult CanAdd(
        Game game,
        IRepository<QueueHistory> repository,
        LoggingService service);

    IList<LineItems> LineItems{get;}
}

The alternative option is to have a TradeInCartFacade that handles the validations and adding the items to the cart. The façade can have the repository and services injected through DI which is nice, but the big negative is that the cart ends up totally anemic.

Any thought on this would be greatly appreciated.

Thanks,
Jesse

As I highlighted above, the thing that will help you with your business rules is to introduce the Customer object (that you probably already have) with the property GamesReportedLost (an IList<Game>). Your TradeInCart would have a reference to the Customer object and could then check the rule in the Add method.

Before I go into the code, it looks like your Account object might be used the same way, but your description of the domain doesn’t mention accounts, so I’m going to assume that that’s unrelated for now:

public class Customer{

    /* other properties and methods */

    private IList<Game> gamesReportedLost;
    public virtual IList<Game> GamesReportedLost
    {
        get
        {
            return gamesReportedLost;
        }
        set
        {
            gamesReportedLost = value;
        }
    }
}

Keep in mind that the GamesReportedLost is a persistent property of Customer. Every time a customer reports a game lost, this list needs to be kept up to date. Here’s the TradeInCart now:

public class TradeInCart
{
    /* other properties and methods */

    private Customer customer;
    public virtual Customer Customer
    {
        get { return customer; }
        set { customer = value; }
    }

    private IList<LineItem> lineItems;
    public virtual IList<LineItem> LineItems
    {
        get { return lineItems; }
        set { lineItems = value; }
    }

    public void Add(Game game)
    {
        if (lineItems.Count >= CONSTANTS.MaxItemsPerCart)
        {
            FailureEvents.RaiseCartIsFullEvent();
            return;
        }

        if (NumberOfGameAlreadyInCart(game) >=
            CONSTANTS.MaxNumberOfSameGamePerCart)
        {
            FailureEvents
              .RaiseMaxNumberOfSameGamePerCartReachedEvent();
            return;
        }

        if (customer.GamesReportedLost.Contains(game))
            FailureEvents.RaiseGameReportedLostEvent();
        else
            this.lineItems.Add(new LineItem(game));
    }

    private int NumberOfGameAlreadyInCart(Game game)
    {
        int result = 0;

        foreach (LineItem li in this.lineItems)
            if (li.Game == game)
                result++;

        return result;
    }
}

public static class FailureEvents
{
    public static event EventHandler GameReportedLost;
    public static void RaiseGameReportedLostEvent()
    {
        if (GameReportedLost != null)
            GameReportedLost(null, null);
    }

    public static event EventHandler CartIsFull;
    public static void RaiseCartIsFullEvent()
    {
        if (CartIsFull != null)
            CartIsFull(null, null);
    }

    public static event EventHandler MaxNumberOfSameGamePerCartReached;
    public static void RaiseMaxNumberOfSameGamePerCartReachedEvent()
    {
        if (MaxNumberOfSameGamePerCartReached != null)
            MaxNumberOfSameGamePerCartReached(null, null);
    }
}

Your service layer class that calls the Add method of TradeInCart would first subscribe to the relevant events in FailureEvents. If one of those events is raised, it would do the necessary logging, external system calls, etc.

As you can see, the API of TradeInCart doesn’t need to make use of any external repositories, nor do you need to inject any other external dependencies in.

One thing I didn’t do in the above code to keep it “short” is to define the relevant custom EventArgs for bubbling up the information as to which game was reported lost, or which game the cart already has 3 of. That is something that definitely should be done so that the service layer can pass this information back to the client.
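Such an EventArgs class might look something like this – a sketch only, the exact shape is up to you:

    using System;

    // Carries the offending game up to the service layer so it can report back to the client.
    public class GameReportedLostEventArgs : EventArgs
    {
        private readonly Game game;

        public GameReportedLostEventArgs(Game game)
        {
            this.game = game;
        }

        public Game Game
        {
            get { return game; }
        }
    }

    // The event declaration would then become:
    // public static event EventHandler<GameReportedLostEventArgs> GameReportedLost;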

Here’s a look at Service Layer code:

public class AddGameToCartMessageHandler :
    BaseMessageHandler<AddGameToCartMessage>
{
    public override void Handle(AddGameToCartMessage m)
    {
        using (ISession session = SessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            TradeInCart cart = session.Get<TradeInCart>(m.CartId);
            Game g = session.Get<Game>(m.GameId);

            Domain.FailureEvents.GameReportedLost +=
              gameReportedLost;
            Domain.FailureEvents.CartIsFull +=
              cartIsFull;
            Domain.FailureEvents.MaxNumberOfSameGamePerCartReached +=
              maxNumberOfSameGamePerCartReached;

            cart.Add(g);

            Domain.FailureEvents.GameReportedLost -=
              gameReportedLost;
            Domain.FailureEvents.CartIsFull -=
              cartIsFull;
            Domain.FailureEvents.MaxNumberOfSameGamePerCartReached -=
              maxNumberOfSameGamePerCartReached;

            tx.Commit();
        }
    }

    private EventHandler gameReportedLost = delegate {
          Bus.Return((int)ErrorCodes.GameReportedLost);
        };

    private EventHandler cartIsFull = delegate {
          Bus.Return((int)ErrorCodes.CartIsFull);
        };

    private EventHandler maxNumberOfSameGamePerCartReached = delegate {
          Bus.Return((int)ErrorCodes.MaxNumberOfSameGamePerCartReached);
        };
}

It’s important to remember to clean up your event subscriptions so that your Service Layer objects get garbage collected. This is one of the primary causes of memory leaks when using static events in your Domain Model. I’m hoping to find ways to use lambdas to decrease this repetitive coding pattern. You might be thinking to yourself that non-static events on your Domain Model objects would be easier, since those objects would get collected, freeing up the service layer objects for collection as well. There’s just one small problem:

The problem is that if an event is raised by a child (or grandchild object), the service layer object may not even know that that grandchild was involved and, as such, would not have subscribed to that event. The only way the service layer could work was by knowing how the Domain Model worked internally – in essence, breaking encapsulation.

If you’re thinking that using exceptions would be better, you’d be right in thinking that that won’t break encapsulation, and that you wouldn’t need all that subscribe/unsubscribe code in the service layer. The only problem is that the Domain Model needs to know that the service layer had a default catch clause so that it wouldn’t blow up. Otherwise, the service layer (or WCF, or nServiceBus) may end up flagging that message as a poison message (Read more about poison messages). You’d also have to be extremely careful about in which environments you used your Domain Model – in other words, your reuse is shot.

Conclusion

I never said it would be easy 🙂

However, the solution is simple (not complex). The same patterns occur over and over. The design is consistent. By focusing on the dependencies we now have a domain model that is reusable across many environments (server, client, sql clr, silverlight). The domain model is also testable without resorting to any fancy mock objects.

One closing comment – while I do my best to write code that is consistent with production quality environments, this code is more about demonstrating design principles. As such, I focus more on the self-documenting aspects of the code and have elided many production concerns.

Do you have a better solution?

Something that I haven’t considered?

Do me a favour – leave me a comment. Tell me what you think.



Sagas and Unit Testing – Business Process Verification Made Easy

Monday, February 4th, 2008

Sagas have always been designed with unit testing in mind. By keeping them disconnected from any communications or persistence technology, it was my belief that it should be fairly easy to use mock objects to test them. I’ve heard back from projects using nServiceBus this way that they were pleased with their ability to test them, and thought all was well.

Not so.

The other day I sat down to implement and test a non-trivial business process, and the testing was far from easy. Now as developers go, I’m not great, or an expert on unit testing or TDD, but I’m above average. It should not have been this hard. And I tried doing it with Rhino.Mocks, TypeMock, and finally Moq. It seemed like I was in a no-man’s-land between trying to do state-based testing and setting expectations on the messages being sent (as well as correct values in those messages) – nothing flowed.

Until I finally stopped trying to figure out how to test, and focused on what needed to be tested. I mean, it’s not like I was trying to build a generic mocking framework like Daniel.

Here’s an example business process, or actually, part of one, and then we’ll see how that can be tested. By the way, there will be a post coming soon which describes how we go about analysing a system, coming up with these message types, and how these sagas come into being, so stay tuned. Either that, or just come to my tutorial at QCon.

On with the process:

1. When we receive a CreateOrderMessage, whose “Completed” flag is true, we’ll send 2 AuthorizationRequestMessages to internal systems (for managers to authorize the order), one OrderStatusUpdatedMessage to the caller with a status “Received”, and a TimeoutMessage to the TimeoutManager requesting to be notified – so that the process doesn’t get stuck if one or both messages don’t get a response.

2. When we receive the first AuthorizationResponseMessage, we notify the initiator of the Order by sending them a OrderStatusUpdatedMessage with a status “Authorized1”.

3. When we get “timed out” from the TimeoutManager, we check if at least one AuthorizationResponseMessage has arrived, and if so, publish an OrderAcceptedMessage, and notify the initiator (again via the OrderStatusUpdatedMessage) this time with a status of “Accepted”.

And here’s the test:

    public class OrderSagaTests 
    { 
        private OrderSaga orderSaga = null; 
        private string timeoutAddress; 
        private Saga Saga;     

        [SetUp] 
        public void Setup() 
        { 
            timeoutAddress = "timeout"; 
            Saga = Saga.Test(out orderSaga, timeoutAddress); 
        }     

        [Test] 
        public void OrderProcessingShouldCompleteAfterOneAuthorizationAndOneTimeout() 
        { 
            Guid externalOrderId = Guid.NewGuid(); 
            Guid customerId = Guid.NewGuid(); 
            string clientAddress = "client";     

            CreateOrderMessage createOrderMsg = new CreateOrderMessage(); 
            createOrderMsg.OrderId = externalOrderId; 
            createOrderMsg.CustomerId = customerId; 
            createOrderMsg.Products = new List<Guid>(new Guid[] { Guid.NewGuid() }); 
            createOrderMsg.Amounts = new List<float>(new float[] { 10.0F }); 
            createOrderMsg.Completed = true;     

            TimeoutMessage timeoutMessage = null;     

            Saga.WhenReceivesMessageFrom(clientAddress) 
                .ExpectSend<AuthorizeOrderRequestMessage>( 
                    delegate(AuthorizeOrderRequestMessage m) 
                    { 
                        return m.SagaId == orderSaga.Id; 
                    }) 
                .ExpectSend<AuthorizeOrderRequestMessage>( 
                    delegate(AuthorizeOrderRequestMessage m) 
                    { 
                        return m.SagaId == orderSaga.Id; 
                    }) 
                .ExpectSendToDestination<OrderStatusUpdatedMessage>( 
                    delegate(string destination, OrderStatusUpdatedMessage m) 
                    { 
                        return m.OrderId == externalOrderId && destination == clientAddress; 
                    }) 
                .ExpectSendToDestination<TimeoutMessage>( 
                    delegate(string destination, TimeoutMessage m) 
                    { 
                        timeoutMessage = m; 
                        return m.SagaId == orderSaga.Id && destination == timeoutAddress; 
                    }) 
                .When(delegate { orderSaga.Handle(createOrderMsg); });     

            Assert.IsFalse(orderSaga.Completed);     

            AuthorizeOrderResponseMessage response = new AuthorizeOrderResponseMessage(); 
            response.ManagerId = Guid.NewGuid(); 
            response.Authorized = true; 
            response.SagaId = orderSaga.Id;     

            Saga.ExpectSendToDestination<OrderStatusUpdatedMessage>( 
                    delegate(string destination, OrderStatusUpdatedMessage m) 
                    { 
                        return (destination == clientAddress && 
                                m.OrderId == externalOrderId && 
                                m.Status == OrderStatus.Authorized1); 
                    }) 
                .When(delegate { orderSaga.Handle(response); });     

            Assert.IsFalse(orderSaga.Completed);     

            Saga.ExpectSendToDestination<OrderStatusUpdatedMessage>( 
                    delegate(string destination, OrderStatusUpdatedMessage m) 
                    { 
                        return (destination == clientAddress && 
                                m.OrderId == externalOrderId && 
                                m.Status == OrderStatus.Accepted); 
                    }) 
                .ExpectPublish<OrderAcceptedMessage>( 
                    delegate(OrderAcceptedMessage m) 
                    { 
                        return (m.CustomerId == customerId); 
                    }) 
                .When(delegate { orderSaga.Timeout(timeoutMessage.State); });     

            Assert.IsTrue(orderSaga.Completed); 
        } 
    }

You might notice that this style is a bit similar to the fluent testing found in Rhino Mocks. That’s not coincidence. It actually makes use of Rhino Mocks internally. The thing that I discovered was that in order to test these sagas, you don’t need to actually see a mocking framework. All you should have to do is express how messages get sent, and under what criteria those messages are valid.

If you’re wondering what the OrderSaga looks like, you can find the code right here. It’s not a complete business process implementation, but it’s enough to understand what one would look like:

using System; 
using System.Collections.Generic; 
using ExternalOrderMessages; 
using NServiceBus.Saga; 
using NServiceBus; 
using InternalOrderMessages;     

namespace ProcessingLogic 
{ 
    [Serializable] 
    public class OrderSaga : ISaga<CreateOrderMessage>, 
        ISaga<AuthorizeOrderResponseMessage>, 
        ISaga<CancelOrderMessage> 
    { 
        #region config info     

        [NonSerialized] 
        private IBus bus; 
        public IBus Bus 
        { 
            set { this.bus = value; } 
        }     

        [NonSerialized] 
        private Reminder reminder; 
        public Reminder Reminder 
        { 
            set { this.reminder = value; } 
        }     

        #endregion     

        private Guid id; 
        private bool completed; 
        public string clientAddress; 
        public Guid externalOrderId; 
        public int numberOfPendingAuthorizations = 2; 
        public List<CreateOrderMessage> orderItems = new List<CreateOrderMessage>();     

        public void Handle(CreateOrderMessage message) 
        { 
            this.clientAddress = this.bus.SourceOfMessageBeingHandled; 
            this.externalOrderId = message.OrderId;     

            this.orderItems.Add(message);     

            if (message.Completed) 
            { 
                for (int i = 0; i < this.numberOfPendingAuthorizations; i++) 
                { 
                    AuthorizeOrderRequestMessage req = new AuthorizeOrderRequestMessage(); 
                    req.SagaId = this.id; 
                    req.OrderData = orderItems;     

                    this.bus.Send(req); 
                } 
            }     

            this.SendUpdate(OrderStatus.Recieved);     

            this.reminder.ExpireIn(message.ProvideBy - DateTime.Now, this, null); 
        }     

        public void Timeout(object state) 
        { 
            if (this.numberOfPendingAuthorizations <= 1) 
                this.Complete(); 
        }     

        public Guid Id 
        { 
            get { return id; } 
            set { id = value; } 
        }     

        public bool Completed 
        { 
            get { return completed; } 
        }     

        public void Handle(AuthorizeOrderResponseMessage message) 
        { 
            if (message.Authorized) 
            { 
                this.numberOfPendingAuthorizations--;     

                if (this.numberOfPendingAuthorizations == 1) 
                    this.SendUpdate(OrderStatus.Authorized1); 
                else 
                { 
                    this.SendUpdate(OrderStatus.Authorized2); 
                    this.Complete(); 
                } 
            } 
            else 
            { 
                this.SendUpdate(OrderStatus.Rejected); 
                this.Complete(); 
            } 
        }     

        public void Handle(CancelOrderMessage message) 
        {     

        }     

        private void SendUpdate(OrderStatus status) 
        { 
            OrderStatusUpdatedMessage update = new OrderStatusUpdatedMessage(); 
            update.OrderId = this.externalOrderId; 
            update.Status = status;     

            this.bus.Send(this.clientAddress, update); 
        }     

        private void Complete() 
        { 
            this.completed = true;     

            this.SendUpdate(OrderStatus.Accepted);     

            OrderAcceptedMessage accepted = new OrderAcceptedMessage(); 
            accepted.Products = new List<Guid>(this.orderItems.Count); 
            accepted.Amounts = new List<float>(this.orderItems.Count);     

            this.orderItems.ForEach(delegate(CreateOrderMessage m) 
                                        { 
                                            accepted.Products.AddRange(m.Products); 
                                            accepted.Amounts.AddRange(m.Amounts); 
                                            accepted.CustomerId = m.CustomerId; 
                                        });     

            this.bus.Publish(accepted); 
        } 
    } 
}

All this code is online in the subversion repository under /Samples/Saga.

Questions, comments, and general thoughts are always appreciated.



Performant and Explicit Domain Models

Monday, June 4th, 2007

Some Technical Difficulties

Ayende and I had an email conversation that started with me asking what would happen if I added an Order to a Customer’s “Orders” collection, when that collection was lazy loaded. My question was whether the addition of an element would result in NHibernate hitting the database to fill that collection. His answer was a simple “yes”. In the case where a customer can have many (millions) of Orders, that’s just not a feasible solution. The technical solution was simple – just define the Orders collection on the Customer as “inverse=true”, and then to save a new Order, just write:

session.Save( new Order(myCustomer) );

Although it works, it’s not “DDD compliant” 🙂
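
To make that more concrete, here’s a rough sketch of what the two classes might look like with the collection mapped as inverse – the shapes below are my assumption for illustration, not code from that email conversation:

using System;
using System.Collections.Generic;

public class Customer
{
    // Mapped with inverse="true": NHibernate treats the Order as the owner of
    // the association, so it never has to fill this (possibly huge) collection
    // just because a new Order was created.
    public virtual Guid Id { get; set; }
    public virtual IList<Order> Orders { get; set; }
}

public class Order
{
    protected Order() { } // parameterless constructor for NHibernate proxying

    public Order(Customer customer) // the owning side holds the reference
    {
        this.Customer = customer;
    }

    public virtual Guid Id { get; set; }
    public virtual Customer Customer { get; set; }
}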

In Ayende’s post Architecting for Performance he quoted a part of our email conversation. The conclusion I reached was that in order to design performant domain models, you need to know the kinds of data volumes you’re dealing with. It affects both the internals and the API of the model – when you can assume cascading persistence, and when you can’t. It’s important to make these kinds of things explicit in the Domain Model’s API.

How do you make “transparent persistence” explicit?

The problem occurs around “transparent persistence”. If we were to assume that the Customer object added the Order object to its Orders collection, then we wouldn’t have to explicitly save the orders it creates, so we would write service layer code like this:

using (IDBScope scope = this.DbServices.GetScope(TransactionOption.On))
{
    IOrderCreatingCustomer c = this.DbServices.Get<IOrderCreatingCustomer>(message.CustomerId);
    c.CreateOrder(message.OrderAmount);

    scope.Complete();
}

On the other hand, if we designed our Domain Model around the million orders constraint, we would need to explicitly save the order, so we would write service layer code like this:

using (IDBScope scope = this.DbServices.GetScope(TransactionOption.On))
{
    IOrderCreatingCustomer c = this.DbServices.Get<IOrderCreatingCustomer>(message.CustomerId);
    IOrder o = c.CreateOrder(message.OrderAmount);
    this.DbServices.Save(o);

    scope.Complete();
}

But the question remains, how do we communicate these guidelines to service layer developers from the Domain Model? There are a number of ways, but it’s important to decide on one and use it consistently. Performance and correctness require it.

Solution 1: Explicitness via Return Type

The first way is a little subtle, but you can do it with the return type of the “CreateOrder” method call. In the case where the Domain Model wishes to communicate that it handles transparent persistence by itself, have the method return “void”. Where the Domain Model wishes to communicate that it will not handle transparent persistence, have the method return the Order object created.
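
As a sketch of that convention (the interface names and the IOrder type here are placeholders of mine, not from the actual Domain Model), the only difference between the two contracts is the return type of CreateOrder:

// A placeholder for the domain's Order abstraction.
public interface IOrder { }

// Transparent persistence: returning void tells the Service Layer there is
// nothing for it to save - the Domain Model handles it behind the root.
public interface ICustomerWithTransparentOrderPersistence
{
    void CreateOrder(decimal orderAmount);
}

// Explicit persistence: returning the Order is the Service Layer's cue
// that it must hand the object to its persistence services itself.
public interface ICustomerWithExplicitOrderPersistence
{
    IOrder CreateOrder(decimal orderAmount);
}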

Another way to communicate the fact that an Order has been created and needs to be saved is with events. There are two variations:

Solution 2: Explicitness via Events on Domain Objects

The first is to just define the event on the customer object and have the service layer subscribe to it. It’s pretty clear that when the service layer receives an “OrderCreatedThatRequiresSaving” event, it should save the order passed in the event arguments.
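
A minimal sketch of this first variation might look like the following – the event name follows the post, everything else is assumed for illustration:

using System;

public class Order
{
    public Order(Customer customer) { /* ...domain logic elided... */ }
}

public class OrderEventArgs : EventArgs
{
    private readonly Order order;
    public OrderEventArgs(Order order) { this.order = order; }
    public Order Order { get { return order; } }
}

public class Customer
{
    public event EventHandler<OrderEventArgs> OrderCreatedThatRequiresSaving;

    public void CreateOrder(decimal orderAmount)
    {
        Order newOrder = new Order(this); // order creation logic elided

        // Tell whoever is listening (typically the Service Layer) that
        // this object will not be persisted transparently.
        EventHandler<OrderEventArgs> handler = this.OrderCreatedThatRequiresSaving;
        if (handler != null)
            handler(this, new OrderEventArgs(newOrder));
    }
}

// Assumed Service Layer usage:
// customer.OrderCreatedThatRequiresSaving +=
//     delegate(object sender, OrderEventArgs e) { this.DbServices.Save(e.Order); };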

The second variation recognizes that the call to the customer object may come from some other domain object, and that the service layer doesn’t necessarily know everything that can happen as a result of calling some method on the aggregate root. The change of state resulting from that method call may permeate the entire object graph. If each object in the graph raises its own events, every calling object would have to propagate those events up to its parent – resulting in the same events being defined in multiple places, and each object having to know about everything its great-grandchild objects can do. That is clearly bad.

What [ThreadStatic] is for

So, the solution is to use thread-static events.

[Sidebar] Thread-static events are just static events defined on a static class, where each event’s backing delegate field has the ThreadStaticAttribute applied to it. This attribute matters in server-side scenarios where multiple threads will be running through the Domain Model at the same time: each thread gets its own copy of the field, which is the easiest thread-safe way to use static data.

Solution 3: Explicitness via Static Events

Each object raises the appropriate static event according to its logic. In our example, Customer would call:

DomainModelEvents.RaiseOrderCreatedThatRequiresSavingEvent(newOrder);

And the service layer would write:

DomainModelEvents.OrderCreatedThatRequiresSaving +=
    delegate(object sender, OrderEventArgs e) { this.DbServices.Save(e.Order); };

The advantage of this solution is that it requires minimal knowledge of the Domain Model for the Service Layer to correctly work with it. It also communicates that anything that doesn’t raise an event will be persisted transparently behind the appropriate root object.
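
Putting the raise and subscribe snippets together, a compilable sketch of such a static events class might look like this – the add/remove plumbing and the IOrder placeholder are my assumptions; only the member names come from the example above:

using System;

// Stand-in for the domain's Order abstraction.
public interface IOrder { }

public class OrderEventArgs : EventArgs
{
    private readonly IOrder order;
    public OrderEventArgs(IOrder order) { this.order = order; }
    public IOrder Order { get { return order; } }
}

public static class DomainModelEvents
{
    // One subscriber list per thread, so concurrent server-side requests
    // running through the Domain Model don't see each other's handlers.
    [ThreadStatic]
    private static EventHandler<OrderEventArgs> orderCreatedThatRequiresSaving;

    public static event EventHandler<OrderEventArgs> OrderCreatedThatRequiresSaving
    {
        add { orderCreatedThatRequiresSaving += value; }
        remove { orderCreatedThatRequiresSaving -= value; }
    }

    public static void RaiseOrderCreatedThatRequiresSavingEvent(IOrder newOrder)
    {
        EventHandler<OrderEventArgs> handler = orderCreatedThatRequiresSaving;
        if (handler != null)
            handler(null, new OrderEventArgs(newOrder));
    }
}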

Statics and Testability

I know that many of you are wondering if I am really advocating the use of statics. The problem with most static classes is that they hurt testability because they are difficult to mock out. Often statics are used as Facades to hide some technological implementation detail. In this case, the static class is an inherent part of the Domain Model and does not serve as a Facade for anything.

When it comes to testing the Domain Model, we don’t have to mock anything out since the Domain Model is independent of all other concerns. This leaves us with unit testing at the single Domain Class level, which is pretty useless unless we’re TDD-ing the design of the Domain Model, in which case we’ll still be fiddling around with a bunch of classes at a time. Domain Models are best tested using State-Based Testing; get the objects into a given state, call a method on one of them, assert the resulting state. The static events don’t impede that kind of testing at all.
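
For illustration only, here’s what such a state-based test might look like with a hypothetical Customer that tracks available credit (none of this comes from the post – it just shows the arrange/act/assert-on-state shape):

using NUnit.Framework;

// Hypothetical domain class, just to give the test some state to assert on.
public class Customer
{
    private decimal availableCredit;

    public Customer(decimal creditLimit)
    {
        this.availableCredit = creditLimit;
    }

    public decimal AvailableCredit
    {
        get { return availableCredit; }
    }

    public void CreateOrder(decimal orderAmount)
    {
        // ...order creation logic elided...
        this.availableCredit -= orderAmount;
    }
}

[TestFixture]
public class CustomerTests
{
    [Test]
    public void Creating_an_order_reduces_available_credit()
    {
        // Get the object into a given state...
        Customer customer = new Customer(1000m);

        // ...call a method on it...
        customer.CreateOrder(250m);

        // ...and assert the resulting state. No mocking framework in sight.
        Assert.AreEqual(750m, customer.AvailableCredit);
    }
}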

What if we used Injection instead of Statics?

Also, you’ll find that each Service Layer class will need to subscribe to all of the Domain Model’s events – something that is easily handled by a base class. I have tried doing this without a static class: injecting a singleton events object into the Service Layer classes and having them subscribe to its events in the property setter, with that plumbing also pulled into a base class. The main difference was that the Dependency Injection solution required injecting that object into the Domain Objects as well, and personally, I’m against injection for domain objects. So, all in all, the static solution comes with less overhead than the one based on injection.

Summary

In summary, beyond the “technical basics” of being aware of your data volumes and designing your Domain Model to handle each use case performantly, I’ve found these techniques useful for designing its API as well as communicating my intent around persistence transparency. So give it a try. I’d be grateful to hear your thoughts on the matter as well as what else you’ve found that works.




Layering – too simplistic to actually work

Sunday, June 3rd, 2007

After seeing Mark’s post on Reasons for Isolation describing the ways Layered Architectures break down, and the ways that making them more testable changes them, I’ve got to wonder – is Layering just too simplistic to actually work?

Just the other day I was doing a design review for a fairly simple Smart Client whose design was layered. In order to stay away from interfaces that accepted dozens of ints, strings, and dates, they wanted to have each layer talk to the other using “entities”. So where are these entities defined – oh, in a “vertical layer” that all the horizontal layers talk to.

OK, so we’ve taken the simplistic one-dimensional layered architecture and added a dimension. What now?

Well, it seems that having the business logic and the entities in separate layers goes against one of the most basic Object Oriented principles – encapsulation. So, let’s put the entities back in the Business Logic Layer. But then how will the Data Access Layer accept those objects as parameters?

So, that is solved by keeping Entity Interfaces in the “vertical” shared “layer”, and having the entities in the business logic layer implement those interfaces. That way, the data access layer can still accept parameters corresponding to those interfaces:

void InsertCustomer(Shared.Entities.ICustomer customer);
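
To make the packaging concrete, here’s a rough sketch of how the pieces could be laid out – namespace and member names are illustrative, not taken from the actual review:

using System;

namespace Shared.Entities
{
    // Only the contract lives in the shared "vertical" layer...
    public interface ICustomer
    {
        Guid Id { get; }
        string Name { get; }
    }
}

namespace BusinessLogic
{
    // ...while the entity, together with its behavior, stays encapsulated
    // in the Business Logic Layer.
    public class Customer : Shared.Entities.ICustomer
    {
        private Guid id;
        private string name;

        public Customer(Guid id, string name)
        {
            this.id = id;
            this.name = name;
        }

        public Guid Id { get { return id; } }
        public string Name { get { return name; } }

        // business behavior lives here, next to the data it operates on
    }
}

namespace DataAccess
{
    // The Data Access Layer depends only on the shared interface,
    // never on the concrete BLL type.
    public interface ICustomerDataAccess
    {
        void InsertCustomer(Shared.Entities.ICustomer customer);
    }
}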

So far so good. Now, we want more testable UI layer code – so we use Model-View-Controller (MVC) – of whichever flavor suits your fancy. I’d say that Supervising Controller is a must. You could also add another presenter for more complex screens as in Passive View, but I’d be less strict on that. So, in which layer do these Controllers/Presenters sit? And is the Business Logic Layer the Model? Or is the Model just part of it?

Well, our Supervising Controllers are the ones that decide what action to take and when, where to get the data from, etc. That sounds like business logic to me, so let’s put them in the BLL. Presenters for the Passive View are much more UI-centered, so let’s put them in the Presentation Layer. But we don’t want them tied to the implementation of the view, so we’ll put them in a separate package and have them depend only on the view’s interface. That means we’ll put the view interfaces in a package separate from the view implementation as well.
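
As a sketch of that packaging constraint (all names here are illustrative, not from the review), the controller depends only on a view interface defined away from the view implementation:

namespace Presentation.ViewInterfaces
{
    // The view contract lives apart from any concrete UI technology.
    public interface IOrderEntryView
    {
        string CustomerName { get; set; }
        void ShowValidationError(string message);
    }
}

namespace BusinessLogic.Controllers
{
    using Presentation.ViewInterfaces;

    // The Supervising Controller decides what to do and when;
    // it only ever sees the view through its interface.
    public class OrderEntryController
    {
        private readonly IOrderEntryView view;

        public OrderEntryController(IOrderEntryView view)
        {
            this.view = view;
        }

        public void Submit()
        {
            if (string.IsNullOrEmpty(view.CustomerName))
                view.ShowValidationError("Customer name is required.");

            // ...otherwise hand off to the rest of the business logic...
        }
    }
}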

If it wasn’t clear up to this point, all the questions raised in this post are architectural in nature – they have a substantial impact on the structure and flow of the system, and will definitely have a profound effect on its maintainability. In other words, if you think that a layer diagram covers your design, you’re probably deluding yourself. Personally, I think that’s why many developers consider architects to be “out of touch with the real world”.

When you have a design that answers these and other architectural concerns, you’ll find that layering is of little importance. The specific constraints on each package are what count. The fact that the Presentation Layer can talk to the Business Logic Layer doesn’t mean that the classes in your Views Implementation Package can. A large part of an architect’s work is to specify these constraints and communicate them to the team. Tools like FxCop may help enforce them, but I believe that getting the team to actually “buy in” is more effective.

Single-dimensional layered architectures don’t work. They violate Einstein’s maxim:

Make everything as simple as possible, but not simpler.

Layering – “simpler” to the point of simplistic.



NHibernate will rule, because Ayende already does

Sunday, May 20th, 2007

First I find out that NHibernate does support “Persistence by Reachability”, even though the docs say it doesn’t. Next, Ayende makes it support multiple queries in a single DB roundtrip, something I’ve been asking all the other O/R mappers out there to do. To top it off, he’s got his sights set on solving the issues I raised in my talk on Complex Business Logic with DDD and O/R Mapping at DevTeach. That’s right, he’s going to give me my decorators and state machines.

I love you, Oren.

I know that the ADO.NET Entity Framework guys are open to this as well, but I’m pretty sure that the “Entity Model” thinking will hold them back. You just can’t divorce data and behavior – not when employing state machines or decorators.

I’m sold.



   

