Archive for the ‘EDA’ Category



Udi & Greg Reach CQRS Agreement

Friday, February 10th, 2012

Hard to believe, isn’t it?

Although Greg and I have both been saying (quite publicly) for a long time now that we’re in agreement on about 99% of the DDD/CQRS content we talk about, it turns out the terminology we use has made it very difficult for everybody else to see that.

Anyway, on a recent call with Greg and the Microsoft Patterns & Practices team working on the CQRS guidance, I think we finally ironed out the terminological differences.

First of all, both of us clearly stated that CQRS is not meant to be the top-level architecture of a system.

The use of Bounded Contexts from Domain Driven Design is a good way to *start* handling that top-level.

The area of some contention was how big a Bounded Context should be. After going back and forth a bit, Greg brought the concept of Business Component into the conversation, and that really cleared things up all around. I was quite pleased as I’ve been going on and on about these business components for years (I think 2006 was one of my earlier posts on the topic, though the mp3 has disappeared since then).

Anyway, here’s the meat:

A given Bounded Context should be divided into Business Components, where these Business Components have full UI through DB code, and are ultimately put together in composite UIs and other physical pipelines to fulfill the system’s functionality.

A Business Component can exist in only one Bounded Context.

CQRS, if it is to be used at all, should be used within a Business Component.

There you have it – terminological agreement in addition to the philosophical agreement that was always there.

You can find the history of my posts mentioning Business Components here.



The Danger of Centralized Workflows

Wednesday, July 13th, 2011

It isn’t uncommon for me to have a client or student at one of my courses ask me about some kind of workflow tool. This could be Microsoft Workflow Foundation, BizTalk, K2, or some kind of BPEL/orchestration engine. The question usually revolves around using this tool for all workflows in the system as opposed to the SOA-EDA-style publish/subscribe approach I espouse.

The question

The main touted benefit of these workflow-centric architectures is that we don’t have to change the code of the system in order to change its behavior, resulting in ultimate flexibility!

Some of you may have already gone down this path and are shaking your heads remembering how your particular road to hell was paved with the exact same good intentions.

Let me explain why these things tend to go horribly wrong.

What’s behind the curtain

It starts with the very nature of workflow: a flow chart is procedural in nature. First do this, then that, if this, then that, etc. As we’ve experienced first-hand in our industry, procedural programming is fine for smaller problems but isn’t powerful enough to handle larger problems. That’s why we’ve come up with object-oriented programming.

I have yet to see an object-oriented drag-and-drop workflow engine. Yes, it works great for simple demo-ware apps. But if you try to throw your most complex and volatile business logic at it, it will become a big tangled ball of spaghetti – just like if you were using text rather than pictures to code it.

And that’s one of the fundamental fallacies about these tools – you are still writing code. The fact that it doesn’t look like the rest of your code doesn’t change that fact. Changing the definition of your workflow in the tool IS changing your code.

On productivity

Sometimes people mention how much more productive it would be to use these tools than to write the code “by hand”. Occasionally I hear about an attempt to have “the business” use these tools to change the workflows themselves – without the involvement of developers (“imagine how much faster we could go without those pesky developers!”).

For those of us who have experienced this first-hand, we know that’s all wrong.

If “the business” is changing the workflows without developer involvement, invariably something breaks, and then they don’t know what to do. They haven’t been trained to think the way that developers have – they don’t really know how to debug. So the developers are brought back in anyway, and from that point on the business is once again giving requirements and the devs are the ones implementing them.

Now when it comes to developer productivity, I can tell you that the keyboard is at least 10x more productive than the mouse. I can bang out an if statement in code much faster than draggy-dropping a diamond on the canvas, and two other activities for each side of the clause.

On maintainability

Sometimes the visualization of the workflow is presented as being much more maintainable than “regular code”.

When these workflows get to be too big/nested/reused, they end up looking like the wiring diagram of an Intel chip (or worse). Check out the following diagram, taken from the DailyWTF, of a customer-friendly system:

[Figure: state-model diagram from the DailyWTF]

The bigger these get, the less maintainable they are.

Now, some would push back on this, saying that a method with 10,000 lines of code in it may be just as bad, if not worse. The thing is that these workflow tools guide developers down a path where it is very likely they’ll end up with big, monolithic, procedural, nested code. When working in real code, we know we need to take responsibility for the cleanliness of our code using object-orientation, patterns, etc., and refactor things when they get too messy.

Here is where I’d bring up the SOA/pub-sub approach as an alternative – there is no longer this idea of a centralized anything. You have small pieces of code, each encapsulating a single business responsibility, working in concert with each other – reacting to each other’s events.
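As a rough sketch of what that can look like in code (NServiceBus-style handlers; the event and handler names are invented for this example), each piece reacts to an event on its own, with no central coordinator:

public class OrderAccepted : IMessage
{
    public Guid OrderId { get; set; }
}

// Billing reacts to the event on its own...
public class BillCustomerWhenOrderAccepted : IHandleMessages<OrderAccepted>
{
    public void Handle(OrderAccepted message)
    {
        // charge the customer, then publish a CustomerBilled event
    }
}

// ...and so does Shipping. Neither handler knows about the other,
// and there is no central workflow engine orchestrating them.
public class PrepareShipmentWhenOrderAccepted : IHandleMessages<OrderAccepted>
{
    public void Handle(OrderAccepted message)
    {
        // reserve stock; actual shipping waits for the CustomerBilled event
    }
}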

Productivity take 2: testing and version control

If you’re going to take your most complex and volatile business logic and put it into these workflow tools, have you thought about how you’re going to test it? How do you know that it works correctly? It tends to be VERY difficult to unit-test these kinds of workflows.

When a developer is implementing a change request, how do they know what other workflows might have been broken? Do they have to manually go through each and every scenario in the system to find out? How’s that for productivity?

Assuming something did break and the developer wants to see a diff – what’s different in the new workflow from the old one, what would that look like? When working with a team, the ability to diff and merge code is at the base of the overall team productivity.

What would happen to your team if you couldn’t diff or merge code anymore?
In this day and age, it should be considered irresponsible to develop without these version control basics.

In closing

There are some cases where these tools might make sense, but those tend to be much more rare than you’d expect (and there are usually better alternatives anyway). Regardless, the architectural analysis should start without the assumption of centralized workflow, database, or centralized anything for that matter.

If someone tries to push one of these tools/architectures on you, don’t walk away – run!



Bus and Broker Pub/Sub Differences

Thursday, March 24th, 2011

One of the things which often confuses people using NServiceBus for the first time is that it only allows an endpoint to subscribe to a given event from a single other publishing endpoint. The rule that there can only be a single publisher for a given event type is one of the things that differentiates buses from brokers, though both obviously allow you to have multiple subscribers.

Brokers

Message brokers, more broadly known and used on the Java platform, don’t come with this constraint. For example, when using ActiveMQ, you can have any number of endpoints come to the broker and publish a message under a given topic.

So where’s the problem?

It’s all about accountability.

Let’s say you’ve subscribed to a given topic, and have received two events – one telling you that the price of bananas next week will be $1/kg and another telling you that it’ll be $2/kg.

Which one is right?

Especially given that those events may have been published by any other endpoint via the broker.

Is it first one wins? Last one wins? How about first one sent vs. first one received? Ditto for last. As a subscriber, can you really be held accountable for having the logic to choose the right one? Shouldn’t this responsibility have fallen to the publishing side?

This is one of the big drawbacks of the broker, hub-and-spoke architecture. No responsibility. No single source of truth – unless everybody’s going to some central database, in which case – what’s the point of all this messaging anyway?

Buses

The Bus Architectural Style is all about accountability. If you are going to publish an event, you are accountable for the correctness of the data in that event – there is no central database that a subscriber can go to “just in case”. And the only way that you can be held accountable, is if you have full responsibility – ergo, you’re the only one who can publish that type of event.

If you say bananas are going to cost $1/kg next week, that’s that. Subscribers will not hear from anybody else on that topic.

Now, this is not to say that you can’t have more than one physical publishing endpoint.

You see, buses differentiate between the logical and the physical. Brokers tend to assume that the physical hub-and-spoke topology is also the logical.

In a bus, while there can only be one logical endpoint publishing a given type of event, that endpoint can be physically scaled out across multiple machines. It is the responsibility of the bus to provide infrastructure facilities to allow for that to happen in such a way that to subscribers, it still appears as if there is really only one publishing endpoint.

The same is true about the subscriber – one logical subscribing endpoint may be scaled out across multiple machines.

Product Mix-ups

Unfortunately, there are many broker-style technologies out there that are being marketed under the banner of the Enterprise Service Bus. While some products have the ability to be deployed in both a centralized and distributed fashion (sometimes called “federated” or “embedded” mode), many do not enforce the “single publishing logical-endpoint per event-type” rule.

Without this constraint, it is just too easy to make mistakes.

NServiceBus

By enforcing this constraint, we see the same kind of question appear on the discussion group time and time again:

“I have an Audit event that I’d like all of my machines to publish, and have one machine subscribe to them all, but NServiceBus won’t let me. How do I make NServiceBus support this scenario?”

And the answer is the same every time:

“You should have all the machines Send the Audit message (configured to go to the single machine handling that message), and not Publish. It is not an event until it’s been handled by the endpoint responsible for it.”
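A minimal sketch of that answer (the Audit message type is invented for the example, and the destination endpoint is assumed to be set up via the usual endpoint-mapping configuration):

public class AuditMessage : IMessage
{
    public string Details { get; set; }
}

// Every machine Sends (not Publishes); the message is routed to the one
// endpoint configured as the owner of AuditMessage.
bus.Send(new AuditMessage { Details = "something worth auditing" });

// Only on the single auditing endpoint:
public class AuditHandler : IHandleMessages<AuditMessage>
{
    public void Handle(AuditMessage message)
    {
        // write to the audit store; if anything is worth announcing to the
        // rest of the world, it is this endpoint that publishes that event
    }
}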

The semantics of the message matter a lot.

When looking at Service-Oriented Architecture, these messages are the contract and, as any lawyer will tell you, contracts need to be explicit and the intentions really need to be spelled out – otherwise the contract is practically worthless.

In closing

Friction is sometimes a good thing – it prevents us from making mistakes. It keeps cars on the road. And because that’s not always enough, we introduce curbs as well.

If you’re looking for a service bus technology for your next project, check that it’ll give you the friction that you need to keep everybody safe. Really check what it is that the vendors are offering you – more often than not, it’s some ESB lipstick on a broker pig.

To learn more about how NServiceBus supports this kind of publish/subscribe, click here.



Polymorphism and Messaging

Thursday, January 13th, 2011

One of the questions that came up from my NServiceBus – .NET Service Bus Smackdown post was about the Polymorphic Message Dispatch and Polymorphic Message Routing features. People wanted to know what those are, why they’re important, and if other technologies (specifically WCF and BizTalk) support them.

Messaging Basics

First of all, when building a system using messaging, you don’t have methods that are invoked on some remote object (a.k.a “service”) to which you pass parameters. Instead, you use some generic piece of infrastructure (in the world of Java, this is most commonly a Message Broker) to send a message where a message can be thought of as a serializable class. Here’s an example of a message:

public class UserCreated : IMessage
{
    public Guid UserId { get; set; }
    public string Name { get; set; }
}

This message would be published using NServiceBus like this:

bus.Publish<UserCreated>( m =>
{
    m.UserId = Guid.NewGuid();
    m.Name = "John Smith";
});

This can be contrasted with RPC models like WCF where you need to define a “service” that has methods on it, where those methods accept parameters. Sometimes developers try to make this more generic by having a single “service” with one method on it named something like “Process”, where the parameter it accepts is of type “object”, or they introduce generics like this: Process<T>(T message);

This is where Polymorphic Message Dispatch comes in:

Polymorphic Message Dispatch

While you can pull off the WCF generics thing, one thing that is more difficult (without writing your own dispatch model) is to have a pipeline of classes which can be invoked based on their relationship to the type passed in. Using NServiceBus, both of the following message handlers will be invoked when UserCreated arrives:

public class Persistence : IHandleMessages<UserCreated>
{
    public void Handle(UserCreated message) { }
}

public class Audit : IHandleMessages<IMessage>
{
    public void Handle(IMessage message) { }
}

Now some might say that WCF, BizTalk, and the .NET Service Bus allow you to do auditing in their own internal pipeline, and that’s true. The place where this becomes more powerful is when you need to build V2 of your system, and the publisher now publishes a slightly different event – that a user was created as a part of a campaign, requiring the subscriber to register statistics about the campaign. Of course, this event also means that a user was created. Here’s how you’d do that with NServiceBus:

public class UserCreatedFromCampaign : UserCreated
{
    public Guid CampaignId { get; set; }
}

//publisher code
bus.Publish<UserCreatedFromCampaign>( m =>
{
    m.UserId = Guid.NewGuid();
    m.Name = "John Smith";
    m.CampaignId = theCampaignId;
});

//subscriber code
public class Statistics : IHandleMessages<UserCreatedFromCampaign>
{
    public void Handle(UserCreatedFromCampaign message) { }
}

The important part is what you don’t see – since UserCreatedFromCampaign inherits from UserCreated, the Persistence handler we had from V1 will also be invoked, and so will the Audit handler of course. You don’t have to make your new code call the old code like you would in a method-based dispatch model. This makes sure that the coupling in your service layer code remains constant over time as you grow the functionality of your system.

This was one of the main benefits mentioned by Rackspace in their use of NServiceBus (here):

“The main benefit NServiceBus has brought us so far is developer scalability due to lower coupling and higher consistency in our code.”

But, when looking at the above scenario, we can obviously expect that all sorts of things can happen in relation to campaigns – it is a separate concern, and thus should be handled by a separate subscriber. And this brings us to…

Polymorphic Message Routing

The challenge that we have here is that we no longer have a hierarchy where something clearly belongs on top of something else. We have users created and activities happening related to campaigns – that may happen in any combination. By having separate subscribers, we could then introduce new handlers/subscribers to our environment without touching or taking down any of the other subscribers. Here’s what the subscribers would look like:

public class Persistence : IHandleMessages<UserCreated>
{
    public void Handle(UserCreated message) { }
}

public class Statistics : IHandleMessages<CampaignActivityOccurred>
{
    public void Handle(CampaignActivityOccurred message) { }
}

But if each of the above messages were a class, how could we define a message which inherited from both?

Before answering that, we need to understand why the publisher wouldn’t just publish both of the above messages. You see, the publisher can’t make any assumptions about its subscribers – it could be that one of them has logic that correlates across both of these messages that could end up counting the occurrence as happening twice rather than once, possibly charging the account associated with the campaign twice. Publishing two messages results in two transactions when there really should have been one.

So, here’s how to define messages so that we can have multiple inheritance:

public interface UserCreated : IMessage
{
    Guid UserId { get; set; }
    string Name { get; set; }
}

public interface CampaignActivityOccurred : IMessage
{
    Guid CampaignId { get; set; }
    Guid ActivityId { get; set; }
}

public interface UserCreatedFromCampaign 
                 : UserCreated,
                   CampaignActivityOccurred 
{
}

And when the publisher publishes UserCreatedFromCampaign, the event would be routed to both the UserCreated subscriber and the CampaignActivityOccurred subscriber. The power of this approach is felt as we handle new requirements around purchases made related to a campaign. Now we can have another event which inherits from CampaignActivityOccurred and not have to worry since the existing subscriber will be routed those messages automatically.
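For completeness, the publishing side looks essentially the same as before, even though UserCreatedFromCampaign is now an interface; NServiceBus supplies a concrete implementation behind the scenes (treat the exact mechanics as version-dependent):

bus.Publish<UserCreatedFromCampaign>(m =>
{
    m.UserId = Guid.NewGuid();
    m.Name = "John Smith";
    m.CampaignId = theCampaignId;
    m.ActivityId = theActivityId;
});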

Since WCF doesn’t have publish/subscribe capabilities, we might as well move along.

Not to throw a burning match on an ocean of oil, but REST doesn’t really support this either.

Not Content-Based Routing

This may sound like the content-based router pattern from EIP (CBR), but it’s not. The important difference is that there isn’t some part of the routing that depends on the structure of the messages. The major drawback of CBR is that it creates a central place in your system that needs to be changed any time *syntactic* changes happen to message structure, *in addition to* changes in the subscribers.

Now, this is where the BizTalk guys would say that “that’s why we can do message transformations”, and then the subscribers wouldn’t need to be changed. However, can we really know when getting a requirement that the change is syntactic and not semantic? I mean, it’s quite common that changes to message structures happen together with changes to processing logic.

You may be beginning to get the feeling that more and more logic is being sucked out of the subscribers into some monolithic black hole that is likely going to be unmaintainable and quite slow.

This is one of the main differences between using a bus and a broker – a bus supports the correct distribution of logic keeping the system loosely coupled; brokers are useful integration engines when you absolutely can’t change the applications being integrated. Enterprise Application Integration (EAI) brokers don’t usually make good Enterprise Service Bus (ESB) technology.

In Closing

NServiceBus has all sorts of features you didn’t know you needed until you saw what life could be like when you had them. Most of these features don’t have snazzy drag-and-drop demos that make people ooh-and-aah at TechEds and PDCs, but they’re really necessary to avoid finding yourself in yet another big-ball-of-mud code base telling your manager/customer (again) that it would be faster to rebuild the system from scratch than to implement that new requirement in the old one.

Take NServiceBus for a spin and see for yourself.



NServiceBus 2.5 Released

Friday, December 31st, 2010

Just before we usher in the new year, I’m happy to announce the release of NServiceBus version 2.5.

Go to the NServiceBus website

Yes, there’s a new logo, and the website’s been redesigned.
It’s been a long time coming – the previous version (2.0) was released in March.

I’m really quite excited about this version as it rolls up all the bug fixes and enhancements that customers have asked for as they ran version 2.0 under the most severe types of production environments. Another big thing that many have been asking for is a licensed version of NServiceBus – that is, the ability to purchase a commercial license and receive support.

We all know how managers like having a throat to choke.

And now they’ll have one – NServiceBus Ltd is the company that will be providing licensing, services, and support for all customers’ NServiceBus needs. After more than 33,000 downloads and over 1000 developers in the community, the demand has really grown. Who would’ve thought all this would happen when I started NServiceBus 4 years ago (before it even had a name)?

Why NServiceBus is better than WCF for your distributed systems

This question comes up repeatedly for people hearing about NServiceBus for the first time.

The answer is simple – reliability.

A system built with NServiceBus is so much more resilient to all kinds of production conditions than one built with WCF that it’s hardly a fair comparison at all. While WCF can be configured to provide something kind of close to the same level of reliability, you need to do a fair amount of spelunking through the various options of netMsmqBinding to get it right.
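To give a feel for the kind of spelunking involved, here is a rough sketch of some of the netMsmqBinding knobs you would have to get right (the values shown are illustrative only, not a recommendation):

using System.ServiceModel;

var binding = new NetMsmqBinding(NetMsmqSecurityMode.Transport)
{
    Durable = true,                                   // messages survive restarts
    ExactlyOnce = true,                               // transactional delivery
    ReceiveRetryCount = 5,                            // immediate retries per cycle
    MaxRetryCycles = 3,                               // retry cycles before giving up
    ReceiveErrorHandling = ReceiveErrorHandling.Move  // move poison messages aside
};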

The second reason to use NServiceBus instead of WCF is publish/subscribe.

It gives you the ability to use events and the observer pattern not just to achieve loose coupling within a single process, but across many processes, machines, and sites. Can you imagine going back to programming without events? Shudder. But that’s exactly what it’s like to use WCF in your distributed system. NServiceBus brings you the best of object-oriented programming, but in a distributed and reliable infrastructure.

Don’t wait any longer

Take NServiceBus for a spin.

But things may look a bit different after you do…

[Image: red pill or blue pill]

http://www.NServiceBus.com

And have a happy New Year.



The Known Unknowns of SOA

Monday, November 15th, 2010

One of the better known analysts in the enterprise software area, JP Morgenthal, wrote this post about the relationship between SOA, BPM, and EA. In it he defines SOA as follows:

“SOA is a practice that focuses on modeling the entities, and relationships between entities, that comprise the business as a set of services. This can be done on a small or large scale. Typically, the relationships in this model represent consumer/provider relationships.”

I have some serious concerns about the ramifications of this definition/description.

First of all, when reading “entities”, many people will interpret that to mean the entities found in Entity Relationship Diagrams [ERD] or in Object Oriented Analysis & Design [OOAD]. In both, these entities are identified as the “nouns” of the domain. Examples of these ERD/OOAD-type entities include things like Customer, Order, and Product.

These are almost always the wrong place to start for identifying services in SOA.

Second, on the consumer/provider relationship: on the one hand, this fits very well with how web services can consume (or call) other web services. However, the downsides of using web services as services in SOA are becoming well enough known that even in the same post we see this warning:

“Web Services is not SOA, it is merely a standardized approach to accessing functionality on remote systems.”

But the question remains: if a producer/consumer relationship is OK for SOA-type services, why doesn’t that hold for web services? And the answer is… it depends on the type of producer/consumer relationship. The typical relationship is one of synchronous calls from consumer to producer, and this is not OK for SOA-type services either.

You see, this synchronous producer/consumer implies a model where services are not able to fulfill their objectives without calling other services. In order for us to achieve the IT/Business alignment promised by SOA, we need services which are autonomous, ie. able to fulfill their objectives without that kind of external help.

Instead, we need to look for a more loosely coupled producer/consumer relationship – like publish/subscribe, where the producer emits events, and the consumer subscribes and handles those events. The reason that this kind of relationship doesn’t hurt autonomy is that it disconnects services on the dimension of time. In order for a service to be able to make a decision autonomously without synchronously calling any other service, using only information provided by events it received in the past, it must be strongly aligned with the business.

Most projects which bandy about the SOA acronym aren’t actually made up of services – they’re made up of XML over HTTP functions calling other XML over HTTP functions, eventually calling XML over HTTP databases. You can layer as much XML and HTTP as you want on top of it, but at the end of the day, most projects are just functions calling functions calling databases – in other words, procedural programming in the large, and no amount of SOAP will wash away the stink.

Here’s a different definition of services for SOA that may communicate a bit better what it’s all about:

A service is the technical authority for a specific business capability.
Any piece of data or rule must be owned by only one service.

What this means is that even when services are publishing and subscribing to each other’s events, we always know what the authoritative source of truth is for every piece of data and rule.

Also, when looking at services through the lens of business capabilities, what we see is that many user interfaces present information belonging to different capabilities – a product’s price alongside whether or not it’s in stock. In order for us to comply with the above definition of services, this leads us to an understanding that such user interfaces are actually a mashup – with each service having the fragment of the UI dealing with its particular data.

Ultimately, process boundaries like web apps, back-end, batch-processing are very poor indicators of service boundaries. We’d expect to see multiple business capabilities manifested in each of those processes.

I know that this may be more confusing than the traditional web services approach but, to paraphrase Donald Rumsfeld, it is better to know that you don’t know, than to not know that you don’t know 🙂



Search and Messaging

Sunday, November 1st, 2009

One question that I get asked about quite a bit with relation to messaging is about search. Isn’t search inherently request/response? Doesn’t it have to return immediately? Wouldn’t messaging in this case hurt our performance?

While I tend to put search in the query camp when keeping the responsibility of commands and queries separate, and often recommend that those queries be done without messaging, there are certain types of search where messaging does make sense.

In this post, I’ll describe certain properties of the problem domain that make messaging a good candidate for a solution.

Searching is beside the point – Finding is what it’s all about

Remember that search is only a means to an end in the eyes of the user – they want to find something. One of the difficulties we users have is expressing what we want to find in ways that machines can understand.

In thinking about how we build systems to interact with users, we need to take this fuzziness into account. The more data that we have, the less homogeneous it is, the harder this problem becomes.

When talking about speed, while users are sensitive to the technical interactivity, the thing that matters most is the total time it takes for them to find what they want. If the result of each search screen pops up in 100ms, but the user hasn’t found what they’re looking for after clicking through 20 screens, the search function is ultimately broken.

Notice that the finding process isn’t perceived as “immediate” in the eyes of the user – the evaluation they do in their heads of the search results is as much a part of finding as the search itself.

Also, if the user needs to refine their search terms in order to find what they want, we’re now talking about a multi-request/multi-response process. There is nothing in the problem domain which indicates that finding is inherently request/response.

Relationships in the data

When bringing back data as the result of a search, what we’re saying is that there is a property which is the same across the result elements. But there may be more than one such property. For example, if we search for “blue” on Google Images, we get back pictures of the sky, birds, flowers, and more. Obvious so far – but let’s exploit the obvious a bit.

When the user sees that too many irrelevant results come back, they’ll want to refine their search. One way they can do that is to perform a new search and put in a more specific search phrase – like “blue sky”. Another way is for them to indicate this by selecting an image and saying “not like this” or “more of these”. Then we can use the additional properties we know about those images to further refine the result group – either adding more images of one kind, or removing images of another.

Here’s something else that’s obvious:

Users often click or change their search before the entire result screen is shown.

It’s beginning to sound like users are already interacting with search in an asynchronous manner. What if we actually designed a system that played to that kind of interaction model?

Data-space partitioning

Once we accept the fact that the user is willing to have more results appear in increments, we can talk about having multiple servers processing the search in parallel. For large data spaces, it is unlikely for us to be able to store all the required meta data for search on one server anyway.

All we really need is a way to index these independent result-sets so that the user can access them. This can be done simply by allocating a GUID/UUID for the search request and storing the result-sets along with that ID.

Browser interaction

When the browser calls a server with the search request the first time, that server allocates an ID to that request, returns a URL containing that ID to the browser, and publishes an event containing the search term and the ID. Each of our processing nodes is subscribed to that event, performs the search on its part of the data-space, and writes its results (likely to a distributed cache) along with that ID.

The browser polls the above URL, which queries the cache (give me everything with this ID), and the browser sees which resources have been added since the last time it polled, and shows them to the user.
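A minimal sketch of that flow (the message, index, and cache types here are hypothetical, invented for this example):

public class SearchRequested : IMessage
{
    public Guid SearchId { get; set; }   // allocated by the web server
    public string Terms { get; set; }
}

// Runs on each processing node, each owning its own slice of the data-space.
public class SearchOwnPartition : IHandleMessages<SearchRequested>
{
    private readonly ISearchIndex localIndex;        // hypothetical partition-local index
    private readonly IResultCache distributedCache;  // hypothetical distributed cache

    public SearchOwnPartition(ISearchIndex localIndex, IResultCache distributedCache)
    {
        this.localIndex = localIndex;
        this.distributedCache = distributedCache;
    }

    public void Handle(SearchRequested message)
    {
        var results = localIndex.Find(message.Terms);

        // Results accumulate in the cache under the search ID; the browser
        // polls a URL containing that ID and renders whatever has arrived
        // since its last poll.
        distributedCache.Append(message.SearchId, results);
    }
}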

If the user clicks “more of these”, that initiates a new search request to the server, which follows the same pattern as before, just that the system is able to pull more relevant information. “Not like this” works similarly but, instead of adding to the list of items shown, we remove items from the list based on the response from the server.

In this kind of user-system interaction model, having the user page through the result set doesn’t make very much sense as we’re not capturing the intent of the user, which is “you’re not showing me what I want”. By making it easy for the user to fine tune the result set, we get them closer to finding what they want. By performing work in parallel in a non-blocking manner on smaller sets of data, we greatly decrease the “time to first byte” as well as the time when the user can refine their search.

But Google doesn’t work like that

I know that this isn’t like the search UI we’ve all grown used to.

But then again, the search that you’re providing your users is more specific – not just pages on the web. If you’re a retailer allowing your users to search for a gift, this kind of “more like this, less like that” model is how users would interact with a real sales-person when shopping in a store. Why not model your system after the ways that people behave in the real world?

In closing

If we were to try to make use of messaging underneath “classical” search interaction models, it probably wouldn’t have been the greatest fit. If all we’re doing at a logical level is blocking RPC, then messaging would probably make the system slower. The real power that you get from messaging is being able to technically do things in parallel – that’s how it makes things faster. If you can find ways to see that parallelism in your problem domain, not only will messaging make sense technically – it will really be the only way to build that kind of system.

Learning how to disconnect from seeing the world through the RPC-tinted glasses of our technical past takes time. Focusing on the problem domain, seeing it from the user’s perspective without any technical constraints – that’s the key to finding elegant solutions. More often than not, you’ll see that the real world is non-blocking and parallel, and then you’ll be able to make the best use of messaging and other related patterns.

What are your thoughts? Post a comment and let me know.



[Article] EDA: SOA through the looking glass

Tuesday, September 29th, 2009



My latest article has been published in issue 21 of the Microsoft Architecture Journal:

EDA: SOA Through The Looking Glass

While event-driven architecture (EDA) is a broadly known topic, both giving up ACID integrity guarantees and introducing eventual consistency make many architects uncomfortable. Yet it is exactly these properties that can direct architectural efforts toward identifying coarsely grained business-service boundaries—services that will result in true IT-business alignment.

Business events create natural temporal boundaries across which there is no business expectation of immediate consistency or confirmation. When they are mapped to technical solutions, the loosely coupled business domains on either side of business events simply result in autonomous, loosely coupled services whose contracts explicitly reflect the inherent publish/subscribe nature of the business.

This article will describe how all of these concepts fit together, as well as how they solve thorny issues such as high availability and fault tolerance.

UPDATE: Unfortunately, Microsoft has removed a bunch of their older stuff, so I’m reposting the content here:

Download as PDF

Introduction

While event-driven architecture (EDA) is a broadly known topic, both giving up ACID integrity guarantees and introducing eventual consistency make many architects uncomfortable. Yet it is exactly these properties that can direct architectural efforts toward identifying coarsely grained business-service boundaries—services that will result in true IT-business alignment.

Business events create natural temporal boundaries across which there is no business expectation of immediate consistency or confirmation. When they are mapped to technical solutions, the loosely coupled business domains on either side of business events simply result in autonomous, loosely coupled services whose contracts explicitly reflect the inherent publish/subscribe nature of the business.

This article will describe how all of these concepts fit together, as well as how they solve thorny issues such as high availability and fault tolerance.

Commands and Events

To understand the difference in nature between “classic” service-oriented architecture (SOA) and event-driven architecture (EDA), we must examine their building blocks: the command in SOA, and the event in EDA.

In the commonly used request/response communication pattern of service consumer to service provider in SOA, the request contains the action that the consumer wants to have performed (the command), and the response contains either the outcome of the action or some reaction to the expressed request, such as “action performed” and “not authorized.”

Commands are often named in imperative, present-tense form—for example, “update customer” and “cancel order.”

In EDA, the connection between event emitters and event consumers is reversed from the previously described SOA pattern. Consumers do not initiate communication in EDA; instead, they receive events that are produced by emitters. The communication is also inherently unidirectional; emitters do not depend on any response from consumers to continue performing their work.

Events are often named in passive, past-tense form—for example, “customer updated” and “order cancelled”—and can represent state changes in the domain of the emitter.

Events can be thought of as mirror images of the commands in a system. However, there might be cases in which the trigger for an event is not an explicit command, but something like a timeout.
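To make the distinction concrete, here is a small sketch (the message types are illustrative only):

// A command: imperative, present tense - something the sender wants done.
public class CancelOrder
{
    public Guid OrderId { get; set; }
}

// An event: passive, past tense - a state change that has already happened
// in the emitter's domain. Consumers can only react to it, not reject it.
public class OrderCancelled
{
    public Guid OrderId { get; set; }
    public DateTime CancelledAt { get; set; }
}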

Business Processes with Commands and Events

The difference between commands and events becomes even more pronounced as we look at each one as the building block in various business processes.

When we consider commands such as “create customer” and “create order,” we can easily understand how these commands can be combined to create more involved scenarios, such as: “When creating an order, if a customer is not provided, create a new customer.” This can be visualized as services that operate at different layers, as shown in Figure 1.

Figure 1. Commands and service orchestration

One can also understand the justification for having activity services perform all of their work transactionally, thus requiring one service to flow its transactional context into other lower-level services. This is especially important for commands that deal with the updating of data.

When working with commands, in each step of the business process, a higher-level service orchestrates the work of lower-level services.

When we try to translate this kind of orchestration behavior into events, we must consider the fact that events behave as mirror images of commands and represent our rules by using the past tense.

Instead of: “When creating an order, if a customer is not provided, create a new customer.”

We have: “When an order has been created, if a customer was not provided, create a new customer.”

It is clear that these rules are not equivalent. The first rule implies that an order should not be created unless a customer—whether provided or new—is associated with it. The second rule implies that an order can be created even if a customer has not been provided—stipulating the creation as a separate and additional activity.

To make use of EDA, it is becoming clear that we must think about our rules and processes in an event-driven way, as well as how that affects the way in which we structure and store our data.
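As a rough sketch (using the same NServiceBus-style handler convention as elsewhere on this page; the types and properties are illustrative), the event-driven form of the rule becomes a subscriber:

public class OrderCreated : IMessage
{
    public Guid OrderId { get; set; }
    public Guid? CustomerId { get; set; }   // null when no customer was provided
}

public class CreateCustomerWhenMissing : IHandleMessages<OrderCreated>
{
    public void Handle(OrderCreated message)
    {
        // The order already exists by the time this runs; this subscriber
        // only implements the "if a customer was not provided" part.
        if (message.CustomerId == null)
        {
            // create the new customer and associate it with the order,
            // within whatever time frame the business has agreed to
        }
    }
}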

Event-Driven Business Analysis and Database Design

When we analyze the “When an order has been created, if a customer was not provided, create a new customer” rule, we can see that a clear temporal boundary splits it up into two parts. In a system that has this rule, what we will see is that at a given point in time, an order might exist that does not have a corresponding customer. The rule also states the action that should be taken in such a scenario: the creation of a new customer. There might also be a nonfunctional requirement that states the maximum time that should be allowed for the action to be completed.

From a technical/database perspective, it might appear that we have allowed our data to get into an inconsistent state; however, that is only if we had modeled our database so that the Orders table had a non-nullable column that contained CustomerId—a foreign key to the Customers table. While such an entity-relationship design would be considered perfectly acceptable, we should consider how appropriate it really is, given the requirements of business consistency.

The rule itself indicates the business perspective of consistency; an order that has no connection to a customer is valid, for a certain period of time. Eventually, the business would like a customer to be affiliated with that order; however, the time frame around that can be strict (to a level of seconds) or quite lax (to a level of hours or days). It is also understandable that the business might want to change these time frames in cases in which it might provide a strategic advantage. An entity-relationship design that would reflect these realities would likely have a separate mapping table that connected Orders to Customers—leaving the Orders entity free of any constraint that relates to the Customers entity.

That is the important thing to understand about eventual consistency: It starts by identifying the business elements that do not have to be 100-percent, up-to-the-millisecond consistent, and then reflecting those relaxed constraints in the technical design.

In this case, we could even go so far as to have each of these transactions occur in its own database, as shown in Figure 2.

Figure 2. Event-driven data flows

Benefits of Event-Driven Architecture

Given that EDA requires a rethinking of the core rules and processes of our business, the benefits of the approach must be quite substantial to make the effort worthwhile—and, indeed, they are. By looking at Figure 2, we can see very loose coupling between the two sides of the temporal boundary. Other than the structure of the event that passes from left to right, nothing is shared. Not only that, but after the event is published, the publisher no longer even needs to be online for the subscriber to process the event, so long as we use a durable transport (such as a queue).

These benefits become even more pronounced when we consider integration with other systems. Consider the case in which we want to integrate with a CRM, whether it is onsite or hosted in the cloud. In the EDA approach, if the CRM is unavailable (for whatever reason), the order will still be accepted. Contrasting this with the classic command-oriented service-composition approach, we would see that the unavailability of the CRM would cause the entire transaction to time out and roll back. The same is true during integration of mainframes and other constrained resources: Even when they are online, they can process only N concurrent transactions (see Figure 3). Because the event publisher does not need to wait for confirmation from any subscriber, any transactions beyond those that are currently being processed by the mainframe wait patiently in the queue, without any adverse impact on the performance of order processing.

Figure 3. Load-leveling effect of queues between publishers and subscribers

If all systems had to wait for confirmation from one another—as is common in the command-oriented approach—to bring one system to a level of 5 nines of availability, all of the systems that it calls would need to have the same level of availability (as would the systems that they call, recursively). While the investment in infrastructure might have business justification for one system (for example, order processing), it can be ruinous to have to multiply that level of investment across the board for nonstrategic systems (for example, shipping and billing).

In companies that are undergoing mergers or acquisitions, the ability to add a new subscriber quickly to any number of events from multiple publishers without having to change any code in those publishers is a big win (see Figure 4). This helps maintain stability of the core environment, while iteratively rolling out bridges between the systems of the two companies. When we look practically at bringing the new subscriber online, we can take the recording of all published events from the audit log and play them to the new subscriber, or perform the regular ETL style of data migration from one subscriber to another.

Figure 4. Adding new subscriber to existing publisher

IT-Business Alignment, SOA, and EDA

One of the more profound benefits that SOA was supposed to bring was an improved alignment between IT and business. While the industry does not appear to have settled on how this exactly is supposed to occur, there is broad agreement that IT is currently not aligned with business. Often, this is described under the title of application “silos.”

To understand the core problem, let us try to visualize this lack of alignment, as shown in Figure 5.

Figure 5. Lack of IT/Business Alignment

What we see in this lack of alignment is that IT boundaries are different from business boundaries, so that it is understandable that the focus of SOA on explicit boundaries (from the four tenets of service orientation) would lead many to believe that it is the solution.

Yet the problem that we see here is that while there are explicit technical boundaries between App 1 and App 2, the mapping to business boundaries is wrong.

If SOA is to have any chance of improving IT-business alignment, the connection between the two needs to look more like the one that is shown in Figure 6.

Figure 6. Services aligned with business boundaries

One could describe such a connection as a service “owning” or being responsible for a single business domain, so that anything outside the service could not perform any actions that relate to that domain. Also, any and all data that relates to that domain also would be accessible only within the service. The EDA model that we saw earlier enabled exactly that kind of strict separation and ownership—all the while providing mechanisms for interaction and collaboration.

We should consider this strong connection when we look at rules such as: “When an order has been created, if a customer was not provided, create a new customer.” The creation of the order as an object or a row in a database has no significance in the business domain. From a business perspective, it could be the acceptance or the authorization of an order that matters.

What SOA brings to EDA in terms of IT-business alignment is the necessity of events to represent meaningful business occurrences.

For example, instead of thinking of an entity that is being deleted as an event, you should look for the business scenario around it—for example, a product that is being discontinued, a discount that is being revoked, or a shipment that is being canceled. Consider introducing a meaningful business status to your entities, instead of the technically common “deleted” column. While the business domain of sales will probably not be very interested in discontinued products and might treat them as deleted, the support domain might need to continue troubleshooting the problems that clients have with those products—for a while, at least. Modern-day collaborative business-analysis methodologies such as value networks can help identify these domains and the event flows between them.
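A small sketch of what that might look like on the entity (the names are illustrative only):

public enum ProductStatus
{
    Available,
    Discontinued,   // sales may treat this much like "deleted"...
    Recalled        // ...while support keeps troubleshooting these for a while
}

public class Product
{
    public Guid ProductId { get; set; }
    public ProductStatus Status { get; set; }
}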

What an EDA/SOA Service Looks Like

In the context of combined EDA and SOA, the word “service” is equivalent to a logical “thing” that can have a database schema, Web Services, and even user-interface (UI) code inside it. This is a very different perspective from the classic approach that considers services as just another layer of the architecture. In this context, services cut across multiple layers, as shown in Figure 7.

Figure 7. Services logically connecting code from different layers

In this model, the processes that are running on various computers serve as generic, composite hosts of service code and have no real logical “meat” to them.

When we look at the code in each of the layers in light of the business domain that it addresses, we tend to see fairly tight coupling between a screen, its logic, and the data that it shows. The places in which we see loose coupling are between screens, logic, and data from different business domains; there is hardly any coupling (if at all) between the screen that shows employee details and the one that is used to cancel an order. The fact that both are screens and are categorized in the UI “layer” appears not to have much technical significance (let alone business significance). Much the same can be said for the code that hooks those screens to the data, as well as the data structures themselves.

Any consistency concerns that might have arisen by this separation have already been addressed by the business acceptance of eventual consistency. If there are business demands that two pieces of data that have been allocated to different services always be consistent, this indicates that service boundaries are not aligned with business boundaries and must be changed.

This is extremely valuable. Architects can explain to the business the ramifications of their architectural decisions in ways that the business can understand—“There might be a couple of seconds during which these two bits of data are not in sync. Is that a problem?”—and the answer to those kinds of questions is used to iterate the architecture, so as to bring it into better alignment with the business.

As soon as service boundaries reflect business boundaries, there is great flexibility within each service; each can change its own database schema without having to worry about breaking other services, or even choose to change vendors and technology to such things as object or XML databases. Interoperability between services is a question of how event structures are represented, as well as how publish/subscribe is handled. This can be done by using basic enterprise service bus (ESB) functionality, such things as the Atom Publishing Protocol, or a mix.

Integration of legacy applications in this environment occurs within the context of a service, instead of identifying them as services in their own right. Use of Web Services to ease the cost of integration continues to make sense; however, from the perspective of a business domain, it really is nothing more than an implementation detail.

Conclusion

EDA is not a technical panacea for Web Services–centric architectures. In fact, attempting to employ EDA principles on purely technical domains that implement command-centric business analysis will almost certainly fail. The introduction of eventual consistency without the ratification of business stakeholders is ill-advised.

However, if in the process of architecture we work collaboratively with the business, map out the natural boundaries that are inherent in the organization and the way in which it works, and align the boundaries of our services to them, we will find that the benefits of EDA bring substantial gains to the business in terms of greater flexibility and shorter times to market, while its apparent disadvantages become addressed in terms of additional entity statuses and finer-grained events.

By itself, EDA ignores the IT-business alignment of SOA—so critical to getting boundaries and events right. Classic SOA has largely ignored the rock-solid foundation of publish/subscribe events—dead Web Services eventing and notification standards notwithstanding. It is only in the fusing of these two approaches that they overcome the weaknesses of each other and create a whole that is greater than the sum of its parts.

Interestingly enough, even though we have almost literally turned the classic command-driven services on their heads, the service-oriented tenets of autonomy and explicit boundaries have only become more pronounced, and the goal of IT-business alignment is now within our grasp.

Beyond just being a sound theoretical foundation, this architecture has weathered the trials of production in domains such as finance, travel and hospitality, aerospace, and many others—each with its own challenging constraints and nonfunctional demands. Organizations have maximized the effectiveness of their development teams by structuring them in accordance with these same service boundaries, instead of the more common technical specialization that corresponds to layered architectures. These loosely coupled service teams were able to wring the most out of their agile methodologies, as competition for specialized shared resources was eliminated.

Oracle once named this approach SOA 2.0. Maybe it really is the next evolutionary step.



The Fallacy Of ReUse

Sunday, June 7th, 2009

This industry is pre-occupied with reuse.

There’s this belief that if we just reused more code, everything would be better.

Some even go so far as saying that the whole point of object-orientation was reuse – it wasn’t; encapsulation was the big thing. After that, component-orientation was the thing that was supposed to make reuse happen. Apparently that didn’t pan out so well either, because here we are now pinning our reuseful hopes on service-orientation.

Entire books of patterns have been written on how to achieve reuse with the orientation of the day.
Services have been classified every which way in trying to achieve this, from entity services and activity services, through process services and orchestration services. Composing services has been touted as the key to reusing, and creating reusable services.

I might as well let you in on the dirty-little secret:

Reuse is a fallacy

Before running too far ahead, let’s go back to what the actual goal of reuse was: getting done faster.

That’s it.

It’s a fine goal to have.

And here’s how reuse fits in to the picture:

If we were to write all the code of a system, we’d write a certain amount of code.
If we could reuse some code from somewhere else that was written before, we could write less code.
The more code we can reuse, the less code we write.
The less code we write, the sooner we’ll be done!

However, the above logical progression is based on another couple of fallacies:

Fallacy: All code takes the same amount of time to write

Fallacy: Writing code is the primary activity in getting a system done

Anyone who’s actually written some code that’s gone into production knows this.

There’s the time it takes us to understand what the system should do.
Multiply that by the time it takes the users to understand what the system should do 🙂
Then there’s the integrating that code with all the other code, databases, configuration, web services, etc.
Debugging. Deploying. Debugging. Rebugging. Meetings. Etc.

Writing code is actually the least of our worries.
We actually spend less time writing code than…

Rebugging code

Also known as bug regressions.

This is where we fix one piece of code, and in the process break another piece of code.
It’s not like we do it on purpose. It’s all those dependencies between the various bits of code.
The more dependencies there are, the more likely something’s gonna break.
Especially when we have all sorts of hidden dependencies,
like when other code uses stuff we put in the database without asking us what it means,
or, heaven forbid, changing it without telling us.

These debugging/rebugging cycles can make stabilizing a system take a long time.

So, how does reuse help/hinder with that?

Here’s how:

Dependencies multiply by reuse

It’s to be expected. If you wrote the code all in one place, there are no dependencies. By reusing code, you’ve created a dependency. The more you reuse, the more dependencies you have. The more dependencies, the more rebugging.

Of course, we need to keep in mind the difference between…

Reuse & Use

Your code uses the runtime API (JDK, .NET BCL, etc).
Likewise other frameworks like (N)Hibernate, Spring, WCF, etc.

Reuse happens when you extend and override existing behaviors within other code.
This is most often done by inheritance in OO languages.

Interestingly enough, by the above generally accepted definition, most web services “reuse” is actually really use.
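A small illustration of the difference (every type here is invented for the example):

public class Order
{
    public decimal Total { get; set; }
}

public class DiscountCalculator
{
    public virtual decimal CalculateDiscount(Order order)
    {
        return order.Total > 100m ? 0.10m : 0m;
    }
}

// Reuse: extending and overriding existing behavior. A bug fix in
// DiscountCalculator may now reasonably require touching and re-testing this class.
public class SeasonalDiscountCalculator : DiscountCalculator
{
    public override decimal CalculateDiscount(Order order)
    {
        return base.CalculateDiscount(order) + 0.05m;
    }
}

// Use: merely calling an API (here, the BCL) without extending it. A bug fix
// inside the framework is very unlikely to require a change here.
public class InvoiceFormatter
{
    public string Format(decimal amount)
    {
        return amount.ToString("C");
    }
}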

Let’s take a look at the characteristics of the code we’re using and reusing to see where we get the greatest value:

The value of (re)use

If we were to (re)use a piece of code in only one part of our system, it would be safe to say that we would get less value than if we could (re)use it in more places. For example, we could say that for many web applications, the web framework we use provides more value than a given encryption algorithm that we may use in only a few places.

So, what characterizes the code we use in many places?

Well, it’s very generic.

Actually, the more generic a piece of code, the less likely it is that we’ll be changing something in it when fixing a bug in the system.

That’s important.

However, when looking at the kind of code we reuse, and the reasons around it, we tend to see very non-generic code – something that deals with the domain-specific behaviors of the system. Thus, the likelihood of a bug fix needing to touch that code is higher than in the generic/use-not-reuse case, often much higher.

How it all fits together

Goal: Getting done faster
Via: Spending less time debugging/rebugging/stabilizing
Via: Having fewer dependencies that could reasonably require a bug fix to touch the dependent side
Via: Not reusing non-generic code

This doesn’t mean you shouldn’t use generic code / frameworks where applicable – absolutely, you should.
Just watch the number and kind of dependencies you introduce.

Back to services

So, if we follow the above advice with services, we wouldn’t want domain specific services reusing each other.
If we could get away with it, we probably wouldn’t even want them using each other either.

As use and reuse go down, we can see that service autonomy goes up. And vice-versa.
Luckily, we have service interaction mechanisms from Event-Driven Architecture that enable use without breaking autonomy.
Autonomy is actually very similar to the principle of encapsulation that drove object-orientation in the first place.
Interesting, isn’t it?
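
As a rough sketch of that idea (the types here are illustrative and not from any particular framework), compare a service that directly uses another with one that merely publishes an event and lets subscribers react on their own schedule:

    using System;

    // Direct use: Billing calls into Shipping, so Billing now depends on
    // Shipping being available right now.
    public interface IShippingService
    {
        void ShipOrder(Guid orderId);
    }

    public class BillingWithDirectUse
    {
        private readonly IShippingService shipping;

        public BillingWithDirectUse(IShippingService shipping)
        {
            this.shipping = shipping;
        }

        public void BillCustomer(Guid orderId)
        {
            // ... charge the customer ...
            shipping.ShipOrder(orderId);
        }
    }

    // Event-driven interaction: Billing only announces what happened.
    // Shipping subscribes and reacts when it's ready, so both stay autonomous.
    public class CustomerBilledForOrder
    {
        public Guid OrderId { get; set; }
    }

    public class BillingWithEvents
    {
        private readonly Action<CustomerBilledForOrder> publish; // stand-in for a bus's Publish

        public BillingWithEvents(Action<CustomerBilledForOrder> publish)
        {
            this.publish = publish;
        }

        public void BillCustomer(Guid orderId)
        {
            // ... charge the customer ...
            publish(new CustomerBilledForOrder { OrderId = orderId });
        }
    }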

In summary

We all want to get done faster.

Way back when, someone told us reuse was the way to do that.

They were wrong.

Reuse may make sense in the most tightly coupled pieces of code you have, but not very much anywhere else.

When designing services in your SOA, stay away from reuse, and minimize use (with EDA patterns).

The next time someone pulls the “reuse excuse”, you’ll be ready.





Saga Persistence and Event-Driven Architectures

Monday, April 20th, 2009

When working with clients, I run into more than a couple of people who have difficulty with event-driven architecture (EDA). Even more people have difficulty understanding what sagas really are, let alone why they need to use them. I'd go so far as to say that many people don't realize the importance of how sagas are persisted in making it all work (including the Workflow Foundation team).

The common e-commerce example

We accept orders, bill the customer, and then ship them the product.

Fairly straightforward.

Since each part of that process can be quite complex, let’s have each step be handled by a service:

Sales, Billing, and Shipping. Each of these services will publish an event when it’s done its part. Sales will publish OrderAccepted containing all the order information – order Id, customer Id, products, quantities, etc. Billing will publish CustomerBilledForOrder containing the customer Id, order Id, etc. And Shipping will publish OrderShippedToCustomer with its data.

So far, so good. EDA and SOA seem to be providing us some value.
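
Before diving in, here's a rough sketch of what those three event contracts might look like as NServiceBus messages. This is my own illustration (property names chosen to line up with the saga code below); the actual message definitions in the sample may differ:

    using System;
    using System.Collections.Generic;
    using NServiceBus;

    public class OrderAccepted : IMessage
    {
        public Guid OrderId { get; set; }
        public Guid CustomerId { get; set; }
        public List<Guid> ProductIdsInOrder { get; set; }
    }

    public class CustomerBilledForOrder : IMessage
    {
        public Guid OrderId { get; set; }
        public Guid CustomerId { get; set; }
    }

    public class OrderShippedToCustomer : IMessage
    {
        public Guid OrderId { get; set; }
        public Guid CustomerId { get; set; }
    }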

Where’s the saga?

Well, let’s consider the behavior of the Shipping service. It shouldn’t ship the order to the customer until it has received the CustomerBilledForOrder event as well as the OrderAccepted event. In other words, Shipping needs to hold on to the state that came in the first event until the second event comes in. And this is exactly what sagas are for.

Let’s take a look at the saga code that implements this. In order to simplify the sample a bit, I’ll be omitting the product quantities.

    public class ShippingSaga : Saga<ShippingSagaData>,
        ISagaStartedBy<OrderAccepted>,
        ISagaStartedBy<CustomerBilledForOrder>
    {
        public void Handle(OrderAccepted message)
        {
            this.Data.ProductIdsInOrder = message.ProductIdsInOrder;
        }

        public void Handle(CustomerBilledForOrder message)
        {
            this.Bus.Send<ShipOrderToCustomer>(m =>
            {
                m.CustomerId = message.CustomerId;
                m.OrderId = message.OrderId;
                m.ProductIdsInOrder = this.Data.ProductIdsInOrder;
            });

            this.MarkAsComplete();
        }

        public override void Timeout(object state)
        {
        }
    }

First of all, this looks fairly simple and straightforward, which is good.
It’s also wrong, which is not so good.

One problem we have here is that events may arrive out of order – first CustomerBilledForOrder, and only then OrderAccepted. What would happen in the above saga in that case? Well, we wouldn’t end up shipping the products to the customer, and customers tend not to like that (for some reason).

There’s also another problem here. See if you can spot it as I go through the explanation of ISagaStartedBy<T>.

Saga start up and correlation

The “ISagaStartedBy<T>” that is implemented for both messages indicates to the infrastructure (NServiceBus) that when a message of that type arrives and an existing saga instance cannot be found, a new instance should be started up. Makes sense, doesn't it? For a given order, when the OrderAccepted event arrives first, Shipping doesn't currently have any sagas handling it, so it starts up a new one. After that, when the CustomerBilledForOrder event arrives for that same order, the event should be handled by the saga instance that handled the first event – not by a new one.

I’ll repeat the important part: “the event should be handled by the saga instance that handled the first event”.

Since the only information we stored in the saga was the list of products, how would we be able to look up that saga instance when the next event came in containing an order Id, but no saga Id?

OK, so we need to store the order Id from the first event so that when the second event comes along we’ll be able to find the saga based on that order Id. Not too complicated, but something to keep in mind.

Let’s look at the updated code:

    public class ShippingSaga : Saga<ShippingSagaData>,
        ISagaStartedBy<OrderAccepted>,
        ISagaStartedBy<CustomerBilledForOrder>
    {
        public void Handle(CustomerBilledForOrder message)
        {
            this.Data.CustomerHasBeenBilled = true;

            this.Data.CustomerId = message.CustomerId;
            this.Data.OrderId = message.OrderId;

            this.CompleteIfPossible();
        }

        public void Handle(OrderAccepted message)
        {
            this.Data.ProductIdsInOrder = message.ProductIdsInOrder;

            this.Data.CustomerId = message.CustomerId;
            this.Data.OrderId = message.OrderId;

            this.CompleteIfPossible();
        }

        private void CompleteIfPossible()
        {
            if (this.Data.ProductIdsInOrder != null && this.Data.CustomerHasBeenBilled)
            {
                this.Bus.Send<ShipOrderToCustomer>(m =>
                {
                    m.CustomerId = this.Data.CustomerId;
                    m.OrderId = this.Data.OrderId;
                    m.ProductIdsInOrder = this.Data.ProductIdsInOrder;
                });

                this.MarkAsComplete();
            }
        }
    }

And that brings us to…

Saga persistence

We already saw why Shipping needs to be able to look up its internal sagas using data from the events, but what that means is that simple blob-type persistence of those sagas is out. NServiceBus comes with an NHibernate-based saga persister for exactly this reason, though any persistence mechanism which allows you to query on something other than saga Id would work just as well.

Let’s take a quick look at the saga data that we’ll be storing and see how simple it is:

    public class ShippingSagaData : ISagaEntity
    {
        public virtual Guid Id { get; set; }
        public virtual string Originator { get; set; }
        public virtual Guid OrderId { get; set; }
        public virtual Guid CustomerId { get; set; }
        public virtual List<Guid> ProductIdsInOrder { get; set; }
        public virtual bool CustomerHasBeenBilled { get; set; }
    }

You might have noticed the “Originator” property in there and wondered what it is for. First of all, the ISagaEntity interface requires the two properties Id and Originator. Originator is used to store the return address of the message that started the saga. Id is for what you think it’s for. In this saga, we don’t need to send any messages back to whoever started the saga, but in many others we do. In those cases, we’ll often be handling a message from some other endpoint when we want to possibly report some status back to the client that started the process. By storing that client’s address the first time, we can then “ReplyToOriginator” at any point in the process.

The manufacturing sample that comes with NServiceBus shows how this works.
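
For illustration only, a saga that does need to report back might look roughly like this. The message types are made up for this example, the handler interface name varies between NServiceBus versions, and the exact ReplyToOriginator signature may differ as well:

    // Hypothetical messages, not part of the article's sample.
    public class ShipmentDelayed : IMessage
    {
        public Guid OrderId { get; set; }
    }

    public class OrderShipmentDelayedNotification : IMessage
    {
        public Guid OrderId { get; set; }
    }

    // Pushes status back to whoever started the process, using the Originator
    // address that was captured when the saga began.
    public class ShippingSagaWithNotifications : Saga<ShippingSagaData>,
        IHandleMessages<ShipmentDelayed>
    {
        public void Handle(ShipmentDelayed message)
        {
            this.ReplyToOriginator(new OrderShipmentDelayedNotification { OrderId = this.Data.OrderId });
        }
    }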

Saga Lookup

Earlier, we saw the need to search for sagas based on order Id. The way to hook into the infrastructure and perform these lookups is by implementing “IFindSagas<T>.Using<M>” where T is the type of the saga data and M is the type of message. In our example, doing this using NHibernate would look like this:

    public class ShippingSagaFinder :
        IFindSagas<ShippingSagaData>.Using<OrderAccepted>,
        IFindSagas<ShippingSagaData>.Using<CustomerBilledForOrder>
    {
        public ShippingSagaData FindBy(CustomerBilledForOrder message)
        {
            return FindBy(message.OrderId);
        }

        public ShippingSagaData FindBy(OrderAccepted message)
        {
            return FindBy(message.OrderId);
        }

        private ShippingSagaData FindBy(Guid orderId)
        {
            return sessionFactory.GetCurrentSession().CreateCriteria(typeof(ShippingSagaData))
                .Add(Expression.Eq("OrderId", orderId))
                .UniqueResult<ShippingSagaData>();
        }

        private ISessionFactory sessionFactory;

        public virtual ISessionFactory SessionFactory
        {
            get { return sessionFactory; }
            set { sessionFactory = value; }
        }
    }

For a performance boost, we’d probably index our saga data by order Id.

On concurrency

Another important note is that for this saga, if both messages were handled in parallel on different machines, the saga could get stuck. The persistence mechanism here needs to prevent this. When using NHibernate over a database with the appropriate isolation level (Repeatable Read – the default in NServiceBus), this “just works”. If/When implementing your own saga persistence mechanism, it is important to understand the kind of concurrency your business logic can live with.

Take a look at Ayende’s example for mobile phone billing to get a feeling for what that’s like.

Summary

In almost any event-driven architecture, you’ll have services correlating multiple events in order to make decisions. The saga pattern is a great fit there, and not at all difficult to implement. You do need to take into account that events may arrive out of order and implement the saga logic accordingly, but it’s really not that big a deal. Do take the time to think through what data will need to be stored in order for the saga to be fault-tolerant, as well as a persistence mechanism that will allow you to look up that data based on event data.

If you feel like giving this approach a try, but don’t have an environment handy for this, download NServiceBus and take a look at the samples. It’s really quick and easy to get set up.



   

