Udi Dahan – The Software Simplist
Enterprise Development Expert & SOA Specialist

Archive for the ‘Development’ Category

Evolving Loosely-Coupled Frameworks & Apps

Wednesday, July 14th, 2010

This post will be less of the big-concept type I usually do, and more of a tip for people building and maintaining infrastructure and frameworks, whether open-source or internal to their companies. I’m going to illustrate this with NServiceBus, as it is a large enough code base to have significant complexity, and open, so you can go and take a look yourself. Any example small enough to include here would be too small to be useful or for the point to come across.

Some background

As a cohesive framework, NServiceBus makes it quite easy for developers to pick and choose which settings they want turned on and off. Being built as a loosely-coupled set of components that don’t know about each other has always kept the internal complexity low. But as the NServiceBus API has been evolving over the years, and the functionality offered has increased, some interesting challenges have popped up as the codebase has been refactored.

The challenge

The UnicastBus class has grown too large and it’s time to refactor something out. Coincidentally, users have been asking for a better “header” story for messages – the ability to specify static headers that will be appended to all messages being sent (useful for things like security tokens), as well as per message headers. So, we want to refactor all the header management out to its own component independent of the UnicastBus class.

So, here’s the issue. So far, users have specified “.UnicastBus()” as a part of the fluent code-configuration, and shouldn’t have to change that – they shouldn’t need to know that header management is now a separate component. But then how can the new component bootstrap itself into the startup, such that it gets all the dependency injection facilities of the rest of the framework? Remember that the component doesn’t know which container technology is being used (since the user can swap it out) or when the container has been set.

The solution

The only part of the framework that knows when all DI configuration is set is the configuration component, thus it will have to be the one that invokes the new component (without knowing about it). Introduce an interface (say INeedInitialization), scan all the loaded types looking for classes which implement that interface, register them into the container, and invoke them. Have the new component implement that interface, and in its initialization have it hook into the events and/or pipelines of other parts of the system.
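The mechanism can be sketched in a few lines. NServiceBus itself is C#; the following is only a hedged Python sketch of the pattern, and every name other than INeedInitialization is hypothetical:

```python
class INeedInitialization:
    """Marker interface: components that must hook into startup implement this."""
    def init(self, config):
        raise NotImplementedError


class HeaderManager(INeedInitialization):
    """Stand-in for the refactored-out header component from the post."""
    def init(self, config):
        # Hook into the send pipeline to append static headers to every message.
        config.outgoing_pipeline.append(self.apply_headers)

    def apply_headers(self, message):
        message.setdefault("headers", {})["token"] = "security-token"
        return message


class Config:
    def __init__(self):
        self.outgoing_pipeline = []

    def bootstrap(self):
        # "Scan all the loaded types" - here, all known implementors.
        for cls in INeedInitialization.__subclasses__():
            instance = cls()      # in the real thing: resolved via the container
            instance.init(self)   # the component wires itself in, unknown to Config


config = Config()
config.bootstrap()
msg = {"body": "hello"}
for step in config.outgoing_pipeline:
    msg = step(msg)
print(msg["headers"])  # the header component ran without Config referencing it
```

The point of the sketch: Config never names HeaderManager, so new components can bootstrap themselves without users changing their fluent configuration.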

Other uses

One historically problematic area in NServiceBus has been people forgetting to call “.LoadMessageHandlers()”. This can now be wired in automatically by a class in the UnicastBus component via the same mechanism.

A new feature coming in the next version is the “data bus”, a component which will allow sending large quantities of data through the bus without going through the messaging pipelines. This will help people get around the 4MB limit of MSMQ and, even more importantly, the much smaller 8KB limit of Azure. We will be able to introduce the functionality transparently with the same mechanism.

As an extension point, developers can now enrich the NServiceBus framework with their own capabilities and make those available via the contrib project to the community at large. This is better than the IWantToRunAtStartup interface that was only available for those using the generic host (which excluded web apps) and gives a consistent extensibility story for all uses.


Extensibility has always been a challenge when writing object-oriented code. Dependency injection techniques have helped, but sometimes you need a bit more to take things to the next level while maintaining a backwards-compatible API.

Like I said, not a ground-shaking topic but something quite necessary in creating loosely-coupled frameworks and applications. Once you know it’s there, it isn’t really a big deal. If you didn’t know to do it, you may have been contorting your codebase in all kinds of ways to try to achieve similar things.

If you want to take a look at the code, you can find the SVN repository here: https://nservicebus.svn.sourceforge.net/svnroot/nservicebus/trunk/

Server Naming and Configuration Conflicts

Saturday, June 5th, 2010

In my work with clients, the topic of how to handle the movement of software from one environment to another inevitably comes up. Sometimes this is in the context of NServiceBus, but the problem is more generic. The faster an organization is able to get software out the door, the more agile it can be.

Unfortunately, there is one tiny little mistake that I see almost everywhere that gets in the way, and that’s going to be the topic of this post.

The Problem

Let’s say you have a standard web app environment – some web servers, application servers, and a database server. Your web servers need to send messages to the application servers. So far, so good.

In your test environment, you have an application server called AS_01_Test, and your web servers are configured to send it messages. However, in your staging environment the application server fulfilling that same role is called AS_01_Stage. This creates a configuration problem – you need to change the config of your web servers as you move the web app from Test to Staging.

I’ve seen companies doing all sorts of creative things to get around this problem – some of them involve putting all configuration settings in a database so that they can be centrally managed and visualized. I’d like to suggest an alternative approach.

What if…

What if server names were the same across all environments?

Well, you wouldn’t need to change configuration as you moved the system between environments. That’s a good thing.
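For instance (illustrative only – these element names echo NServiceBus-style app.config mappings, but shouldn’t be read as the exact schema), an endpoint mapping like the one below can ship unchanged across Test, Staging, and Production, because each environment’s network resolves AS_01 to its own machine:

```xml
<!-- Shipped as-is to every environment: AS_01 resolves to that
     network's own application server, so no per-environment edits. -->
<UnicastBusConfig>
  <MessageEndpointMappings>
    <add Messages="MyMessages" Endpoint="orders@AS_01" />
  </MessageEndpointMappings>
</UnicastBusConfig>
```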

But how can that be? Wouldn’t there be a conflict if there were two machines with the same name?

The answer is that there wouldn’t be a conflict if the machines were on different networks. Not all machines have to be on the same network. We can set up as many networks / virtual networks as we like. And it is clear that we don’t need machines in one environment / network to talk to machines in another environment. I mean, under no circumstances would we want web servers in our test environment to talk to application servers in the production environment.

These separate networks provide much needed isolation, beyond solving the server naming problem.

In closing

It’s really a tiny thing when you think about it – multiple networks. But that’s exactly why software developers overlook it so often – because it’s not a “software solution” to the configuration problem we perceive as a “software problem”.

I wrote about related multi-environment configuration issues in this earlier post: Convention over Configuration – The Next Generation

I’m happy to say that this functionality is now in NServiceBus called “profiles” and you can read more about how they work here.

How are you handling the flow of moving software through to production? Leave your comments below.

CQRS isn’t the answer – it’s just one of the questions

Friday, May 7th, 2010

With the growing interest in Command/Query Responsibility Segregation (CQRS), more people are starting to ask questions about how to apply it to their applications. CQRS is actually in danger of reaching “best practice” status, at which point people will apply it indiscriminately, with truly terrible results.

One of the things that I’ve been trying to do with my presentations around the world on CQRS was to explain the why behind it, just as much as the what. The problem with the format of these presentations is that they’re designed to communicate a fairly closed message: here’s the problem, here’s how that problem manifests itself, here’s a solution.

In this post, I’m going to try to go deeper.

The hitchhiker’s guide to the galaxy

In this most excellent book, one of the things that struck me was the theme that made its way through the whole book – starting with the answer to life, the universe, and everything: 42. By the time you get to the end of the book, you find out that the real question of life, the universe, and everything is “what do you get when you multiply 6 by 9”. And that’s how the book leaves it.

To us engineers, we can’t just accept the fact that the book would say that 6*9 = 42 when we know it’s 54. After bashing our heads on the rigid rules of math, we realize that not all math problems are necessarily in base 10, and that if we switch to base 13, the number 42 is 4*13 + 2 = 54. So, the book was right – but that’s not the point.

What’s the point?

The hitchhiker’s guide is an example of a teaching technique which presents an apparent paradox, leaving the student to dig up unspoken and unthought assumptions in order to resolve it. Key to this technique are rigid rules which do not allow any compromise or shortcuts on the student’s part.

The purpose of this technique is not for the student to learn the answer, but to gain deeper understanding, which in turn changes the way they go about thinking about problems in the future.

So, when given the problem 4*5, we do not just immediately answer 20, instead we clarify in which numeric base the question is being phrased, and only then go to solve the problem. In base 13, the answer would be 17. In hex, the answer would be 14.
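The arithmetic is easy to check – a quick Python sketch, nothing more:

```python
# Render a decimal value in another base, to verify the numbers above.
def to_base(n, base):
    digits = "0123456789ABCDEF"
    out = ""
    while n:
        out = digits[n % base] + out
        n //= base
    return out or "0"

print(to_base(4 * 5, 13))  # "17": twenty, written in base 13
print(to_base(4 * 5, 16))  # "14": and in hex
print(int("42", 13))       # 54: the Guide's 6*9 = 42, read in base 13
```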

The externally visible change is that we know which questions to ask in order to arrive at the right answer – not that we know the answer ahead of time.

Making an “ass” out of “u” and “me”

Let’s start at the end – one of the unspoken assumptions that has been causing problems:

All businesses can be treated the same from the perspective of software.

In our previous example, we assumed that all math problems use base 10. It turns out that different bases are useful for different domains (like base 2 for computers). We can say similar things about degrees and radians in geometry. The more we look at the real world, the more we see this repeating itself. There’s no reason that software should be any different.

Base 10 is not a ubiquitous best practice. We shouldn’t be surprised that there really aren’t best practices for software either.

Here’s another problematic assumption:

“The business” can (and do) tell us what they need in a way we can understand.

So many software fads have been built on the quicksand of this assumption. OOAD – on verbs and nouns. 4GL and other visual tools that “the business” will use directly. SOA – on IT business alignment. I expect we haven’t seen the end of this.

Some of you may be wondering why this is false, others are sagely nodding their heads in agreement.

The myth of “the business”

Unless you have a single user, who is also the CEO paying for the development, there is no “the”. It’s an amalgam of people with different backgrounds, skills, and goals – there is no homogeneity. Even if no software was involved, many business organizations are dysfunctional with conflicting goals, policies, and politics.

To some extent, we technical people have hidden ourselves away in IT to avoid the scary world of business whose rules we don’t understand. With the rise in importance of information to the world, we’ve been pulled back – being forced to talk to people, and not just computers. Luckily, we’ve been able to create a buffer to insulate ourselves – we’ve taken the less successful technical people from our herd and nominated them “business analysts”. No, not all companies do it this way, but we do need to take a minute to reflect on how information flows between the Mars of the business and the Venus of IT.

On human communication

Even if we made this insulation layer more permeable, allowing and encouraging more technical people and business people to cross its boundary, we still need to deal with the problem of two humans communicating with each other. There are enough books that have been written on this topic, so I won’t go into that beyond recommending (strongly) to technical people to read (some of) them.

Rather, I’d like to focus on the environment in which these discussions take place. IT has been around long enough, and users have used computers long enough, that a certain amount of tainting has taken place. If the world were a trial, the evidence would have been thrown out as untrustworthy.

When users tell you what they want, they’re usually framing that with respect to the current system that they’re using. “Like the old system – but faster, and with better search, and more information on that screen, and…”

At this point, business analysts write down and formalize these “requirements” into some IT-sanctioned structure (use cases, user stories, whatever), at which point developers are told to build it. Users only know what they didn’t want when developers deliver exactly what was asked.

How can that be?

These are not the “requirements” you are looking for

Users ultimately dictate solutions to us, as a delta from the previous set of solutions we’ve delivered them. That’s just human psychology – writer’s block when looking at a blank page, as compared to the ease with which we provide “constructive criticism” on somebody else’s work.

We need to get the real requirements. We need to probe beyond the veneer:

  • Why do you need this additional screen?
  • What real-world trigger will cause you to open it?
  • Is there more than one trigger?
  • How are they different?
  • etc, etc, etc…

This is real work – different work than programming. It requires different skills. And that’s not even getting into the political navigation between competing organizational forces.

But let’s say that you don’t have (enough) people with these skills in your organization. What then?

Enter CQRS

CQRS gives us a set of questions to ask, and some rigid rules that our answers must conform to. If our answers don’t fit, we need to go back to the drawing board and move things around and/or go back to “the business” and seek deeper understanding there.

For each screen/task/piece of data:

  • Will multiple users be collaborating on data related to this task?
  • Look at every shred of raw data, not just at the entity level.
  • Are there business consistency requirements around groups of raw data?

If “the business” answers no – ask them if they see that answer changing, and if so, in what time frame, and why. What changing conditions in the business environment would cause that to change – what other parts of the system would need to be re-examined under those conditions.

After understanding all that, if you find a true single-user-only thing, then you can use standard “CRUD” techniques and technologies. There are no inherent time-propagation problems in a single-user environment – so eventual consistency is beyond pointless; it actually makes matters worse.

On the other hand, if the business-data-space is collaborative, the inherent time-propagation of information between actors means they will be making decisions on data that isn’t up-to-the-millisecond-accurate anyway. This is physics, gravity – you can’t fight it (and win).

The rule for collaboration

Actors must be able to submit one-way commands that will fail only under exceptional business circumstances.

The challenge we have is how to achieve the real business objectives uncovered in our previous “requirements excavation” activities and follow this rule at the same time. This will likely involve a different user-system interaction than those implemented in the past. UI design is part of the solution domain – it shouldn’t be dictated by the business (otherwise it’s like someone asking you to run a marathon, but also dictating how you do so, like by tying your shoelaces together).
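To make the rule concrete, here is a hedged sketch – plain Python with a queue standing in for the transport; none of these names are an NServiceBus API. The client fires the command without waiting for a reply, and because routine validation happens before submission, a server-side rejection is a genuinely exceptional business event:

```python
import queue

# The "transport": commands travel one-way; no reply is awaited.
command_queue = queue.Queue()

def submit(command):
    """Fire-and-forget: the caller gets no success/failure response."""
    command_queue.put(command)

def handle(command, inventory):
    """Server-side handler. Routine validation (bad quantity, unknown SKU)
    was done client-side before submit, so failure here is exceptional -
    e.g. two actors racing for the last items in stock."""
    sku, qty = command["sku"], command["qty"]
    if inventory.get(sku, 0) < qty:
        return {"event": "OrderRejected", "sku": sku}  # the rare case
    inventory[sku] -= qty
    return {"event": "OrderAccepted", "sku": sku}

inventory = {"widget": 10}
submit({"sku": "widget", "qty": 3})           # returns immediately
event = handle(command_queue.get(), inventory)
print(event["event"], inventory["widget"])    # OrderAccepted 7
```

The design point is that the UI interaction is reshaped so the happy path needs no round-trip, which is exactly what frees the system to be collaborative and eventually consistent.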

Many of the technical patterns I described in my previous blog post describe the tools involved. BTW, hackers can be considered “exceptional actors” – the business actually wants their commands to fail.

In Summary

The hard and fast rule of CQRS about one-way commands is relevant for collaborative domains only. This domain has inherent eventual consistency – in the real world. Taking that and baking it into our solution domain is how we align with the business.

The process we go through, until ultimately arriving at one-way-almost-always-successful-commands is business analysis. Rejecting pre-formulated solutions, truly understanding the business drivers, and then representing those as directly as possible in our solution domain – that’s our job.

After doing this enough times and/or in more than one business domain, we may gain the insight that there is no cookie-cutter, one-size-fits-all, best-practice solution architecture for everything. Each problem domain is distinct and different – and we need to understand the details, because they should shape the resulting software structure.

The next time the business tells us to implement 42, we’ll use CQRS along with other questioning techniques until we can get “6 x 9” out of them, learning from the exercise what the significant and stable parts of the business are – ultimately helping us to “build the right system, and to build the system right”.

Don’t Panic 🙂

On MS, OSS, and Java

Saturday, May 1st, 2010

It appears that my last post caught a lot of people’s attention, with responses online and offline from people in the community as well as inside Microsoft. Some read it as a criticism of Microsoft. Others found it rang true with their experiences, particularly in their interactions with technological decision makers. One thing I’d like to do in this post is to broaden the scope of the discussion to include the Java side as well, as many in the enterprise space are working in a multi-platform/multi-vendor environment. Let’s start with some history.

Java takes the enterprise

When Java originally came out, it was an interesting language that you could use to write applets – code that would run the same everywhere, in the browser, on the desktop, etc. SUN was the keeper of Java. And then came this concept of a container – the thing that would run your Java code, which then grew to handle things like transactions, and became the Enterprise Java Bean – EJB, and that came out of IBM, with SUN adopting it later.

The adoption of Java at that point was important enough that the specs were opened up, and many EJB technologies blossomed. With backing from big companies already inside the enterprise, the only possible fight came from COBOL around Y2K, but that was a dying gasp. Microsoft wasn’t in the game as Windows NT wasn’t competition for UNIX or mainframes.

Multi-vendor as a way of life

With multiple big and medium-sized vendors offering similar, competing, and complementary technologies, all tied together by the promise of backwards compatibility in Java and the specs demanding interoperability, customers could safely go for best-of-breed solutions. This forced technological decision makers to truly evaluate the offerings on merits, not just lineage.

Many attribute the rise of OSS in Java to the fact that the existing containers were so heavyweight. I believe that was a secondary effect. As a result of the fact that the industry had embraced and internalized the values of thinking and choosing for itself, it was willing to look at alternatives with much humbler lineage, ultimately using them on their merits. It was the culture.

The Microsoft ecosystem

This culture was practically nonexistent on the Microsoft side of the border. As the only vendor, Microsoft was put on a pedestal – it was the best, period. The industry hungrily looked to Seattle not only for technology, but also for guidance and leadership. If a developer could get a job at Microsoft, they were “hot stuff”, the best of the best. This isn’t a bad thing – it was just a thing.

This enabled technological decision makers on the Microsoft side to have much shorter thought and decision processes than their counterparts on the Java side.

All of these things got baked into the culture.

About Microsoft

As with all who were ever put on a pedestal, the fall was only a matter of time. With expectations that high, it was inevitable. You can’t make all people happy all the time, the conditions in the industry were changing, and the company had to change to remain competitive.

Let me say this clearly: Microsoft was not at fault.

Sure, it’s easy to say in retrospect that they should have communicated more clearly about this, or built that technology differently. If you haven’t yet worked in a big company, you may not know this, but big companies aren’t just bigger small companies. They’re a hodge-podge of competing agendas, initiatives, politics, people, and power. There’s a saying that things only get done despite the organization’s best efforts.

For a company Microsoft’s size, what they manage to get done is incredible.

On acquisitions and OSS

Microsoft has come under fire over the years for offering their own implementations of open source technologies – as if the vendors on the Java side didn’t do the same. The Java world was ultra-competitive; the big vendors would eat promising upstarts in order to win back lost contracts with key customers. This made technological decision makers broaden their thought processes to include risk management as part of managing their technology portfolio. To a large extent, this actually justifies the existence of a C-level role related to technology – the CIO.

Chief Officers of Information and Technology

I found it interesting to see the difference in age, experience, background, and thought processes between people holding the CIO title at organizations that were Microsoft-centric and those with a more heterogeneous technology investment. This was likely influenced to a large extent by the history of technological evolution, age and size of organizations with the resulting culture and hiring practices, among other things. This pattern continued with the CTO as well.

Obviously one wouldn’t expect the same thought processes in the CTO of a 20 person IT shop and the CTO of Ford Motor Company (for example). They shouldn’t be the same.

It appeared that as Microsoft became more focused on innovation they started listening more to the technology leaders of smaller companies, not a bad thing by itself. Choosing A means not choosing B, and in order to stay competitive, a choice must be made.

Fast forward

I think that what happened was necessary, and will be good for the industry and Microsoft. Technological decision making in companies that were traditionally Microsoft-centric has evolved. This has clarified Microsoft’s role as a platform vendor who can be trusted, and whose tools can be used or not used as the situation dictates, with comparable commercial and OSS tooling evaluated on the same criteria.

Just as IBM reinvented itself and now occupies a sustainable role in a combined commercial and OSS ecosystem of platforms, tools, and services, Microsoft appears to have made several big strides partnering with the community in much more productive ways, yet with more strides to be made as well.

A challenge to OSS on the Microsoft platform

In this new and more mature environment, OSS can’t remain the same either. Some code a developer whipped up in their free time and put in a publicly accessible repository with a decent license just won’t cut it any more.

In my previous post I called out the Linq2Sql support story – the same goes for OSS. Active development is required, and so is support, and so is documentation. The commitment needs to be much more serious.

Also, until usage reaches some critical mass, it is unlikely that a single developer or even a small group of committers will be able to do it without the help of the community. Really the only alternative is for there to be some commercial story that can fund it – support, consulting, training, commercial add-ons, etc. A combination of community (both dev and use) and commercial cash-flow is probably most sustainable.

If you are running an OSS project, understand that these criteria will be used to evaluate it.

In closing

I think I’ve managed to alienate previous supporters from all sides.

I believe we are entering some interesting times, where not only are vendors and OSS projects being evaluated differently than in the past, but that traditional architectural paradigms are changing as well.

Regardless of what the answers are, I’m happy that more of us are asking more questions. Some questions are the right questions, some are the wrong ones, and sometimes we just ask at the wrong time, but as an industry I think that we’re getting better.

Thanks for reading.

Thoughts on Microsoft History and OSS

Friday, April 23rd, 2010

It’s coming up on 4 years now that I’ve been running NServiceBus. How the time flies. As it has worked its way into the critical infrastructure of many organizations, more and more managers have been asking me questions about how this .NET OSS thing works. In this post, I’ll try to answer that question based on the history of .NET.

Although I have been privy over the years to see behind the veil at Redmond, I will be focusing primarily on the externally visible actions of Microsoft and how the industry has reacted on average – specifically in the enterprise space, where I spend most of my time.

What managers are concerned about

  • When there’s a problem with this technology, who will support us?
  • How will that change when the author of the technology moves on or loses interest?
  • How long can we expect to be using this technology?
  • How long will it take to learn? When will we recoup our investment?

Traditionally, for companies on the Microsoft platform, choosing Microsoft as their primary technology vendor has been the safe bet. As a large company, Microsoft can afford to employ support engineers. These engineers are different from the ones who wrote the technology to begin with. Also, Microsoft has 10 year support guarantees. As a result, companies expected to use the technology for at least that long. With their size, Microsoft also had the ability to create copious amounts of documentation easing learning.

To turn the common phrase, nobody got fired for choosing Microsoft.

Open source seemed like risky business, at the time.

What changed?

It started with the Composite Application Block – CAB.

This technology was put out by the Patterns and Practices (P&P) group at Microsoft. After it came out, the gears of big marketing machines at Microsoft got to work, telling the industry about this great new thing at conferences, and the Microsoft Consulting Services (MCS) folks started using it with clients. CIOs at large companies sent developers to training on CAB by the dozen.

And then CAB was dead. Sorry. Microsoft prefers the word “done” to dead.
Just like Don Box said that COM wasn’t dead, it was done.

“What about that 10 year support thing?”, asked the CIOs (among many others).

You don’t understand, said Microsoft. You see, P&P are not a product group in Microsoft. They don’t have the resources to provide that kind of support. And the rest of the company isn’t obliged to support what they put out – since they’re not a product group.

You could literally hear the collective jaw of the Microsoft part of the industry drop.

This wasn’t supposed to happen (like housing prices in the US).

And then came .NET 3.0, and things seemed to go back to normal.

There were lots of wonderful things that came with it – like LINQ, and…

Linq to Sql

This was BIG.
Finally – after ObjectSpaces was promised at PDC ’03 and later shelved, then WinFS (same story), it had arrived.
The object-relational mapper (ORM) from Microsoft.

The marketing machine went into high gear – it was the v3 promise. Linq2Sql (L2S) came from a product group. This was serious. It was shown at conferences and user groups all over the world. Developers were sent to be trained on it.

And then Entity Framework (EF) was announced – and L2S was done.

CIOs were rubbing their eyes in disbelief. “But this is from a product group – you have to support it, right?”

Well, said Microsoft, yes. We will support it with our support org. It’s just that we won’t continue to develop new features for it in the product org. We actually have a different sub-org that will be working on EF and they’re under a different division (SQL Server) than the sub-org that made L2S.

Jaws were hitting the pavement. All that investment erased. Just like that. It turns out that support without active development doesn’t mean very much, no matter how many support engineers are involved.

But the industry picked itself back up, and got back to work on the new foundational pieces that came with .NET 3.0. If Microsoft is calling it “Foundation”, that must mean they’re committed to it.

And then came Workflow Foundation

Workflow Foundation (WF) was a dream come true. At last, Model-Driven Development (MDD) had come to the Microsoft platform. Drag-and-drop on a whole new level. Imagine the reuse. Imagine the maintainability. Programming at higher levels of abstraction. A marketer’s dream. This was the culmination of the previous efforts of Whitehorse in VS2005 and the Software Factories Initiative – or so we were told.

And then came .NET 3.5 – and the new WF wasn’t backwards compatible with the old one.
And then it happened again with .NET 3.5 SP1, and again with 4.0 (no more state machine – no wait, it’s back again, but not in the box).

All those companies that had long-running workflows in production needed to manually migrate them each time.

But this had become old news to the folks using Microsoft in the enterprise.
With Microsoft – you really can’t be sure any more.

What about open source?

Well, it was beginning to look more and more stable in comparison.
Especially the larger, more established projects. Those with active development.
Log4Net. NHibernate. Castle. etc.

There was also the fact that most enterprises were heterogeneous anyway – doing both Java and .NET development. Open source tools and frameworks were common in the Java space, politically greasing the wheels for .NET OSS in those organizations.

Boasting features and capabilities several years ahead of what was coming out of Microsoft, more companies gave them a chance, and were pleasantly surprised. And the virtuous cycle of OSS gained speed. With more use, they became even more stable and gained even more features, driving yet more use. Blog posts about them bloomed all over the web. User group presentations were given. At a presentation I gave at TechEd Europe 2006, I used NHibernate in my demo.

OSS had crossed the chasm. No, not everybody used it, or knew of it, but a critical mass of the industry had grown to depend on it.

Microsoft’s actions over the years had done more for OSS adoption than I think many would have imagined.

On Pub/Sub, Messaging, and SOA

Interestingly enough, when the first version of WCF was still in the oven (then called Indigo) there were discussions on whether it would support publish/subscribe messaging. Here we are, 3 versions later, and still no pub/sub, but the discussions continue 😉

Message brokers were always important for enterprises in the Java space – IBM MQ, Tibco RV, Sonic; companies were paying millions for this stuff. Microsoft had MSMQ – finally at v3 with XP and Server 2003, but still with insignificant market penetration. BizTalk did have a good run, though – not so much as a message broker, but more as an integration and orchestration engine, unfortunately coming late to the Enterprise Application Integration (EAI) party.

Later, the SQL Server guys came with Service Broker – messaging in the database. But you could see that their heart wasn’t in it. The API was clunky. It still didn’t have pub/sub. There was no binding available for WCF.

After some time making noise in the SOA space with Oslo, that changed as well. Oslo is now SQL Server Modeling.

I imagine that some of the adoption pick-up with NServiceBus can be attributed to the vacuum Microsoft left behind when exiting the messaging/pub-sub/soa space. The way NServiceBus aligns with the principles found in the corresponding Java technologies makes it very palatable to enterprises working with both platforms.

The fact that there’s active development, a vibrant and growing community, and even training available definitely contribute as well.

In closing

I don’t fault Microsoft for any of this. There are a million things that they could have done. Choosing to do one thing means choosing not to do many others. The decisions they made were done with the best intentions. Hindsight is 20/20, of course.

And that’s just it – we do need to take a look back.

If you’re a manager making a technology related decision, or are working with managers in those positions, knowing the history of today’s technology can give you a more accurate representation of the risk involved in each choice. Also, understanding the vector that Microsoft has decided to take in various areas is critical, especially if you find out that your architectural choices aren’t quite aligned with some of those vectors.

In this post, I’ve tried not to take a stance on whether a certain approach (ORM, Pub/Sub, etc) is good or bad, or even getting into which cases it’s appropriate or not – just to describe Microsoft’s externally visible behavior in that space.

I hope that this short history lesson can help your organization make the right technology decisions in the future for its specific context. Your comments and thoughts are most welcome, as always.

On Design for Testability

Sunday, April 18th, 2010

At almost every conference, event, training, or consulting engagement, someone asks for my opinion on the whole design-for-testability thing. I’m not quite sure why I haven’t blogged on this topic, especially at the time when a lot of the other bloggers were weighing in, but better late than never.

Before getting into that, I want to start with a slightly broader scope of discussion.

You see, I get asked about “best practices” on all sorts of things. And I try not to be the kind of consultant that responds with “it depends”, but the context of the question often makes the answer irrelevant. And the unspoken context of a best-practice question is:

Given infinite time and budget

The biggest problem that I see with well-intentioned, best-practices-following developers and architects is that they don’t ask the question “is this the right thing for us to be focusing on right now?” Understandably, that is a difficult question to answer – but it needs to be asked, since you don’t have infinite time or budget to do everything according to best practices (assuming those even exist).

About testing

The biggest issue I have with the “design for testability” topic is the extremely narrow view it takes of the word “testability”, usually in the form of more code written by a developer which invokes the production code of the system, also known as “unit tests”.

There are many different kinds of testing – unit, integration, functional, load, performance, exploratory, etc… where some may be automated and others not. Should we not discuss what “design for testability” means for not-just-unit-testing?

And what’s the point of testing anyway?

It’s not to find bugs.

Research has shown that testing (of all kinds) is not the most effective way of finding bugs. I don’t have the reference handy but I’m pretty sure that it’s from Alistair Cockburn’s work. Code reviews are (on average) about 60% more effective.

Don’t get me wrong – testing can provide indications that the software has bugs in it, but not necessarily where in the code those bugs are.

The purpose of testing is to provide quantitative and qualitative information about the system that can help various stakeholders in their decision-making processes. The relevance of that information indicates the quality of the testing. Here are some examples:

  • The system supports 100 concurrent users, with the expected user-type distribution (X% role A, Y% role B, etc), performing expected use-case distributions, and collaboration scenarios.
  • Time to proficiency for new users in role A is expected to be 3 days
  • Alternate #2 of use case #12 fails on step #3

As you can see, the relevance of the above information is dependent on what decisions the various stakeholders need to make. The bullet on load can help us decide if more machines are needed or if developers need to tune the performance of the systems. The bullet on time to proficiency can help us decide if larger investment in usability is required. Information like the last bullet can be used in conjunction with the first two to decide on the timing and type of a release.

The timeliness of this relevant information is critical to the success of a project.

Choosing which and how much of the various testing activities to perform when is something that needs to be revisited several times throughout the lifetime of a project, taking into account the current risks (threats and probabilities) and time and resource investment to mitigate them.

Let me reiterate – we’re not going to have enough time to do everything.

On iterations

If the only part of your organization doing iterations is your developers, you’re not agile.

In order to capitalize on the information that testers are providing, you need them in your iterations.

The same goes for the other roles involved in the project – business analysts, DBAs, sysadmins, etc.

I know that 99% of organizations aren’t structured in a way to do this.

I never said doing this would be easy.

On design

Figuring out what kind of design and how much to do when is just as important, and just as hard. Design for testability is one part of that, but not the only one, or necessarily the most important one at any point of time.

Within that design for testability topic is the “design for unit-testing” sub-topic which seems to be the popular one. Before getting into the design aspects of it, let’s take a closer look at the unit-testing side of things.

On unit-testing

The assumption is that having more unit tests will lead to a code-base with less bugs, thus requiring shorter time to get the system into production, which will pay back the time it took to write those unit tests to begin with.

In practice, what tends to happen is that as development progresses, testing code breaks as the structure of the production code changes. One of two things then happens: the testing code is either removed or rewritten. In either case, we didn’t get the return on investment we expected on the original testing code. Unfortunately, rare is the case where the relevant people in the organization understand why, resulting in the same situation repeating itself over and over again.

Those projects would have been better off without unit testing, though the organization as a whole might have used those experiences to learn and improve. It’s been my experience that if the organization wasn’t conscious enough in the context of the project to notice the situation, it is unlikely to do so at higher levels.

On fragile unit tests

The reason that a unit test ends up being rewritten (or removed) is that its code was coupled to the production code in such a way that it broke when the production code changed. This tendency to break (fragility) is a critical property of a unit test. A fragile unit test will slow down a developer doing work on some existing code – it actually makes the system less maintainable.

For unit-test code to be stable (not fragile), it needs to be coupled to stable properties of the production code. Whether the production code is designed in such a way that it has stable properties is a design question. Is it a unit? If not, you will not be able to write a unit test against it.

And anyway, who said that every class is a unit, or should be a unit? Domain models (when done right) are good examples of a unit, yet the classes that make them up may not be units. Unit-testing should only be attempted with things which are units.

I think too much weight is put on whether a dependency of a class is a concrete or interface type, and not nearly enough on the nature of the dependency. I wouldn’t blame the hammer for pounding my thumb, and by the same token I think that blame should not be directed towards tools like those from TypeMock.

On tools

There is so much more depth to both design and testability that needs to be more broadly understood. No tool has yet been created to handle either design or testing in such a way that humans can give up responsibility for the outcome.

Over the years I’ve noticed that tools are most significant when used by skilled practitioners, which makes sense in retrospect. Giving a novice carpenter a laser-guided saw probably won’t significantly change the outcome of their work. Ultimately, the skilled practitioners are the ones that create tools – not the novices. And no tool, no matter how advanced, will make a novice perform at levels like the skilled practitioner.

In the case of a project too big for a single skilled practitioner to complete in the time required (or at all), the balance of importance shifts away from tools to the project management topics described above.

In summary

I hope that this post has shed some light on the context in which decisions with respect to testing need to be made. Design is one activity that can support certain kinds of testing, but not the only one, or even the most important one for the given type of testing necessary at that time in the project.

Design is hard. Project management is hard. Testing is hard.

Getting the right mix of people that together have enough experience and skills in these activities isn’t easy.

Don’t expect that sprinkling some interfaces in your code base will be enough.
That doesn’t count much in the way of design, just as writing code in a testing namespace doesn’t count much in the way of testability.

Looking forward to hearing your comments.

On Small Applications

Sunday, March 7th, 2010

I hear this too often: “X sounds like a great pattern, but it’s overkill for small applications”. Many patterns have been subjected to this including (but not limited to): SOA, DDD, CQRS, ORM, etc. Often the statement is made by a person without experience in the given pattern (though possibly experienced in other patterns). Let’s take a look at the second part – the “small application”, and ask:

What makes an app small?

Or inversely, what makes an app warrant the “enterprise” moniker?

If there’s one thing that the history of our industry has shown repeatedly, it’s that developers aren’t particularly accurate with their estimates. Like, orders-of-magnitude inaccurate. Knowing this, it’s surprising that the “small app” argument seems to win so many debates. The same goes for justifications in the form of “we’ve got to have an X, this is a BIG project”.

So, what makes an app small?

Is it a small number of lines of code? Well, what if those lines of code are keeping planes in the air?

Is it a small number of developers? Same as above. Actually, history has shown that some of the most valuable bits of code written were done by small numbers of developers.

Is it that it will only be installed on a single machine?

Is it…

What could it be?

The real issue

The small app argument is a diversionary tactic.

Loosely translated, it means “I’m comfortable where I am and I don’t want to change”.

Moving on…

The real story of size

Once we actually look at the specific context of an app, we tend to see that someone cares a great deal about it, enough to finance its custom development – rather than buying an off-the-shelf alternative. The expected lifetime of business use is easily 3-5 years, if not 7-10, during which many enhancements will likely be requested. Thus, some non-functional properties of the code matter – at the very least maintainability.

In which case, if the given pattern or approach does significantly improve the desired non-functional properties of the app, it only makes sense to use it.

There is one class of software that might possibly be treated as “small” – the one-off script that’s written to automate some IT task. And even then, so many of these scripts end up living longer than the apps themselves that they should be engineered at the same level of quality.

In closing

Don’t counter a “small app” argument with psychology.
It will only make matters worse.

Instead, rephrase the issue around the lifetime of business use.

I’ve found that there are precious few cases where the harsh light of reality doesn’t help the appropriate decisions be made. If indeed this is a small-lifetime-app, just drag-and-drop until you’re done. Otherwise, the time it takes to understand and evaluate the applicability of the given patterns will definitely pay itself back many times over the life of the app.

And managers, keep your ears open for it. The technical risks behind that statement are icebergs waiting to sink your project.

* with thanks to Mike Nichols for pushing my buttons.

Don’t Delete – Just Don’t

Tuesday, September 1st, 2009

After reading Ayende’s post advocating against “soft deletes” I felt that I should add a bit more to the topic, as some important business semantics were missing. As developers debate the pertinence of using an IsDeleted column in the database to mark deletion, and weigh how this relates to reporting and auditing concerns, the core domain concepts rarely get a mention. Let’s first understand the business scenarios we’re modeling – the why behind them – before delving into the how of implementation.

The real world doesn’t cascade

Let’s say our marketing department decides to delete an item from the catalog. Should all previous orders containing that item just disappear? And cascading farther, should all invoices for those orders be deleted as well? Going on, would we have to redo the company’s profit and loss statements?

Heaven forbid.

So, is Ayende wrong? Do we really need soft deletes after all?

On the one hand, we don’t want to leave our database in an inconsistent state with invoices pointing to non-existent orders, but on the other hand, our users did ask us to delete an entity.

Or did they?

When all you have is a hammer…

We’ve been exposing users to entity-based interfaces with “create, read, update, delete” semantics in them for so long that they have started presenting us requirements using that same language, even though it’s an extremely poor fit.

Instead of accepting “delete” as a normal user action, let’s go into why users “delete” stuff, and what they actually intend to do.

The guys in marketing can’t actually make all physical instances of a product disappear – nor would they want to. In talking with these users, we might discover that their intent is quite different:

“What I mean by ‘delete’ is that the product should be discontinued. We don’t want to sell this line of product anymore. We want to get rid of the inventory we have, but not order any more from our supplier. The product shouldn’t appear any more when customers do a product search or category listing, but the guys in the warehouse will still need to manage these items in the interim. It’s much shorter to just say ‘delete’ though.”

There seem to be quite a few interesting business rules and processes there, but nothing that looks like it could be solved by a single database column.

Model the task, not the data

Looking back at the story our friend from marketing told us, his intent is to discontinue the product – not to delete it in any technical sense of the word. As such, we probably should provide a more explicit representation of this task in the user interface than just selecting a row in some grid and clicking the ‘delete’ button (and “Are you sure?” isn’t it).

As we broaden our perspective to more parts of the system, we see this same pattern repeating:

Orders aren’t deleted – they’re cancelled. There may also be fees incurred if the order is cancelled too late.

Employees aren’t deleted – they’re fired (or possibly retired). A compensation package often needs to be handled.

Jobs aren’t deleted – they’re filled (or their requisition is revoked).

In all cases, the thing we should focus on is the task the user wishes to perform, rather than on the technical action to be performed on one entity or another. In almost all cases, more than one entity needs to be considered.


In all the examples above, what we see is a replacement of the technical action ‘delete’ with a relevant business action. At the entity level, instead of having a (hidden) technical WasDeleted status, we see an explicit business status that users need to be aware of.

The manager of the warehouse needs to know that a product is discontinued so that they don’t order any more stock from the supplier. In today’s world of retail with Vendor Managed Inventory, this often happens together with a modification to an agreement with the vendor, or possibly a cancellation of that agreement.

This isn’t just a case of transactional or reporting boundaries – users in different contexts need to see different things at different times as the status changes to reflect the entity’s place in the business lifecycle. Customers shouldn’t see discontinued products at all. Warehouse workers should, that is, until the corresponding Stock Keeping Unit (SKU) has been revoked (another status) after we’ve sold all the inventory we wanted (and maybe returned the rest back to the supplier).
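The idea of replacing a hidden technical flag with an explicit business status can be sketched in a few lines. This is a minimal, hypothetical example – ProductStatus, Discontinue, and the guard rule are illustrative names, not from any particular codebase:

```csharp
using System;

// Hypothetical sketch: an explicit lifecycle status instead of an IsDeleted flag.
public enum ProductStatus
{
    Active,
    Discontinued, // hidden from customers, still visible to the warehouse
    Revoked       // SKU retired after remaining inventory is sold or returned
}

public class Product
{
    public ProductStatus Status { get; private set; }

    public void Discontinue()
    {
        if (Status != ProductStatus.Active)
            throw new InvalidOperationException(
                "Only active products can be discontinued.");

        Status = ProductStatus.Discontinued;
        // a domain event raised here would let purchasing stop supplier orders
    }
}
```

A new Product starts out Active (the first enum member); calling Discontinue moves it to a status the warehouse can still see, while the catalog simply filters on Status rather than deleting rows.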

Rules and Validation

When looking at the world through over-simplified-delete-glasses, we may consider the logic dictating when we can delete to be quite simple: do some role-based-security checks, check that the entity exists, delete. Piece of cake.

The real world is a bigger, more complicated cake.

Let’s consider deleting an order, or rather, canceling it. On top of the regular security checks, we’ve got some rules to consider:

If the order has already been delivered, check if the customer isn’t happy with what they got, and go about returning the order.

If the order contained products “made to order”, charge the customer for a portion (or all) of the order (based on other rules).

And more…

Deciding what the next status should be may very well depend on the current business status of the entity. Deciding if that change of state is allowed is context and time specific – at one point in time the task may have been allowed, but later not. The logic here is not necessarily entirely related to the entity being “deleted” – there may be other entities which need to be checked, and whose status may also need to be changed as well.
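A sketch of what that status-dependent logic might look like – the statuses, the fee, and the rules here are purely illustrative, not a real policy:

```csharp
using System;

// Hypothetical sketch: cancellation as a guarded status transition,
// not a row deletion.
public enum OrderStatus { Placed, Shipped, Delivered, Cancelled, AwaitingReturn }

public class Order
{
    public OrderStatus Status { get; private set; }
    public bool ContainsMadeToOrderItems { get; set; }
    public decimal CancellationFee { get; private set; }

    public Order(OrderStatus initialStatus) { Status = initialStatus; }

    public void Cancel()
    {
        switch (Status)
        {
            case OrderStatus.Placed:
                Status = OrderStatus.Cancelled; // nothing shipped yet
                break;
            case OrderStatus.Shipped:
                if (ContainsMadeToOrderItems)
                    CancellationFee = 50m; // charge a portion (illustrative rule)
                Status = OrderStatus.Cancelled;
                break;
            case OrderStatus.Delivered:
                Status = OrderStatus.AwaitingReturn; // begin the returns process
                break;
            default:
                throw new InvalidOperationException(
                    "Cannot cancel an order in status " + Status);
        }
    }
}
```

Note that whether the transition is allowed, and what it costs, depends entirely on where the order currently is in its lifecycle.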


I know that some of you are thinking, “my system isn’t that complex – we can just delete and be done with it”.

My question to you would be, have you asked your users why they’re deleting things? Have you asked them about additional statuses and rules dictating how entities move as groups between them? You don’t want the success of your project to be undermined by that kind of unfounded assumption, do you?

The reason we’re given budgets to build business applications is because of the richness in business rules and statuses that ultimately provide value to users and a competitive advantage to the business. If that value wasn’t there, wouldn’t we be serving our users better by just giving them Microsoft Access?

In closing, given that you’re not giving your users MS Access, don’t think about deleting entities. Look for the reason why. Understand the different statuses that entities move between. Ask which users need to care about which status. I know it doesn’t show up as nicely on your resume as “3 years WCF”, but “saved the company $4 million in wasted inventory” does speak volumes.

One last sentence: Don’t delete. Just don’t.

Convention over Configuration – The Next Generation?

Saturday, August 15th, 2009

Convention over configuration describes a style of development made popular by Ruby on Rails which has gained a great deal of traction in the .NET ecosystem. After using frameworks designed in this way, I can say that the popularity is justified – it is much more pleasurable developing this way.

The thing is, when looking at this in light of the full software development lifecycle, there are signs that the waters run deeper than we might have originally thought.

Let’s take things one step at a time though…

What is it?

Wikipedia tells us:

“Convention over Configuration (aka Coding by convention) is a software design paradigm which seeks to decrease the number of decisions that developers need to make, gaining simplicity, but not necessarily losing flexibility. The phrase essentially means a developer only needs to specify unconventional aspects of the application.”

What this means is that frameworks built in this way have default implementations that can be swapped out if needed. So far so good.

For example…

In NServiceBus, there is an abstraction for how subscription data is stored and multiple implementations – one in-memory, another using a durable MSMQ queue, and a third which uses a database. The convention for that part of the system is that the MSMQ implementation will be used, unless something else is specified.

Developers wishing to specify a different implementation can specify the desired implementation in the container – either one that comes out of the box, or their own implementation of ISubscriptionStorage.
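The mechanics behind this are simple enough to sketch. The following is not the actual NServiceBus API – just an illustration of the pattern, with all names hypothetical: the framework falls back to its conventional default unless the user has registered something else.

```csharp
using System;

// Illustrative sketch of convention over configuration for subscription storage.
public interface ISubscriptionStorage { }
public class MsmqSubscriptionStorage : ISubscriptionStorage { }     // the convention
public class InMemorySubscriptionStorage : ISubscriptionStorage { } // an alternative

public class Configure
{
    private Type subscriptionStorage;

    // The user's explicit, "unconventional" choice
    public Configure SubscriptionStorage<T>() where T : ISubscriptionStorage, new()
    {
        subscriptionStorage = typeof(T);
        return this;
    }

    // Falls back to the conventional default when nothing was specified
    public ISubscriptionStorage BuildSubscriptionStorage()
    {
        var type = subscriptionStorage ?? typeof(MsmqSubscriptionStorage);
        return (ISubscriptionStorage)Activator.CreateInstance(type);
    }
}
```

Calling `new Configure().BuildSubscriptionStorage()` yields the MSMQ default; chaining `.SubscriptionStorage<InMemorySubscriptionStorage>()` first swaps in the alternative.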

Things get more interesting when we consider the full lifecycle.

Lifecycle effects

When developers are in the early phases of writing a new service, they want to focus primarily on what the service does – its logic. They don’t want to muck around with MSMQ queues for storing subscriptions and would much rather use the in-memory storage.

As the service takes shape and the developers want to run the full service on their machine, possibly testing basic fault-tolerance behaviors – kill one service, see that the others get a timeout, bring the service back up – they want it to maintain all the previous subscriptions, which makes the durable MSMQ storage a better fit.

Moving on from there, our developers want to take the same system they just tested on their machine and move it into a staging environment. There, they don’t want to use the MSMQ implementation for subscription storage, but rather the database implementation – as will be used in the production environment.

While it may not sound like a big deal to change the code which specifies which implementation to use when moving from one environment to another, consider that on top of subscription storage there is also logging (output to console, file, db?), saga persistence (in-memory, file-based DB, relational DB), and more.

It’s actually quite likely that something will get missed as we move the system between environments. Can there be a better way?

What if…

What if there was some way for the developer to express their intent to the system, and the system could change its conventions, without the developer having to change any code or configuration files?

You might compare this (in concept) to debug builds and release builds. Same code, same config, but the runtime behaves differently between the two.

As I mulled over how we could capture that intent without any code or config changes, the solution that I kept coming to seemed too trivial at first, so I dismissed it. Yet, it was the simplest one that would work for console and WinForms applications, as well as windows services – command line arguments. The only thing is that I don’t think those are available for web applications.

But since we’re still in “what if” land, and I’m more thinking out loud here than providing workable solutions for tomorrow morning, let’s “what if” command line arguments worked for web apps too.

Command-Line Intent

Going back to our original scenario, when developers are working on the logic of the service, they run it using the generic NServiceBus host process, passing it the command line parameter /lite (or whatever). The host then automatically configures all the in-memory implementations.

As the system progresses, when the developer wants to run everything on their machine, they run the processes with /integration. The host then configures the appropriate implementations (MSMQ for subscription storage, SQLite for saga persistence, etc.).

When the developers want to run the system in production, they could specify /production (or maybe that could be the default?), and the database backed implementations would be configured.
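A sketch of how a host might capture that intent – the profile names match the ones above, but the parsing and defaulting logic is purely hypothetical:

```csharp
// Hypothetical sketch of a profile-aware host: one command-line argument
// selects a whole family of implementation defaults, so nothing in the code
// or config files changes between environments.
public enum Profile { Lite, Integration, Production }

public static class ProfileParser
{
    public static Profile Parse(string[] args)
    {
        foreach (var arg in args)
            switch (arg.ToLowerInvariant())
            {
                case "/lite": return Profile.Lite;
                case "/integration": return Profile.Integration;
                case "/production": return Profile.Production;
            }

        // default to the most durable setup, per the "maybe that could
        // be the default?" speculation above
        return Profile.Production;
    }
}
```

The host would then map the resulting Profile to a set of registrations: in-memory everything for Lite, MSMQ and SQLite for Integration, database-backed implementations for Production.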


Imagine being able to move that fluidly from one environment to another. Not needing to pore over configuration files or startup script code which configures a zillion implementation details. Not needing to worry that as you moved the system to staging something would break.

Imagine short, frictionless iterations even for large scale systems.

Imagine – lifecycle-aware frameworks making all this imagination a reality.

In Closing

We’re not there yet – but we’re not that far either. The generic host we’re providing with NServiceBus 2.0 is now being extended to support exactly these scenarios.

It’s my hope that as more of us think about this challenge, we’ll come up with better solutions and more intelligent frameworks. Just as convention came to our rescue before, breaking us out of the pain of endless XML configuration, I hope this new family of lifecycle-aware frameworks will make the friction of moving a system through dev, test, staging, and production a thing of the past.

A worthy problem for us all to solve, don’t you think?

Any ideas on how to make it a reality?
Send them in – leave a comment below.

Domain Events – Salvation

Sunday, June 14th, 2009

I’ve been hearing from people that have had a great deal of success using the Domain Event pattern and the infrastructure I previously provided for it in Domain Events – Take 2. I’m happy to say that I’ve got an improvement that I think you’ll like. The main change is that now we’ll be taking an approach that is reminiscent of how events are published in NServiceBus.


Before diving right into the code, I wanted to take a minute to recall how we got here.

It started by looking for how to create fully encapsulated domain models.

The main assertion being that you do *not* need to inject anything into your domain entities.

Not services. Not repositories. Nothing.

Just pure domain model goodness.

Make Roles Explicit

I’m going to take the advice I so often give. A domain event is a role, and thus should be represented explicitly:

public interface IDomainEvent {}

If this reminds you of the IMessage marker interface in NServiceBus, you’re beginning to see where this is going…

How to define domain events

A domain event is just a simple POCO that represents an interesting occurrence in the domain. For example:

public class CustomerBecamePreferred : IDomainEvent
{
    public Customer Customer { get; set; }
}

For those of you concerned about the number of events you may have, and therefore are thinking about bunching up these events by namespaces or things like that, slow down. The number of domain events and their cohesion is directly related to that of the domain model.

If you feel the need to split your domain events up, there’s a good chance that you should be looking at splitting your domain model too. This is the bottom-up way of identifying bounded contexts.

How to raise domain events

In your domain entities, when a significant state change happens you’ll want to raise your domain events like this:

public class Customer
{
    public bool IsPreferred { get; private set; }

    public void DoSomething()
    {
        IsPreferred = true;
        DomainEvents.Raise(new CustomerBecamePreferred { Customer = this });
    }
}

We’ll look at the DomainEvents class in just a second, but I’m guessing that some of you are wondering “how did that entity get a reference to that?” The answer is that DomainEvents is a static class. “OMG, static?! But doesn’t that hurt testability?!” No, it doesn’t. Here, look:

Unit testing with domain events

One of the things we’d like to check when unit testing our domain entities is that the appropriate events are raised along with the corresponding state changes. Here’s an example:

public void DoSomethingShouldMakeCustomerPreferred()
{
    var c = new Customer();
    Customer preferred = null;

    DomainEvents.Register<CustomerBecamePreferred>(
        p => preferred = p.Customer);

    c.DoSomething();

    Assert.IsTrue(preferred == c && c.IsPreferred);
}

As you can see, the static DomainEvents class is used in unit tests as well. Also notice that you don’t need to mock anything – pure testable bliss.

Who handles domain events

First of all, consider that when some service layer object calls the DoSomething method of the Customer class, it doesn’t necessarily know which, if any, domain events will be raised. All it wants to do is its regular schtick:

public void Handle(DoSomethingMessage msg)
{
    using (ISession session = SessionFactory.OpenSession())
    using (ITransaction tx = session.BeginTransaction())
    {
        var c = session.Get<Customer>(msg.CustomerId);
        c.DoSomething();

        tx.Commit();
    }
}

The above code complies with the Single Responsibility Principle, so the business requirement which states that when a customer becomes preferred, they should be sent an email belongs somewhere else.

Notice the key word in that requirement – “when”.

Any time you see that word in relation to your domain, consider modeling it as a domain event.

So, here’s the handling code:

public class CustomerBecamePreferredHandler : Handles<CustomerBecamePreferred>
{
    public void Handle(CustomerBecamePreferred args)
    {
        // send email to args.Customer
    }
}

This code will run no matter which service layer object we came in through.

Here’s the interface it implements:

public interface Handles<T> where T : IDomainEvent
{
    void Handle(T args);
}

Fairly simple.

Please be aware that the above code will be run on the same thread within the same transaction as the regular domain work so you should avoid performing any blocking activities, like using SMTP or web services. Instead, prefer using one-way messaging to communicate to something else which does those blocking activities.
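To illustrate the hand-off – IBus, SendPreferredCustomerEmail, and the handler shape here are illustrative stand-ins, not the NServiceBus API – the handler records its intent as a one-way message and returns immediately; the blocking SMTP work happens in whatever endpoint consumes that message later:

```csharp
using System.Collections.Generic;

// Hypothetical sketch: no blocking SMTP call inside the domain transaction,
// just a fire-and-forget message.
public interface IBus { void Send(object message); }

public class SendPreferredCustomerEmail { public int CustomerId { get; set; } }

public class RecordingBus : IBus // stand-in for a real bus in this sketch
{
    public readonly List<object> Sent = new List<object>();
    public void Send(object message) { Sent.Add(message); }
}

public class CustomerBecamePreferredEmailHandler
{
    private readonly IBus bus; // injected by the container

    public CustomerBecamePreferredEmailHandler(IBus bus) { this.bus = bus; }

    public void Handle(int customerId)
    {
        // the actual email is sent by a separate endpoint, off this thread
        // and outside this transaction
        bus.Send(new SendPreferredCustomerEmail { CustomerId = customerId });
    }
}
```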

Also, you can have multiple classes handling the same domain event. If you need to send email *and* call the CRM system *and* do something else, etc, you don’t need to change any code – just write a new handler. This keeps your system quite a bit more stable than if you had to mess with the original handler or, heaven forbid, service layer code.

Where domain event handlers go

These handler classes do not belong in the domain model.

Nor do they belong in the service layer.

Well, that’s not entirely accurate – you see, there’s no *the* service layer. There is the part that accepts messages from clients and calls methods on the domain model. And there is another, independent part that handles events from the domain. Both of these will probably make use of a message bus, but that implementation detail shouldn’t deter you from keeping each in their own package.

The infrastructure

I know you’ve been patient, reading through all my architectural blah-blah, so here it is:

public static class DomainEvents
{
    [ThreadStatic] //so that each thread has its own callbacks
    private static List<Delegate> actions;

    public static IContainer Container { get; set; } //as before

    //Registers a callback for the given domain event
    public static void Register<T>(Action<T> callback) where T : IDomainEvent
    {
        if (actions == null)
            actions = new List<Delegate>();

        actions.Add(callback);
    }

    //Clears callbacks passed to Register on the current thread
    public static void ClearCallbacks()
    {
        actions = null;
    }

    //Raises the given domain event
    public static void Raise<T>(T args) where T : IDomainEvent
    {
        if (Container != null)
            foreach (var handler in Container.ResolveAll<Handles<T>>())
                handler.Handle(args);

        if (actions != null)
            foreach (var action in actions)
                if (action is Action<T>)
                    ((Action<T>)action)(args);
    }
}

Notice that while this class *can* use a container, the container isn't needed for unit tests – they can use the Register method instead.
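For example, a unit test can hook the event via Register and assert on it with no container in sight. This is a self-contained sketch: Customer.MakePreferred and the trimmed-down DomainEvents (Register/Raise only, no container path) are illustrative, following the article's naming:

```csharp
using System;
using System.Collections.Generic;

public interface IDomainEvent { }

public class CustomerBecamePreferred : IDomainEvent
{
    public Customer Customer { get; set; }
}

public class Customer
{
    public bool IsPreferred { get; private set; }

    public void MakePreferred()
    {
        IsPreferred = true;
        DomainEvents.Raise(new CustomerBecamePreferred { Customer = this });
    }
}

// trimmed-down DomainEvents from the article (callback path only)
public static class DomainEvents
{
    [ThreadStatic]
    private static List<Delegate> actions;

    public static void Register<T>(Action<T> callback) where T : IDomainEvent
    {
        if (actions == null)
            actions = new List<Delegate>();
        actions.Add(callback);
    }

    public static void ClearCallbacks() { actions = null; }

    public static void Raise<T>(T args) where T : IDomainEvent
    {
        if (actions != null)
            foreach (var action in actions)
                if (action is Action<T>)
                    ((Action<T>)action)(args);
    }
}

public class Test
{
    public static void Main()
    {
        CustomerBecamePreferred raised = null;
        DomainEvents.Register<CustomerBecamePreferred>(e => raised = e);

        var customer = new Customer();
        customer.MakePreferred();

        if (raised == null || raised.Customer != customer)
            throw new Exception("event not raised for customer");

        DomainEvents.ClearCallbacks();
        Console.WriteLine("test passed");
    }
}
```

The domain model stays free of any test or infrastructure dependency – the test only touches the static DomainEvents facade.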

When used server side, make sure to call ClearCallbacks at the end of message processing in your infrastructure. In NServiceBus this is done with a message module like the one below:

    public class DomainEventsCleaner : IMessageModule
    {
        public void HandleBeginMessage() { }

        public void HandleEndMessage()
        {
            DomainEvents.ClearCallbacks();
        }
    }

The main reason for this cleanup is that someone might want to use the Register API in their original service layer code rather than writing a separate domain event handler – and those callbacks shouldn't survive past the current message.
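To see why the cleanup matters, here's a runnable sketch (again using a trimmed-down DomainEvents) that simulates two messages processed on the same thread. With the cleanup, each message sees only its own callback; without it, the first message's callback leaks into the second:

```csharp
using System;
using System.Collections.Generic;

public interface IDomainEvent { }
public class CustomerBecamePreferred : IDomainEvent { }

public static class DomainEvents
{
    [ThreadStatic] private static List<Delegate> actions;

    public static void Register<T>(Action<T> callback) where T : IDomainEvent
    {
        if (actions == null) actions = new List<Delegate>();
        actions.Add(callback);
    }

    public static void ClearCallbacks() { actions = null; }

    public static void Raise<T>(T args) where T : IDomainEvent
    {
        if (actions != null)
            foreach (var action in actions)
                if (action is Action<T>) ((Action<T>)action)(args);
    }
}

public class Demo
{
    static int callbackCount;

    static void ProcessMessage(bool cleanUp)
    {
        // service-layer code registers a callback meant for this message only
        DomainEvents.Register<CustomerBecamePreferred>(e => callbackCount++);
        DomainEvents.Raise(new CustomerBecamePreferred());
        if (cleanUp) DomainEvents.ClearCallbacks(); // what the message module does
    }

    public static void Main()
    {
        ProcessMessage(cleanUp: true);
        ProcessMessage(cleanUp: true);
        Console.WriteLine(callbackCount); // 2: one callback fired per message

        callbackCount = 0;
        ProcessMessage(cleanUp: false);
        ProcessMessage(cleanUp: false);
        Console.WriteLine(callbackCount); // 3: message 1's callback also fired during message 2
    }
}
```

The ThreadStatic field scopes callbacks per thread, but nothing scopes them per message – that's exactly the job the message module does.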


Like all good things in life, 3rd time’s the charm.

It took a couple of iterations, and the API did change quite a bit, but the overarching theme has remained the same – keep the domain model focused on domain concerns. While some might say that there’s only a slight technical difference between calling a service (IEmailService) and using an event to dispatch it elsewhere, I beg to differ.

These domain events are a part of the ubiquitous language and should be represented explicitly.

CustomerBecamePreferred is nothing at all like IEmailService.

In working with your domain experts, or just going through a requirements document, pay less attention to the nouns and verbs that Object-Oriented Analysis & Design calls attention to, and keep an eye out for the word "when". It's a critically important word that enables us to model important occurrences and state changes.

What do you think? Are you already using this approach? Have you already tried it and found it broken in some way? Do you have any suggestions on how to improve it?

Let me know – leave a comment below.


Creative Commons License  © Copyright 2005-2011, Udi Dahan. email@UdiDahan.com