Udi Dahan – The Software Simplist
Enterprise Development Expert & SOA Specialist
 
  
  

Archive for the ‘Testing’ Category



The Danger of Centralized Workflows

Wednesday, July 13th, 2011

It isn’t uncommon for me to have a client or student at one of my courses ask me about some kind of workflow tool. This could be Microsoft Workflow Foundation, BizTalk, K2, or some kind of BPEL/orchestration engine. The question usually revolves around using this tool for all workflows in the system as opposed to the SOA-EDA-style publish/subscribe approach I espouse.

The question

The main touted benefit of these workflow-centric architectures is that we don’t have to change the code of the system in order to change its behavior, resulting in ultimate flexibility!

Some of you may have already gone down this path and are shaking your heads remembering how your particular road to hell was paved with the exact same good intentions.

Let me explain why these things tend to go horribly wrong.

What’s behind the curtain

It starts with the very nature of workflow – a flow chart is procedural in nature. First do this, then that; if this, then that; etc. As we’ve experienced first-hand in our industry, procedural programming is fine for smaller problems but isn’t powerful enough to handle larger problems. That’s why we came up with object-oriented programming.

I have yet to see an object-oriented workflow drag-and-drop engine. Yes, it works great for simple demo-ware apps. But if you try to throw your most complex and volatile business logic at it, it will become a big tangled ball of spaghetti – just like if you were using text rather than pictures to code it.

And that’s one of the fundamental fallacies about these tools – you are still writing code. The fact that it doesn’t look like the rest of your code doesn’t change that fact. Changing the definition of your workflow in the tool IS changing your code.

On productivity

Sometimes people mention how much more productive it would be to use these tools than to write the code “by hand”. Occasionally I hear about an attempt to have “the business” use these tools to change the workflows themselves – without the involvement of developers (“imagine how much faster we could go without those pesky developers!”).

For those of us who have experienced this first-hand, we know that’s all wrong.

If “the business” is changing the workflows without developer involvement, invariably something breaks, and then they don’t know what to do. They haven’t been trained to think the way that developers have – they don’t really know how to debug. So the developers are brought back in anyway, and from that point on, the business is once again giving requirements and the devs are the ones implementing them.

Now when it comes to developer productivity, I can tell you that the keyboard is at least 10x more productive than the mouse. I can bang out an if statement in code much faster than draggy-dropping a diamond on the canvas, and two other activities for each side of the clause.

On maintainability

Sometimes the visualization of the workflow is presented as being much more maintainable than “regular code”.

When these workflows get to be too big/nested/reused, it ends up looking like the wiring diagram of an Intel chip (or worse). Check out the following diagram, taken from the DailyWTF, of a customer-friendly system:

[Image: state model diagram, via the DailyWTF]

The bigger these get, the less maintainable they are.

Now, some would push back on this, saying that a method with 10,000 lines of code in it may be just as bad, if not worse. The thing is that these workflow tools guide developers down a path where they are very likely to end up with big, monolithic, procedural, nested code. When working in real code, we know we need to take responsibility for the cleanliness of our code using object-orientation, patterns, etc., refactoring things when they get too messy.

Here is where I’d bring up the SOA/pub-sub approach as an alternative – there is no longer this idea of a centralized anything. You have small pieces of code, each encapsulating a single business responsibility, working in concert with each other – reacting to each others events.
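To make that concrete, here’s a minimal sketch of that shape. The event and policy names are made up for illustration, and the publishing mechanism is abstracted behind a plain delegate rather than any particular bus:

    using System;

    //each class owns a single business responsibility and reacts to events
    //published by others - there is no central workflow definition to maintain
    public class OrderAccepted
    {
        public Guid OrderId { get; set; }
    }

    public class OrderBilled
    {
        public Guid OrderId { get; set; }
    }

    //billing reacts to the sales event and announces its own event when done
    public class BillingPolicy
    {
        private readonly Action<object> publish;

        public BillingPolicy(Action<object> publish)
        {
            this.publish = publish;
        }

        public void Handle(OrderAccepted e)
        {
            //charge the customer for e.OrderId, then tell the world
            publish(new OrderBilled { OrderId = e.OrderId });
        }
    }

    //shipping reacts to billing's event, knowing nothing about billing's internals
    public class ShippingPolicy
    {
        public void Handle(OrderBilled e)
        {
            //arrange shipment for e.OrderId
        }
    }

Changing what happens after billing means adding or removing subscribers – not editing one ever-growing flow chart.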

Productivity take 2: testing and version control

If you’re going to take your most complex and volatile business logic and put it into these workflow tools, have you thought about how you’re going to test it? How do you know that it works correctly? It tends to be VERY difficult to unit-test these kinds of workflows.

When a developer is implementing a change request, how do they know what other workflows might have been broken? Do they have to manually go through each and every scenario in the system to find out? How’s that for productivity?

Assuming something did break and the developer wants to see a diff – what’s different in the new workflow from the old one, what would that look like? When working with a team, the ability to diff and merge code is at the base of the overall team productivity.

What would happen to your team if you couldn’t diff or merge code anymore?
In this day and age, it should be considered irresponsible to develop without these version control basics.

In closing

There are some cases where these tools might make sense, but those tend to be much more rare than you’d expect (and there are usually better alternatives anyway). Regardless, the architectural analysis should start without the assumption of centralized workflow, database, or centralized anything for that matter.

If someone tries to push one of these tools/architectures on you, don’t walk away – run!



When to avoid CQRS

Friday, April 22nd, 2011

It looks like CQRS has finally “made it” as a full-blown “best practice”.

Please accept my apologies for my part in the overly-complex software being created because of it.

I’ve tried to do what I could to provide a balanced view on the topic with posts like Clarified CQRS and Race Conditions Don’t Exist.

It looks like that wasn’t enough, so I’ll go right out and say it:

Most people using CQRS (and Event Sourcing too) shouldn’t have done so.

Should we really go back to N-Tier?

When not using CQRS (which is the majority of the time), you don’t need N-Tier either.

You see, if you’re not in a collaborative domain then you don’t have multiple writers to the same logical set of data as an inherent property of your domain. As such, having a single database where all data lives isn’t really necessary.

Data is inherently partitioned by who owns it.

Let’s take the online shopping cart as an example. There aren’t any use cases where users operate on each others’ carts – ergo, not collaborative, therefore not a good candidate for CQRS. Same goes for user profiles, and tons of other cases.

So why is it that we need a separate tier to run our business logic?

Originally, the application server tier was introduced for improved scalability, but specifically around managing the connection pool to the database. Increasing numbers of clients (when each had its own user/account for connecting to the database) caused problems. Luckily, most web applications side-step this problem – that is, until someone got the idea that the web server was only supposed to run the UI layer, and the Business Logic layer would be on a separate application server tier.

Rubbish – see Fowler’s First Law of Distribution: Don’t.

Keep it all on one tier. Same goes for smart clients.
No, Silverlight, you don’t count – architecturally speaking, you’re a glorified browser.

But what about scalability?

In a non-collaborative domain, where you can horizontally add more database servers to support more users/requests/data at the same time you’re adding web servers – there is no real scalability problem (caveat, until you’re Amazon/Google/Facebook scale).
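As a hedged illustration of that kind of partitioning – the connection strings and the modulo scheme below are assumptions for the sketch, not a recipe – the owning user’s id is enough to route each request to one of N databases:

    using System;

    public static class ShardRouter
    {
        private static readonly string[] ConnectionStrings =
        {
            "Server=db1;Database=Shopping;Trusted_Connection=True",
            "Server=db2;Database=Shopping;Trusted_Connection=True",
            "Server=db3;Database=Shopping;Trusted_Connection=True"
        };

        //carts, profiles, etc. are owned by a single user, so the owner's id
        //picks the database - no cross-user transactions are ever needed
        public static string ForUser(Guid userId)
        {
            int shard = (int)((uint)userId.GetHashCode() % (uint)ConnectionStrings.Length);
            return ConnectionStrings[shard];
        }
    }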

Database servers can be cheap – if using MySQL/SQL Express/others.

But what about the built-in event-log CQRS/ES gives us?

Architectural gold-plating / stealing from the business.

Who put you in a position to decide that development time and resources should be diverted from short-term business-value-adding features to support a non-functional requirement that the business didn’t ask for?

If you sat down with them, explaining the long-term value of having an archive of all actions in the system, and they said OK, build this into the system from the beginning, that would be fine. Most people who ask me about CQRS and/or Event Sourcing skip this step.

Finally, you can usually implement this specific requirement with some simple interception and logging. Don’t over-engineer the solution. If using messaging, you can get this by turning on journaling, or if you want to centralize this archive, NServiceBus can forward all messages to a specific queue.
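As a rough sketch of what that interception could look like – the IHandleMessage and AuditingDispatcher names here are illustrative, not part of NServiceBus or any other framework – every incoming message gets appended to an audit log before being handled:

    using System;
    using System.IO;

    public interface IHandleMessage<T>
    {
        void Handle(T message);
    }

    public class AuditingDispatcher<T>
    {
        private readonly IHandleMessage<T> inner;
        private readonly string auditFile;

        public AuditingDispatcher(IHandleMessage<T> inner, string auditFile)
        {
            this.inner = inner;
            this.auditFile = auditFile;
        }

        public void Dispatch(T message)
        {
            //the append-only archive of every action - no event sourcing required
            File.AppendAllText(auditFile,
                DateTime.UtcNow.ToString("o") + " " + typeof(T).Name + " " + message + Environment.NewLine);

            inner.Handle(message);
        }
    }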

Don’t forget that this storage has a cost – including administration. Nothing is free.

What about the “proof of correctness” in Event Sourcing

I’ve heard statements made that when you use the events that flowed into/through your system AS your system’s data, rather than transforming those events to some other schema (relational or otherwise) and storing the result – you can prove that your system behaves correctly.

Let me put it this way:

No programming technique used by humans will prevent those same humans from creating bugs.
No testing technique used by humans will guarantee that those same humans catch all of those bugs.
* Automated tests – see programming technique.

While having a full archive of all events can allow us to roll the system back to some state, fix a bug, and roll forwards, that assumes that we’re in a closed system. We have users which are outside the system. If a user made a decision based on data influenced by the bug, there’s no automated way for us to know that, or correct for it as we roll forwards.

In short, we’re interested in the business’ behavior – as composed of user and system behavior. No proof can exist.

Umm, so where should we use it

If you’ve uncovered a scenario where you’re wondering “first-one-wins, or last-one-wins”, that’s often a good candidate for a place where CQRS could make sense. Then re-read my Race Conditions Don’t Exist post.

Also, CQRS should not be your top-level architectural pattern – that would be SOA.
CQRS, if used at all, would be used inside a service boundary only.

Given that SOA guides us away from having a given 3rd normal form entity exist in any one service, it is unlikely that the building blocks of your CQRS design will be those kinds of entities. Most 3rd normal form one-to-many and many-to-many relationships simply do not exist when doing SOA and CQRS properly.

Therefore, I’m sorry to say that most sample applications you’ll see online that show CQRS are architecturally wrong. I’d also be extremely wary of frameworks that guide you towards an entity-style aggregate-root CQRS model.

In Summary

So, when should you avoid CQRS?

The answer is most of the time.

Here’s the strongest indication I can give you to know that you’re doing CQRS correctly: Your aggregate roots are sagas.

And the biggest caveat – the above are generalizations, and won’t necessarily hold for every specific scenario. If you’re Greg Young, then you probably can (and will) decide on your own on these matters. For everybody else, please take these warnings to heart. There have been far too many clients that have come to me all mixed up with their use of CQRS in areas where it wasn’t warranted.

If you want to know everything you need to know to apply CQRS appropriately, please come to my course – there is so much unlearning to do first that just can’t happen via a series of blog posts.



On Design for Testability

Sunday, April 18th, 2010

At almost every conference, event, training, or consulting engagement, someone asks for my opinion on the whole design for testability thing. I’m not quite sure why I haven’t blogged on this topic, especially at the time when a lot of the other bloggers were weighing in, but better late than never.

Before getting into that, I want to start with a slightly broader scope of discussion.

You see, I get asked about “best practices” on all sorts of things. And I try not to be the kind of consultant that responds with “it depends”, but the context of the question often makes the answer irrelevant. And the unspoken context of a best-practice question is:

Given infinite time and budget

The biggest problem that I see with well-intentioned, best-practices-following developers and architects is that they don’t ask the question “is this the right thing for us to be focusing on right now?” Understandably, that is a difficult question to answer – but it needs to be asked, since you don’t have infinite time or budget to do everything according to best practices (assuming those even exist).

About testing

The biggest issue I have with the “design for testability” topic is the extremely narrow view it takes of the word “testability”, usually in the form of more code written by a developer which invokes the production code of the system, also known as “unit tests”.

There are many different kinds of testing – unit, integration, functional, load, performance, exploratory, etc… where some may be automated and others not. Should we not discuss what “design for testability” means for not-just-unit-testing?

And what’s the point of testing anyway?

It’s not to find bugs.

Research has shown that testing (of all kinds) is not the most effective way of finding bugs. I don’t have the reference handy but I’m pretty sure that it’s from Alistair Cockburn’s work. Code reviews are (on average) about 60% more effective.

Don’t get me wrong – testing can provide indications that the software has bugs in it, but not necessarily where in the code those bugs are.

The purpose of testing is to provide quantitative and qualitative information about the system that can help various stakeholders in their decision-making processes. The relevance of that information indicates the quality of the testing. Here are some examples:

  • The system supports 100 concurrent users, with the expected user-type distribution (X% role A, Y% role B, etc), performing expected use-case distributions, and collaboration scenarios.
  • Time to proficiency for new users in role A is expected to be 3 days
  • Alternate #2 of use case #12 fails on step #3

As you can see, the relevance of the above information is dependent on what decisions the various stakeholders need to make. The bullet on load can help us decide if more machines are needed or if developers need to tune the performance of the systems. The bullet on time to proficiency can help us decide if larger investment in usability is required. Information like the last bullet can be used in conjunction with the first two to decide on the timing and type of a release.

The timeliness of this relevant information is critical to the success of a project.

Choosing which and how much of the various testing activities to perform when is something that needs to be revisited several times throughout the lifetime of a project, taking into account the current risks (threats and probabilities) and time and resource investment to mitigate them.

Let me reiterate – we’re not going to have enough time to do everything.

On iterations

If the only part of your organization that is doing iterations are your developers, you’re not agile.

In order to capitalize on the information that testers are providing, you need them in your iterations.

The same goes for the other roles involved in the project – business analysts, DBAs, sysadmins, etc.

I know that 99% of organizations aren’t structured in a way to do this.

I never said doing this would be easy.

On design

Figuring out what kind of design and how much to do when is just as important, and just as hard. Design for testability is one part of that, but not the only one, or necessarily the most important one at any point of time.

Within that design for testability topic is the “design for unit-testing” sub-topic which seems to be the popular one. Before getting into the design aspects of it, let’s take a closer look at the unit-testing side of things.

On unit-testing

The assumption is that having more unit tests will lead to a code-base with less bugs, thus requiring shorter time to get the system into production, which will pay back the time it took to write those unit tests to begin with.

In practice, what tends to happen is that as development progresses, testing code breaks as the structure of the production code changes. Now one of two things happens – either the testing code is removed or rewritten. In either case, we didn’t get the return on investment we expected on the first bit of testing code. Unfortunately, rare is the case where the relevant people in the organization understand why, resulting in the same situation repeating itself over and over again.

Those projects would have been better off without unit testing, though the organization as a whole might have used those experiences to learn and improve. It’s been my experience that if the organization wasn’t conscious enough in the context of the project to notice the situation, it is unlikely to do so at higher levels.

On fragile unit tests

The reason that a unit test ends up being rewritten (or removed) is that its code was coupled to the production code in such a way that it broke when the production code changed. This tendency to break (fragility) is a critical property of a unit test. A fragile unit test will slow down a developer doing work on some existing code – it actually makes the system less maintainable.

For unit test code to be stable (not fragile), it needs to be coupled to stable properties of the production code. The question of whether the production code is designed in such a way that it has stable properties is a design question. Is it a unit? If not, you will not be able to write a unit test against it.

And anyway, who said that every class is a unit, or should be a unit? Domain models (when done right) are good examples of a unit, yet the classes that make them up may not be units. Unit-testing should only be attempted with things which are units.
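To illustrate with a made-up example, here the Policy aggregate is treated as the unit, and the test is coupled only to its stable, outwardly observable behavior – not to the fields or helper classes behind it – so refactoring the internals won’t break it:

    using NUnit.Framework;

    public class Policy
    {
        private decimal totalClaimed;              //internal detail, free to change
        public bool IsFlaggedForReview { get; private set; }

        public void RegisterClaim(decimal amount)
        {
            totalClaimed += amount;
            if (totalClaimed > 10000m)
                IsFlaggedForReview = true;
        }
    }

    [TestFixture]
    public class PolicyTests
    {
        [Test]
        public void Large_claims_flag_the_policy_for_review()
        {
            var policy = new Policy();

            policy.RegisterClaim(6000m);
            policy.RegisterClaim(5000m);

            //coupled to stable behavior, not to how the total is tracked internally
            Assert.IsTrue(policy.IsFlaggedForReview);
        }
    }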

I think too much weight is put on whether a dependency of a class is a concrete or interface type, and not nearly enough on the nature of the dependency. I wouldn’t blame the hammer for pounding my thumb, and by the same token I think that blame should not be directed towards tools like those from TypeMock.

On tools

There is so much more depth to both design and testability that needs to be more broadly understood. No tool has yet been created to handle either design or testing in such a way that humans can give up responsibility for the outcome.

Over the years I’ve noticed that tools are most significant when used by skilled practitioners, which makes sense in retrospect. Giving a novice carpenter a laser-guided saw probably won’t significantly change the outcome of their work. Ultimately, the skilled practitioners are the ones that create tools – not the novices. And no tool, no matter how advanced, will make a novice perform at levels like the skilled practitioner.

In the case of a project too big for a single skilled practitioner to complete in the time required (or at all), the balance of importance shifts away from tools to the project management topics described above.

In summary

I hope that this post has shed some light on the context in which decisions with respect to testing need to be made. Design is one activity that can support certain kinds of testing, but not the only one, or even the most important one for the given type of testing necessary at that time in the project.

Design is hard. Project management is hard. Testing is hard.

Getting the right mix of people that together have enough experience and skills in these activities isn’t easy.

Don’t expect that sprinkling some interfaces in your code base will be enough.
That doesn’t count much in the way of design, just as writing code in a testing namespace doesn’t count much in the way of testability.

Looking forward to hearing your comments.



Convention over Configuration – The Next Generation?

Saturday, August 15th, 2009

Convention over configuration describes a style of development made popular by Ruby on Rails which has gained a great deal of traction in the .net ecosystem. After using frameworks designed in this way, I can say that the popularity is justified – it is much more pleasurable developing this way.

The thing is, when looking at this in light of the full software development lifecycle, there are signs that the waters run deeper than we might have originally thought.

Let’s take things one step at a time though…

What is it?

Wikipedia tells us:

“Convention over Configuration (aka Coding by convention) is a software design paradigm which seeks to decrease the number of decisions that developers need to make, gaining simplicity, but not necessarily losing flexibility. The phrase essentially means a developer only needs to specify unconventional aspects of the application.”

What this means is that frameworks built in this way have default implementations that can be swapped out if needed. So far so good.

For example…

In NServiceBus, there is an abstraction for how subscription data is stored and multiple implementations – one in-memory, another using a durable MSMQ queue, and a third which uses a database. The convention for that part of the system is that the MSMQ implementation will be used, unless something else is specified.

Developers wishing to specify a different implementation can specify the desired implementation in the container – either one that comes out of the box, or their own implementation of ISubscriptionStorage.
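Stripped of the actual NServiceBus configuration API, the convention-over-configuration shape being described looks roughly like this – a sensible default is wired up unless the developer specifies something else:

    public interface ISubscriptionStorage
    {
        void Subscribe(string subscriber, string messageType);
    }

    public class MsmqSubscriptionStorage : ISubscriptionStorage       //the convention
    {
        public void Subscribe(string subscriber, string messageType) { /* durable MSMQ queue */ }
    }

    public class InMemorySubscriptionStorage : ISubscriptionStorage   //an alternative
    {
        public void Subscribe(string subscriber, string messageType) { /* dictionary */ }
    }

    public class BusConfiguration
    {
        public BusConfiguration()
        {
            //the default, used unless something else is specified
            SubscriptionStorage = new MsmqSubscriptionStorage();
        }

        public ISubscriptionStorage SubscriptionStorage { get; set; }
    }

    //only the unconventional aspect needs to be specified:
    //  var config = new BusConfiguration { SubscriptionStorage = new InMemorySubscriptionStorage() };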

Things get more interesting when we consider the full lifecycle.

Lifecycle effects

When developers are in the early phases of writing a new service, they want to focus primarily on what the service does – its logic. They don’t want to muck around with MSMQ queues for storing subscriptions and would much rather use the in-memory storage.

As the service takes shape, the developers want to run the full service on their machine, possibly testing basic fault-tolerance behaviors – kill one service, see that the others get a timeout, bring the service back up, and expect it to maintain all the previous subscriptions.

Moving on from there, our developers want to take the same system they just tested on their machine and move it into a staging environment. There, they don’t want to use the MSMQ implementation for subscription storage, but rather the database implementation – as will be used in the production environment.

While it may not sound like a big deal to change the code which specifies which implementation to use when moving from one environment to another, consider that on top of subscription storage there is logging (output to console, file, db?), saga persistence (in-memory, file-based DB, relational DB), and more.

It’s actually quite likely that something will get missed as we move the system between environments. Can there be a better way?

What if…

What if there was some way for the developer to express their intent to the system, and the system could change its conventions, without the developer having to change any code or configuration files?

You might compare this (in concept) to debug builds and release builds. Same code, same config, but the runtime behaves differently between the two.

As I mulled over how we could capture that intent without any code or config changes, the solution I kept coming back to seemed too trivial at first, so I dismissed it. Yet it was the simplest one that would work for console and WinForms applications, as well as Windows services – command line arguments. The only thing is that I don’t think those are available for web applications.

But since we’re still in “what if” land, and I’m more thinking out loud here than providing workable solutions for tomorrow morning, let’s “what if” command line arguments worked for web apps too.

Command-Line Intent

Going back to our original scenario, when developers are working on the logic of the service, they run it using the generic NServiceBus host process, passing it the command line parameter /lite (or whatever). The host then automatically configures all the in-memory implementations.

As the system progresses, when the developer wants to run everything on their machine, they run the processes with /integration. The host then configures the appropriate implementations (MSMQ for subscription storage, SQLite for saga persistence, etc.).

When the developers want to run the system in production, they could specify /production (or maybe that could be the default?), and the database backed implementations would be configured.
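A sketch of how such a host might interpret that intent – building on the subscription-storage sketch above, with a hypothetical DatabaseSubscriptionStorage added. This is not the actual NServiceBus generic host, just the shape of the idea:

    using System;

    public class DatabaseSubscriptionStorage : ISubscriptionStorage
    {
        public void Subscribe(string subscriber, string messageType) { /* relational DB */ }
    }

    public class GenericHost
    {
        public static void Main(string[] args)
        {
            //capture the developer's intent from the command line...
            string profile = "/production";                 //assumed default
            foreach (string arg in args)
                if (arg.StartsWith("/"))
                    profile = arg.ToLowerInvariant();

            //...and let the host choose the conventions for that lifecycle stage
            var config = new BusConfiguration();
            switch (profile)
            {
                case "/lite":
                    config.SubscriptionStorage = new InMemorySubscriptionStorage();
                    break;
                case "/integration":
                    config.SubscriptionStorage = new MsmqSubscriptionStorage();
                    break;
                default:
                    config.SubscriptionStorage = new DatabaseSubscriptionStorage();
                    break;
            }

            //start the endpoint with the chosen implementations
            Console.WriteLine("Running with " + config.SubscriptionStorage.GetType().Name);
        }
    }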

Imagine…

Imagine being able to move that fluidly from one environment to another. Not needing to pore over configuration files or startup script code which configures a zillion implementation details. Not needing to worry that as you moved the system to staging something would break.

Imagine short, frictionless iterations even for large scale systems.

Imagine – lifecycle-aware frameworks making all this imagination a reality.

In Closing

We’re not there yet – but we’re not that far either. The generic host we’re providing with NServiceBus 2.0 is now being extended to support exactly these scenarios.

It’s my hope that as more of us think about this challenge, we’ll come up with better solutions and more intelligent frameworks. Just as convention came to our rescue before, breaking us out of the pain of endless XML configuration, I hope this new family of lifecycle-aware frameworks will make the friction of moving a system through dev, test, staging, and production a thing of the past.

A worthy problem for us all to solve, don’t you think?

Any ideas on how to make it a reality?
Send them in – leave a comment below.



Domain Events – Salvation

Sunday, June 14th, 2009

I’ve been hearing from people that have had a great deal of success using the Domain Event pattern and the infrastructure I previously provided for it in Domain Events – Take 2. I’m happy to say that I’ve got an improvement that I think you’ll like. The main change is that now we’ll be taking an approach that is reminiscent of how events are published in NServiceBus.

Background

Before diving right into the code, I wanted to take a minute to recall how we got here.

It started by looking for how to create fully encapsulated domain models.

The main assertion being that you do *not* need to inject anything into your domain entities.

Not services. Not repositories. Nothing.

Just pure domain model goodness.

Make Roles Explicit

I’m going to take the advice I so often give. A domain event is a role, and thus should be represented explicitly:

    public interface IDomainEvent {}

If this reminds you of the IMessage marker interface in nServiceBus, you’re beginning to see where this is going…

How to define domain events

A domain event is just a simple POCO that represents an interesting occurrence in the domain. For example:

    public class CustomerBecamePreferred : IDomainEvent
    {
        public Customer Customer { get; set; }
    }

For those of you concerned about the number of events you may have, and therefore are thinking about bunching up these events by namespaces or things like that, slow down. The number of domain events and their cohesion is directly related to that of the domain model.

If you feel the need to split your domain events up, there’s a good chance that you should be looking at splitting your domain model too. This is the bottom-up way of identifying bounded contexts.

How to raise domain events

In your domain entities, when a significant state change happens you’ll want to raise your domain events like this:

    public class Customer
    {
        public void DoSomething()
        {
            DomainEvents.Raise(new CustomerBecamePreferred() { Customer = this });
        }
    }

We’ll look at the DomainEvents class in just a second, but I’m guessing that some of you are wondering “how did that entity get a reference to that?” The answer is that DomainEvents is a static class. “OMG, static?! But doesn’t that hurt testability?!” No, it doesn’t. Here, look:

Unit testing with domain events

One of the things we’d like to check when unit testing our domain entities is that the appropriate events are raised along with the corresponding state changes. Here’s an example:

    public void DoSomethingShouldMakeCustomerPreferred()
    {
        var c = new Customer();
        Customer preferred = null;

        DomainEvents.Register<CustomerBecamePreferred>(
            p => preferred = p.Customer);

        c.DoSomething();
        Assert(preferred == c && c.IsPreferred);
    }

As you can see, the static DomainEvents class is used in unit tests as well. Also notice that you don’t need to mock anything – pure testable bliss.

Who handles domain events

First of all, consider that when some service layer object calls the DoSomething method of the Customer class, it doesn’t necessarily know which, if any, domain events will be raised. All it wants to do is its regular schtick:

    public void Handle(DoSomethingMessage msg)
    {
        using (ISession session = SessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            var c = session.Get<Customer>(msg.CustomerId);
            c.DoSomething();

            tx.Commit();
        }
    }

The above code complies with the Single Responsibility Principle, so the business requirement which states that when a customer becomes preferred, they should be sent an email belongs somewhere else.

Notice the key word in the requirement – “when”.

Any time you see that word in relation to your domain, consider modeling it as a domain event.

So, here’s the handling code:

    public class CustomerBecamePreferredHandler : Handles<CustomerBecamePreferred>
    {
        public void Handle(CustomerBecamePreferred args)
        {
            // send email to args.Customer
        }
    }

This code will run no matter which service layer object we came in through.

Here’s the interface it implements:

    public interface Handles<T> where T : IDomainEvent
    {
        void Handle(T args);
    }

Fairly simple.

Please be aware that the above code will be run on the same thread within the same transaction as the regular domain work so you should avoid performing any blocking activities, like using SMTP or web services. Instead, prefer using one-way messaging to communicate to something else which does those blocking activities.

Also, you can have multiple classes handling the same domain event. If you need to send email *and* call the CRM system *and* do something else, etc, you don’t need to change any code – just write a new handler. This keeps your system quite a bit more stable than if you had to mess with the original handler or, heaven forbid, service layer code.
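For instance, a second, hypothetical handler for the same event might notify the CRM. In keeping with the advice above, it doesn’t call the CRM web service directly (that would block); it sends a one-way message for another endpoint to process. This sketch assumes an IBus like the one NServiceBus provides is injected by the container, that the message implements its IMessage marker interface, and that Customer exposes an Id – none of which appear in the original example:

    public class NotifyCrmOfPreferredCustomer : Handles<CustomerBecamePreferred>
    {
        private IBus bus;
        public IBus Bus
        {
            set { this.bus = value; }
        }

        public void Handle(CustomerBecamePreferred args)
        {
            //fire-and-forget: let a CRM integration endpoint do the blocking work
            UpdateCrmCustomerStatusMessage msg = new UpdateCrmCustomerStatusMessage();
            msg.CustomerId = args.Customer.Id;
            msg.Status = "Preferred";

            this.bus.Send(msg);
        }
    }

    //hypothetical message type carrying the one-way notification
    public class UpdateCrmCustomerStatusMessage : IMessage
    {
        public Guid CustomerId { get; set; }
        public string Status { get; set; }
    }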

Where domain event handlers go

These handler classes do not belong in the domain model.

Nor do they belong in the service layer.

Well, that’s not entirely accurate – you see, there’s no *the* service layer. There is the part that accepts messages from clients and calls methods on the domain model. And there is another, independent part that handles events from the domain. Both of these will probably make use of a message bus, but that implementation detail shouldn’t deter you from keeping each in their own package.

The infrastructure

I know you’ve been patient, reading through all my architectural blah-blah, so here it is:

    public static class DomainEvents
    {
        [ThreadStatic] //so that each thread has its own callbacks
        private static List<Delegate> actions;

        public static IContainer Container { get; set; } //as before

        //Registers a callback for the given domain event
        public static void Register<T>(Action<T> callback) where T : IDomainEvent
        {
            if (actions == null)
                actions = new List<Delegate>();

            actions.Add(callback);
        }

        //Clears callbacks passed to Register on the current thread
        public static void ClearCallbacks()
        {
            actions = null;
        }

        //Raises the given domain event
        public static void Raise<T>(T args) where T : IDomainEvent
        {
            if (Container != null)
                foreach (var handler in Container.ResolveAll<Handles<T>>())
                    handler.Handle(args);

            if (actions != null)
                foreach (var action in actions)
                    if (action is Action<T>)
                        ((Action<T>)action)(args);
        }
    }

Notice that while this class *can* use a container, the container isn’t needed for unit tests which use the Register method.

When used server side, please make sure that you add a call to ClearCallbacks in your infrastructure’s end of message processing section. In nServiceBus this is done with a message module like the one below:

    public class DomainEventsCleaner : IMessageModule
    {
        public void HandleBeginMessage() { }

        public void HandleEndMessage()
        {
            DomainEvents.ClearCallbacks();
        }
    }

The main reason for this cleanup is that someone just might want to use the Register API in their original service layer code rather than writing a separate domain event handler.

Summary

Like all good things in life, 3rd time’s the charm.

It took a couple of iterations, and the API did change quite a bit, but the overarching theme has remained the same – keep the domain model focused on domain concerns. While some might say that there’s only a slight technical difference between calling a service (IEmailService) and using an event to dispatch it elsewhere, I beg to differ.

These domain events are a part of the ubiquitous language and should be represented explicitly.

CustomerBecamePreferred is nothing at all like IEmailService.

In working with your domain experts or just going through a requirements document, pay less attention to the nouns and verbs that Object-Oriented Analysis & Design call attention to, and keep an eye out for the word “when”. It’s a critically important word that enables us to model important occurrences and state changes.

What do you think? Are you already using this approach? Have you already tried it and found it broken in some way? Do you have any suggestions on how to improve it?

Let me know – leave a comment below.



Unit Testing for Developers and Managers

Tuesday, September 30th, 2008

“We need to rewrite the system.”

Thus begins the story of yet another developer trying to convince their manager to adopt test-driven development (or any other methodology or technology). There’s a good chance this developer’s been reading all sorts of stuff on blogs (like those linked here) that have convinced him that salvation lies that way.

Don’t get me wrong.

There’s a good chance the developer’s right.

It’s just that that’s beside the point.

Developers and Managers

There’s a difference between how developers view a practice and how a manager (defined for the purposes of this post as someone in charge of delivering something) views that same practice. From a developer perspective, Ian’s point about unit testing is spot on:

“The problem is that the most important step is not doing it right, but doing it at all.”

Yet, as Ian himself points out in the title, this is a learning issue. If you want to learn to swim, there’s no replacement for jumping in the pool.

The manager’s perspective is a bit different.

Yes, we want our developers to improve their skill set. Yes, we understand that unit testing will ultimately improve quality. Yes, we know developers need to practice these skills as a part of their job. But, and it’s a big ole’ but, when it comes time to sink or swim, and we’ve got a deadline, those desires need to be balanced with delivering. Accounts of unit testing adoption efforts resulting in more (test) code to support with little apparent improvement in quality in the short term, well, they scare us. Arnon’s post gives more links supporting that feeling.

What’s a Unit Test anyway?

Is it any class that happens to have a TestFixture attribute on it?

If we are to “decouple” unit testing from good design, as Roy has described, that’s a not-improbable outcome. If the design of the system is such that there aren’t any real “units”, what exactly are we testing? Regardless of static or dynamic typing, replaceability of code, and other technological things, does the fact that all TestMethods in that TestFixture complete successfully mean anything? In other words, what did the test test?

It is clear that these tests cost something.

It’s more code to write. It’s more code to maintain.

The question is, what value are we getting from these “unit tests that any developer without design skills can write”?

The manager in me doesn’t like this return on investment.

By the way, TDD is as much the evolution of unit testing as the screw driver is the evolution of the hammer. But that’ll have to wait for a different post.

What’s Design Got To Do With It?

If you’re looking for the technical ability to write a test fixture and replace calls to other classes, then design has nothing to it.

If tests are to be valuable – design has everything to do with it.

The difficulty our developer is having unit-testing the system is a symptom of design problems. There’s a good chance that’s why he suggested a rewrite.

By the way, please do a search & replace in your vocabulary on the word “rewrite” with the word “redesign”. The code’s syntax isn’t the problem – it’s not the “m_”, camel case, or anything like that. It’s not that if the code was rewritten under the same design that all problems will go away.

Redesign, or do nothing.

The community’s been discussing the issues of coupling, interfaces, mocking, and tools at length in the context of testability. I won’t reiterate the debate here but I’ll tell you this:

If logic is duplicated, if the code is tightly coupled, if there is no separation of concerns, the unit tests will be useless – even if they “test” the class in isolation.

Cut the coverage crap

Metrics lie.

The fact that there’s a bunch of other code which calls 100% of the system’s code and doesn’t contain false assertions doesn’t mean that the code is high quality or doesn’t contain bugs.

In a well designed system, most “logic” will be contained in two “layers” – the controllers and the domain model. These classes should be independent of any and all technological concerns. You can and should get high unit test coverage on classes in these layers. Shoot for 100%, it’s worth it.

Testing domain models is all about asserting state. While using setters to get the domain objects into a necessary initial state is OK, setters should not be used beyond that. Testing controllers is primarily about interactions – mocks will probably be needed for views and service agents. Commands do not need to be mocked out.
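As a hypothetical illustration of those two styles – the classes below are invented for the example – the domain test asserts state, while the controller test asserts the interaction with a hand-rolled fake view (a mocking framework would do just as well):

    using NUnit.Framework;

    public class Customer
    {
        public bool IsPreferred { get; private set; }
        public void PlaceOrder(decimal amount) { if (amount > 10000m) IsPreferred = true; }
    }

    public interface ICustomerView { void ShowPreferred(string name); }

    public class CustomerController
    {
        private readonly ICustomerView view;
        public CustomerController(ICustomerView view) { this.view = view; }
        public void CustomerBecamePreferred(string name) { view.ShowPreferred(name); }
    }

    public class FakeCustomerView : ICustomerView
    {
        public string LastPreferredCustomerShown;
        public void ShowPreferred(string name) { LastPreferredCustomerShown = name; }
    }

    [TestFixture]
    public class LayerTestingExamples
    {
        [Test]
        public void Domain_model_tests_assert_state()
        {
            var customer = new Customer();
            customer.PlaceOrder(20000m);
            Assert.IsTrue(customer.IsPreferred);                        //state-based
        }

        [Test]
        public void Controller_tests_assert_interactions()
        {
            var view = new FakeCustomerView();
            new CustomerController(view).CustomerBecamePreferred("Jane");
            Assert.AreEqual("Jane", view.LastPreferredCustomerShown);   //interaction-based
        }
    }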

Most other layers have some dependence on technology that makes unit tests relatively less valuable. Coverage on these layers is mostly meaningless. My guidance is to take the effort that would have been spent on unit testing these other layers and invest it all in automated integration tests. You’re likely to get a much higher return on investment. Much higher.

Much.

Everybody’s Right

Developers aren’t just born knowing good design, testing, or anything else. Universities, colleges, and most vendors do little to change that state of affairs. Books help, a bit, but when learning to swim, you’ve got to get your feet wet, and on the job training is, by and large, all there is. As such, lowering the barrier to entry is important.

Keeping in mind the Dreyfus model of knowledge acquisition, it’s not about “dumbing down” software development, it’s about bringing novices up to speed:

“In the beginning [novices] learn to recognize objective facts and features, relevant to the skill. Characteristic of relevant elements are that they can be recognized context-free, i.e. without reference to the overall situation. The novice acquire basic rules to follow, acting upon those facts and features. The rules are also context-free, i.e. no notice is taken to the surroundings. On account of this the novice feels very little responsibility for the result.” (emphasis mine)

Managers are ultimately responsible for the result.

Managers shouldn’t necessarily sacrifice their projects on this altar of learning. Organizations need to find ways for developers to safely practice these techniques as a part of developing their “human resources”. First of all, this needs to be communicated to everyone – that the organization understands the importance of these techniques, the desires of developers to adopt them, and the projects that need to be delivered.

Some projects may be allocated additional non-functional requirements: the software will be developed test-first, there will be at least 80% unit test coverage, etc. It can make sense to have developers spend some time on these projects after finishing one delivery-focused project and before going on to another one. As more developers become proficient with unit testing and design, the delivery-focused projects can start to benefit from these skills.

It’s a gradual process.

The Important Bit

No matter how you go about unit testing, do periodic test reviews.

Just like code reviews.

That’s it.

 


Related Posts

Business Process Verification

Self documenting and Test-Driven Alien Artifacts

SOA Testing



Make WCF and WF as Scalable and Robust as NServiceBus

Monday, June 30th, 2008

This topic is getting more play as more people are using WCF and WF in real-world scenarios, so I thought I’d pull the things that I’ve been watching in this space together:

Reliability

Locking in SqlWorkflowPersistenceService (via Ron Jacobs) where, if you want predictable persistence (MS: ‘none of our customers asked for this to be easy’), you need to use a custom activity (which Ron was kind enough to supply).

“Given what I learned today I’d have to say that I’d be very careful about using workflows with an optimistic locking.  Detecting these types of situations is not that simple.”

Let’s think about that. If we’re doing pessimistic locking, we run into the problem that, if a host restarts (as the result of a critical Windows patch or some other unexpected occurrence), the workflow can’t be handled by any other host in the meantime (you didn’t care so much about your SLA, did you?).

Luckily, someone’s come up with a hack that works around this robustness problem in Scalable Workflow Persistence and Ownership.

“So this code will attempt to load workflow instances with expired locks every second. Is it a hack? Yes. But without one of two things in the SqlWorkflowPersistenceService its the sort of code you have to write to pick up unlocked workflow instances robustly.”

This will seriously churn the table used to store your workflows, decreasing performance of workflows that haven’t timed out. Oh well.

Testability

Implementing WCF Services without Referencing WCF (via Mark Seemann):

“More than a year ago, I wrote my first post on unit testing WCF services. One of my points back then was that you have to be careful that the service implementation doesn’t use any of the services provided by the WCF runtime environment (if you want to keep the service testable). As soon as you invoke something like OperationContext.Current, your code is not going to work in a unit testing scenario, but only when hosted by WCF.”

After pointing out some of the more basic difficulties in testability a straightforward WCF implementation brings, Mark turns the heat up in his follow-up post, Modifying Behavior of WCF-Free Service Implementations:

“Perhaps you need to control the service’s ConcurrencyMode, or perhaps you need to set UseSynchronizationContext. These options are typically controlled by the ServiceBehaviorAttribute. You may also want to provide an IInstanceProvider via a custom attribute that implements IContractBehavior. However, you can’t set these attributes on the service implementation itself, since it mustn’t have a reference to System.ServiceModel.”

Wow – all the things required to make a WCF service scalable and thread-safe make it difficult to test. In the end, we’re beginning to see how many hoops we have to go through in order to get separation of concerns, but until we can take all this and get it out of our application code, it’s an untenable solution. I hope Mark will continue with this series, if only so I can take the framework that might grow out of it and use it as a generic WCF transport for NServiceBus.

Comparison

After the Neuron-NServiceBus comparison that Sam and I had, we talked some more. After going through some of the rationale and thinking, Sam even put nServiceBus into his WCF-Neuron comparison talk. Sam had this to say about nServiceBus:

“The bottom line is: I like what I see. Although it’s a framework, not an ESB product like Neuron, it’s a powerful framework that takes the right approach on SOA and enforces a paradigm of reliable one-way, *non-blocking* calls. That is the point of the talk tonight overall; we need to get away from the stack world of synchronous RPC calls to true asynchronous non-blocking message based SOA systems.”

The main concern I have with a WCF+WF based solution is that developers need to know a lot in order to make it testable, scalable, and robust. In nServiceBus, that’s baked into the design. It would be extremely difficult for a developer writing application logic to interfere with when persistence needs to happen, or the concurrency strategy of long-running workflows. The fact that message handlers in the service layer don’t need concurrency modes, instance providers, or any of that junk make them testable by default.



Sagas and Unit Testing – Business Process Verification Made Easy

Monday, February 4th, 2008

Sagas have always been designed with unit testing in mind. By keeping them disconnected from any communications or persistence technology, it was my belief that it should be fairly easy to use mock objects to test them. I’ve heard back from projects using nServiceBus this way that they were pleased with their ability to test them, and thought all was well.

Not so.

The other day I sat down to implement and test a non-trivial business process, and the testing was far from easy. Now, as developers go, I’m not great, or an expert on unit testing or TDD, but I’m above average. It should not have been this hard. And I tried doing it with Rhino.Mocks, TypeMock, and finally Moq. It seemed like I was in a no-man’s-land between trying to do state-based testing and setting expectations on the messages being sent (as well as the correct values in those messages) – nothing flowed.

Until I finally stopped trying to figure out how to test, and focused on what needed to be tested. I mean, it’s not like I was trying to build a generic mocking framework like Daniel.

Here’s an example business process, or actually, part of one, and then we’ll see how that can be tested. By the way, there will be a post coming soon which describes how we go about analysing a system, coming up with these message types, and how these sagas come into being, so stay tuned. Either that, or just come to my tutorial at QCon.

On with the process:

1. When we receive a CreateOrderMessage, whose “Completed” flag is true, we’ll send 2 AuthorizationRequestMessages to internal systems (for managers to authorize the order), one OrderStatusUpdatedMessage to the caller with a status “Received”, and a TimeoutMessage to the TimeoutManager requesting to be notified – so that the process doesn’t get stuck if one or both messages don’t get a response.

2. When we receive the first AuthorizationResponseMessage, we notify the initiator of the Order by sending them a OrderStatusUpdatedMessage with a status “Authorized1”.

3. When we get “timed out” from the TimeoutManager, we check if at least one AuthorizationResponseMessage has arrived, and if so, publish an OrderAcceptedMessage and notify the initiator (again via the OrderStatusUpdatedMessage), this time with a status of “Accepted”.

And here’s the test:

    public class OrderSagaTests 
    { 
        private OrderSaga orderSaga = null; 
        private string timeoutAddress; 
        private Saga Saga;     

        [SetUp] 
        public void Setup() 
        { 
            timeoutAddress = "timeout"; 
            Saga = Saga.Test(out orderSaga, timeoutAddress); 
        }     

        [Test] 
        public void OrderProcessingShouldCompleteAfterOneAuthorizationAndOneTimeout() 
        { 
            Guid externalOrderId = Guid.NewGuid(); 
            Guid customerId = Guid.NewGuid(); 
            string clientAddress = "client";     

            CreateOrderMessage createOrderMsg = new CreateOrderMessage(); 
            createOrderMsg.OrderId = externalOrderId; 
            createOrderMsg.CustomerId = customerId; 
            createOrderMsg.Products = new List<Guid>(new Guid[] { Guid.NewGuid() }); 
            createOrderMsg.Amounts = new List<float>(new float[] { 10.0F }); 
            createOrderMsg.Completed = true;     

            TimeoutMessage timeoutMessage = null;     

            Saga.WhenReceivesMessageFrom(clientAddress) 
                .ExpectSend<AuthorizeOrderRequestMessage>( 
                    delegate(AuthorizeOrderRequestMessage m) 
                    { 
                        return m.SagaId == orderSaga.Id; 
                    }) 
                .ExpectSend<AuthorizeOrderRequestMessage>( 
                    delegate(AuthorizeOrderRequestMessage m) 
                    { 
                        return m.SagaId == orderSaga.Id; 
                    }) 
                .ExpectSendToDestination<OrderStatusUpdatedMessage>( 
                    delegate(string destination, OrderStatusUpdatedMessage m) 
                    { 
                        return m.OrderId == externalOrderId && destination == clientAddress; 
                    }) 
                .ExpectSendToDestination<TimeoutMessage>( 
                    delegate(string destination, TimeoutMessage m) 
                    { 
                        timeoutMessage = m; 
                        return m.SagaId == orderSaga.Id && destination == timeoutAddress; 
                    }) 
                .When(delegate { orderSaga.Handle(createOrderMsg); });     

            Assert.IsFalse(orderSaga.Completed);     

            AuthorizeOrderResponseMessage response = new AuthorizeOrderResponseMessage(); 
            response.ManagerId = Guid.NewGuid(); 
            response.Authorized = true; 
            response.SagaId = orderSaga.Id;     

            Saga.ExpectSendToDestination<OrderStatusUpdatedMessage>( 
                    delegate(string destination, OrderStatusUpdatedMessage m) 
                    { 
                        return (destination == clientAddress && 
                                m.OrderId == externalOrderId && 
                                m.Status == OrderStatus.Authorized1); 
                    }) 
                .When(delegate { orderSaga.Handle(response); });     

            Assert.IsFalse(orderSaga.Completed);     

            Saga.ExpectSendToDestination<OrderStatusUpdatedMessage>( 
                    delegate(string destination, OrderStatusUpdatedMessage m) 
                    { 
                        return (destination == clientAddress && 
                                m.OrderId == externalOrderId && 
                                m.Status == OrderStatus.Accepted); 
                    }) 
                .ExpectPublish<OrderAcceptedMessage>( 
                    delegate(OrderAcceptedMessage m) 
                    { 
                        return (m.CustomerId == customerId); 
                    }) 
                .When(delegate { orderSaga.Timeout(timeoutMessage.State); });     

            Assert.IsTrue(orderSaga.Completed); 
        } 
    }

You might notice that this style is a bit similar to the fluent testing found in Rhino Mocks. That’s not coincidence. It actually makes use of Rhino Mocks internally. The thing that I discovered was that in order to test these sagas, you don’t need to actually see a mocking framework. All you should have to do is express how messages get sent, and under what criteria those messages are valid.

If you’re wondering what the OrderSaga looks like, you can find the code right here. It’s not a complete business process implementation, but it’s enough to understand what one would look like:

using System; 
using System.Collections.Generic; 
using ExternalOrderMessages; 
using NServiceBus.Saga; 
using NServiceBus; 
using InternalOrderMessages;     

namespace ProcessingLogic 
{ 
    [Serializable] 
    public class OrderSaga : ISaga<CreateOrderMessage>, 
        ISaga<AuthorizeOrderResponseMessage>, 
        ISaga<CancelOrderMessage> 
    { 
        #region config info     

        [NonSerialized] 
        private IBus bus; 
        public IBus Bus 
        { 
            set { this.bus = value; } 
        }     

        [NonSerialized] 
        private Reminder reminder; 
        public Reminder Reminder 
        { 
            set { this.reminder = value; } 
        }     

        #endregion     

        private Guid id; 
        private bool completed; 
        public string clientAddress; 
        public Guid externalOrderId; 
        public int numberOfPendingAuthorizations = 2; 
        public List<CreateOrderMessage> orderItems = new List<CreateOrderMessage>();     

        public void Handle(CreateOrderMessage message) 
        { 
            this.clientAddress = this.bus.SourceOfMessageBeingHandled; 
            this.externalOrderId = message.OrderId;     

            this.orderItems.Add(message);     

            if (message.Completed) 
            { 
                for (int i = 0; i < this.numberOfPendingAuthorizations; i++) 
                { 
                    AuthorizeOrderRequestMessage req = new AuthorizeOrderRequestMessage(); 
                    req.SagaId = this.id; 
                    req.OrderData = orderItems;     

                    this.bus.Send(req); 
                } 
            }     

            this.SendUpdate(OrderStatus.Recieved);     

            this.reminder.ExpireIn(message.ProvideBy - DateTime.Now, this, null); 
        }     

        public void Timeout(object state) 
        { 
            if (this.numberOfPendingAuthorizations <= 1) 
                this.Complete(); 
        }     

        public Guid Id 
        { 
            get { return id; } 
            set { id = value; } 
        }     

        public bool Completed 
        { 
            get { return completed; } 
        }     

        public void Handle(AuthorizeOrderResponseMessage message) 
        { 
            if (message.Authorized) 
            { 
                this.numberOfPendingAuthorizations--;     

                if (this.numberOfPendingAuthorizations == 1) 
                    this.SendUpdate(OrderStatus.Authorized1); 
                else 
                { 
                    this.SendUpdate(OrderStatus.Authorized2); 
                    this.Complete(); 
                } 
            } 
            else 
            { 
                this.SendUpdate(OrderStatus.Rejected); 
                this.Complete(); 
            } 
        }     

        public void Handle(CancelOrderMessage message) 
        {     

        }     

        private void SendUpdate(OrderStatus status) 
        { 
            OrderStatusUpdatedMessage update = new OrderStatusUpdatedMessage(); 
            update.OrderId = this.externalOrderId; 
            update.Status = status;     

            this.bus.Send(this.clientAddress, update); 
        }     

        private void Complete() 
        { 
            this.completed = true;     

            this.SendUpdate(OrderStatus.Accepted);     

            OrderAcceptedMessage accepted = new OrderAcceptedMessage(); 
            accepted.Products = new List<Guid>(this.orderItems.Count); 
            accepted.Amounts = new List<float>(this.orderItems.Count);     

            this.orderItems.ForEach(delegate(CreateOrderMessage m) 
                                        { 
                                            accepted.Products.AddRange(m.Products); 
                                            accepted.Amounts.AddRange(m.Amounts); 
                                            accepted.CustomerId = m.CustomerId; 
                                        });     

            this.bus.Publish(accepted); 
        } 
    } 
}
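
For completeness, here's a rough sketch of what a client endpoint driving this saga might look like. It isn't part of the sample; the OrderSubmitter and OrderStatusUpdatedHandler classes are made up for illustration, and it assumes the message contracts shown above live in ExternalOrderMessages and are handled through the usual IBus and IMessageHandler<T> abstractions:

using System;
using ExternalOrderMessages;
using NServiceBus;

namespace ClientLogic
{
    // Kicks off the order process by sending the order to the processing endpoint.
    public class OrderSubmitter
    {
        private readonly IBus bus;

        public OrderSubmitter(IBus bus)
        {
            this.bus = bus;
        }

        public void Submit(Guid orderId, Guid customerId)
        {
            CreateOrderMessage msg = new CreateOrderMessage();
            msg.OrderId = orderId;
            msg.CustomerId = customerId;
            msg.Completed = true;                     // last (and only) message for this order
            msg.ProvideBy = DateTime.Now.AddHours(1); // deadline enforced by the saga's reminder
            // msg.Products / msg.Amounts omitted for brevity

            this.bus.Send(msg); // routing to the ProcessingLogic endpoint comes from configuration
        }
    }

    // Receives the status updates the saga sends back to the originating client.
    public class OrderStatusUpdatedHandler : IMessageHandler<OrderStatusUpdatedMessage>
    {
        public void Handle(OrderStatusUpdatedMessage message)
        {
            Console.WriteLine("Order {0} is now {1}", message.OrderId, message.Status);
        }
    }
}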

All this code is online in the subversion repository under /Samples/Saga.

Questions, comments, and general thoughts are always appreciated.



Estimate Individually – Fail Globally?

Saturday, September 1st, 2007

After reading Derek Hatchard’s post, The Art and War of Estimating and Scheduling Software, I wanted to follow up on my previous post on the topic, Don’t Trust Developers with Project Management. The problem lies with individualistic thinking.

Developers, and managers too for that matter, by and large are concerned with “productivity”. Developers want the latest tools and technologies so that they can churn out more code faster. Managers create schedules trying to get the maximum efficiency out of each one of their developers. They consider resource utilization and other terms that sound manager-ish.

Fact is, on medium to large sized projects, if you look at the studies you'll find that developer productivity, measured as the total lines of (non-blank) code of the system in production divided by the total number of developer days, comes in at roughly 6. Maybe 7.

7 lines of code a day.

Let that sink in for a second.

I can hear the managers screaming already. OMFG, what were they doing all day long?! It takes, what, 10 minutes to put out 7 lines of code? An hour even, if it’s complicated recursive code and stuff. And they say they don’t like us micro-managing them?! Now we know why. It’s because they’re goofing off all day long.

Well, managers, that's not really the way it goes. You see, you have to take into account the time it took to learn the technology, tools, frameworks, etc. Add to that the time spent understanding the requirements, which really means sitting through boring meetings that don't explain much. Finally, our poor developer actually gets to implement the requirement. Maybe run the system a couple of times, try out the feature they implemented, and check the code in.

Well, that's actually the easy part. Now comes the part which kills most of the time. After a bunch of features have been developed by the team, the testers start banging away at it and find a bunch of bugs. Now the developer has to reverse-engineer some bizarre system behavior and figure out which part of the system is to blame. That usually involves some educated guessing (unless they've just joined the team and have been put in the bug-fixer role to "learn the system", in which case it is thoroughly UNeducated guessing). They change some code, run the system, see that the problem looks like it's been fixed, check the new code in, and close the bug.

But the bugs keep coming. And as the project progresses towards production, more and more of the developer's time is spent looking through and changing existing code rather than actually writing new code.

And the larger the system, the more bugs. And I don't mean that the number of bugs increases linearly with lines of code or number of features. It's probably closer to exponential. If it's a mission critical system, the performance bugs will take an order of magnitude more time to fix than other bugs.

So, as you can see, getting a system into production is a team effort. It includes the developers and testers, of course, but also management, and the customer, and how they manage scope. This is kind of a “duh” statement, but we’re getting to the punch-line.

If getting a system into production involves the entire team, isn’t that obviously true for each feature too?

In which case, why are we asking just the developers to estimate the time it takes to get a feature “done”? Why are we trying so hard to measure their productivity?

I know why. It’s so we can get rid of the less productive ones and give bonuses to the more productive ones!

Back to the main issue. I don't "trust" developer estimates because I need to see the team's capability to put features in production. That involves all aspects, and often many team members, in some cases multiple developers going through the same code. It also includes all the overhead of cross-team communication, sick days, etc. It's also why I try to get multiple data points over time to understand the team's velocity.

While I care about the quality of my developers, testers, and everybody else on my team, and would like them to be able to estimate their work as best they can, I've got a project to put into production. And the best way I'll know when it'll go into production is by having data that'll enable me to state to my management:

“Our team is finishing 20 feature-units a month, we’ve got 200 feature-units to go, so we’ll be done in around 10 months.”
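
Just to make that arithmetic concrete, here is a trivial, hypothetical sketch (the numbers are made up) of the projection: average the feature-units completed over the last few iterations and divide the remaining work by that rate.

using System;

class VelocityProjection
{
    static void Main()
    {
        // Made-up history: feature-units the whole team completed in recent months.
        int[] featureUnitsPerMonth = { 18, 22, 21, 19 };
        int remainingFeatureUnits = 200;

        double velocity = 0;
        foreach (int completed in featureUnitsPerMonth)
            velocity += completed;
        velocity /= featureUnitsPerMonth.Length; // averages out to ~20 a month

        // Project the remaining work at the team's measured rate.
        double monthsToGo = remainingFeatureUnits / velocity;
        Console.WriteLine("About {0:F0} months to go at {1:F0} feature-units a month.", monthsToGo, velocity);
    }
}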

If I'm busy micro-measuring each developer's estimates, I won't have the time to see the forest. By first taking a harsh look at the reality of what the team can do, I can start looking for ways to make it better. Maybe the bottleneck is between analysts and developers, maybe we're seeing the same bugs regressing many times, but until we know where we are, we can't run controlled experiments to see what makes us better.

Focusing on the individual developer, getting them the latest and greatest tools may be great for their morale, but it probably won’t make a bit of difference to their actual productivity.

Next time – what to do when management asks you what it’ll take to be done sooner.



Don't Trust Developers with Project Management

Monday, August 6th, 2007

What with all the warm and fuzzy feelings around trusting developers (here, here, and here) I just have to tone it down a bit. The title takes it a bit far – but less than you might think. Just today I had a talk with one of the team leads on the project I’m consulting on. It boiled down to this:

Developers don’t know how to estimate.

Or more specifically, the variance of a feature's actual completion time from the estimate given by the developer grows, probably exponentially, with the size of the estimate.

For example, if the estimate is a day, you can expect it to be finished in around a day. If the estimate is a week (5 work days), it will probably vary between 4 and 10 work days. If the estimate is a month, in all actuality the developer probably doesn't know enough to say, but will answer when pressed.

This is why Ron (the team lead) asked me if I wasn’t worried I was putting myself in a lose-lose situation by changing the project structure. There were two “teams” when I came in – developers and testers. All the team leads had “committed” to “finishing” the project in 6 months. When I originally proposed the change to more feature-driven teams, mixing skilled and newer developers and testers together on the same team, came the cry:

“It’ll take us twice as long this way.”

“It’s so much less efficient than before.”

And on, and on. What was funny to me was that 3 of the “6” months were already gone and not a single feature worked. We were half “gone”, and nowhere near half done.

The thing is that Ron was sure I was cooking my own goose with upper management. What he, and most other developers, don't know is that upper management has gotten used to the state of things. If developers say one month, management has seen enough history to know that it'll really be 3-4 months. So when I come in and do things differently from what developers are used to, upper management is thrilled – that's why they brought me in in the first place.

The difference is that by working based on features, and measuring project progress by feature-units completed per iteration, I drive down variability. This creates solid data about progress saying when we’ll be done (more or less). This is quite different from the “normal” course of many projects:

“OK, so it’s been 8 months now on your 6 month project. When will you guys be finished already?”

Without the data, your only strategy is hope: “Umm, I hope the developers will be done this week(?)”

Don't get me wrong. I trust my developers and testers deeply. But it's not their job to know how to estimate and manage projects. PMs who take developers' estimates as-is and stake the project on them being correct are setting themselves up for failure – with only themselves to blame.

Now back to your nice warm-and-fuzzy blogging… 🙂



   

