Distributed Architecture on ARCast.TV Rapid Response
Monday, January 14th, 2008. A while ago, Ron Jacobs and I (virtually) got together and did a couple of “rapid responses” to questions on the MSDN architecture forums, and I just noticed that they’re online. The really great thing is that there are transcripts! For your convenience, I’ve included them here.
By the way, if you’re looking for more Q&A style info, check out the Ask Udi podcast. If you have a pressing question and need a shorter turnaround time than the month or so it usually takes me for the podcast, send me an email at OnlineConsultation@UdiDahan.com.
Number 1
Ron: Hey, welcome back to ARCast Rapid Response. This is your host Ron Jacobs and today I’m looking at the MSDN architecture forum where I see this message from “theking2.” Yeah? OK, so “king,” he says, he’s building a distributed architecture that has a number of external systems. These external systems interface through a telnet connection, so they accept commands and return results as ACKs or NACKs.
Typically these systems have limited resources for the number of simultaneous sessions you can open, so, five to fifty depending on the system. What he did to get around this, was, he created some Enterprise Services objects and some pooled objects that set up these connections and then he has some Web services. The Web services are going to receive an incoming message. They’re going to call these pooled COM+ objects and they’re going to make the telnet calls to the external systems. Sounds interesting.
He says, after a year of production it has become apparent that some of the external systems are not performing very well. He says the bulk of the requests, but not all, to the external systems can be done asynchronously. So, he’s opting for a message queue-based solution using pseudosynchronous calls whenever a direct response is needed.
So, the question is, at what layer would message queuing make most sense?
So, should the clients, this Web service that receives the message — should it do a queue? Put a message in the queue and then the COM+ objects would pop off, or they have some central Web services that would pop it. So, the central Web services or these Enterprise Services objects? Or maybe just at the communication layer, on top of telnet. He says this is the first time he’s using message queuing.
On the line with me I have Udi Dahan, the Software Simplist from Israel.
Udi, this is a very interesting application and my first gut reaction is, does it really matter where you put the queuing?
Udi Dahan: Well, actually I took a look at it as well and I’d have to say that it does because the problem that he’s trying to solve isn’t that clear. We know that there is some sort of performance problem but we’re not quite sure where it is. We know that there are long and varying latencies in the responses but we’re not really quite sure why.
While we know that the external system is a bit slow, our choice of where to put the queue will probably have an impact, obviously, on the development model of the clients and the Web services, as well as on how those external systems would work. So, I’d have to say that choosing the correct place to put the queue is important.
Ron: Well, let me interject something here because what you said just made me think. Now, if the problem is that these external systems are slow and limited number of connections, the first question we ought to ask is, does queuing help this situation at all?
Udi: Well, that’s probably a good first step. I mean every single time someone comes with a solution and then says, “OK, what’s the problem,” it’s always a good thing to check that solution first.
It looks like the reason he wants to use a queue is to do some kind of load leveling. He’s getting requests from his clients, external Web services and web applications at a higher rate than his back-end systems can handle. Using a queue as a load-leveling mechanism is definitely the right way to go. So, from that perspective I think that putting a queue somewhere in there is a good idea.
Ron: OK. So then if you put a queue, it seems to me that it’s not going to make that much difference which layer you put the queue, would it?
Udi: Well, it might, for the main reason that you really have to look at where his bottleneck is, and that’s his back-end systems. The bottleneck also has to do with the number of connections and sessions that can be opened. The place I’d be looking at putting the queue is probably between those pooled COM+ objects and his central Web services, mainly because that gives the Web services a nice encapsulation towards his organization’s internal consumers, whether those are other Web services, web applications or clients, and everybody else out there, while keeping that abstraction out of their way.
So, the choice of using pooled COM objects is one of the ways he does the load leveling now. One of the problems he has is that it doesn’t seem to be doing that much for him because the switches and knobs that are available in COM+ in order to do that load leveling aren’t that great. What I’d be looking at in his situation is to put a queue in there but on the back side of that queue, not talking directly to the external system but doing something with WCF.
WCF has an incredible number of switches and knobs for throttling the load and the number of threads that are open. He could also do that across a large number of URIs in order to split up the load, allowing him to cache results quite a bit better. So, that’s where I’d be looking. Just throw away those COM+ objects, put WCF in there, use the MSMQ binding and start configuring things from there.
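The load-leveling idea discussed here, a queue in front of a back end that only supports a handful of sessions, can be sketched in Python (the podcast’s actual stack is COM+/WCF/MSMQ; the names and the five-session limit below are illustrative assumptions). The queue absorbs bursts from the front end, while a small, fixed pool of workers, standing in for the limited telnet sessions, drains it at whatever rate the back end can handle:

```python
import queue
import threading

MAX_SESSIONS = 5  # the external system only allows a few simultaneous sessions

work = queue.Queue()          # the load-leveling queue: absorbs bursts
results = {}                  # request id -> result
results_lock = threading.Lock()

def backend_call(payload):
    # stand-in for the real telnet command; returns ACK/NACK
    return "ACK" if payload else "NACK"

def worker():
    while True:
        req_id, payload = work.get()
        if req_id is None:            # sentinel: shut down
            work.task_done()
            return
        outcome = backend_call(payload)
        with results_lock:
            results[req_id] = outcome
        work.task_done()

# only MAX_SESSIONS workers ever touch the back end,
# no matter how many requests arrive at the front end
threads = [threading.Thread(target=worker) for _ in range(MAX_SESSIONS)]
for t in threads:
    t.start()

# a burst of 100 requests is absorbed by the queue, not the back end
for i in range(100):
    work.put((i, f"command-{i}"))
work.join()

for _ in threads:
    work.put((None, None))
for t in threads:
    t.join()

print(len(results))
```

Note that the queue doesn’t make the back end faster; it only shapes the arrival rate to match what the back end can sustain, which is exactly the point made in the discussion.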
Ron: There’s a lot of stuff in the message, but I think his core concern is performance. He mentions pseudosynchronous calls. I think by that he means, a message comes in to the web service, he’s going to drop something on the queue and then hold that message response until he gets a response back from a queue. So, it’s sort of synchronous but sort of not synchronous. So, in effect he’s kind of waiting on a queue instead of waiting on the pooled object to make this outbound telnet call.
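The “pseudosynchronous” pattern Ron describes, putting a request on a queue but holding the caller until a correlated reply comes back, can be sketched like this (the correlation id and queue names are illustrative assumptions, not from the original system):

```python
import queue
import threading
import uuid

request_q = queue.Queue()
reply_qs = {}                 # correlation id -> per-caller reply queue
reply_lock = threading.Lock()

def backend():
    # drains the request queue and posts correlated replies,
    # like the pooled objects making the telnet call would
    while True:
        corr_id, payload = request_q.get()
        if corr_id is None:   # sentinel: shut down
            return
        with reply_lock:
            reply_qs[corr_id].put(f"ACK:{payload}")

def pseudo_sync_call(payload, timeout=5.0):
    # looks synchronous to the caller, but travels over queues
    corr_id = uuid.uuid4().hex
    my_reply = queue.Queue(maxsize=1)
    with reply_lock:
        reply_qs[corr_id] = my_reply
    request_q.put((corr_id, payload))
    try:
        return my_reply.get(timeout=timeout)  # block until the reply arrives
    finally:
        with reply_lock:
            del reply_qs[corr_id]

t = threading.Thread(target=backend, daemon=True)
t.start()
result = pseudo_sync_call("STATUS")
print(result)
request_q.put((None, None))   # stop the backend worker
```

The caller’s programming model stays request/response, which is why it is only “sort of” synchronous: the latency is unchanged, but the waiting now happens on a reply queue rather than on a pooled object.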
I could agree if you said, “Well, look, our big problem is that we keep getting timeouts, because when we go to get a COM+ object from the pool, COM+ waits for a while, says ‘Hey, there’s no object available’ and returns an error.” Then the queue is definitely going to help that problem. But in terms of the sheer throughput or performance of the system, this is not going to help at all. It’s going to still be the same performance.
Now if you said, “Oh look, we can do some of this work kind of at a later point in time,” well, queuing does allow you to time-shift the work. Right? So, if you said, “Look, we can rethink this solution.” You get a message in, we stuff something in a queue that we’ll deal with later, and then very quickly return a response, like some kind of number: “Hey, your transaction number blah, blah, blah will be processed later, it’s queued for processing,” whatever.
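Ron’s time-shifting idea, accept the request, hand back a ticket number right away, and do the real work later, can be sketched as follows (the ticket scheme and function names are illustrative assumptions):

```python
import itertools
import queue

pending = queue.Queue()
ticket_counter = itertools.count(1)
completed = {}

def accept_request(payload):
    # the web service returns immediately with a ticket number;
    # the actual work is time-shifted onto the queue
    ticket = next(ticket_counter)
    pending.put((ticket, payload))
    return ticket

def process_pending():
    # runs later (or elsewhere), at whatever rate the back end allows
    while not pending.empty():
        ticket, payload = pending.get()
        completed[ticket] = f"ACK:{payload}"

t1 = accept_request("order-42")   # caller gets a ticket right away
t2 = accept_request("order-43")
process_pending()                 # work happens after the response was sent
print(completed[t1], completed[t2])
```

As Ron notes, this improves response time at the Web service layer at the cost of a more complex contract: callers now need a way to check on their ticket later.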
I mean that introduces a lot of complexity in the system but it clearly would provide better response at the Web service layer. What do you think?
Udi: Well I think that at the most basic level, his throughput is dictated by his back end systems. From what he seems to be describing, every single request that is going through there, has to hit that back end system. If he has a limited number of back end systems that are supporting a limited number of connections, that’s going to limit his throughput no matter what technology he puts in front of that. So that’s at the core level. You just can’t get away from that.
The one thing that I would agree with you in your description there is the choice of using those COM+ objects. I mean COM+ was a great technology when it came out. The problem occurs, of course, when we start getting into larger and larger delays around the response time and we start getting all sorts of time out exceptions and things like that. So in that respect, I definitely say you know, take a step back from there.
But in terms of everything that he has around there, the queue isn’t going to make the back end system run any faster. What it will do is definitely complicate his system because he’s taking something that used to be synchronous and making it asynchronous. Writing Web services in order to handle that, I mean just adding a bunch of threads in order to listen to queues is not going to make things any simpler.
However, what it might do is to improve the resource usage of those Web services, OK? So instead of having those Web services have a bunch of threads open, waiting for the response coming back from those COM+ pooled objects, those threads could be relinquished and really just be triggered back up when a response comes back from the queue.
So I don’t see an improvement in the kind of solution that MSMQ or queuing would put in there in terms of the latency — how long it would take for a response to get back. However, I do see an improvement in terms of the resource usage of all the other players in the system.
Ron: I would agree with that. I would just say, though, that if you make the Web server that is hosting these Web services more resource efficient, maybe all you’re going to do is enable it to get more requests into the queue more quickly. Ultimately, I think this solution is going to solve a lot of problems related to timeouts, server-busy errors, thread contention, that sort of thing, but it is not likely to increase overall performance.
But I definitely agree though. I would move this solution forward to WCF. I used to be on the COM+ team. COM+ was rolled into WCF so that it would have similar capabilities for pooling, instancing behavior, transactional support, those sorts of things. I would definitely move that forward into WCF.
OK! So great answer, Udi. Thank you so much for being on this ARCast Rapid Response.
Number 2
Ron: Hey this is Ron Jacobs back with another ARCast.TV Rapid Response. Today I’m joined by Udi Dahan, the Software Simplist from Israel.
Udi, I’m looking at the MSDN Architecture Forum and here’s a question from “blast.” Blast says he’s looking for where to put business rules. He’s developing a WinForm application. He uses data sets as the data layer, he says. He’s thinking about business rules and where to put them.
He says obviously, the more organized and centralized business rules are, the better. He’s tempted to put the business rules in the UI layer, especially with the typed dataset. It makes a lot of sense there, but not all rules belong on the client. He says some rules belong on the server, perhaps in a trigger.
So he’s asking where do you put your rules? How do you think about this problem, Udi?
Udi: Well, it looks like what he’s doing here is developing a two-tier client that is using WinForms and using datasets and speaking directly to the database. That in essence is part of his problem in that in terms of performance, he’d like to run more rules in the UI layer so that the user won’t be sending garbage to the database.
He also understands that because he’s building a multi-user system, there is a limited ability, in terms of concurrency, to actually have all the rules run correctly client-side and make sure that everything is correct. So, his choice of architecture, working two-tier, is the main reason he has to fragment his business rules.
If he were to move towards a three-tier solution, that is put an application server between his smart client and the database, it would be a lot easier to put those business rules there. Now, once the business rules are out of the database, because again, we don’t have to deal with the concurrency issues once we have an application server and we’re using transactions there and we don’t have any disconnected problems, then what we can do is use those same DLLs, that same CLR code that runs the business rules, and deploy it client-side and use it there.
So, in terms of deployment, what we’d have is we’d have the same rules, both running client-side and server-side, whereas from a development perspective, we’d have them organized and centralized. That’s the way that I’d go about it.
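The idea of one centralized rules module deployed both client-side and server-side (Udi’s “same DLLs, that same CLR code”) can be sketched in Python; the order fields and rule checks below are hypothetical examples, not from the original question:

```python
import re

# one centralized module of business rules, deployed to both
# the smart client and the application server
def validate_order(order):
    errors = []
    if not order.get("customer_id"):
        errors.append("customer_id is required")
    if order.get("quantity", 0) <= 0:
        errors.append("quantity must be positive")
    if not re.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$", order.get("email", "")):
        errors.append("email is invalid")
    return errors

# client-side: validate before sending, so the user gets fast feedback
def client_submit(order):
    errors = validate_order(order)
    if errors:
        return ("rejected locally", errors)
    return server_receive(order)

# server-side: run the same rules again, because the client can't be trusted
def server_receive(order):
    errors = validate_order(order)
    if errors:
        return ("rejected by server", errors)
    return ("accepted", [])

good = {"customer_id": "c1", "quantity": 2, "email": "a@b.com"}
bad = {"customer_id": "", "quantity": 0, "email": "nope"}
print(client_submit(good))
print(client_submit(bad))
```

The rules exist in exactly one place in the codebase, so development stays centralized, while deployment duplicates them on both tiers, which is the split Udi is describing.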
Ron: Yeah, you know, I think conceptually I agree with you that a multi-tier solution would be a very good idea here. What I would probably think about conceptually, is breaking down rules into things that really ought to happen on the client-side. In particular, rules related to validation of data, so you know that you’ve got good and complete data before you ship it off to the server-side. Oftentimes you have to do that anyway because you have a button that shouldn’t be enabled until the data is valid, or something like that.
Udi: Absolutely.
Ron: Of course, we all know that if you have middle-tier web services, you must do validation both on the client-side and the server-side, because you must ensure that valid data is received on the web server. So I agree with you that creating an assembly that you deploy on both sides is a good idea.
I would just expand on what you said a little bit and think about maybe using Workflow Foundation and its business rules on the server-side as a way to handle a lot of the heavier lifting: server-side validations and business rules that might require sifting through more data, rules that are more oriented towards business logic. And if you have very, very data-intensive rules, then maybe some of those might even happen in the database. Don’t you agree?
Udi: Oh, absolutely. Absolutely. That’s something that I think often gets swept under the rug too much. Things like unique constraints are kinds of business rules; they protect the integrity of your data. And if we look at the alternatives, sometimes pulling 10 million rows out of the database in order to do some sort of unique email validation on them is just going to kill your performance.
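Udi’s point, that a uniqueness rule belongs in the database schema rather than in code that pulls millions of rows out to check for duplicates, can be illustrated with sqlite3 standing in for the real database (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE   -- the business rule lives in the schema
    )
""")
conn.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))

# the database enforces the rule itself; no need to read every row back
try:
    conn.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True

print(duplicate_rejected)
```

The constraint is enforced atomically under concurrency by the database engine, which is exactly what a client-side or application-server check struggles to guarantee in a multi-user system.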
There are certain things that it just makes sense to do them in the database, it’s just the best way to do it. The hard part, from a development perspective, is maintaining the coherence of your business rules. When you say, “OK, I want a single perspective, what are all the rules in my system?”
Even though we might try to keep it all CLR based, some of the things like unique constraints, like referential integrity, will be in the database. So, what I sometimes suggest to do is to have a separate solution, in terms of your development team, where you put all your business rules.
This includes the SQL statements defining your unique constraints and your referential integrity. Also put in there your validation logic and the workflow that you’re going to be running server-side, as well as the AJAX controls and regular expressions that you’re going to be using client-side to validate that data. Absolutely make sure you have, from a development perspective, one place where you can go to see everything, because if you don’t, [inaudible] can be running, and when things stop working, you won’t know how to debug it.
Ron: All right. Well, excellent answer. Udi, thank you so much.