One of the advantages of being old is that you get to see the same mistakes time and time again, which means you don’t have to think so hard — you just have to remember what happened last time! In my case I normally remember my mistakes fairly well, so I end up looking pretty smart with minimal effort.
For example, I can remember when component-based design was all the rage and everyone was off building component-based systems. We gave lots of information about a component: it is a binary standard which groups a type’s functions into interfaces, with a well-known base interface, and so on. The snag was we never really gave much guidance as to when components should be used.
So one day I was phoned up by a major UK bank which had built its new customer system as a smart client (to use today’s lingo) using COM, but it was running really slowly. Clearly Windows didn’t scale, sigh. Off on the train I went to visit the customer, who was very proud of their all-new, all-component-based design. Five minutes into the meeting it became clear that there were a lot of components in the system, so I asked how many there were in total. About 3000, they said! And this in a desktop system. Yikes!
I mentioned that the components were probably a bit on the small side and there were certainly too many of them. The designers pointed out, not unreasonably I thought, that no one had ever told them how big a component should be, so how were they to know? Anyway, after a bit of redesign (we’d call it refactoring now) we got the component count down to a reasonable level and the system worked fine.
This was brought back to me last week when I sat in a meeting about SOA. There was a lot of discussion about what SOA was: explicit boundaries, autonomous, etc., which reminded me a lot of the COM definitions but with different words. Alas, there was no discussion of what a service should look like: how big, what level of functionality, and so on; this was left as an exercise for the designer. I pointed out that we needed to give guidance on what a service should look like, but this was clearly not understood.
Anyway, I reiterate my definition of what a service should be from my previous blog:
A model or abstraction
Of independent or loosely coupled
And business level functionalities or services
This gives a rough guide as to what should be in a service: a piece of business-level functionality, not a function call. Please, all you people building services, make sure that the service is reasonably big, otherwise you will have performance problems. So how big is reasonably big? The answer to that is given by Platt’s first law.
Meanwhile I wait by the phone for the first person with a 3000-service system which runs slowly because, as we all know, Windows doesn’t scale!
Very nicely put. I, too, have been having 'deja vu' feelings about the whole SOA discussion. Not so much that it's not useful, just that it's being sold (as COM was, as Web Services were, etc.) as the end-all, be-all solution to our application architecture woes. And you're absolutely right that like the architectures that preceded it, not enough specific guidance is being provided to ensure success.
And you're *so* right about the tendency of architects and developers to put the blame on Windows rather than their own inefficient design.
Of course, what's funny about all this is that in the case you describe the cause of poor performance and scalability is no mystery, and applies to just about any non-trivial application architecture. I'm betting that many of the components they wrote were out-of-process components, and as such, they were crossing process boundaries far more often than they should have been. The same types of issues exist with Web services when the interface is chatty rather than chunky, and with COM Interop, and certainly with SOA as well.
The trick then, IMO, is not just to teach architects to use a *particular* architecture the right way, so much as to teach them to educate themselves about the costs incurred when making calls in a given architecture, and to balance those costs against the amount of work being done by each call. When an architect groks this, it should be pretty straightforward for them to create a scalable design regardless of which architecture they choose (assuming, of course, that the underlying platform provides the level of scalability desired).
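The per-call cost argument above can be made concrete with a small back-of-the-envelope sketch. This is a hypothetical model, not real measurements: the overhead and per-item work figures are made-up cost units chosen only to show how a fixed boundary-crossing cost dominates a chatty design and amortizes away in a chunky one.

```python
# Hypothetical cost model: chatty vs chunky interfaces across a
# process or network boundary. Numbers are illustrative cost units.

CALL_OVERHEAD = 100   # fixed cost of one boundary crossing (assumed)
WORK_PER_FIELD = 1    # cost of handling one field of data (assumed)

def chatty_cost(num_fields):
    # One remote call per field: the fixed overhead is paid every time.
    return num_fields * (CALL_OVERHEAD + WORK_PER_FIELD)

def chunky_cost(num_fields):
    # One remote call returning all fields at once: overhead paid once.
    return CALL_OVERHEAD + num_fields * WORK_PER_FIELD

# Fetching a 50-field customer record:
print(chatty_cost(50))  # 5050 units
print(chunky_cost(50))  # 150 units
```

Under this (admittedly crude) model the chatty design costs over 30 times more for the same work, which is exactly the kind of gap that gets blamed on "Windows doesn’t scale".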
Thanks for this. I totally agree with your last para: we need to educate architects on how to partition, and then we will get a lot fewer scalability problems. After that we just have to crack the resource misuse; I just looked at a .Net system with 4 DB connections per client... and it was a 500-user system!
4 per client? Whoof! Was it lack of connection pooling, or were they deliberately using 4 different identities for different parts of the app? Or something else entirely?
Definitely agree on the resource [mis]use... another area where education is necessary. Saw a thread on a private discussion list recently on the topic of impersonation and connection pooling that demonstrated that even experienced developers aren't always certain of the implications of these techniques. The good news is that at least these folks were asking the questions before charging ahead with a problematic implementation. :-)
Don't think they knew what connection pooling was... 4 identities per part of the app! They just hadn't thought about it.
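For readers who haven't met the idea, the fix for the 4-connections-per-client pattern is a shared pool: clients borrow a connection for the duration of a query and hand it back, so 500 users can be served by a couple of dozen connections. A minimal sketch (the class name, pool size, and the stand-in connection factory are all illustrative, not any particular library's API):

```python
# Minimal connection-pool sketch: a fixed set of connections shared by
# all clients, rather than several connections opened per client.
import queue

class ConnectionPool:
    def __init__(self, create_conn, size):
        # Pre-open a fixed number of connections up front.
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(create_conn())

    def acquire(self):
        # Blocks until some other client releases a connection.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

# Usage: 500 clients share 20 connections instead of opening 4 each.
pool = ConnectionPool(lambda: object(), size=20)
conn = pool.acquire()
# ... run a query on conn ...
pool.release(conn)
```

Real pools (ADO.NET's built-in pooling included) key the pool on the connection string and identity, which is why using 4 identities per part of the app quietly multiplies the connection count.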
I sometimes get the feeling that developers are just swapping terms as time goes by.
There are customer, order, etc, objects.
Which later grew into customer, order, etc, components.
And now I'm starting to hear about customer, order, etc, services.
And it's not even the "old dog, new tricks" case. More often than not, it's the "new" dogs that don't know the old tricks that make the "old mistakes" with the new technology.
I have to agree with that. See my Blog today