For years, development and operations environments have rarely worked well together. Developers design something and throw it over the wall for the infrastructure team to build an environment to support it. Then it's thrown over the wall again for operations teams to deploy and monitor it for the rest of its life.

What's the wall? It's usually email, conference calls and planning meetings. Most of the tools these groups use do not work well with each other. Even the languages the teams speak are alien to one another, with significant hours spent on translation between teams. In a typical silo-based organization, groups treat each other's domains (storage, server environments, application components, datacenter services, solution design and so on) as "do not enter" zones of mutually accepted exclusion. These areas of comfortable ignorance lay the foundation for the house of cards that awaits.

Like sheep infected with industry groupthink, many of us thought we could force everyone to agree on the same management APIs, protocols or programming language as the magic that would cure the quagmire of datacenter complexity. The problem is that these approaches focused on commonality within a single group (for example, CIM for datacenter management, WS-I for developers). Given the big promise, complexity should have decreased. Most would agree, however, that datacenter complexity is instead growing.

And with service-oriented architecture, the complexity of the datacenter is only increasing. Services depend on other services developed and managed with different technologies and at different levels of maturity. The quality of the designs and management models in each silo can be radically different. Like a deeply troubled psychiatric patient, the datacenter grows more brittle under the pressure of managing co-dependent services that its managers and architects often do not control.

We live with illusions of control in the datacenter. We inherit versions of technologies and designs from others. We rarely control the politics of product selection or alignment for the solutions we are asked to build infrastructure for. We rarely influence the technological evolution of the products we use today. Even senior management has less control than most think over operational budgets, personnel and the skill levels supported in a datacenter. Unfortunately for some, happily promoting this illusion of control in blissful ignorance lays the groundwork for the worst.

The truth is that much of our blissful ignorance is not a technical design problem but a human problem. Instead of agreeing to use the same versions of the same products, people use the tools and products that work best for them. Fighting this natural tendency is a losing battle. The question is: do we embrace this reality and design solutions that adapt, or do we keep fighting the losing battle of command and control to preserve the illusion?

Microsoft made a decision several years ago to embrace diversity for developers. This is why Microsoft decided to invest in .NET in a radically different way from others. At the time, the Java community assumed most people wanted to change operating systems, so it promoted writing to a single programming language specification and running on a JVM (Java bytecode was used for the Java language; in my five years at Sun, I never saw any other mainstream language targeting Java bytecode). This is not to slam Java. It was simply addressing a very different business problem (the platform).

Instead, Microsoft focused on the human problem. Microsoft found that people rarely port code from platform to platform within their companies (even between JVMs). However, developers often use different programming languages to solve different business challenges, and with the rise of dynamic languages this trend will only increase. Humans are adaptive and reach for the languages and structures that work for them at a specific time. In other words, Microsoft realized that promoting a single programming language was an illusion of control. The key for .NET was to enable programmers of many languages to be successful on Microsoft's platform (which Microsoft could control).

Yet there is still a problem between infrastructure architecture and solution architecture: they still don't communicate well. There have been positive standards-based trends in the datacenter. WS-Management will help tremendously with interoperability between management environments and diverse applications and operating systems (not to mention past work with CIM and others), and Microsoft adopted WS-Management in Windows Server 2003 R2 and its future management products. But WS-Management was never designed to address the "throw it over the wall" infrastructure design and management problem we have today.

Now Microsoft is promoting the architectural specification for the Service Modeling Language (SML). SML is different. It is one of the first proposed standards focused on the relationships between system components and even between application services. SML does not assume people design in the same language, run on the same operating platform or network, or use the same management or deployment environment. It assumes that controlling those areas is an illusion. SML is focused on mapping architectural relationships between these diverse systems up and down (hardware to application) and across (between services and applications):

1) It will allow developers to describe the dependencies between an application they have designed and its application server configuration (for example, the IIS configuration for a .NET application), its network, database and hardware dependencies, and even its external service dependencies.

2) It will allow infrastructure architecture teams to map constraints (datacenter standards) that drive reuse and reduce complexity and cost. Capturing constraints in the SML model can also drive security standards (an example constraint: "no databases can be in the DMZ"). Developer tools that consume SML constraints can enforce them at design time, reducing defects before they reach test and production (see the sketch after this list).

3) Solutions modeled with SML can be tested, deployed and optimized in the datacenter more accurately. And with relationship dependencies mapped more accurately, datacenter teams can reduce their MTTR (mean time to repair) when troubleshooting an SML-modeled solution, because they have a more accurate health model.
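To make points 1 and 2 concrete before the troubleshooting example below, here is a minimal sketch in Python of the idea behind a constrained dependency model. This is not SML syntax (real SML models are XML documents validated against schemas and constraint rules); the component names and the `check_constraints` helper are hypothetical, purely for illustration.

```python
# Conceptual sketch only: components, their dependencies, and a design-time
# constraint from the post ("no databases can be in the DMZ"). Not SML syntax.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    kind: str                 # e.g. "web-app", "database", "dns"
    zone: str                 # e.g. "DMZ", "internal"
    depends_on: list = field(default_factory=list)

def check_constraints(components):
    """Return a list of datacenter-standard violations found in the model."""
    violations = []
    for c in components:
        # Example constraint: databases may not live in the DMZ.
        if c.kind == "database" and c.zone == "DMZ":
            violations.append(f"{c.name}: databases are not allowed in the DMZ")
    return violations

# A toy model of a .NET web application and the database it depends on.
db = Component("OrdersDb", "database", "DMZ")
web = Component("OrdersWeb", "web-app", "DMZ", depends_on=[db])

print(check_constraints([web, db]))
# -> ['OrdersDb: databases are not allowed in the DMZ']
```

The point is that the constraint lives in the model itself, so a design tool can flag the violation long before anything reaches test or production.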

To illustrate the MTTR point: what happens when outbound email stops working in a company? The operations team checks the outbound MTAs (Mail Transfer Agents) for problems. If the outbound MTAs are healthy but simply aren't processing any mail, the team might eventually realize the probable bottleneck is outbound DNS. Once outbound DNS is repaired, mail flows normally again. However, the budget and team that produced the email design often had nothing to do with the budget or team that designed the DNS servers (and vice versa). Yet there is a clear relationship that should be recognized and mapped, and today little exists in the world of standards to capture it.
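Here is an equally hypothetical sketch of how a modeled dependency graph plus a health model turns that troubleshooting session into a short graph walk. Again, Python is just standing in for the concept; the service names and the `suspect_components` helper are made up for illustration.

```python
# Conceptual sketch only: if the relationships are modeled, the on-call team can
# walk from the failing service straight to the unhealthy dependency.
dependencies = {
    "outbound-email": ["mta-cluster"],
    "mta-cluster": ["outbound-dns"],
    "outbound-dns": [],
}

health = {
    "outbound-email": "failed",   # the symptom the users see
    "mta-cluster": "healthy",     # MTAs are up, just not moving mail
    "outbound-dns": "failed",     # the actual root cause
}

def suspect_components(service):
    """Return every modeled dependency of `service` that is not healthy."""
    suspects, stack, seen = [], [service], set()
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        for dep in dependencies.get(current, []):
            if health.get(dep) != "healthy":
                suspects.append(dep)
            stack.append(dep)
    return suspects

print(suspect_components("outbound-email"))   # -> ['outbound-dns']
```

With the relationship captured in the model, the email team and the DNS team no longer need to share a budget, or even a meeting, for the dependency to be visible to whoever is troubleshooting.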

While I know it's popular for the press to be skeptical, Microsoft recognizes the significant diversity of technologies in our datacenters. You can see this in how many of Microsoft's products now align much better with open standards and structures. This blog's example: with the support of partners, competitors and strangers in the industry, Microsoft announced it will promote SML as a common standard to help tear down these walls of ignorance between architecture teams and let us live with a little more control in a world of constantly evolving datacenter technologies. Besides building the model into Visual Studio, Microsoft is committing to it across most of its next-generation management products. And if you want to see how past Novell or Sun interoperability announcements will be realized more fully in the future, just look to the power of SML. This is part of the vision of a new Microsoft, and I believe it's a pretty good idea.

You can find more information at:

http://www.microsoft.com/windowsserversystem/dsi/serviceml.mspx

http://www.microsoft.com/windowsserversystem/dsi/default.mspx

Microsoft renaming SDM to SML:

http://www.microsoft.com/presspass/press/2006/jul06/07-31SMLPR.mspx