Many engineers have asked: what is wrong with complexity?

Doesn't it lead to data center designs that are more precise and accurate?  Doesn't it lead to designs that capitalize more fully on technical innovation?  Of course it does.  However, what's optimal for a specific application or system isn't always what's optimal for the business's data center ecosystem as a whole.

So why do data center architects dislike architectural complexity?

It's simple.  While each siloed application may be optimized by its own diverse, complex design, together they generate aggregated data center complexity that must be managed, monitored, instrumented, provisioned and patched, and that must integrate with existing data center support services.  This aggregated complexity drives up software and hardware acquisition costs, expands the scale and scope of the work people must do, and increases process complexity (to account for the diversity of application and system designs in the environment).  This is why what has traditionally worked in a research environment usually performs poorly in a business or operational environment.

Data center architects treat aggregated data center complexity as a disease that must be constantly monitored and treated to prevent systemic infection of the business.  They know that if the infection reaches critical mass, business units lose much of their ability to bring solutions to market quickly enough to compete, the cost of operating existing environments increases exponentially, systemic quality becomes brittle and unreliable, and the ability to capture, manage and capitalize on organic knowledge of the IT environment (at both the technical and business level) significantly decreases.  Complexity is a disease that constantly threatens to infect your business ecosystem.

Many have argued that the cure is fully standards-based design, which reduces dependency on any one vendor or product.  While this approach does have significant benefits in reducing some interoperability issues, it does not always lead to a reduction in complexity.  Sometimes the cost of the extra products, training and processes needed to support a specific open-standards approach is far higher than a non-standards alternative.  You simply must look at your business's data center ecosystem and decide what is best for your company (make that discriminating business decision).

I recommend developing common data center engineering and architectural criteria (through your own formal change and configuration control processes) as a first step.  Then measure the success of those criteria against business metrics on a quarterly basis (how much complexity did this reduce last quarter?).  Also develop an enterprise data center architectural roadmap for the next three years that focuses on incrementally reducing complexity in the environment.  While it is good to approach this by consolidating services and some applications, I would be careful about consolidating servers until you've mitigated the operational and architectural complexity that might be created in such an environment (the same is true for consolidating networks and storage management environments).
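As a rough illustration of the quarterly measurement idea, here is a minimal sketch in Python showing how you might tally an aggregate complexity score per quarter and report how much it dropped.  The criteria names, weights and counts below are entirely hypothetical assumptions; you would substitute the criteria your own change and configuration control process defines and pull the counts from your own inventory or CMDB.

    from dataclasses import dataclass

    # Hypothetical complexity criteria and weights; replace these with the
    # criteria your own change/configuration control process actually defines.
    CRITERIA_WEIGHTS = {
        "distinct_os_builds": 3.0,         # each unique OS image adds support burden
        "nonstandard_middleware": 5.0,     # anything outside the approved stack
        "one_off_monitoring_agents": 2.0,  # apps not on the common monitoring service
        "manual_provisioning_steps": 4.0,  # steps not covered by standard automation
    }

    @dataclass
    class QuarterInventory:
        """Counts pulled from your asset inventory for one quarter (assumed source)."""
        quarter: str
        counts: dict  # criterion name -> count observed that quarter

    def complexity_score(inv: QuarterInventory) -> float:
        """Weighted sum of the complexity drivers observed in a quarter."""
        return sum(CRITERIA_WEIGHTS[name] * count
                   for name, count in inv.counts.items()
                   if name in CRITERIA_WEIGHTS)

    def quarterly_report(previous: QuarterInventory, current: QuarterInventory) -> str:
        """Answer the architect's question: how much complexity did we reduce last quarter?"""
        prev_score = complexity_score(previous)
        curr_score = complexity_score(current)
        change = prev_score - curr_score
        pct = (change / prev_score * 100) if prev_score else 0.0
        return (f"{current.quarter}: score {curr_score:.1f} "
                f"(reduced {change:+.1f}, {pct:.1f}% vs {previous.quarter})")

    # Example with made-up numbers.
    q1 = QuarterInventory("Q1", {"distinct_os_builds": 12, "nonstandard_middleware": 7,
                                 "one_off_monitoring_agents": 20, "manual_provisioning_steps": 9})
    q2 = QuarterInventory("Q2", {"distinct_os_builds": 10, "nonstandard_middleware": 6,
                                 "one_off_monitoring_agents": 14, "manual_provisioning_steps": 7})
    print(quarterly_report(q1, q2))

The point of a sketch like this is not precision; it is to give the quarterly review a consistent, trackable number so the trend in aggregated complexity becomes visible against your roadmap.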