If you have looked at any of the new components of System Center 2012 you may have noticed that every one of them seems to have a bunch of reports, some of them have data marts or data warehouses, and some of them have analytics in the form of Analysis Services cubes. Apart from the confusion over when to use what, why has so much effort been put into this?
In order to answer that, let's consider what information we need from System Center. I use the term information here deliberately, as you may be aware that System Center chucks out tons of data, e.g. virtual machine X is running SQL Server, this update failed, that server has restarted, and so on. A good example of this is how a badly set up Operations Manager will swamp the IT team with all the messages it throws out.
Rather than all this noise what we need is answers such as:
This isn’t an exhaustive list; rather, these questions characterise the way you might interact with the information coming out of System Center, and they help frame an understanding of how business intelligence fits into the picture.
What tasks are assigned to me?
This is operational reporting, also referred to as consumption reporting because in the process of acting on the report the data in it becomes obsolete. In this case, if I action a task assigned to me from a report, it’s then closed and won’t appear on the report if I run it again. This is the simplest type of report and is usually sourced directly from the operational database (hence the other name). In System Center 2012 these reports are usually built into things like management packs in Operations Manager and Service Manager.
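The idea can be sketched in a few lines. This is a minimal illustration, not System Center's actual schema: the `tasks` table, its columns, and the assignee name are all hypothetical, with SQLite standing in for the operational database.

```python
import sqlite3

# Hypothetical operational database with an illustrative "tasks" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER, assignee TEXT, status TEXT)")
conn.executemany("INSERT INTO tasks VALUES (?, ?, ?)", [
    (1, "me", "open"),
    (2, "me", "open"),
    (3, "someone.else", "open"),
])

def my_open_tasks(conn):
    """The 'what tasks are assigned to me?' consumption report."""
    rows = conn.execute(
        "SELECT id FROM tasks WHERE assignee = ? AND status = 'open'",
        ("me",),
    ).fetchall()
    return [r[0] for r in rows]

print(my_open_tasks(conn))   # → [1, 2]

# Acting on the report makes its data obsolete: close task 1...
conn.execute("UPDATE tasks SET status = 'closed' WHERE id = 1")

# ...and it no longer appears on the next run.
print(my_open_tasks(conn))   # → [2]
```

The point is the round trip: the report is read, acted on, and thereby invalidated, which is why it can be served straight from the operational database with no warehouse in between.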
Is everything that needs to be running actually working OK?
This is often expressed as a dashboard, typically found running on a large screen in a helpdesk or operations room. In order to answer this type of question you might need data from more than one source, and a deeper understanding of the source data is needed, for example to work out which systems need to be monitored and what the components of those systems are. In the BI world a dashboard for this kind of analysis might be interactive rather than static, enabling the end user to drill into a problem area to see more detail. Dashboards typically get their data from a data warehouse, which is nothing more than a specialised database where the design (schema) is optimised for reporting rather than input.

System Center does include some dashboarding capability, but this is a set of components and tools rather than a finished solution, as dashboards are very individual to an organisation, so there’s no right answer that can be implemented in a product. For example, your System Center dashboard would probably compare actual performance against service levels, across time and across business units. However, the SLAs in your business will vary considerably, e.g. “server uptime must be greater than 99.999% between 8am and 6pm on working days” or “client login time on our corporate internet site must be less than 500ms”, so you’ll have to do some work to get those to show up.
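The uptime SLA above can be turned into a measure with a little arithmetic. The sketch below is purely illustrative: the window length, the target, and the function names are assumptions for the example, not anything System Center provides.

```python
# Hypothetical SLA: "server uptime must be greater than 99.999%
# between 8am and 6pm on working days".
SLA_TARGET = 99.999        # percent
WINDOW_MINUTES = 10 * 60   # the 8am-6pm window is 10 hours

def uptime_percent(outage_minutes):
    """Actual availability over one business-day window."""
    return 100.0 * (WINDOW_MINUTES - outage_minutes) / WINDOW_MINUTES

def sla_status(outage_minutes):
    """Compare actual uptime against the SLA target."""
    actual = uptime_percent(outage_minutes)
    return ("OK" if actual > SLA_TARGET else "BREACH", round(actual, 3))

print(sla_status(0))   # → ('OK', 100.0)
print(sla_status(5))   # → ('BREACH', 99.167)
```

Note how unforgiving five nines is: a single five-minute outage in a ten-hour window blows the target, which is exactly the kind of insight a dashboard tile makes visible at a glance.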
How can I be more proactive?
In order to answer this kind of question a data warehouse is also needed, because the answer might depend on what’s happened before, whereas operational systems such as Operations Manager are routinely purged of older data to maintain performance. However, writing endless reports and running them to answer a question as vague as this would take too long; what is needed is an interactive way to navigate through the data, to understand trends and discover patterns that might not be immediately apparent. This is the realm of OLAP and data mining, both of which are built into SQL Server Standard edition (which you get with System Center 2012), and there’s an option to use this as part of Virtual Machine Manager 2012 for exactly this kind of analysis.
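As a toy example of the kind of trend analysis a warehouse enables, consider daily disk usage kept long after the operational database would have purged it. The figures and the moving-average window here are made up for illustration.

```python
# Hypothetical warehoused history: daily disk usage (%) over ten days.
daily_disk_usage = [52, 53, 55, 54, 57, 59, 62, 64, 67, 71]

def moving_average(series, window=3):
    """Smooth out day-to-day noise to expose the underlying trend."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

trend = moving_average(daily_disk_usage)

# A steadily rising average is the proactive warning: at this growth
# rate the disk fills up long before any "disk full" alert would fire.
growth_per_day = (trend[-1] - trend[0]) / (len(trend) - 1)
print(round(growth_per_day, 2))   # → 2.0
```

This is trivially simple next to what an OLAP cube or a data-mining model can do, but it captures the shift in question: not "is the disk full now?" but "when will it be?".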
Hopefully that’s got you thinking, but if not let me leave you with a thought and a question