Building Better Service Level Dashboards

Microsoft has added a lot of functionality to SCOM 2012 to make creating dashboards easy. The only problem is that they have given you a blank canvas without much in the way of guidance. This can be great, but it can also be problematic. The fact that you can build a nine-cell grid layout filled with graphs and data doesn't mean that you should.

What you should do is strive to build effective dashboards. What is an effective dashboard? There is no right answer (I am making up the phrase), though I would argue that an effective dashboard is one designed to give insight into a service with a specific audience in mind.

A dashboard that is useful for your engineers or sysadmins is going to (or should) look very different from a dashboard for Tier I support, just as a dashboard for Tier I should look different from a dashboard for non-IT customers. I like to break down service level dashboards into specific subcategories based on audience.

For the sake of this post, let's divide potential dashboards into three groups:

1. Dashboards for non-technical internal clients, often published on an internal SharePoint site.

2. Dashboards for Tier I support and upper IT management, published via a limited-rights login to the SCOM web console.

3. Dashboards for systems engineers and sysadmins.

Obviously this is going to vary greatly depending on what business you are in, but you get the idea.

I think in general we tend to do a pretty good job with groups 1 and 3. Service level dashboards for non-technical internal clients just need to provide basic information: is the service up or down, and, to the best of our monitoring ability, how well are we meeting the SLA?

The out-of-box Service Level Dashboard in SCOM 2012 does this quite effectively.

I say "to the best of our ability" above because, even with synthetic transactions, there is always the possibility that a complex service is degraded or down in some respect without your monitors picking up on it (the Exchange servers are up and running perfectly, but the BES server for your BlackBerry users is down). Or, alternatively, your monitoring picks up a problem but isn't smart enough to correlate it into a change on the dashboard. At best, service monitoring is an evolutionary process, not something you set up and leave alone. IT managers may not want to hear it, but ultimately your ability to track a service depends on the accuracy of your monitors, and building accurate monitors requires iteration and time.

Dashboards for engineers and sysadmins are usually built to very specific requirements (or turn out to be redundant and unnecessary), so they tend not to be a problem either.

Where I see the most potential for people to get into trouble is in creating dashboards for their Tier I support and for senior IT management. The easy answer is to just have them use the simple up/down service level dashboard. The problem is that while this is a perfectly acceptable level of transparency to provide to non-IT clients, it often isn't enough information, especially for the occasional situation when your up/down dashboard says everything is fine and users are calling in with issues.

Below is an example of a dashboard I would create for an e-mail or messaging service for Tier I operators and upper-level IT management that seeks to find the middle ground:

– In the upper left is a state widget. It is pegged to a group containing all servers related to the e-mail service, and it should be made up of more than just Exchange servers; mine contains BES and ISA servers to provide a more complete picture of the health of all related parts (see the PowerShell sketch after this list for a quick way to inspect the same group from the shell). Some would say to build a simple distributed application instead, but that gets troublesome with load-balanced systems, or with systems where a negative status on one member shouldn't roll up to the status of the entire app.

– Upper middle is a Service Level widget tied to the Exchange 2010 Application from the Exchange 2010 MP. It's not perfect, but it does a decent job of showing in general terms when core e-mail functionality is up or down.

– Upper right: an alerts widget scoped to anything related to the health of the servers in the group on the left.

– Middle: a graph of Outlook latency. Honestly, it is unlikely that Tier I is going to glean useful information from this graphic. You can see noticeable shifts if one member of a load-balanced or clustered pair is down (I have), but this mostly falls into the category of "behold the power of pretty graphs." Sometimes it's nice for your Tier I operators and upper IT management to feel empowered, and for whatever reason I have found that pretty graphs can do that even when the viewers don't know exactly what they are looking at.

– Bottom: again, empowerment via pretty graphs.
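
If you want to sanity-check that same group from the command line, the Operations Manager Shell can list its members and their health states. Here is a minimal sketch, assuming SCOM 2012 and a hypothetical group display name of "Email Service Servers":

```powershell
# A minimal sketch, assuming the SCOM 2012 Operations Manager Shell and a
# hypothetical group display name of "Email Service Servers".
Import-Module OperationsManager

$group = Get-SCOMGroup -DisplayName "Email Service Servers"

# A group is itself a monitoring object; its members are exposed as
# related monitoring objects.
$members = $group.GetRelatedMonitoringObjects()

# Show each member's current health state, worst states first.
$members |
    Sort-Object HealthState -Descending |
    Format-Table DisplayName, HealthState -AutoSize
```

This is essentially the state widget in text form, which is handy when you are deciding what belongs in the group in the first place.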

 

Management Pack Tuning: Logical Disk Fragmentation is High

One of the first floods of warnings new SCOM admins often get in their inbox is "Logical Disk Fragmentation is High."

Here is how it usually goes down:
1. You import the Microsoft Windows Server Management packs on Monday.

2. You spend a day or two tuning out the memory and CPU spikes that fall into a range that you would consider noise.

3. By the end of the week you are feeling pretty good about yourself and decide to set up some notification channels and subscriptions for some peace of mind over the weekend.

4. You wake up Saturday morning to find your inbox or console full of Disk Fragmentation warnings.

What have you done wrong? Nothing. By default, the Windows Server MP runs its disk fragmentation check every Saturday at 3:00 AM, so unless you preemptively made the necessary overrides for your environment, you will be treated to this nice little surprise. Certainly not the end of the world, but here is where many admins screw up: SCOM offers the wonderful ability to fix the fragmentation problem in two clicks via the Logical Disk Defragmentation task.
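
If you would rather head off the Saturday surprise entirely, the monitor can be disabled with an override from the Operations Manager Shell before the weekend. Below is a minimal sketch, assuming SCOM 2012, that your version of the Windows Server MP names the monitor "Logical Disk Fragmentation Level," and a hypothetical unsealed override pack named "Windows Server Overrides" that you created beforehand:

```powershell
# A minimal sketch, assuming the SCOM 2012 Operations Manager Shell, that
# the monitor's display name is "Logical Disk Fragmentation Level" in your
# Windows Server MP version, and a hypothetical unsealed override pack
# named "Windows Server Overrides" created ahead of time.
Import-Module OperationsManager

$overrideMp = Get-SCOMManagementPack -DisplayName "Windows Server Overrides"

# The MP ships a copy of the monitor per OS version, so handle each match.
$monitors = Get-SCOMMonitor -DisplayName "Logical Disk Fragmentation Level"

foreach ($monitor in $monitors) {
    # Disable the monitor for every instance of its target class; the
    # override is saved into the unsealed override pack.
    $target = Get-SCOMClass -Id $monitor.Target.Id
    Disable-SCOMMonitor -Class $target -Monitor $monitor -ManagementPack $overrideMp
}
```

Disabling outright is the blunt instrument; in many environments a better override is simply scoping the monitor down to the physical servers where a defrag might actually help.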

You have fragmented disks, and you can defrag them in two clicks; how is this not a good thing?

The first question you have to ask, before even thinking about defragging a server via SCOM, is whether the fragmented server is virtual or physical. If the server is physical, the answer to whether you should defrag is maybe. If the server is virtual, the answer is no.
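
One quick, low-tech way to answer that question is to look at the manufacturer and model strings the hardware (or hypervisor) reports. A minimal sketch; the server name is a placeholder and the match list is illustrative rather than exhaustive:

```powershell
# A minimal sketch: check the reported manufacturer/model before defragging.
# "SERVER01" is a placeholder, and the match strings are illustrative,
# not an exhaustive list of hypervisors.
$cs = Get-CimInstance -ClassName Win32_ComputerSystem -ComputerName "SERVER01"

if ($cs.Model -match "Virtual" -or $cs.Manufacturer -match "VMware|Xen|QEMU") {
    "$($cs.Name) looks virtual - do not defrag."
} else {
    "$($cs.Name) looks physical - defragging is a judgment call."
}
```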

For a good explanation of the implications of defragging a virtual server, I would recommend Cormac Hogan's September 20, 2011 post on the VMware vSphere Blog.

The post is specific to why this is a really bad idea for VMware shops, but much of the reasoning is applicable to any virtualized server.

The key points that apply to any virtualized server are:

1. You are unlikely to see any benefit from defragging virtual disks, since in an enterprise environment multiple VMs generally run on any given datastore/storage pool.

2. If any of your disks are thin provisioned, the defrag process will cause your VMDK/VHD to expand and unnecessarily chew up space. (If you were to defrag a large number of thin-provisioned servers at the same time, you could theoretically cause an outage if any of your datastores/storage pools are oversubscribed.)

3. Defragging creates more I/O, which can cause a temporary drop in performance for the duration of the defrag.

 

On The Importance Of Building Test Environments

One of the things I didn't quite grasp when I first started using SCOM a few years back was the importance of test environments. SCOM was this bright and shiny new tool that was going to help proactively monitor our servers and increase uptime, and as long as I only installed Microsoft-approved management packs, everything would be alright. This was admittedly extremely naive, but it was a good starting point. I was enthusiastic, and fortunate enough to learn that this was a terrible idea long before making a critical mistake.

SCOM is an incredibly powerful tool, but it has to be used and implemented intelligently:

-Installation guides must be read.

-MPs should be evaluated in test or dev environments first (if you don't have a test environment, build one; see the sketch after this list).

-Blogs should be scoured for relevant info.

-Management packs should be installed in production because they provide value, not just because you happen to have the associated product installed.
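
On the second point, the mechanics of evaluating an MP in test first can be as simple as pointing the Operations Manager Shell at the test management group and importing there. A minimal sketch, assuming a hypothetical test management server named "SCOMTEST01" and an illustrative file path:

```powershell
# A minimal sketch, assuming the Operations Manager Shell and a hypothetical
# test management server named "SCOMTEST01". The point is simply that the
# new pack lands in the test management group first, never production.
Import-Module OperationsManager

# Point the shell at the test management group.
New-SCOMManagementGroupConnection -ComputerName "SCOMTEST01"

# Import the candidate pack (the path is illustrative).
Import-SCOMManagementPack -FullName "C:\MPs\Candidate.Management.Pack.mp"

# After letting the pack bake for a while, a quick look at what it has
# been generating: the noisiest open alerts, grouped by name.
Get-SCOMAlert -ResolutionState 0 |
    Group-Object Name |
    Sort-Object Count -Descending |
    Select-Object -First 20
```

Let the pack bake for a week or two, see what it actually generates, and only then decide whether it earns a spot in production.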

Anytime an engineer or admin asks to have a shiny new management pack installed in production and doesn't want to test it first, I remember this slide from a talk I stumbled across from Microsoft's Management Pack University entitled "Getting Manageability Right," given by Nistha Soni, a program manager on the Ops Manager team at Microsoft:

Slide: "Getting Manageability Right," Nistha Soni

The talk was aimed at the different Microsoft product teams, to help them think about how to build better management packs that are useful to their customers. If an MP reduces total cost of ownership, that is a good thing; if it increases TCO, then we have a problem. This slide was referencing an iteration of a Microsoft MP (name omitted to protect the guilty) that provided feedback which, while potentially useful to a developer at Microsoft, was inundating customers and operators with alerts.

Building a useful MP is a delicate balancing act, and it's important to remember that even the ones made by Microsoft are essentially works in progress. Each successive iteration tends to get better, but if you just import into production without testing and research, you are asking for trouble.

The talk itself is an interesting look at how Microsoft thinks about monitoring and building management packs and is still available here.

The contents of this site are provided "AS IS," with no warranties or rights conferred. Example code could harm your environment and is not intended for production use. Content represents point-in-time snapshots of information and may no longer be accurate. (I work @ MSFT. Thoughts and opinions are my own.)