How do I: Create An Advanced SQL Database Backup Monitor In Visual Studio?

My favorite part of my job is getting to work with customers and understanding exactly how they use a product, and how they need it to work. Some of the time this just means listening closely, answering questions, and relaying information back to the product group. But often there are those tiny changes and use cases that fall outside the realm of things the product group is likely to address.

A change that might be hugely valuable for Customer A won't happen if it breaks backwards compatibility for Customers B-Z.

One of my customers has a SCOM environment for SQL that monitors over 20,000 databases. Due to their size there are often cases where the out-of-box monitoring provided by the SQL Product Group's management packs isn't able to completely meet their needs. While the pack provides fantastic monitoring for SQL, there are times where they need a more granular view of the health of their environment.

An instance of this from the past year was monitoring SQL Database backups. The SQL Product Group’s pack gives you the ability to alert if a database hasn’t been backed up in a certain number of days. By default it is set to 7 days, but you can override that to any integer value of days.

For my customer this wasn't good enough. They wanted to be able to drill down and alert on hours since last backup. They also wanted multiple severities, so if it had been 20 hours since a backup an e-mail could go out, but at 30 hours we would generate a page. The 20- and 30-hour thresholds would be customizable at the individual database level, and they also wanted some added logic so that backups would only be checked for databases with a status of "ONLINE". We have other monitors that look at DB status in general, so if a database was OFFLINE they either knew about it from those monitors and were fixing it, or it was intentional, in which case they didn't want a backup alert.

The basic logic behind the SQL PG's MP is a simple T-SQL query wrapped in a fairly complex VBScript. The unwrapped T-SQL is below:

The T-SQL modifications we need to make are relatively simple: swap DAY to HOUR, and add a line to only return database backup info for databases with a status of ONLINE.
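For a rough sense of what the modified query ends up looking like, here is a sketch you can run from the PowerShell ISE. To be clear, this is my own illustration rather than the management pack's actual code: the query shape, the full-backup filter (b.type = 'D'), the tempdb exclusion, and the 'SQLSERVER01' instance name are all assumptions for the example, and Invoke-Sqlcmd needs the SqlServer (or older SQLPS) module. It just demonstrates the two changes: DATEDIFF on HOUR instead of DAY, and only ONLINE databases returned.

# Sketch only - not the SQL MP's actual query. Shows HOUR-based backup age
# and the ONLINE-only filter described above.
Import-Module SqlServer   # provides Invoke-Sqlcmd

$query = @"
SELECT  d.name AS DatabaseName,
        DATEDIFF(HOUR, MAX(b.backup_finish_date), GETDATE()) AS HoursSinceLastBackup
FROM    sys.databases d
        LEFT JOIN msdb.dbo.backupset b
            ON b.database_name = d.name
           AND b.type = 'D'            -- full backups only (illustrative assumption)
WHERE   d.state_desc = 'ONLINE'        -- skip databases that are OFFLINE, RESTORING, etc.
  AND   d.name <> 'tempdb'             -- illustrative assumption
GROUP BY d.name
"@

# 'SQLSERVER01' is a placeholder instance name
Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -Database 'master' -Query $query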

To get this into a working Management Pack is a little bit more complex and requires isolating and cloning the Product Group's Database Backup Monitor in Visual Studio, and then making a few changes to the XML for our custom iteration. To prevent screenshot overload I did a quick step-by-step walkthrough of the process. For this video I opted to leave out the three-state severity request, and will show how to add that functionality in a follow-up video.

If you have any questions or need any help, just leave a comment.

How do I: Create a task that will allow me to bulk adjust a regkey via the SCOM Console

There are lots of ways to adjust reg keys in bulk: SCCM, Group Policy, and remote PowerShell, to name a few.

Occasionally I find that SCOM customers like to have the ability to modify a registry setting via a Task in the SCOM console. This gives them the ability to modify the regkey for a single server, a group of servers, all servers, whatever they want in a matter of seconds without having to rely on outside tools.

Recently I have had a few customers need to adjust the MaxQueueSize reg key for their agents:
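If you just want to peek at an agent's current value before changing anything, a quick read-only check along these lines works (this is just a sketch; the maximumQueueSizeKb property may not be present on every agent, in which case the column will simply be empty):

# List each management group on this agent and its current maximumQueueSizeKb value
$base = 'HKLM:\SYSTEM\CurrentControlSet\services\HealthService\Parameters\Management Groups'
Get-ChildItem -Path $base | ForEach-Object {
    [pscustomobject]@{
        ManagementGroup    = $_.PSChildName
        MaximumQueueSizeKb = (Get-ItemProperty -Path $_.PSPath).maximumQueueSizeKb
    }
}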

This is actually a fairly good, simple MP authoring exercise, so I will quickly walk through the process.

The end design in Visual Studio will look like this:

regkeymp

Easy enough: two Tasks, two Scripts, and the standard out-of-box references – which then generate two tasks in the console:

task

Usually for something like this I like to start with the PowerShell before I break open Visual Studio. It is easier for me to get the script working in the PowerShell ISE and then start a new MP once I know I have the PowerShell working.

For the most part the PowerShell is pretty straightforward. The only complication I ran into in testing was that since some of my customer's agents are multi-homed and some aren't, I needed a way to handle either scenario without erroring out. Handling multiple management groups adds three lines of code to my original script, but it's still not too bad:

# Enumerate every management group this agent reports to (handles multi-homed agents)
$GetParentKey = Get-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\services\HealthService\Parameters\Management Groups'
$MGName = $GetParentKey.GetSubKeyNames()

# Set the queue size to 75 MB (76800 KB) for each management group
Foreach ($Name in $MGName)
{
    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\services\HealthService\Parameters\Management Groups\$Name" -Name 'maximumQueueSizeKb' -Value 76800 -Force
}


To make things as simple as possible in this example I am using hardcoded queue size values: one Task to increase the queue size to 75 MB (76800 KB), and one to set it back to the default of 15 MB (15360 KB).

# Enumerate every management group this agent reports to
$GetParentKey = Get-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\services\HealthService\Parameters\Management Groups'
$MGName = $GetParentKey.GetSubKeyNames()

# Reset the queue size to the default of 15 MB (15360 KB) for each management group
Foreach ($Name in $MGName)
{
    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\services\HealthService\Parameters\Management Groups\$Name" -Name 'maximumQueueSizeKb' -Value 15360 -Force
}

Now that we have the scripts we can open up our copy of Visual Studio with the Visual Studio Authoring Extensions:

File – New Project

newproj

Management Pack – Operations Manager 2012 R2

opsproj

create

We are going to create two folders. These aren't required; I just like adding a little bit of organization rather than dealing with one large .mpx file. Ultimately how you divide things up is somewhat arbitrary and more a matter of personal preference than any specific rules.

To create a folder, right-click MaxQueueSize – Add – New Folder

new-folder

Do this two times. We will create one folder called Scripts and one called Tasks:

scripts

Now we need to populate our Scripts folders with the two PowerShell scripts we wrote in the ISE earlier.

Right-Click the Scripts folder – Add – New Item

addnewitem

PowerShell script file – Name file – Add

powershell-script-file

Now you can paste in the code we wrote in the PowerShell ISE:

increasemaxsize

This takes care of the Increase Max Queue Size PowerShell. Now repeat the steps above for the reset max queue size script:

powershell-scripts

Now we need to populate our Tasks folder.

Right-Click Tasks Folder – Add – New Item

add-new-task

Empty Management Pack Fragment – IncreaseMaxQueueSize.mpx – Add

increasetask

The code for a task that kicks off a PowerShell script is pretty easy:


<ManagementPackFragment SchemaVersion="2.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Monitoring>
    <Tasks>
      <Task ID="Sample.RegKey.IncreaseMaxQueueSize.AgentTask" Accessibility="Internal" Target="SC!Microsoft.SystemCenter.ManagedComputer" Enabled="true" Timeout="300" Remotable="true">
        <Category>Custom</Category>
        <ProbeAction ID="Probe" TypeID="Windows!Microsoft.Windows.PowerShellProbe">
          <ScriptName>IncreaseMaxQueueSize.ps1</ScriptName>
          <ScriptBody>$IncludeFileContent/Scripts/IncreaseMaxQueueSize.ps1$</ScriptBody>
          <SnapIns />
          <Parameters />
          <TimeoutSeconds>300</TimeoutSeconds>
          <StrictErrorHandling>true</StrictErrorHandling>
        </ProbeAction>
      </Task>
    </Tasks>
  </Monitoring>
  <LanguagePacks>
    <LanguagePack ID="ENU" IsDefault="true">
      <DisplayStrings>
        <DisplayString ElementID="Sample.RegKey.IncreaseMaxQueueSize.AgentTask">
          <Name>Max Queue Size Increase</Name>
          <Description>Increase Max Queue Size Regkey to 75 MB</Description>
        </DisplayString>
      </DisplayStrings>
    </LanguagePack>
  </LanguagePacks>
</ManagementPackFragment>


taskxml

You do this for both tasks and associate each with the appropriate PowerShell file.

So Visual Studio will look like this:

regkeymp

And once you build and import the pack you will have two tasks that will show up as options when you are in the Windows Computer State view:

task

If anyone wants these instructions in video form, just post a comment below and I will record a step-by-step video walkthrough.

If the source files or finished MP would be helpful, again, don't hesitate to ask. Just post a comment and I will zip up the files and upload them to TechNet or GitHub.

How do I: Generate a single report of all healthy agents + grey agents + timestamp of last recorded heartbeat?

This week is a training week, which means I have tiny windows of time to catch up on some blogging.

I have had this question a few times over the years. It seems like it should have a straightforward answer, but if there is one, I have not been able to find it.

When customers have asked this in the past I usually refer them to the following three posts:

https://blogs.msdn.microsoft.com/mariussutara/2008/07/24/last-contacted/

http://www.systemcentercentral.com/quicktricks-last-agent-heartbeat/

http://blog.scomskills.com/grey-agents-with-reason-gray-agents/

These do an excellent job in different ways of getting at the question of what agents are greyed out and when did heartbeats stop coming in.

Unfortunately, these do nothing to address the first part of the question: the customer wants all agents, both those that have stopped heartbeating and those that haven't.

This is a little trickier. It is easy enough to get a list of all agents, a list of grey agents, and to query for when health service heartbeat failures occur. But there is nothing easily accessible via the SDK or the DW (at least that I am aware of) that allows us to capture a timestamp for when a non-grey agent's last heartbeat came in.

So my natural question to my customer was: why do you need the healthy agents' heartbeat timestamps? The answer was basically that they want to feed that data into other systems in their org and they don't want to deal with two different lists/files. They want one file, but at the end of the day they don't actually need an exact timestamp for the last heartbeat of a healthy agent.

This makes things a lot easier and lends itself to a relatively simple potential solution:

Import-Module OperationsManager

# Get every agent object in the management group
$Agent = Get-SCOMClass -Name "Microsoft.SystemCenter.Agent"
$MonitoringObjects = Get-SCOMMonitoringObject $Agent

# Build a "current date + time" string to stamp on agents that are still heartbeating
$Date = Get-Date
$DateSString = $Date.ToShortDateString()
$TimeLString = $Date.ToLongTimeString()
$DateTimeCombine = $DateSString + " " + $TimeLString
$UserDesktop = [Environment]::GetFolderPath("Desktop")

function GenerateAgentReport
{
    foreach ($object in $MonitoringObjects)
    {
        $result = New-Object -TypeName PSObject
        $result | Add-Member -MemberType NoteProperty -Name DisplayName -Value $object.DisplayName
        $result | Add-Member -MemberType NoteProperty -Name Agent_Healthy -Value $object.IsAvailable
        if ($object.IsAvailable -contains "True")
        {
            # Agent is still heartbeating - use the current date/time as the last heartbeat
            $result | Add-Member -MemberType NoteProperty -Name LastHeartbeat -Value $DateTimeCombine -PassThru
        }
        else
        {
            # Agent is grey - AvailabilityLastModified approximates when heartbeats stopped
            $result | Add-Member -MemberType NoteProperty -Name LastHeartbeat -Value $object.AvailabilityLastModified -PassThru
        }
    }
}

GenerateAgentReport | Out-GridView
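Since the whole point for this customer was feeding the data into other systems, a file is probably more useful than a grid view. The script already builds a $UserDesktop path but never uses it, so one option (the file name is just an example) is to swap the Out-GridView line for a CSV export:

# Write the report to a CSV on the current user's desktop instead of a grid view
GenerateAgentReport | Export-Csv -Path (Join-Path $UserDesktop 'AgentHeartbeatReport.csv') -NoTypeInformation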

heartbeat

Basically this returns each agent in your management group. If the agent is greyed out, we use the AvailabilityLastModified property to pull an approximate timestamp. If the agent is still heartbeating, as determined by the IsAvailable property, then AvailabilityLastModified isn't going to contain useful information, so in this case we substitute the current date/time for that field, indicating that we have had a successful heartbeat within the past 5 minutes.

I said "approximate timestamp" when referring to agents with an IsAvailable value of false (greyed out agents) because, while in many cases AvailabilityLastModified should correspond to the heartbeat failure that flipped the agent from healthy to critical, that isn't guaranteed. If for some reason the agent was already in a critical state but was still heartbeating, the AvailabilityLastModified property would only capture when the agent went into the critical state, not the moment of the last heartbeat. If you need a more or less exact moment-of-last-heartbeat report, I suggest using one of the links above. But if you need a quick PowerShell report to feed into other systems to help prioritize agent remediation, the above script or some modified form of it might be mildly useful.

How do I: Create an Event View that excludes a particular Event ID

I had a large enterprise customer recently who was monitoring ADFS with the default management pack. They liked being able to glance at the event view, which gave them a single place where they could look at the ADFS events occurring across their environment. They were using this event data as part of their correlation and tuning process to determine if there were additional actionable events that were being missed for their unique infrastructure. The eventual goal was to stop collecting the events altogether and only have alert-generating rules/monitors in place for the patterns of events they cared about.

01

They quickly found that at least for their environment some of the events being collected were essentially noise, and they asked how to adjust the view so it would exclude one particular event.

This is one of those "sounds really easy, and of course the product should do this out of the box" questions that SCOM has never really had a great answer for.

If we take a look at the view it is populated by the following criteria:

02

And if we dig into the corresponding rule that collects the events we find a wildcard regex-style collection rule targeted at the ADFS log:

03

04

05

Since the collection rule is part of a sealed MP, the best we could do at the rule level is shut off this collection rule and create a new collection rule with a modified wildcard expression, such that it would collect everything the old rule did except the event ID the customer doesn't like.
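To make that a little more concrete: the new rule's expression ends up being a regular-expression match on the event ID rather than a simple include/exclude list. A negative-lookahead pattern does the trick, and you can sanity-check it in PowerShell before dropping it into a rule. The event ID 1102 below is purely an example (not the customer's actual event), and exactly where the pattern goes depends on the rule's data source, so treat this as an illustration of the idea rather than the finished configuration:

# Matches every event ID except 1102 (example ID only)
$pattern = '^(?!1102$)\d+$'

# Quick sanity check: 1102 should come back as not collected, everything else as collected
100, 1101, 1102, 1103 | ForEach-Object {
    [pscustomobject]@{ EventID = $_; Collected = ($_ -match $pattern) }
}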

The problem with this solution is that it isn't particularly efficient or self-service friendly. If next week the customer realizes there is an additional event they want excluded, the AD team has to contact the SCOM team and request further modifications.

In an ideal world the exclusion would be possible at the View level, but if you ever dig into modifying the classic OpsMgr views you will find that while you can use wildcards for some fields, like Event Source, to perform exclusions:

06

The same is not true for event IDs, where wildcard exclusions are not allowed:

07

I briefly toyed with the idea of making modifications to the MP at the XML level to allow exclusions as I have occasionally done in the past to hack a subscription into meeting a customer need, but in this case such a solution doesn’t really fit. The customer needed something that was easy for them to change as they gradually winnow down the list of events they see to only the ones they care about.

They needed something that was extremely easy to edit.

Enter PowerShell and the SCOM SDK.

The first solution I put together for them to test was the following:

PowerShell Grid Widget

08

with a Where-Object filter of {$_.Number -ne 31552 -and $_.PublisherName -eq "Health Service Modules"}. I used a SCOM publisher name since I didn't have any ADFS events in my test environment, and I wanted something where I could confirm that the exclusion was working as expected:

11

Everything looked good; the event I wanted excluded was dealt with properly. (The Description dataObject is commented out in the code for this screenshot to make it easier to view. With Description uncommented, each event takes up more lines of screen real estate. I recommend creating two views, one with the description commented out and one with it uncommented, so customers can easily toggle between them.)

12

And if we remove the $_.Number -ne 31552 filter, I get results as below with the event present:

10

In theory this should be all we needed, but when my customer tested out the script nothing happened. After a little bit of head scratching it became apparent what the problem was.

We were calling Get-SCOMEvent | Where-Object

which means we were telling the OpsMgr SDK to please go retrieve every single event in the OpsDB, and then once you are done with that we are going to pipe the results to a Where-Object and tell you what we really need.

In my relatively small test environment this wasn’t that big of an ask and the results returned quickly.

In my customer's environment, with thousands of servers and friendly event-generating MPs like the Exchange 2010 MP, getting every event in the OpsDB was basically a great way to enter an endless loop of dashboard timeouts with nothing ever being displayed.

So we needed to filter things down a bit up front, before piping to the Where-Object.

If you search the blogs you will find that Stefan Stranger has a nice post describing how to deal with this issue when calling the Get-SCOMAlert cmdlet with a Where-Object. Basically you use Get-SCOMAlert -criteria and then pipe to a Where-Object if still needed.
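For comparison, the alert version of that pattern looks roughly like this; the heavy filtering happens server-side in the -Criteria string before anything is piped on (the criteria string and the Where-Object clause here are just examples):

# Server-side filter first (new, critical alerts), then fine-tune client-side if still needed
Get-SCOMAlert -Criteria "ResolutionState = 0 AND Severity = 2" |
    Where-Object { $_.MonitoringObjectDisplayName -like "*SQL*" }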

Unfortunately, Get-SCOMEvent doesn’t have a -criteria parameter because that would make things too easy and intuitive.

It does, however, have a -rule parameter which looked promising:

13

First I tried passing it a rule Name, followed by a second try with a rule GUID for an event collection rule I was interested in. In both cases I got a nice red error message:

14

While a little cryptic, it is saying that I am passing a parameter of type string, and it wants a special SCOM-specific rule type.

To give it what it wants, we need to first retrieve the rule object using the Get-SCOMRule cmdlet and then pass it to Get-SCOMEvent as a variable:

$rule = Get-SCOMRule -DisplayName "Operations Manager Data Access Service Event Collector Rule"

15

$rule = Get-SCOMRule -DisplayName "Operations Manager Data Access Service Event Collector Rule"

Get-SCOMEvent -Rule $rule

16

So our final script would look something like this. (I have added some additional filtering so you can pull just the events from the past hour. *Keep in mind this date/time filtering doesn't increase the efficiency of the script, since it happens client-side in the Where-Object after the events have already been retrieved; the only thing making this script more efficient is that we are only pulling back events collected by a specific rule.*)

$rule = Get-SCOMRule -DisplayName "Operations Manager Data Access Service Event Collector Rule"

$DateNow = Get-Date

#Modify the .AddMinutes below to determine how far back to pull events
$DateAgo = $DateNow.AddMinutes(-60)

#$_.Number -ne (not equals) is used to indicate the event number that you want to exclude from the view
$eventView = Get-SCOMEvent -Rule $rule | Where-Object {$_.Number -ne 17 -and $_.TimeGenerated -ge $DateAgo -and $_.TimeGenerated -le $DateNow} | Select-Object Id, MonitoringObjectDisplayName, Number, TimeGenerated, PublisherName, Description | Sort-Object TimeGenerated -Descending

foreach ($object in $eventView){
     $dataObject = $ScriptContext.CreateInstance("xsd://OpsConfig!sample/dashboard")
     $dataObject["Id"] = [String]($object.Id)
     $dataObject["Event Number"] = [Int]($object.Number)
     $dataObject["Source"] = [String]($object.MonitoringObjectDisplayName)
     $dataObject["Time Created"] = [String]($object.TimeGenerated)
     $dataObject["Event Source"] = [String]($object.PublisherName)
     $dataObject["Description"] = [String]($object.Description)
     $ScriptContext.ReturnCollection.Add($dataObject)
}

And then the ADFS code would look like this, though event 17 was not the event they wanted to exclude:

$rule = Get-SCOMRule -DisplayName "Federation server events collection"

$DateNow = Get-Date

#Modify the .AddMinutes below to determine how far back to pull events
$DateAgo = $DateNow.AddMinutes(-60)

#$_.Number -ne (not equals) is used to indicate the event number that you want to exclude from the view
$eventView = Get-SCOMEvent -Rule $rule | Where-Object {$_.Number -ne 17 -and $_.TimeGenerated -ge $DateAgo -and $_.TimeGenerated -le $DateNow} | Select-Object Id, MonitoringObjectDisplayName, Number, TimeGenerated, PublisherName, Description | Sort-Object TimeGenerated -Descending

foreach ($object in $eventView){
     $dataObject = $ScriptContext.CreateInstance("xsd://OpsConfig!sample/dashboard")
     $dataObject["Id"] = [String]($object.Id)
     $dataObject["Event Number"] = [Int]($object.Number)
     $dataObject["Source"] = [String]($object.MonitoringObjectDisplayName)
     $dataObject["Time Created"] = [String]($object.TimeGenerated)
     $dataObject["Event Source"] = [String]($object.PublisherName)
     $dataObject["Description"] = [String]($object.Description)
     $ScriptContext.ReturnCollection.Add($dataObject)
}

Hopefully this helps save a little bit of time for anyone else who comes across a question like this one.

How do I: Change the default behavior of the DB Mirror Status Monitor

MP Authoring Series (For an explanation of this series read this post first.)

Real World Issue: Customer is seeing a lot of Critical alerts for Mirrored Databases in a Disconnected state, but only Warning alerts for Mirrored Databases in Suspended state. In this customer’s environment brief disconnects are common and not necessarily indicative of a Critical issue, whereas a mirrored database in a Suspended state is always Critical for this customer. The customer wants to swap the default states such that Disconnected will now be Warning and Suspended will be Critical.

There are three Mirrored Database Mirror Status Monitors

01 02
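If you would rather find these monitors from PowerShell than scroll through the console, something like the following will list them; the display-name wildcard is approximate, so adjust it to whatever the monitors are called in your version of the MP:

# List the mirror status monitors by display name (pattern is approximate)
Import-Module OperationsManager
Get-SCOMMonitor -DisplayName "*Mirror*Status*" | Select-Object DisplayName, Enabled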

When we dig into the properties

2.5

We see the three states as well as the corresponding Statuses that map to those states.

03

When we check Overrides we find there is nothing we can override to meet the customer's needs:

04

So to accommodate this request we need to do a little custom authoring in Visual Studio + VSAE.

The process isn't too complex; however, it is much easier to absorb via video than via the many-page article that would result if I tried to document the steps by hand:

Once you have your final successful build you will find your files in the bin\Debug folder of your project:

05

Once you import your custom MP into SCOM you will have a cloned monitor with your modified behavior:

06

 

The contents of this site are provided “AS IS” with no warranties, or rights conferred. Example code could harm your environment, and is not intended for production use. Content represents point in time snapshots of information and may no longer be accurate. (I work @ MSFT. Thoughts and opinions are my own.)