
Best Practices: Building a Home System Center Test Environment (Part I)

Part I (Choosing Hardware, Install Base Windows OS, Install Hyper-V)

I have always found that the vast majority of my learning comes from actually getting to do something. In the world of IT and System Center this means I have to have my own personal test environment where I can experiment, iterate, and try out new ideas. Having a test/dev environment that your company provides is important too, but you also need somewhere to safely try out a crazy idea without breaking the shared test environment that people other than you depend on. Hence the need for a completely autonomous home or portable test environment.

The problem is that while there are tons of fantastic guides for how to install System Center products, there are really no good guides (at least none that I am aware of) that take you through the entire process, which at a high level tends to look like this:

Procure Hardware > Install Base Windows OS > Install Hyper-V > Create New Virtual Machine > Install Base Server OS > Configure VM as Domain Controller > Install SQL > Install System Center product of choice (Basic)

Procure Hardware > Install Base Windows OS > Install Hyper-V > Create New Virtual Machine > Install Base Server OS > Configure VM as Domain Controller > Create New VM > Install SQL > Create New VM > Install System Center product of choice (Advanced)

There are of course ways around this, like hydration kits, which take some pointers to media and a few inputs and then automate the build for you. But if you have never gone through the process from start to finish, something is lost, and when a new version comes out you can't build a test environment until someone else builds a hydration kit for you.

Once you have built a basic standalone System Center test environment, you can experiment with building a distributed one: dedicated VMs for the DC, SQL, and the various System Center roles, etc.

The first thing you need is a computer/server with enough horsepower to handle running a few VMs without breaking a sweat. You can skimp on test environment hardware, but you will pay in the form of slow performance. My theory with test environments is that they should be fast: not necessarily blazingly fast, but preferably faster than whatever prod environment you work with. This may seem counterintuitive, but if I am going to use my test environment productively on a regular basis, I need something that can rapidly spin systems up and down to simulate different scenarios. Your production environment will also be a lot larger than test, so you can achieve fast speeds in test with less robust hardware since your scale is much smaller. Keep in mind, though, that if prod is significantly faster, you will inevitably gravitate towards taking risks in prod that you shouldn't (even with the best change management process in place). If test is faster, it will become your new home.

So you have to start with hardware. If you decide you want to use a laptop, I recommend something that meets the following specs:

Processor: Quad-Core with Hyper-threading (You want at least 8 logical processors)

Memory: 16-32 GB minimum (Yes, this can be kind of expensive to do with a laptop since it often means 8 GB chips. I recommend buying a base model laptop and then upgrading the RAM yourself via Newegg or Amazon.com; it is much cheaper.)

Disk: Minimum of two disks, at least one of which should be an SSD. Depending on how ambitious you are feeling, I recommend going with a laptop that has space for two SSDs (again, buy a base model and buy the drives yourself), then add a CD-ROM hard drive replacement caddy to the mix and purchase a third SSD. I personally use multiple Mushkin Chronos Deluxe 240 GB SATA 6.0 Gb-s 2.5-Inch Solid State Drives (MKNSSDCR240GB-DX), though I have also heard good things about the Samsung Electronics 840 Pro Series 2.5-Inch 256 GB SATA 6GB/s Solid State Drive MZ-7PD256BW.

This may sound like a pretty expensive combo, but it can be done quite affordably. One of my first test systems was an HP Pavilion dv7 notebook. It was quad core with 8 logical processors; I added 16 GB of RAM, three SSDs, and a CD-ROM hard drive adapter, and was safely hovering under $1,500.

(The dv7 is a bit dated now, HP has newer offerings, and you can certainly get the same functionality out of quality systems from other manufacturers like Lenovo, though finding a Lenovo that natively supports two hard drives tends to only happen with their W series laptops, which, while fantastic machines, tend to be a bit too pricey once I start adding in SSDs from Newegg or Amazon.)

If you happen to have a server lying around, or if you are of the savvy EBay persuasion you may decide that you want more power than what your laptop is going to offer. Home servers are awesome, but they are also generally pretty noisy, less kind to your electrical bill, and more likely to cause those close to you in your life to look at you with mild trepidation as you slink down to the basement to tinker with the strange creature that sits next to the washing machine. (I am currently the proud owner of two home servers and am in the market for a third)

Base recommended server specs:

If you are going to have a server, it is best to have one with specs that can support heavier workloads; otherwise you might as well just get a laptop and pimp it out. If you are going with a home server, I cannot stress enough: buy used from reputable sellers. Buying new servers is prohibitively expensive, and while completely advisable for your business, it makes no sense for a home test lab. Here are the base specs of one of my home servers:

Processor: Two quad-core processors with hyper-threading support (8 cores = 16 logical processors)

Memory: The key here is expandability. I bought a used server off eBay that came with 24 GB; I have since upgraded it to 48 GB, will probably upgrade it again in a few months to 64 GB, and it is capable of going up to 96 GB of RAM.

Disk: Enterprise grade storage is absurdly expensive, and a lot of the servers you order on eBay will come with at least two 10K or 15K RPM SAS disks. I set these up as a RAID 0 pair for the OS and then add individual RAID 0 consumer grade 240 GB SSDs. I will usually also have at least one large consumer grade SATA disk. Why RAID 0? This is a test environment, and my concern is speed and agility at the lowest possible cost. If I really want some added redundancy I will add a second server and set up Hyper-V Replica (a quick sketch is below). With that said, not having local mirrors is certainly a calculated risk, but if I can cut the cost of my test lab in half by not having quite as much redundancy, to me that is worth the time I will spend rebuilding from scratch every once in a while.
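
For reference, once a second host is also running Hyper-V, setting up Replica is only a couple of PowerShell commands. Here is a minimal sketch; the host name LAB-HV02, the VM name DC01, and the D:\Replica path are hypothetical placeholders, and Kerberos authentication assumes both hosts are in the same domain (workgroup hosts would need certificate-based authentication instead):

# On the replica (receiving) host: accept inbound replication over Kerberos/HTTP
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"

# On the primary host: enable replication for a VM and kick off the initial copy
Enable-VMReplication -VMName "DC01" -ReplicaServerName "LAB-HV02" -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "DC01"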

*Make sure your server supports hardware virtualization; if it does not, it is not a viable candidate.
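
If you are not sure whether a particular box (laptop or server) qualifies, you can check from any existing Windows install before buying or wiping anything. A quick sketch from an elevated PowerShell prompt; the virtualization-related properties below show up on Windows 8/Server 2012 and later:

# Check core count, whether VT-x/AMD-V is enabled in firmware, and SLAT support
Get-CimInstance Win32_Processor | Select-Object Name, NumberOfLogicalProcessors, VirtualizationFirmwareEnabled, SecondLevelAddressTranslationExtensions

# systeminfo also prints a "Hyper-V Requirements" section at the bottom of its output
systeminfo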

So once you have selected a hardware platform, you need to install a base Windows OS. If you are going with a laptop, you can begin right from inside Windows 8 (client Hyper-V requires Windows 8 Pro or Enterprise; Windows 7 does not include it). If you have a clean-slate server, you can start by loading the server OS there.

Laptop w/Windows 8

Control Panel > Programs and Features > Turn Windows features on or off

Select all items underneath Hyper-V, including Hyper-V Platform. If Platform is greyed out, this usually means that virtualization support is disabled at the BIOS level. If this is the case you will need to reboot your computer and turn on hardware virtualization support. (Each BIOS may have a slightly different name for this; HP uses Virtualization Technology (VTx) to indicate this function. Reboot your computer, go into the BIOS, enable this setting, and then head back to turn on this feature.)

Now you should see Hyper-V Platform as an option

It will search for the required files:

And then it will request a reboot:

Now when you go to Administrative Tools you will have the option Hyper-V Manager:
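
If you prefer the command line, the same thing can be done from an elevated PowerShell prompt on Windows 8 Pro/Enterprise; this is equivalent to ticking the boxes above and will likewise ask for a reboot:

# Enable the Hyper-V platform plus its dependencies and management tools, then reboot when prompted
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All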

Pro Tip *If installing this on a laptop you may notice that your battery life goes down drastically post install. This is a result of how laptop processors and Hyper-V interact. Most laptop processors use some type of speed-step technology to only run the processor at its maximum speed when there is a workload that demands it. This results in a slight performance hit but can provide significant battery savings. You can watch this happening in Task Manager as you launch various programs: your CPU might start at 1.6 GHz, jump to 2.0 GHz, then to 2.8 GHz, then drop down to 900 MHz when idle. Once you install Hyper-V this will cease to occur; if your maximum CPU speed is 2.8 GHz, your CPU will stay pinned at 2.8 GHz. This is not a bug, it is by design, to ensure that Hyper-V functions properly as you provision multiple VMs that make demands on those resources. I accept the fact that if I am using a laptop as a mini Hyper-V test farm it will be a battery hog. I recommend you do the same.*
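
If you want to watch this behavior without keeping Task Manager open, one option (assuming the standard Processor Information performance counters, present on Windows 8/Server 2012 and later) is to sample the CPU's current speed as a percentage of its rated maximum:

# Prints a new sample every second until you press Ctrl+C; a steady ~100 means the CPU is held at full clock
Get-Counter -Counter '\Processor Information(_Total)\% of Maximum Frequency' -SampleInterval 1 -Continuous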

If you are going the server-based test environment route, the basic process is similar but will look a little different. To set up your server you generally want to wipe the box and install the latest version of Server 2012 R2 from installation media. This brings up the question: where do you get the media?

Below are my top recommendations for your home test environment:

MSDN (If you have an MSDN subscription this is a great place to get access to media that is not going to expire after 180 days for testing purposes.)

msdn.microsoft.com

Dreamspark (If you are a student, teacher, or academically affiliated you can get a FREE Microsoft Dreamspark account and are given access to a catalog of Microsoft software. Again, no expiration, but the catalog is not as expansive as MSDN.)

Dreamspark.com

Bizspark (For the more startup/entrepreneurial minded you can get a FREE Bizspark account with a similar catalog of software to Dreamspark)

Bizspark.com

TechNet (Gone are the days of unlimited trials, but if you want to spin up a test environment for 180 days and do a quick rebuild twice a year, this can be a great option too.)

TechNet

You would then choose where you want to get your media from, download the Windows Server 2012 R2 media, burn it to a DVD, boot your server from it, and install.

You will also need the same media for spinning up your VMs.

To add Hyper-V to a server once you have Windows Server 2012 R2 installed, the process is:

Navigate to Server Manager > Add Roles and Features (or Manage > Add Roles and Features):

Click Next

Select Role-based or feature-based installation. (Click Next)

Select your local system and click next:

Select Hyper-V (In this case I already have it installed on this system)

Then click Next; it will prompt you if there are any additional features that need to be installed, and you will need to reboot in order for the installation to complete.
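
As on the client side, you can skip the wizard entirely; on Server 2012 R2 the PowerShell equivalent (run from an elevated prompt) installs the role along with Hyper-V Manager and the Hyper-V PowerShell module, and restarts the box for you:

# Install the Hyper-V role plus management tools and reboot automatically to finish the install
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart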

You now have either a Windows 8 or Server 2012 R2 system with Hyper-V installed, and you are ready to proceed to Part II to begin creating your first virtual machine.


On The Importance Of Building Test Environments

One of the things I didn't quite grasp when I first started using SCOM a few years back was the importance of test environments. SCOM was this bright and shiny new tool that was going to help proactively monitor our servers and increase uptime, and as long as I only installed Microsoft-approved management packs everything would be alright. This was admittedly extremely naive, but it was a good starting point. I was enthusiastic, as well as fortunate enough to learn that this was a terrible idea long before making a critical mistake.

SCOM is an incredibly powerful tool, but it has to be used and implemented intelligently:

-Installation guides must be read.

-MPs should be evaluated in test or dev environments first (if you don't have a test environment, build one).

-Blogs should be scoured for relevant info.

-Management Packs should be installed in production because they provide value, not just because you happen to have the associated product installed.

Anytime an engineer or admin asks to have a shiny new management pack installed in production and doesn't want to test it first, I remember this slide from a talk I stumbled across from Microsoft's Management Pack University, entitled "Getting Manageability Right", given by Nistha Soni, a program manager on the Ops Manager team at Microsoft:

(Slide from "Getting Manageability Right" by Nistha Soni)

The talk was aimed at the different Microsoft product teams, to help them think about how to build better management packs that are useful to their customers. If an MP reduces total cost of ownership, that is a good thing; if it increases TCO, we have a problem. This slide was referencing an iteration of a Microsoft MP (name omitted to protect the guilty) which provided feedback that, while potentially useful for a developer at Microsoft, was also inundating customers and operators with alerts.

Building a useful MP is a delicate balancing act, and it's important to remember that even the ones made by Microsoft are essentially works in progress. Each successive iteration tends to get better, but if you just import into production without testing and research, you are asking for trouble.

The talk itself is an interesting look at how Microsoft thinks about monitoring and building management packs and is still available here.
