Wednesday, May 2nd, 2012
I use a number of different backup products in my SBSfaq.com office environment; one of those is ShadowProtect. My Terminal Server is Windows Server 2008 R2 and it’s been running on top of a Windows 8 Hyper-V server as I test it out. I’ve had ShadowProtect running inside the guest virtual machine to ensure I got a solid backup of my data should something go wrong. Well, this week something went wrong. I had a hardware failure of the server and, just my luck, it’s not covered under warranty and I’m not sure if I will get it repaired or not. My solution is quite simple: take the backups, convert them to a VHD and get them running on a new hardware platform. Not many people know that you can convert a ShadowProtect backup directly into a VHD. This is how you do it.
Note – you can also use this process to convert a ShadowProtect backup to a VMDK or an SPF (ShadowProtect Full Backup) file.
1. Install ShadowProtect onto a machine – it does not have to be licensed as you are only using the conversion tool. Reboot as needed.
2. Open ShadowProtect
3. Select Image Conversion Tool from the Tools menu on the left hand pane
4. Select Next to continue
5. Select Network Locations
6. Fill out the form listing the network location where the images are stored
7. It will now list the volumes that it has found here
8. Select the volume you wish to convert and then select Next to continue. If you have password protected the images, you will now be prompted for the password. Enter it and select Ok to continue
9. You will see a list of all the backup points from which you can restore this system. Select the one you want and then select Next to continue
10. Specify the destination location where you want the VHD file created. Also specify that you want it to create a VHD and select Next to continue
11. It will display a summary screen for you to review. Select Finish for it to start the image creation process.
12. ShadowProtect will now run the job and you can see the status below.
13. The time it takes to complete the job is directly related to the number of files in the image chain – the more files in the chain, the longer it will take.
14. Once the conversion is complete, you will need to do a couple of things. Boot into the ShadowProtect Recovery Environment.
15. Set the partition to be Active as below
16. Select the Boot Configuration Utility from the Tools menu
17. You will see that the BCD Store is broken on this VHD – you can select Auto Repair to fix it
18. This is it fixed!
19. Now boot the Virtual Machine and you should be good… for the most part anyway
A couple of things to be aware of here too…
1. The image you’ve just created does not have the same disk signature as the original disk – therefore if you try to run any backups using ShadowProtect from within this newly created machine, they WILL FAIL. The error message will be something like the screenshot below. The only way to resolve this is to edit the backup job, reselect the volumes you wish to back up and start again with a new full backup image.
2. Given you need to create a new base image, you will also need to prune/delete the older images in the destination folder. This might well affect your archival requirements. Therefore assess if this method is the best way for you before you start!
Thursday, April 12th, 2012
If you are running a Windows 8 VM on your Hyper-V 2008 R2 host, then you really need to check out this blog post. It turns out that there can be some issues with the Windows 8 VM if you don’t have the patch, and it’s likely that most people will be testing Windows 8 on Windows Server 2008 R2 hosts as I have been. Note that Microsoft state that this is a REQUIRED hotfix, which gives you an idea of how critical they see this issue.
Monday, December 19th, 2011
That’s it – it’s done. All virtualised servers for SBSfaq.com (aside from our public website) are now running on Windows 8 Hyper-V.
Yes – I know that Microsoft have said do not run this in production, but I am taking steps to minimise the risk so that I can learn the technology that will be the basis for whatever Microsoft does in the SMB market over the next few years. SBSfaq.com has a couple of servers – there’s our main SBS server, a W2008R2 Terminal Server, a W2008R2/SQL Server, and then a couple of additional servers for monitoring the network and things like that. A few weeks back, I moved all of the servers, with the exception of the SBS server, over to Windows 8 with Hyper-V. Yesterday I moved the SBS server over too, and it’s running faster than ever and has been stable for the past 24 hours. I actually didn’t want to move it just yet, but a sudden bout of hardware instability in the old server forced my hand sooner than I had planned.
Now – I get to play with more cool stuff. I’m going to dig into Hyper-V Replica as a means for helping our clients at Correct Solutions to have a very cool way to have redundancy in their network at an affordable price. I’ll share some of my experiences here as they happen and as I’m permitted by the NDA I’ve signed with Microsoft.
Thursday, December 15th, 2011
Veeam is a product that I’ve been meaning to look at for some time now, since a good friend of mine introduced me to them a while back. One of the things that has stopped me, however, is that I’m a user of Hyper-V and not VMware. Their latest version, v6, now supports Hyper-V, so it’s time to check it out.
I’ve just been alerted to a free NFR offer that they have for anyone who is a Microsoft Certified Professional (MCP).
They are offering a 12 month free license for testing use to MCPs. You can register for your free license here
Check it out – I’ll be playing with this some more over the coming holiday season.
Tuesday, October 4th, 2011
I’ve done a fair bit of work now with Windows Server 2008 and 2008R2 and Hyper-V. One of the key things I’ve looked at is how to correctly size the pagefile for these systems for the host machine. I’ve searched high and low for a real recommendation from Microsoft on this topic and sadly have not been able to find anything concrete to use as a guide.
In short – I recommend a fixed size 4GB pagefile for my Hyper-V host. This recommendation is based on Windows Server 2008 and Windows Server 2008 R2 WITHOUT SP1 with the Hyper-V role enabled. Windows Server 2008 R2 SP1 introduced the concept of dynamic memory for the guest machines and at present I don’t have enough experience with it to make a judgement call. If you want to know why and how I came to that conclusion, read on below.
Ok – so when you can’t find anything concrete, what do you do? You use the knowledge that you have to figure out what seems reasonable and then put that to the test in a real world situation. What follows is the summation of the last few years of running both test and production environments on Hyper-V, and from all experience so far it seems to be very accurate indeed.
So what do we do? First up, I looked at Microsoft’s “normal” recommendation, which is to use 1.2 to 2 times the size of physical RAM for the pagefile on a Windows machine. Microsoft make this recommendation based on the assumption that the total RAM requirement of the programs you run simultaneously on the machine will be well in excess of the physical RAM in the machine. Having a pagefile, either system managed or fixed at a specific size, allows the system to page out sections of RAM that are not being used so that the RAM can be freed up for other programs. This is “common” knowledge in the industry and pretty accurate.
Well then, how does Hyper-V work? Firstly, Windows Server with the Hyper-V role and Microsoft Hyper-V Server both handle RAM in the same way, so for the purposes of this FAQ we’ll treat them as the same. When you create a guest machine in Hyper-V, you allocate a fixed amount of RAM to that virtual machine. Microsoft have hard-coded blocks in place that mean you can never start more virtual machines than you have physical RAM for. Microsoft also recommend you leave at least 512MB to 1GB of free RAM for the base Windows host installation to do its thing, and that you do not run any other programs or services on the Hyper-V host, thereby limiting the amount of RAM they can consume. If we limit the programs that we run on the host machine, that also limits the amount of pagefile we will need.

I figured some time back that it was pretty safe to allocate a 4GB fixed pagefile, as in reality nothing on the Hyper-V host should really be being paged out to it in the first place. Based on perfmon indicators on my Hyper-V hosts this is pretty accurate – the only time the pagefile gets hit is when I’m running other programs on the Hyper-V host itself. Therefore I’ve come to the conclusion that 4GB is more than enough for me. Could I allocate less? Yes, probably, but disk is pretty cheap and 4GB is nothing to worry about. On the other hand, had I left it as a system-managed pagefile, Windows would start by allocating at least the physical RAM amount of space for the pagefile – which on some of my systems is 64GB+ – and that just does not make sense to me.
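To make the arithmetic concrete, here’s a minimal sketch comparing the two approaches. The helper names and the 1.5x factor are my own illustration for this post, not anything from Microsoft:

```python
# Compare pagefile sizing approaches for a Hyper-V host.
# Illustrative only - the helper names are invented for this sketch.

def conventional_pagefile_gb(physical_ram_gb: int, factor: float = 1.5) -> float:
    """The 'normal' 1.2x-2x rule of thumb for a general-purpose server."""
    return physical_ram_gb * factor

def hyperv_host_pagefile_gb() -> int:
    """Fixed size for a dedicated Hyper-V host: guest RAM is allocated
    up front and is not paged by the host, and the parent partition
    runs almost nothing, so a small fixed pagefile is plenty."""
    return 4

ram_gb = 64  # one of my larger hosts
print(f"Rule of thumb (1.5x): {conventional_pagefile_gb(ram_gb):.0f}GB")
print(f"Fixed Hyper-V host pagefile: {hyperv_host_pagefile_gb()}GB")
```

On a 64GB host the rule of thumb would have the pagefile starting at 96GB or more of disk, versus a flat 4GB – which is the whole point of fixing it.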
That’s what I’m doing – how about yourself – do you have any thoughts to contribute?
Thursday, June 23rd, 2011
If you, like me, have little experience with Hyper-V, clustering and iSCSI, then you will appreciate how much “fun” can be had getting it all working. Thankfully, I stumbled across this blog post from Ben Armstrong and a further post from Aidan Finn with a detailed step-by-step guide on how to use the Microsoft iSCSI Software Target to create a Hyper-V cluster.
If you need to get into iSCSI and Hyper-V clustering then check out these two guys’ blogs – there’s a heap of good information there.
Wednesday, June 22nd, 2011
Whilst digging for PowerShell scripts for Hyper-V, I found this very cool resource on the Microsoft TechNet Wiki site so I thought I would share it here.
The Wiki has a load of scripts that will do basic things as well as more complex items such as compacting VHDs. Basically I’m looking to put a few scripts together to enable me to quickly create entire VMs for testing purposes and it looks like I will be able to do it using the knowledge on this wiki.
Tuesday, June 7th, 2011
This is a cool tool I saw some time back but never really used. It’s a Windows Gadget that can sit on your desktop and monitor your Hyper-V servers. From here it can show you the status of the machines on each Hyper-V host, along with CPU utilisation. Below you can see the virtual guests that are stopped as well as the ones that are running.
You can also mouse over one of the machines, as below, to perform some basic control over it and, if you want, open a remote console to it – all without the need to start the Hyper-V MMC. Very cool indeed.
You can get more info and download this gadget from the developer here
Monday, March 21st, 2011
Over the weekend, I spent some time rejigging the disk structure in my R&D Hyper-V server. I used StorageCraft ShadowProtect to back up the entire server as part of a regular backup process. I installed all the new disks into the server and then proceeded to restore all the partitions. Because I was not moving to new server hardware, I did not do a Hardware Independent Restore – just a restore and then a reboot, as that was all that was needed.
When I restored the server, it booted just fine, but I found that the Hyper-V machines would not start. Further digging into the situation showed the error below in the System Event Log
On Windows Server 2008 R2, there is normally a 100MB partition at the start of the first physical drive that contains the C: drive. This partition is used to store the Boot Configuration Data (BCD) as well as other critical boot files. As part of my ShadowProtect backup, I do not back up this partition at all, as ShadowProtect cannot restore it.
I did some digging and found that Microsoft have some info on this error here, and it tracks back to the fact that I no longer have the original 100MB boot partition with its original Boot Configuration Data. The BCDEdit command can be used to modify this information.
If you use the BCDEdit /enum command you can get a picture of what is in the Boot Configuration Data store, as you can see below on this machine.
You will note there is no mention of the hypervisor.
To fix this use the following procedure
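In my case the procedure boiled down to a single BCDEdit command to recreate the hypervisor entry. A sketch of what to run from an elevated command prompt on the restored server ({current} assumes you booted from the default entry – check the linked Microsoft article for your exact situation):

```shell
rem Tell the boot loader to start the hypervisor at boot.
bcdedit /set {current} hypervisorlaunchtype auto

rem Reboot for the change to take effect.
shutdown /r /t 0
```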
After you’ve done that, if you run the BCDEdit /enum command again, you will see that you now have a hypervisor entry.
Hope this helps someone else caught in this situation.
Wednesday, August 25th, 2010
The second session for today is the Hyper-V 2008 R2 SP1 Dynamic Memory session by Ben Armstrong. This is actually really cool technology that will allow Hyper-V to allocate RAM dynamically to virtual machines as it’s needed. What this means to us is that we can configure a VM to use, say, 2GB of RAM, but it will only consume that 2GB when it really needs it. Similar to the CPU allocation that we have now, this feature really gives us the chance to add more virtualisation instances to a given host than ever before. Very cool technology indeed. Normally this sort of technology is only available in the Enterprise editions of Windows Server; the good news is that it will be available on ALL editions of Windows Server 2008 R2 and Hyper-V Server 2008 R2 once Service Pack 1 is released and applied to those systems.