Wednesday, May 2nd, 2012
I use a number of different backup products in my SBSfaq.com office environment; one of them is ShadowProtect. My Terminal Server runs Windows Server 2008 R2 and it’s been running on top of a Windows 8 Hyper-V server while I test it out. I’ve had ShadowProtect running inside the guest virtual machine to ensure I get a solid backup of my data should something go wrong. Well, this week something went wrong. I had a hardware failure on the server and, just my luck, it’s not covered under warranty, so I’m not sure if I will get it repaired or not. My solution is quite simple: take the backups, convert them to a VHD and get them running on a new hardware platform. Not many people know that you can convert a ShadowProtect backup directly into a VHD. This is how you do it.
Note – you can also use this process to convert a ShadowProtect backup to a VMDK or a SPF (ShadowProtect Full Backup) file as well.
1. Install ShadowProtect onto a machine – it does not have to be licensed as you are only using the conversion tool. Reboot as needed.
2. Open ShadowProtect
3. Select Image Conversion Tool from the Tools menu on the left hand pane
4. Select Next to continue
5. Select Network Locations
6. Fill out the form listing the network location where the images are stored
7. It will now list the volumes that it has found at that location
8. Select the volume you wish to convert and then select Next to continue. If you have password protected the images, you will now be prompted for the password. Enter it and select Ok to continue
9. You will see a list of all the backup points that you can restore this system from. Select the one you want and then select Next to continue
10. Specify the destination location where you want the VHD file created. Also specify that you want it to create a VHD and select Next to continue
11. It will display a summary screen for you to review. Select Finish for it to start the image creation process.
12. ShadowProtect will now run the job and you can see the status below.
13. The time it takes to complete the job is directly related to the number of files in the image chain. The more files in the chain, the longer it will take.
14. Once the conversion is complete, you will need to do a couple of things. Boot into the ShadowProtect Recovery Environment.
15. Set the partition to be Active as below
16. Select the Boot Configuration Utility from the Tools menu
17. You will see that the BCD Store is broken on this VHD – you can select Auto Repair to fix it
18. This is it fixed!
19. Now boot the Virtual Machine and you should be good… for the most part anyway
A couple of things to be aware of here too…
1. The image you’ve just created does not have the same disk signature as the original disk. Therefore, if you try to run any backups using ShadowProtect from within this newly created machine, they WILL FAIL. The error message will be something like the screenshot below. The only way to resolve this is to edit the backup job, reselect the volumes you wish to back up, and start again with a new full backup image.
2. Given you need to create a new base image, you will also need to prune/delete the older images in the destination folder. This might well affect your archival requirements. Therefore assess if this method is the best way for you before you start!
Thursday, April 12th, 2012
If you are running a Windows 8 VM on your Hyper-V 2008 R2 host, then you really need to check out this blog post. It turns out that there can be some issues with the Windows 8 VM if you don’t have the patch, and it’s likely that most people will be testing Windows 8 on Windows Server 2008 R2 hosts as I have been. Note that Microsoft state that this is a REQUIRED hotfix, which gives you an idea of how critical they consider this issue.
Tuesday, October 4th, 2011
I’ve done a fair bit of work now with Windows Server 2008 and 2008R2 and Hyper-V. One of the key things I’ve looked at is how to correctly size the pagefile for these systems for the host machine. I’ve searched high and low for a real recommendation from Microsoft on this topic and sadly have not been able to find anything concrete to use as a guide.
In short – I recommend a fixed-size 4GB pagefile for my Hyper-V host. This recommendation is based on Windows Server 2008 and Windows Server 2008 R2 WITHOUT SP1 with the Hyper-V role enabled. Windows Server 2008 R2 SP1 introduced the concept of dynamic memory for the guest machines and at present I don’t have enough experience with it to make a judgement call. If you want to know why and how I came to that conclusion, then read on below.
Ok – so when you can’t find anything concrete, what do you do? You use the knowledge that you have to figure out what seems reasonable and then put that to the test in a real-world situation. The conclusions below are the summation of the last few years of running both test and production environments on Hyper-V, and from all my experience so far they seem to be very accurate indeed.
So what do we do? First up, I looked at Microsoft’s “normal” recommendation, which is to use 1.2 to 2 times the size of physical RAM for a pagefile on a Windows machine. Microsoft make this recommendation based on the assumption that the total RAM requirement of the programs you run simultaneously on the machine will be well in excess of the physical RAM in the machine. Having a pagefile, either system managed or fixed at a specific size, allows the system to page to disk sections of RAM that are not being used, so that they can be freed up for other programs to use. This is “common” knowledge in the industry and pretty accurate.
Well then, how does Hyper-V work? Firstly, Windows Server with the Hyper-V role and Hyper-V Server both handle RAM in the same way, so for the purposes of this FAQ we’ll treat them as the same. When you create a guest machine in Hyper-V, you allocate a fixed amount of RAM to that virtual machine. Microsoft recommend, and have hard-coded limits in place to ensure, that you can never start more virtual machines than you have physical RAM for. Microsoft also recommend you leave at least 512MB to 1GB of free RAM for the base Windows host installation to run its own processes. In addition, Microsoft recommend that you do not run any other programs or services on the Hyper-V host, thereby limiting the amount of RAM they can consume.

If we limit the programs that we run on the host machine, that also limits the amount of pagefile we will need. I figured some time back that it was pretty safe to allocate a 4GB fixed pagefile, as in reality nothing on the Hyper-V host should really be paged out to it in the first place. Based on perfmon indicators on my Hyper-V hosts this is pretty accurate, and the only time the pagefile gets hit is when I’m running other programs on the Hyper-V host itself. Therefore I’ve come to the conclusion that 4GB is more than enough for me. Could I allocate less? Yes, probably, but disk is pretty cheap and 4GB is nothing to worry about. On the other hand, had I left it as a system-managed pagefile, Windows would start by allocating at least the physical RAM amount of space for the pagefile – which on some of my systems is 64GB+ – and that just does not make sense to me.
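The arithmetic behind that choice can be sketched roughly as below. The 64GB figure is just an illustration matching my larger hosts; the 1.2–2x multipliers are the classic general-purpose guidance discussed above, not anything Hyper-V specific.

```shell
#!/bin/sh
# Rough comparison of pagefile sizing approaches on a Hyper-V host.
ram_gb=64                       # illustrative: physical RAM in the host

# Classic general-purpose server guidance: 1.2 to 2 times physical RAM.
classic_min=$(( ram_gb * 12 / 10 ))
classic_max=$(( ram_gb * 2 ))

# Hyper-V host: guest RAM is never paged by the host, so only the parent
# partition's own processes (512MB-1GB plus any tools) need pagefile cover.
fixed_gb=4

echo "Classic rule:      ${classic_min}-${classic_max} GB"
echo "Fixed Hyper-V host: ${fixed_gb} GB"
```

On a 64GB host the classic rule would reserve 76–128GB of disk for the pagefile, versus a flat 4GB, which is the gap that makes the fixed size worthwhile.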
That’s what I’m doing – how about yourself – do you have any thoughts to contribute?
Friday, September 23rd, 2011
Ok – this is a cool send-up video from Microsoft relating to cloud technology, but it could easily also be of your average SMB IT reseller out there (not the SMB IT Professionals though). You know – the guy operating out of the boot of his car, offering the lowest cost for services that barely exist… yeah – those guys.
Anyway – take a look and have a laugh
Thursday, June 23rd, 2011
If you, like me, have little experience with Hyper-V, clustering and iSCSI, then you will appreciate how much “fun” we can have getting it all working. Thankfully, I stumbled across this blog post from Ben Armstrong, and a further blog from Aidan Finn with a detailed step-by-step guide on how to use the Microsoft iSCSI Software Target to create a Hyper-V cluster.
If you need to get into iSCSI and Hyper-V clustering then check out these two guys’ blogs – there’s a heap of good information there.
Wednesday, June 22nd, 2011
Whilst digging for PowerShell scripts for Hyper-V, I found this very cool resource on the Microsoft TechNet Wiki site so I thought I would share it here.
The Wiki has a load of scripts that will do basic things as well as more complex tasks such as compacting VHDs. Basically, I’m looking to put a few scripts together to enable me to quickly create entire VMs for testing purposes, and it looks like I will be able to do it using the knowledge on this wiki.
Tuesday, June 7th, 2011
This is a cool tool I saw some time back but never really used. It’s a Windows gadget that can sit on your desktop and monitor your Hyper-V servers. From there it can show you the status of the machines on each Hyper-V host, along with CPU utilisation. Below you can see the virtual guests that are stopped as well as the ones running.
You can also mouse over one of the machines as below to perform some basic control over it and, if you want, open a remote console to it – all without the need to start the Hyper-V MMC. Very cool indeed.
You can get more info and download this gadget from the developer here
Friday, May 27th, 2011
StorageCraft last week released a new version of ShadowProtect for virtualised server environments. This new version is specifically tied to the virtualisation platforms from Microsoft, VMware, Citrix and Oracle. The retail price for a 3 virtual server pack is the same as the price for a single physical license. Great price point for sure. The virtual server edition installs into the virtual guest machines and looks for the specific virtualisation environment in order to complete its backup jobs. If it does not find one then it won’t back up – fair enough, as this prevents resellers or end users from misusing the license model. They also have pricing for virtualised desktop environments as well.
I’ve been using ShadowProtect on my virtual servers running under both VMware and Windows Server 2008 and 2008 R2 with Hyper-V for some time, and it’s performed quite well for me indeed.
If you want to find out more about the new virtual edition, contact your local StorageCraft office.
Disclaimer: I previously held a role for 11 months with StorageCraft APAC as their General Manager for Technical Services.
Monday, March 21st, 2011
Over the weekend, I spent some time rejigging the disk structure in my R&D Hyper-V server. I used StorageCraft ShadowProtect to backup the entire server as part of a regular backup process. I installed all the new disks into the server and then proceeded to restore all the partitions. Because I was not moving it to new server hardware, I did not do a Hardware Independent Restore at all – just a restore and then a reboot as that was all that was needed.
When I restored the server, it booted just fine, but I found that the Hyper-V machines would not start. Further digging into the situation showed the error below in the System Event Log
On Windows Server 2008 R2, there is normally a 100MB partition at the start of the 1st physical drive that contains the C: drive. This partition is used to store Boot Configuration Data (BCD) information as well as other critical files. As part of my ShadowProtect backup, I do not backup this partition at all as ShadowProtect cannot restore it.
I did some digging and found that Microsoft have some info on this error here and it tracks back to the fact that I no longer have the original 100MB boot partition with its original Boot Configuration Data information. The BCDEdit command can be used to modify this information.
If you use the BCDEdit /enum command you can get a picture of what is in the Boot Configuration Data store as you can see below on this machine.
You will note there is no mention of HyperVisor.
To fix this use the following procedure
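In case the screenshot doesn’t survive, the core of the fix is the standard BCDEdit command that re-adds the hypervisor launch setting to the restored BCD store. This is a sketch of the usual approach rather than my exact keystrokes; run it from an elevated command prompt on the restored server.

```shell
:: {current} refers to the boot entry for the OS you booted into.
:: "auto" tells the boot loader to launch the hypervisor at startup.
bcdedit /set {current} hypervisorlaunchtype auto

:: Reboot afterwards for the change to take effect.
```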
After you’ve done that, if you run the BCDEdit /enum command again you will see that you now have a HyperVisor section.
Hope this helps someone else caught in this situation.
Tuesday, October 12th, 2010
Many of my readers will know that I am very wary of just following everyone else into the cloud. I’ve spoken a few times now both in public presentations and on this blog about how the cloud is something to be cautiously evaluated and considered before jumping into it. Here then is an upside that I’ve personally experienced with the cloud.
This website SBSfaq.com is hosted on a virtualised server with a cloud vendor – VMVault. I decided to go this path after looking carefully at the options, as I needed to downscale my costs for hosting. I previously had a 1/2 rack of space in a datacentre in Sydney which was costing me over $1000 per month. I had a number of servers in that rack that I used for various functions, and one of them hosted SBSfaq.com and other related websites. Around 12 months ago I decided that I needed to change things. I decided that I could bring most of the gear back in house for my testing and demos, and have a single server “out there in the cloud” to host my websites. I talked to members of the SMB IT Professionals group in Sydney and they recommended VMVault.
I checked out their infrastructure and found they had clustered VMware-based systems with multiple SANs, etc. Good enough for me. I figured I’d set up a virtualised Windows 2008 R2 server and then take backups back down to home. Also good enough for me. What I didn’t know at the time, though, was that they do a daily backup of my server to the SAN (that I can access) and a quarterly DR test of MY VIRTUALISED server as part of the service they offer. Each quarter they perform the DR test and then send me the following email.
We are pleased to advise that we have performed the quarterly Disaster Recovery test of your server, SFQ-VM01, and this has restored successfully to an alternate VMware host, on an alternate SAN LUN. We have been able to successfully boot the server in an isolated environment, and the server has correctly shown the Windows CTRL-ALT-DEL login screen.
As we do not have access inside your VM, we are unable to confirm the validity of data with the VM, so if you would like access to this recovered server to verify the integrity of data inside the recovered VM please let us know within 48 hours. Otherwise we will assume that you are happy with the test that we have performed for this quarter.
If you have any questions, please feel free to email or call me.
Senior Systems Consultant
BInf Tech (Data Comms/S’ware Eng), VCP, CCNA, MCP, DCNE, AACS
VMvault – Secure Hosted Virtualisation
p: 1300 513 262
How cool is that? I don’t have to worry so much about DR now and I can focus on producing great content for the website.
Hang on though… what happens if VMVault disappears, goes bankrupt or suffers a major outage that their own DR plan does not cater for?
Easy… I’ve got my own backups happening. I’ve got ShadowProtect installed on my virtual server and it images to their SAN right now. Shortly I’ll have it commence offsite replication back to my home office, where I can then virtual boot it if I need to do any testing or DR work that is independent of VMVault.
VMVault costs me around $300/month for my virtualised server – a fraction of the cost of what I had before and I really have zero maintenance of the server infrastructure. I’m loving it and I’m really happy with my decision to have my servers with Radek and the team at VMVault.