Tuesday, November 27th, 2012
You will want to check this out – it’s a video by StorageCraft Australia guru Jack Alsop on how to configure ShadowProtect correctly to work with Exchange servers.
Thursday, October 11th, 2012
We pick up a lot of clients that have had bad experiences with other IT resellers. As a result we get to see some pretty strange configurations people have made which adversely affect the client’s environment. Here are a few that I’ve seen with ShadowProtect that really should NOT be done.
Running missed jobs automatically – ShadowProtect has an option to run a missed job automatically when the ShadowProtect services start, and another to auto execute missed tasks. In principle this sounds like a good idea, but in practice it can be a problem. Take the example of a server that has “other issues” not related to ShadowProtect – this server would reboot with a BSOD not caused by StorageCraft at all (I’ll do another blog post on that later). The server would crash in such a way that many things stopped working correctly, and then finally it would BSOD. When it rebooted, along with the normal startup load on the server, it would also trigger the backup jobs it had missed, which further increased the load. The end result is that the server is brought to its knees for quite some time after it comes up, as the disks get hammered like crazy. There are two things you need to do to rectify that. Firstly, under Options > Agent Options, look at the “Start Backup Delay” option – its default is 60 seconds, meaning ShadowProtect will start any pending backups 60 seconds after its services start up. I recommend changing it to 300 seconds (5 minutes). This gives the server a chance to start up and settle before you allow any backup jobs to begin.
The second option I HIGHLY recommend you NEVER use is the “Auto execution of unexecuted task” option. This is specified in the Advanced Options of each individual backup job. If you enable it, ShadowProtect will automatically run whatever backups were missed when the system restarts. Take the example above of the server that would BSOD – within 60 seconds of starting up, it would run the missed backup jobs, placing EVEN MORE load on the server and causing still more problems. Note that this feature runs missed backups regardless of the reason they were missed, so if your server is shut down for other maintenance, a power outage etc., it will also automatically run backup jobs after it comes back up.
The third option I suggest you NEVER use is “Enable concurrent task execution” – this allows multiple jobs to run AT THE SAME TIME. That might be fine if you have a massively high performance disk array, but it’s not something you want on the majority of servers, as it will cause the server to freeze at times. The end result is that the client will likely power the server off, which is even worse.
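The three settings above boil down to one startup policy: wait before doing anything, don’t replay missed jobs, and never run jobs in parallel. As a way of making that concrete, here is a small sketch of that policy – this is not ShadowProtect’s real scheduler, just an illustration of the recommended behaviour:

```python
def startup_plan(missed_jobs, start_delay_s=300, auto_execute_missed=False,
                 concurrent=False):
    """Sketch of the recommended agent settings (illustration only):
    wait out the start delay, drop missed jobs, run one job at a time."""
    jobs = list(missed_jobs) if auto_execute_missed else []
    # With concurrency off, each job gets its own serial batch.
    run_order = [jobs] if (concurrent and jobs) else [[j] for j in jobs]
    return {"wait_seconds": start_delay_s, "run_order": run_order}
```

With the defaults shown, a server that missed two backups while it was down simply waits five minutes and runs nothing extra, which is exactly what you want on a box that has just crashed.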
Bad Disk Configuration – this is something we see often. An example is a server with an HP B110i SATA RAID controller with NO additional caching. This server had 2 x 2TB drives in a RAID 1 configuration and no cache at all on the controller. Furthermore the server was configured with 3 partitions on the disks, which is normally not an issue either. What compounded the issue further still was that a single backup job had been created to back up all three partitions at once. This is a big mistake on a machine configured like this. You can see in the log below that the backup starts at 8:05 in this instance, and it takes 4 minutes and 21 seconds to do the VSS snapshot – during this time the server effectively pauses many of its operations so that it can snapshot the current state of the system. Why does it take so long? Because it’s trying to snapshot 3 partitions of information back to itself, so the disk IO is massively overloaded.
Solutions here could include splitting the single backup job into separate jobs per partition (staggered so they don’t overlap), and adding a cache module to the RAID controller.
BSOD or hard powering off of the server – at all costs you need to avoid this. StorageCraft has some cool methods to track the changes made to the disks while the server is running (I won’t go into them here), but these methods rely on the server being shut down correctly rather than crashing. If the server is powered off correctly, ShadowProtect can use that tracking to create the image using a VDIFF, which is very fast and the best way to minimise the backup time. You can see above that the backup shown is being created by VDIFF from the line “image will be created by VDIFF” – this tells you it’s the fastest way StorageCraft can do the image. If however the server BSODs, or you power it off with the power button, the information StorageCraft needs to create the VDIFF backup is lost. It instead reverts to a DiffGen. You can see this in the log file below: “Incremental image will be created from existing full by using DiffGen”
When an image is created by DiffGen, ShadowProtect compares the existing backup image chain with what is on the disk right now to determine what has changed, and then creates an incremental backup. This process is EXTREMELY slow, as it has to read and compare each sector across the entire image chain. If you happen to be watching the time remaining or speed counters within ShadowProtect while it’s doing a DiffGen, you will see them fluctuate wildly as it works over the image chain. Once this backup has completed, however, it will revert to doing VDIFF backups once again.
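The VDIFF/DiffGen difference above is easy to see in a sketch. Neither function reflects the real on-disk formats – they just show why one method reads every sector and the other reads only the tracked ones:

```python
def diffgen(previous, current, sector_size=4096):
    """DiffGen-style incremental (illustration only): the crash destroyed
    the change-tracking data, so every sector of the old and new disk
    states must be read and compared - hence the very slow backup."""
    changed = {}
    for off in range(0, len(current), sector_size):
        if previous[off:off + sector_size] != current[off:off + sector_size]:
            changed[off] = current[off:off + sector_size]
    return changed


def vdiff(current, tracked_offsets, sector_size=4096):
    """VDIFF-style incremental: the driver already knows which sectors
    changed, so only those are read - no full comparison pass at all."""
    return {off: current[off:off + sector_size] for off in tracked_offsets}
```

Both produce the same incremental in the end; the difference is that `diffgen` touches every sector of every volume, while `vdiff` touches only the handful that actually changed.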
Now – imagine if you will a client’s server with pretty much ALL of the above issues at once… yeah… you get the idea. It’s pretty easy to configure a great product like ShadowProtect VERY badly and create a massively bad impression on the client. From what I’m told, the previous IT provider blamed the product for all of these failings. Funny though that once we resolved them as best we could, the server performed much better (still not great due to the disk configuration) and was much more stable too!
Wednesday, August 29th, 2012
StorageCraft have this week released a new version of ShadowProtect. Now I’m testing it on a few of our test and production systems before we launch into an upgrade across all of our client sites. Here’s how I do it.
First up – review the release notes. It looks like they are putting them here now. You want to verify what is being fixed to know if this is something you want to apply to your systems. It also gives you ideas on areas that you will want to specifically test after the upgrade.
Also – note that you can only do an in place upgrade from version 4.1.5 onwards. If you have a version prior to 4.1.5, then you need to uninstall, reinstall and reconfigure the product to get to the latest version.
Now disable any backup jobs you have on the system you are going to upgrade as below.
Next, disable any antivirus software on your system – this ensures nothing prevents the installer from replacing the critical files it needs to replace.
After that it’s a simple matter of running the installer and clicking your way through it as you can see with the screenshots below.
You might wonder what the difference is between English and English (Australia, New Zealand) – long story short, the Aussie StorageCraft guys have a cool utility called SPDiagnostics that is installed with the Aussie/NZ edition – I highly recommend it.
The screen above is an additional license agreement for SPDiagnostics, as it comes from a different team within StorageCraft.
After the reboot – verify the version as below.
Then enable the job and do an incremental backup to make sure it’s all good.
Tuesday, August 28th, 2012
Here’s a few tips for troubleshooting backup failures when using StorageCraft ShadowProtect that I’ve developed.
Clarify the exact error message – you can get this from the backup job itself: select the backup job and then select the Details button.
Verify the existing backup image – when doing continuous incremental backups, the next backup relies not only on the base image being accurate, but on the entire image chain being accurate and verifiable. You should run a Verify Image job from the Tools menu in the ShadowProtect console. When you do this, ensure you verify the entire image chain from the newest to the oldest image. It can take a while to run, but once done it has verified not only the image, but also the access to that entire image chain. If verification fails, then you either have a corrupt image or the access to the image is not what ShadowProtect needs to work correctly.
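To show what “verify the whole chain, newest to oldest” means in practice, here is a minimal sketch. ShadowProtect’s own Verify Image job does more than this (it checks the images’ internal checksums); this illustration only covers the access side – every file in the chain must be present and fully readable:

```python
import hashlib
import os

def verify_chain(paths):
    """Walk an image chain newest-to-oldest and confirm every file is
    present and fully readable.  Returns (ok, first_bad_path)."""
    for path in sorted(paths, reverse=True):  # newest first by name
        if not os.path.exists(path):
            return False, path
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                digest.update(block)  # forces a full read of the file
    return True, None
```

A single missing or unreadable SPI anywhere in the chain is enough to fail the whole verification, which mirrors how a continuous incremental chain really behaves.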
Collect Log files – if you can’t resolve the issue then you need to collect the relevant log files. They are located in the C:\Program Files (x86)\StorageCraft\ShadowProtect\Logs folder – zip this up and provide it to support so that they can assist you further.
SPDiagnostics – this is a tool located in the C:\Program Files (x86)\StorageCraft\ShadowProtect\SupportOnly folder. Run the runsupportmode.cmd file and it will package up the log files as well as review your system for known issues.
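If you prefer to script the log collection step yourself rather than run SPDiagnostics, zipping the Logs folder is a one-liner with the standard library. A minimal sketch, using the Logs path from the tip above:

```python
import os
import zipfile

def zip_logs(log_dir, out_zip):
    """Recursively zip everything under log_dir into out_zip so the
    whole Logs folder can be sent to support as one attachment."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(log_dir):
            for name in files:
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, log_dir))

# e.g. zip_logs(r"C:\Program Files (x86)\StorageCraft\ShadowProtect\Logs",
#               "shadowprotect-logs.zip")
```

Write the zip somewhere outside the Logs folder itself so it doesn’t get swept up in its own archive.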
Wednesday, August 15th, 2012
I decided to do some very basic performance testing of non Windows based NAS devices that I’ve got here in my lab / production network. The aim is not to come up with a winner or loser, but to understand what levels of performance I might expect from these NAS devices. I’ll break this testing up into two categories, and tonight’s blog post will feature the non Windows based NAS devices I have. A future blog post will focus on Windows based NAS devices as well as Windows based servers.
Currently I’ve got the following devices in my environment.
Now I’m going to start off by saying that these tests are NOT equal. Aside from the source machine being my desktop PC, and using ShadowProtect to back up the C: drive, there are many other variations in the configuration. The intention here is NOT to declare a categorical winner/loser, but to get a feel for performance. I configured an SMB share on each of the devices and then used ShadowProtect to back up my desktop to it. My desktop has around 121GB of data on its C: drive, so it’s a good candidate for an average desktop.
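When comparing runs like these, the useful number is the effective throughput rather than the raw runtime, since all the devices received the same 121GB source. The arithmetic is simple enough to sketch:

```python
def throughput_mb_s(data_gb, elapsed_seconds):
    """Effective throughput in MB/s for a backup of data_gb gigabytes
    that completed in elapsed_seconds."""
    return data_gb * 1024 / elapsed_seconds

# The 121GB desktop image finishing in an hour averages about 34 MB/s.
```

That single figure makes results from differently-configured devices directly comparable, which is all this informal test is after.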
Ok – so the results of the test are as follows
Buffalo Link Station
What does this tell me? Well in all likelihood if you want the fastest backup, then a single drive is probably the way to go. Any form of RAID seems to lessen the performance a little, but interestingly the Buffalo did well with a 3 drive RAID5 array – that surprised me.
So – there you have it – some idea of the speeds you can expect from these devices when used as a ShadowProtect target. There are quite a few other tests I plan to do, such as using them as FTP targets for ImageManager, but that will come later.
If any vendor out there wants to loan me a device for testing, I’m more than happy to put it through its paces.
Wednesday, May 2nd, 2012
I use a number of different backup products in my SBSfaq.com office environment, one of which is ShadowProtect. My Terminal Server is Windows Server 2008 R2 and it’s been running on top of a Windows 8 Hyper-V server as I test it out. I’ve had ShadowProtect running inside the guest virtual machine to ensure I have a solid backup of my data should something go wrong. Well, this week something did go wrong. I had a hardware failure of the server and, just my luck, it’s not covered under warranty and I’m not sure if I will get it repaired or not. My solution is quite simple: take the backups, convert them to a VHD and get them running on another hardware platform. Not many people know that you can convert a ShadowProtect backup directly into a VHD. This is how you do it.
Note – you can also use this process to convert a ShadowProtect backup to a VMDK or an SPF (ShadowProtect Full Backup) file as well.
1. Install ShadowProtect onto a machine – it does not have to be licensed as you are only using the conversion tool. Reboot as needed.
2. Open ShadowProtect
3. Select Image Conversion Tool from the Tools menu on the left hand pane
4. Select Next to continue
5. Select Network Locations
6. Fill out the form listing the network location where the images are stored
7. It will now list the volumes that it has found here
8. Select the volume you wish to convert and then select Next to continue. If you have password protected the images, you will now be prompted for the password. Enter it and select Ok to continue
9. You will see a list of all the backup points from which you can restore this system. Select the one you want and then select Next to continue
10. Specify the destination location where you want the VHD file created. Also specify that you want it to create a VHD and select Next to continue
11. It will display a summary screen for you to review. Select Finish for it to start the image creation process.
12. ShadowProtect will now run the job and you can see the status below.
13. The time it takes to complete the job is directly related to the number of files in the image chain. The more files in the chain, the longer it will take.
14. Once the conversion is complete, you will need to do a couple of things, starting with booting into the ShadowProtect Recovery Environment.
15. Set the partition to be Active as below
16. Select the Boot Configuration Utility from the Tools menu
17. You will see that the BCD Store is broken on this VHD – you can select Auto Repair to fix it
18. This is it fixed!
19. Now boot the Virtual Machine and you should be good… for the most part anyway
A couple of things to be aware of here too…
1. The image you’ve just created does not have the same disk signature as the original disk – therefore if you try to run any backups using ShadowProtect from within this newly created machine, they WILL FAIL. The error message will be something like the screenshot below. The only way to resolve this is to edit the backup job, reselect the volumes you wish to back up, and start again with a new full backup image.
2. Given you need to create a new base image, you will also need to prune/delete the older images in the destination folder. This may well affect your archival requirements, so assess whether this method is the best way for you before you start!
Thursday, April 19th, 2012
Let me start by saying that StorageCraft ShadowProtect is not in the slightest way at fault for the story I’m about to relay. The customer, who should know better, is entirely at fault. Let this be a warning to users of ShadowProtect: there are things you may not understand, and you should not mess with them at all.
We’ve got a client at Correct Solutions who has a number of servers in a datacentre (DC). This is an installation that we set up earlier this year, and we’ve got ShadowProtect backing up the virtual servers running on Hyper-V to a local location at the DC. We are using Continuous Incrementals because they are well suited to replicating the small incremental files offsite to a DR location that the client has. Continuous Incrementals are cool technology because they take one base file and then smaller incrementals after that point. ImageManager is then used to roll up those small incrementals into consolidated daily, weekly and monthly images. It’s a cool system really, and one I’ve used for a number of years in many client environments.
Back to this client. We have ImageManager at the DC doing the image consolidation and replication to the ShadowStream server at the remote DR location, and it’s been working pretty well for a number of months. The files in the DC are effectively replicated to the remote DR site so that we can quickly bring things up should a major event happen. The client has their own IT Manager handling things on a day to day basis, so we’re called in to help when things are not working. Yesterday he called up – apparently the ShadowProtect backups were failing on the servers at the DC. We took a look and found that a stack of files, including the base images, appeared to be missing from the DC location. We then checked and found a similar (but not the same) number of files missing from the remote location. This is pretty strange, as we only ever replicate the small incremental files from the DC to the DR location, and if a file is deleted from the DC location it’s never automatically deleted from the DR location unless ImageManager tells it to be… in which case the files in the DC and DR locations would be identically deleted… which they were not. Ok – so that means someone or something else deleted the files. We did some further checking and could see that the client logged onto the DC servers AFTER the 9am backup and BEFORE the 10am backup – and the 10am backup was the first one to fail. The client of course says he did not delete the files at all, but in passing mentions cleaning up old files… CLICK…. Yup – what he did was look at the files in the backup folder. Below is a screenshot from one of my own servers that has been using ShadowProtect since mid 2010.
You can see the file ending in SPF – it’s the first full backup of this volume – note it’s 168GB in size. The other SPI files are the incrementals, or in this case the consolidated incrementals that roll up later backups. In order to restore any file from backup, you need the SPF and all the SPI files forward from it until today. Now – notice the time stamps on the files. The base SPF file was created on 25th July 2010 and has not been changed since then. The SPI files have been changed after that to incorporate the consolidation of the incrementals in accordance with the ImageManager plans that I configured. You see the –i109-cd-cm.spi file – it’s not been touched since 6th October 2010 – again due to the consolidation and retention settings I’ve configured. This is all good and everything works fine.
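The relationship between the SPF and the SPI files is easy to model. In this sketch an image is just a dict of sector to contents (the real formats are opaque), but it shows why deleting any file in the chain breaks every restore point after it:

```python
def restore(base_spf, spi_chain):
    """Restored state = the base image (SPF) with each incremental (SPI)
    applied in order.  Lose one SPI and everything after it is wrong."""
    state = dict(base_spf)
    for spi in spi_chain:
        state.update(spi)
    return state
```

The base never changes after it is written (hence the untouched 2010 timestamp), yet it is still read by every single restore – which is exactly why “old, untouched” files in a backup folder must never be treated as safe to delete.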
Back to this customer… what I believe happened is that he saw a heap of files that had not been touched for some time and deleted them all… ouch… that is how he broke ShadowProtect. To make it worse – he ALSO deleted them from the DR site. Naturally the client denies this entirely, but I’m pretty sure this is what happened, as ImageManager was NOT configured to delete things like the base files, and I’ve used ImageManager enough to know it won’t mess up like that.
How do we fix it for this client? Well luckily the solution is simple… delete EVERYTHING and start again with new base images (SPF). Then allow those to replicate to the DR site and everything will be good. ImageManager needs to be enabled to replicate the base files, but this is a minor config change we can make. The faster way would be to grab a USB hard drive, copy the base images (over 300GB in this case) and take that to the DR site to seed the DR server. We’re waiting on the client’s decision for this.
Long story short – don’t mess with ShadowProtect SPF and SPI files unless you know EXACTLY what you are doing. This client has lost his entire backup history for the past 6 months as a result of what he did. ShadowProtect and ImageManager will handle things just fine for the most part if you leave them well enough alone. I hope this story can help others better understand WHY some files look like they’ve not been touched and better understand the relationship between the files.
Monday, April 9th, 2012
StorageCraft has recently released ShadowProtect 4.2.1 on their download site. This upgrade is a little different to most in that there are a few things to watch out for. Firstly, you can only upgrade to 4.2.1 if you are already on 4.1.5. If you are not on 4.1.5, then you need to remove and reinstall ShadowProtect and reconfigure the jobs etc.
Download the update from here
Extract it to your desktop and run the upgrade. Choose your location. The Australia/New Zealand location is identical to the English option except that it also gives you the option to run SPDiagnostics, a great utility the AU/NZ StorageCraft guys put together that really helps diagnose issues that could prevent your ShadowProtect installation from working correctly.
On one of my systems, it warned me that I had some files hanging around from using Acronis TrueImage. The StorageCraft KB article was helpful in resolving that and I could then proceed.
You get warned in advance that the upgrade will need you to reactivate your ShadowProtect installation. Make sure you have your keys handy. If you don’t – that’s ok – it will go into the 30 day trial mode.
The installation is pretty much a follow the bouncing ball install after this point, so check out the screenshots below.
You only get this next screen if you install the Australian/New Zealand version
You WILL reboot – seriously – you WILL!
After the reboot, you will need to run ShadowProtect to activate it once more.
On one of my systems I had an activation issue – this was due to me decommissioning a system in the past without deactivating it. StorageCraft were most obliging in helping resolve this for me.
Based on my experience, the upgrade takes less than 10 minutes plus a reboot and reactivation. There was one scenario, however, on my Hyper-V server where ShadowProtect was installed on the Hyper-V base, where it took over 40 minutes to do the upgrade – not sure why, but it did.
Thursday, April 5th, 2012
Been doing a bit of work recently for a customer, migrating their physical environment into a virtual test environment. We’ve used ShadowProtect IT Edition as part of that solution. On some of the servers, we received one of two errors.
Info: The boot selection failed because a required device is inaccessible.
Info: the selected entry could not be loaded because the application is missing or corrupt.
Resolving both of these is quite simple. Inside ShadowProtect, from the Tools menu, select Boot Configuration Utility. You will get the screen below. Initially the status of our servers was Broken, but you can select the Auto Repair button and it will resolve the boot problems for you.
Friday, March 9th, 2012
I’m running StorageCraft ShadowControl ImageManager on one of my Windows Storage Server 2008 R2 Essentials servers, as it acts as the host for all my ShadowProtect backups. Sometime this week, ImageManager started to log two errors during its daily processing.
Process folder exception: Unable to synchronize with directory contents
Sync exception: SyncFiles database read error
And in the daily summary report it gave me this error on one of the machines only – all other machines managed by this ImageManager were working just fine.
Unable to summarize activity for I:\ServerFolders\SPBackups\SBSFAQWEB1
I did some digging into this and found that someone else has had this issue before; it occurred after an unscheduled restart of their ImageManager server.
I decided to investigate further and found my ImageManager.log file in the “C:\Program Files (x86)\StorageCraft\ImageManager\Logs” path. In this log I found the following information which correlated to the alert emails I received.
07-Mar-2012 20:09:25 Initialized Remoting with secure TCP channel.
07-Mar-2012 20:09:26 Could not find file ‘C:\Program Files (x86)\StorageCraft\ImageManager\ImageManager.lic’.
07-Mar-2012 20:09:35 Watch threads started
07-Mar-2012 20:11:32 I:\ServerFolders\SPBackups\SBSFAQWEB1
Sync exception: SyncFiles database read error
08-Mar-2012 04:00:00 Queue processing
08-Mar-2012 04:00:00 I:\ServerFolders\SPBackups\SBSFAQWEB1
Sync exception: SyncFiles database read error
08-Mar-2012 04:00:00 I:\ServerFolders\SPBackups\SBSFAQWEB1
Process folder exception: Unable to synchronize with directory contents
08-Mar-2012 04:01:11 I:\ServerFolders\SPBackups\SBSFAQSVR1\F_VOL-b001-i9949-cd.spi
08-Mar-2012 04:02:04 I:\ServerFolders\SPBackups\SBSFAQSVR1\E_VOL-b001-i9951-cd.spi
08-Mar-2012 04:17:29 I:\ServerFolders\SPBackups\SBSFAQSVR1\C_VOL-b001-i9949-cd.spi
08-Mar-2012 04:17:52 I:\ServerFolders\SPBackups\ML350G5\C_VOL-b002-i351-cd.spi
08-Mar-2012 04:20:15 I:\ServerFolders\SPBackups\SBSFAQNB03\C_VOL-b005-i690-cd.spi
08-Mar-2012 04:20:38 I:\ServerFolders\SPBackups\SBSFAQTSV1\C_VOL-b001-i896-cd.spi
You’ll note above that the log starts at 20:09 on March 7th – it turns out there was an unscheduled reboot of the server at that time. No idea why, but I’ll investigate that separately. What I need right now is to get things working so that SBSFAQWEB1 is managed correctly once more.
In ImageManager, the settings for the managed folders are stored in the registry. The information about consolidation status, replication status etc. is stored in an Access database called ImageManager.mdb, located in the “C:\Program Files (x86)\StorageCraft\ImageManager” folder. I found elsewhere that someone else had fixed this issue by stopping the ImageManager service, renaming ImageManager.mdb aside, and starting the service again, so I thought it might work for me. I checked with StorageCraft support here in Australia and Melissa agreed that this was the right course of action. Here’s what I did.
When the service starts, it creates a fresh ImageManager.mdb database and begins reviewing the files present and updating their status. After it has created the new database, it begins its daily processing and verification tasks once again. I gave it 15 minutes or so to process, and it was showing green ticks across the board for all my managed folders.
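The stop/rename/start procedure can be scripted if you have to repeat it. A hedged sketch follows – the service display name and the rename target are my assumptions, so substitute whatever services.msc shows on your server; the function only builds the Windows command lines (run them yourself from an elevated prompt, e.g. via subprocess.run), which also keeps the sketch testable anywhere:

```python
SERVICE = "StorageCraft ImageManager"  # service display name - assumption
MDB = (r"C:\Program Files (x86)\StorageCraft\ImageManager"
       r"\ImageManager.mdb")

def rebuild_commands(service=SERVICE, mdb=MDB):
    """Commands for the mdb rebuild: stop the service, rename the old
    database aside, start the service so it creates a fresh one."""
    return [
        ["net", "stop", service],
        ["cmd", "/c", "ren", mdb, "ImageManager.old"],
        ["net", "start", service],
    ]
```

Renaming rather than deleting the old database means you can put it back if the rebuild doesn’t go the way you expect.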
Thanks Mel for confirming the course of action is good. Glad it worked!