Thursday, October 11th, 2012
We pick up a lot of clients that have had bad experiences with other IT resellers. As a result we get to see some pretty strange configurations people have made which adversely affect the client's environment. Here are a few that I've seen with ShadowProtect that really should NOT be done.
Running missed jobs automatically – ShadowProtect has an option to run a missed job automatically when the ShadowProtect service starts, and another to auto execute missed tasks. In principle this sounds like a good idea, however in practice it can be a problem. Take the example of a server with "other issues" not related to ShadowProtect – this server would BSOD for reasons unrelated to StorageCraft at all (I'll do another blog post on that later). It would crash in such a way that many things stopped working correctly before it finally blue screened. When it rebooted, along with the normal startup load on the server, it would also trigger the backup jobs it had missed, which further increased the load. The end result is a server brought to its knees for quite some time after it comes up, as the disks get hammered like crazy. There are two things you need to do to rectify that. Firstly, under Options > Agent Options, see the "Start Backup Delay" option – its default is 60 seconds, meaning ShadowProtect will start any backups 60 seconds after its service starts. I recommend changing it to 300 seconds, or 5 minutes. This gives the server a chance to start up and settle nicely before you allow any backup jobs to start.
The second option I HIGHLY recommend you NEVER use is the "Auto execution of unexecuted task" option, specified in the Advanced Options of each individual backup job. If you enable this, it will automatically run whatever backups were missed when the system restarts. Take the example above of the server that would BSOD – within 60 seconds of starting up, it would run the missed backup jobs, placing EVEN MORE load on the server and causing still more problems. Note that this feature runs missed backups regardless of the reason they were missed, so if your server was shut down for other maintenance, a power outage etc., it will also automatically run backup jobs after it comes back up.
The third option I suggest you NEVER use is "Enable concurrent task execution" – this allows multiple jobs to run AT THE SAME TIME. That might be fine if you have a massively high performance disk array, but it's not something you want to do at all on the majority of servers, as it will cause the server to freeze at times. The end result is that the client will likely power the server off, which is even worse.
Bad Disk Configuration – this is something we see often. An example is a server with an HP B110i SATA RAID controller with NO additional caching. This server had 2 x 2TB drives in a RAID 1 configuration with no caching at all on the controller. The server was also configured with 3 partitions on the disks, which is normally not an issue either. What compounded the issue further still was that a single backup job had been created to back up all three partitions at once. This is a big mistake on a machine configured like this. You can see in the log below that the backup starts at 8:05 in this instance, and it takes 4 minutes and 21 seconds to do the VSS snapshot – during this time the server effectively pauses many of its operations so that it can snapshot the current state of the server. Why does it take so long? Because it's trying to snapshot 3 partitions of information back to itself, so the disk IO is massively overloaded.
Solutions here could include splitting the single job into one backup job per partition with staggered start times, or adding a proper write cache (with battery backup) to the RAID controller.
BSOD or hard powering off of the server – you need to avoid this at all costs. StorageCraft has some cool methods to track the changes made to the disks while the server is running (I won't go into them here), but these methods rely on the server being shut down correctly rather than crashing. If the server is powered off correctly, ShadowProtect can use that cool technology to create the image using a VDIFF, which is very fast and the best way to minimise the backup time. You can see above that the backup shown is being created by VDIFF from the line "image will be created by VDIFF" – this tells you it's the fastest way StorageCraft can do the image. If however the server BSODs, or you power it off with the power button, the information StorageCraft needs to create the VDIFF backup is lost, and it instead reverts to a DiffGen. You can see this in the log file below: "Incremental image will be created from existing full by using DiffGen"
When an image is created by DiffGen, ShadowProtect compares the existing backup image chain with what is on the disk right now to determine what has changed, and then creates an incremental backup. This process is EXTREMELY slow, as it has to read and compare each sector across the entire image chain. If you happen to watch the time remaining or speed counters within ShadowProtect while it's doing a DiffGen, you will see them fluctuate wildly as it works over and over the image chain. Once this backup has completed, however, it will revert to doing VDIFF backups once again.
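To make the difference concrete, here's a conceptual sketch – this is NOT StorageCraft's actual implementation, just an illustration of why one path is fast and the other is painfully slow: a VDIFF-style incremental already knows which sectors changed (from the change-tracking journal kept while the server runs), while a DiffGen-style incremental has to read and compare every sector because that journal was lost in the crash.

```python
# Conceptual sketch only -- NOT StorageCraft's real code or file format.
# VDIFF-like path: a journal of changed sectors already exists.
# DiffGen-like path: no journal, so every sector must be read and compared.

def incremental_via_journal(current, changed_sectors):
    """Fast path: copy only the sectors the change journal says were touched."""
    return {i: current[i] for i in sorted(changed_sectors)}

def incremental_via_compare(previous, current):
    """Slow path: read every sector of both images and diff them."""
    return {i: cur for i, (prev, cur) in enumerate(zip(previous, current))
            if prev != cur}

previous = [b"a", b"b", b"c", b"d"]   # sectors in the last backup image
current  = [b"a", b"X", b"c", b"Y"]   # sectors on disk now

# Both paths produce the same incremental backup...
assert incremental_via_journal(current, {1, 3}) == incremental_via_compare(previous, current)
```

The end result is identical either way; the difference is that the journal path reads only the changed sectors, while the compare path has to read everything – hence the huge gap in backup times.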
Now – imagine if you will a client's server with pretty much ALL of the above issues at once… yeah… you get the idea. It's pretty easy to configure a great product like ShadowProtect VERY badly and create a massively bad impression on the client. From what I'm told, the previous IT provider blamed the product for all of these failings. Funny though that once we resolved them as best we could, the server performed much better (still not great due to the disk configuration) and was more stable too!
Tuesday, July 10th, 2012
One of the challenges that I've had with my 12TB WD Sentinel DX4000 is how to go about backing it up. You see, Microsoft designed the inbuilt backup utility to handle only up to 2TB of space, yet my WD Sentinel has 8.12TB of data storage capacity, as you can see below.
Likewise, as you can see, I have almost filled it with lots of data, so losing something could be a real problem for me.
The solution I’m using is actually quite simple. I’ve attached 2 x WD MyBook 3TB drives to the external USB 3 ports. I’ve then installed BackupAssist and am using it to do the backup of selected folders to the drives. I’ve split my backups so that half the folders will go to one drive and the other half will go to the other drive. BackupAssist gives me the chance to use the many different methods to backup my data, but I’ve found that the File Replication engine is the best for this type of backup.
What this backup does is protect my data should I have some major corruption of the RAID array, or in case of fire (I take the 3TB MyBook drives offsite and swap them regularly). In the event that the unit is destroyed in some way, I'd get a new unit and restore just the data to it. I've tested this and it works just fine. No more 2TB backup limit!
After each nightly backup, you have the option of having a status report emailed to you. One of the parts I like about this is the Media Usage Report which shows just how much space is free/used on your destination media.
If you are interested in BackupAssist you can check out their website here
Monday, March 5th, 2012
Windows Storage Server 2008 R2 Essentials is provided only through specific OEMs. Some OEM providers have chosen to disable the inbuilt utility used to back up the server itself, because the Windows Backup utility does not support source or destination drives greater than 2TB in size. Be advised that following the steps below may void any support you get from the OEM vendor.
1. Open Regedit
2. Navigate to HKLM\Software\Microsoft\Windows Server\ServerBackup as the screenshot below shows
3. Delete the ProviderDisabled entry and close Regedit
4. Open up Services.msc
5. Find the “Windows Server Server Backup Service” entry as the screenshot below shows
6. Change the service from Disabled to Automatic and Start the service and then close the Services console
7. Open the Dashboard from the link on the desktop
8. Navigate to the computers tab on the console and right click on the server computer
9. Run the Setup Backup for this Server wizard and follow the bouncing ball.
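For reference, steps 1–6 could also be scripted rather than done by hand. A sketch only – note that "WSSBackup" is my ASSUMED short name for the "Windows Server Server Backup Service" (check the service's properties in Services.msc for the real name), and the commands must be run elevated on the server itself:

```python
# Sketch of steps 1-6 as commands. "WSSBackup" is an ASSUMED service short
# name -- verify the real one in Services.msc before running anything.
commands = [
    r'reg delete "HKLM\Software\Microsoft\Windows Server\ServerBackup"'
    r' /v ProviderDisabled /f',            # step 3: remove the registry entry
    'sc config WSSBackup start= auto',     # step 6: set the service to Automatic
    'sc start WSSBackup',                  # step 6: start the service
]
for cmd in commands:
    print(cmd)  # swap print for subprocess.run(cmd, shell=True) to actually apply
```

Steps 7–9 (the Dashboard wizard) still need to be done interactively.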
If you attempt to select a source drive that is larger than 2TB to backup, you will get the error message below. This reinforces the restrictions that I mentioned at the start of this article.
Wednesday, February 29th, 2012
I've been getting quite a few reports from people that after installing the latest Update Rollup 2 (UR2) on SBS 2011 Essentials and Windows Storage Server 2008 R2 Essentials, the client computers appear offline and the backups fail. I blogged about the release of UR2 earlier this month here. One of the things I observed during the installation of the update on the server was the way it force upgrades the client agent on the machines. It shows a small bubble popup indicating that the client machine needs to be rebooted, as you can see below.
What happens under the covers is that once the client agent has been upgraded, it can't communicate with the server UNTIL after a reboot – causing the client machine to appear offline in the console and the backup to fail. The solution is simple: reboot all your client computers after the agent has upgraded and everything works fine. Your clients will appear online and backups will succeed once more.
Thursday, December 15th, 2011
Veeam is a product I've been meaning to look at for some time now, since a good friend of mine introduced me to them a while back. One of the things that has stopped me, however, is that I'm a user of Hyper-V and not VMware. Their latest version, v6, now supports Hyper-V, so it's time to check it out.
I’ve just been alerted to a free NFR offer that they have for anyone who is a Microsoft Certified Professional (MCP).
They are offering a 12 month free license for testing use to MCPs. You can register for your free license here
Check it out – I’ll be playing with this some more over the coming holiday season.
Monday, December 12th, 2011
Recently while troubleshooting a system for a client, I ran across the situation where the StorageCraft VSS Snapshot provider disappeared.
When you ran the vssadmin list providers command, you only got the list below.
I then decided to check whether the service was present by opening the Services console, and it was not even listed. That's right, the StorageCraft Shadow Copy Provider was not present in the list of available services.
The fix was pretty simple: re-run the ShadowProtect installer and select the Repair option. Reboot once it's done and voila… it's back, as you can see below.
Not sure why it disappeared in the first place, but it’s back now and the system is running far more reliably than before.
Saturday, September 3rd, 2011
I had the above error this week whilst configuring BackupAssist on one of my servers. I use different products all the time and right now I’m playing around with BackupAssist to see what it can do for me that other products can’t.
This server is my HP Microserver that I’m using with Windows Storage Server 2008 R2 Essentials.
I want to use BackupAssist to test out the Archival File Sync capabilities at this point. I configured a job and ran it, but it failed with the error "BA703 Specified file selection doesn't exist".
I was lazy, so I sent the information to support as detailed in this blog post here. They came back with the suggestion that the issue was a conflict between the StorageCraft ShadowProtect VSS provider and the Microsoft VSS provider – a perfectly plausible explanation given I had ShadowProtect installed on this system. Even though ShadowProtect was not backing up at the same time, when BackupAssist initiates a backup it will default to using the StorageCraft VSS provider instead of the Microsoft one, as I believe StorageCraft have set theirs to be the default VSS provider for any application that calls VSS.
Anyway – the solution is easily worked around using the BackupAssist KB article here http://kb.backupassist.com/articles.php?aid=2725
Long story short – modify how open files are handled, as shown below, by removing the selection to use the Microsoft Volume Shadow Copy Service, and we're good to go. This only affects this job itself and not other jobs.
Wednesday, July 20th, 2011
No – this is not a test to see how fast a Toshiba can reboot vs a Porsche 911, but a story from Jeff Wilson of VelocIT Business Systems, a member of the SMB IT Professionals organisation in Australia. Jeff tells the story of an urgent call he had from a customer over a weekend: the customer needed him to look at something urgently – i.e. before Monday. Being a customer focused professional, of course Jeff went to help out. When he got there he found that the client had "accidentally" run his Porsche 911 over the laptop… see the pictures below!
The screen still opened up
And of course – compare the good with the bad.
What is interesting is that the hard drive on these laptops is stored under the keyboard palm rest area, and it appears this was not harmed – therefore the user's data was still safe. The laptop itself, though, was of course totally dead. Jeff moved the hard drive over to the spare the customer had and got them running again quickly.
Now – aside from being some funny pictures of a laptop accident – it brings a few other thoughts to mind.
What are you doing for backups of your users' mobile devices? Are you backing up the entire laptop, and if so, how often and using what software?
Personally I'm using Trend Micro SafeSync on my laptop to sync all the user-changed data – Word documents, Excel, PowerPoint and the like – up to the cloud, and then from the cloud down to my desktop PC in the office. This ensures my user data is safe and sound.
In terms of the laptop's operating system and applications, I'm using ShadowProtect to back up to a USB hard drive while I travel, or to my NAS in the office. I'm also using the Client PC Backup technology from Windows Storage Server 2008 R2 Essentials to back up the laptops while in the office as a second approach.
What do you do?
Wednesday, June 22nd, 2011
There's a lot of news in the media at the moment about massive system hacks by a number of rogue hacker groups. One of the local ones recently involved domain registrar DistributeIT, based here in Australia. Their systems were not only hacked, but totally and utterly destroyed by the hackers. The most recent news article on this has DistributeIT talking about how even their backups were erased. And based on that we can assume a few things.
Now don't get me wrong – I'm not saying they got what they deserved. No one deserves to have their business decimated in the way I'm sure DistributeIT's is right now. But they clearly had not fully considered the potential ramifications of a deep compromise of their systems.
Now personally I've used image based backup products like StorageCraft for quite some time and I love them. I also use products that work specifically with tape, such as BackupAssist. However, if you use an image based product alone, you need to consider what would happen in the event that your backups are erased. How can you prevent that? If you replicate the data offsite, then that replication is also potentially likely to replicate the deletion (depending on how you have it configured). Particular attention needs to be paid here, and this is where some form of offline backup comes into play. Be it tapes or offline hard drives, you need to ensure you have a way to prevent hackers from getting in overnight and killing your business.
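The replication point is worth making concrete – a toy illustration (not any particular product's behaviour): a "mirror" style replication faithfully propagates deletions to the offsite copy, while an additive, archive-style copy does not.

```python
# Toy illustration only: why mirror replication can replicate a deletion,
# while an additive (archive-style) copy survives it.

def mirror(source, replica):
    """Make the replica exactly match the source -- deletions included."""
    replica.clear()
    replica.update(source)

def additive(source, replica):
    """Copy new/changed files to the replica, but never delete from it."""
    replica.update(source)

source = {"db.bak": 1, "mail.bak": 1}
offsite_mirror = dict(source)
offsite_archive = dict(source)

del source["db.bak"]          # an attacker deletes a backup at the source
mirror(source, offsite_mirror)
additive(source, offsite_archive)

assert "db.bak" not in offsite_mirror   # the mirror lost it too
assert "db.bak" in offsite_archive      # the archive copy still has it
```

Offline media – tapes or swapped-out hard drives – is the physical-world version of the additive copy: once the media is out of the drive, no script or intruder can touch it.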
Why tapes I hear you ask? How’s this for a few reasons to start with…
If you were to change your offline media on a daily basis, you limit what attackers can do to your business. What are you doing to prevent this for yourself and your customers?
So – I ask you – is tape really dead? If this has you thinking, then check out BackupAssist as an option, as they support tape on versions of Windows that lack native tape support – which is basically everything from Windows Server 2008 or SBS 2008 onwards.
Nope – for me – I’m seriously revisiting how my backup strategies are maintained and am looking to develop some new ideas and practices around this.
Thursday, June 9th, 2011
In today's move to disk based backups, we often forget the old days of tape, when you had monthly archives off to tape that some people kept for a VERY long time. Today, so many vendors and resellers focus exclusively on backing up JUST so you can recover the business in a DR scenario. As such, we focus more on the recovery process than on the longer term archival process. Herein lies a problem. What happens if corruption of your data goes unnoticed for some time? What happens if you do not keep regular archive-only backups?
Well, in the last few weeks I've hit just that problem myself. Below you see a photo from a kitchen renovation I did for my parents back in January 2002. I was digging these out to compare with the kitchen now, and discovered the corruption. Once I found this one, I dug deeper and found, to my increasing horror, that every single image I'd taken from 1997 through to August 2007 – that's 10 years of memories – was similarly corrupted.
You can bet I was not happy. I felt so sick at the loss of all these memories that I cried – it was not Mum's kitchen I was worried about, it was the family photos that are no longer stored as negatives or printed pictures in a photo album or a box in the attic… it was those memories that I had lost.
Did I have backups? Sure – I had regular backups of this server to cover a hard drive failure and so on. Did I have archival backups over the last 10 years??? Umm – NO!!! Over the last 10 years these photos have been stored on a number of different servers, most recently my old Windows Home Server. I know there had been at least one disk failure on that box, but Windows Home Server said it had rebuilt the lost data, and when I checked some recent photos afterwards they were fine.
So last night I was talking to a friend about the loss of these memories (thanks Meredith) and she just said… "surely you have older backups somewhere…". That tripped a memory that maybe I still had some old hard drives I'd used for backups. I dug into a box of old drives and found one from 2009 that I had put aside because the data had grown too big to fit on it. All my photos were in fact intact on that backup… you should have heard the screams of joy.
Recovery from that point was fast and now my data is once again intact. As you can see below the original image is once again complete as are the family memories that are so precious to me and my family.
I believe the corruption was more likely the result of a Windows Home Server bug in the original version's DriveExtender technology than of a failed disk drive, as from what I can see the problems occurred when I copied all these images over to a NEW Windows Home Server that I had built BEFORE it was patched to resolve that known issue. This also reinforces the point of checking and double checking that you have things set up correctly before putting critical information at risk (yes, I know I should have my computer licence stripped from me).
Ok – all of this has made me think even more seriously about archival backups and how to do them. I'm looking at options that will help me survive this sort of thing going forward, and I'm considering both online and offline backups. What do you use, and why?