Wednesday, July 20th, 2011
No – this is not a test to see how fast a Toshiba can reboot vs a Porsche 911, but a story from Jeff Wilson of VelocIT Business Systems, a member of the SMB IT Professionals organisation in Australia. Jeff tells of an urgent call he had from a customer over a weekend – the customer needed him to look at something urgently, i.e. before Monday. Being a customer-focused professional, Jeff of course went to help out. When he got there he found that the client had “accidentally” run his Porsche 911 over the laptop… see the pictures below!
The screen still opened up
And of course – compare the good with the bad.
What is interesting here is that the hard drive in these laptops sits under the keyboard palm rest, and it appears it was not harmed – the user’s data was still safe. The laptop itself, though, was of course totally dead. Jeff moved the hard drive over to the spare machine the customer had and got them running again quickly.
Now – aside from providing some funny pictures of a laptop accident, this brings a few other thoughts to mind.
What are you doing for backups of your users’ mobile devices? Are you backing up the entire laptop, and if so, how often and with what software?
Personally I’m using Trend Micro SafeSync on my laptop to sync all the user-changed data – Word documents, Excel, PowerPoint and the like – up to the cloud, and then from the cloud down to my desktop PC in the office. This ensures that my user data is safe and sound.
In terms of the laptop’s operating system and applications, I’m using ShadowProtect to back up to a USB hard drive while I travel, or to my NAS in the office. I’m also using the Client PC Backup technology from Windows Storage Server 2008 R2 Essentials to back up the laptops while in the office as a second approach.
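The “sync only what the user changed” idea is simple enough to sketch. The following is a hypothetical illustration (not SafeSync or any real product): copy a file to the destination only when it is missing there or newer than the destination copy, judged by modification time.

```python
import shutil
from pathlib import Path

# Hypothetical folder-sync sketch (not Trend Micro SafeSync): copy any file
# from src that is missing in dst, or newer than the copy already in dst.
def sync_changed(src: str, dst: str) -> list:
    copied = []
    src_root, dst_root = Path(src), Path(dst)
    for f in sorted(src_root.rglob("*")):
        if not f.is_file():
            continue
        target = dst_root / f.relative_to(src_root)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps, so an
            copied.append(str(f.relative_to(src_root)))  # unchanged file
    return copied                                        # won't re-copy
```

Because `copy2` preserves timestamps, a second run over unchanged data copies nothing – which is exactly why this style of sync stays cheap on bandwidth.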
What do you do?
Thursday, June 9th, 2011
In today’s move to disk-based backups, we often forget the old days of tape, when you took monthly archives off to tape that some people kept for a VERY long time. Today, so many vendors and resellers focus exclusively on backing up JUST so that you can recover the business in a DR scenario. As such we focus more on the recovery process than on longer-term archival. Herein lies a problem: what happens if corruption of your data goes unnoticed for some time? What happens if you do not keep regular archive-only backups?
Well, in the last few weeks I’ve hit just that problem myself. Below you see a photo from a kitchen renovation I did for my parents back in January 2002. I was digging these out to compare with the kitchen now, and discovered the corruption. Once I found this one, I dug deeper and found, to my increasing horror, that every single image I’d taken from 1997 through to August 2007 – that’s 10 years of memories – was similarly corrupted.
You can bet that I was not happy. I felt so sick at the loss that I cried – it was not Mum’s kitchen I was worried about, it was the family photos that are no longer stored as negatives or prints in an album or a box in the attic… it was those memories that I had lost.
Did I have backups? Sure – this server was backed up on a regular basis to cover a failure of the hard drives and so on. Did I have archival backups over the last 10 years? Umm – NO!!! Over that time these photos have lived on a number of different servers, most recently my old Windows Home Server, and I know there had been at least one disk failure on that box – but Windows Home Server had said it had rebuilt the lost data, and the recent photos I spot-checked afterwards were fine.
So last night I was talking to a friend about the loss of these memories (thanks Meredith) and she simply said, “surely you have older backups somewhere…”. That tripped a memory that maybe I had some old hard drives I’d used for backups. I dug into a box of old drives and found one from 2009 that I had put aside because the data had outgrown it – and all my photos were in fact intact on that backup… you should have heard the screams of joy.
Recovery from that point was fast and my data is once again intact. As you can see below, the original image is once again complete, as are the family memories that are so precious to me and my family.
I believe the corruption was more likely the result of a Windows Home Server bug in the original version’s DriveExtender technology than of a failed disk drive: from what I can see, the problems occurred when I copied all these images over to a NEW Windows Home Server I had built BEFORE it was patched to resolve that known issue. This also reinforces the point of checking and double-checking that you have things set up correctly before putting critical information at risk (yes, I know I should have my computer licence stripped from me).
Ok – all of this has made me think even more seriously about archival backups and how you do them. I’m looking at options that will help me survive something like this in future, and I’m considering both online and offline backups. What do you use and why?
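Part of the lesson above is that silent corruption only bites when it goes unverified for years. One simple defence is to keep a checksum manifest of your archive and re-verify it periodically, so bit rot surfaces while good backups still exist. Here is a hypothetical sketch of that idea (my own illustration, not any particular product):

```python
import hashlib
import json
from pathlib import Path

# Hypothetical bit-rot check: record a SHA-256 manifest of an archive
# folder, then re-verify it later. A changed hash on a photo you never
# edited means silent corruption – caught while older backups still exist.
def build_manifest(folder: str, manifest_path: str) -> int:
    root = Path(folder)
    hashes = {
        str(f.relative_to(root)): hashlib.sha256(f.read_bytes()).hexdigest()
        for f in sorted(root.rglob("*")) if f.is_file()
    }
    Path(manifest_path).write_text(json.dumps(hashes, indent=2))
    return len(hashes)  # number of files recorded

def verify_manifest(folder: str, manifest_path: str) -> list:
    expected = json.loads(Path(manifest_path).read_text())
    root = Path(folder)
    bad = []
    for rel, digest in expected.items():
        f = root / rel
        if not f.is_file() or hashlib.sha256(f.read_bytes()).hexdigest() != digest:
            bad.append(rel)  # missing or corrupted since the manifest was built
    return bad
```

Run the verify step on a schedule (or at least before retiring old backup media) and the 10-years-unnoticed scenario becomes a days-unnoticed one.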
Wednesday, June 8th, 2011
The current version of ShadowProtect on the US website is 4.1.5, and on the Australian website 4.1.0. I’ve recently had a few issues with high CPU utilisation in ShadowProtect and had of course upgraded to the latest version thinking the issue would be resolved. Unfortunately there is an issue under investigation at the moment, occurring with both 4.1.0 and 4.1.5, that some people are experiencing. I spoke to the local ShadowProtect guru, Jack Alsop, and sought his thoughts. His comment was that the issue, triggered by flaky network connections, will be addressed shortly in a newer release, but that for now the best version to run is 4.0.5. ShadowProtect 4.0.5 has some very cool features and I think you should seriously consider moving your clients up to it if they are not on it now. Some of the key features in 4.0.5 include:
1. File System Dirty Detection – when the NTFS file system is shut down normally, it marks the volume as “clean”. When the shutdown is not normal, e.g. a BSOD, it marks the file system as “dirty”. When NTFS is in its dirty state, the data on it may not be consistent. Prior to 4.0.5, ShadowProtect did not check this flag, so it could continue backing up the system even though the file system was not in a consistent state – it can do this because ShadowProtect backs up at a sector/block level rather than a file level, which is cool. The downside is that it might be backing up a volume that is not restorable due to the inconsistency that was there to start with – think garbage in, garbage out. With 4.0.5, StorageCraft introduced a dirty-flag check: when running its normal incremental backups, it checks the status of the flag, and if the flag is marked dirty it fails any scheduled incremental backups on that volume. This way you know about it and can run CHKDSK to clear up the issue sooner rather than later. As a workaround you can still force a full backup or a forced incremental of a dirty volume (all scheduled backups will still fail) – which you might want to do when a disk is going bad so you can recover as much as possible.
2. Mounting/restore of an image chain – StorageCraft have optimised the algorithms used when restoring a system, so 4.0.5 restores faster than previous versions.
3. The WinPE recovery environment is now based on Windows 7/Windows 2008 R2 with no added drivers. Why is this a good thing? It means the environment will boot cleaner and quicker with just the base set of drivers these operating systems ship with. If you look on the CD you will see a directory called “Additional Drivers” with subdirectories containing tested drivers you can load for mass storage devices etc. This directory is never used during the boot process; it is simply a place to store your own drivers so you can handle your own situations easily.
4. Email reporting has been completely overhauled to provide a better experience and much improved reliability over previous versions.
5. Finally, there is a known issue where your ShadowProtect install intermittently seems to lock the server, and the ShadowProtect logs report that the backup in question took 10 minutes instead of seconds to create the snapshot and used VSNAP instead of the ShadowProtect provider. A patch exists for this issue, which relates to the ShadowProtect Volume Storage Manager. Contact StorageCraft APAC support to get this patch.
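The dirty-flag behaviour in point 1 boils down to a small decision rule. The sketch below is purely my own illustration of that rule (not StorageCraft code): a scheduled incremental on a dirty volume fails so the admin notices and can run CHKDSK, while a forced full or forced incremental is still allowed as a workaround.

```python
# Illustration only (my sketch, not StorageCraft code) of the 4.0.5
# dirty-flag rule: scheduled incrementals on a dirty NTFS volume fail so
# the admin notices and runs CHKDSK; forced backups still run, e.g. to
# recover what you can from a disk that is going bad.
def backup_runs(volume_dirty: bool, job: str) -> bool:
    if volume_dirty and job == "scheduled_incremental":
        return False  # fail the job so the dirty state gets noticed
    return True       # clean volume, or a forced full/forced incremental
```

The design choice here is deliberate: failing loudly on a dirty volume beats silently accumulating incrementals of a possibly unrestorable file system.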
As always, this information is accurate as of today. StorageCraft have already indicated another version is on the way soon, and pending its release and testing it may well be the better version to go for.
Friday, May 20th, 2011
A few people have noticed that Exchange 2010 on SBS 2011 Standard edition has Circular Logging enabled by default on the Exchange databases, and have wondered why.
First up, a history lesson. In SBS 2003, Exchange 2003 was configured with Circular Logging disabled by default. This is great from a recovery perspective, as it allows Exchange to keep log files of the changes to the database and later replay those log files into the last backed-up copy of the database. When the inbuilt SBS 2003 backup program ran, it would back up the log files and truncate them. The only problem is that many users who configured their Exchange 2003 servers NEVER configured a backup. This led the Exchange log files, located on the C: drive by default, to fill the disk and crash the server due to low disk space on the operating system partition.
When the SBS team were designing SBS 2008, they took this into account. They decided, based on community feedback and user ineptness, to have Circular Logging enabled by default. This meant the Exchange database log files would not fill up the disk as they did with SBS 2003. The same design thinking was carried through to SBS 2011 Standard edition and is what you see today. It does, however, limit your recovery options for a corrupt Exchange database.
What happens is that if you run the SBS 2008 or SBS 2011 Standard backup wizards, the wizard disables circular logging so that you have a better chance of recovering a corrupt mail database. If you are using third-party backup products such as StorageCraft ShadowProtect or BackupAssist, you will want to go into the Exchange Management Console and disable circular logging yourself.
Tuesday, February 15th, 2011
In Trend Micro Worry-Free Business Security 6.0 and higher, Trend have implemented new technology that reduces the size of the pattern files distributed to client machines. This, combined with other architecture changes, means there is quite a lot of disk activity and change at certain times of the day. The disk activity is really a reorganisation of the pattern files and the database itself, not an actual increase in the amount of data stored. In itself this is not a problem, because a backup taken once a day with your favourite backup program will only record the differences between that point in time and the last backup, which are fairly small. If you back up more often than that, however, you might run into a problem with very large incremental backups. This article talks about why that happens and shows how to avoid it.
Image-based backup software such as the inbuilt SBS 2008/SBS 2011 backup, or third-party programs such as StorageCraft ShadowProtect, can back up very fast and multiple times a day – as often as every 15 minutes in the case of ShadowProtect. This is great from a disaster recovery perspective as it shrinks the data lost to a system failure down to a very small time window. The way these work is to take a base image once and then take some form of incremental backup from that point forward. Windows/SBS Backup automatically consolidates these into its backup file, which is a VHD. ShadowProtect keeps them as incrementals, and ImageManager consolidates them based on various settings.
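The base-plus-incrementals model above can be pictured with a toy sketch (my own illustration, not any vendor's format): treat an image as a map of block number to block contents, with each incremental holding only the blocks changed since the previous backup.

```python
# Toy model of sector-level imaging (illustration only, not any vendor's
# on-disk format): an image is a dict of block_number -> block_contents.
# The base image holds every block; each incremental records only the
# blocks changed since the previous backup. Consolidating a chain means
# applying the incrementals to the base in order, newest last.
def consolidate(base: dict, incrementals: list) -> dict:
    image = dict(base)  # copy so the base image is left untouched
    for inc in incrementals:
        image.update(inc)  # later writes overwrite earlier block contents
    return image
```

This also makes the trade-off visible: each incremental is tiny to take, but a restore to a given point needs the base plus every incremental up to that point – hence the value of periodic consolidation.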
Now let’s look at one of the features of StorageCraft ImageManager called replication – it allows the incremental images to be sent over a LAN/WAN to another server, or via FTP to a remote server. This is a cool feature because as soon as an incremental image is created, it can be shipped offsite quickly and efficiently. It relies, however, on the incrementals being small enough to be pushed out quickly to the remote location. Factors such as limited internet bandwidth really come into play here.
Ok – let’s tie this all together now to see the ramifications.
We have image-based backup software that can snapshot the changes made in the last 15 minutes – if a program such as WFBS makes large amounts of disk change in those 15 minutes, the incremental image will be quite a bit larger than normal; you can see a few gigabytes of changes in a short period. These incrementals are fundamental to restoring the server to that specific point in time, so we can’t do anything about them per se.
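The arithmetic behind those large incrementals is worth making explicit: a sector-level incremental grows with the number of blocks touched, not with the net change in logical data. The 4 KiB block size below is an assumption for illustration only.

```python
# Back-of-envelope arithmetic: a sector-level incremental carries every
# block touched since the last backup, regardless of whether the logical
# data actually grew. Rewriting a pattern database in place dirties every
# block it occupies. 4 KiB blocks are an assumption for illustration.
def incremental_size_bytes(changed_blocks: int, block_size: int = 4096) -> int:
    return changed_blocks * block_size

# Rewriting 1 GiB of pattern files in place touches ~262,144 4 KiB blocks,
# so the next incremental carries roughly 1 GiB of changed sectors – even
# though the amount of data stored is essentially unchanged.
gib = 1024 ** 3
blocks_touched = gib // 4096
```

That 1 GiB incremental is what then has to squeeze through your replication link, which is where limited internet bandwidth starts to hurt.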
It’s worth noting that programs such as disk defragmentation utilities can also cause large amounts of disk change in a short period. Such programs should only be run out of hours, and periodically, to minimise the change and therefore the backup sizes. There may well be other programs like this that I’ve not specifically called out – if you see large incremental backups, investigate to find the root cause.
So how do we get around this so that we can minimise our backup sizes and retain the ability to replicate our incrementals quickly? It’s actually quite simple: do NOT back up these sections of the system every 15 minutes. You can’t do that selectively within one volume, so what is really needed is a separate partition for utility programs like this, with those programs installed to it. You can back up the rest of your server every 15 minutes if that is what you want, but back this partition up just once a day. You will find that the REAL amount of data change from the start of the day to the end may only be a few hundred MB at most, which can easily be replicated outside business hours. Note that the inbuilt SBS backup can’t do this – only third-party programs such as ShadowProtect or Acronis can schedule multiple backup jobs.
Given you now have a utility partition, you might want to think about moving other such programs or databases to it – things that are not updated frequently, such as WSUS, which typically synchronises once a day and hands out patches during the day. In a disaster recovery scenario it typically won’t be an issue to restore the main server from, say, 4pm today and the utility partition from 10pm last night.
I need to highlight that in my testing the problem of large incrementals is not unique to ShadowProtect – running Trend WFBS on my server with the SBS backup at 30-minute intervals, I observed large incrementals as well; they are just hidden inside the backup itself, so it’s less obvious. The same happened when I ran a defragmentation of my disk drive with SBS backup in use. The moral is that it’s very easy to blame one product for another product’s “working by design”.
I hope this helps you understand the issue and ways to work around it.