I have two recurring career-related nightmares, one of which I’ve come to discover is part of the shared consciousness of many DBAs: catastrophic database failure. Not that we haven’t trained for efficient and timely recovery in preparation for such an event, but that our preparation did not account for Murphy, our nemesis, joining the party with a corrupt backup file. This nightmare recently became my reality when I received a call from a soon-to-be client: a business-critical database had failed, and their subsequent attempts to recover from backups had also failed.
So, how did I handle the situation? First, I reactivated my profile on Dice.com because, as Steve Larrison suggests, this may be the “only way to truly handle that situation.” Then, after a quick visit to the espresso machine, I called the client and got to work.
I wish I could say that through astounding technical prowess and my magical powers I was able to restore all of their data; alas, my story does not have a happy ending. After four and a half hours of effort, all I could tell the client was that the data in the backup file was inaccessible, and the best I could do for them was to recover portions of their data from the corrupted database file. To my amazement, and relief, the client was ecstatic. The end result was a new, happy client for whom we implemented a robust backup strategy that included redundant backup file storage and weekly testing.
The moral of this tale of woe is that a backup strategy is only as good as the last test of that strategy.