Old 09-28-2007, 10:52 AM
yenant
Registered User
 
Join Date: Sep 2007
Posts: 1
Avoiding Propagation of Disk Corrupted Files

Happy user of SuperDuper! here, but today I ran into a use case that has not been addressed.

When using SuperDuper! to pull a last-ditch backup off a dying drive, I came up with two feature enhancements that would make the task easier, one relatively straightforward, the other far more complex:
  1. Read-all-or-none
  2. Skip-and-log-unreadable-and-unchecked
Enabling a read-all-or-none checkbox would prevent overwriting an existing backup copy of a file until the original file's contents and attributes have been completely read into a temporary file. This would slow the backup down considerably, but it would ensure that no corrupted version of a file ever replaces the last known-good copy. The most obvious implementation would be to modify ditto and add a flag option for this, but unfortunately the Darwin sources do not include ditto. So one way to solve this is to front-end ditto with a wrapper that performs the read-all step, then hands ditto the buffered copy (not the original on the hard disk) of the directories and files.
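The wrapper idea can be sketched roughly as follows (a Python sketch of the concept only; the helper name and atomic-replace behavior are my own assumptions, and ditto itself is not involved here). The point is that the existing backup copy is touched only after the source has been read in full:

```python
import os
import shutil
import tempfile

def read_all_or_none_copy(src: str, dst: str) -> bool:
    """Copy src over dst only if src can be read in its entirety.

    The source is first streamed into a temporary file next to dst;
    the existing backup copy is replaced only after the full read
    succeeds, so a partially readable original never clobbers a
    known-good copy.
    """
    dst_dir = os.path.dirname(os.path.abspath(dst))
    fd, tmp_path = tempfile.mkstemp(dir=dst_dir)
    try:
        with os.fdopen(fd, "wb") as tmp, open(src, "rb") as f:
            shutil.copyfileobj(f, tmp)   # raises OSError on a bad read
            tmp.flush()
            os.fsync(tmp.fileno())
        shutil.copystat(src, tmp_path)   # carry attributes over as well
        os.replace(tmp_path, dst)        # atomic on the same volume
        return True
    except OSError:
        if os.path.exists(tmp_path):
            os.remove(tmp_path)          # discard the partial read
        return False
```

A real wrapper would of course also have to handle resource forks and Finder metadata, which is exactly why handing the buffered copy to ditto is attractive.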

As a complementary solution, a skip-and-log-unreadable-and-unchecked feature would be desirable, though it would be much harder to implement. The use case we are trying to address is cloning as much of a dying drive as we can, so we want to copy off as many known-good files as possible in as little time as possible. With this feature enabled, SuperDuper! would read the directory information and store it as a data tree, then back up one file at a time, marking each file's status in the tree (preferably including its read-all-or-none status) and noting which file is currently being worked on. If the entire session fails (SuperDuper! can hang when the hard disk interface starts going flaky, for example), then after a restart SuperDuper! can pick up where it left off by reading the data tree back in. The user can then choose to skip the last file a backup was attempted on, or resume the backup from that file.
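The resume logic above can be sketched with a simple persistent status map (everything here is assumption: the JSON sidecar log, the status names, and the `copy_one` callback are hypothetical, not anything SuperDuper! actually uses). A file is marked "in-progress" before the copy is attempted, so a crashed session leaves behind exactly one suspect file:

```python
import json
import os

STATUS_LOG = "backup_status.json"  # hypothetical sidecar file

def load_status(log_path=STATUS_LOG):
    """Read back the data tree from a previous (possibly crashed) session."""
    if os.path.exists(log_path):
        with open(log_path) as f:
            return json.load(f)
    return {}

def save_status(status, log_path=STATUS_LOG):
    tmp = log_path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(status, f)
    os.replace(tmp, log_path)  # never leave a half-written log behind

def resumable_backup(files, copy_one, log_path=STATUS_LOG,
                     skip_last_attempt=True):
    """Walk the file list, marking each file before and after the attempt.

    `copy_one(path)` returns True on success (e.g. a read-all-or-none
    copy).  A file left "in-progress" by a crashed session is the one
    that hung; `skip_last_attempt` decides whether to skip or retry it.
    """
    status = load_status(log_path)
    for path in files:
        state = status.get(path)
        if state == "ok":
            continue                      # copied in an earlier pass
        if state == "in-progress" and skip_last_attempt:
            status[path] = "skipped"      # likely the file that hung
            save_status(status, log_path)
            continue
        status[path] = "in-progress"      # mark before touching the disk
        save_status(status, log_path)
        status[path] = "ok" if copy_one(path) else "failed"
        save_status(status, log_path)
    return status
```

Saving the log before and after each file doubles the bookkeeping I/O, but on a flaky drive that cost is trivial next to rereading files that were already copied.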

Bonus points for maintaining a .dset that records all the files that were skipped or unreadable, so the user has the choice to try to recover just those files.

For now, when I identify a disk problem, my standard procedure is to run DiskWarrior to fix whatever it can, run Disk Utility's First Aid to fix whatever it can, then clone what I can to a fresh drive (this can require multiple iterations, each time adding a skipped file to the .dset and restarting the entire backup). If the drive fails before the clone has a chance to finish, I then manually merge what I have with the last known-good backup, which is tedious beyond belief, but it works.