#1
question about "bit rot"
I've been thinking about hard disks as long-term storage, and I've realized that the magnetic domains on a hard disk actually aren't permanent. The field strength decreases by more than 1% per year. So even on a hard disk that is never used, your data will gradually fade away. Errors will increase on time scales of years. This is well understood.
As far as I can tell, the only way to prevent this is to completely rewrite your hard disk data once in a while. Every year or two? I can find very little cogent info about mitigation strategies. That makes me wonder about SuperDuper smart updates, which write only the things that have changed. Things that haven't changed don't get rewritten, so after a while, much of your archive disk data can get stale. Is this a concern? As in, is it smart to do a full backup every once in a while instead of a smart update? Some expert advice would be handy.
#2
I take a major OS update as an opportunity to do a full backup, Dan. On top of that, I use a number of backup volumes, some of which are RAID and get scrubbed, to ensure that things are generally safe. Plus, I use online backup... your best approach is thorough, diverse coverage.
__________________
--Dave Nanian
#3
Are you aware of any Mac utilities that do disk refreshing? I think there used to be one, called DiskRefresher, but that's no longer available. There are PC utilities that do this. There was, many years ago, a post about doing disk refreshes on a Unix-based system ... https://larryjordan.com/articles/tec...-disk-storage/ but I frankly think those techniques don't really do what he says they do. It would be nice to see a well-thought-out strategy for long-term archiving on hard disks. I realize that's not necessarily the purpose of SuperDuper, but your insights would be welcome.
#4
Well, again, the scrubbing does refresh the data (as, obviously, does the yearly rewriting). So, if you consider your NAS-based backups "archival" and have them scrubbed, they seem reasonably safe to me.
As far as there being a good chance that after a decade none will work - I've been surprised that most drives *do* work just fine after a decade, retaining all their data. Which, of course, is no guarantee.
__________________
--Dave Nanian
#5
And yes, while the MTBF of hard disks is supposed to be about five years, they've always lasted much longer than that for me, at least in terms of mechanical performance.
#6
No, that's not what scrubbing does: your NAS will usually offer data integrity scrubbing.
And I'm talking about more than mechanical performance. There's a lot of "fuzziness" in magnetic drop-off, hence the ability to recover data from master tapes that are 50 years old.
__________________
--Dave Nanian
#7
Hmmm. So I guess that means that, without plunking down money for a full-up NAS system, the only option for long term HD archiving is regular (though perhaps infrequent) wholesale rewrites of the data. Seems a little odd that there isn't some refresh app for Mac that will do refreshing more conveniently, and that ideally could even be scheduled.
#8
Doesn't seem that odd to me... this type of failure just isn't common (and who's to say the source isn't failing - what's the reference?)...
__________________
--Dave Nanian
#9
I suspect it isn't common at least partly because continuously powered-up drives suffer mechanical failures long before bit rot can even happen. But for disks that are on-the-shelf, unpowered archives, bit rot is likely to be more of an issue. A huge amount of effort has gone into strategies for long-term archiving of petabytes of data, at least by heavy producers of scientific data. I understand it ain't cheap. It would be interesting to see some strategic thinking that would apply to a home user. Again, it may just be a matter of archiving to multiple drives and then, after checksum verification on one, doing a wholesale rewrite to the others every few years. If so, a tool to do that would be handy.
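To illustrate what I mean by a "refresh": something like this little Python sketch (my own illustration, not a real tool, and the function name is made up). It reads each file, takes a checksum, writes the same bytes to fresh sectors, verifies them, and swaps the copy into place so the drive has re-recorded the magnetic domains:

```python
import hashlib
import os

def refresh_file(path):
    """Reread a file and rewrite the same bytes, so the drive
    re-records the magnetic domains. Returns the SHA-256 digest.
    (Reads the whole file into memory -- fine for a sketch.)"""
    with open(path, "rb") as f:
        data = f.read()
    digest = hashlib.sha256(data).hexdigest()
    tmp = path + ".refresh"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # force the bytes onto the platter
    # Verify the new copy before replacing the original.
    with open(tmp, "rb") as f:
        if hashlib.sha256(f.read()).hexdigest() != digest:
            os.remove(tmp)
            raise IOError("verification failed: " + path)
    os.replace(tmp, path)      # atomic swap on the same volume
    return digest
```

A real whole-disk tool would presumably work at the block level instead, but the idea is the same: read, verify, rewrite.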
#10
Perhaps a NAS is the way to go for you, Dan? They aren't that expensive, are easy to manage, and have other means of preserving and reconstructing data as well...
__________________
--Dave Nanian
#11
Thanks, Dave. I'll think about it. But it sure would be nice to see some technical report or something on options for home users to best preserve digital data. Of course, it's been noted that the lifetime of digital formats isn't that long, so if you write to an archive with a hundred year lifetime, in fifty years no one will be able to read it. I guess the default plan has to be just to make a lot of copies onto a lot of archives.
Is there some way to check a disk for errors before copying? Is that what "First Aid" does? Does SuperDuper do this? Might be smart to make sure there are no rotten bits before one starts to copy. I guess this would be a matter of scanning the whole disk for checksum errors.
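For what it's worth, here's roughly what I have in mind, as a Python sketch (my own illustration, nothing to do with SuperDuper or First Aid): record a SHA-256 checksum for every file, then rescan later and list anything that no longer matches:

```python
import hashlib
import os

def build_manifest(root):
    """Walk `root` and record a SHA-256 digest for every file."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                # Hash in 1 MB chunks so huge files don't fill memory.
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            manifest[os.path.relpath(path, root)] = h.hexdigest()
    return manifest

def verify(root, manifest):
    """Rescan and return the files whose digest no longer matches
    (including files that have disappeared since the manifest)."""
    current = build_manifest(root)
    return sorted(p for p, d in manifest.items()
                  if current.get(p) != d)
```

Save the manifest somewhere (JSON, say), run `verify` before each backup, and only copy once it comes back empty.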
#12
Disk Utility checks the directory for errors. It doesn't check the "disk" as such... this is generally the area where ZFS does well and is self repairing... but, ZFS isn't terribly easy to use on OS X/macOS.
__________________
--Dave Nanian
#13
Well, FWIW, I did find this interesting essay ...
https://www.osomac.com/2014/01/13/bit-rot/ which considers what a Mac user can do about bit-rot. The answer? Not a helluva lot, it seems. Now, copying to several independent archive disks doesn't really help much, because if you've got a rotten bit on your home machine I assume you're just going to faithfully copy that rotten bit to your archives.
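The only mitigation I can think of for that is keeping the copies independent and comparing them: with three or more copies, a byte-wise majority vote can out-vote a single rotten bit. A toy Python sketch of the idea (again, just my own illustration):

```python
from collections import Counter

def majority_bytes(copies):
    """Byte-wise majority vote over the same file read from several
    independent archives (at least three, all the same length).
    A bit flipped on one copy is out-voted by the other two."""
    assert len(copies) >= 3
    assert len(set(len(c) for c in copies)) == 1
    return bytes(Counter(col).most_common(1)[0][0]
                 for col in zip(*copies))
```

Of course, the copies have to be made before the rot happens, and compared before any "refresh" faithfully rewrites the bad copy.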
#14
Again, I hate to be a broken record here, but using a NAS (Synology units, for example, use BTRFS) to archive, including hardware and "bit rot" redundancy, seems like the way to go here, not a tool to run checksums on your drive.
__________________
--Dave Nanian
#15
For a few hundred bucks, I could just burn half a terabyte to M-Discs.