A recent post on Jonathan “Wolf” Rentzsch’s Tales from the Red Shed reminded me to give a bit more of the philosophy behind what we’re doing in SuperDuper!

Wolf points out that we don’t do “temporal versioning”—i.e. traditional “incremental backups”—and he’s right.

The Technical Problem
Doing versioning “right” requires both a non-native file format and a database of what’s been going on over time. (I don’t consider the technique used by some programs—stuffing old versions in special folders on the backup media—a reasonable solution, since it pollutes the original and complicates restore. And, yes, you could store in a parallel location with dated folders… but read on.) You need to be able to reconstruct this database from the backup media. And you need rather extensive UI to manage this stuff. (I could keep going, bringing up other issues like the patent problem, but those are the big issues involved.)

This, by definition, significantly increases the complexity of the program’s back and front ends—which makes the program much harder to QA properly. As Wolf says, it’s incredibly important this stuff works. Of course, that’s our problem—it’s our job to ensure that the features we implement are well tested and work.

The User’s Problem
That added complexity has another major problem: it can alienate and confuse users, and a proprietary, single-vendor format leaves them without an alternative should a problem arise. So, it’s important that any solution be easy to understand, usable, and not have any “lock in”.

Staying Balanced
So, to determine whether that complexity is worth adding, it’s important to ask—when do most people need to restore? In general, we’ve found that “regular users” (and by that, I mean real “end users”) need to use their backups when:

  • They’ve made a “bad mistake”, like accidentally deleting an important file, or overwriting one (this kind of mistake is almost always recognized immediately)
  • Their drive (or computer) fails catastrophically, requiring a full restore
  • They sent their computer in for service, and it came back wiped clean
  • An application they installed, or a system update, caused their system to become unusable/unstable
None of these situations require much other than a high-quality, up-to-date, full copy backup. (The last has a better solution than a backup—a “Sandbox”—which we offer in SuperDuper! as well.)

Covering the 99% Case
Given that, it’s pretty easy to see that most end users don’t need to retrieve a two-year-old (or even six-month-old) version of a file from a backup. (An archive is a different thing: I’m talking about backups.) It’s just not that common a case. Developers, on the other hand, do need older versions of files, but they should be using a version control system: something a backup should absolutely not be.

But, it is possible that a user won’t notice a problem in a “bad file” until they’ve already overwritten their backup, thus losing any chance of recovery with a “full copy”. I suggest that while this is a problem for some, we have a good solution: rotate more than one full backup.

Any need for this kind of “temporal rollback” can be significantly reduced with a single rotation—say, on a weekly basis—and nearly eliminated with two—a weekly and a monthly. It’s incredibly rare that, on a non-archival basis, you’d need to go back more than four weeks. (It’s similarly likely that a daily incremental would become difficult to manage, and thus “recycled”, in this kind of timespan.)
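To make the rotation idea concrete, here’s a minimal sketch, in Python, of how a daily/weekly/monthly rotation might pick which full copy to overwrite on a given day. The volume names and the calendar rule are assumptions for illustration only, not anything built into SuperDuper!:

    from datetime import date

    # Hypothetical volume names -- substitute whatever your own backup drives are called.
    DAILY_VOLUME = "Backup-Daily"      # overwritten by every ordinary run
    WEEKLY_VOLUME = "Backup-Weekly"    # refreshed once a week
    MONTHLY_VOLUME = "Backup-Monthly"  # refreshed once a month

    def rotation_target(today: date) -> str:
        """Pick which full-copy backup volume to overwrite today.

        Assumed schedule: the first Monday of each month refreshes the monthly
        copy, other Mondays refresh the weekly copy, and every other day
        refreshes the daily copy.
        """
        if today.weekday() == 0 and today.day <= 7:
            # First Monday of the month: the oldest copy you keep is ~4 weeks old.
            return MONTHLY_VOLUME
        if today.weekday() == 0:
            return WEEKLY_VOLUME
        return DAILY_VOLUME

    if __name__ == "__main__":
        print(f"Today's full copy goes to: {rotation_target(date.today())}")

Even with a simple rule like that, the oldest surviving copy is roughly a month old, so a “bad file” would have to go unnoticed for about four weeks before every remaining copy contained the damaged version.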

Storage Space is Cheap and Plentiful
The only real disadvantage? It takes disk space, something that was incredibly expensive and limited when these other schemes were originally invented (floppies, anyone?). But, these days, disk space is cheaper than cheap, with the “sweet spot”, Mac-boot-compatible 200-250GB FireWire drives going for $150-$200. And most “normal” users can store a lot of backups on a 250GB drive or two.

Simple to Understand
The advantages to this kind of approach are many, not the least of which is that a non-technical user can easily understand what’s going on. It’s incredible how many people are confused by conventional backup terminology—“incremental”, “differential”, backup “sets” and the like. And complicated storage mechanisms require a significant amount of expertise to perform a full recovery in the event of that all-too-common disaster: the total drive failure. (Look, for example, at what you have to do with Retrospect or Backup 3 should you lose your boot drive (very common), which is also where the vast majority of people store their “Backup Catalog”. Yes, it can be done. But even if the program works properly, it can take days to recover.)

Simple to Restore
With SuperDuper!, recovery in that situation is literally a matter of booting from your most recent backup. And restoration—which, should you be on deadline, you need not do immediately—is just a matter of replacing the drive and copying back.

Individual files are also easy to restore: just drag and drop from the backup. (Yes, applications without drag-and-drop install, or system-level files, are harder, but can typically be reinstalled/archive-and-installed should that be necessary… or, see the Safety Clone/Sandbox for another rather unique idea…)

The Other 1%
I know this all sounds terribly simplistic to those who run data centers, or large corporate networks, and for that kind of user, it is. And, I have no doubt that some users have need of more complex systems, with the ability to roll back to any given day during a six-month period—or whatever timeframe they choose to work within.

Use It or Lose It
SuperDuper!’s approach is the kind of thing that regular end users can do, and feel confident about. And, with that confidence—and with the ease of use and understanding we provide—they’ll actually back up!

Even the most perfect program can’t work unless that happens—so, in some ways, it’s the most critical thing of all.