PDA

View Full Version : Burn to DVD?


wsphish420
05-15-2004, 01:51 AM
I am looking at getting some new backup software, and I am curious whether you can back up to DVD with this program. The programs I have tried won't let you copy a large volume (my laptop) to a DVD because it is too large. I am looking for a program that will back up a large volume to DVD, automatically breaking it up across multiple DVDs so all you have to do is keep feeding discs to the computer. If anyone can tell me whether this program can do that, that would be great!

Thanks,

Nick

dnanian
05-15-2004, 09:59 AM
Unfortunately, SuperDuper! is designed to make backups to things like hard disks and images stored elsewhere, not to DVDs, so we're not the solution for you.

However, Retrospect -- while more complex -- will certainly meet your needs.

A simpler solution would be Apple's own Backup program, which comes with the .Mac service.

Hope that helps!

sjk
06-18-2004, 10:54 PM
However, Retrospect -- while more complex -- will certainly meet your needs.

Impression (http://babelcompany.com/impression/) has the ability to create multi-disk DVD backups, with enough scratch space (see developer comment (http://www.versiontracker.com/php/feedback/article.php?story=20040518151955490#comments)).

The rest of this might be better as a separate post, but since I've already started composing it (and am prepending this comment now) I'll leave it here, as this isn't a particularly busy forum.

I'm in the process of designing a strategy for regular backups of my eMac and iBook using a combination of FireWire and CD/DVD media storage. I'd like to do monthly (or maybe bi-monthly) clone backups to FireWire, with some type of "incremental" backups in between. Certain directory hierarchies would be backed up to CD/DVD at different intervals, some for permanent archival.

I'm mostly familiar with traditional UNIX dump/restore utilities, which use different levels (0-9) to control what's saved relative to a previous backup level, with level 0 being a complete backup. An advantage of that approach is that full backups can be saved to one media destination and incrementals to others. In my case, full (clone?) eMac/iBook backups could each exist in separate FireWire volumes, and "incrementals" for both could be written as file archives to another volume on the same drive. Fully automating this would be ideal.
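For what it's worth, the level idea can be roughly approximated on OS X with ordinary tools -- a timestamp file standing in for dump's /etc/dumpdates, and tar instead of a real filesystem dump. Everything below runs in a scratch directory; the paths and data are made up purely for illustration:

```shell
#!/bin/sh
# Sketch: dump-style backup levels approximated with tar plus a timestamp
# file (standing in for dump's /etc/dumpdates). All paths are scratch paths.
WORK=$(mktemp -d)
SRC="$WORK/src"; DEST="$WORK/backups"; STATE="$WORK/state"
mkdir -p "$SRC" "$DEST" "$STATE"
echo "old data" > "$SRC/a.txt"

# Level 0: full archive, then record when it ran.
(cd "$SRC" && tar -cf "$DEST/level0.tar" .)
touch "$STATE/level0.stamp"

sleep 1
echo "new data" > "$SRC/b.txt"

# Level 1: archive only files modified since the level-0 stamp.
(cd "$SRC" && find . -type f -newer "$STATE/level0.stamp" \
    -exec tar -cf "$DEST/level1.tar" {} +)

tar -tf "$DEST/level1.tar"   # lists only ./b.txt
```

This is nothing like a real dump (no raw filesystem access, no restore bookkeeping), but it shows how a timestamp per level gives you "changed since the last lower-level backup" for free.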

My backups to CD/DVD can be distinct from fulls/incrementals, with their own schedule. The second volume of my eMac and/or one on the FireWire drive can be temporarily used for image creation. For example, my local mailstore fits on a single CD and it's trivial to generate a mountable disk image of that using a command like "hdiutil create -srcfolder Mail /Volumes/Space/Mail-20040618.dmg", then burning it at my convenience. Partly automating this would be ideal.
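A tiny wrapper around that hdiutil command can date-stamp each image automatically. The source and destination paths below are examples only (mirroring the command above), and the hdiutil call is guarded so the script merely reports its plan on systems without it:

```shell
#!/bin/sh
# Date-stamped disk image of a mail folder. SRC and the /Volumes/Space
# destination are examples only, echoing the command in the post above.
SRC="$HOME/Library/Mail"
IMG="/Volumes/Space/Mail-$(date +%Y%m%d).dmg"
echo "Would create: $IMG"

# hdiutil exists only on OS X; guard so the sketch dry-runs elsewhere.
if command -v hdiutil >/dev/null 2>&1; then
    hdiutil create -srcfolder "$SRC" "$IMG"
else
    echo "hdiutil not found; skipping image creation"
fi
```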

Lastly, there's miscellaneous multimedia data currently on the second volume of my eMac that I want backed up at irregular intervals depending on how it changes. That's the most uncertain part of all this because of the large data sizes involved. Copying some to the FireWire drive may work, while some might best be written to multiple DVDs. Some of this might be automated, some not.

It's still unclear which Apple HFS+ backup products can offer that functionality and I'm open to using a combination of them, within budget. For various reasons Retrospect is not an option. ;)

So, can SuperDuper! be folded into that proposed strategy? I'm also trying to wrap my mind around other ways to achieve a comfortable combination of disaster recovery, regular backups, and archival backups. During about ten years of ufsdump/ufsrestore (comparable to dump/restore on OS X for UFS filesystems) usage on Sun Solaris systems at home (before migrating to OS X) I never had any irrecoverable files, except for a few unimportant ones after a major disaster recovery. That level of data integrity seems elusive with OS X and HFS+ volumes. Actually, ditto (which Carbon Copy Cloner is a front-end for) has proven itself the most reliable utility I've used so far, but now I'm exploring further to support the strategy I just explained.

Sometime later I may be interested in synchronization between the eMac and iBook. For that I'm curious about ChronoSync (http://www.econtechnologies.com/site/Pages/ChronoSync/chrono_overview.html). It's nearly as highly rated on VersionTracker (http://www.versiontracker.com/dyn/moreinfo/macosx/13652) as SuperDuper! (http://www.versiontracker.com/dyn/moreinfo/macosx/22126) (exclamation point) and seems reasonably priced for its functionality. As a backup utility (not synchronization) it doesn't seem to support multi-disk CD/DVD capability, but that may be irrelevant.

Enough, whew. That was sure more than I intended to write when I started. :)

dnanian
06-19-2004, 12:47 AM
I'm not even sure where to start here, I have to say!

One of the problems with 'clone'-type backup utilities -- of which SuperDuper! is one -- is that it becomes awkward to develop a backup strategy that allows full rollback with incremental update storage. In general, doing that kind of thing requires a backup catalog and a non-simple-filesystem storage mechanism, and we've been trying to avoid that.

Yet, in my quest to figure out how to do this simply, I did stumble on some discussion (in the mount docs) of union mounts. It seems that a union mount of an image over another image might allow clone backups to be done while actually generating a storable delta in a separate image. I haven't done a full-fledged investigation into this, but it was an intriguing idea. You might want to check it out.

SuperDuper! can certainly make and update images, and you can front-end this stuff with various hdiutil functions to mount, create, or whatever, but without doing this kind of trick you won't have incremental rollback.

Of course, you could have a number of sparse images stored on an external or network drive, named things like "monday", "tuesday", etc, and Smart Update them; you could roll back as many days as you have storage for.
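The weekday-named rotation could be driven by something as simple as deriving the image name from the current date. The /Volumes/Backups path and the .sparseimage files themselves are hypothetical; this just computes today's target:

```shell
#!/bin/sh
# Derive a weekday-named Smart Update target, per the rotation idea above.
# The /Volumes/Backups path and .sparseimage files are hypothetical.
DAY=$(LC_ALL=C date +%A | tr '[:upper:]' '[:lower:]')   # e.g. "monday"
IMAGE="/Volumes/Backups/$DAY.sparseimage"
echo "Today's Smart Update target: $IMAGE"
```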

Another option, if you're thinking dump: rsyncx...

Anyway, just throwing some disorganized, rambling, I'm-on-a-slow-GPRS-connection-and-can't-research-much ideas out there.

sjk
06-20-2004, 03:30 AM
I'm not even sure where to start here, I have to say!

Somewhere, anywhere, nowhere ...? Thanks for the ultra-quick response, which I've read you have a reputation for. :)

One of the problems with 'clone'-type backup utilities -- of which SuperDuper! is one -- is that it becomes awkward to develop a backup strategy that allows full rollback with incremental update storage. In general, doing that kind of thing requires a backup catalog and a non-simple-filesystem storage mechanism, and we've been trying to avoid that.

Understood.

Hope you can clarify a few details with this simple procedure:

1) Use "Backup - all files" script to create a bootable clone of the system volume to a backup volume.

* Since it's a bootable clone it must do root authentication but there's no mention of that in the manual.
* What's the advantage of using SuperDuper for this vs. the Restore capability of Disk Copy (on 10.3)?
* Are any cache files removed, similar to Carbon Copy Cloner?
* Are Finder comment fields preserved?

2) Use "Smart Update" option later to refresh copy of the system volume on a backup volume.

* I presume that's similar to using psync with Carbon Copy Cloner (which I've never done; I'm a bit suspicious of its integrity "under duress")?
* Can any combination of directory hierarchies be candidates for Smart Update?

And all backups are started manually; no automated scheduling (yet)?

Yet, in my quest to figure out how to do this simply, I did stumble on some discussion (in the mount docs) of union mounts. It seems that a union mount of an image over another image might allow clone backups to be done while actually generating a storable delta in a separate image. I haven't done a full-fledged investigation into this, but it was an intriguing idea. You might want to check it out.

I'd noticed support for union mounts in the man pages but hadn't considered using them in this context -- cool idea. I played with union mounts a bit, overlaying local filesystems over NFS-mounted /usr/local hierarchies on pre-Solaris versions of SunOS, so I'm familiar with the concept. I'd be interested in what you discover and I might do a bit of tinkering, too. I've been trying to get more familiar with creating disk images, ensuring that owners, groups, permissions, etc. are accurately preserved.

SuperDuper! can certainly make and update images, and you can front-end this stuff with various hdiutil functions to mount, create, or whatever, but without doing this kind of trick you won't have incremental rollback.

Yep.

Seems that incremental (and differential) backups on OS X are intended more for heavy-duty (and pricier) utilities like Retrospect and BRU.

Of course, you could have a number of sparse images stored on an external or network drive, named things like "monday", "tuesday", etc, and Smart Update them; you could roll back as many days as you have storage for.

Another option, if you're thinking dump: rsyncx...

I don't see the correlation. Normally when using dump for backups the destination would be a single archive file whereas an rsync(x) destination would be a directory hierarchy. A dump|restore pipeline to another filesystem would be more like rsync(x), and cloning.

Anyway, just throwing some disorganized, rambling, I'm-on-a-slow-GPRS-connection-and-can't-research-much ideas out there.

I'm impressed. :)

Thanks again for the feedback and ideas.

dnanian
06-20-2004, 11:44 AM
Somewhere, anywhere, nowhere ...? Thanks for the ultra-quick response, which I've read you have a reputation for. :)
Hard to keep up my end when you reply at 2:45am my time! :D

Hope you can clarify a few details with this simple procedure:

1) Use "Backup - all files" script to create a bootable clone of the system volume to a backup volume.

* Since it's a bootable clone it must do root authentication but there's no mention of that in the manual.

Yes, in the current version it will prompt for authentication when you select "Start copying".

* What's the advantage of using SuperDuper for this vs. the Restore capability of Disk Copy (on 10.3)?
Selectivity, scripts, support, UI, and other features like Smart Update, Copy Different, Copy Newer, etc.

* Are any cache files removed, similar to Carbon Copy Cloner?
You can check out the scripts to see exactly what we do. The cache files aren't removed; they're simply not copied, as specified in the script. We don't copy things that Apple specifically states shouldn't be copied. (Obviously, it's a bit silly to copy swap files.)

* Are Finder comment fields preserved?
They should be: we clone all Finder attributes and HFS+ metadata.

2) Use "Smart Update" option later to refresh copy of the system volume on a backup volume.

* I presume that's similar to using psync with Carbon Copy Cloner (which I've never done; I'm a bit suspicious of its integrity "under duress")?

Yes, it's similar, though significantly faster. I use it all the time, and have never had any kind of problem -- it's quite well tested. No doubt by consciously trying to trick it you could, but in normal (or even abnormal) operation it should be fine.

* Can any combination of directory hierarchies be candidates for Smart Update?
Yes. I've changed a full Jaguar into a Panther with Smart Update, for example. Note, however, that we don't do an erase pass before the copy pass. This means there are cases where renaming an extremely large directory may end up overflowing the disk, because the combined size of the two directories is larger than the drive. Again, rare... and the speed was worth it. We've only had one report of this in the field.

And all backups are started manually; no automated scheduling (yet)?
Correct. Yet.

I'd noticed support for union mounts in the man pages but hadn't considered using them in this context -- cool idea. I played with union mounts a bit to overlay local filesystems over a NFS-mounted /usr/local hierarchies on pre-Solaris versions of SunOS so I'm familiar with the concept. I'd be interested in what you discover and I might do a bit of tinkering, too. I've been trying to get more familiar with creating disk images, ensuring that owners, groups, permissions, etc. are accurately preserved.
I've got to find the time for exploring, but I thought it was an intriguing concept, too.

Seems that incremental (and differential) backups on OS X are intended more for heavy-duty (and pricier) utilities like Retrospect and BRU.
I think so, yes. But there may be others -- I honestly haven't done a survey of the various solutions. There are quite a few.

I don't see the correlation. Normally when using dump for backups the destination would be a single archive file whereas an rsync(x) destination would be a directory hierarchy. A dump|restore pipeline to another filesystem would be more like rsync(x), and cloning.
I thought I read somewhere that rsync would also output differential information that you could use. Yes, it's not dump (obviously it doesn't capture filesystem structures), but you might be able to cobble together a solution with it and some baling wire and string! ;)
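For what it's worth, the differential idea can be sketched without rsync at all (rsync's --compare-dest option does roughly this natively): collect into a delta directory only the files that differ from a baseline. A toy, self-contained version in plain sh, using scratch data:

```shell
#!/bin/sh
# Toy version of the "differential output" idea: collect into DELTA only
# the files that differ from a baseline copy. (rsync's --compare-dest
# option does roughly this natively; everything here is scratch data.)
WORK=$(mktemp -d)
BASE="$WORK/baseline"; LIVE="$WORK/live"; DELTA="$WORK/delta"
mkdir -p "$BASE" "$LIVE" "$DELTA"
echo "unchanged" > "$BASE/keep.txt"; echo "unchanged" > "$LIVE/keep.txt"
echo "v1"        > "$BASE/note.txt"; echo "v2"        > "$LIVE/note.txt"

(cd "$LIVE" && find . -type f | while read -r f; do
    # Copy into the delta only when the baseline lacks the file or it differs.
    cmp -s "$f" "$BASE/$f" || { mkdir -p "$DELTA/$(dirname "$f")"; cp "$f" "$DELTA/$f"; }
done)

ls "$DELTA"   # only note.txt, the changed file
```

The delta directory then plays the role of an "incremental" that can be archived or burned separately from the full copy.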

Thanks again for the feedback and ideas.
You're welcome. Thanks for your questions and interest.

sjk
06-21-2004, 01:34 AM
Keeping this short. You certainly covered everything to my satisfaction... thanks!
You can check out the scripts to see exactly what we do. The cache files aren't removed; they're simply not copied, as specified in the script. We don't copy things that Apple specifically states shouldn't be copied. (Obviously, it's a bit silly to copy swap files.)

Excellent. I'd like to minimize figuring out those details and avoid unpleasant surprises, but I still want to understand what's happening. Having the scripts as a starting point should work well.
I thought I read somewhere that rsync would also output differential information that you could use. Yes, it's not dump (obviously it doesn't capture filesystem structures), but you might be able to cobble together a solution with it and some baling wire and string! ;)

Didn't see any mention of it in the man page. I'll probably use rsync(x) for keeping my iBook updated with some eMac changes (e.g. /usr/local) but be more conservative with backups.

Off to do some SD testing now...

dnanian
06-21-2004, 11:37 AM
There's no hidden "no copying" anywhere in SuperDuper! except, of course, that we don't copy sockets. We try to be transparent to those who need transparency, and easy for those who need easy. There are many 'building block' scripts that you'll find in the default set, and they should be named in a way that explains what they do.

Good luck with the testing; please let me know if you have any additional questions.

sjk
06-21-2004, 09:15 PM
Did a full volume-to-volume clone backup and noticed one minor discrepancy between the source and destination:

Two directories and one file under my home directory owned by me (created/modified last month) were owned by root on the destination (clone) volume.

No time to do a thorough check for other things but I just wanted to report that one now.

So much for the original thread topic but it seems the poster has left the building anyway. :)

dnanian
06-21-2004, 09:41 PM
You know, we've seen this happen before, and I think you'll be quite surprised if you do the following:

- On the original drive, open the Terminal and change to the parent of the directories (and/or file) that you noticed a discrepancy with

- First, do an "ls -l". You should see that they're owned by you, with your current group status.

- Now, authenticate with sudo -s. Once authenticated, do an ls -l. What's the ownership now?

Needless to say, SuperDuper! runs authenticated... and, when we're authenticated, we get the owner/group the OS gives us... which seems to track the effective UID in some situations. It's weird and kinda subtle, and took us an age to at least figure out what was going on...

sjk
06-21-2004, 10:41 PM
First, do an "ls -l". You should see that they're owned by you, with your current group status.
Non-auth:

% ls -dl DiskWarrior DiskWarrior/2004-05-17 DiskWarrior/2004-05-17/Macintosh\ HD\ Report.pdf
drwxr-xr-x 3 me unknown 102 17 May 19:51 DiskWarrior
drwxr-xr-x 3 me unknown 102 17 May 19:52 DiskWarrior/2004-05-17
-rw-r--r-- 1 me unknown 61636 17 May 19:52 DiskWarrior/2004-05-17/Macintosh HD Report.pdf

Now, authenticate with sudo -s. Once authenticated, do an ls -l. What's the ownership now?
Auth:

% sudo ls -dl DiskWarrior DiskWarrior/2004-05-17 DiskWarrior/2004-05-17/Macintosh\ HD\ Report.pdf
drwxr-xr-x 3 root unknown 102 17 May 19:51 DiskWarrior
drwxr-xr-x 3 root unknown 102 17 May 19:52 DiskWarrior/2004-05-17
-rw-r--r-- 1 root unknown 61636 17 May 19:52 DiskWarrior/2004-05-17/Macintosh HD Report.pdf

Yikes, that's whacky!
Needless to say, SuperDuper! runs authenticated... and, when we're authenticated, we get the owner/group the OS gives us... which seems to track the effective UID in some situations. It's weird and kinda subtle, and took us an age to at least figure out what was going on...

That's definitely a rational explanation of what's happening -- thanks! I vaguely remember noticing that in another context; now I won't forget it.

A couple more things:

Any possibility of adding an option for preserving file access times, or would that make SD significantly slower? Not that they're as accurate on OS X as on traditional Unix systems, but I still find use for that file information.

[OS X != Unix, OS X != Unix, ... :)]

Can you briefly describe the logic Smart Update uses, and whether there's any way it might accidentally (or intentionally ;)) be "tricked" into overlooking files? I can't test it w/o registering, tho' with your smart, superb support so far I'm about *this* close to paying even if I don't use the program. :)

dnanian
06-21-2004, 11:00 PM
Told you you'd be surprised about the ownership thing. If you chown those files to you:staff (or whatever), it'll stick from that point forward. I think this is due to some weirdness with OS X supporting both file systems that respect ownership and file systems with ownership 'overlaid' on them for compatibility.

Basically, if it's trying to maintain compatibility, the ownership of the files on the ownership-ignored volumes tracks your own ownership. BUT, it seems that if you copy those files locally, they have some sort of wacky track-uid-and-group value in there, and they do unexpected things on a volume that respects permissions. Weird stuff.

We looked at preserving file access times but decided against it: I think we had problems actually getting the value preserved, but truthfully I don't exactly remember. But I'll add the request to the list and we'll take another pass at it in the future. (I believe it can't be done: when we tried, it just updated the access time...)

Smart Update is exactly like "Copy Different" with an added "erase" pass. I can't really think of any accidental "tricking" that might happen, unless you modified something, ended up with exactly the same number of bytes, modified the times and metadata so that it would look the same from that perspective, and then did a SU. In that case, it might not copy the file, since it doesn't look "different", and we don't use a file CRC to be extra careful. (Frankly, it really isn't necessary when you're doing a single system-to-backup update; it'd just take an enormous amount of time and basically make Smart Update pointless.)

Anyway, once the copying has been completed, we erase things that are on the backup but are no longer on the source. (This is a bit of a simplification -- we don't copy everything and then erase, it happens directory by directory, mostly, not drive-wide -- if we were making an entirely separate pass, we would have done erase-first anyway.)
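As a rough mental model only (the real thing compares metadata rather than file contents, and works directory by directory), the two phases described above look something like this, in scratch directories:

```shell
#!/bin/sh
# Toy model of the two Smart Update phases described above: copy what
# differs, then erase what the source no longer has. (SuperDuper! itself
# compares metadata, not byte contents; all paths here are scratch.)
WORK=$(mktemp -d)
SRC="$WORK/src"; DST="$WORK/dst"
mkdir -p "$SRC" "$DST"
echo "same"    > "$SRC/a"; echo "same" > "$DST/a"  # identical: left alone
echo "changed" > "$SRC/b"; echo "old"  > "$DST/b"  # differs: overwritten
echo "stale"   > "$DST/c"                          # gone from source: erased

# Phase 1: copy anything missing or different.
(cd "$SRC" && find . -type f | while read -r f; do
    cmp -s "$f" "$DST/$f" || cp "$f" "$DST/$f"
done)

# Phase 2: erase whatever the source no longer has.
(cd "$DST" && find . -type f | while read -r f; do
    [ -e "$SRC/$f" ] || rm "$f"
done)
```

The copy-before-erase ordering is also what makes the rename-a-huge-directory overflow case possible: both copies briefly coexist on the destination.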

This means that there's one potential error case: if the union of the files being copied in a given directory exceeds the capacity of the drive (assuming that all the files are different), we fail because we erase after we copy. That also means that if you rename a large directory, it's possible that we'll copy the new one before removing the old, causing a disk space failure.

In neither case does the failure result in the loss of any data, nor does it fail silently.

Hope that answers your questions! Glad you're happy with the support: it's part of what you're paying for when you -- hopefully -- pay!

sjk
06-23-2004, 12:26 AM
Told you you'd be surprised about the ownership thing.

Yep. Fortunately a more benign surprise than discovering how deleting symbolic links with Finder can sometimes delete the target(!)

For "fun" I deleted the ~/Library/Application Support/SuperDuper!/Copy Scripts/Standard Scripts symlink (which worked okay), tried Undo, and Finder griped "The operation cannot be completed because you do not have sufficient privileges for some of the items." Whoops.

Reminds me of the potentially devastating side effects of omitting the "-h" option on the Unix ch{own,grp,mod} commands (OS X versions are susceptible) when symlinks are involved -- which an impressive number of root-enabled Unix sysadmins don't realize as they're using those commands (often recursively). My favorite traditional example:

% ls -l /etc/passwd foo
-rw-r--r-- 1 root wheel 1374 8 Dec 2003 /etc/passwd
lrwxr-xr-x 1 me me 11 22 Jun 16:07 foo -> /etc/passwd

% sudo chown me foo
Password:

% ls -l /etc/passwd foo
-rw-r--r-- 1 me wheel 1374 8 Dec 2003 /etc/passwd
lrwxr-xr-x 1 me me 11 22 Jun 16:07 foo -> /etc/passwd

Now that I've let that cat out of the bag, back to the topic at hand...
We looked at preserving file access times but decided against it: I think we had problems actually getting the value preserved, but truthfully I don't exactly remember. But I'll add the request to the list and we'll take another pass at it in the future. (I believe it can't be done: when we tried, it just updated the access time...)

No worries.

Smart Update is exactly like "Copy Different" with an added "erase" pass. I can't really think of any accidental "tricking" that might happen, unless you modified something, ended up with exactly the same number of bytes, modified the times and metadata so that it would look the same from that perspective, and then did a SU. In that case, it might not copy the file, since it doesn't look "different", and we don't use a file CRC to be extra careful. (Frankly, it really isn't necessary when you're doing a single system-to-backup update; it'd just take an enormous amount of time and basically make Smart Update pointless.)

Is it correct that "Copy newer" will skip files whose created/modified times are older than when they were actually added to the filesystem? Downloads are a good example of that kind of file, so I'm careful to know how "newer" is being interpreted. But that's not relevant with SU, if I understand things correctly.

This means that there's one potential error case: if the union of the files being copied in a given directory exceeds the capacity of the drive (assuming that all the files are different), we fail because we erase after we copy. That also means that if you rename a large directory, it's possible that we'll copy the new one before removing the old, causing a disk space failure.

Got it. Quite unlikely I'll encounter that with SU on the system volume.
Hope that answers your questions! Glad you're happy with the support: it's part of what you're paying for when you -- hopefully -- pay!

Registered this morning. Smart Update was too much temptation to hold off any longer.

Issue with exclude:

The main "Backup - all files" script excludes var/db/BootCache.playlist and var/db/volinfo.database, but those files exist on the destination volume. Not sure if the original backup or the SU copied 'em since I only noticed after the latter.

Thanks for responding to my VersionTracker feedback. Hope I didn't sound like I was giving misinformation about the way script editing worked.

About the capacity check... after posting I thought of mentioning that a simulation mode would be convenient for certain scenarios with backup media storage planning, as I'm currently doing, especially when the space rules for dealing with disk image files aren't known. For example, I tested creating a disk image of the system volume (~19GB) on another volume with ~25GB free. If I'd let that run it would have overflowed, as expected. A simulation, safely running non-interactively for an hour or two and then warning that the real deal would have failed, would have been nicer than having to manually intervene and abort.

And/or is it possible for the temporary volume to use space on a different volume than the image file's final destination? Several visits to the hdiutil man page have failed to reveal a way of doing that.

Whew. That covers everything and then some for today. :)

dnanian
06-23-2004, 08:50 AM
Is it correct that "Copy newer" will skip files whose created/modified times are older than when they were actually added to the filesystem? Downloads are a good example of that kind of file, so I'm careful to know how "newer" is being interpreted. But that's not relevant with SU, if I understand things correctly.

SuperDuper!'s "Newer" isn't "Newer since last backup". It's "Newer than the equivalent file on the destination". Every file is always evaluated: files aren't skipped because they're newer/older than some global timestamp. So, no, this isn't a problem.
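That per-file comparison is essentially what the shell's -nt test does. A minimal sketch in a scratch directory (not SuperDuper!'s actual mechanism, just the same idea):

```shell
#!/bin/sh
# Per-file "newer" comparison, as described above: each source file is
# checked against its counterpart on the destination, nothing else.
WORK=$(mktemp -d)
mkdir -p "$WORK/src" "$WORK/dst"
echo "old" > "$WORK/dst/file"
sleep 1
echo "new" > "$WORK/src/file"   # the source copy was modified later

# [ a -nt b ] is true when a's modification time is newer than b's.
if [ "$WORK/src/file" -nt "$WORK/dst/file" ]; then
    cp "$WORK/src/file" "$WORK/dst/file"
fi
cat "$WORK/dst/file"   # prints "new"
```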

Issue with exclude:

The main "Backup - all files" script excludes var/db/BootCache.playlist and var/db/volinfo.database, but those files exist on the destination volume. Not sure if the original backup or the SU copied 'em since I only noticed after the latter.
I'm fairly sure that excludes are excluded as they should be. If you didn't use erase-then-copy or smart update, those files would indeed still be there. Or, if you checked after you booted, they'd get recreated...

Thanks for responding to my VersionTracker feedback. Hope I didn't sound like I was giving misinformation about the way script editing worked.
Well, frankly, I wasn't quite sure what you were getting at, and VT is a terrible place to do support/ask questions. So... fill me in about what you were seeing/confused by!

About the capacity check... after posting I thought of mentioning that a simulation mode would be convenient for certain scenarios with backup media storage planning, as I'm currently doing, especially when the space rules for dealing with disk image files aren't known. For example, I tested creating a disk image of the system volume (~19GB) on another volume with ~25GB free. If I'd let that run it would have overflowed, as expected. A simulation, safely running non-interactively for an hour or two and then warning that the real deal would have failed, would have been nicer than having to manually intervene and abort.
Yes, we've considered that as an extension of What's going to happen?, but other things have priority at present.

And/or is it possible for the temporary volume to use space on a different volume than the image file's final destination? Several visits to the hdiutil man page have failed to reveal a way of doing that.
Not that we've seen. Conversion is done in place. But conversion isn't strictly necessary... manual use of a sparseimage can resolve this issue, allow faster backups (by skipping the other steps), and allow future smart updates of the image to boot.
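A hedged sketch of that manual sparseimage approach, with hypothetical paths and sizes (hdiutil only exists on OS X, so the call is guarded; see the FAQs for the supported procedure):

```shell
#!/bin/sh
# Hypothetical sketch of the manual sparseimage approach: create a
# growable sparse image once, then Smart Update the mounted volume inside
# it, skipping the sparseimage-to-DMG conversion. Path and size are examples.
IMG="$HOME/Backups/clone.sparseimage"
if command -v hdiutil >/dev/null 2>&1; then
    hdiutil create -size 30g -type SPARSE -fs HFS+ -volname Clone "$IMG"
    hdiutil attach "$IMG"    # then point Smart Update at the mounted volume
else
    echo "hdiutil is OS X only; would have created $IMG"
fi
```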

sjk
06-23-2004, 10:39 PM
SuperDuper!'s "Newer" isn't "Newer since last backup". It's "Newer than the equivalent file on the destination". Every file is always evaluated: files aren't skipped because they're newer/older than some global timestamp. So, no, this isn't a problem.

The manual says:

... overwrite files that exist on the destination with those on the source if the source files are newer (with Copy newer) or different (with Copy different), leaving all other files as-is.

Does that mean only files that already exist on the destination are candidates for overwriting? Got a quick example of when those options would be useful? I'm a bit dense today.
I'm fairly sure that excludes are excluded as they should be. If you didn't use erase-then-copy or smart update, those files would indeed still be there. Or, if you checked after you booted, they'd get recreated...

First I ran a full erase-then-copy backup (unregistered version) to a FW drive volume, then did a couple of smart updates (registered version). I haven't rebooted since installing SD. The supposedly excluded files do exist on the destination. I can try the same full+SU backup again later (checking after each run) to be 100% certain there's a glitch somewhere.
Well, frankly, I wasn't quite sure what you were getting at, and VT is a terrible place to do support/ask questions. So... fill me in about what you were seeing/confused by!

Yeah, I agree about VT. The forum's a bit clumsy, too, especially when the accesskey Control key shortcuts interfere with emacs-style navigation during text editing with Safari (grrr!). Know any tricks to make these inline quoted replies any easier?

I've half figured out the "backup to folder" issue. That's basically what the "backup - user files" script does. With that, is the entire volume erased if the destination is a volume and the erase-then-copy option is set?

Other thing was the summary log in addition to the normal "console" log. Something like how each Carbon Copy Cloner session is logged to a separate file.
Yes, we've considered that as an extension of What's going to happen?, but other things have priority at present.

What about a simple warning of the overflow possibility right before confirmation of the backup? Or maybe that would be more confusing than helpful.
Not that we've seen. Conversion is done in place. But conversion isn't strictly necessary... manual use of a sparseimage can resolve this issue, allow faster backups (by skipping the other steps), and allow future smart updates of the image to boot.

Not sure what all that means -- "conversion is done in place" and "manual use of a sparseimage"? Maybe skip that until I come up with a specific example of something I want to do.

dnanian
06-23-2004, 11:54 PM
I'm going to quote less -- like, not at all -- in this reply. Bear with me.

Copy newer/copy different will only *overwrite* files based on those comparisons. They'll also add "new" files that have no comparable equivalent on the destination. Only files that already exist on the destination can be overwritten, of course: otherwise, they're not there to overwrite! Or perhaps I misunderstand.

These options are useful when you're trying to update a backup and retain obsolete files.

I don't know why those files would exist on the backup, but I'll take a look at the script and what we're doing here to see if I can see any reason why they'd be present.

The backup - user files script is just selection. The copy *mode* determines the action taken when the copies occur. So, yes, it would still erase the destination with Erase, then copy -- by definition. It would also remove all other directories with Smart Update.

Regarding another warning, I'm pretty much against a warning that doesn't really say much other than this-might-overflow-maybe. An overwrite warning is one thing -- it's safety more than anything else. A "don't do this if it won't fit" thing is sort of stating the obvious. I figure if we can't really tell you what the deal is, it's not of much use telling you what it isn't!

For the sparseimage, take a look at the FAQs here in the forums... in-place conversion is converting from a sparseimage to a DMG, which SD! does during the imaging process.

Hope that wasn't too confusing without the quoting!

sjk
07-10-2004, 07:01 AM
Quick followup on this:
The main "Backup - all files" script excludes var/db/BootCache.playlist and var/db/volinfo.database, but those files exist on the destination volume. Not sure if the original backup or the SU copied 'em since I only noticed after the latter.

I ran SD! with the "Backup - all files" script using the "Erase ..., then copy files from ..." option and confirmed that the two files excluded by the script were copied to the destination (and not listed as ignored in the log file). Files from the Include Directives scripts in the main script weren't copied (and were listed as ignored in the log file).

Also curious why the "Exclude system cache files.dset" script isn't used by the "Backup - all files" script.

dnanian
07-10-2004, 09:46 AM
I believe that we didn't exclude cache files from a full backup because they continue to be legitimate after a restore (unlike a safety clone, where you're running from another volume).

sjk
07-10-2004, 08:00 PM
Almost sounds like you're implying a backup volume created with the "Backup - all files.dset" script isn't intended to be booted (which I'll eventually want to do when repartitioning the original drive). That doesn't make sense because the volume is blessed after copying. :confused:

Anyhow, the script contains:

<key>Directives</key>
<array>
    <dict>
        <key>Directive</key>
        <string>exclude</string>
        <key>Item</key>
        <string>var/db/BootCache.playlist</string>
    </dict>
    <dict>
        <key>Directive</key>
        <string>exclude</string>
        <key>Item</key>
        <string>var/db/volinfo.database</string>
    </dict>
</array>

Yet both those db files are copied when the script runs, as I'd originally suspected. They're also in the "Exclude system cache files.dset" script, which is included by the two "Safety clone ..." scripts. Btw, that script also excludes:

<dict>
    <key>Directive</key>
    <string>exclude</string>
    <key>Item</key>
    <string>Library/Caches/com.apple.LaunchServices.LocalCache.csstore</string>
</dict>

That file doesn't exist on my 10.3.4 systems, but /Library/Caches/com.apple.LaunchServices.6B.csstore does. Any trouble there?

The objective is to have a clean bootable clone volume that's independent of the original volume. For now they'll have different volume names.

Some background...

A couple years ago I had an issue when running from a booted clone that mistakenly referenced the original volume; not sure if the volume names were the same or different. After first noticing it I unmounted the original volume to ensure nothing was being accessed on it, then made changes for the clone-booted volume as necessary. I think iTunes is what originally brought my attention to it. Since then I've been careful when using multiple volumes in ways that might cause that type of conflict.

I'll soon be working with multiple volumes more often in ways (e.g. clone booting) that I'm concerned may introduce the "identity crisis" again. Any recommendations/warnings related to that?

Wrapping this up with a summary of the main point:

The "Backup - all files" script mistakenly copies two excluded /var/db files that are undesirable on potential boot volumes.

Thanks again for the support.

dnanian
07-10-2004, 08:14 PM
I'm not sure why that script is copying those files for you. We'll look into it.

Regarding booting and having files refer back to the "wrong" (original) volume -- that's something that can definitely happen when programs save aliases that include volume names, and the names aren't the same.

Even if the volume names are the same, if both are present at boot time, you can end up with a weird issue. The alias manager doesn't look at the volume's UUID when it resolves, just the name. So if the names are the same, it basically just resolves to the first one with the right name.

If that's the one you want, great. If not, it doesn't tell you what's happened, and you can be in trouble... best thing to do is to make sure the other (source) volume IS NOT AVAILABLE when you're booting from the clone.

Regarding the cache, we know, and it's already fixed for the next version. It's not a big deal, though -- you'll end up with too many entries. Apple renamed the cache between Jaguar and Panther, and we didn't catch it. You can fix it yourself if you'd like while waiting for the new version -- just add an ignore of Library/Caches/com.apple.LaunchServices.*.csstore to your own script.
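In the .dset directive syntax quoted earlier in the thread, that workaround would presumably look something like the following. This is just a sketch -- that the <string> item accepts a * wildcard is taken on faith from the suggestion above:

```xml
<dict>
    <key>Directive</key>
    <string>exclude</string>
    <key>Item</key>
    <string>Library/Caches/com.apple.LaunchServices.*.csstore</string>
</dict>
```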

Hope that helps, we'll be back with more re: the two files soon. You're sure you didn't boot from that volume?

sjk
07-10-2004, 10:11 PM
Yep, that was helpful... and reassuring. :)

I ran the full backup (with erase) specifically to check if the files were copied, so I'm 100% certain I didn't boot from the target volume. The last time I booted from the FireWire drive was during last year's Panther testing/cloning.

dnanian
07-10-2004, 11:28 PM
We've done some testing here, and we can confirm that these files are copied... don't quite know why yet.

The volinfo.database file is pretty minor: it's just a log of drive UUIDs that have permissions enabled... it'll be legit on the copy. I wouldn't be terribly concerned.

We'll try to figure out why this is wrong; very strange!

sjk
07-11-2004, 12:25 AM
Thanks for the confirmation.

About volinfo.database, I read in Rapid Deployment of Mac OS X with Apple Software Restore (http://www.bombich.com/mactips/asrx-original.html):

This database keeps track of the volume serial numbers that are considered "native" to the system. A Volume whose serial number is not in this database will be considered "foreign", and ownership values on that volume will be ignored. Obviously that is bad in light of what we are trying to do here.

I couldn't find anything mentioning it served the purpose you described and I'm pretty sure the file existed before journaling was implemented.

dnanian
07-11-2004, 12:30 AM
I must be tired. I am tired. Sorry about that. I know exactly what it does, just mistyped, goofed up.

Anyway, the reason it's OK is that since you've dealt with these volumes before, and you intend for them to be permissioned anyway, you'll end up with what you expect on the backup.

That's not meant to excuse the bug, which we'll fix, of course. Just to indicate that it's not a big problem in typical use -- your backups should be fine, because volumes you expect to be permissioned will continue to be that way.

(Journaling. What was I thinking? I shouldn't post answers after going out to dinner, obviously, especially when beer was involved. :))

sjk
07-17-2004, 07:13 PM
I thought I read somewhere that rsync would also output differential information that you could use.

I finally discovered the --backup-dir option that's useful for rsync incremental backups.

dnanian
07-17-2004, 07:17 PM
Ah, that's the option, yes -- stores the files to be backed up in a different directory than they'd be originally...