View Full Version : Smart update not getting rid of old files

04-15-2009, 08:39 PM
I am using Smart Update to make a copy of one external HD to another of the same size. But it is leaving files on the copy that I have deleted from the original, so it runs out of room. Am I wrong in thinking that "Smart Update" is supposed to remove them?

04-15-2009, 08:53 PM
Smart Update does remove the files, but not necessarily before new files are copied to the drive. See the "Troubleshooting" section of the User's Guide for a discussion of how it works.

04-29-2009, 10:21 AM
I just read this section and learned the "workaround," which, if I understand correctly, is to run a single Erase and Copy, then switch back to Smart Update. This logic seems counterproductive to me.

Just a suggestion for the developers: I think it would make more sense to have SuperDuper analyze both the source and the backup, and determine, before copying any files, what data needs to be erased in order to have a seamless process. The way the procedure is currently set up, if I back up at night, while I'm away from the computer in question, and I get one of these errors, I'm unprotected for the next day of work until I can fix the problem.

Just my 2¢ from a paying customer.

04-29-2009, 10:25 AM
Doing that would basically double the time it takes to perform the backup (and double the I/O) to cover a case that happens very rarely...

05-03-2009, 02:52 PM
I actually run into this problem quite often. For instance: I back up, on a daily basis, several drives full of digital audio, on which I'm constantly writing/copying/renaming very large amounts of data. I recently renamed a 50 GB folder on one of them. When SuperDuper went to Smart Update the drive, it began to re-copy that 50 GB worth of data before deleting the folder with the old name; in doing so, it ran out of room on the sparse image (there was only 40 GB available on both the drive and the backup image at the time).

I even wrote a script that compacts all of my sparse images daily after SD is finished smart updating all of them; yet I still run into issues when, on a given day, I modify more data on a given drive than is free on that drive.

I'd love to have a checkbox that would avoid this problem, even if it meant double the time and I/O. Or perhaps some kind of option to run an "Erase, then copy" automatically if Smart Update ran out of room? This is the one and only sticking point with the program that I have; everything else works flawlessly. Please let us know of any plans to address it.

On the whole, I love the app and its straightforward simplicity... keep up the great work.
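In the meantime, something along these lines could approximate that "check first, then fall back" behavior from outside the app. This is a minimal sketch, not SuperDuper's own logic; the BACKUP_VOL variable, the paths, and the 50 GB cushion are all illustrative assumptions:

```shell
#!/bin/sh
# Pre-flight sketch: only proceed with Smart Update if the backup volume
# has a cushion of free space; otherwise suggest a full "Erase, then copy".
# BACKUP_VOL and the 50 GB figure are assumptions for illustration.

# free_kb VOLUME: print the free space, in 1K blocks, of the filesystem
# holding VOLUME (POSIX df output: second line, fourth column).
free_kb() {
    df -Pk "$1" | awk 'NR==2 {print $4}'
}

needed_kb=$((50 * 1024 * 1024))   # room for, say, a renamed 50 GB folder

if [ "$(free_kb "${BACKUP_VOL:-/}")" -ge "$needed_kb" ]; then
    echo "enough room: Smart Update should be safe"
else
    echo "low on space: consider Erase, then copy (or compact the image first)"
fi
```

A launchd or cron job could run this right before the scheduled backup and skip (or log) the run when space is tight.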

05-03-2009, 03:24 PM
It's an issue we're aware of, and we have some ideas for improvements that wouldn't cause additional slowdowns/problems, but I don't have a timeframe for release/implementation.

05-03-2009, 08:43 PM
I even wrote a script that compacts all of my sparse images daily after SD is finished smart updating all of them
Any chance you could share that script? Thanks.

06-03-2009, 12:36 AM

The script is very simple:

do shell script "hdiutil compact '/Volumes/Drobo/backup 1.sparseimage'"
do shell script "hdiutil compact '/Volumes/Drobo/backup 2.sparseimage'"
do shell script "hdiutil compact '/Volumes/Drobo/backup 3.sparseimage'"

I run it on a schedule using Cronnix (http://h775982.serverkompetenz.net:9080/abstracture_public/projects-en/cronnix), an Aqua front end for the Unix scheduler cron.

Does the job! Good luck.

06-03-2009, 02:25 AM
Yup, much simpler than I expected.

You don't even need to use AppleScript for that; it could just be a shell script like:


for i in 1 2 3; do
    /usr/bin/hdiutil compact "/Volumes/Drobo/backup $i.sparseimage"
done

Or even a one-liner directly in the crontab file would work.
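For instance, that one-liner could look like this as a crontab entry (the 4:30 AM schedule and the paths are just examples, assuming the nightly backups finish before then):

```
# min hour dom mon dow  command
30 4 * * * for i in 1 2 3; do /usr/bin/hdiutil compact "/Volumes/Drobo/backup $i.sparseimage"; done
```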

I've recently converted my regular sparse images to sparse bundles, and compacting them runs much quicker, so I wouldn't mind switching to doing it after every backup instead of less frequently under certain conditions. I thought your script might have some clever example of the latter, but instead it helped me realize it can be simpler now. :)
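For anyone wanting to make the same switch, hdiutil can convert an existing sparse image to a sparse bundle via the UDSB format; the path here is just an example:

```
hdiutil convert '/Volumes/Drobo/backup 1.sparseimage' -format UDSB -o '/Volumes/Drobo/backup 1.sparsebundle'
```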