Shirt Pocket Discussions


sjs 02-19-2007 11:10 PM

why averaging only 4MB/s and how to spare bad blocks?
Thanks for a nice program.

I am paranoid about bad blocks on my backup drives, so what I do (and I am not sure if this is the right way) is use Disk Utility > Security Options > Write Zeros, and devote 3-4 hours to that.

Then, when it's done, I select that HFS+ drive as the target of a Smart Update.

The drive is an HFS+ GUID boot drive, 178GB, 434,000 files; the average speed so far is 4MB/s, and it started 9 hours ago.

It should be 32MB/s / 2 = 16MB/s since both are on the same FW400 bus.
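(For context, the bus math behind that estimate can be sketched in shell arithmetic. FireWire 400 signals at 400 Mbit/s; the 32MB/s figure quoted here is a typical sustained real-world number, below the raw ceiling.)

```shell
# FireWire 400 runs at 400 Mbit/s. Dividing by 8 bits/byte gives the raw
# ceiling in MB/s; protocol overhead and sharing the bus eat into that.
raw_mbs=$((400 / 8))
echo "raw ceiling: ${raw_mbs} MB/s"
echo "two drives on one bus: roughly $((raw_mbs / 2)) MB/s each, before overhead"
```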

I did a Smart Update because I was afraid that SuperDuper!'s Erase, then Copy might do a reformat (does it?), which might possibly delete the drive firmware's bad-block sparing table (do you know if that's true?).

Is it so slow because I chose Smart Update?

Do you know if the sparing table survives reformats and gets appended to, and do you know of any tools that let you examine the table? I'd love to look at the table of a drive I have that I know has bad blocks, then reformat it, if Disk Utility is so willing, then re-examine the table.
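(A note on that question: the firmware's internal sparing table isn't directly readable, but SMART attribute 5, Reallocated_Sector_Ct, counts sectors the drive has spared out, and a third-party tool like smartmontools can read it with `smartctl -A /dev/disk0`. A sketch parsing a sample attribute line, since the real command needs a physical drive:)

```shell
# Sample line from a smartctl -A attribute table (smartmontools, third-party).
# Field 2 is the attribute name; the last field is the raw count of
# reallocated (spared) sectors. Zero means no blocks remapped so far.
line='  5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0'
echo "$line" | awk '{print $2 "=" $NF}'
```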

PS - it finished. Time elapsed: 9hrs 47min, 4.92MB/s. 434,532 files evaluated; 419,122 files copied.

Do you know why the source is 182,958,063,616 bytes and the target is 182,121,922,560 bytes? That's about 797MiB. Looking at it now, I see you didn't copy over 512MB of swap. Where's the rest: log files and computer-specific prefs?
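(A quick check of the gap between those two figures, using the byte counts from this post:)

```shell
# Difference between source and target sizes reported above
src=182958063616
dst=182121922560
diff=$((src - dst))
echo "difference: ${diff} bytes = $((diff / 1024 / 1024)) MiB"
# the 512MiB swap file accounts for this much of it:
echo "swap: $((512 * 1024 * 1024)) bytes"
```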

Thank you very much.

dnanian 02-20-2007 08:42 AM

Steve -- writing zeros can help, yes. There's no real protection against bad blocks other than a good backup, RAID mirroring, etc.

You can't make predictions about drive speed based on the protocol maximum speed. That's a streaming speed, and we're doing a lot more than just writing a single huge bytestream.

The other files that are missing are generally files in /tmp, some caches, etc. You can get a full list by examining the copy script you used.

sjs 02-25-2007 04:47 PM

Thank you. I see the plist exception files; that is good to know.

I know, good backups are essential, and that is why I purchased your program; it was, in fact, one of my brand-new 500GB backup HDs that had the bad blocks. Bad blocks stink, and they are often overlooked in the logs and overlooked as a reason things are slowing down.

I have my main data set on two 500GB HDs, and this gets Smart Copied to TWO other pairs of 500GB HDs, rotated bi-monthly.

It's just that moving a data set this large around takes full days when using FW400.

I did another Smart Update and, despite what you said, got more what I expected: 15MB/s, pretty much saturating the FW400 bandwidth with both drives on the same bus. But this Smart Update was of the boot drive, full of smaller files. The other drive from before, the one where the Smart Update got only 2MB/s to a clean new drive, was full of large movie files.

Could the nature of these files be why? Otherwise, I will assume that that last 5-10x slower sync was due to bad blocks on my backup HD, which I just returned. I would like to understand what code you are using for a Smart Update: is it ditto or rsync? And should a Smart Update to a new drive take the same time as a Format, then Copy, programmatically?

Is the code or pseudocode for these two different algorithms available anywhere? Thanks a lot... Your program is the first I've used that pretty much always works. And I've tried them all.

dnanian 02-25-2007 04:53 PM

Steve: we don't use either ditto or rsync -- our cloner was custom designed and written for speed.

Certainly, when I copy large files I don't get 2MB/s -- I get much more, and slower performance when copying smaller files. I can't explain why your other copy was going so slowly, but yes, it could have been doing retries at a lower level.

Smart Update will take the same amount of time as an erase-then-copy if the drive is empty.

Our code/pseudo code is not available, sorry! :)
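(SuperDuper's cloner is proprietary, as Dave says. Purely as an illustration of the smart-update idea discussed above -- visit every file, copy only what is new or changed -- here is a naive shell sketch with hypothetical volume names. This is not SuperDuper's implementation:)

```shell
# Naive smart-update sketch (NOT SuperDuper's actual cloner): walk the
# source tree and copy a file only if the target copy is missing or older.
smart_update() {
  src="$1"; dst="$2"
  find "$src" -type f | while IFS= read -r f; do
    rel="${f#"$src"/}"
    t="$dst/$rel"
    # Every file is still *evaluated*; only new/changed ones are copied.
    # On an empty target that means everything, which is why a Smart Update
    # to a fresh drive costs about the same as an erase-then-copy.
    if [ ! -e "$t" ] || [ "$f" -nt "$t" ]; then
      mkdir -p "$(dirname "$t")"
      cp -p "$f" "$t"
    fi
  done
}
# e.g. smart_update /Volumes/a1 /Volumes/backup-a1   (hypothetical volumes)
```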

sjs 03-16-2007 11:25 PM

I'm having similar problems again.
I started a Smart Update on a 300GB drive that was successfully Smart Updated one week ago.

In the meantime, not much has changed, except perhaps 100 newly tagged MP3s and OS X internals...

It started fine and said the apparent copy speed was 250MB/s, but 3/4 of the way through it slowed down.

Now here's the weird part: when I say slowed down, it has almost *stopped*.

I mean, I look at both the external source and the external destination, and neither light is on.

Absolutely no data is being transferred while these lights are off.

Yet the CPU is nearly pegged on a PB G4 1GHz.

Then, every 5-10 seconds or so, the HD light might flash.

It's like SD is caught in analysis-paralysis mode, thinking about what to do next with 99% of the CPU resources and only once in a while doing an actual copy.

I note 20 minutes later that it is *not* frozen; the number of files copied continues to grow, yet the HDs remain mostly off.

This is what happened before.

Do you know what's going on here?

Also, top shows a ditto process... is that you? I thought you said you didn't use ditto, but I didn't invoke it...

dnanian 03-16-2007 11:29 PM

We don't normally use ditto. But if we start getting failures, we retry with different APIs, and use ditto as a "last resort" to see if it'll work.

My guess is that things aren't working. Did you Cmd+L to look at the log? You can do that while it's running...

sjs 03-16-2007 11:41 PM

Ah, I got it:

| 07:41:45 PM | Info | WARNING: Caught I/O exception(28): No space left on device
| 07:41:45 PM | Info | WARNING: Source: /Volumes/a1/Users/.demo/demo.sparseimage, lstat(): 0
| 07:41:45 PM | Info | WARNING: Target: /Volumes/backup-a1/Users/.demo/demo.sparseimage, lstat(): 0
| 07:41:45 PM | Info | Attempting to copy file using copyfile().
| 09:28:10 PM | Info | Attempting to copy file using ditto.
| 10:51:31 PM | Error | ditto: /Volumes/backup-a1/Users/.demo/demo.sparseimage: Result too large

It's the sparse image problem. I forgot I was supposed to log out of my FileVault account, like I read somewhere here.
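(For anyone hitting the same wall: a FileVault sparse image only grows while in use, and logging out lets OS X reclaim the slack. With the image unmounted, it can also be compacted by hand; a hedged sketch using the path from the log above, guarded so it only runs where it applies:)

```shell
# With the FileVault account logged out (the image unmounted), hdiutil can
# reclaim unused space inside the sparse image (macOS-only command):
img=/Volumes/a1/Users/.demo/demo.sparseimage
if command -v hdiutil >/dev/null && [ -f "$img" ]; then
  hdiutil compact "$img"
fi
# and it's worth checking the destination volume actually has room
# before the next Smart Update (run from the backup volume):
df -h . | tail -1
```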

OK... I'm not sure what I'm going to do. I wanted to back up my laptop's home dir daily (FileVault as well) over the network (file-level, one-way sync) to another FileVault-mounted network share (the "demo" account above).

This way I have a working second copy of my data that is internet-accessible on the server.

This server was to be SuperDupered each night to a second FW HD that gets rotated offsite weekly.

I was trying to come up with a way to not have to continually plug HDs into my laptop, but still back up the FileVault-protected computer automatically.

I have to rethink my backup strategy...
I'll put up another thread here shortly when I have my goals in mind...

dnanian 03-17-2007 09:16 AM

OK. You might want to consider just doing a "Backup - all files" of the FileVault volume itself...

sjs 03-20-2007 06:10 PM

Excellent. I was thinking I had to synchronize the files within my home dir myself; I didn't realize SD could back up home dirs...

This will be part of my overall scheme.

What if SD did as you suggest, using SD's ability to create a sparse image on a networked volume?

What if I saved this file in the networked server's Users folder for the non-logged-in FileVault user, replacing the username.sparseimage file that is already there?

Would that work, creating a functioning FileVault user on the networked machine, or does Apple's FileVaulted sparse image have additional needed metadata?

dnanian 03-20-2007 06:23 PM

I don't think that's likely to work. You can try, of course... but I think it's about 99% guaranteed doomed to fail. :)

sjs 03-20-2007 07:04 PM

I know HFS+ can put metadata around a file, or even in other forks, so that's the only way I thought it wouldn't work... your estimate surprises me...

I know I once substituted a completely different FileVault image for an older one... it worked fine, but both were generated originally as FileVaults.

But I always assumed a FileVault = sparse image, 100%.

I am very surprised at your estimate... but I will try.

EDIT - the only other issue I see is perms, which I think are usually chmod 700, but it seems to me I could change that beforehand, and afterwards with an SD script...

sjs 03-20-2007 07:27 PM

Just an update:

As you predicted, it didn't work (at first):

| 07:16:29 PM | Info | ......COMMAND => Verifying that permissions are enabled for demo
| 07:16:29 PM | Error | Could not enable permissions for volume demo : please restart your Macintosh

So I do:
# mv demo.sparseimage demo.sparseimage.orig
and the error changed to:

| 07:17:43 PM | Info | hdiutil: create failed - Permission denied
| 07:17:43 PM | Error | ****FAILED****: result=256 errno=22 (Unknown error: 0)

Looking at the perms one level up, I see:
dr-x------ 3 demo demo 102 Mar 20 19:17 .
drwxrwxr-t 10 root admin 340 Mar 17 06:12 ..
-rwx------ 1 demo demo 9484529664 Mar 17 06:12 demo.sparseimage.orig

So I do:
# chmod u+w .

And now... it works!

Well, is this what you meant by not working? That it wouldn't start the copy?
Because I don't know if the resultant copy will be a "valid FileVault", but I am 99% sure it will, unless you know something I don't.

dnanian 03-20-2007 08:23 PM

I was more worried about potential uid mismatches, and what it might end up doing behind the scenes...

Lurkers: be CAREFUL if you try this.

sjs 03-20-2007 08:40 PM

Oh, OK, well I've already made sure my UIDs matched up when I created my server, but that's a fair point.

When you say "what it might end up doing behind the scenes", what is it, the Finder? And are you worried here just about the case where the UIDs don't match?

The copy is 1/3 of the way through. I'll let you know.

But in reality, I'm already satisfied, because minimally the sparse image should work on the original machine... I've just always wondered if FileVaults add anything to sparse images, but I don't think so. Do you?

dnanian 03-20-2007 08:50 PM

No, I don't. But I don't know what else they might be doing: it's a bit of a black box. It'll probably be fine... probably...
