Shirt Pocket Discussions

Shirt Pocket Discussions (https://www.shirt-pocket.com/forums/index.php)
-   General (https://www.shirt-pocket.com/forums/forumdisplay.php?f=6)
-   -   why averaging only 4MB/s and how to spare bad blocks? (https://www.shirt-pocket.com/forums/showthread.php?t=2117)

sjs 02-19-2007 11:10 PM

why averaging only 4MB/s and how to spare bad blocks?
 
Hi,
thanks for a nice program.

I am paranoid about bad blocks on my backup drives, so what I do (and I am not sure if this is the right way) is use Disk Utility > Security Options > Write Zeros, and devote 3-4 hours to that.
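
(For anyone who prefers the command line, I believe the zero-out pass can also be done with diskutil; this is just a sketch, and the disk identifier is only a placeholder, so check yours with diskutil list before running anything destructive:)

$ diskutil list
(find the right identifier; the next command erases the whole device)
$ sudo diskutil secureErase 0 disk2
(level 0 is a single pass of zeros; "disk2" here is only an example)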

Then when it's done, I select that HFS+ drive as the target of a Smart Update.

The drive is an HFS+, GUID-partitioned boot drive: 178GB, about 434,000 files. The average speed so far is 4MB/s, and it started 9 hours ago.

It should be about 32MB/s / 2 = 16MB/s, since both drives are on the same FW400 bus (one reading while the other writes).

I did a Smart Update because I was afraid that SuperDuper's Erase, then Copy might do a reformat (does it?), which might possibly delete the drive firmware's bad-block sparing table (do you know if that's true?).

Is it so slow because I chose Smart Update?

Do you know if the sparing table survives reformats and gets appended to, and do you know of any tools that let you examine the table? I'd love to look at the table of a drive I have that I know has bad blocks, then reformat it, if Disk Utility is so willing, then reexamine the table.
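
(From what I understand, the drive's internal defect lists aren't directly user-visible, but the SMART attribute "Reallocated Sector Count" reflects how many blocks the firmware has spared out. A rough way to check it, assuming smartmontools is installed, e.g. via MacPorts or Fink, and noting that many FireWire/USB enclosures don't pass SMART through at all:)

$ sudo smartctl -a /dev/disk0
(then look for Reallocated_Sector_Ct and Current_Pending_Sector in the attribute table)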

PS - it finished. Time Elapsed 9hrs 47min. 4.92MB/s. 434,532 files evaluated. 419,122 files copied.

Do you know why the source is 182,958,063,616 bytes and the target is 182,121,922,560 bytes? The difference is 836,141,056 bytes, roughly 800MB. Looking at it now, I see you didn't copy over 512MB of swap. Where's the rest: log files and computer-specific prefs?

Thank you very much,
Steve

dnanian 02-20-2007 08:42 AM

Steve -- writing zeros can help, yes. There's no real protection against bad blocks other than a good backup, RAID mirroring, etc.

You can't make predictions about drive speed based on the protocol maximum speed. That's a streaming speed, and we're doing a lot more than just writing a single huge bytestream.

The other files that are missing are generally files in /tmp, some caches, etc. You can get a full list by examining the copy script you used.

sjs 02-25-2007 04:47 PM

Thank you. I see the plist exception files; that's good to know.

I know, good backups are essential, and that is why I purchased your program. It was, in fact, one of my brand-new 500GB backup HDs that had the bad blocks. Bad blocks stink; they are often overlooked in the logs and overlooked as a reason things slow down.

I have my main data set on two 500GB HDs, and this gets Smart Copied to TWO other pairs of 500GB HDs, rotated bi-monthly.

It's just that moving a data set this large around takes full days when using FW400.

I did another Smart Update and, despite what you said, got more like what I expected: 15MB/s, pretty much saturating the FW400 bandwidth with both drives on the same bus. But this Smart Update was of the boot drive, full of smaller files. The other drive from before, the one where the Smart Update got only 2MB/s to a clean new drive, was full of large movie files.

Could the nature of these files be why? Otherwise, I will assume that last 5-10x slower sync was due to bad blocks on my backup HD, which I have since returned. I would like to understand what code you are using for a Smart Update. Is it ditto or rsync? And should a Smart Update to a new drive take the same time as a Format, then Copy, programmatically?

Is the code or pseudocode for these two different algorithms available anywhere? Thanks a lot... Your program is the first I've used that pretty much always works, and I've tried them all.
Steve

dnanian 02-25-2007 04:53 PM

Steve: we don't use either ditto or rsync -- our cloner was custom designed and written for speed.

Certainly, when I copy large files I don't get 2MB/s -- I get much more, and slower performance when copying smaller files. I can't explain why your other copy was going so slowly, but yes, it could have been doing retries at a lower level.

Smart Update will take the same amount of time as an erase-then-copy if the drive is empty.

Our code/pseudo code is not available, sorry! :)

sjs 03-16-2007 11:25 PM

Dave,
I'm having similar problems again.
I started a Smart Update on a 300GB drive that was successfully Smart Updated one week ago.

In the meantime, not much has changed, except perhaps 100 newly tagged MP3s and OS X internals...

It started fine and said the apparent copy speed was 250MB/s.
Great.
But 3/4 of the way through, it slowed down.

Now here's the weird part: when I say slowed down, I mean it has almost *stopped*.

I mean, I look at both the external source and the external destination, and neither activity light is on.

Absolutely no data is being transferred while these lights are off.

Yet the CPU is nearly pegged on a 1GHz PowerBook G4.

Then, every 5-10 seconds or so, the HD light might flash.

It's like SD is caught in analysis-paralysis mode: thinking about what to do next with 99% of the CPU and only once in a while doing an actual copy.

I note 20 minutes later that it is *not* frozen; the number of files copied continues to grow, yet the HDs remain mostly idle.

This is what happened before.

Do you know what's going on here?

Also, top shows a ditto process... is that you? I thought you said you didn't use ditto, and I didn't launch it...

dnanian 03-16-2007 11:29 PM

We don't normally use ditto. But if we start getting failures, we retry with different APIs, and use ditto as a "last resort" to see if it'll work.

My guess is that things aren't working. Did you Cmd+L to look at the log? You can do that while it's running...

sjs 03-16-2007 11:41 PM

Ah, I got it:

| 07:41:45 PM | Info | WARNING: Caught I/O exception(28): No space left on device
| 07:41:45 PM | Info | WARNING: Source: /Volumes/a1/Users/.demo/demo.sparseimage, lstat(): 0
| 07:41:45 PM | Info | WARNING: Target: /Volumes/backup-a1/Users/.demo/demo.sparseimage, lstat(): 0
| 07:41:45 PM | Info | Attempting to copy file using copyfile().
| 09:28:10 PM | Info | Attempting to copy file using ditto.
| 10:51:31 PM | Error | ditto: /Volumes/backup-a1/Users/.demo/demo.sparseimage: Result too large

It's the sparse image problem. I forgot I was supposed to log out of my FileVault account, like I read somewhere here.
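
(A couple of quick checks that would have caught this before I kicked off the long copy; the volume name is just the one from my log above:)

$ df -h /Volumes/backup-a1
(is the target actually out of room?)
$ hdiutil info | grep -i image-path
(is a FileVault sparse image still attached, i.e. is the account still logged in?)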

OK... I'm not sure what I'm going to do. I wanted to back up my laptop's home dir (FileVault as well) daily over the network (a file-level, one-way sync) to another FileVault-mounted network share (the "demo" account above).

This way I have a working second copy of my data that is Internet-accessible on the server.

This server was to be SuperDupered each night to a second FW HD that gets rotated offsite weekly.

I was trying to come up with a way to avoid continually plugging HDs into my laptop, and instead back up the FileVault-protected computer automatically.

I have to rethink my backup strategy...
I'll put up another thread here shortly when I have my goals in mind...

dnanian 03-17-2007 09:16 AM

OK. You might want to consider just doing a "Backup - all files" of the FileVault volume itself...

sjs 03-20-2007 06:10 PM

Excellent. I was thinking I had to synchronize the files within my home dir myself; I didn't realize SD could back up home dirs...

This will be part of my overall scheme.

What if SD did as you suggest, using SD's ability to create a sparse image on a networked volume?

What if I saved this file in the networked server's Users folder for the non-logged-in FileVault user, replacing the username.sparseimage file that is already there?

Would that work, creating a functioning FileVault user on the networked machine, or does Apple's FileVault sparse image need additional metadata?
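
(Roughly what I have in mind, as a sketch; the paths and the "demo" account name are just my setup, and I'd keep the original image around until the copy is proven. On the server, as root, with "demo" logged out:)

# cd /Users/demo
# mv demo.sparseimage demo.sparseimage.orig
(... let SuperDuper write the new sparse image here ...)
# chown demo:demo demo.sparseimage
# chmod 700 demo.sparseimage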

dnanian 03-20-2007 06:23 PM

I don't think that's likely to work. You can try, of course... but I think it's about 99% guaranteed doomed to fail. :)

sjs 03-20-2007 07:04 PM

I know HFS+ can put metadata around a file, or even in other forks, so that's the only way I thought it wouldn't work... your estimate surprises me...

I know I once substituted a completely different FileVault image for an older one... it worked fine, but they were both originally generated as FileVaults.

But I always assumed a FileVault = a sparse image, 100%.

I am very surprised at your estimate... but I will try.

EDIT - the only other issue I see is perms, which I think are usually chmod 700, but it seems to me I could change that beforehand, or afterwards with an SD script...

sjs 03-20-2007 07:27 PM

OK.
Just an update:

As you predicted, it didn't work (at first):

| 07:16:29 PM | Info | ......COMMAND => Verifying that permissions are enabled for demo
| 07:16:29 PM | Error | Could not enable permissions for volume demo : please restart your Macintosh

So I do (as root):
# mv demo.sparseimage demo.sparseimage.orig
and the error changed to:

| 07:17:43 PM | Info | hdiutil: create failed - Permission denied
| 07:17:43 PM | Error | ****FAILED****: result=256 errno=22 (Unknown error: 0)

Looking at the perms one level up, I see:
dr-x------ 3 demo demo 102 Mar 20 19:17 .
drwxrwxr-t 10 root admin 340 Mar 17 06:12 ..
-rwx------ 1 demo demo 9484529664 Mar 17 06:12 demo.sparseimage.orig

So I do:
# chmod u+w .

And now... it works!

Well, is this what you meant by not working? That it wouldn't start the copy?
Because I don't know if the resultant copy will be a "valid FileVault", but I am 99% sure it will, unless you know something I don't.

dnanian 03-20-2007 08:23 PM

I was more worried about potential uid mismatches, and what it might end up doing behind the scenes...

Lurkers: be CAREFUL if you try this.

sjs 03-20-2007 08:40 PM

Oh, OK. Well, I already made sure my UIDs matched up when I created my server, but that's a fair point.

When you say "what it might be doing behind the scenes", what is it, the Finder? And are you worried here just about the case where the UIDs don't match?

The copy is 1/3 of the way through. I'll let you know.

But in reality, I'm already satisfied, because at a minimum the sparse image should work on the original machine... I've just always wondered if FileVaults have anything added beyond plain sparse images, but I don't think so. Do you?

dnanian 03-20-2007 08:50 PM

No, I don't. But I don't know what else they might be doing: it's a bit of a black box. It'll probably be fine... probably...

sjs 03-21-2007 12:40 AM

I am happy to report that this method works amazingly well.

Check out the space savings on the destination:

-rw-rw-r-- 1 demo demo 6381662208 Mar 20 21:25 demo.sparseimage
-rwx------ 1 demo demo 9484529664 Mar 17 06:12 demo.sparseimage.orig

Now, as you can see, the only problem is that I needed to
chmod 700 demo.sparseimage, but I would imagine SD can do this in a post-copy script, I think...
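
(Something like the following is what I have in mind for that after-copy script, assuming SD's option to run a shell script once the copy completes; a sketch only, the path is specific to my setup, and I'm not relying on whatever arguments SD may pass to the script:)

#!/bin/sh
# restore the ownership and permissions FileVault expects on the copied image
IMAGE="/Users/demo/demo.sparseimage"
chown demo:demo "$IMAGE"
chmod 700 "$IMAGE"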

The amazing thing is that the old demo.sparseimage.orig, which is about the same size as my source sparse image, is so big despite being routinely compacted by the Finder at logout.

So doing a fresh copy really is the ultimate compaction of a sparse image. Zero bloat.

SD didn't eliminate any useless files, did it? Like Caches, or other things it might not copy if it were copying a root volume? I doubt it would do that, but I could do a diff against the two file sets...

Also, it was quite speedy. Does the algorithm change when copying to local HDs vs. over the network? Are you tuned for slower links, like rsync is?

In any event, I am convinced that FileVaults are nothing more than sparse images. In fact, the method I have been using to date, over 4 years and 5 computers, is to keep moving my sparse image file from machine to machine: I install OS X, create my user accounts in a particular order to preserve UIDs, enable FileVault on the account, and finally substitute my old sparse image from the command line. It has been working well for years, and I have been on my original home dir since 10.0. Perhaps there is a better way now that I am using SD; I haven't gotten into all its nooks and crannies to see what it can do...

sjs 03-21-2007 01:20 AM

What's also interesting is that when the disk image is mounted:

Finder > Get Info reveals: 7.71GB (byte size notwithstanding)

and yet

$ du -sh .
5.1G    .

as well as:
$ ls -lh
-rwx------ 1 demo demo 5G Mar 21 01:06 demo.sparseimage

Quite an unexpected difference to me: 2.6GB or 2.7GB less data than I thought I had. Quite amazing.

At first I thought 1/3 of my data was missing or SD had failed, but that's not the case. Just a pleasant surprise.

Thinking about it now, I think this is probably due to the allocation block (cluster) size within the sparse image, as I have tons of small text files in there. I wonder if I can make the block size 1K or so on a sparse image.
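
(I haven't tried this, but I believe hdiutil can pass newfs_hfs options when creating an image, so something like the following might create a sparse image with 1K allocation blocks; the size, volume name, and file name are just examples:)

$ hdiutil create -size 20g -type SPARSE -fs HFS+ -volname demo -fsargs "-b 1024" demo-1k
(this should produce demo-1k.sparseimage; -b is newfs_hfs's allocation block size)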

Either way, this makes SD a great way to copy FileVaults for me.

I would have to suspect, however, that the SD benefit only applies the first time. If I keep doing Smart Updates, I'd guess I'd quickly fragment and grow this image too. I'd imagine that even an "Erase, then Copy Files" would not shrink the sparse image, would it? Even if your "Erase" reformats the volume (does it?), the image would not shrink unless the Erase really deletes or compacts the image file itself (not a bad idea if you don't)...
Just some thoughts...
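
(For the record, I believe a sparse image can also be compacted manually from the command line, which might be worth doing on the destination image every so often; the path is just my example, and the image should not be mounted when you do it:)

$ hdiutil compact /Users/demo/demo.sparseimage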

dnanian 03-21-2007 07:55 AM

OSX will actually compact FileVaults automatically, so it should work automatically post-bloat.

We haven't done anything differently for network backups, but they've always been pretty reasonable given a reasonable connection... and, no -- files shouldn't be missing.

sjs 03-21-2007 02:07 PM

Hi, thanks.
I don't know what you mean by this line:

"OSX will actually compact FileVaults automatically, so it should work automatically post-bloat."

Are you saying that after a destination sparse image gets heavily fragmented and bloated, simply deleting all the files automatically triggers OS X to compact the sparse image? What is the trigger: a certain threshold of deletions, or too much fragmentation? I don't think I am understanding what you are saying... I thought the trigger was logging the user out, and that it only applied to FileVaults, not to the sparse image destinations that SD creates, which aren't subjected to any log-out process...

Can you talk about what an Erase, then Copy does? Does it reformat first, or just delete files first?

dnanian 03-21-2007 02:12 PM

It is only through log out, but since this is a FileVault, I figured it would happen for you when you really need it -- in place...

