From: Bill on
Hi folks,

What's a good block size to use with tar? i.e. -b / --blocking-factor

In a hard disk to hard disk and DVD backup scheme, the term
block size gets used in at least six different ways. There are
the blocks on the source hard disk - in this case 512B in size
(at least until next year when the 4kiB drives will arrive). There
are the blocks in the source partition formats, in this case
1kiB and 4kiB in size. There is the tar blocking factor, which
defaults to 20 (20 x 512 = 10240 bytes, which is ancient; some use
2048 x 512, which yields 1 MiB). And then there is the block size
on the destination partitions which I'll probably set to 8kiB,
and the block size on the destination disk itself - again 512B.
And let's not forget the block size on the DVDs when I burn them.
So it's a tad confusing to know where to set the blocking factor
for tar.
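
To make the units concrete, here are a couple of illustrative
invocations (the paths are made up, just for the sake of example):

    # default blocking factor: 20 x 512 B = 10240 B per record
    tar -b 20 -cf /mnt/backup/home.tar /home

    # bigger blocking factor: 2048 x 512 B = 1 MiB per record
    tar -b 2048 -cf /mnt/backup/home.tar /home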

On the source disk fdisk -l reveals cylinders of 16065 x 512B
for a total of 8225280 bytes. Now since 16065 factors as 119 x
135, I figure I could theoretically use a blocking factor with
tar of anywhere between 1 and 135. 20 is the default but
realistically what block size should I use?
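
(Sanity-checking that arithmetic with coreutils:

    $ factor 16065
    16065: 3 3 3 5 7 17

so 16065 = (3 x 3 x 3 x 5) x (7 x 17) = 135 x 119; strictly speaking
only the divisors of 16065 would line up exactly with a cylinder, not
every value from 1 to 135.)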

I'm partial to a fast backup, but where lies block size
compatibility? Or is there such a thing? Ideally I'd like to
maximize disk-to-disk throughput for my backups.

b.


From: Boyd Stephen Smith Jr. on
In <1281545100.2833.297.camel(a)zefram.soho.lan>, Bill wrote:
>On the source disk fdisk -l reveals cylinders of 16065 x 512B
>for a total of 8225280 bytes.

If using LBA, like the vast majority of drives now, this means very little.
These "cylinders" don't correspond to any meaningful property of the drive.

>Now since 16065 factors as 119 x
>135, I figure I could theoretically use a blocking factor with
>tar of anywhere between 1 and 135. 20 is the default but
>realistically what block size should I use?

119
--
Boyd Stephen Smith Jr. ,= ,-_-. =.
bss(a)iguanasuicide.net ((_/)o o(\_))
ICQ: 514984 YM/AIM: DaTwinkDaddy `-'(. .)`-'
http://iguanasuicide.net/ \_/
From: Bob McGowan on
On 08/11/2010 09:45 AM, Bill wrote:
> Hi folks,
>
> What's a good block size to use with tar? i.e. -b / --blocking-factor
>
> In a hard disk to hard disk and DVD backup scheme, the term
> block size gets used in at least six different ways. There are
> the blocks on the source hard disk - in this case 512B in size
> (at least until next year when the 4kiB drives will arrive). There
> are the blocks in the source partition formats, in this case
> 1kiB and 4kiB in size. There is the tar blocking factor, which
> defaults to 20 (20 x 512 = 10240 bytes, which is ancient; some use
> 2048 x 512, which yields 1 MiB). And then there is the block size
> on the destination partitions which I'll probably set to 8kiB,
> and the block size on the destination disk itself - again 512B.
> And let's not forget the block size on the DVDs when I burn them.
> So it's a tad confusing to know where to set the blocking factor
> for tar.
>
> On the source disk fdisk -l reveals cylinders of 16065 x 512B
> for a total of 8225280 bytes. Now since 16065 factors as 119 x
> 135, I figure I could theoretically use a blocking factor with
> tar of anywhere between 1 and 135. 20 is the default but
> realistically what block size should I use?
>
> I'm partial to a fast backup, but where lies block size
> compatibility? Or is there such a thing? Ideally I'd like to
> maximize disk-to-disk throughput for my backups.
>
> b.
>
>

For disks or devices that care about things like cylinders and blocks,
use a block size that is a common multiple of the source and destination
"physical" block sizes. For the numbers you mention above, that's 512B.

What you want to avoid is having to read some block twice, for example
once for its first half and again for its second half.
If you have cylinder/track/block based disk devices, you might also
consider a blocking factor equal to one track, multiple tracks, or even
a whole cylinder, for the same basic reason: keep the read/write heads
in one part of the disk as long as possible, without repeating the same
positioning over and over.
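
As a sketch (assuming the classic fake 255-head x 63-sector geometry
that fdisk reports for large drives, which is exactly where
255 x 63 = 16065 sectors per cylinder comes from), a per-track blocking
factor would look like:

    # one 63-sector track per record: 63 x 512 B = 31.5 KiB
    tar -b 63 -cf /mnt/backup/home.tar /home

though, as noted below, on an LBA drive that alignment is largely
imaginary.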

Generally, the bigger the better, as far as throughput goes. But you don't
want to set a size so big that you don't have enough RAM to handle it.
You could, theoretically, pick a size that forces swapping,
which would have a very negative impact on throughput. ;)
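
A rough way to check (directory and paths hypothetical) is simply to
time a run at a couple of blocking factors and keep the faster one:

    time tar -b 20   -cf /mnt/backup/test.tar /home/bill/data
    time tar -b 2048 -cf /mnt/backup/test.tar /home/bill/data

As far as I know tar only needs roughly one record's worth of buffer at
a time, so even -b 2048 should only cost on the order of a megabyte.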

Not that this should be an issue for modern systems with gigabytes of
RAM. But then, modern systems also have disks that generally use LBA,
as Boyd mentioned, so the positioning argument isn't so meaningful.

--
Bob McGowan

