From: Gary R. Schmidt on
I was just wondering, since I'm about to start rolling new disks into my
ZFS array, which will double the size of the zpool and once again leave
me unable to back it up onto a single chunk of whatever without doing
things bit-by-bit: did anyone ever do anything about solving the
"zfs send to /dev/rmt/0 doesn't work" problem?

Because if they didn't, I may be forced to actually write some real code
for a change and solve it...

What I am thinking of is basically appropriating the tape-handling code
from tar/cpio/whatever and using that in a wrapper around zfs send and
receive, like this:
magic_program -r "zfs send -R pool@snapshot" -f /dev/rmt/0
to put a zfs stream onto a tape, and:
magic_program -w "zfs receive -Fud filesystem" -f /dev/rmt/0
to restore a zpool from a stream.

So my magic program would chunk up the data from "zfs send" and put
headers/footers/checksums and stuff like that on it, write/read the
resulting blocks to/from the device, and, when a write/read fails, ask
for a new "tape" and carry on from there.
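
The write side would look something like this (only a sketch, with
made-up paths and sizes, staging through scratch disk where the real
thing would stream, and with no per-chunk headers or new-tape logic
yet):

zfs send -R pool@snapshot | split -b 1000m - /var/tmp/chunk.
for c in /var/tmp/chunk.*; do
    # record a checksum for each chunk, then write the chunk as its
    # own tape file on the no-rewind device, so one bad spot only
    # costs one chunk
    cksum "$c" >> /var/tmp/chunks.cksum
    dd if="$c" of=/dev/rmt/0n bs=128k
done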

Does this make sense? Or am I suffering from Friday-itis?

Cheers,
Gary B-)
From: John D Groenveld on
In article <vsmuf7-uh2.ln1(a)paranoia.mcleod-schmidt.id.au>,
Gary R. Schmidt <grschmidt(a)acm.org> wrote:
>So my magic program would chunk up the data from "zfs send" and put
>headers/footers/checksums and stuff like that on it, write/read the
>resulting blocks to/from the device, and, when a write/read fails, ask
>for a new "tape" and carry on from there.

Just beware that a corrupted ZFS stream cannot be received.

I don't know the reliability of your tape streamer, but I suspect
you're better off using conventional archive tools to back up your
snapshots to it and only sending your ZFS streams to media that
has error correction.
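
For example (just a sketch, with made-up dataset names), snapshot the
filesystem and archive the files out of the snapshot directory; a tar
restore at least has a chance of skipping past a damaged stretch of
tape instead of losing everything after it:

zfs snapshot pool/fs@backup
cd /pool/fs/.zfs/snapshot/backup && tar cf /dev/rmt/0 .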

John
groenveld(a)acm.org
From: Gary R. Schmidt on
John D Groenveld wrote:
> In article <vsmuf7-uh2.ln1(a)paranoia.mcleod-schmidt.id.au>,
> Gary R. Schmidt <grschmidt(a)acm.org> wrote:
>> So my magic program would chunk up the data from "zfs send" and put
>> headers/footers/checksums and stuff like that on it, write/read the
>> resulting blocks to/from the device, and, when a write/read fails, ask
>> for a new "tape" and carry on from there.
>
> Just beware that a corrupted ZFS stream cannot be received.
Yes, I was aware of this.

> I don't know the reliability of your tape streamer, but I suspect
> you're better off using conventional archive tools to back up your
> snapshots to it and only sending your ZFS streams to media that
> has error correction.
This led me to think about using eSATA/USB disks: create a ZFS file
system on them and that problem goes away!
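
Something like this, I suppose (the device name is made up, whatever
the disk turns up as):

# make the removable disk its own pool and receive into it, so the
# copy gets checksummed by ZFS itself
zpool create backup c5t0d0
zfs send -R pool@snapshot | zfs receive -Fud backup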

Cheers,
Gary B-)
From: Thomas Maier-Komor on
On 01.07.10 16:55, Gary R. Schmidt wrote:
> I was just wondering, since I'm about to start rolling new disks into my
> ZFS array, which will double the size of the zpool and once again leave
> me unable to back it up onto a single chunk of whatever without doing
> things bit-by-bit: did anyone ever do anything about solving the
> "zfs send to /dev/rmt/0 doesn't work" problem?
>
> Because if they didn't, I may be forced to actually write some real code
> for a change and solve it...
>
> What I am thinking of is basically appropriating the tape-handling code
> from tar/cpio/whatever and using that in a wrapper around zfs send and
> receive, like this:
> magic_program -r "zfs send -R pool@snapshot" -f /dev/rmt/0
> to put a zfs stream onto a tape, and:
> magic_program -w "zfs receive -Fud filesystem" -f /dev/rmt/0
> to restore a zpool from a stream.
>
> So my magic program would chunk up the data from "zfs send" and put
> headers/footers/checksums and stuff like that on it, write/read the
> resulting blocks to/from the device, and, when a write/read fails, ask
> for a new "tape" and carry on from there.
>
> Does this make sense? Or am I suffering from Friday-itis?
>
> Cheers,
> Gary B-)

mbuffer can do exactly what you want. Additionally, mbuffer speeds up
the whole process. Some people on the zfs-discuss mailing list have
reported that it works for them.
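
A typical invocation looks like this (the block and buffer sizes are
only examples; match -s to your drive's preferred block size):

zfs send -R pool@snapshot | mbuffer -s 128k -m 1G -o /dev/rmt/0
mbuffer -s 128k -m 1G -i /dev/rmt/0 | zfs receive -Fud filesystem

It also has options for multi-volume operation, i.e. asking for the
next tape when one fills up, which is the part you were planning to
write yourself.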

You can get it either as source code from the main site [1] or as a
binary package [2].

[1]: http://www.maier-komor.de/mbuffer.html
[2]: http://www.opencsw.org/packages/CSWmbuffer/

HTH,
Thomas
From: Gary R. Schmidt on
Thomas Maier-Komor wrote:
> On 01.07.10 16:55, Gary R. Schmidt wrote:
>> I was just wondering, since I'm about to start rolling new disks into my
[SNIP]
>
> mbuffer can do exactly what you want. Additionally, mbuffer speeds up
> the whole process. Some people on the zfs-discuss mailing list have
> reported that it works for them.
>
Ahah! I *knew* that I had seen something that did this sort of thing,
but I couldn't remember in what connection.

I now recall seeing it when I was looking for sysstat some time ago.

Thank you, Thomas.

Cheers,
Gary B-)