From: cindy on
On Jul 1, 8:55 am, "Gary R. Schmidt" <grschm...(a)acm.org> wrote:
> I was just wondering, since I'm about to start rolling new disks into my
> ZFS array, and as a result doubling the size of the zpool and once again
> not being able to back it up onto a single chunk of whatever without
> doing things bit-by-bit, did anyone ever do anything about solving the
> "zfs send to /dev/rmt/0 doesn't work" problem?
>
> Because if they didn't, I may be forced to actually write some real code
> for a change and solve it...
>
> What I am thinking of is basically appropriating the tape-handling code
> from tar/cpio/whatever and using that in a wrapper around zfs send and
> receive, like this:
>     magic_program -r "zfs send -R pool@snapshot" -f /dev/rmt/0
> to put a zfs stream onto a tape, and:
>     magic_program -w "zfs receive -Fud filesystem" -f /dev/rmt/0
> to restore a zpool from a stream.
>
> So my magic program would chunk up the data from "zfs send" and put
> headers/footers/checksums and stuff like that on it, write/read the
> resulting blocks to/from the device, and, when a write/read fails, ask
> for a new "tape" and carry on from there.
>
> Does this make sense?  Or am I suffering from Friday-itis?
>
>         Cheers,
>                 Gary    B-)
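Gary's proposed wrapper can be sketched roughly as below. This is only an illustration, not his actual program: ordinary files stand in for tapes, a random file stands in for the "zfs send" stream, and the chunk size, file names, and use of split(1)/sha256sum(1) are my assumptions.

```shell
#!/bin/sh
# Sketch of the "magic_program" write path: chunk a stream into
# fixed-size blocks, checksum each block, write each block to its own
# "tape" (here: ordinary files tape.000, tape.001, ...), then verify
# and reassemble. Assumes POSIX shell plus split(1) and sha256sum(1).
set -e

TAPE_SIZE=1048576            # pretend each tape holds 1 MiB

# Stand-in for "zfs send -R pool@snapshot": any byte stream will do.
dd if=/dev/urandom of=stream.bin bs=65536 count=64 2>/dev/null

# Whole-stream checksum, kept for restore-time verification.
sha256sum stream.bin | awk '{print $1}' > stream.sha256

# Chunk the stream into tape-sized pieces, one checksum per "tape".
split -b "$TAPE_SIZE" -d -a 3 stream.bin tape.
for t in tape.*; do
  sha256sum "$t" >> tapes.sha256
done

# Restore path: verify every tape, reassemble, compare to the original.
sha256sum -c tapes.sha256 >/dev/null
cat tape.* > restored.bin
[ "$(sha256sum restored.bin | awk '{print $1}')" = "$(cat stream.sha256)" ] \
  && echo "restore OK"
```

A real version would read the stream from a pipe, prepend a small header per block, and prompt for a new tape when a write fails, rather than pre-splitting a file on disk.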

Hi Gary,

It's only Thursday, but a good topic.

Someone on zfs-discuss posted a zfsdump tool that may give you some
more ideas:

http://www.quantmodels.co.uk/zfsdump/

Or, look at the Amanda or Bacula links on the ZFS BP site:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

If you have additional disks and are running a current OpenSolaris
build, like build 131, you might consider the zpool split feature,
where you split a pool and move it to another system. I'm not saying
this would replace snapshots or backing up data, but it provides a
pool replication method.

Thanks,

Cindy

From: Thommy M. on
cindy <cindy.swearingen(a)sun.com> writes:

> It's only Thursday, but a good topic.

Don't be too sure about that. Gary might live elsewhere than you do... ;)
From: Gary R. Schmidt on
cindy wrote:
> On Jul 1, 8:55 am, "Gary R. Schmidt" <grschm...(a)acm.org> wrote:
[SNIP]
> It's only Thursday, but a good topic.
Ta.

> Someone on zfs-discuss posted a zfsdump tool that may give you some
> more ideas:
>
> http://www.quantmodels.co.uk/zfsdump/
Interesting - but it looks like you need n storage locations, rather
than a single output.


> Or, look at the Amanda or Bacula links on the ZFS BP site:
I use Bacula at work; it needs a custom file daemon to be written to use
snapshots, unless I've missed something. (I mean using snapshots to back
up filesystems - it can back up ".zfs" directory trees quite happily,
but that's not quite what I want[1].)

I've also noticed there is/was an NDMP project on the OpenSolaris site;
I may look into that. Does anyone know of a free NDMP backup tool?

Cheers,
Gary B-)

1 - This needs some more explanation.
If a file system looks like:
/a/b/c
/a/b/.zfs/backup
then it is trivial to back up /a/b/.zfs/backup, but there is no easy
mechanism to restore /a/b/c from that backup. It is doable, but it is a
manual task.
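The manual restore in the footnote looks something like the sketch below. Since `.zfs` is a virtual, read-only directory that only exists on a real ZFS filesystem, ordinary directories mock the layout here; the paths and file contents are illustrative only.

```shell
#!/bin/sh
# Mock the layout from the footnote: a/b/c is the live file location,
# a/b/.zfs/backup holds the snapshot copy. On real ZFS the .zfs tree
# is virtual and read-only; plain directories stand in for it here.
set -e
mkdir -p a/b/.zfs/backup
echo "original contents" > a/b/.zfs/backup/c

# The "manual task": copy the file back out of the snapshot tree.
cp a/b/.zfs/backup/c a/b/c
cat a/b/c
```

The point of the footnote stands: nothing maps a path inside the backup of the `.zfs` tree back to its live location automatically, so each restore is a hand-written copy like this.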
From: Gary R. Schmidt on
Thommy M. wrote:
> cindy <cindy.swearingen(a)sun.com> writes:
>
>> It's only Thursday, but a good topic.
>
> Don't be too sure about that. Gary might live elsewhere than you do... ;)
Oh, I'm closer to the "today starts now" side of the International Date
Line, it was Thursday when I posted!

Cheers,
Gary B-)
From: Tristram Scott on
In alt.solaris.x86 Gary R. Schmidt <grschmidt(a)acm.org> wrote:
> cindy wrote:
>> On Jul 1, 8:55 am, "Gary R. Schmidt" <grschm...(a)acm.org> wrote:
> [SNIP]
>> It's only Thursday, but a good topic.
> Ta.
>
>> Someone on zfs-discuss posted a zfsdump tool that may give you some
>> more ideas:
>>
>> http://www.quantmodels.co.uk/zfsdump/
> Interesting - but it looks like you need n storage locations, rather
> than a single output.
>

Yes, I haven't implemented end-of-media detection yet. Instead, you
declare the tape size on the command line. It will write the first tape,
then return. You change tapes, and call zfsdump again to write the second
tape, and so on until the dump is complete.

I have it reporting checksums, per tape and for the stream as a whole, but
don't incorporate any error correction code. My experience with tape is
that errors are very infrequent, and almost always seem to show up on
write.
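The write-a-tape, swap, call-again flow can be emulated with ordinary files as tapes. To be clear, this is my sketch of how such a tool might work, not zfsdump's actual implementation; the tape size, function name, and dd-based chunking are all assumptions.

```shell
#!/bin/sh
# Emulate one-"tape"-per-invocation dumping: each call extracts chunk N
# of the stream, and the operator swaps tapes between invocations.
set -e

# write_tape <stream> <tape-size-bytes> <tape-number> <device>
write_tape() {
  stream=$1; size=$2; n=$3; dev=$4
  # Skip n tape-sized chunks into the stream, then write one chunk.
  dd if="$stream" of="$dev" bs="$size" skip="$n" count=1 2>/dev/null
  # Report a per-tape checksum, as zfsdump does.
  sha256sum "$dev"
}

# Demo: a 3-"tape" stream with 1 KiB tapes.
dd if=/dev/urandom of=stream.bin bs=1024 count=3 2>/dev/null
write_tape stream.bin 1024 0 tape0   # operator loads tape 1
write_tape stream.bin 1024 1 tape1   # swap, write tape 2
write_tape stream.bin 1024 2 tape2   # swap, write tape 3

# Restore: concatenate the tapes in order and verify the stream.
cat tape0 tape1 tape2 > restored.bin
cmp stream.bin restored.bin && echo "stream reassembled intact"
```

With a real pipe from zfs send the stream cannot be re-read with `skip=`, which is why the declared-tape-size approach has to consume the stream sequentially across invocations.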

If you are after an archive solution, i.e. put your filesystem on tape and
keep it there for many years, then I would suggest tar or similar, because
a single error won't corrupt the entire archive, just the file it relates
to.

For a disaster recovery backup, I am happy with zfs send / receive, which
is what I have built zfsdump around. I like to backup to tape so it is
physically removed from the system, not prone to lightning strikes, etc.
Chances of a catastrophic hardware failure (or operator error) are small,
but it does happen. Chances of tape corruption are also small, but this
also does happen. The key thing here, though, is the independence of
the two failures. What circumstances contribute to a higher probability
of disk failure? Do those circumstances also mean a higher probability
of tape error?

For partial disasters - e.g. I just deleted the wrong file or directory
- I restore from a zfs snapshot without any need to refer to tape.

The zfs send method allows dumping of the complete zfs filesystem,
including ACLs and snapshots. After disaster recovery, I still have the
ability to roll back to last week or last month. An archive using tar
won't give you that. I understand that using star instead of tar you can
get close to this, so maybe that is something to look into in more detail.

Anyway, if your problem is that you want to split a zfs send stream across
several tapes, in sequence (not in parallel) then zfsdump will do this for
you. If you are keen to write something more robust, please let me know of
your progress.

My opinion is the same as it was when Sun first announced zfs. If they want
people to take zfs seriously, then they should provide a backup solution to
go with it. This is something that Sun should be implementing themselves.
I buy their servers, with their disk arrays and their tape drives. I run
their operating system, and I backup to tape media which they sell me. I
even buy their support contracts. The bit that is missing is something
from them to allow me to dump and restore zfs to tape(s). It just leaves
zfs with an unfinished feel to it.


--
Dr Tristram J. Scott
Energy Consultant