From: Karsten Kruse on 3 Jan 2010 12:15
> Actually I'm not sure which tools do correctly handle files with holes.
GNU-tar does what one would expect with sparse files. So, if you are on
Linux you are lucky because you probably have GNU-tar, otherwise you
might have to install it yourself.
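A minimal sketch of what that looks like in practice (filenames here are placeholders): GNU tar's `-S` (`--sparse`) flag scans for holes when creating the archive, so a mostly-empty file doesn't balloon into gigabytes of literal zeros.

```shell
# Create a 100 MiB file that is all hole -- it occupies almost no disk space.
truncate -s 100M sparse.img

# -S (--sparse) makes GNU tar detect the holes and record only real data.
tar -S -cf sparse.tar sparse.img

# The resulting archive is tiny; on extraction the holes are recreated.
ls -l sparse.tar
```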
()   My homepage is http://www.tecneeq.de/ and your homepage sucks!
_/\_ Unless it has animated gifs from 1996, then it rocks!
From: jellybean stonerfish on 3 Jan 2010 15:43
On Sun, 03 Jan 2010 03:36:32 -0500, Wayne wrote:
> superpollo wrote:
>> Seebs ha scritto:
>>> And you're right about the 4GB limit -- that's an issue, for sure.
>> isn't it possible to combine the tar approach with -- say -- 'split' or
>> 'dd' and 'cat' and then have multiple <4GB chunks to be restitched?
> I don't see how split helps. Tar still thinks it is creating a
> too-large archive and will choke. Or, if the source system's tar can
> handle large archives, the tar at the destination will choke after
> combining the chunks.
You don't combine the chunks into a file. You combine them with 'cat'
and pipe the output through 'tar' to un-archive. 'tar' works on a stream of
characters. Un-archiving a huge archive is not a problem.
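A sketch of that pipeline (directory and chunk names are made up): the
archive is split into fixed-size pieces and never reassembled into one
large file; 'cat' streams the pieces straight into 'tar'.

```shell
# Archive a directory and split the stream into 1 MiB chunks
# (no single large archive file is ever written):
mkdir -p src && echo "hello" > src/file.txt
tar -cf - src | split -b 1M - chunk.

# On the receiving side, restitch and unpack in one stream:
mkdir -p out
cat chunk.* | tar -xf - -C out
```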
> You need a modern tar (and/or cpio) at both ends
> if your archives are going to be large. Other issues can limit
> portability too, such as whether or not Unicode filenames are supported
> (and the charset they're stored in within the archive; else the names
> may be converted if you have a different locale set at each end).
> Split is useful when you need to send large archives (or other files) as
> email attachments when your email system imposes a size limit per
> attachment and/or per email. Otherwise look into Gnu tar's multi-volume
> archive support.
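The multi-volume feature mentioned above works roughly like this (a
sketch with made-up names and sizes): GNU tar's `-M` splits the archive
into volumes itself, with `-L` giving the volume size in units of 1024
bytes, and repeating `-f` supplies the volumes up front.

```shell
# A 1.5 MiB input file:
dd if=/dev/zero of=big.bin bs=1024 count=1500 2>/dev/null

# Create a multi-volume archive; each volume holds at most 1000 KiB.
# Naming both volumes with -f means tar never has to prompt for the next one.
tar -c -M -L 1000 -f vol1.tar -f vol2.tar big.bin

# Restore from the volume set:
tar -x -M -f vol1.tar -f vol2.tar
```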
From: jellybean stonerfish on 4 Jan 2010 01:08
On Sun, 03 Jan 2010 20:43:32 +0000, jellybean stonerfish wrote:
> You don't combine the chunks into a file. You combine them with 'cat'
> and pipe the output through 'tar' to un-archive. 'tar' works on a
> stream of characters. Un-archiving a huge archive is not a problem.
Or so I've been told.
From: Ben Finney on 4 Jan 2010 02:20
jellybean stonerfish <stonerfish(a)geocities.com> writes:
> On Sun, 03 Jan 2010 20:43:32 +0000, jellybean stonerfish wrote:
> > You don't combine the chunks into a file. You combine them with
> > 'cat' and pipe the output through 'tar' to un-archive. 'tar' works
> > on a stream of characters. Un-archiving a huge archive is not a
> > problem.
> Or so I've been told.
Decades of confusion propagated by the C language doesn't change the
fact that characters are not bytes, and vice versa :-)
\ “I may disagree with what you say, but I will defend to the |
`\ death your right to mis-attribute this quote to Voltaire.” |
_o__) —Avram Grumer, rec.arts.sf.written, May 2000 |
From: Kaz Kylheku on 6 Jan 2010 16:01
On 2010-01-02, Eze <garzon.lucero(a)gmail.com> wrote:
> I see the point of using tar in order to, say, send a whole directory
> as a single attachment file. I realize historically this "tape
> archive" utility may have been needed for some technical reasons. But
> I don't see the advantages of using tar to back up some files and copy
> them to some external storage device. Couldn't you just cp? The files
> would already be untarred...
cp requires a filesystem, which is a structured organization of data.
tar uses ... a structured organization of data. Hmm!
The difference is that tar's data structure is not mounted by your kernel
into the usual filesystem space, so you are using a different ``API''.
But that can be changed with a suitable piece of software, like
tar support in the kernel, or through a user space layer like Linux's FUSE.
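Even without a mount you can already use tar's ``API'' directly: GNU
tar's `-O` (`--to-stdout`) reads a single member straight out of the
archive stream (names below are placeholders; mounting an archive as a
real filesystem is what FUSE tools such as archivemount provide).

```shell
# Same bytes, different API: put a file into a tar archive ...
mkdir -p demo && echo "payload" > demo/note.txt
tar -cf demo.tar demo

# ... then read that one member back through tar itself, to stdout,
# with no filesystem mount involved:
tar -xOf demo.tar demo/note.txt
```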