From: OGAWA Hirofumi
Nikanth Karthikesan <knikanth(a)novell.com> writes:

> I had a need to split a file into smaller files on a thumb drive with no
> free space on it or anywhere else in the system. When the filesystem
> supports sparse files (truncate_range), I could create the new files while
> punching holes in the original file. But when the underlying fs is FAT,
> I couldn't. Also, why should we do needless I/O when all I want is to
> split/join files? I.e., all the data is already on the disk, under the
> same filesystem; I just want to make some metadata changes.
>
> So I added two inode operations, namely split and join, that let me
> tell the OS that all I want is metadata changes, and the filesystem
> can avoid doing lots of I/O when only metadata changes are needed.
>
> sys_split(fd1, n, fd2)
> 1. Attach the data of the file after n bytes in fd1 to fd2.
> 2. Truncate fd1 to n bytes.
>
> This can roughly be thought of as equivalent to the following commands:
> 1. dd if=file1 of=file2 skip=n
> 2. truncate -c -s n file1
>
> sys_join(fd1, fd2)
> 1. Extend fd1 with the data of fd2.
> 2. Truncate fd2 to 0.
>
> This can roughly be thought of as equivalent to the following commands:
> 1. dd if=file2 of=file1 seek=`filesize file1`
> 2. truncate -c -s 0 file2
>
> Attached is the patch that adds these new syscalls and support for them
> to the FAT filesystem.
>
> I guess this approach could be extended to a splice()-like call between
> files, instead of pipes. On a COW fs, splice could simply set up blocks
> as shared between files instead of doing I/O. It would be a kind of
> explicit online data deduplication. Later, when a file modifies any of
> those blocks, we copy them, i.e. COW.
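
The proposed semantics can be emulated today with plain dd and truncate; a
minimal shell sketch of a split followed by a join (unlike the proposed
syscalls, this copies the data, and bs=1 makes dd's skip/seek count bytes):

```shell
set -e
cd "$(mktemp -d)"
printf 'hello world' > file1
n=5
# sys_split(fd1, n, fd2): move the data after byte n of file1 into file2
dd if=file1 of=file2 bs=1 skip=$n status=none
truncate -s $n file1
# sys_join(fd1, fd2): append file2 at the end of file1, then empty file2
dd if=file2 of=file1 bs=1 seek=$(stat -c %s file1) conv=notrunc status=none
truncate -s 0 file2
cat file1   # -> hello world
```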

[I'll just ignore the implementation for now, because the patch totally
ignores cache management.]

I have no objection to operations like these (likewise punching holes,
truncating an arbitrary range, etc.). However, only if someone has enough
motivation to implement and maintain them, AND there are real
users (i.e. a real, sane use case).

Otherwise, IMO it would be worse than nothing. Because, of course, once
such code is in, we can't ignore it anymore until it is removed
completely, e.g. for security reasons. And IMHO, the cache
management for such operations is not so easy.

Thanks.
--
OGAWA Hirofumi <hirofumi(a)mail.parknet.co.jp>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Nikanth Karthikesan
Hi OGAWA Hirofumi

Thanks a lot for looking at this and replying.

On Sunday 13 June 2010 17:12:57 OGAWA Hirofumi wrote:
> Nikanth Karthikesan <knikanth(a)novell.com> writes:
> > I had a need to split a file into smaller files on a thumb drive with no
> > free space on it or anywhere else in the system. When the filesystem
> > supports sparse files (truncate_range), I could create the new files while
> > punching holes in the original file. But when the underlying fs is FAT,
> > I couldn't. Also, why should we do needless I/O when all I want is to
> > split/join files? I.e., all the data is already on the disk, under the
> > same filesystem; I just want to make some metadata changes.
> >
> > So I added two inode operations, namely split and join, that let me
> > tell the OS that all I want is metadata changes, and the filesystem
> > can avoid doing lots of I/O when only metadata changes are needed.
> >
> > sys_split(fd1, n, fd2)
> > 1. Attach the data of the file after n bytes in fd1 to fd2.
> > 2. Truncate fd1 to n bytes.
> >
> > This can roughly be thought of as equivalent to the following commands:
> > 1. dd if=file1 of=file2 skip=n
> > 2. truncate -c -s n file1
> >
> > sys_join(fd1, fd2)
> > 1. Extend fd1 with the data of fd2.
> > 2. Truncate fd2 to 0.
> >
> > This can roughly be thought of as equivalent to the following commands:
> > 1. dd if=file2 of=file1 seek=`filesize file1`
> > 2. truncate -c -s 0 file2
> >
> > Attached is the patch that adds these new syscalls and support for them
> > to the FAT filesystem.
> >
> > I guess this approach could be extended to a splice()-like call between
> > files, instead of pipes. On a COW fs, splice could simply set up blocks
> > as shared between files instead of doing I/O. It would be a kind of
> > explicit online data deduplication. Later, when a file modifies any of
> > those blocks, we copy them, i.e. COW.
>
> [I'll just ignore the implementation for now, because the patch totally
> ignores cache management.]
>

Ok.

> I have no objection to operations like these (likewise punching holes,
> truncating an arbitrary range, etc.).

As far as FAT is concerned, wouldn't sparse files break the on-disk format?

> However, only if someone has enough
> motivation to implement and maintain them, AND there are real
> users (i.e. a real, sane use case).

I had a one-off use case, where I had no free space, which made me think along
these lines.

1. We have the GNU split tool, for example, which I guess many of us use to
split larger files to be transferred via smaller thumb drives. We cat the many
files back into one afterwards. [For this use case, one could simply dd with
seek and skip and avoid split/cat completely, but we don't.]

2. It could be useful for multimedia editing software that converts frames
into video/animation and vice versa.

3. It could be useful for archiving solutions.

4. It would make it easier to implement simple databases, and could even help
avoid needing a database at times. For example, to delete a row, split before
and after that row, then join, leaving the row out.
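
The row-deletion idea in point 4 can be sketched with today's tools, at the
cost of copying data; a minimal shell example (the file name and byte offsets
are illustrative, assuming fixed-width 5-byte rows):

```shell
set -e
cd "$(mktemp -d)"
printf 'row1\nrow2\nrow3\n' > table      # three 5-byte rows
# delete row2 (bytes 5-9): keep the part before it and the part after it
dd if=table of=front bs=1 count=5 status=none
dd if=table of=back  bs=1 skip=10 status=none
cat front back > table                   # "join", leaving the row out
tr '\n' '/' < table                      # -> row1/row3/
```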

So I thought this could be useful generally.

I was also thinking of facilities to add/remove bytes from/at any position in
the file. As you said, truncate any range, but also one that can increase the
file size, adding blocks even in the middle.
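
Without such a facility, adding bytes in the middle of a file means rewriting
everything after the insertion point; a minimal shell sketch of what
applications have to do today (file names and offsets are illustrative):

```shell
set -e
cd "$(mktemp -d)"
printf 'HEAD-TAIL' > f
# insert "MID-" at offset 5: save the tail, overwrite it, then re-append it
dd if=f of=tail.tmp bs=1 skip=5 status=none
printf 'MID-' | dd of=f bs=1 seek=5 conv=notrunc status=none
dd if=tail.tmp of=f bs=1 seek=9 conv=notrunc status=none
cat f   # -> HEAD-MID-TAIL
```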

IMO it is a kind of chicken-and-egg problem, where applications will only
start using these operations once they are available.

>
> Otherwise, IMO it would be worse than nothing. Because, of course, once
> such code is in, we can't ignore it anymore until it is removed
> completely, e.g. for security reasons. And IMHO, the cache
> management for such operations is not so easy.
>

Agreed.

Again, thanks for the comments.

Thanks
Nikanth
From: OGAWA Hirofumi
Nikanth Karthikesan <knikanth(a)novell.com> writes:

>> I have no objection to operations like these (likewise punching holes,
>> truncating an arbitrary range, etc.).
>
> As far as FAT is concerned, wouldn't sparse files break the on-disk format?

Yes. In the case of punching a hole on FAT, I guess it would either return
an error or emulate the hole by zero-filling.
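
Zero-filling is what userspace can already do by hand; a minimal shell sketch
of overwriting a range with zeros instead of deallocating it (file name and
offsets are illustrative, and the file size is unchanged):

```shell
set -e
cd "$(mktemp -d)"
printf 'abcdefgh' > f
# "punch" bytes 2-4 by overwriting them with zeros
dd if=/dev/zero of=f bs=1 seek=2 count=3 conv=notrunc status=none
tr '\0' '.' < f   # -> ab...fgh
```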

Thanks.
--
OGAWA Hirofumi <hirofumi(a)mail.parknet.co.jp>
From: David Pottage
On 15/06/10 11:41, Nikanth Karthikesan wrote:

> I had a one-off use case, where I had no free space, which made me
> think along these lines.
>
> 1. We have the GNU split tool, for example, which I guess many of us
> use to split larger files to be transferred via smaller thumb drives.
> We cat the many files back into one afterwards. [For this use case,
> one could simply dd with seek and skip and avoid split/cat completely,
> but we don't.]

I am not sure what you gain here, as either way you have to do I/O to get
the split files on and off the thumb drive. It might make sense if the
thumb drive is formatted with btrfs and the file needs to be copied to
another filesystem that can't handle large files (e.g. FAT-16), but I
would say that is unlikely.

> 2. It could be useful for multimedia editing software that converts
> frames into video/animation and vice versa.

Agreed, it would be very useful in this case, as it would save a lot of
I/O and time.

Video files are very big, so a simple edit removing a few minutes
here and there from an hour-long HD recording will involve copying many
gigabytes from one file to another. Imagine the time and disc space
saved if you could just make a COW copy of your source file(s),
cut out the portions you don't want, and join the parts you do
want together.

Your final edited file would take no extra disc space compared with
your source files, and though it would be fragmented, the fragments
would still be large compared with most files, so the performance
penalty of reading the file sequentially to play it would be small. Once
you decide you are happy with the final cut, you can delete the source
files and let some background defrag daemon tidy up the final file.

> 3. It could be useful for archiving solutions.

Agreed.

> 4. It would make it easier to implement simple databases, and could even
> help avoid needing a database at times. For example, to delete a row,
> split before and after that row, then join, leaving the row out.

I am not sure it would be useful in practice, as these days, if you
need a simple DB in a programming project, you just use SQLite (which
has an extremely liberal licence) and let it figure out how to store
your data on disc.

On the other hand, perhaps databases such as SQLite or MySQL would
benefit from this feature in their backend storage, especially
if large amounts of BLOB data are inserted or deleted?

> So I thought this could be useful generally.

Agreed. I think this would be very useful.

I have proposed this kind of thing in the past, and been shouted down
and told that it should be implemented in the userland program. However,
I think it is anachronistic that Unix filesystems have supported sparse
files since the dawn of time, originally to suit a particular way of
storing fixed-size records, but do not support growing or truncating
files except at the end.

> I was also thinking of facilities to add/remove bytes from/at any
> position in the file. As you said, truncate any range, but also one
> that can increase the file size, adding blocks even in the middle.
>
> IMO it is a kind of chicken-and-egg problem, where applications will
> only start using these operations once they are available.

I agree that it is a chicken-and-egg problem, but I think the
advantages for video editing are so large that the feature could
become a killer app, as it would improve performance so much.

--
David Pottage

Error compiling committee.c: Too many arguments to function.

From: Hubert Kario
On Tuesday 15 June 2010 17:16:06 David Pottage wrote:
> On 15/06/10 11:41, Nikanth Karthikesan wrote:
> > I had a one-off use case, where I had no free space, which made me
> > think along these lines.
> >
> > 1. We have the GNU split tool, for example, which I guess many of us
> > use to split larger files to be transferred via smaller thumb drives.
> > We cat the many files back into one afterwards. [For this use case,
> > one could simply dd with seek and skip and avoid split/cat completely,
> > but we don't.]
>
> I am not sure what you gain here, as either way you have to do I/O to get
> the split files on and off the thumb drive. It might make sense if the
> thumb drive is formatted with btrfs and the file needs to be copied to
> another filesystem that can't handle large files (e.g. FAT-16), but I
> would say that is unlikely.
>

But you only have to do half as much I/O with those features
implemented.

The old way is:
1. Have a file.
2. Split the file (in effect using twice as much drive space).
3. Copy the fragments to flash disks.

The btrfs way would be:
1. Have a file.
2. Split the file by using COW and referencing blocks in the original file
(in effect using only a little more space after splitting).
3. Copy the fragments to flash disks.

The amount of I/O in the second case is limited to metadata operations;
in the first case, all the data must be duplicated.
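
On reflink-capable filesystems such as btrfs, step 2 of the second list is
available from userspace via cp --reflink, which shares blocks instead of
copying them; a minimal sketch (--reflink=auto falls back to a plain data
copy on filesystems without COW support, so only the result size is shown):

```shell
set -e
cd "$(mktemp -d)"
dd if=/dev/zero of=big.bin bs=1024 count=8 status=none
# share big.bin's blocks where the fs supports it; copy the data otherwise
cp --reflink=auto big.bin part1.bin
truncate -s 4096 part1.bin               # keep only the first half
stat -c %s part1.bin                     # -> 4096
```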

--
Hubert Kario
QBS - Quality Business Software
02-656 Warszawa, ul. Ksawerów 30/85
tel. +48 (22) 646-61-51, 646-74-24
www.qbs.com.pl

Quality Management System
compliant with ISO 9001:2000