From: Christoph Hellwig
On Mon, Aug 02, 2010 at 05:09:39PM -0700, Darrick J. Wong wrote:
> Well... on my fsync-happy workloads, this seems to cut the barrier count down
> by about 20%, and speeds it up by about 20%.

Care to share the test case for this? I'd be especially interested in
how it behaves with non-draining barriers / cache flushes in fsync.

From: Avi Kivity
On 06/30/2010 03:48 PM, tytso@mit.edu wrote:
>
> I wonder if it's worthwhile to think about a new system call which
> allows users to provide an array of fd's which should collectively
> be fsync'ed out at the same time. Otherwise, we end up issuing
> multiple barrier operations in cases where the application needs to
> do:
>
> fsync(control_fd);
> fsync(data_fd);
>

The system call exists, it's called io_submit().
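
Roughly, and assuming the libaio wrappers (io_prep_fsync() is the
libaio helper that sets up an IO_CMD_FSYNC iocb; whether a given
filesystem actually implements the aio fsync hook is a separate
question), a sketch of batching the two fsyncs would look like:

	#include <libaio.h>

	/* Sketch only: one IO_CMD_FSYNC per fd, one submission. */
	int batch_fsync(int control_fd, int data_fd)
	{
		io_context_t ctx = 0;
		struct iocb cb[2];
		struct iocb *cbs[2] = { &cb[0], &cb[1] };
		struct io_event ev[2];
		int ret;

		if (io_setup(2, &ctx) < 0)
			return -1;

		io_prep_fsync(&cb[0], control_fd);
		io_prep_fsync(&cb[1], data_fd);

		ret = io_submit(ctx, 2, cbs);
		if (ret == 2)
			/* Wait for both fsyncs to complete. */
			ret = io_getevents(ctx, 2, 2, ev, NULL);

		io_destroy(ctx);
		return ret < 0 ? -1 : 0;
	}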

--
error compiling committee.c: too many arguments to function

From: Jan Kara
On Mon 02-08-10 17:09:39, Darrick J. Wong wrote:
> On Wed, Jul 21, 2010 at 07:16:09PM +0200, Jan Kara wrote:
> > Hi,
> >
> > > On Wed, Jun 30, 2010 at 09:21:04AM -0400, Ric Wheeler wrote:
> > > >
> > > > The problem with not issuing a cache flush when you have dirty
> > > > metadata or data is that it is not tied to the state of the
> > > > volatile write cache of the target storage device.
> > >
> > > We track whether there are any metadata updates associated with
> > > the inode already; if there are, we force a journal commit, and this
> > > implies a barrier operation.
> > >
> > > The case we're talking about here is one where either (a) there is no
> > > journal, or (b) there have been no metadata updates (I'm simplifying a
> > > little here; in fact we track whether there have been fdatasync()- vs
> > > fsync()- worthy metadata updates), and so there hasn't been a journal
> > > commit to do the cache flush.
> > >
> > > In this case, we want to track when the last fsync() was issued,
> > > versus when the data blocks for a particular inode were last pushed
> > > out to disk.
> > >
> > > To reuse the example I gave as motivation for why we might want an
> > > fsync2(int fd[], int flags[], int num) syscall, consider the situation
> > > of:
> > >
> > > fsync(control_fd);
> > > fdatasync(data_fd);
> > >
> > > The first fsync() will have executed a cache flush operation. So when
> > > we do the fdatasync() (assuming that no metadata needs to be flushed
> > > out to disk), there is no need for a second cache flush operation.
> > >
> > > If we had an enhanced fsync command, we would also be able to
> > > eliminate a second journal commit in the case where data_fd also had
> > > some metadata that needed to be flushed out to disk.
> > The current implementation already avoids the journal commit for
> > fdatasync(data_fd). We remember the ID of the transaction in which the
> > inode metadata was last updated and do not force a commit if that
> > transaction has already committed. Thus the first fsync might force a
> > transaction commit, but the second fdatasync likely won't.
> > We could actually improve the scheme to work for data as well. I wrote
> > proof-of-concept patches (attached) which nicely avoid the second barrier
> > when doing:
> > echo "aaa" >file1; echo "aaa" >file2; fsync file2; fsync file1
> >
> > Ted, would you be interested in something like this?
>
> Well... on my fsync-happy workloads, this seems to cut the barrier count down
> by about 20%, and speeds it up by about 20%.
Nice, thanks for the measurement.
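
Roughly, the idea is the following (the names below are made up for
illustration, not the actual code; the real tracking uses the jbd2
transaction IDs):

	/*
	 * Sketch: remember, per inode, the transaction that covered the
	 * last flush of its data, and skip the cache flush if that
	 * transaction has already committed -- a committed transaction
	 * implies its cache flush has already reached the device.
	 */
	int inode_needs_cache_flush(struct inode *inode)
	{
		/* made-up helper */
		tid_t tid = inode_last_flush_tid(inode);

		/* made-up helper */
		return !tid_already_committed(inode->i_sb, tid);
	}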

Honza
--
Jan Kara <jack@suse.cz>
SUSE Labs, CR
From: Ted Ts'o
On Tue, Aug 03, 2010 at 04:24:49PM +0300, Avi Kivity wrote:
> On 06/30/2010 03:48 PM, tytso@mit.edu wrote:
> >
> >I wonder if it's worthwhile to think about a new system call which
> >allows users to provide an array of fd's which should collectively
> >be fsync'ed out at the same time. Otherwise, we end up issuing
> >multiple barrier operations in cases where the application needs to
> >do:
> >
> > fsync(control_fd);
> > fsync(data_fd);
> >
>
> The system call exists, it's called io_submit().

Um, not the same thing at all.

- Ted
From: Avi Kivity
On 08/05/2010 02:32 AM, Ted Ts'o wrote:
> On Tue, Aug 03, 2010 at 04:24:49PM +0300, Avi Kivity wrote:
>> On 06/30/2010 03:48 PM, tytso@mit.edu wrote:
>>> I wonder if it's worthwhile to think about a new system call which
>>> allows users to provide an array of fd's which should collectively
>>> be fsync'ed out at the same time. Otherwise, we end up issuing
>>> multiple barrier operations in cases where the application needs to
>>> do:
>>>
>>> fsync(control_fd);
>>> fsync(data_fd);
>>>
>> The system call exists, it's called io_submit().
> Um, not the same thing at all.

Why not? To be clear, I'm talking about an io_submit() with multiple
IO_CMD_FSYNC requests, with a kernel implementation that is able to
batch these requests.
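
Schematically (pure pseudocode, none of these helpers exist in this
form), the point of batching is that writeback for every file in the
batch happens first, and a single cache flush then covers all of them:

	/* Pseudocode only, not actual kernel code. */
	void aio_fsync_batch(struct file *files[], int n)
	{
		int i;

		/* Write out and wait on dirty pages for every file... */
		for (i = 0; i < n; i++)
			filemap_write_and_wait(files[i]->f_mapping);

		/* ...then one flush covers the whole batch, instead of
		 * one flush per fsync(). */
		blkdev_issue_flush(batch_common_bdev(files, n)); /* made up */
	}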

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
