From: Christoph Hellwig
Can you try with the new barrier implementation in the

[PATCH, RFC] relaxed barriers

thread?  By making cache flushes just that, and not a complicated drain
barrier, it should speed this case up a lot.

From: Andreas Dilger
On 2010-08-09, at 15:53, Darrick J. Wong wrote:
> This patch attempts to coordinate barrier requests being sent in by fsync. Instead of each fsync call initiating its own barrier, there's now a flag to indicate if (0) no barriers are ongoing, (1) we're delaying a short time to collect other fsync threads, or (2) we're actually in-progress on a barrier.
>
> So, if someone calls ext4_sync_file and no barriers are in progress, the flag shifts from 0->1 and the thread delays for 500us to see if there are any other threads that are close behind in ext4_sync_file. After that wait, the state transitions to 2 and the barrier is issued. Once that's done, the state goes back to 0 and a completion is signalled.
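
Just to restate that flow as code (illustrative names only, and a wait
queue plus a generation count standing in for the completion; this is a
sketch of the description above, not the actual patch):

#include <linux/spinlock.h>
#include <linux/wait.h>
#include <linux/delay.h>

/*
 * 0 = idle, 1 = collecting nearby fsync callers, 2 = barrier in flight.
 */
static DEFINE_SPINLOCK(barrier_lock);
static int barrier_state;
static unsigned long barrier_gen;	/* bumped when a barrier finishes */
static DECLARE_WAIT_QUEUE_HEAD(barrier_wait);

static void coordinated_barrier(void)
{
	unsigned long gen;

	spin_lock(&barrier_lock);
	if (barrier_state == 0) {
		barrier_state = 1;		/* we become the leader */
		spin_unlock(&barrier_lock);

		usleep_range(500, 600);		/* collect other fsync threads */

		spin_lock(&barrier_lock);
		barrier_state = 2;
		spin_unlock(&barrier_lock);

		/* issue the flush/barrier here, e.g. blkdev_issue_flush() */

		spin_lock(&barrier_lock);
		barrier_state = 0;
		barrier_gen++;
		spin_unlock(&barrier_lock);
		wake_up_all(&barrier_wait);	/* "completion" for the waiters */
	} else {
		/* a barrier is being batched or issued; piggyback on it */
		gen = barrier_gen;
		spin_unlock(&barrier_lock);
		wait_event(barrier_wait, barrier_gen != gen);
	}
}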

You shouldn't use a fixed delay for the thread. 500us _seems_ reasonable if you have a single HDD. If you have an SSD or an NVRAM-backed array, then 2000 IOPS is a serious limitation.

What is done in the JBD2 code is to scale the commit sleep interval based on the average commit time. In fact, the ext4_force_commit->...->jbd2_journal_force_commit() call will itself be waiting in the jbd2 code to merge journal commits. It looks like we are duplicating some of this machinery in ext4_sync_file() already.

It seems like a better idea to have a single piece of code to wait to merge the IOs. For the non-journal ext4 filesystems it should implement the wait for merges explicitly, otherwise it should defer the wait to jbd2.
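
Roughly along these lines, purely as an illustration (untested, names made
up) of scaling the batching delay the way jbd2 scales its commit wait:

#include <linux/kernel.h>
#include <linux/time.h>

static u64 avg_barrier_ns;		/* running average of barrier times */

static void update_avg_barrier_time(u64 this_barrier_ns)
{
	if (!avg_barrier_ns)
		avg_barrier_ns = this_barrier_ns;
	else				/* weight the new sample by 1/8 */
		avg_barrier_ns = (7 * avg_barrier_ns + this_barrier_ns) >> 3;
}

static u64 barrier_batch_delay_ns(void)
{
	/* wait about half an average barrier, capped at the old 500us */
	return min_t(u64, avg_barrier_ns / 2, 500 * NSEC_PER_USEC);
}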

Cheers, Andreas

From: Darrick J. Wong
On Mon, Aug 09, 2010 at 05:19:22PM -0400, Andreas Dilger wrote:
> On 2010-08-09, at 15:53, Darrick J. Wong wrote:
> > This patch attempts to coordinate barrier requests being sent in by fsync.
> > Instead of each fsync call initiating its own barrier, there's now a flag
> > to indicate if (0) no barriers are ongoing, (1) we're delaying a short time
> > to collect other fsync threads, or (2) we're actually in-progress on a
> > barrier.
> >
> > So, if someone calls ext4_sync_file and no barriers are in progress, the
> > flag shifts from 0->1 and the thread delays for 500us to see if there are
> > any other threads that are close behind in ext4_sync_file. After that
> > wait, the state transitions to 2 and the barrier is issued. Once that's
> > done, the state goes back to 0 and a completion is signalled.
>
> You shouldn't use a fixed delay for the thread. 500us _seems_ reasonable if
> you have a single HDD. If you have an SSD or an NVRAM-backed array, then
> 2000 IOPS is a serious limitation.

2000 fsyncs per second, anyway. I wasn't explicitly trying to limit any other
types of IO.

> What is done in the JBD2 code is to scale the commit sleep interval based on
> the average commit time. In fact, the ext4_force_commit->
> ...->jbd2_journal_force_commit() call will itself be waiting in the jbd2 code
> to merge journal commits. It looks like we are duplicating some of this
> machinery in ext4_sync_file() already.

I actually picked 500us arbitrarily because it seemed to work, even for SSDs.
It was a convenient test vehicle, and not much more. That said, I like your
recommendation much better. I'll look into that.

> It seems like a better idea to have a single piece of code to wait to merge
> the IOs. For the non-journal ext4 filesystems it should implement the wait
> for merges explicitly, otherwise it should defer the wait to jbd2.

I wondered if this would be better off in the block layer than in ext4.
Though I suppose that would imply two kinds of flush: flush-immediately and
flush-shortly. I intend to try those flush drain elimination patches before
I think about this much more.
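
Purely as a hypothetical sketch of what "flush-shortly" could look like at
the interface level (no such flag exists in the block layer today, and the
names are made up):

#include <linux/blkdev.h>

enum flush_urgency {
	FLUSH_NOW,	/* issue the cache flush immediately */
	FLUSH_SOON,	/* allow the block layer to batch it briefly */
};

int issue_cache_flush(struct block_device *bdev, enum flush_urgency how);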

--D