From: Mike Snitzer on
On Thu, Apr 8, 2010 at 10:09 AM, Jens Axboe <jens.axboe(a)oracle.com> wrote:
> On Thu, Apr 08 2010, Vivek Goyal wrote:
>> On Thu, Apr 08, 2010 at 01:04:42PM +0200, Jens Axboe wrote:
>> > On Wed, Apr 07 2010, Vivek Goyal wrote:
>> > > On Wed, Apr 07, 2010 at 05:18:12PM -0400, Jeff Moyer wrote:
>> > > > Hi again,
>> > > >
>> > > > So, here's another stab at fixing this. This patch is very much an RFC,
>> > > > so do not pull it into anything bound for Linus. ;-) For those new to
>> > > > this topic, here is the original posting: http://lkml.org/lkml/2010/4/1/344
>> > > >
>> > > > The basic problem is that, when running iozone on smallish files (up to
>> > > > 8MB in size) and including fsync in the timings, deadline outperforms
>> > > > CFQ by a factor of about 5 for 64KB files, and by about 10% for 8MB
>> > > > files. From examining the blktrace data, it appears that iozone will
>> > > > issue an fsync() call, and will have to wait until its CFQ timeslice
>> > > > has expired before the journal thread can run to actually commit data to
>> > > > disk.
>> > > >
>> > > > The approach below puts an explicit call into the filesystem-specific
>> > > > fsync code to yield the disk so that the jbd[2] process has a chance to
>> > > > issue I/O. This brings performance of CFQ in line with deadline.
>> > > >
>> > > > There is one outstanding issue with the patch that Vivek pointed out.
>> > > > Basically, this could starve out the sync-noidle workload if there is a
>> > > > lot of fsync-ing going on. I'll address that in a follow-on patch. For
>> > > > now, I wanted to get the idea out there for others to comment on.
>> > > >
>> > > > Thanks a ton to Vivek for spotting the problem with the initial
>> > > > approach, and for his continued review.
>> > > >
....
>> > > So we got to take care of two issues now.
>> > >
>> > > - Make it work with dm/md devices also. Somehow we shall have to propagate
>> > >   this yield semantic down the stack.
>> >
>> > The way that Jeff set it up, it's completely parallel to e.g. congestion
>> > or unplugging. So that should be easily doable.
>> >
>>
>> Ok, so various dm targets now need to define "yield_fn" and propagate the
>> yield call to all the component devices.
>
> Exactly.
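
Some context before my question: as I read Jeff's patch, the yield is
issued from the filesystem's fsync path, roughly like the sketch below.
The blk_yield() signature and the journal_task() helper here are my
guesses for illustration, not the actual code from the patch.

static int example_fsync(struct file *file, int datasync)
{
	struct inode *inode = file->f_mapping->host;
	struct request_queue *q = bdev_get_queue(inode->i_sb->s_bdev);
	int ret;

	/* Flush dirty pages for this file as fsync normally would. */
	ret = filemap_write_and_wait(inode->i_mapping);

	/*
	 * Give up the CFQ timeslice so the jbd[2] thread can issue the
	 * commit immediately instead of waiting for the idle to expire.
	 */
	blk_yield(q, journal_task(inode));

	return ret;
}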

To propagate that yield, doesn't DM (and MD) need a blk_queue_yield()
setter to establish its own yield_fn? The resulting dm_yield_fn would
call blk_yield() for all real devices in a given DM target, much like
blk_queue_merge_bvec() or blk_queue_make_request() allow DM to provide
functional extensions.
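
Something along these lines is what I have in mind; this is only a
sketch, and the yield_fn typedef, dm_yield_fn, and dm_table_yield()
below are hypothetical names, not existing code:

typedef void (yield_fn)(struct request_queue *q, struct task_struct *tsk);

/* Setter mirroring blk_queue_merge_bvec() and friends. */
void blk_queue_yield(struct request_queue *q, yield_fn *fn)
{
	q->yield_fn = fn;
}

/* DM would register this to fan the yield out to every table device. */
static void dm_yield_fn(struct request_queue *q, struct task_struct *tsk)
{
	struct mapped_device *md = q->queuedata;

	dm_table_yield(md, tsk);	/* hypothetical helper */
}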

I'm not seeing such a yield_fn hook for stacking drivers to use. And
as is, jbd and jbd2 just call blk_yield() directly and there is no way
for the block layer to call into DM.

What am I missing?

Thanks,
Mike
From: Jeff Moyer on
Mike Snitzer <snitzer(a)redhat.com> writes:

> [...]
>
> To propagate that yield, doesn't DM (and MD) need a blk_queue_yield()
> setter to establish its own yield_fn? The resulting dm_yield_fn would
> call blk_yield() for all real devices in a given DM target, much like
> blk_queue_merge_bvec() or blk_queue_make_request() allow DM to provide
> functional extensions.
>
> I'm not seeing such a yield_fn hook for stacking drivers to use. And
> as is, jbd and jbd2 just call blk_yield() directly and there is no way
> for the block layer to call into DM.
>
> What am I missing?

Nothing, it is I who am missing something (extra code). When I send out
the next version, I'll add the setter function and ensure that
queue->yield_fn is called from blk_yield. Hopefully that's not viewed
as upside down. We'll see.
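
Roughly what I have in mind, as a sketch only (elv_yield() is a
placeholder name here, and the final form may well differ):

void blk_yield(struct request_queue *q, struct task_struct *tsk)
{
	if (q->yield_fn) {
		/* A stacking driver (dm/md) propagates the yield itself. */
		q->yield_fn(q, tsk);
		return;
	}

	/* Otherwise yield directly via the elevator. */
	elv_yield(q, tsk);
}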

Thanks for the review, Mike!

-Jeff
From: KOSAKI Motohiro on
Hi Jeff,

> This patch series addresses a performance problem experienced when running
> iozone with small file sizes (from 4KB up to 8MB) and including fsync in
> the timings. A good example of this would be the following command line:
> iozone -s 64 -e -f /mnt/test/iozone.0 -i 0
> As the file sizes get larger, the performance improves. By the time the
> file size is 16MB, there is no difference in performance between runs
> using CFQ and runs using deadline. The storage in my testing was a NetApp
> array connected via a single fibre channel link. When testing against a
> single SATA disk, the performance difference is not apparent.

offtopic:

Can this patch series help reduce the pain of the following many-small-files issue?

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=578635

At the moment, userland folks think sync() is faster than fsync() on ext4.
I'd rather not see this ill-advised habit spread widely.



From: Jeff Moyer on
KOSAKI Motohiro <kosaki.motohiro(a)jp.fujitsu.com> writes:

> Hi Jeff,
>
>> [...]
>
> offtopic:
>
> Can this patch series help reduce the pain of the following many-small-files issue?
>
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=578635

Perhaps. I don't have a Debian system handy to test that, though.

Cheers,
Jeff
From: Jeff Moyer on
Christoph Hellwig <hch(a)infradead.org> writes:

> On Tue, Jun 22, 2010 at 05:34:59PM -0400, Jeff Moyer wrote:
>> Hi,
>>
>> When running iozone with the fsync flag, or fs_mark, the performance of
>> CFQ is far worse than that of deadline on enterprise-class storage when
>> dealing with file sizes of 8MB or less. I used the following command line
>> as a
>> representative test case:
>>
>> fs_mark -S 1 -D 10000 -N 100000 -d /mnt/test/fs_mark -s 65536 -t 1 -w 4096 -F
>>
>> When run using the deadline I/O scheduler, averaging the first 5 results
>> gives 448.4 files/second; CFQ manages only 106.7. With
>> this patch series applied (and the two patches I sent yesterday), CFQ now
>> achieves 462.5 files / second.
>>
>> This patch set is still an RFC. I'd like to make it perform better when
>> there is a competing sequential reader present. For now, I've addressed
>> the concerns voiced about the previous posting.
>
> What happened to the initial idea of just using the BIO_RW_META flag
> for log writes? In the end, log writes are the most important writes you
> have in a journaled filesystem, and they should not be subject to any
> kind of queue idling logic or other interruption. Log I/O is usually
> very small (unless you use old XFS code with a worst-case directory
> manipulation workload), and very latency sensitive.

Vivek showed that starting firefox in the presence of a process doing
fsyncs (using the RQ_META approach) took twice as long as without the
patch:
http://lkml.org/lkml/2010/4/6/276
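
For reference, the approach Christoph describes amounts to tagging
journal commit bios as metadata, roughly like the sketch below.
BIO_RW_META is the flag from kernels of this era; the function name is
made up for illustration:

static void journal_write_commit_bio(struct bio *bio)
{
	/* Mark as metadata so CFQ prioritizes it and skips idling. */
	bio->bi_rw |= (1 << BIO_RW_META);

	submit_bio(WRITE_SYNC, bio);
}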

Cheers,
Jeff