From: Nick Piggin
Hi,

On Tue, Mar 16, 2010 at 08:31:12AM -0700, Ben Gamari wrote:
> Hey all,
>
> Recently I started using the Xapian-based notmuch mail client for everyday
> use. One of the things I was quite surprised by after the switch was the
> incredible hit in interactive performance that is observed during database
> updates. Things are particularly bad during runs of 'notmuch new,' which scans
> the file system looking for new messages and adds them to the database.
> Specifically, the worst of the performance hit appears to occur when the
> database is being updated.
>
> During these periods, even small chunks of I/O can become minute-long ordeals.
> It is common for latencytop to show 30 second long latencies for page faults
> and writing pages. Interactive performance is absolutely abysmal, with other
> unrelated processes feeling horrible latencies, causing media players,
> editors, and even terminals to grind to a halt.
>
> Despite the system being clearly I/O bound, iostat shows pitiful disk
> throughput (700kByte/second read, 300 kByte/second write). Certainly this poor
> performance can, at least to some degree, be attributable to the fact that
> Xapian uses fdatasync() to ensure data consistency. That being said, it seems
> like Xapian's page usage causes horrible thrashing, hence the performance hit
> on unrelated processes.

Where are the unrelated processes waiting? Can you get a sample of
several backtraces? (/proc/<pid>/stack should do it)
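
Something like this (an untested sketch; /proc/<pid>/stack needs
CONFIG_STACKTRACE) should dump the kernel stacks of everything stuck in
uninterruptible sleep while the stalls happen:

  # dump kernel stacks of all tasks currently in D state
  for pid in $(ps -eo pid=,stat= | awk '$2 ~ /^D/ {print $1}'); do
      echo "=== pid $pid ($(ps -p $pid -o comm=)) ==="
      cat /proc/$pid/stack
  done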


> Moreover, the hit on unrelated processes is so bad
> that I would almost suspect that swap I/O is being serialized by fsync() as
> well, despite being on a separate swap partition beyond the control of the
> filesystem.

It shouldn't be, until it reaches the bio layer. If it is on the same
block device, it will still fight for access. It could also be blocking
on dirty data thresholds, or page reclaim though -- writeback and
reclaim could easily be getting slowed down by the fsync activity.

Swapping tends to cause fairly nasty disk access patterns, and combined
with fsync it could be pretty much unavoidable.
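
One way to see whether the stalls line up with the dirty thresholds or
with reclaim-driven writeback, as a rough sketch:

  # dirty/writeback pages and reclaim counters, sampled every second
  watch -n1 'grep -E "nr_dirty|nr_writeback|pgscan|pgsteal" /proc/vmstat'
  # the thresholds currently in effect
  cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio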

>
> Xapian, however, is far from the first time I have seen this sort of
> performance cliff. Rsync, which also uses fsync(), can also trigger this sort
> of thrashing during system backups, as can rdiff. slocate's updatedb
> absolutely kills interactive performance as well.
>
> Issues similar to this have been widely reported[1-5] in the past, and despite
> many attempts[5-8] within both I/O and memory managements subsystems to fix
> it, the problem certainly remains. I have tried reducing swappiness from 60 to
> 40, with some small improvement and it has been reported[20] that these sorts
> of symptoms can be negated through use of memory control groups to prevent
> interactive process pages from being evicted.

So the workload is causing quite a lot of swapping as well? How much
pagecache do you have? It could be that you have too much pagecache and
it is pushing out anonymous memory too easily, or you might have too
little pagecache causing suboptimal writeout patterns (possibly writeout
from page reclaim rather than asynchronous dirty page cleaner threads,
which can really hurt).
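
A quick way to check both, as a rough sketch (standard /proc interfaces
and procps tools, nothing exotic):

  # pagecache vs. anonymous/swap usage
  grep -E '^(MemTotal|MemFree|Cached|Dirty|Writeback|SwapTotal|SwapFree)' /proc/meminfo
  # the si/so columns show swap-in/swap-out while the workload runs
  vmstat 1

Nonzero si/so during the stalls would confirm that swap is actually
involved.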

Thanks,
Nick

From: Ingo Molnar

* Nick Piggin <npiggin(a)suse.de> wrote:

> Hi,
>
> On Tue, Mar 16, 2010 at 08:31:12AM -0700, Ben Gamari wrote:
> > Hey all,
> >
> > Recently I started using the Xapian-based notmuch mail client for everyday
> > use. One of the things I was quite surprised by after the switch was the
> > incredible hit in interactive performance that is observed during database
> > updates. Things are particularly bad during runs of 'notmuch new,' which scans
> > the file system looking for new messages and adds them to the database.
> > Specifically, the worst of the performance hit appears to occur when the
> > database is being updated.
> >
> > During these periods, even small chunks of I/O can become minute-long ordeals.
> > It is common for latencytop to show 30 second long latencies for page faults
> > and writing pages. Interactive performance is absolutely abysmal, with other
> > unrelated processes feeling horrible latencies, causing media players,
> > editors, and even terminals to grind to a halt.
> >
> > Despite the system being clearly I/O bound, iostat shows pitiful disk
> > throughput (700kByte/second read, 300 kByte/second write). Certainly this poor
> > performance can, at least to some degree, be attributable to the fact that
> > Xapian uses fdatasync() to ensure data consistency. That being said, it seems
> > like Xapian's page usage causes horrible thrashing, hence the performance hit
> > on unrelated processes.
>
> Where are the unrelated processes waiting? Can you get a sample of several
> backtraces? (/proc/<pid>/stack should do it)

A call-graph profile will show the precise reason for IO latencies, and
their relative likelihood.

It's really simple to do it with a recent kernel. Firstly, enable
CONFIG_BLK_DEV_IO_TRACE=y, CONFIG_EVENT_PROFILE=y:

Kernel performance events and counters (PERF_EVENTS) [Y/?] y
Tracepoint profiling sources (EVENT_PROFILE) [Y/n/?] y
Support for tracing block IO actions (BLK_DEV_IO_TRACE) [N/y/?] y

(boot into this kernel)

Then build perf via:

cd tools/perf/
make -j install

and then capture 10 seconds of the DB workload:

perf record -f -g -a -e block:block_rq_issue -c 1 sleep 10

[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.251 MB perf.data (~10977 samples) ]

and look at the call-graph output:

perf report

# Samples: 5
#
# Overhead Command Shared Object Symbol
# ........ ............... ................. ......
#
80.00% kjournald [kernel.kallsyms] [k] perf_trace_block_rq_issue
|
--- perf_trace_block_rq_issue
scsi_request_fn
|
|--50.00%-- __blk_run_queue
| cfq_insert_request
| elv_insert
| __elv_add_request
| __make_request
| generic_make_request
| submit_bio
| submit_bh
| sync_dirty_buffer
| journal_commit_transaction
| kjournald
| kthread
| kernel_thread_helper
|
--50.00%-- __generic_unplug_device
generic_unplug_device
blk_unplug
blk_backing_dev_unplug
sync_buffer
__wait_on_bit
out_of_line_wait_on_bit
__wait_on_buffer
wait_on_buffer
journal_commit_transaction
kjournald
kthread
kernel_thread_helper

20.00% as [kernel.kallsyms] [k] perf_trace_block_rq_issue
|
--- perf_trace_block_rq_issue
scsi_request_fn
__generic_unplug_device
generic_unplug_device
blk_unplug
blk_backing_dev_unplug
page_cache_async_readahead
generic_file_aio_read
do_sync_read
vfs_read
sys_read
system_call_fastpath
0x39f8ad4930


This (very simple) example had 80% of the IO in kjournald and 20% of it in
'as'. The precise call-paths of IO issues are visible.

For general scheduler context-switch events you can use:

perf record -f -g -a -e context-switches -c 1 sleep 10

See 'perf list' for all available events.
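
If the full call graphs are too noisy, this (a sketch) collapses the
report into a quick per-task overview first:

  # aggregate samples by command and DSO
  perf report --sort comm,dso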

Thanks,

Ingo
From: Pawel S
Hello

I am experiencing a very similar issue. My system is a regular desktop
PC and it suffers from very high I/O latencies (sometimes the desktop
"hangs" for eight seconds or more) when copying large files. I tried
kernels up to 2.6.34-rc2, but without luck. This issue was raised on the
Phoronix forums, and Arjan (from Intel) noticed it can be VM-related:

http://www.phoronix.com/forums/showpost.php?p=114975&postcount=51

Here is my perf timechart, where you can see that I/O "steals" CPU from
the other tasks:

http://hotfile.com/dl/30596827/ebe566b/output.svg.gz.html
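
In case it helps with reproducing: I generated the chart roughly like
this (from memory, so the exact invocation may differ):

  perf timechart record sleep 10   # record while the copy is running
  perf timechart                   # writes output.svg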

Regards!

P.S. If there is some way I can help more, please just let me know.
From: Jens Axboe
On Tue, Mar 23 2010, Pawel S wrote:
> Hello
>
> I am experiencing very similar issue. My system is a regular desktop
> PC and it suffers from very high I/O latencies (sometimes desktop
> "hangs" for eight seconds or more) when copying large files. I tried
> kernels up to 2.6.34-rc2, but without luck. This issue was raised at
> Phoronix forums and Arjan (from Intel) noticed it can be VM related:
>
> http://www.phoronix.com/forums/showpost.php?p=114975&postcount=51
>
> Here is my perf timechart where you can notice I/O "steals" CPU from
> the other tasks:
>
> http://hotfile.com/dl/30596827/ebe566b/output.svg.gz.html

It's also been my sneaking suspicion that swap is involved. I have lots
of RAM in everything I use, even the laptop and workstation. I'll try to
run some tests with lower memory to force it into swap; I've seen nasty
hangs that way.
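
Probably something along these lines -- a sketch, not a tested recipe:

  # boot with mem=512M on the kernel command line to constrain RAM,
  # then generate writeback and memory pressure in parallel:
  dd if=/dev/zero of=bigfile bs=1M count=4096 &
  perl -e '$x = "x" x (768 * 1024 * 1024); sleep 120' &

The perl one-liner is just an illustrative way to push anonymous pages
into swap while the dd dirties pagecache.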

--
Jens Axboe

From: Jesper Krogh
Ben Gamari wrote:
> Hey all,
>
> Recently I started using the Xapian-based notmuch mail client for everyday
> use. One of the things I was quite surprised by after the switch was the
> incredible hit in interactive performance that is observed during database
> updates. Things are particularly bad during runs of 'notmuch new,' which scans
> the file system looking for new messages and adds them to the database.
> Specifically, the worst of the performance hit appears to occur when the
> database is being updated.

I would suggest that you include a 2.6.31 kernel in your testing. I have
seen what look like "huge" stalls in 2.6.32, but I haven't been able to
dig into them to find out more.

In 2.6.32 I have seen IO-wait numbers around 80% on a 16-core machine
with 128GB of memory, and load numbers over 120, under workloads that
didn't make 2.6.31 sweat at all.

The filesystems are a mixture of ext3 and ext4 (so it could be the
barriers?).
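
If it is the barriers, that should be easy to check (untested sketch --
adjust the mount point, and only on a box where losing a few seconds of
data on power failure is acceptable):

  # remount an ext4 filesystem without write barriers for comparison
  mount -o remount,barrier=0 /mountpoint

ext3 has defaulted to barrier=0 while ext4 defaults to barrier=1, which
is why a mixed setup is worth testing both ways.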

--
Jesper