xstat: Add a pair of system calls to make extended file stats available [ver #6]
On Mon, Jul 19, 2010 at 10:26 AM, David Howells <dhowells(a)redhat.com> wrote: Ask your samba people, for example, if they'd _ever_ do just a "xstat()"? I suspect they would, though maybe they can say otherwise. What about SMB directory enumeration? I believe that is effectively getdents-with-stat. H... 6 Aug 2010 23:39
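The "getdents-with-stat" workload mentioned above can be pictured with plain POSIX calls: a directory listing that also needs attributes costs one stat-family syscall per entry on top of getdents. The sketch below uses only standard opendir/readdir/fstatat, not the proposed xstat() interface; function name and output format are illustrative.

    #include <dirent.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>

    int list_with_stat(const char *dirpath)
    {
            DIR *d = opendir(dirpath);
            struct dirent *de;

            if (!d)
                    return -1;

            /* readdir() is backed by getdents(); each fstatat() below is
             * one additional syscall per entry, which is the round-trip
             * cost an xstat-style call aims to cut for consumers such as
             * SMB directory enumeration. */
            while ((de = readdir(d)) != NULL) {
                    struct stat st;

                    if (fstatat(dirfd(d), de->d_name, &st,
                                AT_SYMLINK_NOFOLLOW) == 0)
                            printf("%-24s %10lld\n", de->d_name,
                                   (long long)st.st_size);
            }
            closedir(d);
            return 0;
    }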
[PATCH 1/1] VIDEO: ivtvfb, remove unneeded NULL test
Stanse found that in ivtvfb_callback_cleanup and ivtvfb_callback_init there are unneeded tests for itv being NULL. But itv is initialized via container_of with a non-zero offset in those functions, so it is never NULL (even if v4l2_dev is). This was found because itv is dereferenced before the test. Signed-of... 19 Jul 2010 14:39
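The reasoning is easy to see once the container_of() arithmetic is spelled out. A minimal model follows; the structs are simplified stand-ins, not the real ivtv layout.

    #include <stddef.h>

    #define container_of(ptr, type, member) \
            ((type *)((char *)(ptr) - offsetof(type, member)))

    struct v4l2_device { int dummy; };

    struct ivtv {
            int instance;                /* any member placed before
                                          * v4l2_dev gives it a non-zero
                                          * offset within struct ivtv */
            struct v4l2_device v4l2_dev;
    };

    static struct ivtv *to_itv(struct v4l2_device *v4l2_dev)
    {
            /* itv = v4l2_dev - offsetof(struct ivtv, v4l2_dev).
             * Even for v4l2_dev == NULL the subtraction yields a
             * non-NULL pointer, so a later "if (!itv)" test is dead
             * code -- exactly what the patch removes. */
            return container_of(v4l2_dev, struct ivtv, v4l2_dev);
    }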
[PATCH 2/3] cfq-iosched: Implement a new tunable group_idle
o Implement a new tunable group_idle, which allows idling on the group instead of a cfq queue. Hence one can set slice_idle = 0 and not idle on the individual queues but idle on the group. This way, on fast storage, we can get fairness between groups while overall throughput improves. Signed-off... 19 Jul 2010 13:33
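Assuming the tunables are exposed in the usual CFQ sysfs directory (/sys/block/<dev>/queue/iosched/; the paths and the 8 ms value below are assumptions based on the description, not taken from the patch), the combination described above could be applied like this:

    #include <stdio.h>

    /* Write a single integer tunable; returns 0 on success. */
    static int set_tunable(const char *path, int val)
    {
            FILE *f = fopen(path, "w");

            if (!f)
                    return -1;
            fprintf(f, "%d\n", val);
            return fclose(f);
    }

    int main(void)
    {
            /* Stop idling on individual cfq queues... */
            set_tunable("/sys/block/sda/queue/iosched/slice_idle", 0);
            /* ...and idle at the group level instead, keeping fairness
             * between groups while fast storage stays busy. 8 ms mirrors
             * the traditional slice_idle default and is illustrative. */
            set_tunable("/sys/block/sda/queue/iosched/group_idle", 8);
            return 0;
    }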
[PATCH 3/3] cfq-iosched: Print per slice sectors dispatched in blktrace
- Divyesh had gotten rid of this code in the past. I want to re-introduce it as it helps me a lot during debugging. Signed-off-by: Vivek Goyal <vgoyal(a)redhat.com> --- block/cfq-iosched.c | 8 ++++++-- 1 files changed, 6 insertions(+), 2 deletions(-) diff --git a/block/cfq-iosched.c b/block/cfq-iosc... 19 Jul 2010 13:33
Fwd: persistent ramfs issue
Not sure how active fsdevel is, so I forwarded my question along to you guys as well. Many thanks. ---------- Forwarded message ---------- From: Ryan O'Neill <ryan(a)bitlackeys.com> Date: Mon, Jul 19, 2010 at 10:09 AM Subject: persistent ramfs issue To: linux-fsdevel(a)vger.kernel.org I have developed a file ... 19 Jul 2010 13:33
[RFC PATCH] cfq-iosched: Implement group idle V2
[ Got Jens's mail id wrong in the last post, hence reposting. Sorry for cluttering your mailboxes. ] Hi, This is V2 of the group_idle implementation patchset. I have done some more testing since V1 and fixed a couple of bugs. What's the problem ------------------ On high end storage (tested on an HP EVA sto... 19 Jul 2010 13:33
[PATCH 2/3] cfq-iosched: Implement a new tunable group_idle
o Implement a new tunable group_idle, which allows idling on the group instead of a cfq queue. Hence one can set slice_idle = 0 and not idle on the individual queues but idle on the group. This way, on fast storage, we can get fairness between groups while overall throughput improves. Signed-off... 19 Jul 2010 13:33
[PATCH 1/3] cfq-iosched: Improve time slice charging logic
- Currently in CFQ there are many situations where we don't know how much time slice has been consumed by a queue. For example, all the random reader/writer queues where we don't idle on individual queues and we expire the queue immediately after the request dispatch. - In this case time consumed by ... 19 Jul 2010 13:33
[PATCH 3/3] cfq-iosched: Print per slice sectors dispatched in blktrace
- Divyesh had gotten rid of this code in the past. I want to re-introduce it as it helps me a lot during debugging. Signed-off-by: Vivek Goyal <vgoyal(a)redhat.com> --- block/cfq-iosched.c | 8 ++++++-- 1 files changed, 6 insertions(+), 2 deletions(-) diff --git a/block/cfq-iosc... 19 Jul 2010 13:33
[RFC PATCH] cfq-iosched: Implement group idle V2
Hi, This is V2 of the group_idle implementation patchset. I have done some more testing since V1 and fixed a couple of bugs. What's the problem ------------------ On high end storage (tested on an HP EVA storage array with 12 SATA disks in RAID 5), CFQ's model of dispatching requests from a single queue... 19 Jul 2010 13:33