procfs gives read/write access to RO/WO pipes
Using /proc/*/fd, you can get read/write access to a pipe that you have read-only or write-only access to. The program below demonstrates this. It reads and writes through the "0" fd. Tested with 2.6.34, i686. At first sight, it doesn't look too serious if a program can write to its own readable pipe, or read fr... 2 Jul 2010 16:51
[PATCH 1/6] block: Implement a blk_yield function to voluntarily give up the I/O scheduler.
This patch implements a blk_yield function to allow a process to voluntarily give up its I/O scheduler time slice. This is desirable for those processes which know that they will be blocked on I/O from another process, such as the file system journal thread. The yield call works by causing the target process to i... 2 Jul 2010 16:51
X:2252 conflicting memory types 40000000-48000000 uncached-minus<->write-combining
This is new (below). Has anybody reported or bisected this yet? (If not, I'll bisect it.) [drm] Num pipes: 1 [ 29.742432] [drm] writeback test succeeded in 1 usecs [ 30.089717] X:2252 conflicting memory types 40000000-48000000 uncached-minus<->write-combining [ 30.089721] reserve_memtype failed 0x40000000-0x... 8 Jul 2010 03:43
[PATCH 6/6] block: remove RQ_NOIDLE from WRITE_SYNC
In trying to get fsync-ing processes to perform as well under CFQ as they do in deadline, I found (with the current blk_yield approach) that it was necessary to get rid of the RQ_NOIDLE flag for WRITE_SYNC I/O. Instead, we do explicit yielding of the I/O scheduler. Comments, as always, are greatly appreciated. ... 2 Jul 2010 16:51
[PATCH 0/6 v6][RFC] jbd[2]: enhance fsync performance when using CFQ
Hi, Running iozone or fs_mark with fsync enabled, the performance of CFQ is far worse than that of deadline for enterprise class storage when dealing with file sizes of 8MB or less. I used the following command line as a representative test case: fs_mark -S 1 -D 10000 -N 100000 -d /mnt/test/fs_mark -s 65536... 2 Jul 2010 16:51
[PATCH 5/6] jbd2: use WRITE_SYNC for journal I/O
In my fsync testing, journal I/O most definitely was sync I/O, since another process was blocked waiting for the results. By marking all journal I/O as WRITE_SYNC, I can get better performance with CFQ. If there is a way to mark this only for cases where it is blocking progress in a dependent process, then that ... 2 Jul 2010 16:51
[PATCH 4/6] jbd: use WRITE_SYNC for journal I/O
In my fsync testing, journal I/O most definitely was sync I/O, since another process was blocked waiting for the results. By marking all journal I/O as WRITE_SYNC, I can get better performance with CFQ. If there is a way to mark this only for cases where it is blocking progress in a dependent process, then that ... 2 Jul 2010 16:51
[PATCH 2/6] jbd: yield the device queue when waiting for commits
This patch gets CFQ back in line with deadline for iozone runs, especially those testing small files + fsync timings. Signed-off-by: Jeff Moyer <jmoyer(a)redhat.com> --- fs/jbd/journal.c | 7 +++++++ 1 files changed, 7 insertions(+), 0 deletions(-) diff --git a/fs/jbd/journal.c b/fs/jbd/journal.c index 93d... 2 Jul 2010 16:51
2.6.34: simple IOMMU API extension to check safe interrupt remapping
On Friday 02 July 2010 02:26:46 am Roedel, Joerg wrote: On Thu, Jul 01, 2010 at 05:24:32PM -0400, Tom Lyon wrote: This patch allows IOMMU users to determine whether the hardware and software support safe, isolated interrupt remapping. Not all Intel IOMMUs have the hardware, and the software for AMD i... 2 Jul 2010 15:44
Break out types from <linux/list.h> to <linux/list_types.h>.
On 7/2/2010 3:19 PM, Matthew Wilcox wrote: On Fri, Jul 02, 2010 at 01:41:14PM -0400, Chris Metcalf wrote: This allows a list_head (or hlist_head, etc.) to be used from places that used to be impractical, in particular <asm/processor.h>, which used to cause include file recursion: <linux/list.h> inc... 2 Jul 2010 17:57