[PATCH][GIT PULL][for 2.6.35] tracing: Add alignment to syscall metadata declarations
Ingo, Please pull the latest tip/perf/urgent tree, which can be found at: git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace.git tip/perf/urgent Steven Rostedt (1): tracing: Add alignment to syscall metadata declarations ---- include/linux/syscalls.h | 6 ++++-- 1 file... 9 Jul 2010 16:06
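As an aside on the technique the patch title refers to, here is a minimal, hypothetical sketch (not the actual include/linux/syscalls.h change) of why structures emitted into a dedicated linker section want an explicit alignment: the code that walks the section assumes a fixed stride, so each entry requests the structure's alignment explicitly. The struct, macro and section names below are made up for illustration.

struct demo_metadata {
	const char *name;
	int nb_args;
};

/* Place one descriptor per call site into the "__demo_metadata" section and
 * request an explicit minimum alignment, so every translation unit lays the
 * entries out with the stride the section walker expects. */
#define DEMO_METADATA(sym, args)					\
	static struct demo_metadata					\
	__attribute__((used, section("__demo_metadata"),		\
		       aligned(__alignof__(struct demo_metadata))))	\
	__demo_meta_##sym = { .name = #sym, .nb_args = (args) }

DEMO_METADATA(sys_open, 3);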
[PATCH 1/2] gpiolib: get rid of struct poll_desc and worklet
As sysfs_notify_dirent has been made irq safe, there is no reason not to call it directly from irq context. With the work_struct removed, the remaining element in poll_desc is a sysfs_dirent pointer which may not be NULL. We can therefore store it directly in the idr and pass it as context to the irq handler. Most part ... 9 Jul 2010 16:06
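A minimal sketch of the pattern described above, assuming the sysfs_dirent pointer stored in the idr is handed to request_irq() as the dev_id cookie; the handler name is illustrative rather than the exact gpiolib code.

#include <linux/interrupt.h>
#include <linux/sysfs.h>

/* With sysfs_notify_dirent() now irq safe, the handler can wake poll(2)
 * waiters directly from hard-irq context instead of scheduling a worklet.
 * The sysfs_dirent looked up from the idr arrives as the dev_id cookie. */
static irqreturn_t demo_gpio_sysfs_irq(int irq, void *priv)
{
	struct sysfs_dirent *value_sd = priv;

	sysfs_notify_dirent(value_sd);
	return IRQ_HANDLED;
}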
[PATCH 2/2] gpiolib: allow nested threaded irqs for poll(2)
The pca953x driver requires the use of threaded irqs as its irq demultiplexer can sleep. Our irq handler can be called from any context, so use request_any_context_irq to allow threaded irqs as well. Signed-off-by: Daniel Glöckner <dg(a)emlix.com> Reported-by: Ian Jeffray <ian(a)jeffray.co.uk> --- drivers/gpio/gp... 9 Jul 2010 16:06
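For reference, a hedged sketch of how request_any_context_irq() is typically used (the names and trigger flags below are illustrative, not the actual gpiolib change): it installs a threaded handler when the irq chip can sleep, e.g. behind an I2C expander such as pca953x, and a normal hard-irq handler otherwise.

#include <linux/interrupt.h>

static irqreturn_t demo_gpio_irq(int irq, void *dev_id)
{
	/* May run in a thread, so sleeping here would be allowed. */
	return IRQ_HANDLED;
}

/* request_any_context_irq() returns IRQC_IS_HARDIRQ or IRQC_IS_NESTED on
 * success and a negative errno on failure, so only negative values are
 * treated as errors here. */
static int demo_setup_irq(unsigned int irq, void *ctx)
{
	int ret = request_any_context_irq(irq, demo_gpio_irq,
					  IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
					  "demo-gpio", ctx);

	return ret < 0 ? ret : 0;
}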
[S+Q2 03/19] percpu: allow limited allocation before slab is online
This patch updates the percpu allocator such that it can serve a limited amount of allocation before slab comes online. This is primarily to allow slab to depend on a working percpu allocator. Two parameters, PERCPU_DYNAMIC_EARLY_SIZE and SLOTS, determine how much memory space and allocation map slots are reserved. If ... 9 Jul 2010 16:06
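As a rough userspace analogue of the idea (not the kernel's percpu code), the sketch below reserves a fixed area and a fixed number of allocation slots at compile time and serves bump allocations from it until a real allocator would take over; the constants and names are hypothetical.

#include <stddef.h>
#include <stdint.h>

#define EARLY_AREA_SIZE	(12 << 10)	/* bytes reserved before slab is up */
#define EARLY_MAP_SLOTS	128		/* allocation map entries reserved */

static uint8_t early_area[EARLY_AREA_SIZE];
static size_t early_used;
static size_t early_allocs;

/* Bump allocator serving requests before the real allocator is online.
 * align must be a power of two; returns NULL once either the space or the
 * slot budget is exhausted. */
static void *early_alloc(size_t size, size_t align)
{
	size_t off = (early_used + align - 1) & ~(align - 1);

	if (early_allocs >= EARLY_MAP_SLOTS || off + size > EARLY_AREA_SIZE)
		return NULL;

	early_used = off + size;
	early_allocs++;
	return &early_area[off];
}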
unable to handle kernel paging request at 40000000 __alloc_memory_core_early+0x147/0x1d6
Hi, On Fri, 9 Jul 2010 15:08:52 -0400 Yinghai Lu <yinghai(a)kernel.org> wrote: On 07/09/2010 07:54 AM, Borislav Petkov wrote: Hi, this is something we're getting during testing on one of our boxes here, a dual socket Magny-Cours machine. It is oopsing on the addr variable in __alloc_memory_cor... 10 Jul 2010 05:10
[tip:x86/mm] x86, ioremap: Fix normal ram range check
Commit-ID: 35be1b716a475717611b2dc04185e9d80b9cb693 Gitweb: http://git.kernel.org/tip/35be1b716a475717611b2dc04185e9d80b9cb693 Author: Kenji Kaneshige <kaneshige.kenji(a)jp.fujitsu.com> AuthorDate: Fri, 18 Jun 2010 12:23:57 +0900 Committer: H. Peter Anvin <hpa(a)linux.intel.com> CommitDate: Fri, 9 Jul 2010 1... 9 Jul 2010 16:06
[S+Q2 12/19] slub: Dynamically size kmalloc cache allocations
kmalloc caches are statically defined and may take up a lot of space just because the node array has to be dimensioned for the largest node count supported. This patch makes the size of the kmem_cache structure dynamic throughout by creating a kmem_cache slab cache for the kmem_cache objects. The boo... 9 Jul 2010 16:06
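A hedged userspace sketch of the space-saving idea (not SLUB's actual structures): instead of dimensioning the per-node array for the largest supported node count, the structure ends in a flexible array member and is allocated only for the nodes that actually exist; the names below are made up.

#include <stdlib.h>

struct demo_node;			/* per-node data, details not relevant here */

struct demo_cache {
	const char *name;
	unsigned int object_size;
	struct demo_node *node[];	/* sized at runtime, not for the maximum */
};

/* Allocate a cache descriptor dimensioned for the node count actually
 * present, rather than a statically defined worst case. */
static struct demo_cache *demo_cache_alloc(unsigned int nr_node_ids)
{
	return calloc(1, sizeof(struct demo_cache) +
			 nr_node_ids * sizeof(struct demo_node *));
}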
[S+Q2 11/19] slub: Remove static kmem_cache_cpu array for boot
The percpu allocator can now handle allocations in early boot. So drop the static kmem_cache_cpu array. Cc: Tejun Heo <tj(a)kernel.org> Signed-off-by: Christoph Lameter <cl(a)linux-foundation.org> --- mm/slub.c | 21 ++++++--------------- 1 file changed, 6 insertions(+), 15 deletions(-) Index: linux-2.6/mm/... 9 Jul 2010 16:06
[S+Q2 10/19] slub: remove dynamic dma slab allocation
Remove the dynamic dma slab allocation since this causes too many issues with nested locks, etc. The change avoids passing gfpflags into many functions. Signed-off-by: Christoph Lameter <cl(a)linux-foundation.org> --- mm/slub.c | 151 ++++++++++++++++---------------------------------------------- 1 file cha... 9 Jul 2010 16:06
[S+Q2 16/19] slub: Resize the new cpu queues
Allow resizing of the cpu queue and batch size. This follows the same basic steps as SLAB. The statically allocated per cpu areas are removed since the per cpu allocator is already available when kmem_cache_init is called. We can dynamically size the per cpu data during bootstrap. Careful: Thi... 9 Jul 2010 16:06
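A hedged sketch of the kind of dynamic sizing the description refers to (not the actual SLUB code): once the percpu allocator is available at kmem_cache_init() time, the per-cpu queue can be sized from a runtime value instead of a statically dimensioned array; the structure and the size computation are hypothetical.

#include <linux/percpu.h>

struct demo_cpu_queue {
	unsigned int objects;
	void *object[];			/* queue entries, sized per boot */
};

/* Allocate per-cpu queues whose length is chosen at runtime, which is only
 * possible because the percpu allocator is already up by the time this runs
 * during bootstrap. */
static void __percpu *demo_alloc_cpu_queues(unsigned int queue_size)
{
	size_t size = sizeof(struct demo_cpu_queue) +
		      queue_size * sizeof(void *);

	return __alloc_percpu(size, __alignof__(struct demo_cpu_queue));
}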