[PATCH 21/50] memblock: Move functions around into a more sensible order
From: Benjamin Herrenschmidt <benh(a)> Some shuffling is needed to do the array resize, so we may as well put some sense into the ordering of the functions in the whole memblock.c file. No code change; some comments added. Signed-off-by: Benjamin Herrenschmidt <benh(a)> --- m... 13 Jul 2010 04:21
[PATCH 27/50] memblock: Make memblock_alloc_try_nid() fallback to MEMBLOCK_ALLOC_ANYWHERE
From: Benjamin Herrenschmidt <benh(a)> memblock_alloc_nid() used to fallback to allocating anywhere by using memblock_alloc() as a fallback. However, some of my previous patches limit memblock_alloc() to the region covered by MEMBLOCK_ALLOC_ACCESSIBLE which is not quite what we want for memblo... 13 Jul 2010 04:21
[PATCH 34/50] memblock: Add memblock_find_in_range()
It is a wrapper for memblock_find_base that makes it easier for x86 to use memblock. (rebase) x86 early_res uses a find/reserve pattern instead of alloc. Keep it as a weak version, so we can later use x86's own version if needed. We also need it in mm/memblock.c, so one caller in mm/page_alloc.c can get compiled ... 13 Jul 2010 04:21
[PATCH 42/50] x86, memblock: Add memblock_x86_find_in_range_node()
It can be used to find NODE_DATA for NUMA. Need to make sure early_node_map[] is filled before it is called; otherwise it will fall back to memblock_find_in_range() with the node range. Signed-off-by: Yinghai Lu <yinghai(a)> --- arch/x86/include/asm/memblock.h | 1 + arch/x86/mm/memblock.c | ... 13 Jul 2010 04:21
[PATCH 35/50] x86, memblock: Add memblock_x86_find_in_range_size()
The size is returned according to the free range. It will be used to find free ranges for early_memtest and the memory corruption check. Do not mix it into lib/memblock.c yet. Signed-off-by: Yinghai Lu <yinghai(a)> --- arch/x86/include/asm/memblock.h | 8 ++++ arch/x86/mm/Makefile | 2 + arch/x86/... 13 Jul 2010 04:21
[PATCH 38/50] x86,memblock: Add memblock_x86_reserve_range/memblock_x86_free_range
They are wrappers for the core versions, taking start/end/name instead of base/size. This will make the x86 conversion easier and allows adding more debug printout. -v2: change get_max_mapped() to memblock.default_alloc_limit according to Michael Ellerman and Ben; change to memblock_x86_reserve_range and mem... 13 Jul 2010 04:21
[PATCH 33/50] memblock: Add ARCH_DISCARD_MEMBLOCK to put memblock code to .init
So those memblock bits can be released after the kernel is booted up. Arch code can define ARCH_DISCARD_MEMBLOCK in asm/memblock.h; then __init_memblock becomes __init and __initdata_memblock becomes __initdata. x86 code will use that. If ARCH_DISCARD_MEMBLOCK is defined, debugfs is not used. -v2: use ARCH_DI... 13 Jul 2010 04:21
[PATCH 10/50] memblock: Introduce default allocation limit and use it to replace explicit ones
From: Benjamin Herrenschmidt <benh(a)> This introduces memblock.current_limit, which is used to limit allocations from memblock_alloc() or memblock_alloc_base(..., MEMBLOCK_ALLOC_ACCESSIBLE). The old MEMBLOCK_ALLOC_ANYWHERE changes value from 0 to ~(u64)0 and can still be used with memblock_alloc... 13 Jul 2010 04:21
[PATCH 37/50] x86, memblock: Add memblock_x86_to_bootmem()
memblock_x86_to_bootmem() will reserve memblock.reserved.region in bootmem after bootmem is set up. We can use it with all arches that support memblock later. Signed-off-by: Yinghai Lu <yinghai(a)> --- arch/x86/include/asm/memblock.h | 1 + arch/x86/mm/memblock.c | 30 +++++++++++++++... 13 Jul 2010 04:21
[PATCH 45/50] x86, memblock: Use memblock_debug to control debug message print out
Also let memblock_x86_reserve_range/memblock_x86_free_range print out the name if memblock=debug is specified; the name will also be printed when reserve_memblock_area/free_memblock_area are called. -v2: according to Ingo, put " if (memblock_debug) " in one place. Signed-off-by: Yinghai Lu <yinghai(a)> -... 13 Jul 2010 04:21