From: Minchan Kim on
On Thu, Apr 29, 2010 at 11:10 AM, Rik van Riel <riel(a)redhat.com> wrote:
> On 04/28/2010 08:28 PM, Minchan Kim wrote:
>>
>> On Thu, Apr 29, 2010 at 5:57 AM, Rik van Riel<riel(a)redhat.com>  wrote:
>>>
>>> Take all the locks for all the anon_vmas in anon_vma_lock, this properly
>>> excludes migration and the transparent hugepage code from VMA changes
>>> done by mmap/munmap/mprotect/expand_stack/etc...
>>>
>>> Unfortunately, this requires adding a new lock (mm->anon_vma_chain_lock),
>>> otherwise we have an unavoidable lock ordering conflict.  This changes
>>> the locking rules for the "same_vma" list to be either mm->mmap_sem for
>>> write, or mm->mmap_sem for read plus the new mm->anon_vma_chain lock.
>>> This limits the place where the new lock is taken to 2 locations -
>>> anon_vma_prepare and expand_downwards.
>>>
>>> Document the locking rules for the same_vma list in the anon_vma_chain
>>> and remove the anon_vma_lock call from expand_upwards, which does not
>>> need it.
>>>
>>> Signed-off-by: Rik van Riel<riel(a)redhat.com>
>>
>> This patch makes things simple, so I like it.
>> Actually, I wanted this all-at-once locking approach.
>> But I was worried about how the patch would affect the AIM 7 workload,
>> which was the scalability motivation behind Rik's anon_vma_chain work.
>> But now Rik himself is sending the patch, so I assume it does not
>> hurt the scalability of that workload much.
>
> The thing is, the number of anon_vmas attached to a VMA is
> small (depth of the tree, so for apache or aim the typical
> depth is 2). This N is between 1 and 3.
>
> The problem we had originally is the _width_ of the tree,
> where every sibling process was attached to the same anon_vma
> and the rmap code had to walk the page tables of all the
> processes, for every privately owned page in each child process.
> For large server workloads, this N is between a few hundred and
> a few thousand.
>
> What matters most at this point is correctness - we need to be
> able to exclude rmap walks when messing with a VMA in any way
> that breaks lookups, because rmap walks for page migration and
> hugepage conversion have to be 100% reliable.
>
> That is not a constraint I had in mind with the original
> anon_vma changes, so the code needs to be fixed up now...

Yes, I understand.

When you posted the anon_vma_chain patches, the concern I pointed out
was about the parent's vma, not the child's.
The parent's vma still has N anon_vmas attached.
AFAIR, you said it was a trade-off and at least better than the old behaviour.
I agreed. But I just want to remind you, because this patch makes that case worse. :)
The corner case is that we have to take all N locks.

Am I missing something?
Can't we just ignore the latency in that case, since it happens infrequently?
I am not against this patch. I just want to hear your opinion.

--
Kind regards,
Minchan Kim
From: Rik van Riel on
On 04/28/2010 10:55 PM, Minchan Kim wrote:

> When you posted the anon_vma_chain patches, the concern I pointed out
> was about the parent's vma, not the child's.
> The parent's vma still has N anon_vmas attached.

No, it is the other way around.

The anon_vma of the parent is also present in all of the
children, so the parent anon_vma is attached to N vmas.

However, the parent vma only has 1 anon_vma attached to
it, and each of the children will have 2 anon_vmas.

That is what should keep any locking overhead with this
patch minimal.

Yes, a deep fork bomb can slow itself down. Too bad,
don't do that :)
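
Roughly, after a single fork the lists look like this (A and B are just
labels for the two anon_vmas, the layout below is a simplified sketch):

	/*
	 * parent vma:  anon_vma_chain = { A }     -> anon_vma_lock takes 1 lock
	 * child  vma:  anon_vma_chain = { A, B }  -> anon_vma_lock takes 2 locks
	 *
	 * anon_vma A (parent's): same_anon_vma links parent and child vmas,
	 *                        which is the "width" the rmap walk has to cover
	 * anon_vma B (child's):  same_anon_vma links the child vma only
	 */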

--
All rights reversed
From: Mel Gorman on
On Wed, Apr 28, 2010 at 04:57:34PM -0400, Rik van Riel wrote:
> Take all the locks for all the anon_vmas in anon_vma_lock, this properly
> excludes migration and the transparent hugepage code from VMA changes done
> by mmap/munmap/mprotect/expand_stack/etc...
>

In vma_adjust(), what prevents something like rmap_walk() from seeing partial
updates while the following lines execute?

vma->vm_start = start;
vma->vm_end = end;
vma->vm_pgoff = pgoff;
if (adjust_next) {
	next->vm_start += adjust_next << PAGE_SHIFT;
	next->vm_pgoff += adjust_next;
}

They would appear to happen outside the lock, even with this patch. The
update happened within the lock in 2.6.33.
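
For reference, the rmap side computes the address of a page inside a vma
roughly like this (a simplified sketch of vma_address() in mm/rmap.c,
ignoring the PAGE_CACHE_SHIFT adjustment):

	/* simplified sketch, not the exact kernel code */
	pgoff_t pgoff = page->index;
	unsigned long address = vma->vm_start +
			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);

	/*
	 * If vm_start has been updated but vm_pgoff has not (or the other
	 * way around), this computes a bogus address or one outside the
	 * vma, and a migration or hugepage rmap walk silently misses the
	 * pte.
	 */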

> Unfortunately, this requires adding a new lock (mm->anon_vma_chain_lock),
> otherwise we have an unavoidable lock ordering conflict. This changes the
> locking rules for the "same_vma" list to be either mm->mmap_sem for write,
> or mm->mmap_sem for read plus the new mm->anon_vma_chain lock. This limits
> the place where the new lock is taken to 2 locations - anon_vma_prepare and
> expand_downwards.
>
> Document the locking rules for the same_vma list in the anon_vma_chain and
> remove the anon_vma_lock call from expand_upwards, which does not need it.
>
> Signed-off-by: Rik van Riel <riel(a)redhat.com>
>
> ---
> Posted quickly as an RFC patch, only compile tested so far.
> Andrea, Mel, does this look like a reasonable approach?
>

Yes.

> v3:
> - change anon_vma_unlock into a macro so lockdep works right
> - fix lock ordering in anon_vma_prepare
> v2:
> - also change anon_vma_unlock to walk the loop
> - add calls to anon_vma_lock & anon_vma_unlock to vma_adjust
> - introduce a new lock for the vma->anon_vma_chain list, to prevent
> the lock inversion that Andrea pointed out
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index b8bb9a6..a0679c6 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -239,6 +239,7 @@ struct mm_struct {
> int map_count; /* number of VMAs */
> struct rw_semaphore mmap_sem;
> spinlock_t page_table_lock; /* Protects page tables and some counters */
> + spinlock_t anon_vma_chain_lock; /* Protects vma->anon_vma_chain, with mmap_sem */
>
> struct list_head mmlist; /* List of maybe swapped mm's. These are globally strung
> * together off init_mm.mmlist, and are protected
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index d25bd22..703c472 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -52,11 +52,15 @@ struct anon_vma {
> * all the anon_vmas associated with this VMA.
> * The "same_anon_vma" list contains the anon_vma_chains
> * which link all the VMAs associated with this anon_vma.
> + *
> + * The "same_vma" list is locked by either having mm->mmap_sem
> + * locked for writing, or having mm->mmap_sem locked for reading
> + * AND holding the mm->anon_vma_chain_lock.
> */
> struct anon_vma_chain {
> struct vm_area_struct *vma;
> struct anon_vma *anon_vma;
> - struct list_head same_vma; /* locked by mmap_sem & page_table_lock */
> + struct list_head same_vma; /* see above */
> struct list_head same_anon_vma; /* locked by anon_vma->lock */
> };
>
> @@ -90,18 +94,24 @@ static inline struct anon_vma *page_anon_vma(struct page *page)
> return page_rmapping(page);
> }
>
> -static inline void anon_vma_lock(struct vm_area_struct *vma)
> -{
> - struct anon_vma *anon_vma = vma->anon_vma;
> - if (anon_vma)
> - spin_lock(&anon_vma->lock);
> -}
> +#define anon_vma_lock(vma, nest_lock) \
> +({ \
> + struct anon_vma *anon_vma = vma->anon_vma; \
> + if (anon_vma) { \
> + struct anon_vma_chain *avc; \
> + list_for_each_entry(avc, &vma->anon_vma_chain, same_vma) \
> + spin_lock_nest_lock(&avc->anon_vma->lock, nest_lock); \
> + } \
> +})
>
> static inline void anon_vma_unlock(struct vm_area_struct *vma)
> {
> struct anon_vma *anon_vma = vma->anon_vma;
> - if (anon_vma)
> - spin_unlock(&anon_vma->lock);
> + if (anon_vma) {
> + struct anon_vma_chain *avc;
> + list_for_each_entry(avc, &vma->anon_vma_chain, same_vma)
> + spin_unlock(&avc->anon_vma->lock);
> + }
> }
>
> /*
> diff --git a/kernel/fork.c b/kernel/fork.c
> index 44b0791..83b1ba2 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -468,6 +468,7 @@ static struct mm_struct * mm_init(struct mm_struct * mm, struct task_struct *p)
> mm->nr_ptes = 0;
> memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
> spin_lock_init(&mm->page_table_lock);
> + spin_lock_init(&mm->anon_vma_chain_lock);
> mm->free_area_cache = TASK_UNMAPPED_BASE;
> mm->cached_hole_size = ~0UL;
> mm_init_aio(mm);
> diff --git a/mm/init-mm.c b/mm/init-mm.c
> index 57aba0d..3ce8a1f 100644
> --- a/mm/init-mm.c
> +++ b/mm/init-mm.c
> @@ -15,6 +15,7 @@ struct mm_struct init_mm = {
> .mm_count = ATOMIC_INIT(1),
> .mmap_sem = __RWSEM_INITIALIZER(init_mm.mmap_sem),
> .page_table_lock = __SPIN_LOCK_UNLOCKED(init_mm.page_table_lock),
> + .anon_vma_chain_lock = __SPIN_LOCK_UNLOCKED(init_mm.anon_vma_chain_lock),
> .mmlist = LIST_HEAD_INIT(init_mm.mmlist),
> .cpu_vm_mask = CPU_MASK_ALL,
> };
> diff --git a/mm/mmap.c b/mm/mmap.c
> index f90ea92..4602358 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -452,7 +452,7 @@ static void vma_link(struct mm_struct *mm, struct vm_area_struct *vma,
> spin_lock(&mapping->i_mmap_lock);
> vma->vm_truncate_count = mapping->truncate_count;
> }
> - anon_vma_lock(vma);
> + anon_vma_lock(vma, &mm->mmap_sem);
>
> __vma_link(mm, vma, prev, rb_link, rb_parent);
> __vma_link_file(vma);
> @@ -578,6 +578,7 @@ again: remove_next = 1 + (end > next->vm_end);
> }
> }
>
> + anon_vma_lock(vma, &mm->mmap_sem);
> if (root) {
> flush_dcache_mmap_lock(mapping);
> vma_prio_tree_remove(vma, root);
> @@ -599,6 +600,7 @@ again: remove_next = 1 + (end > next->vm_end);
> vma_prio_tree_insert(vma, root);
> flush_dcache_mmap_unlock(mapping);
> }
> + anon_vma_unlock(vma);
>
> if (remove_next) {
> /*
> @@ -1705,12 +1707,11 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
> return -EFAULT;
>
> /*
> - * We must make sure the anon_vma is allocated
> - * so that the anon_vma locking is not a noop.
> + * Unlike expand_downwards, we do not need to take the anon_vma lock,
> + * because we leave vma->vm_start and vma->pgoff untouched.
> + * This means rmap lookups of pages inside this VMA stay valid
> + * throughout the stack expansion.
> */
> - if (unlikely(anon_vma_prepare(vma)))
> - return -ENOMEM;
> - anon_vma_lock(vma);
>
> /*
> * vma->vm_start/vm_end cannot change under us because the caller
> @@ -1721,7 +1722,6 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
> if (address < PAGE_ALIGN(address+4))
> address = PAGE_ALIGN(address+4);
> else {
> - anon_vma_unlock(vma);
> return -ENOMEM;
> }
> error = 0;
> @@ -1737,7 +1737,6 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
> if (!error)
> vma->vm_end = address;
> }
> - anon_vma_unlock(vma);
> return error;
> }
> #endif /* CONFIG_STACK_GROWSUP || CONFIG_IA64 */
> @@ -1749,6 +1748,7 @@ static int expand_downwards(struct vm_area_struct *vma,
> unsigned long address)
> {
> int error;
> + struct mm_struct *mm = vma->vm_mm;
>
> /*
> * We must make sure the anon_vma is allocated
> @@ -1762,7 +1762,8 @@ static int expand_downwards(struct vm_area_struct *vma,
> if (error)
> return error;
>
> - anon_vma_lock(vma);
> + spin_lock(&mm->anon_vma_chain_lock);
> + anon_vma_lock(vma, &mm->anon_vma_chain_lock);
>
> /*
> * vma->vm_start/vm_end cannot change under us because the caller
> @@ -1784,6 +1785,8 @@ static int expand_downwards(struct vm_area_struct *vma,
> }
> }
> anon_vma_unlock(vma);
> + spin_unlock(&mm->anon_vma_chain_lock);
> +
> return error;
> }
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 526704e..98d6289 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -23,6 +23,7 @@
> * inode->i_mutex (while writing or truncating, not reading or faulting)
> * inode->i_alloc_sem (vmtruncate_range)
> * mm->mmap_sem
> + * mm->anon_vma_chain_lock (mmap_sem for read, protects vma->anon_vma_chain)
> * page->flags PG_locked (lock_page)
> * mapping->i_mmap_lock
> * anon_vma->lock
> @@ -133,10 +134,11 @@ int anon_vma_prepare(struct vm_area_struct *vma)
> goto out_enomem_free_avc;
> allocated = anon_vma;
> }
> +
> + /* anon_vma_chain_lock to protect against threads */
> + spin_lock(&mm->anon_vma_chain_lock);
> spin_lock(&anon_vma->lock);
>
> - /* page_table_lock to protect against threads */
> - spin_lock(&mm->page_table_lock);
> if (likely(!vma->anon_vma)) {
> vma->anon_vma = anon_vma;
> avc->anon_vma = anon_vma;
> @@ -145,9 +147,9 @@ int anon_vma_prepare(struct vm_area_struct *vma)
> list_add(&avc->same_anon_vma, &anon_vma->head);
> allocated = NULL;
> }
> - spin_unlock(&mm->page_table_lock);
> -
> spin_unlock(&anon_vma->lock);
> + spin_unlock(&mm->anon_vma_chain_lock);
> +
> if (unlikely(allocated)) {
> anon_vma_free(allocated);
> anon_vma_chain_free(avc);
>

--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
From: Mel Gorman on
On Thu, Apr 29, 2010 at 05:32:17PM +0900, Minchan Kim wrote:
> On Thu, 2010-04-29 at 09:15 +0100, Mel Gorman wrote:
> > On Wed, Apr 28, 2010 at 04:57:34PM -0400, Rik van Riel wrote:
> > > Take all the locks for all the anon_vmas in anon_vma_lock, this properly
> > > excludes migration and the transparent hugepage code from VMA changes done
> > > by mmap/munmap/mprotect/expand_stack/etc...
> > >
> >
> > In vma_adjust(), what prevents something like rmap_walk() from seeing partial
> > updates while the following lines execute?
> >
> > vma->vm_start = start;
> > vma->vm_end = end;
> > vma->vm_pgoff = pgoff;
> > if (adjust_next) {
> > next->vm_start += adjust_next << PAGE_SHIFT;
> > next->vm_pgoff += adjust_next;
> > }
> > They would appear to happen outside the lock, even with this patch. The
> > update happened within the lock in 2.6.33.
> >
> >
> >
> This part does it. :)
>
> ----
> @@ -578,6 +578,7 @@ again: remove_next = 1 + (end > next->vm_end);
> }
> }
>
> + anon_vma_lock(vma, &mm->mmap_sem);
> if (root) {
> flush_dcache_mmap_lock(mapping);
> vma_prio_tree_remove(vma, root);
> @@ -599,6 +600,7 @@ again: remove_next = 1 + (end > next->vm_end);
> vma_prio_tree_insert(vma, root);
> flush_dcache_mmap_unlock(mapping);
> }
> + anon_vma_unlock(vma);
> ---
>

I'm blind. You're right.

> But we still need a patch for shift_arg_pages().
>

Assuming you are referring to migration, it's easiest to just not migrate
pages within the stack until after shift_arg_pages runs. The locks
cannot be held during move_page_tables() because the page allocator is
called. It could be done in two stages where pages are allocated outside
the lock and then passed to move_page_tables() but I don't think
increasing the cost of exec() is justified just so a page can be
migrated during exec.
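
A minimal sketch of "do not migrate the temporary exec stack" could look
like the following; the flag and helper names are hypothetical here, only
meant to show the shape of the check:

	/*
	 * Hypothetical: mark the stack vma when it is set up in bprm, clear
	 * the mark once shift_arg_pages() has moved the page tables.
	 */
	static bool vma_is_temporary_stack(struct vm_area_struct *vma)
	{
		return vma->vm_flags & VM_STACK_INCOMPLETE_SETUP;
	}

	/* and in the migration rmap walk: */
	if (vma_is_temporary_stack(vma))
		continue;	/* leave these pages where they are for now */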

--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
From: Minchan Kim on
On Thu, Apr 29, 2010 at 11:55 AM, Minchan Kim <minchan.kim(a)gmail.com> wrote:
> On Thu, Apr 29, 2010 at 11:10 AM, Rik van Riel <riel(a)redhat.com> wrote:
>> On 04/28/2010 08:28 PM, Minchan Kim wrote:
>>>
>>> On Thu, Apr 29, 2010 at 5:57 AM, Rik van Riel<riel(a)redhat.com>  wrote:
>>>>
>>>> Take all the locks for all the anon_vmas in anon_vma_lock, this properly
>>>> excludes migration and the transparent hugepage code from VMA changes
>>>> done by mmap/munmap/mprotect/expand_stack/etc...
>>>>
>>>> Unfortunately, this requires adding a new lock (mm->anon_vma_chain_lock),
>>>> otherwise we have an unavoidable lock ordering conflict.  This changes
>>>> the locking rules for the "same_vma" list to be either mm->mmap_sem for
>>>> write, or mm->mmap_sem for read plus the new mm->anon_vma_chain lock.
>>>> This limits the place where the new lock is taken to 2 locations -
>>>> anon_vma_prepare and expand_downwards.
>>>>
>>>> Document the locking rules for the same_vma list in the anon_vma_chain
>>>> and remove the anon_vma_lock call from expand_upwards, which does not
>>>> need it.
>>>>
>>>> Signed-off-by: Rik van Riel<riel(a)redhat.com>
>>>
>>> This patch makes things simple, so I like it.
>>> Actually, I wanted this all-at-once locking approach.
>>> But I was worried about how the patch would affect the AIM 7 workload,
>>> which was the scalability motivation behind Rik's anon_vma_chain work.
>>> But now Rik himself is sending the patch, so I assume it does not
>>> hurt the scalability of that workload much.
>>
>> The thing is, the number of anon_vmas attached to a VMA is
>> small (depth of the tree, so for apache or aim the typical
>> depth is 2). This N is between 1 and 3.
>>
>> The problem we had originally is the _width_ of the tree,
>> where every sibling process was attached to the same anon_vma
>> and the rmap code had to walk the page tables of all the
>> processes, for every privately owned page in each child process.
>> For large server workloads, this N is between a few hundred and
>> a few thousand.
>>
>> What matters most at this point is correctness - we need to be
>> able to exclude rmap walks when messing with a VMA in any way
>> that breaks lookups, because rmap walks for page migration and
>> hugepage conversion have to be 100% reliable.
>>
>> That is not a constraint I had in mind with the original
>> anon_vma changes, so the code needs to be fixed up now...
>
> Yes, I understand.
>
> When you posted the anon_vma_chain patches, the concern I pointed out
> was about the parent's vma, not the child's.
> The parent's vma still has N anon_vmas attached.
> AFAIR, you said it was a trade-off and at least better than the old behaviour.
> I agreed. But I just want to remind you, because this patch makes that case worse. :)
> The corner case is that we have to take all N locks.
>
> Am I missing something?
> Can't we just ignore the latency in that case, since it happens infrequently?
> I am not against this patch. I just want to hear your opinion.

/me slaps self.

It's about the height of the tree, and I can't imagine a scenario with a
very tall tree (fork->fork->fork->...->fork).
So as Rik pointed out, it's not a big latency overhead, at least, I think.

I support this approach.

--
Kind regards,
Minchan Kim