From: Minchan Kim on
On Fri, Apr 2, 2010 at 2:36 AM, Mel Gorman <mel(a)csn.ul.ie> wrote:
> On Thu, Apr 01, 2010 at 07:51:31PM +0900, Minchan Kim wrote:
>> On Thu, Apr 1, 2010 at 2:42 PM, KAMEZAWA Hiroyuki
>> <kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
>> > On Thu, 1 Apr 2010 13:44:29 +0900
>> > Minchan Kim <minchan.kim(a)gmail.com> wrote:
>> >
>> >> On Thu, Apr 1, 2010 at 12:01 PM, KAMEZAWA Hiroyuki
>> >> <kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
>> >> > On Thu, 1 Apr 2010 11:43:18 +0900
>> >> > Minchan Kim <minchan.kim(a)gmail.com> wrote:
>> >> >
>> >> >> On Wed, Mar 31, 2010 at 2:26 PM, KAMEZAWA Hiroyuki wrote:
>> >> >> >> diff --git a/mm/rmap.c b/mm/rmap.c
>> >> >> >> index af35b75..d5ea1f2 100644
>> >> >> >> --- a/mm/rmap.c
>> >> >> >> +++ b/mm/rmap.c
>> >> >> >> @@ -1394,9 +1394,11 @@ int rmap_walk(struct page *page, int (*rmap_one)(struct page *,
>> >> >> >>
>> >> >> >>       if (unlikely(PageKsm(page)))
>> >> >> >>               return rmap_walk_ksm(page, rmap_one, arg);
>> >> >> >> -     else if (PageAnon(page))
>> >> >> >> +     else if (PageAnon(page)) {
>> >> >> >> +             if (PageSwapCache(page))
>> >> >> >> +                     return SWAP_AGAIN;
>> >> >> >>               return rmap_walk_anon(page, rmap_one, arg);
>> >> >> >
>> >> >> > SwapCache has the condition (PageSwapCache(page) && page_mapped(page)) == true.
>> >> >> >
>> >> >>
>> >> >> In the case of tmpfs, a page can be in the swap cache but not mapped.
>> >> >>
>> >> >> > Please see do_swap_page(): the PageSwapCache bit is cleared only when
>> >> >> >
>> >> >> > do_swap_page()...
>> >> >> >       swap_free(entry);
>> >> >> >        if (vm_swap_full() || (vma->vm_flags & VM_LOCKED) || PageMlocked(page))
>> >> >> >                try_to_free_swap(page);
>> >> >> >
>> >> >> > Then, PageSwapCache is cleared only when swap is freeable even if mapped.
>> >> >> >
>> >> >> > rmap_walk_anon() should be called and the check is not necessary.
>> >> >>
>> >> >> Frankly speaking, I don't understand what Mel's problem is, why he added the
>> >> >> SwapCache check in rmap_walk, or why you say we don't need it.
>> >> >>
>> >> >> Could you explain in more detail if you don't mind?
>> >> >>
>> >> > I may miss something.
>> >> >
>> >> > unmap_and_move()
>> >> >  1. try_to_unmap(TTU_MIGRATION)
>> >> >  2. move_to_newpage
>> >> >  3. remove_migration_ptes
>> >> >        -> rmap_walk()
>> >> >
>> >> > Then, to map back the pages we unmapped, we call rmap_walk().
>> >> >
>> >> > Assume a SwapCache page which is mapped; then PageAnon(page) == true.
>> >> >
>> >> >  At 1. try_to_unmap() will rewrite the ptes with the swp_entry of the SwapCache page.
>> >> >       mapcount goes to 0.
>> >> >  At 2. The SwapCache page is copied to a new page.
>> >> >  At 3. The new page should be mapped back into place. At this point its mapcount is 0.
>> >> >       Before the patch, the new page is mapped back to all ptes.
>> >> >       After the patch, the new page is not mapped back because
>> >> >       rmap_walk() now bails out on PageSwapCache.
>> >> >
>> >> > I don't think shared SwapCache of anon pages is unusual behavior, so the logic
>> >> > before the patch is more attractive.
>> >> >
>> >> > If SwapCache is not mapped before "1", we skip "1" and rmap_walk will do nothing
>> >> > because page->mapping is NULL.
>> >> >
>> >>
>> >> Thanks. I agree. We don't need the check.
>> >> Then, my question is why Mel added the check in rmap_walk.
>> >> He mentioned some BUG being triggered and things fixed after this patch.
>> >> What is it?
>> >> Is it really related to this logic?
>> >> I don't think so, or we are missing something.
>> >>
>> > Hmm. Considering again.
>> >
>> > Now.
>> >        if (PageAnon(page)) {
>> >                rcu_locked = 1;
>> >                rcu_read_lock();
>> >                if (!page_mapped(page)) {
>> >                        if (!PageSwapCache(page))
>> >                                goto rcu_unlock;
>> >                } else {
>> >                        anon_vma = page_anon_vma(page);
>> >                        atomic_inc(&anon_vma->external_refcount);
>> >                }
>> >
>> >
>> > Maybe this is a fix.
>> >
>> > ==
>> >        skip_remap = 0;
>> >        if (PageAnon(page)) {
>> >                rcu_read_lock();
>> >                if (!page_mapped(page)) {
>> >                        if (!PageSwapCache(page))
>> >                                goto rcu_unlock;
>> >                        /*
>> >                         * We can't be sure whether this anon_vma is valid because
>> >                         * !page_mapped(page). So do the migration (radix-tree replacement)
>> >                         * but don't remap, which would touch the anon_vma in page->mapping.
>> >                         */
>> >                        skip_remap = 1;
>> >                        goto skip_unmap;
>> >                } else {
>> >                        anon_vma = page_anon_vma(page);
>> >                        atomic_inc(&anon_vma->external_refcount);
>> >                }
>> >        }
>> >        .....copy page, radix-tree replacement,....
>> >
>>
>> It's not enough.
>> We use remove_migration_ptes in move_to_new_page, too.
>> We have to prevent that as well.
>> We can check PageSwapCache(page) in move_to_new_page and then
>> skip remove_migration_ptes.
>>
>> ex)
>> static int move_to_new_page(....)
>> {
>>      int swapcache = PageSwapCache(page);
>>      ...
>>      if (!swapcache) {
>>          if (!rc)
>>              remove_migration_ptes(page, newpage);
>>          else
>>              newpage->mapping = NULL;
>>      }
>> }
>>
>
> This I agree with.
>
>> And we have to close the race between the PageAnon(page) check and rcu_read_lock.
>
> Not so sure on this. The page is locked at this point and that should
> prevent it from becoming !PageAnon

The page lock can't prevent the anon_vma from being freed.
That argument only holds for file-backed pages, I think.
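
Something like the following is what I have in mind - an untested sketch against
the current unmap_and_move() (rcu_locked, anon_vma, external_refcount and the
labels as in the code quoted below), just to illustrate taking the RCU read lock
before the PageAnon check so the anon_vma can't disappear in between:

==
	/* Illustration only: take rcu_read_lock() before testing PageAnon() */
	rcu_read_lock();
	rcu_locked = 1;
	if (PageAnon(page)) {
		if (!page_mapped(page)) {
			if (!PageSwapCache(page))
				goto rcu_unlock;
			/* unmapped swapcache: migrate it, but don't touch anon_vma */
		} else {
			anon_vma = page_anon_vma(page);
			atomic_inc(&anon_vma->external_refcount);
		}
	}
==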

>> If we don't do it, the anon_vma could be freed in the middle of the operation.
>> I mean:
>>
>>          * of migration. File cache pages are no problem because of page_lock()
>>          * File Caches may use write_page() or lock_page() in migration, then,
>>          * just care Anon page here.
>>          */
>>         if (PageAnon(page)) {
>>                 !!! RACE !!!!
>>                 rcu_read_lock();
>>                 rcu_locked = 1;
>>
>> +
>> +               /*
>> +                * If the page has no mappings any more, just bail. An
>> +                * unmapped anon page is likely to be freed soon but worse,
>>
>
> I am not sure this race exists because the page is locked, but a key
> observation has been made: a page that is unmapped can be migrated if
> it's PageSwapCache, but it may not have a valid anon_vma. Hence, in the
> !page_mapped case, the key is to not use anon_vma. How about the
> following patch?

I like this. Kame, what's your opinion?
Please look at my comment below.

>
> ==== CUT HERE ====
>
> mm,migration: Allow the migration of PageSwapCache pages
>
> PageAnon pages that are unmapped may or may not have an anon_vma so are
> not currently migrated. However, a swap cache page can be migrated and
> fits this description. This patch identifies swap cache pages and allows
> them to be migrated, but ensures that no attempt made to remap the pages
> would potentially try to access an already freed anon_vma.
>
> Signed-off-by: Mel Gorman <mel(a)csn.ul.ie>
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 35aad2a..5d0218b 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -484,7 +484,8 @@ static int fallback_migrate_page(struct address_space *mapping,
>  *   < 0 - error code
>  *  == 0 - success
>  */
> -static int move_to_new_page(struct page *newpage, struct page *page)
> +static int move_to_new_page(struct page *newpage, struct page *page,
> +                                               int safe_to_remap)
>  {
>        struct address_space *mapping;
>        int rc;
> @@ -519,10 +520,12 @@ static int move_to_new_page(struct page *newpage, struct page *page)
>        else
>                rc = fallback_migrate_page(mapping, newpage, page);
>
> -       if (!rc)
> -               remove_migration_ptes(page, newpage);
> -       else
> -               newpage->mapping = NULL;
> +       if (safe_to_remap) {
> +               if (!rc)
> +                       remove_migration_ptes(page, newpage);
> +               else
> +                       newpage->mapping = NULL;
> +       }
>
>        unlock_page(newpage);
>
> @@ -539,6 +542,7 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
>        int rc = 0;
>        int *result = NULL;
>        struct page *newpage = get_new_page(page, private, &result);
> +       int safe_to_remap = 1;
>        int rcu_locked = 0;
>        int charge = 0;
>        struct mem_cgroup *mem = NULL;
> @@ -600,18 +604,26 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
>                rcu_read_lock();
>                rcu_locked = 1;
>
> -               /*
> -                * If the page has no mappings any more, just bail. An
> -                * unmapped anon page is likely to be freed soon but worse,
> -                * it's possible its anon_vma disappeared between when
> -                * the page was isolated and when we reached here while
> -                * the RCU lock was not held
> -                */
> -               if (!page_mapped(page))
> -                       goto rcu_unlock;
> +               /* Determine how to safely use anon_vma */
> +               if (!page_mapped(page)) {
> +                       if (!PageSwapCache(page))
> +                               goto rcu_unlock;
>
> -               anon_vma = page_anon_vma(page);
> -               atomic_inc(&anon_vma->external_refcount);
> +                       /*
> +                        * We cannot be sure that the anon_vma of an unmapped
> +                        * page is safe to use. In this case, the page still

How about changing the comment to:
"In this case, the swapcache page still "
Also, I want to change "safe_to_remap" to "remap_swapcache".
This problem is specific to swapcache pages,
so I want to represent that explicitly in the name, although we can tell it's
the swapcache case from the code.
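
i.e., on top of your patch, just something like (plus the corresponding rename
of the move_to_new_page() argument):

==
-       int safe_to_remap = 1;
+       int remap_swapcache = 1;
...
-       if (safe_to_remap) {
+       if (remap_swapcache) {
==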


--
Kind regards,
Minchan Kim
From: Minchan Kim on
On Sat, Apr 3, 2010 at 1:02 AM, Mel Gorman <mel(a)csn.ul.ie> wrote:
> PageAnon pages that are unmapped may or may not have an anon_vma so are
> not currently migrated. However, a swap cache page can be migrated and
> fits this description. This patch identifies swap cache pages and allows
> them to be migrated, but ensures that no attempt made to remap the pages
> would potentially try to access an already freed anon_vma.
>
> Signed-off-by: Mel Gorman <mel(a)csn.ul.ie>
Reviewed-by: Minchan Kim <minchan.kim(a)gmail.com>

Thanks for your effort, Mel.

--
Kind regards,
Minchan Kim
From: Minchan Kim on
On Thu, Apr 22, 2010 at 11:14 PM, Mel Gorman <mel(a)csn.ul.ie> wrote:
> On Thu, Apr 22, 2010 at 07:51:53PM +0900, KAMEZAWA Hiroyuki wrote:
>> On Thu, 22 Apr 2010 19:31:06 +0900
>> KAMEZAWA Hiroyuki <kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
>>
>> > On Thu, 22 Apr 2010 19:13:12 +0900
>> > Minchan Kim <minchan.kim(a)gmail.com> wrote:
>> >
>> > > On Thu, Apr 22, 2010 at 6:46 PM, KAMEZAWA Hiroyuki
>> > > <kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
>> >
>> > > > Hmm..in my test, the case was.
>> > > >
>> > > > Before try_to_unmap:
>> > > >        mapcount=1, SwapCache, remap_swapcache=1
>> > > > After remap
>> > > >        mapcount=0, SwapCache, rc=0.
>> > > >
>> > > > So, I think there may be some race in rmap_walk() and vma handling or
>> > > > anon_vma handling. The migration_entry isn't found by rmap_walk.
>> > > >
>> > > > Hmm.. it seems this kind of patch will be required for debugging.
>> > >
>>
>> Ok, here is my patch for a _fix_. But still testing...
>> It has been running well for at least 30 minutes, whereas I can usually
>> see the bug within 10 minutes.
>> But this patch is too naive; please think about a better fix.
>>
>> ==
>> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu(a)jp.fujitsu.com>
>>
>> At vma_adjust(), the vma's start address and pgoff are updated under the
>> write lock of mmap_sem. This means the update of the vma's rmap information
>> is atomic only for readers holding mmap_sem.
>>
>>
>> Even if it's not atomic, in the usual case try_to_unmap() etc...
>> just fails to decrease the mapcount to 0. No problem.
>>
>> But page migration's rmap_walk() needs to know all the
>> migration entries in the page tables and recover the mapcount.
>>
>> So, this race in the vma's address is critical. When rmap_walk meets
>> the race, it will mistakenly get -EFAULT and not call
>> rmap_one(). This patch adds a lock for the vma's rmap information.
>> But this is _very slow_.
>
> Ok wow. That is exceptionally well-spotted. This looks like a proper bug
> that compaction exposes as opposed to a bug that compaction introduces.
>
>> We need a more sophisticated, lightweight update for this..
>>
>
> In the event the VMA is backed by a file, the mapping's i_mmap_lock is taken for
> the duration of the update and is taken elsewhere where the VMA information
> is read, such as rmap_walk_file().
>
> In the event the VMA is anon, vma_adjust currently takes no locks and your
> patch introduces a new one but why not use the anon_vma lock here? Am I
> missing something that requires the new lock?

rmap_walk_anon() doesn't hold the vma's anon_vma->lock.
It holds the page's anon_vma->lock.
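
For reference, rmap_walk_anon() currently looks roughly like this (paraphrased
from mm/rmap.c from memory, so take the details with a grain of salt); the lock
it takes is the one reached through page->mapping, which with anon_vma chains
is not necessarily the same anon_vma as vma->anon_vma for every vma it visits:

==
static int rmap_walk_anon(struct page *page, int (*rmap_one)(struct page *,
		struct vm_area_struct *, unsigned long, void *), void *arg)
{
	struct anon_vma *anon_vma;
	struct anon_vma_chain *avc;
	int ret = SWAP_AGAIN;

	anon_vma = page_anon_vma(page);		/* anon_vma from page->mapping */
	if (!anon_vma)
		return ret;
	spin_lock(&anon_vma->lock);		/* the page's anon_vma, not the vma's */
	list_for_each_entry(avc, &anon_vma->head, same_anon_vma) {
		struct vm_area_struct *vma = avc->vma;
		unsigned long address = vma_address(page, vma);
		if (address == -EFAULT)		/* this is where the race bites */
			continue;
		ret = rmap_one(page, vma, address, arg);
		if (ret != SWAP_AGAIN)
			break;
	}
	spin_unlock(&anon_vma->lock);
	return ret;
}
==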

--
Kind regards,
Minchan Kim
From: Minchan Kim on
On Thu, Apr 22, 2010 at 6:46 PM, KAMEZAWA Hiroyuki
<kamezawa.hiroyu(a)jp.fujitsu.com> wrote:
> On Thu, 22 Apr 2010 10:28:20 +0100
> Mel Gorman <mel(a)csn.ul.ie> wrote:
>
>> On Wed, Apr 21, 2010 at 10:46:45AM -0500, Christoph Lameter wrote:
>> > On Wed, 21 Apr 2010, Mel Gorman wrote:
>> >
>> > > > > 2. Is the BUG_ON check in
>> > > > >    include/linux/swapops.h#migration_entry_to_page() now wrong? (I
>> > > > >    think yes, but I'm not sure and I'm having trouble verifying it)
>> > > >
>> > > > The bug check ensures that migration entries only occur when the page
>> > > > is locked. This patch changes that behavior. It is therefore going to oops
>> > > > in unmap_and_move() when you try to remove the migration_ptes
>> > > > from an unlocked page.
>> > > >
>> > >
>> > > It's not unmap_and_move() that the problem is occurring on but during a
>> > > page fault - presumably in do_swap_page but I'm not 100% certain.
>> >
>> > remove_migration_pte() calls migration_entry_to_page(). So it must do that
>> > only if the page is still locked.
>> >
>>
>> Correct, but the other call path is
>>
>> do_swap_page
>>   -> migration_entry_wait
>>     -> migration_entry_to_page
>>
>> with migration_entry_wait expecting the page to be locked. There are dangling
>> migration PTEs coming from somewhere. I thought it was from unmapped swapcache
>> first, but that cannot be the case. There is a race somewhere.
>>
>> > You need to ensure that the page is not unlocked in move_to_new_page() if
>> > the migration ptes are kept.
>> >
>> > move_to_new_page() only unlocks the new page, not the original page. So that is safe.
>> >
>> > And it seems that the old page is also unlocked in unmap_and_move() only
>> > after the migration_ptes have been removed? So we are fine after all...?
>> >
>>
>> You'd think so, but migration PTEs are being left behind in some circumstance. I
>> thought it was due to this series, but it's unlikely. It's more a case that
>> compaction heavily exercises migration.
>>
>> We can clean up the old migration PTEs, though, when they are encountered,
>> as in the following patch for example. I'll continue investigating why
>> this dangling migration pte exists as closing that race would be a
>> better fix.
>>
>> ==== CUT HERE ====
>> mm,migration: Remove dangling migration ptes pointing to unlocked pages
>>
>> Due to some yet-to-be-identified race, it is possible for migration PTEs
>> to be left behind. When the page is later paged in, a BUG is triggered that assumes
>> that all migration PTEs point to a page currently being migrated and
>> so must be locked.
>>
>> Rather than calling BUG, this patch notes the existence of dangling migration
>> PTEs in migration_entry_wait() and cleans them up.
>>
>
> I use a similar patch for debugging. In my patch, when this function finds a
> dangling migration entry, it returns an error code and do_swap_page() returns
> VM_FAULT_SIGBUS.
>
>
> Hmm..in my test, the case was.
>
> Before try_to_unmap:
>        mapcount=1, SwapCache, remap_swapcache=1
> After remap
>        mapcount=0, SwapCache, rc=0.
>
> So, I think there may be some race in rmap_walk() and vma handling or
> anon_vma handling. The migration_entry isn't found by rmap_walk.
>
> Hmm.. it seems this kind of patch will be required for debugging.

I looked at do_swap_page again.
lock_page() is called long after migration_entry_wait().
It means lock_page() can't close the race.
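
Roughly, the order in do_swap_page() is (abbreviated from memory, details
omitted):

==
	entry = pte_to_swp_entry(orig_pte);
	if (unlikely(non_swap_entry(entry))) {
		if (is_migration_entry(entry)) {
			/* migration_entry_to_page() inside here has the
			 * BUG_ON(!PageLocked(page)) that fires on a
			 * dangling migration pte */
			migration_entry_wait(mm, pmd, address);
		}
		...
	}
	page = lookup_swap_cache(entry);
	...
	lock_page(page);	/* only here, long after migration_entry_wait() */
==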

So I think this BUG is possible.
What do you think?

> -Kame
>
>
>
>



--
Kind regards,
Minchan Kim
From: Minchan Kim on
Hi, Christoph.

On Fri, Apr 23, 2010 at 12:14 AM, Christoph Lameter <cl(a)linux.com> wrote:
> On Thu, 22 Apr 2010, Minchan Kim wrote:
>
>> For further optimization, we can hold vma->adjust_lock if vma_address
>> returns -EFAULT. But I hope we can redesign it without new locking.
>> But I don't have a good idea now. :(
>
> You could make it atomic through the use of RCU.
>
> Create a new vma entry with the changed parameters and then atomically
> switch to the new vma.
> Problem is that you have some list_heads in there.

That's a good idea if we can do it _simply_.
That's because there is already a lot of confusion around anon_vma and vma handling nowadays
(http://thread.gmane.org/gmane.linux.kernel/969907).
So I hope we can solve the problem without rather complicated RCU locking
if it isn't a critical path.

--
Kind regards,
Minchan Kim