From: Mel Gorman on
On Thu, Jul 29, 2010 at 07:34:19PM +0900, KOSAKI Motohiro wrote:
> > > <SNIP>
> > >
> > > 2. synchronous lumpy reclaim condition is insane.
> > >
> > > Currently, synchronous lumpy reclaim is invoked under the following
> > > condition:
> > >
> > > if (nr_reclaimed < nr_taken && !current_is_kswapd() &&
> > > sc->lumpy_reclaim_mode) {
> > >
> > > but "nr_reclaimed < nr_taken" is pretty stupid. if isolated pages have
> > > much dirty pages, pageout() only issue first 113 IOs.
> > > (if io queue have >113 requests, bdi_write_congested() return true and
> > > may_write_to_queue() return false)
> > >
> > > So, we haven't call ->writepage(), congestion_wait() and wait_on_page_writeback()
> > > are surely stupid.
> > >
> >
> > This is somewhat intentional though. See the comment
> >
> > /*
> > * Synchronous reclaim is performed in two passes,
> > * first an asynchronous pass over the list to
> > * start parallel writeback, and a second synchronous
> > * pass to wait for the IO to complete......
> >
> > If not all of the pages taken off the list were reclaimed, it means that some of
> > them were dirty but most should now be queued for writeback (possibly not all if
> > congested). The intention is to loop a second time waiting for that writeback
> > to complete before continuing on.
>
> May I explain a bit more? Generally, whether a retry is worthwhile depends on its
> success ratio. Currently, shrink_page_list() can't free a page in the following situations:
>
> 1. trylock_page() failure
> 2. page is unevictable
> 3. zone reclaim and page is mapped
> 4. PageWriteback() is true and not synchronous lumpy reclaim
> 5. page is swapbacked and swap is full
> 6. add_to_swap() fails (note, this fails more often than expected because
>    it uses GFP_NOMEMALLOC)
> 7. page is dirty and the gfp mask doesn't include __GFP_IO/__GFP_FS
> 8. page is pinned
> 9. IO queue is congested
> 10. pageout() started IO, but it hasn't finished yet
>
> So, (4) and (10) are perfectly good conditions to wait for.

Sure

> (1) and (8) might be solved by sleeping a while, but that is unrelated to IO
> congestion, and they might not be solved at all. It would only work by luck,
> and I don't like to depend on luck.

In this case, waiting a while really is the right thing to do. It stalls
the caller, but it's a high-order allocation. The alternative is for it
to keep scanning, which under memory pressure could result in far
too many pages being evicted. How long to wait is a tricky question,
but I would recommend making this a low priority.

> (9) can be solved by waiting for IO, but congestion_wait() is NOT the correct wait.
> congestion_wait() means "sleep until one or more block devices in the system are no
> longer congested". That said, if the system has two or more disks, congestion_wait()
> doesn't work well for the synchronous lumpy reclaim case. By the way, desktop users
> often use USB storage devices.


Indeed not. Eliminating congestion_wait() there and depending instead on
wait_on_page_writeback() is a better feedback mechanism and makes sense.
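
The per-page wait already exists in shrink_page_list() for the synchronous
pass; from memory (so treat the details as approximate) the relevant hunk
looks roughly like this:

        if (PageWriteback(page)) {
                /*
                 * For the PAGEOUT_IO_SYNC pass, stall on exactly the
                 * pages (and therefore exactly the devices) we care
                 * about instead of guessing from global congestion.
                 */
                if (sync_writeback == PAGEOUT_IO_SYNC && may_enter_fs)
                        wait_on_page_writeback(page);
                else
                        goto keep_locked;
        }

so deleting the congestion_wait() before the second pass and letting that
wait do the throttling should behave much better when the congested device
is not the one we are reclaiming against.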

> (2), (3), (5), (6) and (7) can't be solved by waiting. It's just silly.
>

Agreed.

> On the other hand, synchronous lumpy reclaim works fine in the following situation:
>
> 1. shrink_page_list(PAGEOUT_IO_ASYNC) was called
> 2. pageout() kicked off IO
> 3. we waited with wait_on_page_writeback()
> 4. an application touched the page again, so the page became dirty again
> 5. the IO finished and woke up the reclaiming thread
> 6. pageout() was called again
> 7. wait_on_page_writeback() was called again
> 8. OK, the high-order reclaim succeeded
>
> So, I'd like to narrow the condition under which synchronous lumpy reclaim is invoked.
>

Which is reasonable.
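
As a rough illustration of the direction (totally untested, and
lumpy_io_pending() below is a made-up helper, not existing code), the
trigger could check whether the asynchronous pass actually left anything
under writeback before bothering with a second pass:

/* made-up helper: is there IO on this list that is worth waiting for? */
static bool lumpy_io_pending(struct list_head *page_list)
{
        struct page *page;

        list_for_each_entry(page, page_list, lru)
                if (PageWriteback(page))
                        return true;
        return false;
}

        /* in shrink_inactive_list(), replacing the nr_reclaimed < nr_taken test */
        if (!current_is_kswapd() && sc->lumpy_reclaim_mode &&
                                        lumpy_io_pending(&page_list))
                nr_reclaimed += shrink_page_list(&page_list, sc,
                                                 PAGEOUT_IO_SYNC);

That drops the "nr_reclaimed < nr_taken" test entirely, so swap-full, pinned
or otherwise unreclaimable pages no longer drag us into the synchronous path
on their own.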

>
> >
> > > 3. pageout() is intended to be an asynchronous API, but it doesn't work that way.
> > >
> > > pageout() calls ->writepage with wbc->nonblocking=1, because with the default
> > > vm.dirty_ratio (i.e. 20) we have 80% clean memory, so getting stuck on one page
> > > is stupid; we should scan as many pages as possible, as quickly as possible.
> > >
> > > HOWEVER, the block layer ignores this argument. If a slow USB storage device is
> > > connected to the system, ->writepage() can sleep for a long time, because
> > > submit_bio() calls get_request_wait() unconditionally and gives no bonus to
> > > PF_MEMALLOC tasks.
> >
> > Is this not a problem in the writeback layer rather than pageout()
> > specifically?
>
> Well, outside pageout(), probably only XFS does PF_MEMALLOC + writeout, because
> PF_MEMALLOC is enabled only in very limited situations. But I don't know the XFS
> details at all, so I can't speak to that area...
>

All direct reclaimers have PF_MEMALLOC set so it's not that limited a
situation. See here

p->flags |= PF_MEMALLOC;
lockdep_set_current_reclaim_state(gfp_mask);
reclaim_state.reclaimed_slab = 0;
p->reclaim_state = &reclaim_state;

*did_some_progress = try_to_free_pages(zonelist, order, gfp_mask, nodemask);

p->reclaim_state = NULL;
lockdep_clear_current_reclaim_state();
p->flags &= ~PF_MEMALLOC;

> > > 4. synchronous lumpy reclaim calls clear_active_flags(), but that is also silly.
> > >
> > > Now, page_check_references() ignores the pte young bit when we are processing lumpy
> > > reclaim. Then, in almost every case, PageActive() means "the swap device is full".
> > > Therefore, waiting for IO and retrying pageout() is just silly.
> > >
> >
> > try_to_unmap() also obeys reference bits. If you remove the call to
> > clear_active_flags(), then pageout should pass TTU_IGNORE_ACCESS to
> > try_to_unmap(). I had a patch to do this but it didn't improve
> > high-order allocation success rates at all, so I dropped it.
>
> I think this is an unrelated issue. Actually, page_referenced() is called before
> try_to_unmap(), and page_referenced() will drop the pte young bit. That logic has a very
> narrow race, but I don't think it is a big matter in practice.
>
> And, as I said, PageActive() means a retry is not meaningful. Usually a full swap doesn't
> clear up even if we wait a while.
>

Ok.

--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
From: KOSAKI Motohiro on
> > (1) and (8) might be solved by sleeping a while, but that is unrelated to IO
> > congestion, and they might not be solved at all. It would only work by luck,
> > and I don't like to depend on luck.
>
> In this case, waiting a while really is the right thing to do. It stalls
> the caller, but it's a high-order allocation. The alternative is for it
> to keep scanning, which under memory pressure could result in far
> too many pages being evicted. How long to wait is a tricky question,
> but I would recommend making this a low priority.

For case (1), simply using lock_page() instead of trylock is a much better approach than a
random sleep. Is there any good reason to give up synchronous lumpy reclaim when
trylock_page() fails? IOW, a brief lock_page() and wait_on_page_writeback() have about the
same latency, so why should we avoid only the former?

Side note: page lock contention is a very common case.
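
To illustrate (completely untested, and the parameter name is from memory),
the change I have in mind in shrink_page_list() is something like:

        if (!trylock_page(page)) {
                /*
                 * The synchronous lumpy pass can afford to block on
                 * the page lock; the asynchronous pass keeps the old
                 * give-up behaviour.
                 */
                if (sync_writeback == PAGEOUT_IO_SYNC)
                        lock_page(page);
                else
                        goto keep;
        }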

For case (8), I don't think sleeping is the right way. get_page() is used in a great many
places in the kernel, so we can't assume the reference count was only bumped temporarily.
On the other hand, this contention is not that common, because shrink_page_list() is
excluded from IO activity by the page lock and wait_on_page_writeback(). So I don't think
giving up in this case causes too many pages to be evicted.
If you disagree, can you please explain the bad scenario you expect?



> > > > 3. pageout() is intended to be an asynchronous API, but it doesn't work that way.
> > > >
> > > > pageout() calls ->writepage with wbc->nonblocking=1, because with the default
> > > > vm.dirty_ratio (i.e. 20) we have 80% clean memory, so getting stuck on one page
> > > > is stupid; we should scan as many pages as possible, as quickly as possible.
> > > >
> > > > HOWEVER, the block layer ignores this argument. If a slow USB storage device is
> > > > connected to the system, ->writepage() can sleep for a long time, because
> > > > submit_bio() calls get_request_wait() unconditionally and gives no bonus to
> > > > PF_MEMALLOC tasks.
> > >
> > > Is this not a problem in the writeback layer rather than pageout()
> > > specifically?
> >
> > Well, outside pageout(), probably only XFS does PF_MEMALLOC + writeout, because
> > PF_MEMALLOC is enabled only in very limited situations. But I don't know the XFS
> > details at all, so I can't speak to that area...
> >
>
> All direct reclaimers have PF_MEMALLOC set so it's not that limited a
> situation. See here

Yes, all direct reclaimers have PF_MEMALLOC, but direct reclaimers usually don't call
any IO-related function except pageout(). As far as I know, the current shrink_icache()
and shrink_dcache() don't issue IO. Am I missing something?



From: Mel Gorman on
On Fri, Jul 30, 2010 at 01:54:53PM +0900, KOSAKI Motohiro wrote:
> > > (1) and (8) might be solved by sleeping a while, but that is unrelated to IO
> > > congestion, and they might not be solved at all. It would only work by luck,
> > > and I don't like to depend on luck.
> >
> > In this case, waiting a while really is the right thing to do. It stalls
> > the caller, but it's a high-order allocation. The alternative is for it
> > to keep scanning, which under memory pressure could result in far
> > too many pages being evicted. How long to wait is a tricky question,
> > but I would recommend making this a low priority.
>
> For case (1), simply using lock_page() instead of trylock is a much better approach than a
> random sleep. Is there any good reason to give up synchronous lumpy reclaim when
> trylock_page() fails? IOW, a brief lock_page() and wait_on_page_writeback() have about the
> same latency, so why should we avoid only the former?
>

No reason. Using lock_page() in the synchronous case would be a sensible
choice. As you are realising, there are a number of warts around lumpy
reclaim that are long overdue for a good look :/

> Side note: page lock contention is a very common case.
>
> For case (8), I don't think sleeping is the right way. get_page() is used in a great many
> places in the kernel, so we can't assume the reference count was only bumped temporarily.

In what case is a munlocked page's reference count permanently increased, and
why is this not a memory leak?

> On the other hand, this contention is not that common, because shrink_page_list() is
> excluded from IO activity by the page lock and wait_on_page_writeback(). So I don't think
> giving up in this case causes too many pages to be evicted.
> If you disagree, can you please explain the bad scenario you expect?
>

Right now, I can't think of a problem with calling lock_page instead of
trylock for synchronous lumpy reclaim.

> > > > > 3. pageout() is intended to be an asynchronous API, but it doesn't work that way.
> > > > >
> > > > > pageout() calls ->writepage with wbc->nonblocking=1, because with the default
> > > > > vm.dirty_ratio (i.e. 20) we have 80% clean memory, so getting stuck on one page
> > > > > is stupid; we should scan as many pages as possible, as quickly as possible.
> > > > >
> > > > > HOWEVER, the block layer ignores this argument. If a slow USB storage device is
> > > > > connected to the system, ->writepage() can sleep for a long time, because
> > > > > submit_bio() calls get_request_wait() unconditionally and gives no bonus to
> > > > > PF_MEMALLOC tasks.
> > > >
> > > > Is this not a problem in the writeback layer rather than pageout()
> > > > specifically?
> > >
> > > Well, outside pageout(), probably only XFS does PF_MEMALLOC + writeout, because
> > > PF_MEMALLOC is enabled only in very limited situations. But I don't know the XFS
> > > details at all, so I can't speak to that area...
> > >
> >
> > All direct reclaimers have PF_MEMALLOC set so it's not that limited a
> > situation. See here
>
> Yes, all direct reclaimers have PF_MEMALLOC, but direct reclaimers usually don't call
> any IO-related function except pageout(). As far as I know, the current shrink_icache()
> and shrink_dcache() don't issue IO. Am I missing something?
>

Not that I'm aware of but it's not something I would know offhand. Will
go digging.

--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
From: KOSAKI Motohiro on
> > Side note: page lock contention is a very common case.
> >
> > For case (8), I don't think sleeping is the right way. get_page() is used in a great many
> > places in the kernel, so we can't assume the reference count was only bumped temporarily.
>
> In what case is a munlocked page's reference count permanently increased, and
> why is this not a memory leak?

V4L, audio, GEM and/or other multimedia drivers?



From: Mel Gorman on
On Sun, Aug 01, 2010 at 05:47:08PM +0900, KOSAKI Motohiro wrote:
> > > Side note: page lock contention is a very common case.
> > >
> > > For case (8), I don't think sleeping is the right way. get_page() is used in a great many
> > > places in the kernel, so we can't assume the reference count was only bumped temporarily.
> >
> > In what case is a munlocked page's reference count permanently increased, and
> > why is this not a memory leak?
>
> V4L, audio, GEM and/or other multimedia drivers?
>

Ok, that is quite likely. Have you made a start on a series related to
lumpy reclaim? I was holding off making a start on such a thing while I
reviewed the other writeback issues, and travelling to MM Summit is going
to delay things for me. If you haven't started when I get back, I'll
make some sort of stab at it.

--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab