From: Andrea Righi on
On Fri, Feb 26, 2010 at 04:48:11PM -0500, Vivek Goyal wrote:
> On Thu, Feb 25, 2010 at 04:12:11PM +0100, Andrea Righi wrote:
> > On Tue, Feb 23, 2010 at 04:29:43PM -0500, Vivek Goyal wrote:
> > > On Sun, Feb 21, 2010 at 04:18:45PM +0100, Andrea Righi wrote:
> > >
> > > [..]
> > > > diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> > > > index 0b19943..c9ff1cd 100644
> > > > --- a/mm/page-writeback.c
> > > > +++ b/mm/page-writeback.c
> > > > @@ -137,10 +137,11 @@ static struct prop_descriptor vm_dirties;
> > > > */
> > > > static int calc_period_shift(void)
> > > > {
> > > > - unsigned long dirty_total;
> > > > + unsigned long dirty_total, dirty_bytes;
> > > >
> > > > - if (vm_dirty_bytes)
> > > > - dirty_total = vm_dirty_bytes / PAGE_SIZE;
> > > > + dirty_bytes = mem_cgroup_dirty_bytes();
> > > > + if (dirty_bytes)
> > > > + dirty_total = dirty_bytes / PAGE_SIZE;
> > > > else
> > > > dirty_total = (vm_dirty_ratio * determine_dirtyable_memory()) /
> > > > 100;
> > >
> > > OK, I don't understand this, so I'd better ask. Can you explain a bit how the
> > > memory cgroup dirty ratio is going to play with the per-BDI dirty proportion thing?
> > >
> > > Currently we seem to be calculating the per-BDI proportion (based on recently
> > > completed events) of the system-wide dirty ratio, and deciding whether a process
> > > should be throttled or not.
> > >
> > > Because the throttling decision is also based on the BDI and its proportion, how
> > > are we going to fit it with the memory cgroup? Is it going to be the BDI proportion
> > > of dirty memory within the memory cgroup (and not system-wide)?
> >
> > IMHO we need to calculate the BDI dirty threshold as a function of the
> > cgroup's dirty memory, and keep BDI statistics system wide.
> >
> > So, if a task is generating writes, the threshold at which it must itself
> > start writeback will be calculated as a function of the cgroup's dirty
> > memory. If the BDI dirty memory is greater than this threshold, the task
> > must start writing back dirty pages until it reaches the expected dirty
> > limit.
> >
>
> OK, so calculate dirty per cgroup and calculate the BDI's proportion from
> the cgroup dirty? So will you be keeping track of vm_completion events per
> cgroup, or will you rely on the existing system-wide and per-BDI completion
> events to calculate the BDI proportion?
>
> The BDI proportion is more of an indication of device speed, and a faster
> device gets a higher share of dirty memory, so maybe we don't have to keep
> track of completion events per cgroup and can rely on system-wide completion
> events for calculating the proportion of a BDI.
>
> > OK, in this way a cgroup with a small dirty limit may be forced to
> > write back a lot of pages dirtied by other cgroups on the same device.
> > But this is always related to the fact that tasks are forced to
> > write back dirty inodes randomly, and not the inodes they've actually
> > dirtied.
>
> So we are left with the following two issues.
>
> - Should we rely on global BDI stats for BDI_RECLAIMABLE and BDI_WRITEBACK,
> or do we need to make these per-cgroup to determine how many pages have
> actually been dirtied by a cgroup and force writeouts accordingly?
>
> - Once we decide to throttle a cgroup, it should write its own inodes and
> should not be serialized behind other cgroups' inodes.

We could try to record which cgroup made the inode dirty (e.g. an
inode->cgroup_that_made_inode_dirty field) so that during active
writeback each cgroup can be forced to write only its own inodes.
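
Just to sketch the idea (the i_dirty_css_id field and the
task_memcg_css_id() helper below are made-up names, not existing
kernel API):

	/* at dirtying time, e.g. from __mark_inode_dirty() */
	static void inode_record_dirtier(struct inode *inode)
	{
		/* remember the css id of the cgroup dirtying this inode */
		if (!inode->i_dirty_css_id)
			inode->i_dirty_css_id = task_memcg_css_id(current);
	}

	/* at writeback time: did the throttled task's cgroup dirty it? */
	static bool inode_dirtied_by_task(struct inode *inode,
					  struct task_struct *tsk)
	{
		return inode->i_dirty_css_id == task_memcg_css_id(tsk);
	}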

-Andrea
From: Vivek Goyal on
On Fri, Feb 26, 2010 at 11:21:21PM +0100, Andrea Righi wrote:
> On Fri, Feb 26, 2010 at 04:48:11PM -0500, Vivek Goyal wrote:
> > [..]
> >
> > So we are left with the following two issues.
> >
> > - Should we rely on global BDI stats for BDI_RECLAIMABLE and BDI_WRITEBACK,
> > or do we need to make these per-cgroup to determine how many pages have
> > actually been dirtied by a cgroup and force writeouts accordingly?
> >
> > - Once we decide to throttle a cgroup, it should write its own inodes and
> > should not be serialized behind other cgroups' inodes.
>
> We could try to record which cgroup made the inode dirty (e.g. an
> inode->cgroup_that_made_inode_dirty field) so that during active
> writeback each cgroup can be forced to write only its own inodes.

Yes, but that would require storing a reference to the memcg and would
become a little complicated.

I was thinking of just matching the cgroup of the task being throttled
against the memcg of the first dirty page in the inode. So we could
implement something like the following in the memory controller:

bool memcg_task_inode_cgroup_match(struct inode *inode);

This function would retrieve the first dirty page and compare its cgroup
with the task's memory cgroup. No hassle of storing a pointer, and hence
no reference to the memcg.
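
Roughly something like this (only a sketch: find_first_dirty_page()
is a placeholder for whatever mechanism we use to grab a dirty page
from the mapping, and locking/RCU details are omitted):

	bool memcg_task_inode_cgroup_match(struct inode *inode)
	{
		struct mem_cgroup *memcg, *task_memcg;
		struct page *page;
		bool match;

		page = find_first_dirty_page(inode->i_mapping);
		if (!page)
			return true;	/* nothing to compare against */

		memcg = try_get_mem_cgroup_from_page(page);
		task_memcg = mem_cgroup_from_task(current);
		match = (memcg == task_memcg);
		if (memcg)
			css_put(&memcg->css);
		return match;
	}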

Well, we could store the css_id instead, with no need to keep a reference
to the memcg. But I guess not storing anything in the inode will be simpler.

Thanks
Vivek
From: KAMEZAWA Hiroyuki on
On Fri, 26 Feb 2010 16:48:11 -0500
Vivek Goyal <vgoyal@redhat.com> wrote:

> On Thu, Feb 25, 2010 at 04:12:11PM +0100, Andrea Righi wrote:
> > On Tue, Feb 23, 2010 at 04:29:43PM -0500, Vivek Goyal wrote:
> > > On Sun, Feb 21, 2010 at 04:18:45PM +0100, Andrea Righi wrote:

> Because the bdi_thresh calculation will be based on per-cgroup dirty memory
> while bdi_nr_reclaimable and bdi_nr_writeback will be system-wide, we will
> be doing much more aggressive writeouts.
>
> But we will not achieve parallel writeback paths, so it probably will not
> help the IO controller a lot.
>
> Kame-san, is it a problem with current memory cgroups that writeback is
> not happening that actively, and you run into situations where there are
> too many dirty pages in a cgroup and reclaim can take a long time?
>
Hmm, it's not the same situation as global memory management, but we have something similar.

In memcg we only count the user's pages, so the "hard to reclaim" situation
doesn't happen. But "reclaim is slower than expected" is a usual problem.

When you try
% dd if=/dev/zero of=./tmpfile .....
under a proper memcg limit, you'll find that dd is very slow.
We know background writeback helps this situation. We need to kick off
background writeback.
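
For example (just a sketch, reusing the helper names from Andrea's
patchset), balance_dirty_pages() could kick the flusher thread as soon
as the cgroup itself crosses its background threshold, even when global
dirty memory is still low:

	nr_reclaimable = mem_cgroup_page_stat(MEMCG_NR_RECLAIM_PAGES);
	if (nr_reclaimable > background_thresh)
		bdi_start_writeback(bdi, NULL, 0);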

Thanks,
-Kame

From: Vivek Goyal on
On Mon, Mar 01, 2010 at 10:23:40PM +0100, Andrea Righi wrote:
> Apply the cgroup dirty pages accounting and limiting infrastructure to
> the appropriate kernel functions.
>
> Signed-off-by: Andrea Righi <arighi@develer.com>
> ---
> fs/fuse/file.c | 5 +++
> fs/nfs/write.c | 4 ++
> fs/nilfs2/segment.c | 10 +++++-
> mm/filemap.c | 1 +
> mm/page-writeback.c | 84 ++++++++++++++++++++++++++++++++------------------
> mm/rmap.c | 4 +-
> mm/truncate.c | 2 +
> 7 files changed, 76 insertions(+), 34 deletions(-)
>
> diff --git a/fs/fuse/file.c b/fs/fuse/file.c
> index a9f5e13..dbbdd53 100644
> --- a/fs/fuse/file.c
> +++ b/fs/fuse/file.c
> @@ -11,6 +11,7 @@
> #include <linux/pagemap.h>
> #include <linux/slab.h>
> #include <linux/kernel.h>
> +#include <linux/memcontrol.h>
> #include <linux/sched.h>
> #include <linux/module.h>
>
> @@ -1129,6 +1130,8 @@ static void fuse_writepage_finish(struct fuse_conn *fc, struct fuse_req *req)
>
> list_del(&req->writepages_entry);
> dec_bdi_stat(bdi, BDI_WRITEBACK);
> + mem_cgroup_update_stat(req->pages[0],
> + MEM_CGROUP_STAT_WRITEBACK_TEMP, -1);
> dec_zone_page_state(req->pages[0], NR_WRITEBACK_TEMP);
> bdi_writeout_inc(bdi);
> wake_up(&fi->page_waitq);
> @@ -1240,6 +1243,8 @@ static int fuse_writepage_locked(struct page *page)
> req->inode = inode;
>
> inc_bdi_stat(mapping->backing_dev_info, BDI_WRITEBACK);
> + mem_cgroup_update_stat(tmp_page,
> + MEM_CGROUP_STAT_WRITEBACK_TEMP, 1);
> inc_zone_page_state(tmp_page, NR_WRITEBACK_TEMP);
> end_page_writeback(page);
>
> diff --git a/fs/nfs/write.c b/fs/nfs/write.c
> index b753242..7316f7a 100644
> --- a/fs/nfs/write.c
> +++ b/fs/nfs/write.c
> @@ -439,6 +439,7 @@ nfs_mark_request_commit(struct nfs_page *req)
> req->wb_index,
> NFS_PAGE_TAG_COMMIT);
> spin_unlock(&inode->i_lock);
> + mem_cgroup_update_stat(req->wb_page, MEM_CGROUP_STAT_UNSTABLE_NFS, 1);
> inc_zone_page_state(req->wb_page, NR_UNSTABLE_NFS);
> inc_bdi_stat(req->wb_page->mapping->backing_dev_info, BDI_UNSTABLE);
> __mark_inode_dirty(inode, I_DIRTY_DATASYNC);
> @@ -450,6 +451,7 @@ nfs_clear_request_commit(struct nfs_page *req)
> struct page *page = req->wb_page;
>
> if (test_and_clear_bit(PG_CLEAN, &(req)->wb_flags)) {
> + mem_cgroup_update_stat(page, MEM_CGROUP_STAT_UNSTABLE_NFS, -1);
> dec_zone_page_state(page, NR_UNSTABLE_NFS);
> dec_bdi_stat(page->mapping->backing_dev_info, BDI_UNSTABLE);
> return 1;
> @@ -1273,6 +1275,8 @@ nfs_commit_list(struct inode *inode, struct list_head *head, int how)
> req = nfs_list_entry(head->next);
> nfs_list_remove_request(req);
> nfs_mark_request_commit(req);
> + mem_cgroup_update_stat(req->wb_page,
> + MEM_CGROUP_STAT_UNSTABLE_NFS, -1);
> dec_zone_page_state(req->wb_page, NR_UNSTABLE_NFS);
> dec_bdi_stat(req->wb_page->mapping->backing_dev_info,
> BDI_UNSTABLE);
> diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
> index ada2f1b..aef6d13 100644
> --- a/fs/nilfs2/segment.c
> +++ b/fs/nilfs2/segment.c
> @@ -1660,8 +1660,11 @@ nilfs_copy_replace_page_buffers(struct page *page, struct list_head *out)
> } while (bh = bh->b_this_page, bh2 = bh2->b_this_page, bh != head);
> kunmap_atomic(kaddr, KM_USER0);
>
> - if (!TestSetPageWriteback(clone_page))
> + if (!TestSetPageWriteback(clone_page)) {
> + mem_cgroup_update_stat(clone_page,
> + MEM_CGROUP_STAT_WRITEBACK, 1);
> inc_zone_page_state(clone_page, NR_WRITEBACK);
> + }
> unlock_page(clone_page);
>
> return 0;
> @@ -1783,8 +1786,11 @@ static void __nilfs_end_page_io(struct page *page, int err)
> }
>
> if (buffer_nilfs_allocated(page_buffers(page))) {
> - if (TestClearPageWriteback(page))
> + if (TestClearPageWriteback(page)) {
> > + mem_cgroup_update_stat(page,
> > + MEM_CGROUP_STAT_WRITEBACK, -1);
> dec_zone_page_state(page, NR_WRITEBACK);
> + }
> } else
> end_page_writeback(page);
> }
> diff --git a/mm/filemap.c b/mm/filemap.c
> index fe09e51..f85acae 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -135,6 +135,7 @@ void __remove_from_page_cache(struct page *page)
> * having removed the page entirely.
> */
> if (PageDirty(page) && mapping_cap_account_dirty(mapping)) {
> + mem_cgroup_update_stat(page, MEM_CGROUP_STAT_FILE_DIRTY, -1);
> dec_zone_page_state(page, NR_FILE_DIRTY);
> dec_bdi_stat(mapping->backing_dev_info, BDI_DIRTY);
> }
> diff --git a/mm/page-writeback.c b/mm/page-writeback.c
> index 5a0f8f3..d83f41c 100644
> --- a/mm/page-writeback.c
> +++ b/mm/page-writeback.c
> @@ -137,13 +137,14 @@ static struct prop_descriptor vm_dirties;
> */
> static int calc_period_shift(void)
> {
> - unsigned long dirty_total;
> + unsigned long dirty_total, dirty_bytes;
>
> - if (vm_dirty_bytes)
> - dirty_total = vm_dirty_bytes / PAGE_SIZE;
> + dirty_bytes = mem_cgroup_dirty_bytes();
> + if (dirty_bytes)
> + dirty_total = dirty_bytes / PAGE_SIZE;
> else
> - dirty_total = (vm_dirty_ratio * determine_dirtyable_memory()) /
> - 100;
> + dirty_total = (mem_cgroup_dirty_ratio() *
> + determine_dirtyable_memory()) / 100;
> return 2 + ilog2(dirty_total - 1);
> }
>
> @@ -408,14 +409,16 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
> */
> unsigned long determine_dirtyable_memory(void)
> {
> - unsigned long x;
> -
> - x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages();
> + unsigned long memory;
> + s64 memcg_memory;
>
> + memory = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages();
> if (!vm_highmem_is_dirtyable)
> - x -= highmem_dirtyable_memory(x);
> -
> - return x + 1; /* Ensure that we never return 0 */
> + memory -= highmem_dirtyable_memory(memory);
> + memcg_memory = mem_cgroup_page_stat(MEMCG_NR_DIRTYABLE_PAGES);
> + if (memcg_memory < 0)
> + return memory + 1;
> + return min((unsigned long)memcg_memory, memory + 1);
> }
>
> void
> @@ -423,26 +426,28 @@ get_dirty_limits(unsigned long *pbackground, unsigned long *pdirty,
> unsigned long *pbdi_dirty, struct backing_dev_info *bdi)
> {
> unsigned long background;
> - unsigned long dirty;
> + unsigned long dirty, dirty_bytes, dirty_background;
> unsigned long available_memory = determine_dirtyable_memory();
> struct task_struct *tsk;
>
> - if (vm_dirty_bytes)
> - dirty = DIV_ROUND_UP(vm_dirty_bytes, PAGE_SIZE);
> + dirty_bytes = mem_cgroup_dirty_bytes();
> + if (dirty_bytes)
> + dirty = DIV_ROUND_UP(dirty_bytes, PAGE_SIZE);
> else {
> int dirty_ratio;
>
> - dirty_ratio = vm_dirty_ratio;
> + dirty_ratio = mem_cgroup_dirty_ratio();
> if (dirty_ratio < 5)
> dirty_ratio = 5;
> dirty = (dirty_ratio * available_memory) / 100;
> }
>
> - if (dirty_background_bytes)
> - background = DIV_ROUND_UP(dirty_background_bytes, PAGE_SIZE);
> + dirty_background = mem_cgroup_dirty_background_bytes();
> + if (dirty_background)
> + background = DIV_ROUND_UP(dirty_background, PAGE_SIZE);
> else
> - background = (dirty_background_ratio * available_memory) / 100;
> -
> + background = (mem_cgroup_dirty_background_ratio() *
> + available_memory) / 100;
> if (background >= dirty)
> background = dirty / 2;
> tsk = current;
> @@ -508,9 +513,13 @@ static void balance_dirty_pages(struct address_space *mapping,
> get_dirty_limits(&background_thresh, &dirty_thresh,
> &bdi_thresh, bdi);
>
> - nr_reclaimable = global_page_state(NR_FILE_DIRTY) +
> + nr_reclaimable = mem_cgroup_page_stat(MEMCG_NR_RECLAIM_PAGES);
> + nr_writeback = mem_cgroup_page_stat(MEMCG_NR_WRITEBACK);
> + if ((nr_reclaimable < 0) || (nr_writeback < 0)) {
> + nr_reclaimable = global_page_state(NR_FILE_DIRTY) +
> global_page_state(NR_UNSTABLE_NFS);
> - nr_writeback = global_page_state(NR_WRITEBACK);
> + nr_writeback = global_page_state(NR_WRITEBACK);
> + }
>
> bdi_nr_reclaimable = bdi_stat(bdi, BDI_DIRTY);
> if (bdi_cap_account_unstable(bdi)) {
> @@ -611,10 +620,12 @@ static void balance_dirty_pages(struct address_space *mapping,
> * In normal mode, we start background writeout at the lower
> * background_thresh, to keep the amount of dirty memory low.
> */
> + nr_reclaimable = mem_cgroup_page_stat(MEMCG_NR_RECLAIM_PAGES);
> + if (nr_reclaimable < 0)
> + nr_reclaimable = global_page_state(NR_FILE_DIRTY) +
> + global_page_state(NR_UNSTABLE_NFS);
> if ((laptop_mode && pages_written) ||
> - (!laptop_mode && ((global_page_state(NR_FILE_DIRTY)
> - + global_page_state(NR_UNSTABLE_NFS))
> - > background_thresh)))
> + (!laptop_mode && (nr_reclaimable > background_thresh)))
> bdi_start_writeback(bdi, NULL, 0);
> }
>
> @@ -678,6 +689,8 @@ void throttle_vm_writeout(gfp_t gfp_mask)
> unsigned long dirty_thresh;
>
> for ( ; ; ) {
> + unsigned long dirty;
> +
> get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
>
> /*
> @@ -686,10 +699,14 @@ void throttle_vm_writeout(gfp_t gfp_mask)
> */
> dirty_thresh += dirty_thresh / 10; /* wheeee... */
>
> - if (global_page_state(NR_UNSTABLE_NFS) +
> - global_page_state(NR_WRITEBACK) <= dirty_thresh)
> - break;
> - congestion_wait(BLK_RW_ASYNC, HZ/10);
> +
> + dirty = mem_cgroup_page_stat(MEMCG_NR_DIRTY_WRITEBACK_PAGES);
> + if (dirty < 0)
> + dirty = global_page_state(NR_UNSTABLE_NFS) +
> + global_page_state(NR_WRITEBACK);

dirty is unsigned long. As mentioned last time, the above check will never
be true? In general these patches look OK to me. I will do some testing
with them.

Vivek
From: Andrea Righi on
On Mon, Mar 01, 2010 at 05:02:08PM -0500, Vivek Goyal wrote:
> > @@ -686,10 +699,14 @@ void throttle_vm_writeout(gfp_t gfp_mask)
> > */
> > dirty_thresh += dirty_thresh / 10; /* wheeee... */
> >
> > - if (global_page_state(NR_UNSTABLE_NFS) +
> > - global_page_state(NR_WRITEBACK) <= dirty_thresh)
> > - break;
> > - congestion_wait(BLK_RW_ASYNC, HZ/10);
> > +
> > + dirty = mem_cgroup_page_stat(MEMCG_NR_DIRTY_WRITEBACK_PAGES);
> > + if (dirty < 0)
> > + dirty = global_page_state(NR_UNSTABLE_NFS) +
> > + global_page_state(NR_WRITEBACK);
>
> dirty is unsigned long. As mentioned last time, the above check will never
> be true? In general these patches look OK to me. I will do some testing
> with them.

Re-introduced the same bug. My bad. :(

The value returned from mem_cgroup_page_stat() can be negative, e.g. when
the memory cgroup is disabled. We could simply use a long for dirty (the
unit is number of pages, so a signed type is enough), or cast dirty to long
only for the check (see below).
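
To be explicit about why the check can never fire as written (gcc with
-Wtype-limits would even warn that the comparison is always false):

	unsigned long dirty;

	dirty = mem_cgroup_page_stat(MEMCG_NR_DIRTY_WRITEBACK_PAGES);
	/*
	 * mem_cgroup_page_stat() returns s64: a negative error value
	 * assigned to an unsigned long wraps to a huge positive number,
	 * so "dirty < 0" is never true and the fallback is never taken.
	 * Casting back to a signed type recovers the error value:
	 */
	if ((long)dirty < 0)
		dirty = global_page_state(NR_UNSTABLE_NFS) +
			global_page_state(NR_WRITEBACK);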

Thanks!
-Andrea

Signed-off-by: Andrea Righi <arighi@develer.com>
---
mm/page-writeback.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index d83f41c..dbee976 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -701,7 +701,7 @@ void throttle_vm_writeout(gfp_t gfp_mask)


dirty = mem_cgroup_page_stat(MEMCG_NR_DIRTY_WRITEBACK_PAGES);
- if (dirty < 0)
+ if ((long)dirty < 0)
dirty = global_page_state(NR_UNSTABLE_NFS) +
global_page_state(NR_WRITEBACK);
if (dirty <= dirty_thresh)