From: Nick Piggin on
On Thu, May 27, 2010 at 01:51:09PM +0300, Artem Bityutskiy wrote:
> Nick, thanks for serialization suggestion.
>
> On Thu, 2010-05-27 at 17:22 +1000, Nick Piggin wrote:
> > Yeah, we definitely don't want to add global cacheline writes in the
> > common case. Also I don't know why you do the strange -1 value. I
> > couldn't seem to find where you defined bdi_arm_supers_timer();
>
> It is in mm/backing-dev.c:376 in today's Linus' tree. The -1 is used to

Yep I should have grepped. /hangs head


> indicate that 'sync_supers()' is in progress and avoid arming timer in
> that case. But yes, this is not really needed.

OK please remove it.


> > But why doesn't this work?
> >
> > sb->s_dirty = 1;
> > smp_mb(); /* corresponding MB is in test_and_clear_bit */
>
> AFAIU, test_and_clear_bit implies 2 barriers - one before the test and
> one after the clear. So I do not really understand why this smp_mb is needed.

You almost always need barriers executed on all sides of the
synchronisation protocol. Actually we need another one; I confused
myself with the test_and_clear at the end.

1. sb->s_dirty = 1; /* store */
2. if (!supers_timer_armed) /* load */
3. supers_timer_armed = 1; /* store */

and

A. supers_timer_armed = 0; /* store */
B. if (sb->s_dirty) /* load */
C. sb->s_dirty = 0 /* store */

If these two sequences are executed, it must result in
sb->s_dirty == 1 iff supers_timer_armed

* If 2 is executed before 1 is visible, then 2 may miss A before B sees 1.
* If B is executed before A is visible, then B may miss 1 before 2 sees A.

So we need smp_mb() between 1/2 and A/B (I missed the 2nd one).

Now we still have a problem. After the sync task rechecks
supers_timer_armed, the supers timer might execute before we mark
ourselves as sleeping, and so we have another lost wakeup. It needs
to be checked after set_current_state().

Let's try this again. I much prefer to name the variable something
that indicates whether there is more work to be done, or whether we
can sleep.

How about something like this?
--

Index: linux-2.6/mm/backing-dev.c
===================================================================
--- linux-2.6.orig/mm/backing-dev.c
+++ linux-2.6/mm/backing-dev.c
@@ -45,6 +45,7 @@ LIST_HEAD(bdi_pending_list);

static struct task_struct *sync_supers_tsk;
static struct timer_list sync_supers_timer;
+static unsigned long supers_dirty __read_mostly;

static int bdi_sync_supers(void *);
static void sync_supers_timer_fn(unsigned long);
@@ -251,7 +252,6 @@ static int __init default_bdi_init(void)

init_timer(&sync_supers_timer);
setup_timer(&sync_supers_timer, sync_supers_timer_fn, 0);
- bdi_arm_supers_timer();

err = bdi_init(&default_backing_dev_info);
if (!err)
@@ -362,17 +362,28 @@ static int bdi_sync_supers(void *unused)

while (!kthread_should_stop()) {
set_current_state(TASK_INTERRUPTIBLE);
- schedule();
+ if (!supers_dirty)
+ schedule();
+ else
+ __set_current_state(TASK_RUNNING);

+ supers_dirty = 0;
/*
- * Do this periodically, like kupdated() did before.
+ * supers_dirty store must be visible to mark_sb_dirty (below)
+ * before sync_supers runs (which loads sb->s_dirty).
*/
+ smp_mb();
sync_supers();
}

return 0;
}

+static void sync_supers_timer_fn(unsigned long unused)
+{
+ wake_up_process(sync_supers_tsk);
+}
+
void bdi_arm_supers_timer(void)
{
unsigned long next;
@@ -384,9 +395,17 @@ void bdi_arm_supers_timer(void)
mod_timer(&sync_supers_timer, round_jiffies_up(next));
}

-static void sync_supers_timer_fn(unsigned long unused)
+void mark_sb_dirty(struct super_block *sb)
{
- wake_up_process(sync_supers_tsk);
+ sb->s_dirty = 1;
+ /*
+ * sb->s_dirty store must be visible to sync_supers (above) before we
+ * load supers_dirty in case we need to re-arm the timer.
+ */
+ smp_mb();
+ if (likely(supers_dirty))
+ return;
+ supers_dirty = 1;
bdi_arm_supers_timer();
}

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Artem Bityutskiy on
On Thu, 2010-05-27 at 22:07 +1000, Nick Piggin wrote:
> 1. sb->s_dirty = 1; /* store */
> 2. if (!supers_timer_armed) /* load */
> 3. supers_timer_armed = 1; /* store */
>
> and
>
> A. supers_timer_armed = 0; /* store */
> B. if (sb->s_dirty) /* load */
> C. sb->s_dirty = 0 /* store */
>
> If these two sequences are executed, it must result in
> sb->s_dirty == 1 iff supers_timer_armed
>
> * If 2 is executed before 1 is visible, then 2 may miss A before B sees 1.
> * If B is executed before A is visible, then B may miss 1 before 2 sees A.
>
> So we need smp_mb() between 1/2 and A/B (I missed the 2nd one).

Yes, thanks for elaboration.

> How about something like this?

It looks good, many thanks! But I have a few small notes.

> Index: linux-2.6/mm/backing-dev.c
> ===================================================================
> --- linux-2.6.orig/mm/backing-dev.c
> +++ linux-2.6/mm/backing-dev.c
> @@ -45,6 +45,7 @@ LIST_HEAD(bdi_pending_list);
>
> static struct task_struct *sync_supers_tsk;
> static struct timer_list sync_supers_timer;
> +static unsigned long supers_dirty __read_mostly;
>
> static int bdi_sync_supers(void *);
> static void sync_supers_timer_fn(unsigned long);
> @@ -251,7 +252,6 @@ static int __init default_bdi_init(void)
>
> init_timer(&sync_supers_timer);
> setup_timer(&sync_supers_timer, sync_supers_timer_fn, 0);
> - bdi_arm_supers_timer();
>
> err = bdi_init(&default_backing_dev_info);
> if (!err)
> @@ -362,17 +362,28 @@ static int bdi_sync_supers(void *unused)
>
> while (!kthread_should_stop()) {
> set_current_state(TASK_INTERRUPTIBLE);
> - schedule();
> + if (!supers_dirty)
> + schedule();
> + else
> + __set_current_state(TASK_RUNNING);

I think this will change the behavior of 'sync_supers()' too much. ATM,
it makes only one SB pass, then sleeps, then another one, then sleeps.
And we should probably preserve this behavior. So I'd rather make it:

if (supers_dirty)
bdi_arm_supers_timer();
set_current_state(TASK_INTERRUPTIBLE);
schedule();

This way we will keep the behavior closer to the original.

> + supers_dirty = 0;
> /*
> - * Do this periodically, like kupdated() did before.
> + * supers_dirty store must be visible to mark_sb_dirty (below)
> + * before sync_supers runs (which loads sb->s_dirty).
> */

Very minor, but the code tends to change quickly, and this note (below)
will probably become out-of-date soon.

> + smp_mb();

There is spin_lock(&sb_lock) in sync_supers(), so strictly speaking this
'smp_mb()' is not needed if we move supers_dirty = 0 into
'sync_supers()' and add a comment that a mb is required, in case
someone modifies the code later?

Or this is not worth it?

> sync_supers();
> }
>
> return 0;
> }
>
> +static void sync_supers_timer_fn(unsigned long unused)
> +{
> + wake_up_process(sync_supers_tsk);
> +}
> +
> void bdi_arm_supers_timer(void)
> {
> unsigned long next;
> @@ -384,9 +395,17 @@ void bdi_arm_supers_timer(void)
> mod_timer(&sync_supers_timer, round_jiffies_up(next));
> }
>
> -static void sync_supers_timer_fn(unsigned long unused)
> +void mark_sb_dirty(struct super_block *sb)
> {
> - wake_up_process(sync_supers_tsk);
> + sb->s_dirty = 1;
> + /*
> + * sb->s_dirty store must be visible to sync_supers (above) before we
> + * load supers_dirty in case we need to re-arm the timer.
> + */
Similar for the "(above)" note.

> + smp_mb();
> + if (likely(supers_dirty))
> + return;
> + supers_dirty = 1;
> bdi_arm_supers_timer();
> }

Here is the patch with my modifications.

BTW, do you want me to keep you as the patch author, and add your
signed-off-by and my original commit message?

---
fs/super.c | 7 +++++++
mm/backing-dev.c | 21 ++++++++++++++++++---
2 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/fs/super.c b/fs/super.c
index 2b418fb..c9ff6e2 100644
--- a/fs/super.c
+++ b/fs/super.c
@@ -364,6 +364,13 @@ void sync_supers(void)
{
struct super_block *sb, *n;

+ supers_dirty = 0;
+ /* smp_mb();
+ *
+ * supers_dirty store must be visible to mark_sb_dirty before
+ * sync_supers runs (which loads sb->s_dirty), so a barrier is needed
+ * but there is a spin_lock, thus smp_mb is commented out.
+ */
spin_lock(&sb_lock);
list_for_each_entry_safe(sb, n, &super_blocks, s_list) {
if (list_empty(&sb->s_instances))
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 660a87a..be7f734 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -45,6 +45,7 @@ LIST_HEAD(bdi_pending_list);

static struct task_struct *sync_supers_tsk;
static struct timer_list sync_supers_timer;
+static unsigned long supers_dirty __read_mostly;

static int bdi_sync_supers(void *);
static void sync_supers_timer_fn(unsigned long);
@@ -251,7 +252,6 @@ static int __init default_bdi_init(void)

init_timer(&sync_supers_timer);
setup_timer(&sync_supers_timer, sync_supers_timer_fn, 0);
- bdi_arm_supers_timer();

err = bdi_init(&default_backing_dev_info);
if (!err)
@@ -361,6 +361,8 @@ static int bdi_sync_supers(void *unused)
set_user_nice(current, 0);

while (!kthread_should_stop()) {
+ if (supers_dirty)
+ bdi_arm_supers_timer();
set_current_state(TASK_INTERRUPTIBLE);
schedule();

@@ -373,6 +375,11 @@ static int bdi_sync_supers(void *unused)
return 0;
}

+static void sync_supers_timer_fn(unsigned long unused)
+{
+ wake_up_process(sync_supers_tsk);
+}
+
void bdi_arm_supers_timer(void)
{
unsigned long next;
@@ -384,9 +391,17 @@ void bdi_arm_supers_timer(void)
mod_timer(&sync_supers_timer, round_jiffies_up(next));
}

-static void sync_supers_timer_fn(unsigned long unused)
+void mark_sb_dirty(struct super_block *sb)
{
- wake_up_process(sync_supers_tsk);
+ sb->s_dirty = 1;
+ /*
+ * sb->s_dirty store must be visible to sync_supers before we load
+ * supers_dirty in case we need to re-arm the timer.
+ */
+ smp_mb();
+ if (likely(supers_dirty))
+ return;
+ supers_dirty = 1;
bdi_arm_supers_timer();
}

--

--
Best Regards,
Artem Bityutskiy (Артём Битюцкий)

From: Nick Piggin on
On Thu, May 27, 2010 at 06:21:33PM +0300, Artem Bityutskiy wrote:
> On Thu, 2010-05-27 at 22:07 +1000, Nick Piggin wrote:
> > 1. sb->s_dirty = 1; /* store */
> > 2. if (!supers_timer_armed) /* load */
> > 3. supers_timer_armed = 1; /* store */
> >
> > and
> >
> > A. supers_timer_armed = 0; /* store */
> > B. if (sb->s_dirty) /* load */
> > C. sb->s_dirty = 0 /* store */
> >
> > If these two sequences are executed, it must result in
> > sb->s_dirty == 1 iff supers_timer_armed
> >
> > * If 2 is executed before 1 is visible, then 2 may miss A before B sees 1.
> > * If B is executed before A is visible, then B may miss 1 before 2 sees A.
> >
> > So we need smp_mb() between 1/2 and A/B (I missed the 2nd one).
>
> Yes, thanks for elaboration.
>
> > How about something like this?
>
> It looks good, many thanks! But I have a few small notes.
>
> > Index: linux-2.6/mm/backing-dev.c
> > ===================================================================
> > --- linux-2.6.orig/mm/backing-dev.c
> > +++ linux-2.6/mm/backing-dev.c
> > @@ -45,6 +45,7 @@ LIST_HEAD(bdi_pending_list);
> >
> > static struct task_struct *sync_supers_tsk;
> > static struct timer_list sync_supers_timer;
> > +static unsigned long supers_dirty __read_mostly;
> >
> > static int bdi_sync_supers(void *);
> > static void sync_supers_timer_fn(unsigned long);
> > @@ -251,7 +252,6 @@ static int __init default_bdi_init(void)
> >
> > init_timer(&sync_supers_timer);
> > setup_timer(&sync_supers_timer, sync_supers_timer_fn, 0);
> > - bdi_arm_supers_timer();
> >
> > err = bdi_init(&default_backing_dev_info);
> > if (!err)
> > @@ -362,17 +362,28 @@ static int bdi_sync_supers(void *unused)
> >
> > while (!kthread_should_stop()) {
> > set_current_state(TASK_INTERRUPTIBLE);
> > - schedule();
> > + if (!supers_dirty)
> > + schedule();
> > + else
> > + __set_current_state(TASK_RUNNING);
>
> I think this will change the behavior of 'sync_supers()' too much. ATM,
> it makes only one SB pass, then sleeps, then another one, then sleeps.
> And we should probably preserve this behavior. So I'd rather make it:
>
> if (supers_dirty)
> bdi_arm_supers_timer();
> set_current_state(TASK_INTERRUPTIBLE);
> schedule();
>
> This way we will keep the behavior closer to the original.

Well, your previous code had the same issue (i.e. it could loop again
in sync_supers). But fair point, perhaps.

But we cannot do the above, because again the timer might go off
before we set current state. We'd lose the wakeup and never wake
up again.

Putting it after set_current_state() should be OK, I suppose.


> > + supers_dirty = 0;
> > /*
> > - * Do this periodically, like kupdated() did before.
> > + * supers_dirty store must be visible to mark_sb_dirty (below)
> > + * before sync_supers runs (which loads sb->s_dirty).
> > */
>
> Very minor, but the code tends to change quickly, and this note (below)
> will probably become out-of-date soon.

Oh sure, get rid of the "(below)"


> > + smp_mb();
>
> There is spin_lock(&sb_lock) in sync_supers(), so strictly speaking this
> 'smp_mb()' is not needed if we move supers_dirty = 0 into
> 'sync_supers()' and add a comment that a mb is required, in case
> someone modifies the code later?
>
> Or this is not worth it?

It's a bit tricky. spin_lock only gives an acquire barrier, which
prevents the CPU from executing instructions inside the critical section
before acquiring the lock. It still allows stores issued before the
lock to be deferred from becoming visible to other CPUs until inside
the critical section. So the load of sb->s_dirty could indeed still
happen before the store is seen.

Locks do allow you to avoid thinking about barriers, but *only* when
all memory accesses to all shared variables are inside the locks
(or when a section has just a single access, which by definition doesn't
need ordering with another access).


> > sync_supers();
> > }
> >
> > return 0;
> > }
> >
> > +static void sync_supers_timer_fn(unsigned long unused)
> > +{
> > + wake_up_process(sync_supers_tsk);
> > +}
> > +
> > void bdi_arm_supers_timer(void)
> > {
> > unsigned long next;
> > @@ -384,9 +395,17 @@ void bdi_arm_supers_timer(void)
> > mod_timer(&sync_supers_timer, round_jiffies_up(next));
> > }
> >
> > -static void sync_supers_timer_fn(unsigned long unused)
> > +void mark_sb_dirty(struct super_block *sb)
> > {
> > - wake_up_process(sync_supers_tsk);
> > + sb->s_dirty = 1;
> > + /*
> > + * sb->s_dirty store must be visible to sync_supers (above) before we
> > + * load supers_dirty in case we need to re-arm the timer.
> > + */
> Similar for the "(above)" note.

Sure.


> > + smp_mb();
> > + if (likely(supers_dirty))
> > + return;
> > + supers_dirty = 1;
> > bdi_arm_supers_timer();
> > }
>
> Here is the patch with my modifications.
>
> BTW, do you want me to keep you as the patch author, and add your
> signed-off-by and my original commit message?

I'm not concerned. You contributed more to the idea+implementation,
so record yourself as author.

From: Artem Bityutskiy on
On Fri, 2010-05-28 at 01:44 +1000, Nick Piggin wrote:
> > I think this will change the behavior of 'sync_supers()' too much. ATM,
> > it makes only one SB pass, then sleeps, then another one, then sleeps.
> > And we should probably preserve this behavior. So I'd rather make it:
> >
> > if (supers_dirty)
> > bdi_arm_supers_timer();
> > set_current_state(TASK_INTERRUPTIBLE);
> > schedule();
> >
> > This way we will keep the behavior closer to the original.
>
> Well your previous code had the same issue (ie. it could loop again
> in sync_supers). But fair point perhaps.

I think not; it either armed the timer or went to sleep, but it does not
matter much :-)

> But we cannot do the above, because again the timer might go off
> before we set current state. We'd lose the wakeup and never wake
> up again.
>
> Putting it after set_current_state() should be OK, I suppose.

Oh, right!

> > There is spin_lock(&sb_lock) in sync_supers(), so strictly speaking this
> > 'smp_mb()' is not needed if we move supers_dirty = 0 into
> > 'sync_supers()' and add a comment that a mb is required, in case
> > someone modifies the code later?
> >
> > Or this is not worth it?
>
> It's a bit tricky. spin_lock only gives an acquire barrier, which
> prevents CPU executing instructions inside the critical section
> before acquiring the lock. It actually allows stores to be deferred
> from becoming visible to other CPUs until inside the critical section.
> So the load of sb->s_dirty could indeed still happen before the
> store is seen.
>
> Locks do allow you to avoid thinking about barriers, but *only* when
> all memory accesses to all shared variables are inside the locks
> (or when a section has just a single access, which by definition don't
> need ordering with another access).

Oh, OK. I need to read Documentation/memory-barriers.txt carefully.

> > BTW, do you want me to keep you as the patch author, and add your
> > signed-off-by and my original commit message?
>
> I'm not concerned. You contributed more to the idea+implementation,
> so record yourself as author.

Ok, but thank you a lot for helping!

--
Best Regards,
Artem Bityutskiy (Артём Битюцкий)

From: Andrew Morton on
On Tue, 25 May 2010 16:49:12 +0300
Artem Bityutskiy <dedekind1@gmail.com> wrote:

> From: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
>
> The 'sync_supers' thread wakes up every 5 seconds (by default) and
> writes back all super blocks. It keeps waking up even if there
> are no dirty super-blocks. For many file-systems the superblock
> becomes dirty very rarely, if ever, so 'sync_supers' does not do
> anything most of the time.
>
> This patch improves 'sync_supers' and makes it sleep if all superblocks
> are clean and there is nothing to do. This helps save power.
> This optimization is important for small battery-powered devices.
>
> Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
> ---
> include/linux/fs.h | 5 +----
> mm/backing-dev.c | 36 +++++++++++++++++++++++++++++++++++-
> 2 files changed, 36 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index c2ddeee..2d2560b 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -1786,10 +1786,7 @@ extern void simple_set_mnt(struct vfsmount *mnt, struct super_block *sb);
> * Note, VFS does not provide any serialization for the super block clean/dirty
> * state changes, file-systems should take care of this.
> */
> -static inline void mark_sb_dirty(struct super_block *sb)
> -{
> - sb->s_dirty = 1;
> -}
> +void mark_sb_dirty(struct super_block *sb);
> static inline void mark_sb_clean(struct super_block *sb)
> {
> sb->s_dirty = 0;
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 660a87a..14f3eb7 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -45,6 +45,8 @@ LIST_HEAD(bdi_pending_list);
>
> static struct task_struct *sync_supers_tsk;
> static struct timer_list sync_supers_timer;
> +static int supers_timer_armed;

This thing's a bit ugly.

> +static DEFINE_SPINLOCK(supers_timer_lock);
>
> static int bdi_sync_supers(void *);
> static void sync_supers_timer_fn(unsigned long);
> @@ -355,6 +357,11 @@ static void bdi_flush_io(struct backing_dev_info *bdi)
> * or we risk deadlocking on ->s_umount. The longer term solution would be
> * to implement sync_supers_bdi() or similar and simply do it from the
> * bdi writeback tasks individually.
> + *
> + * Historically this thread woke up periodically, regardless of whether
> + * there was any dirty superblock. However, nowadays it is optimized to
> + * wake up only when there is something to synchronize - this helps to save
> + * power.
> */
> static int bdi_sync_supers(void *unused)
> {
> @@ -364,10 +371,24 @@ static int bdi_sync_supers(void *unused)
> set_current_state(TASK_INTERRUPTIBLE);
> schedule();
>
> + spin_lock(&supers_timer_lock);
> + /* Indicate that 'sync_supers' is in progress */
> + supers_timer_armed = -1;
> + spin_unlock(&supers_timer_lock);
> +
> /*
> * Do this periodically, like kupdated() did before.
> */
> sync_supers();
> +
> + spin_lock(&supers_timer_lock);
> + if (supers_timer_armed == 1)
> + /* A super block was marked as dirty meanwhile */
> + bdi_arm_supers_timer();
> + else
> + /* No more dirty superblocks - we've synced'em all */
> + supers_timer_armed = 0;
> + spin_unlock(&supers_timer_lock);
> }

I suspect the spinlock could be removed if you switched to bitops:
test_and_set_bit(0, &supers_timer_armed);

Another possibility is to nuke supers_timer_armed and use
timer_pending(), mod_timer(), etc. directly.


> return 0;
> @@ -387,9 +408,22 @@ void bdi_arm_supers_timer(void)
> static void sync_supers_timer_fn(unsigned long unused)
> {
> wake_up_process(sync_supers_tsk);
> - bdi_arm_supers_timer();
> }
>
> +void mark_sb_dirty(struct super_block *sb)
> +{
> + sb->s_dirty = 1;
> +
> + spin_lock(&supers_timer_lock);
> + if (!supers_timer_armed) {
> + bdi_arm_supers_timer();
> + supers_timer_armed = 1;
> + } else if (supers_timer_armed == -1)
> + supers_timer_armed = 1;
> + spin_unlock(&supers_timer_lock);
> +}
> +EXPORT_SYMBOL(mark_sb_dirty);

This looks inefficient. Could we not do

void mark_sb_dirty(struct super_block *sb)
{
	sb->s_dirty = 1;

	if (!supers_timer_armed) {
		spin_lock(&supers_timer_lock);
		if (!supers_timer_armed) {
			bdi_arm_supers_timer();
			supers_timer_armed = 1;
		}
		spin_unlock(&supers_timer_lock);
	} else if (supers_timer_armed == -1) {
		spin_lock(&supers_timer_lock);
		if (supers_timer_armed == -1)
			supers_timer_armed = 1;
		spin_unlock(&supers_timer_lock);
	}
}

I didn't try very hard there, but you get the idea: examine the state
before taking that expensive global spinlock, so we only end up taking
the lock once per five seconds, rather than once per possible
superblock dirtying. That's like a six-orders-of-magnitude reduction
in locking frequency, which is worth putting some effort into.
