From: Stefan Lippers-Hollmann on
Hi

On Friday 23 April 2010, Greg KH wrote:
> On Thu, Apr 22, 2010 at 01:36:26PM +0200, Stefan Lippers-Hollmann wrote:
> > Hi
> >
> > On Thursday 22 April 2010, Neil Brown wrote:
> > > On Thu, 22 Apr 2010 04:08:30 +0200
> > > "Stefan Lippers-Hollmann" <s.L-H(a)gmx.de> wrote:
> > > > On Thursday 22 April 2010, gregkh(a)suse.de wrote:
> > [...]
> > > > > From 35f2a591192d0a5d9f7fc696869c76f0b8e49c3d Mon Sep 17 00:00:00 2001
> > > > > From: NeilBrown <neilb(a)suse.de>
> > > > > Date: Tue, 20 Apr 2010 14:13:34 +1000
> > > > > Subject: md/raid5: allow for more than 2^31 chunks.
> > > > >
> > > > > From: NeilBrown <neilb(a)suse.de>
> > > > >
> > > > > commit 35f2a591192d0a5d9f7fc696869c76f0b8e49c3d upstream.
> > > > >
> > > > > With many large drives and small chunk sizes it is possible
> > > > > to create a RAID5 with more than 2^31 chunks. Make sure this
> > > > > works.
> > > > >
> > > > > Reported-by: Brett King <king.br(a)gmail.com>
> > > > > Signed-off-by: NeilBrown <neilb(a)suse.de>
> > > > > Signed-off-by: Greg Kroah-Hartman <gregkh(a)suse.de>
> > > >
> > > > This patch, as part of the current 2.6.33 stable queue, breaks compiling
> > > > on i386 (CONFIG_LBDAF=y) for me (amd64 builds fine):
> > > >
> > > > [...]
> > > > BUILD arch/x86/boot/bzImage
> > > > Root device is (254, 6)
> > > > Setup is 12700 bytes (padded to 12800 bytes).
> > > > System is 2415 kB
> > > > CRC db6fa5fa
> > > > Kernel: arch/x86/boot/bzImage is ready (#1)
> > > > ERROR: "__umoddi3" [drivers/md/raid456.ko] undefined!
> > > >
> > > > reverting just this patch fixes the problem for me.
> > >
> > > Thanks for testing and reporting.
> > >
> > > If you could verify that this additional patch fixes the compile error I
> > > would really appreciate it.
> >
> > I can confirm that this patch on top of the original
> > md-raid5-allow-for-more-than-2-31-chunks.patch fixes the build problem on
> > i386 for me (amd64 continues to build fine as well).
> >
> > Tested-by: Stefan Lippers-Hollmann <s.l-h(a)gmx.de>
>
> Neil, care to push this to Linus?

Gitweb: http://git.kernel.org/linus/6e3b96ed610e5a1838e62ddae9fa0c3463f235fa
Commit: 6e3b96ed610e5a1838e62ddae9fa0c3463f235fa
Parent: 35f2a591192d0a5d9f7fc696869c76f0b8e49c3d
Author: NeilBrown <neilb(a)suse.de>
AuthorDate: Fri Apr 23 07:08:28 2010 +1000
Committer: NeilBrown <neilb(a)suse.de>
CommitDate: Fri Apr 23 07:08:28 2010 +1000

md/raid5: fix previous patch.

Previous patch changed stripe and chunk_number to sector_t but
mistakenly did not update all of the divisions to use sector_div().

This patch changes all those divisions (actually the '%' operator)
to sector_div().

Signed-off-by: NeilBrown <neilb(a)suse.de>
Cc: stable(a)kernel.org
Tested-by: Stefan Lippers-Hollmann <s.l-h(a)gmx.de>

Regards
Stefan Lippers-Hollmann
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Stefan Lippers-Hollmann on
Hi

On Thursday 22 April 2010, Neil Brown wrote:
> On Thu, 22 Apr 2010 04:08:30 +0200
> "Stefan Lippers-Hollmann" <s.L-H(a)gmx.de> wrote:
> > On Thursday 22 April 2010, gregkh(a)suse.de wrote:
[...]
> > > From 35f2a591192d0a5d9f7fc696869c76f0b8e49c3d Mon Sep 17 00:00:00 2001
> > > From: NeilBrown <neilb(a)suse.de>
> > > Date: Tue, 20 Apr 2010 14:13:34 +1000
> > > Subject: md/raid5: allow for more than 2^31 chunks.
> > >
> > > From: NeilBrown <neilb(a)suse.de>
> > >
> > > commit 35f2a591192d0a5d9f7fc696869c76f0b8e49c3d upstream.
> > >
> > > With many large drives and small chunk sizes it is possible
> > > to create a RAID5 with more than 2^31 chunks. Make sure this
> > > works.
> > >
> > > Reported-by: Brett King <king.br(a)gmail.com>
> > > Signed-off-by: NeilBrown <neilb(a)suse.de>
> > > Signed-off-by: Greg Kroah-Hartman <gregkh(a)suse.de>
> >
> > This patch, as part of the current 2.6.33 stable queue, breaks compiling
> > on i386 (CONFIG_LBDAF=y) for me (amd64 builds fine):
> >
> > [...]
> > BUILD arch/x86/boot/bzImage
> > Root device is (254, 6)
> > Setup is 12700 bytes (padded to 12800 bytes).
> > System is 2415 kB
> > CRC db6fa5fa
> > Kernel: arch/x86/boot/bzImage is ready (#1)
> > ERROR: "__umoddi3" [drivers/md/raid456.ko] undefined!
> >
> > reverting just this patch fixes the problem for me.
>
> Thanks for testing and reporting.
>
> If you could verify that this additional patch fixes the compile error I
> would really appreciate it.

I can confirm that this patch on top of the original
md-raid5-allow-for-more-than-2-31-chunks.patch fixes the build problem on
i386 for me (amd64 continues to build fine as well).

Tested-by: Stefan Lippers-Hollmann <s.l-h(a)gmx.de>

Thank you a lot
Stefan Lippers-Hollmann

--
> Thanks,
> NeilBrown
>
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 20e4840..58ea0ec 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -1650,7 +1650,7 @@ static sector_t raid5_compute_sector(raid5_conf_t *conf, sector_t r_sector,
> int previous, int *dd_idx,
> struct stripe_head *sh)
> {
> - sector_t stripe;
> + sector_t stripe, stripe2;
> sector_t chunk_number;
> unsigned int chunk_offset;
> int pd_idx, qd_idx;
> @@ -1677,7 +1677,7 @@ static sector_t raid5_compute_sector(raid5_conf_t *conf, sector_t r_sector,
> */
> stripe = chunk_number;
> *dd_idx = sector_div(stripe, data_disks);
> -
> + stripe2 = stripe;
> /*
> * Select the parity disk based on the user selected algorithm.
> */
> @@ -1689,21 +1689,21 @@ static sector_t raid5_compute_sector(raid5_conf_t *conf, sector_t r_sector,
> case 5:
> switch (algorithm) {
> case ALGORITHM_LEFT_ASYMMETRIC:
> - pd_idx = data_disks - stripe % raid_disks;
> + pd_idx = data_disks - sector_div(stripe2, raid_disks);
> if (*dd_idx >= pd_idx)
> (*dd_idx)++;
> break;
> case ALGORITHM_RIGHT_ASYMMETRIC:
> - pd_idx = stripe % raid_disks;
> + pd_idx = sector_div(stripe2, raid_disks);
> if (*dd_idx >= pd_idx)
> (*dd_idx)++;
> break;
> case ALGORITHM_LEFT_SYMMETRIC:
> - pd_idx = data_disks - stripe % raid_disks;
> + pd_idx = data_disks - sector_div(stripe2, raid_disks);
> *dd_idx = (pd_idx + 1 + *dd_idx) % raid_disks;
> break;
> case ALGORITHM_RIGHT_SYMMETRIC:
> - pd_idx = stripe % raid_disks;
> + pd_idx = sector_div(stripe2, raid_disks);
> *dd_idx = (pd_idx + 1 + *dd_idx) % raid_disks;
> break;
> case ALGORITHM_PARITY_0:
> @@ -1723,7 +1723,7 @@ static sector_t raid5_compute_sector(raid5_conf_t *conf, sector_t r_sector,
>
> switch (algorithm) {
> case ALGORITHM_LEFT_ASYMMETRIC:
> - pd_idx = raid_disks - 1 - (stripe % raid_disks);
> + pd_idx = raid_disks - 1 - sector_div(stripe2, raid_disks);
> qd_idx = pd_idx + 1;
> if (pd_idx == raid_disks-1) {
> (*dd_idx)++; /* Q D D D P */
> @@ -1732,7 +1732,7 @@ static sector_t raid5_compute_sector(raid5_conf_t *conf, sector_t r_sector,
> (*dd_idx) += 2; /* D D P Q D */
> break;
> case ALGORITHM_RIGHT_ASYMMETRIC:
> - pd_idx = stripe % raid_disks;
> + pd_idx = sector_div(stripe2, raid_disks);
> qd_idx = pd_idx + 1;
> if (pd_idx == raid_disks-1) {
> (*dd_idx)++; /* Q D D D P */
> @@ -1741,12 +1741,12 @@ static sector_t raid5_compute_sector(raid5_conf_t *conf, sector_t r_sector,
> (*dd_idx) += 2; /* D D P Q D */
> break;
> case ALGORITHM_LEFT_SYMMETRIC:
> - pd_idx = raid_disks - 1 - (stripe % raid_disks);
> + pd_idx = raid_disks - 1 - sector_div(stripe2, raid_disks);
> qd_idx = (pd_idx + 1) % raid_disks;
> *dd_idx = (pd_idx + 2 + *dd_idx) % raid_disks;
> break;
> case ALGORITHM_RIGHT_SYMMETRIC:
> - pd_idx = stripe % raid_disks;
> + pd_idx = sector_div(stripe2, raid_disks);
> qd_idx = (pd_idx + 1) % raid_disks;
> *dd_idx = (pd_idx + 2 + *dd_idx) % raid_disks;
> break;
> @@ -1765,7 +1765,7 @@ static sector_t raid5_compute_sector(raid5_conf_t *conf, sector_t r_sector,
> /* Exactly the same as RIGHT_ASYMMETRIC, but or
> * of blocks for computing Q is different.
> */
> - pd_idx = stripe % raid_disks;
> + pd_idx = sector_div(stripe2, raid_disks);
> qd_idx = pd_idx + 1;
> if (pd_idx == raid_disks-1) {
> (*dd_idx)++; /* Q D D D P */
> @@ -1780,7 +1780,8 @@ static sector_t raid5_compute_sector(raid5_conf_t *conf, sector_t r_sector,
> * D D D P Q rather than
> * Q D D D P
> */
> - pd_idx = raid_disks - 1 - ((stripe + 1) % raid_disks);
> + stripe2 += 1;
> + pd_idx = raid_disks - 1 - sector_div(stripe2, raid_disks);
> qd_idx = pd_idx + 1;
> if (pd_idx == raid_disks-1) {
> (*dd_idx)++; /* Q D D D P */
> @@ -1792,7 +1793,7 @@ static sector_t raid5_compute_sector(raid5_conf_t *conf, sector_t r_sector,
>
> case ALGORITHM_ROTATING_N_CONTINUE:
> /* Same as left_symmetric but Q is before P */
> - pd_idx = raid_disks - 1 - (stripe % raid_disks);
> + pd_idx = raid_disks - 1 - sector_div(stripe2, raid_disks);
> qd_idx = (pd_idx + raid_disks - 1) % raid_disks;
> *dd_idx = (pd_idx + 1 + *dd_idx) % raid_disks;
> ddf_layout = 1;
> @@ -1800,27 +1801,27 @@ static sector_t raid5_compute_sector(raid5_conf_t *conf, sector_t r_sector,
>
> case ALGORITHM_LEFT_ASYMMETRIC_6:
> /* RAID5 left_asymmetric, with Q on last device */
> - pd_idx = data_disks - stripe % (raid_disks-1);
> + pd_idx = data_disks - sector_div(stripe2, raid_disks-1);
> if (*dd_idx >= pd_idx)
> (*dd_idx)++;
> qd_idx = raid_disks - 1;
> break;
>
> case ALGORITHM_RIGHT_ASYMMETRIC_6:
> - pd_idx = stripe % (raid_disks-1);
> + pd_idx = sector_div(stripe2, raid_disks-1);
> if (*dd_idx >= pd_idx)
> (*dd_idx)++;
> qd_idx = raid_disks - 1;
> break;
>
> case ALGORITHM_LEFT_SYMMETRIC_6:
> - pd_idx = data_disks - stripe % (raid_disks-1);
> + pd_idx = data_disks - sector_div(stripe2, raid_disks-1);
> *dd_idx = (pd_idx + 1 + *dd_idx) % (raid_disks-1);
> qd_idx = raid_disks - 1;
> break;
>
> case ALGORITHM_RIGHT_SYMMETRIC_6:
> - pd_idx = stripe % (raid_disks-1);
> + pd_idx = sector_div(stripe2, raid_disks-1);
> *dd_idx = (pd_idx + 1 + *dd_idx) % (raid_disks-1);
> qd_idx = raid_disks - 1;
> break;
>
>
>

From: Stefan Lippers-Hollmann on
Hi

On Thursday 22 April 2010, gregkh(a)suse.de wrote:
> This is a note to let you know that we have just queued up the patch titled
>
> Subject: md/raid5: allow for more than 2^31 chunks.
>
> to the 2.6.33-stable tree. Its filename is
>
> md-raid5-allow-for-more-than-2-31-chunks.patch
>
> A git repo of this tree can be found at
> http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
>
>
> From 35f2a591192d0a5d9f7fc696869c76f0b8e49c3d Mon Sep 17 00:00:00 2001
> From: NeilBrown <neilb(a)suse.de>
> Date: Tue, 20 Apr 2010 14:13:34 +1000
> Subject: md/raid5: allow for more than 2^31 chunks.
>
> From: NeilBrown <neilb(a)suse.de>
>
> commit 35f2a591192d0a5d9f7fc696869c76f0b8e49c3d upstream.
>
> With many large drives and small chunk sizes it is possible
> to create a RAID5 with more than 2^31 chunks. Make sure this
> works.
>
> Reported-by: Brett King <king.br(a)gmail.com>
> Signed-off-by: NeilBrown <neilb(a)suse.de>
> Signed-off-by: Greg Kroah-Hartman <gregkh(a)suse.de>

This patch, as part of the current 2.6.33 stable queue, breaks the build
on i386 (CONFIG_LBDAF=y) for me (amd64 builds fine):

[...]
BUILD arch/x86/boot/bzImage
Root device is (254, 6)
Setup is 12700 bytes (padded to 12800 bytes).
System is 2415 kB
CRC db6fa5fa
Kernel: arch/x86/boot/bzImage is ready (#1)
ERROR: "__umoddi3" [drivers/md/raid456.ko] undefined!

Reverting just this patch fixes the problem for me.

Current Debian/unstable:
ii binutils 2.20.1-7 The GNU assembler, linker and binary utilities
ii gcc-4.4 4.4.3-9 The GNU C compiler
ii make 3.81-8 An utility for Directing compilation.

Regards
Stefan Lippers-Hollmann

--
> ---
> drivers/md/raid5.c | 19 +++++++------------
> 1 file changed, 7 insertions(+), 12 deletions(-)
>
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -1649,8 +1649,8 @@ static sector_t raid5_compute_sector(rai
> int previous, int *dd_idx,
> struct stripe_head *sh)
> {
> - long stripe;
> - unsigned long chunk_number;
> + sector_t stripe;
> + sector_t chunk_number;
> unsigned int chunk_offset;
> int pd_idx, qd_idx;
> int ddf_layout = 0;
> @@ -1670,17 +1670,12 @@ static sector_t raid5_compute_sector(rai
> */
> chunk_offset = sector_div(r_sector, sectors_per_chunk);
> chunk_number = r_sector;
> - BUG_ON(r_sector != chunk_number);
>
> /*
> * Compute the stripe number
> */
> - stripe = chunk_number / data_disks;
> -
> - /*
> - * Compute the data disk and parity disk indexes inside the stripe
> - */
> - *dd_idx = chunk_number % data_disks;
> + stripe = chunk_number;
> + *dd_idx = sector_div(stripe, data_disks);
>
> /*
> * Select the parity disk based on the user selected algorithm.
> @@ -1869,14 +1864,14 @@ static sector_t compute_blocknr(struct s
> : conf->algorithm;
> sector_t stripe;
> int chunk_offset;
> - int chunk_number, dummy1, dd_idx = i;
> + sector_t chunk_number;
> + int dummy1, dd_idx = i;
> sector_t r_sector;
> struct stripe_head sh2;
>
>
> chunk_offset = sector_div(new_sector, sectors_per_chunk);
> stripe = new_sector;
> - BUG_ON(new_sector != stripe);
>
> if (i == sh->pd_idx)
> return 0;
> @@ -1969,7 +1964,7 @@ static sector_t compute_blocknr(struct s
> }
>
> chunk_number = stripe * data_disks + i;
> - r_sector = (sector_t)chunk_number * sectors_per_chunk + chunk_offset;
> + r_sector = chunk_number * sectors_per_chunk + chunk_offset;
>
> check = raid5_compute_sector(conf, r_sector,
> previous, &dummy1, &sh2);