From: Neil Brown on
On Thu, 28 Jan 2010 10:24:43 +0100
"Ing. Daniel Rozsnyó" <daniel(a)rozsnyo.com> wrote:

> Neil Brown wrote:
> > On Mon, 25 Jan 2010 19:27:53 +0100
> > Milan Broz <mbroz(a)redhat.com> wrote:
> >
> >> On 01/25/2010 04:25 PM, Marti Raudsepp wrote:
> >>> 2010/1/24 "Ing. Daniel Rozsnyó" <daniel(a)rozsnyo.com>:
> >>>> Hello,
> >>>> I am having troubles with nested RAID - when one array is added to the
> >>>> other, the "bio too big device md0" messages are appearing:
> >>>>
> >>>> bio too big device md0 (144 > 8)
> >>>> bio too big device md0 (248 > 8)
> >>>> bio too big device md0 (32 > 8)
> >>> I *think* this is the same bug that I hit years ago when mixing
> >>> different disks and 'pvmove'
> >>>
> >>> It's a design flaw in the DM/MD frameworks; see comment #3 from Milan Broz:
> >>> http://bugzilla.kernel.org/show_bug.cgi?id=9401#c3
> >> Hm. I don't think it is the same problem, you are only adding device to md array...
> >> (adding cc: Neil, this seems to me like MD bug).
> >>
> >> (original report for reference is here http://lkml.org/lkml/2010/1/24/60 )
> >
> > No, I think it is the same problem.
> >
> > When you have a stack of devices, the top level client needs to know the
> > maximum restrictions imposed by lower level devices to ensure it doesn't
> > violate them.
> > However there is no mechanism for a device to report that its restrictions
> > have changed.
> > So when md0 gains a linear leg and so needs to reduce the max size for
> > requests, there is no way to tell DM, so DM doesn't know. And as the
> > filesystem only asks DM for restrictions, it never finds out about the
> > new restrictions.
>
> Neil, why does it even reduce its block size? I've tried with both
> "linear" and "raid0" (as they are the only way to get 2T from 4x500G)
> and both behave the same (sda has 512, md0 127, linear 127 and raid0 has
> 512 kb block size).
>
> I do not see the mechanism by which 512:127 or 512:512 leads to a 4 kb limit

Both raid0 and linear register a 'bvec_mergeable' function (or whatever it is
called today).
This allows for the fact that these devices have restrictions that cannot be
expressed simply with request sizes. In particular they only handle requests
that don't cross a chunk boundary.

As raid1 never calls the bvec_mergeable function of its components (it would
be very hard to get that to work reliably, maybe impossible), it treats any
device with a bvec_mergeable function as though the max_sectors were one page.
This is because the interface guarantees that a one page request will always
be handled.

>
> Is it because:
> - of rebuilding the array?
> - of non-multiplicative max block size
> - of non-multiplicative total device size
> - of nesting?
> - of some other fallback to 1 page?

The last I guess.

>
> I ask because I can not believe that a pre-assembled nested stack would
> result in 4kb max limit. But I haven't tried yet (e.g. from a live cd).

When people say "I can not believe" I always chuckle to myself. You just
aren't trying hard enough. There is adequate evidence that people can
believe whatever they want to believe :-)

>
> The block device should not do this kind of "magic", unless the higher
> layers support it. Which one has proper support then?
> - standard partition table?
> - LVM?
> - filesystem drivers?
>

I don't understand this question, sorry.

Yes, there is definitely something broken here. Unfortunately fixing it is
non-trivial.

NeilBrown
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Boaz Harrosh on
On 01/28/2010 12:50 PM, Neil Brown wrote:
>
> Both raid0 and linear register a 'bvec_mergeable' function (or whatever it is
> called today).
> This allows for the fact that these devices have restrictions that cannot be
> expressed simply with request sizes. In particular they only handle requests
> that don't cross a chunk boundary.
>
> As raid1 never calls the bvec_mergeable function of its components (it would
> be very hard to get that to work reliably, maybe impossible), it treats any
> device with a bvec_mergeable function as though the max_sectors were one page.
> This is because the interface guarantees that a one page request will always
> be handled.
>

I'm also guilty of doing some mirror work, in exofs, over osd objects.

I was thinking about that reliability problem with mirrors, also related
to that infamous problem of copying the mirrored buffers so they do not
change while writing at the page cache level.

So what if we don't fight it? What if we just keep a journal of the mirror's
unbalanced state and do not set PageUptodate until the mirror is finally
balanced? Only then can pages be dropped from the cache and the journal cleared.

(Balanced-mirror-page is when a page has participated in an IO to all devices
without being marked dirty from the get-go to the completion of IO)

I think Trond's last work adding that un_updated-but-committed state to
pages could facilitate doing that, though I do understand that it is a major
conceptual change to the VFS-BLOCKS relationship in letting the block devices
participate in the page state machine (and md keeping a journal). Sigh.

??
Boaz
From: Neil Brown on
On Thu, 28 Jan 2010 14:07:31 +0200
Boaz Harrosh <bharrosh(a)panasas.com> wrote:

> On 01/28/2010 12:50 PM, Neil Brown wrote:
> >
> > Both raid0 and linear register a 'bvec_mergeable' function (or whatever it is
> > called today).
> > This allows for the fact that these devices have restrictions that cannot be
> > expressed simply with request sizes. In particular they only handle requests
> > that don't cross a chunk boundary.
> >
> > As raid1 never calls the bvec_mergeable function of its components (it would
> > be very hard to get that to work reliably, maybe impossible), it treats any
> > device with a bvec_mergeable function as though the max_sectors were one page.
> > This is because the interface guarantees that a one page request will always
> > be handled.
> >
>
> I'm also guilty of doing some mirror work, in exofs, over osd objects.
>
> I was thinking about that reliability problem with mirrors, also related
> to that infamous problem of copying the mirrored buffers so they do not
> change while writing at the page cache level.

So this is a totally new topic, right?

>
> So what if we don't fight it? What if we just keep a journal of the mirror's
> unbalanced state and do not set PageUptodate until the mirror is finally
> balanced? Only then can pages be dropped from the cache and the journal cleared.

I cannot see what you are suggesting, but it seems like a layering violation.
The block device level cannot see anything about whether the page is up to
date or not. The page it has may not even be in the page cache.

The only thing that the block device can do is make a copy of the page and
write that out twice.

If we could have a flag which the filesystem can send to say "I promise not
to change this page until the IO completes", then that copy could be
optimised away in lots of common cases.


>
> (Balanced-mirror-page is when a page has participated in an IO to all devices
> without being marked dirty from the get-go to the completion of IO)
>

Block device cannot see the 'dirty' flag.


> I think Trond's last work with adding that un_updated-but-committed state to
> pages can facilitate in doing that, though I do understand that it is a major
> conceptual change to the VFS-BLOCKS relationship in letting the block devices
> participate in the pages state machine (And md keeping a journal). Sigh
>
> ??
> Boaz

NeilBrown
From: Boaz Harrosh on
On 01/29/2010 12:14 AM, Neil Brown wrote:
> On Thu, 28 Jan 2010 14:07:31 +0200
> Boaz Harrosh <bharrosh(a)panasas.com> wrote:
>
>> On 01/28/2010 12:50 PM, Neil Brown wrote:
>>>

I'm totally theoretical on this. So feel free to ignore me, if it gets
boring.

>>> Both raid0 and linear register a 'bvec_mergeable' function (or whatever it is
>>> called today).
>>> This allows for the fact that these devices have restrictions that cannot be
>>> expressed simply with request sizes. In particular they only handle requests
>>> that don't cross a chunk boundary.
>>>
>>> As raid1 never calls the bvec_mergeable function of its components (it would
>>> be very hard to get that to work reliably, maybe impossible), it treats any
>>> device with a bvec_mergeable function as though the max_sectors were one page.
>>> This is because the interface guarantees that a one page request will always
>>> be handled.
>>>
>>
>> I'm also guilty of doing some mirror work, in exofs, over osd objects.
>>
>> I was thinking about that reliability problem with mirrors, also related
>> to that infamous problem of copying the mirrored buffers so they do not
>> change while writing at the page cache level.
>
> So this is a totally new topic, right?
>

Not new. I'm talking about that lack of a guarantee that a page will not change
while in flight, as you mention below, which is why we need to copy the
to-be-mirrored page.

>>
>> So what if we don't fight it? What if we just keep a journal of the mirror's
>> unbalanced state and do not set PageUptodate until the mirror is finally
>> balanced? Only then can pages be dropped from the cache and the journal cleared.
>
> I cannot see what you are suggesting, but it seems like a layering violation.
> The block device level cannot see anything about whether the page is up to
> date or not. The page it has may not even be in the page cache.
>

It is certainly a layering violation today, but theoretically speaking, it does
not have to be. An abstract API could be made so block devices notify when a
page's IO is done; at that point the VFS can decide whether it must resubmit,
because the page changed during the IO, or whether the IO is actually valid.

> The only thing that the block device can do is make a copy of the page and
> write that out twice.
>

That is the copy I was referring to.

> If we could have a flag which the filesystem can send to say "I promise not
> to change this page until the IO completes", then that copy could be
> optimised away in lots of common cases.
>

What I meant is: what if we only have that knowledge at the end of the IO, so we
can decide at that point whether the page is up-to-date and allowed to be
evicted from the cache? It is the same as if we had a crash/power-failure during
the IO: surely the mirrors are not balanced, and each device's file content
cannot be determined; some of the last-written buffers are old, some new, and
some undefined. It is the role of the filesystem to keep a journal and decide
what data can be guaranteed and what data must be reverted to a last known good
state. Now what I'm wondering is: what if we prolong this window until we know
the mirrors match? The window for disaster is wider, but that should never
matter in normal use. Most setups could tolerate the worse odds, and could use
the extra bandwidth.

>
>>
>> (Balanced-mirror-page is when a page has participated in an IO to all devices
>> without being marked dirty from the get-go to the completion of IO)
>>
>
> Block device cannot see the 'dirty' flag.
>

Right, but is there some additional information a block device should communicate
to the FS so it can make a decision?

>
>> I think Trond's last work with adding that un_updated-but-committed state to
>> pages can facilitate in doing that, though I do understand that it is a major
>> conceptual change to the VFS-BLOCKS relationship in letting the block devices
>> participate in the pages state machine (And md keeping a journal). Sigh
>>
>> ??
>> Boaz
>
> NeilBrown

Thanks
Boaz