From: Eric W. Biederman
Neil Horman <nhorman(a)tuxdriver.com> writes:

> On Wed, Mar 31, 2010 at 11:57:46AM -0700, Eric W. Biederman wrote:
>> Neil Horman <nhorman(a)tuxdriver.com> writes:
>>
>> > On Wed, Mar 31, 2010 at 11:54:30AM -0400, Vivek Goyal wrote:
>>
>> >> So this call amd_iommu_flush_all_devices() will be able to ensure that
>> >> devices don't do any more DMAs, and hence it is safe to reprogram iommu
>> >> mapping entries.
>> >>
>> > It blocks the CPU until any pending DMA operations are complete. Hmm, as I
>> > think about it, there is still a small possibility that a device like a NIC
>> > which has several buffers pre-dma-mapped could start a new DMA before we
>> > completely disable the iommu, although that possibility is small. I never saw
>> > it in my testing, and hitting it would be fairly difficult I think, since it's
>> > literally just a few hundred cycles between the flush and the actual hardware
>> > disable operation.
>> >
>> > According to this though:
>> > http://support.amd.com/us/Processor_TechDocs/34434-IOMMU-Rev_1.26_2-11-09.pdf
>> > that window could be closed fairly easily, by simply disabling read and write
>> > permissions for each device table entry prior to calling flush. If we do that
>> > and then flush the device table, any subsequently started DMA operation would
>> > just get noted in the error log, which we could ignore, since we're about to
>> > boot into the kdump kernel anyway.
>> >
>> > Would you like me to respin w/ that modification?
>>
>> Disabling permissions on all devices sounds good for the new virtualization
>> capable iommus. I think older iommus will still be challenged. I think
>> on x86 we have simply been able to avoid using those older iommus.
>>
>> I like the direction you are going but please let's put this in a
>> paranoid iommu enable routine.
>>
> You mean like initialize the device table so that all devices are default
> disabled on boot, and then selectively enable them (perhaps during a
> device_attach)? I can give that a spin.

That sounds good.
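
Something along these lines is what I have in mind -- a rough sketch only;
the bit names and table layout below are illustrative (from my reading of the
spec), not the actual identifiers in the amd_iommu driver:

/*
 * Sketch of a "paranoid" device table setup, assuming the spec's DTE
 * layout (V at bit 0, TV at bit 1, IR/IW at bits 61/62 of the first
 * 64-bit word) and a flat table of 256-bit entries.
 */
#define DTE_VALID        (1ULL << 0)   /* V:  entry is valid                */
#define DTE_TRANS_VALID  (1ULL << 1)   /* TV: translation information valid */
#define DTE_READ_ALLOW   (1ULL << 61)  /* IR: device may issue DMA reads    */
#define DTE_WRITE_ALLOW  (1ULL << 62)  /* IW: device may issue DMA writes   */

/* at iommu init: every device is valid but has no DMA permissions */
static void paranoid_init_dev_table(u64 *dev_table, unsigned int nr_entries)
{
        unsigned int i;

        for (i = 0; i < nr_entries; i++)
                dev_table[i * 4] = DTE_VALID | DTE_TRANS_VALID;
}

/* at device_attach time: grant permissions for just this device */
static void paranoid_enable_device(u64 *dev_table, u16 devid)
{
        dev_table[devid * 4] |= DTE_READ_ALLOW | DTE_WRITE_ALLOW;
        /* the real code would follow this with an INVALIDATE_DEVTAB_ENTRY */
}

Anything that tries to DMA before its driver attaches then just target-aborts
and shows up in the event log, which is the paranoid behavior we want.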

Eric
From: Neil Horman
On Wed, Mar 31, 2010 at 12:51:25PM -0700, Eric W. Biederman wrote:
> Neil Horman <nhorman(a)tuxdriver.com> writes:
>
> > On Wed, Mar 31, 2010 at 11:57:46AM -0700, Eric W. Biederman wrote:
> >> Neil Horman <nhorman(a)tuxdriver.com> writes:
> >>
> >> > On Wed, Mar 31, 2010 at 11:54:30AM -0400, Vivek Goyal wrote:
> >>
> >> >> So this call amd_iommu_flush_all_devices() will be able to ensure that
> >> >> devices don't do any more DMAs, and hence it is safe to reprogram iommu
> >> >> mapping entries.
> >> >>
> >> > It blocks the CPU until any pending DMA operations are complete. Hmm, as I
> >> > think about it, there is still a small possibility that a device like a NIC
> >> > which has several buffers pre-dma-mapped could start a new DMA before we
> >> > completely disable the iommu, although that possibility is small. I never saw
> >> > it in my testing, and hitting it would be fairly difficult I think, since it's
> >> > literally just a few hundred cycles between the flush and the actual hardware
> >> > disable operation.
> >> >
> >> > According to this though:
> >> > http://support.amd.com/us/Processor_TechDocs/34434-IOMMU-Rev_1.26_2-11-09.pdf
> >> > that window could be closed fairly easily, by simply disabling read and write
> >> > permissions for each device table entry prior to calling flush. If we do that
> >> > and then flush the device table, any subsequently started DMA operation would
> >> > just get noted in the error log, which we could ignore, since we're about to
> >> > boot into the kdump kernel anyway.
> >> >
> >> > Would you like me to respin w/ that modification?
> >>
> >> Disabling permissions on all devices sounds good for the new virtualization
> >> capable iommus. I think older iommus will still be challenged. I think
> >> on x86 we have simply been able to avoid using those older iommus.
> >>
> >> I like the direction you are going but please let's put this in a
> >> paranoid iommu enable routine.
> >>
> > You mean like initialize the device table so that all devices are default
> > disabled on boot, and then selectively enable them (perhaps during a
> > device_attach)? I can give that a spin.
>
> That sounds good.
>

So I'm officially rescinding this patch. It apparently just covered up the
problem rather than solving it outright. This is going to take some more
thought on my part. I read the code a bit closer, and the amd iommu code on
boot-up currently marks all its device table entries as valid and as having a
valid translation (because if they're marked as invalid they're passed through
untranslated, which strikes me as dangerous, since a dma address treated as a
bus address could lead to memory corruption). The saving grace is that they
are marked as non-readable and non-writeable, so any device doing a DMA after
the reinit should get logged (which it does) and then target aborted (which
should effectively squash the translation).

I'm starting to wonder if:

1) some DMAs are so long-lived that they start aliasing new DMAs that get
mapped in the kdump kernel, leading to various erroneous behavior

or

2) a slew of target aborts delivered to some hardware leaves it in an
inconsistent state

I'm going to try marking the dev table on shutdown such that all devices have no
read/write permissions, to see if that changes the situation. It should, I think,
give me a pointer as to whether (1) or (2) is the more likely problem.
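
Concretely, something like this on the crash/shutdown path (untested sketch;
the entry layout and the IR/IW bit positions are assumptions based on the
spec, and amd_iommu_flush_all_devices() is the existing helper discussed
above):

#define DTE_READ_ALLOW   (1ULL << 61)  /* IR bit, per the spec */
#define DTE_WRITE_ALLOW  (1ULL << 62)  /* IW bit, per the spec */

extern void amd_iommu_flush_all_devices(void);  /* existing flush helper */

/* revoke DMA permission for every device, then push it to the hardware */
static void revoke_all_dma_on_shutdown(u64 *dev_table, u16 last_devid)
{
        u32 devid;

        /* assume four 64-bit words per device table entry, IR/IW in word 0 */
        for (devid = 0; devid <= last_devid; devid++)
                dev_table[devid * 4] &= ~(DTE_READ_ALLOW | DTE_WRITE_ALLOW);

        amd_iommu_flush_all_devices();
}

If the problem persists even with every device's permissions revoked before
the kdump kernel starts, that points at (2) rather than (1).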

Lots more thinking to do....
Neil

> Eric
From: Chris Wright
* Neil Horman (nhorman(a)tuxdriver.com) wrote:
> Flush iommu during shutdown
>
> When using an iommu, it's possible, if a kdump kernel boot follows a primary
> kernel crash, that dma operations might still be in flight from the previous
> kernel during the kdump kernel boot. This can lead to memory corruption,
> crashes, and other erroneous behavior; specifically, I've seen it manifest
> during a kdump boot as endless iommu error log entries of the form:
> AMD-Vi: Event logged [IO_PAGE_FAULT device=00:14.1 domain=0x000d
> address=0x000000000245a0c0 flags=0x0070]

We've already fixed this problem once before, so some code shift must
have brought it back. Personally, I prefer to do this on the bringup
path rather than the teardown path. Besides keeping the teardown path as
simple as possible (the goal is to get to the kdump kernel ASAP), there's
also reason to completely flush on startup in general, in case the BIOS
has done anything unsavory.
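
Something as simple as this early in the init path would cover it (sketch
only; amd_iommu_flush_all_devices() is the helper mentioned earlier in the
thread, and I'm assuming a matching amd_iommu_flush_all_domains() style
helper exists for the cached translations):

#include <linux/init.h>

/* prototypes as assumed here; the real ones live in the amd_iommu headers */
extern void amd_iommu_flush_all_devices(void);
extern void amd_iommu_flush_all_domains(void);

/*
 * Bringup-side flush: before enabling translation, invalidate anything a
 * previous kernel (or the BIOS) may have left cached in the IOMMU --
 * stale device table entries and stale translations alike.
 */
static void __init amd_iommu_flush_everything_on_init(void)
{
        amd_iommu_flush_all_devices();
        amd_iommu_flush_all_domains();
}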

thanks,
-chris
From: Neil Horman
On Wed, Mar 31, 2010 at 02:25:35PM -0700, Chris Wright wrote:
> * Neil Horman (nhorman(a)tuxdriver.com) wrote:
> > Flush iommu during shutdown
> >
> > When using an iommu, it's possible, if a kdump kernel boot follows a primary
> > kernel crash, that dma operations might still be in flight from the previous
> > kernel during the kdump kernel boot. This can lead to memory corruption,
> > crashes, and other erroneous behavior; specifically, I've seen it manifest
> > during a kdump boot as endless iommu error log entries of the form:
> > AMD-Vi: Event logged [IO_PAGE_FAULT device=00:14.1 domain=0x000d
> > address=0x000000000245a0c0 flags=0x0070]
>
> > We've already fixed this problem once before, so some code shift must
> > have brought it back. Personally, I prefer to do this on the bringup
> > path rather than the teardown path. Besides keeping the teardown path as
> > simple as possible (the goal is to get to the kdump kernel ASAP), there's
> > also reason to completely flush on startup in general, in case the BIOS
> > has done anything unsavory.
>
Chris,
Can you elaborate on what you did with the iommu to make this safe? It
will save me time digging through the history of this code, and help me
better understand what's going on here.

I was starting to think that we should just leave the iommu on through a kdump,
and re-construct a new page table based on the old table (filtered by the error
log) on kdump boot, but it sounds like a better solution might be in place.

Thanks
Neil

> thanks,
> -chris
>
From: Chris Wright
* Neil Horman (nhorman(a)tuxdriver.com) wrote:
> On Wed, Mar 31, 2010 at 02:25:35PM -0700, Chris Wright wrote:
> > * Neil Horman (nhorman(a)tuxdriver.com) wrote:
> > > Flush iommu during shutdown
> > >
> > > When using an iommu, it's possible, if a kdump kernel boot follows a primary
> > > kernel crash, that dma operations might still be in flight from the previous
> > > kernel during the kdump kernel boot. This can lead to memory corruption,
> > > crashes, and other erroneous behavior; specifically, I've seen it manifest
> > > during a kdump boot as endless iommu error log entries of the form:
> > > AMD-Vi: Event logged [IO_PAGE_FAULT device=00:14.1 domain=0x000d
> > > address=0x000000000245a0c0 flags=0x0070]
> >
> > We've already fixed this problem once before, so some code shift must
> > have brought it back. Personally, I prefer to do this on the bringup
> > path rather than the teardown path. Besides keeping the teardown path as
> > simple as possible (the goal is to get to the kdump kernel ASAP), there's
> > also reason to completely flush on startup in general, in case the BIOS
> > has done anything unsavory.
> >
> Chris,
> Can you elaborate on what you did with the iommu to make this safe? It
> will save me time digging through the history of this code, and help me
> better understand what's going on here.
>
> I was starting to think that we should just leave the iommu on through a kdump,
> and re-construct a new page table based on the old table (filtered by the error
> log) on kdump boot, but it sounds like a better solution might be in place.

The code used to simply ensure a clean slate on startup by flushing the
relevant domain table entry and the cached translations as devices were
attached (this happens during kernel init, whether in the base kernel or the
kdump one).

See here:

42a49f965a8d24ed92af04f5b564d63f17fd9c56
a8c485bb6857811807d42f9fd1fde2f5f89cc5c9

What's changed is that the initialization doesn't appear to do the proper
flushes anymore. Your patch has the effect of putting them back, but
during shutdown rather than initialization.
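
The shape of what the attach path did was roughly this (from memory and
simplified -- the helper names below are placeholders, not the literal
functions from those commits):

#include <linux/types.h>

/*
 * Placeholder helpers standing in for the driver's real invalidation
 * routines (which queue INVALIDATE_DEVTAB_ENTRY / INVALIDATE_IOMMU_PAGES
 * commands); stubbed out here so the sketch is self-contained.
 */
static void flush_dev_table_entry(u16 devid) { /* queue DTE invalidate */ }
static void flush_domain_tlb(u16 domain_id)  { /* queue TLB invalidate */ }

/* on attach: drop anything stale a previous kernel may have left cached */
static void attach_device_flush(u16 devid, u16 domain_id)
{
        flush_dev_table_entry(devid);   /* stale device table entry      */
        flush_domain_tlb(domain_id);    /* stale cached translations     */
}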

thanks,
-chris