From: Trond Myklebust
On Tue, 2010-05-25 at 10:10 -0400, William A. (Andy) Adamson wrote:
> 2010/5/25 Lukas Hejtmanek <xhejtman(a)ics.muni.cz>:
> > On Tue, May 25, 2010 at 09:45:32AM -0400, William A. (Andy) Adamson wrote:
> >> Not get into the problem in the first place: this means
> >>
> >> 1) determine a 'lead time' within which the NFS client declares a
> >> context expired, even though it really has 'lead time' remaining
> >> until it actually expires.
> >>
> >> 2) flush all writes on any context that will expire within the lead
> >> time, which needs to be long enough for the flushes to complete.
> >> [ a sketch of this check follows below ]
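
[ For illustration, a minimal userspace sketch of the lead-time test
described above. LEAD_TIME_SECS, struct gss_ctx_info and both function
names are invented for this sketch; this is not the actual NFS client
code. ]

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define LEAD_TIME_SECS 30  /* assumed value; must exceed worst-case flush time */

struct gss_ctx_info {
	time_t expires;    /* absolute expiry time of the Kerberos context */
};

/* Step 1: declare the context expired LEAD_TIME_SECS before it really is. */
static bool ctx_effectively_expired(const struct gss_ctx_info *ctx, time_t now)
{
	return now >= ctx->expires - LEAD_TIME_SECS;
}

/* Step 2: a context inside the lead window but not yet truly expired
 * still has usable credentials, so its dirty writes should be flushed now. */
static bool ctx_should_flush(const struct gss_ctx_info *ctx, time_t now)
{
	return ctx_effectively_expired(ctx, now) && now < ctx->expires;
}

int main(void)
{
	struct gss_ctx_info ctx = { .expires = time(NULL) + 20 };
	time_t now = time(NULL);

	printf("effectively expired: %d, should flush: %d\n",
	       ctx_effectively_expired(&ctx, now),
	       ctx_should_flush(&ctx, now));
	return 0;
}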
> >
> > I think you cannot give any guarantees that the flush happens on time. There
> > can be server overload, network overload, anything and you are out of luck.
>
> True - but this will be the case no matter what scheme is in place.
> The above is to handle the normal working situation. When this fails
> due to network problems, server overload, or a server reboot, i.e. an
> abnormal situation, then use the machine credential.

Use of the machine credential also requires help from the rpc.gssd
daemon. It's not a solution to the deadlock Lukas is describing.

Trond

From: Sunil Mushran
On 05/25/2010 05:28 AM, Trond Myklebust wrote:
>>> I encountered the following problem. We use a short expiration time for
>>> Kerberos contexts created by rpc.gssd (some patches were included in
>>> mainline nfs-utils). In particular, we use a 120-second expiration time.
>>>
>>> Now, I run an application that eats 80% of available RAM. Then I run 10
>>> parallel dd processes that write data to an NFS4 volume with sec=krb5.
>>>
>>> As soon as the Kerberos context expires (i.e., after up to 120 secs), the
>>> whole system gets stuck in do_page_fault and successive functions. This is
>>> because there is no free memory in the kernel: all free memory is used as
>>> cache for NFS4 (due to the dd traffic), the kernel asks NFS to write back
>>> its pages, but NFS cannot do anything as it is missing a valid context. NFS
>>> contacts rpc.gssd to provide a renewed context, but rpc.gssd cannot provide
>>> one because it needs some memory to scan /tmp for a ticket. I.e., it
>>> deadlocks.
>>>
>>> A longer context expiration time is no real solution, as it only makes the
>>> deadlock less frequent.
>>>
>>> Any ideas what can be done here? (Please cc me.) We could preallocate
>>> some memory in rpc.gssd and use mlockall, but I am not sure whether this
>>> also protects kernel-side allocations related to rpc.gssd and context
>>> creation (new file descriptors and so on). [ a sketch of this approach
>>> follows below ]
>>>
>>> This is seen on a 2.6.32 kernel, but most probably it affects all kernel
>>> versions.
>>>
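
[ A sketch of the mlockall()+preallocation idea floated above, as a
plain C program; ARENA_SIZE is an assumed figure. Two caveats: mlockall()
usually requires CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK, and it pins
only the daemon's userspace pages; it does not cover kernel-side
allocations made on the daemon's behalf, which is exactly the doubt
raised above. ]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define ARENA_SIZE (1 << 20)	/* 1 MiB working arena; size is an assumption */

int main(void)
{
	char *arena;

	/* Pin all current and future pages of this process in RAM. */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
		perror("mlockall");
		return 1;
	}

	/* Preallocate and touch the arena so every page is resident
	 * before memory gets tight. */
	arena = malloc(ARENA_SIZE);
	if (arena == NULL) {
		perror("malloc");
		return 1;
	}
	memset(arena, 0, ARENA_SIZE);

	/* ... the daemon's context-refresh work (e.g. scanning /tmp for a
	 * ticket) would run here, drawing on the locked arena rather than
	 * making fresh allocations under memory pressure ... */

	free(arena);
	return 0;
}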
>> Seems like a pretty fundamental problem in nfs :-(. Limiting the writeback
>> cache for nfs, so that the system has enough memory to perform rpc calls
>> with the rest, might do the trick, but...
>>
> It's the same problem that you have for any file or storage system that
> has initiators in userland. On the storage side, iSCSI in particular has
> the same problem. On the filesystem side, CIFS, AFS, coda, .... do too.
> The clustered filesystems can deadlock if the node that is running the
> DLM runs out of memory...
>

Not so trivially. In ocfs2, the dlm allocates small blocks with GFP_NOFS.
Furthermore, the time-sensitive recovery thread preallocates what buffers
it can at create time. That does not mean it is unaffected by memory
pressure; it is. But the pressure shows up as slower response, not as a
deadlock. [ a minimal GFP_NOFS sketch follows ]
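
[ A minimal kernel-module sketch of the GFP_NOFS pattern described
above; the module is illustrative only and is not the actual ocfs2/dlm
code. ]

#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

MODULE_LICENSE("GPL");

static void *buf;

static int __init nofs_demo_init(void)
{
	/* GFP_NOFS: the allocator may sleep and reclaim memory, but it
	 * will not re-enter filesystem code (e.g. writeback) to do so,
	 * which breaks the allocate -> writeback -> allocate cycle that
	 * this thread is about. */
	buf = kzalloc(4096, GFP_NOFS);
	if (!buf)
		return -ENOMEM;
	pr_info("nofs_demo: GFP_NOFS allocation succeeded\n");
	return 0;
}

static void __exit nofs_demo_exit(void)
{
	kfree(buf);
}

module_init(nofs_demo_init);
module_exit(nofs_demo_exit);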
