From: Peter Zijlstra
On Fri, 2010-05-21 at 19:38 -0700, Salman wrote:
> If one or more readers are holding the lock, and one or more writers
> are contending for it, then do not admit any new readers. However,
> if a writer is holding a lock, then let readers contend for it at
> equal footing with the writers.
>
> This fixes a pathological case (see the code below), where the
> tasklist_lock is continuously held by the readers, and the writers starve.
>
> The change does not introduce any unexpected test failures in the locking
> self-test. Furthermore, it makes the original problem go away. In
> particular, after the change, the following code can run without
> causing a lockup:

So how does this work with recursion?

rwlock_t is assumed recursive and quite a lot of code relies on that.

	CPU0				CPU1

	read_lock(&A)
					write_lock_irq(&A)
	<IRQ>
	read_lock(&A) <-- deadlock because there's a pending writer


Also, I really think having config options for lock behaviour is utter
suckage; either a new implementation is better or it's not.

If you want your waitpid() case to work better, try converting its
tasklist_lock usage to RCU, or try and break the lock into smaller
locks.

NAK on both your patch and your approach; rwlock_t should be killed off,
not 'improved'.
