From: Oleg Nesterov
Veaceslav doesn't have the time to continue, but he gave me
access to the RHTS machine ;)

The kernel is 2.6.31.6 btw.

On 11/26, Veaceslav Falico wrote:
>
> > Just noticed the test-case fails in handler_fail(). Most probably
> > this means it is killed by SIGALRM because either parent or child
> > hang in wait(). Perhaps we have another (ppc specific?) bug, but
> > currently I do not understand how this is possible, this should
> > not be arch-dependent.
>
> I can confirm that we have another bug on ppc arch. The test case below
> is spinning forever,
>
> [...]
>
> it doesn't hang, the parent is spinning around forever, and the test
> case isn't printing anything. It seems like fork() can't complete under
> PTRACE_SINGLESTEP.

Yep, thanks a lot Veaceslav.

I modified this test-case to print si_addr:

#include <stdio.h>
#include <assert.h>
#include <signal.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/wait.h>

int main(void)
{
	int pid, status;

	if (!(pid = fork())) {
		/* child: let the parent trace us, then stop */
		assert(ptrace(PTRACE_TRACEME, 0, 0, 0) == 0);
		kill(getpid(), SIGSTOP);

		if (!fork())
			return 0;

		printf("fork passed..\n");

		return 0;
	}

	for (;;) {
		siginfo_t info;

		assert(pid == wait(&status));
		if (WIFEXITED(status))
			break;
		/* the first stop is the SIGSTOP, every subsequent one is a
		   SIGTRAP (status 0x57f) reported after PTRACE_SINGLESTEP */
		assert(WIFSTOPPED(status));

		assert(ptrace(PTRACE_GETSIGINFO, pid, 0, &info) == 0);
		printf("%p\n", info.si_addr);

		assert(ptrace(PTRACE_SINGLESTEP, pid, 0, 0) == 0);
	}

	printf("Parent exit.\n");

	return 0;
}
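
(For reference, the raw wait statuses this loop sees decode as plain stop
statuses; a tiny standalone sketch, not part of the test, just to decode
them:)

#include <stdio.h>
#include <signal.h>
#include <sys/wait.h>

int main(void)
{
	/* 0x137f is the initial SIGSTOP stop, 0x57f the SIGTRAP stop
	   reported after each PTRACE_SINGLESTEP */
	int stops[] = { 0x137f, 0x57f };
	int i;

	for (i = 0; i < 2; i++)
		printf("%#x: WIFSTOPPED=%d WSTOPSIG=%d\n",
		       stops[i], WIFSTOPPED(stops[i]), WSTOPSIG(stops[i]));
	return 0;
}

(On powerpc, as on x86, WSTOPSIG() here is 19 = SIGSTOP and 5 = SIGTRAP.)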

the output is:

...
0xfedf880
0xfedf884
...
0xfedf96c
0xfedf970

This is fork(), which calls __GI__IO_list_lock:

Dump of assembler code for function fork:
0x0fedf880 <fork+0>: mflr r0
...
0x0fedf96c <fork+236>: li r28,0
0x0fedf970 <fork+240>: bl 0xfeacce0 <__GI__IO_list_lock>

Then it loops inside __GI__IO_list_lock

...
0xfeacd24
0xfeacd28
0xfeacd2c
0xfeacd30
0xfeacd34

0xfeacd24
0xfeacd28
0xfeacd2c
0xfeacd30
0xfeacd34

0xfeacd24
0xfeacd28
0xfeacd2c
0xfeacd30
0xfeacd34
...

and so on forever,

Dump of assembler code for function __GI__IO_list_lock:
0x0feacce0 <__GI__IO_list_lock+0>: mflr r0
0x0feacce4 <__GI__IO_list_lock+4>: stwu r1,-32(r1)
0x0feacce8 <__GI__IO_list_lock+8>: li r11,0
0x0feaccec <__GI__IO_list_lock+12>: bcl- 20,4*cr7+so,0xfeaccf0 <__GI__IO_list_lock+16>
0x0feaccf0 <__GI__IO_list_lock+16>: li r9,1
0x0feaccf4 <__GI__IO_list_lock+20>: stw r0,36(r1)
0x0feaccf8 <__GI__IO_list_lock+24>: stw r30,24(r1)
0x0feaccfc <__GI__IO_list_lock+28>: mflr r30
0x0feacd00 <__GI__IO_list_lock+32>: stw r31,28(r1)
0x0feacd04 <__GI__IO_list_lock+36>: stw r29,20(r1)
0x0feacd08 <__GI__IO_list_lock+40>: addi r29,r2,-29824
0x0feacd0c <__GI__IO_list_lock+44>: addis r30,r30,16
0x0feacd10 <__GI__IO_list_lock+48>: addi r30,r30,13060
0x0feacd14 <__GI__IO_list_lock+52>: lwz r31,-6436(r30)
0x0feacd18 <__GI__IO_list_lock+56>: lwz r0,8(r31)
0x0feacd1c <__GI__IO_list_lock+60>: cmpw cr7,r0,r29
0x0feacd20 <__GI__IO_list_lock+64>: beq- cr7,0xfeacd4c <__GI__IO_list_lock+108>

beg-> 0x0feacd24 <__GI__IO_list_lock+68>: lwarx r0,0,r31
0x0feacd28 <__GI__IO_list_lock+72>: cmpw r0,r11
0x0feacd2c <__GI__IO_list_lock+76>: bne- 0xfeacd38 <__GI__IO_list_lock+88>
0x0feacd30 <__GI__IO_list_lock+80>: stwcx. r9,0,r31
end-> 0x0feacd34 <__GI__IO_list_lock+84>: bne+ 0xfeacd24 <__GI__IO_list_lock+68>

I don't even know whether this is a user-space bug or a kernel bug;
the asm above is black magic to me.

Can anyone who knows something about powerpc give me a hint?

Oleg.

From: Oleg Nesterov
On 11/26, Oleg Nesterov wrote:
>
> Then it loops inside __GI__IO_list_lock
>
> 0xfeacd24
> 0xfeacd28
> 0xfeacd2c
> 0xfeacd30
> 0xfeacd34
> ...
>
> and so on forever,
>
> Dump of assembler code for function __GI__IO_list_lock:
> 0x0feacce0 <__GI__IO_list_lock+0>: mflr r0
> 0x0feacce4 <__GI__IO_list_lock+4>: stwu r1,-32(r1)
> 0x0feacce8 <__GI__IO_list_lock+8>: li r11,0
> 0x0feaccec <__GI__IO_list_lock+12>: bcl- 20,4*cr7+so,0xfeaccf0 <__GI__IO_list_lock+16>
> 0x0feaccf0 <__GI__IO_list_lock+16>: li r9,1
> 0x0feaccf4 <__GI__IO_list_lock+20>: stw r0,36(r1)
> 0x0feaccf8 <__GI__IO_list_lock+24>: stw r30,24(r1)
> 0x0feaccfc <__GI__IO_list_lock+28>: mflr r30
> 0x0feacd00 <__GI__IO_list_lock+32>: stw r31,28(r1)
> 0x0feacd04 <__GI__IO_list_lock+36>: stw r29,20(r1)
> 0x0feacd08 <__GI__IO_list_lock+40>: addi r29,r2,-29824
> 0x0feacd0c <__GI__IO_list_lock+44>: addis r30,r30,16
> 0x0feacd10 <__GI__IO_list_lock+48>: addi r30,r30,13060
> 0x0feacd14 <__GI__IO_list_lock+52>: lwz r31,-6436(r30)
> 0x0feacd18 <__GI__IO_list_lock+56>: lwz r0,8(r31)
> 0x0feacd1c <__GI__IO_list_lock+60>: cmpw cr7,r0,r29
> 0x0feacd20 <__GI__IO_list_lock+64>: beq- cr7,0xfeacd4c <__GI__IO_list_lock+108>
>
> beg-> 0x0feacd24 <__GI__IO_list_lock+68>: lwarx r0,0,r31
> 0x0feacd28 <__GI__IO_list_lock+72>: cmpw r0,r11
> 0x0feacd2c <__GI__IO_list_lock+76>: bne- 0xfeacd38 <__GI__IO_list_lock+88>
> 0x0feacd30 <__GI__IO_list_lock+80>: stwcx. r9,0,r31
> end-> 0x0feacd34 <__GI__IO_list_lock+84>: bne+ 0xfeacd24 <__GI__IO_list_lock+68>
>
> I don't even know whether this is a user-space bug or a kernel bug;
> the asm above is black magic to me.

When I use gdb to step over __GI__IO_list_lock(), it doesn't loop.
I straced gdb and noticed that when the trace reaches

0x0feacd24: lwarx r0,0,r31

gdb does PTRACE_CONT, not PTRACE_SINGLESTEP. After that the child
stops at 0x0feacd38, the first insn (an isync) past the stwcx./bne pair.

> Can anyone who knows something about powerpc give me a hint?

Please ;)

Oleg.

From: Paul Mackerras
Oleg Nesterov writes:

> 0xfeacd24
> 0xfeacd28
> 0xfeacd2c
> 0xfeacd30
> 0xfeacd34
> ...
>
> and so on forever,
....
> beg-> 0x0feacd24 <__GI__IO_list_lock+68>: lwarx r0,0,r31
> 0x0feacd28 <__GI__IO_list_lock+72>: cmpw r0,r11
> 0x0feacd2c <__GI__IO_list_lock+76>: bne- 0xfeacd38 <__GI__IO_list_lock+88>
> 0x0feacd30 <__GI__IO_list_lock+80>: stwcx. r9,0,r31
> end-> 0x0feacd34 <__GI__IO_list_lock+84>: bne+ 0xfeacd24 <__GI__IO_list_lock+68>
>
> I don't even know whether this is a user-space bug or a kernel bug;
> the asm above is black magic to me.

The lwarx and stwcx. work together to do an atomic update to the word
whose address is in r31. They are like LL (load-linked) and SC
(store-conditional) on other architectures such as Alpha. Basically
the lwarx creates an internal "reservation" on the word pointed to by
r31 and loads its value into r0. The stwcx. stores into that word but
only if the reservation still exists. The reservation gets cleared
(in hardware) if any other CPU writes to that word in the meantime.
If the reservation did get cleared, the bne (branch if not equal)
instruction will be taken and we loop around to try again.

There is a difficulty when single-stepping through such a sequence
because the process of taking the single-step exception and returning
will clear the reservation. Thus if you single-step through that
sequence it will never succeed. I believe gdb has code to recognize
this kind of sequence and run through it without stopping until after
the bne, precisely to avoid this problem.
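
To make that concrete, here is a minimal sketch of the same kind of
acquire loop (assuming 32-bit powerpc and gcc inline asm; this is not
glibc's actual source, just an illustration of the lwarx/stwcx. pattern
in the dump above):

/* Sketch only: atomically take a lock word (0 -> 1), the same pattern
 * as the loop in __GI__IO_list_lock. */
static int try_lock(volatile unsigned int *lock)
{
	unsigned int old;

	__asm__ __volatile__(
"1:	lwarx	%0,0,%2\n"	/* load *lock and set the reservation */
"	cmpwi	%0,0\n"		/* already held? */
"	bne-	2f\n"		/* yes -> give up */
"	stwcx.	%1,0,%2\n"	/* store 1 only if the reservation survived */
"	bne-	1b\n"		/* reservation lost -> start over at lwarx */
"	isync\n"		/* acquire barrier */
"2:\n"
	: "=&r" (old)
	: "r" (1), "r" (lock)
	: "cr0", "memory");

	return old == 0;	/* nonzero if we took the lock */
}

Since taking the single-step exception and returning clears the
reservation, a tracer that single-steps every instruction makes the
stwcx. fail every time, so the "bne- 1b" is always taken; that is
exactly the loop the test case is stuck in.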

Paul.
From: Andreas Schwab
Paul Mackerras <paulus@samba.org> writes:

> I believe gdb has code to recognize this kind of sequence and run
> through it without stopping until after the bne, precisely to avoid
> this problem.

See gdb/rs6000-tdep.c:ppc_deal_with_atomic_sequence.
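
Roughly what it does, as a simplified sketch (not gdb's actual code; the
reader callback and the 16-instruction cap here are made up for
illustration):

#include <stdint.h>

#define PPC_OP(insn)	((insn) >> 26)		/* primary opcode */
#define PPC_XO(insn)	(((insn) >> 1) & 0x3ff)	/* extended opcode */
#define IS_LWARX(i)	(PPC_OP(i) == 31 && PPC_XO(i) == 20)
#define IS_STWCX(i)	(PPC_OP(i) == 31 && PPC_XO(i) == 150)

typedef uint32_t (*read_insn_t)(unsigned long addr);

/* If the tracee's PC starts an lwarx ... stwcx./bne sequence, return the
 * address just past the closing bne, where the tracer should plant a
 * temporary breakpoint and PTRACE_CONT instead of single-stepping.
 * Returns 0 if this is not an atomic sequence.  (A real implementation,
 * like gdb's, also has to breakpoint the targets of conditional branches
 * out of the sequence.) */
unsigned long atomic_sequence_end(unsigned long pc, read_insn_t read_insn)
{
	int i;

	if (!IS_LWARX(read_insn(pc)))
		return 0;

	for (i = 1; i <= 16; i++) {
		unsigned long addr = pc + 4 * i;

		if (IS_STWCX(read_insn(addr)))
			return addr + 8;   /* skip stwcx. and the bne after it */
	}
	return 0;
}

For the loop in this thread (lwarx at 0x0feacd24, stwcx. at 0x0feacd30)
this returns 0x0feacd38, which matches where Oleg saw gdb stop the child.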

Andreas.

--
Andreas Schwab, schwab@linux-m68k.org
GPG Key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
From: Oleg Nesterov
On 11/27, Ananth N Mavinakayanahalli wrote:
>
> On Thu, Nov 26, 2009 at 03:50:51PM +0100, Oleg Nesterov wrote:
>
> > Ananth, could you please run the test-case from the changelog
> > below ? I do not really expect this can help, but just in case.
>
> Right, it doesn't help :-(
>
> GDB shows that the parent is forever stuck at wait().

Now this is interesting. Could you please double-check that the parent
hangs in wait()?

This doesn't match the testing Veaceslav and I did on the powerpc machine,
and I had hoped the problem was already resolved.

Please see other emails in this thread.


Hmm. Fortunately I still have access to the testing machine.
Yes, according to gdb it looks as if the parent "hangs" in wait(), but
this is not true. You can strace gdb itself, or look at the
voluntary/nonvoluntary_ctxt_switches counters in
/proc/<pid_of_parent>/status.

Better yet, do not use gdb at all. Just strace (without -f) the parent;
you should see that it continues to trace the child and loops forever.

Oleg.
