From: Borislav Petkov
Hi,

a plain -rc5 triggers at net/core/dev.c:1993 here too:

[ 12.889090] ===================================================
[ 12.889387] [ INFO: suspicious rcu_dereference_check() usage. ]
[ 12.889533] ---------------------------------------------------
[ 12.889679] net/core/dev.c:1993 invoked rcu_dereference_check() without protection!
[ 12.889929]
[ 12.889929] other info that might help us debug this:
[ 12.889930]
[ 12.890368]
[ 12.890369] rcu_scheduler_active = 1, debug_locks = 0
[ 12.890659] 2 locks held by swapper/0:
[ 12.890803] #0: (&idev->mc_ifc_timer){+.-...}, at: [<ffffffff81045f4a>] run_timer_softirq+0x266/0x503
[ 12.891227] #1: (rcu_read_lock_bh){.+....}, at: [<ffffffff81397eb4>] dev_queue_xmit+0x153/0x512
[ 12.891647]
[ 12.891648] stack backtrace:
[ 12.891934] Pid: 0, comm: swapper Not tainted 2.6.34-rc5 #1
[ 12.892085] Call Trace:
[ 12.892231] <IRQ> [<ffffffff81065d8f>] lockdep_rcu_dereference+0xaa/0xb2
[ 12.892430] [<ffffffff81397fbf>] dev_queue_xmit+0x25e/0x512
[ 12.892576] [<ffffffff81397eb4>] ? dev_queue_xmit+0x153/0x512
[ 12.892723] [<ffffffff81066a4a>] ? trace_hardirqs_on+0xd/0xf
[ 12.892871] [<ffffffff8103f4fb>] ? local_bh_enable_ip+0xbc/0xda
[ 12.893024] [<ffffffff8139ea67>] neigh_resolve_output+0x323/0x36a
[ 12.893183] [<ffffffffa00ae6b7>] ? ipv6_chk_mcast_addr+0x0/0x1fa [ipv6]
[ 12.893338] [<ffffffffa0094860>] ip6_output_finish+0x81/0xb9 [ipv6]
[ 12.893492] [<ffffffffa0096067>] ip6_output2+0x2a9/0x2b4 [ipv6]
[ 12.893644] [<ffffffffa0096c33>] ip6_output+0xbc1/0xbd0 [ipv6]
[ 12.893797] [<ffffffffa00a2a06>] ? fib6_force_start_gc+0x30/0x32 [ipv6]
[ 12.893951] [<ffffffffa00b04e8>] mld_sendpack+0x30b/0x435 [ipv6]
[ 12.894109] [<ffffffffa00b01dd>] ? mld_sendpack+0x0/0x435 [ipv6]
[ 12.894264] [<ffffffff8106676d>] ? mark_held_locks+0x52/0x70
[ 12.894418] [<ffffffffa00b0d2d>] mld_ifc_timer_expire+0x254/0x28d [ipv6]
[ 12.894570] [<ffffffff81046065>] run_timer_softirq+0x381/0x503
[ 12.894717] [<ffffffff81045f4a>] ? run_timer_softirq+0x266/0x503
[ 12.894870] [<ffffffffa00b0ad9>] ? mld_ifc_timer_expire+0x0/0x28d [ipv6]
[ 12.895024] [<ffffffff8103f708>] ? __do_softirq+0x79/0x2f5
[ 12.895174] [<ffffffff8103f80f>] __do_softirq+0x180/0x2f5
[ 12.895323] [<ffffffff810030cc>] call_softirq+0x1c/0x28
[ 12.895472] [<ffffffff81004d91>] do_softirq+0x3d/0x85
[ 12.895619] [<ffffffff8103f2b5>] irq_exit+0x4a/0x95
[ 12.895766] [<ffffffff81413a3d>] smp_apic_timer_interrupt+0x8c/0x9a
[ 12.895913] [<ffffffff81002b93>] apic_timer_interrupt+0x13/0x20
[ 12.896065] <EOI> [<ffffffff81412e1e>] ? _raw_spin_unlock_irqrestore+0x38/0x69
[ 12.896363] [<ffffffff8100a189>] ? default_idle+0xd8/0x10a
[ 12.896512] [<ffffffff8100a187>] ? default_idle+0xd6/0x10a
[ 12.896658] [<ffffffff8100a5b3>] c1e_idle+0xcd/0xf4
[ 12.896805] [<ffffffff8100138c>] cpu_idle+0x5e/0xb5
[ 12.896952] [<ffffffff813ff0eb>] rest_init+0xff/0x106
[ 12.897104] [<ffffffff813fefec>] ? rest_init+0x0/0x106
[ 12.897260] [<ffffffff818e7c2a>] start_kernel+0x30f/0x31a
[ 12.897409] [<ffffffff818e726d>] x86_64_start_reservations+0x7d/0x81
[ 12.897560] [<ffffffff818e7355>] x86_64_start_kernel+0xe4/0xeb


--
Regards/Gruss,
Boris.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Paul E. McKenney
On Fri, Apr 23, 2010 at 08:50:59AM -0400, Miles Lane wrote:
> Hi Paul,
> There has been a bit of back and forth, and I am not sure what patches
> I should test now.
> Could you send me a bundle of whatever needs testing now?

Hello, Miles,

I am posting my set as replies to this message. There are a couple
of KVM fixes that are going up via Avi's tree, and a number of networking
fixes that are going up via Dave Miller's tree -- several of these
are against quickly changing code, so it didn't make sense for me to
carry them separately.

I believe that the two splats below are addressed by this patch set
carried in the networking tree:

https://patchwork.kernel.org/patch/90754/

Thanx, Paul

> I currently have a build of 2.6.34-rc5-git3 with the same patch I
> tested before applied.
> I notice a few minor differences in the warnings given. I suspect
> these do not indicate
> new issues, since the trace from <IRQ> through <EOI> is the same as before.
>
> [ 60.174809] [ INFO: suspicious rcu_dereference_check() usage. ]
> [ 60.174812] ---------------------------------------------------
> [ 60.174816] net/mac80211/sta_info.c:886 invoked
> rcu_dereference_check() without protection!
> [ 60.174820]
> [ 60.174821] other info that might help us debug this:
> [ 60.174822]
> [ 60.174825]
> [ 60.174826] rcu_scheduler_active = 1, debug_locks = 1
> [ 60.174829] no locks held by wpa_supplicant/3973.
> [ 60.174832]
> [ 60.174833] stack backtrace:
> [ 60.174838] Pid: 3973, comm: wpa_supplicant Not tainted 2.6.34-rc5-git3 #19
> [ 60.174841] Call Trace:
> [ 60.174844] <IRQ> [<ffffffff81067faa>] lockdep_rcu_dereference+0x9d/0xa5
> [ 60.174873] [<ffffffffa014e9ae>]
> ieee80211_find_sta_by_hw+0x46/0x10f [mac80211]
> [ 60.174886] [<ffffffffa014ea8e>] ieee80211_find_sta+0x17/0x19 [mac80211]
> [ 60.174902] [<ffffffffa01a60f2>] iwl_tx_queue_reclaim+0xdb/0x1b1 [iwlcore]
> [ 60.174909] [<ffffffff81068417>] ? mark_lock+0x2d/0x235
> [ 60.174920] [<ffffffffa01d5f1c>] iwl5000_rx_reply_tx+0x4a9/0x556 [iwlagn]
> [ 60.174927] [<ffffffff8120a2d3>] ? is_swiotlb_buffer+0x2e/0x3b
> [ 60.174936] [<ffffffffa01cebf4>] iwl_rx_handle+0x163/0x2b5 [iwlagn]
> [ 60.174943] [<ffffffff810688f0>] ? trace_hardirqs_on_caller+0xfa/0x13f
> [ 60.174952] [<ffffffffa01cf3ac>] iwl_irq_tasklet+0x2bb/0x3c0 [iwlagn]
> [ 60.174959] [<ffffffff810411df>] tasklet_action+0xa7/0x10f
> [ 60.174965] [<ffffffff810421f1>] __do_softirq+0x144/0x252
> [ 60.174972] [<ffffffff81003a8c>] call_softirq+0x1c/0x34
> [ 60.174977] [<ffffffff810050e4>] do_softirq+0x38/0x80
> [ 60.174982] [<ffffffff81041cbe>] irq_exit+0x45/0x94
> [ 60.174987] [<ffffffff81004829>] do_IRQ+0xad/0xc4
> [ 60.174994] [<ffffffff813cfb13>] ret_from_intr+0x0/0xf
> [ 60.174997] <EOI> [<ffffffff810e5114>] ? kmem_cache_alloc+0xa9/0x15f
> [ 60.175010] [<ffffffff81342182>] ? __alloc_skb+0x3d/0x155
> [ 60.175016] [<ffffffff81342182>] __alloc_skb+0x3d/0x155
> [ 60.175023] [<ffffffff8133d237>] sock_alloc_send_pskb+0xc0/0x2e5
> [ 60.175030] [<ffffffff8133d46c>] sock_alloc_send_skb+0x10/0x12
> [ 60.175036] [<ffffffff813b1ab5>] unix_stream_sendmsg+0x117/0x2e2
> [ 60.175044] [<ffffffff811bdca8>] ? avc_has_perm+0x57/0x69
> [ 60.175050] [<ffffffff8133b892>] ? sock_aio_write+0x0/0xcf
> [ 60.175056] [<ffffffff813392c2>] __sock_sendmsg+0x59/0x64
> [ 60.175062] [<ffffffff8133b94d>] sock_aio_write+0xbb/0xcf
> [ 60.175069] [<ffffffff810e98b1>] do_sync_readv_writev+0xbc/0xfb
> [ 60.175077] [<ffffffff811c1726>] ? selinux_file_permission+0xa2/0xaf
> [ 60.175082] [<ffffffff810e9638>] ? copy_from_user+0x2a/0x2c
> [ 60.175089] [<ffffffff811baf85>] ? security_file_permission+0x11/0x13
> [ 60.175095] [<ffffffff810ea64e>] do_readv_writev+0xa2/0x122
> [ 60.175101] [<ffffffff810ead3b>] ? fcheck_files+0x8f/0xc9
> [ 60.175107] [<ffffffff810ea70c>] vfs_writev+0x3e/0x49
> [ 60.175113] [<ffffffff810ea7f2>] sys_writev+0x45/0x8e
> [ 60.175119] [<ffffffff81002b6b>] system_call_fastpath+0x16/0x1b
>
> [ 60.223213] [ INFO: suspicious rcu_dereference_check() usage. ]
> [ 60.223216] ---------------------------------------------------
> [ 60.223221] net/mac80211/sta_info.c:886 invoked
> rcu_dereference_check() without protection!
> [ 60.223224]
> [ 60.223225] other info that might help us debug this:
> [ 60.223227]
> [ 60.223230]
> [ 60.223230] rcu_scheduler_active = 1, debug_locks = 1
> [ 60.223234] no locks held by udisks-daemon/4398.
> [ 60.223236]
> [ 60.223237] stack backtrace:
> [ 60.223242] Pid: 4398, comm: udisks-daemon Not tainted 2.6.34-rc5-git3 #19
> [ 60.223245] Call Trace:
> [ 60.223249] <IRQ> [<ffffffff81067faa>] lockdep_rcu_dereference+0x9d/0xa5
> [ 60.223275] [<ffffffffa014e9fe>]
> ieee80211_find_sta_by_hw+0x96/0x10f [mac80211]
> [ 60.223288] [<ffffffffa014ea8e>] ieee80211_find_sta+0x17/0x19 [mac80211]
> [ 60.223304] [<ffffffffa01a60f2>] iwl_tx_queue_reclaim+0xdb/0x1b1 [iwlcore]
> [ 60.223310] [<ffffffff81068417>] ? mark_lock+0x2d/0x235
> [ 60.223321] [<ffffffffa01d5f1c>] iwl5000_rx_reply_tx+0x4a9/0x556 [iwlagn]
> [ 60.223329] [<ffffffff8120a2d3>] ? is_swiotlb_buffer+0x2e/0x3b
> [ 60.223338] [<ffffffffa01cebf4>] iwl_rx_handle+0x163/0x2b5 [iwlagn]
> [ 60.223344] [<ffffffff810688f0>] ? trace_hardirqs_on_caller+0xfa/0x13f
> [ 60.223353] [<ffffffffa01cf3ac>] iwl_irq_tasklet+0x2bb/0x3c0 [iwlagn]
> [ 60.223360] [<ffffffff810411df>] tasklet_action+0xa7/0x10f
> [ 60.223367] [<ffffffff810421f1>] __do_softirq+0x144/0x252
> [ 60.223374] [<ffffffff81003a8c>] call_softirq+0x1c/0x34
> [ 60.223379] [<ffffffff810050e4>] do_softirq+0x38/0x80
> [ 60.223384] [<ffffffff81041cbe>] irq_exit+0x45/0x94
> [ 60.223389] [<ffffffff81004829>] do_IRQ+0xad/0xc4
> [ 60.223396] [<ffffffff813cfb13>] ret_from_intr+0x0/0xf
> [ 60.223399] <EOI> [<ffffffff810e34f1>] ? kmem_cache_free+0xb0/0x134
> [ 60.223412] [<ffffffff810f391a>] ? putname+0x2d/0x36
> [ 60.223417] [<ffffffff810f391a>] putname+0x2d/0x36
> [ 60.223423] [<ffffffff810f5536>] user_path_at+0x5f/0x8e
> [ 60.223429] [<ffffffff81068671>] ? mark_held_locks+0x52/0x70
> [ 60.223435] [<ffffffff810e34ee>] ? kmem_cache_free+0xad/0x134
> [ 60.223441] [<ffffffff8106890a>] ? trace_hardirqs_on_caller+0x114/0x13f
> [ 60.223447] [<ffffffff81068942>] ? trace_hardirqs_on+0xd/0xf
> [ 60.223454] [<ffffffff810ed93f>] vfs_fstatat+0x32/0x5d
> [ 60.223460] [<ffffffff810ed9bb>] vfs_lstat+0x19/0x1b
> [ 60.223465] [<ffffffff810ed9d7>] sys_newlstat+0x1a/0x38
> [ 60.223471] [<ffffffff8106890a>] ? trace_hardirqs_on_caller+0x114/0x13f
> [ 60.223477] [<ffffffff813cec00>] ? trace_hardirqs_on_thunk+0x3a/0x3f
> [ 60.223485] [<ffffffff81002b6b>] system_call_fastpath+0x16/0x1b
From: Paul E. McKenney
On Tue, Apr 20, 2010 at 11:38:28AM -0400, Miles Lane wrote:
> Excellent. Here are the results on my machine. .config appended.

First, thank you very much for testing this, Miles!

> [ 0.177300] [ INFO: suspicious rcu_dereference_check() usage. ]
> [ 0.177428] ---------------------------------------------------
> [ 0.177557] include/linux/cgroup.h:533 invoked
> rcu_dereference_check() without protection!
> [ 0.177760]
> [ 0.177761] other info that might help us debug this:
> [ 0.177762]
> [ 0.178123]
> [ 0.178124] rcu_scheduler_active = 1, debug_locks = 1
> [ 0.178369] no locks held by watchdog/0/5.
> [ 0.178493]
> [ 0.178494] stack backtrace:
> [ 0.178735] Pid: 5, comm: watchdog/0 Not tainted 2.6.34-rc5 #18
> [ 0.178863] Call Trace:
> [ 0.178994] [<ffffffff81067fc2>] lockdep_rcu_dereference+0x9d/0xa5
> [ 0.179127] [<ffffffff8102d667>] task_subsys_state+0x48/0x60
> [ 0.179259] [<ffffffff810328e5>] __sched_setscheduler+0x19d/0x300
> [ 0.179392] [<ffffffff8102b477>] ? need_resched+0x1e/0x28
> [ 0.179523] [<ffffffff813cd501>] ? schedule+0x643/0x66e
> [ 0.179653] [<ffffffff81091903>] ? watchdog+0x0/0x8c
> [ 0.179783] [<ffffffff81032a63>] sched_setscheduler+0xe/0x10
> [ 0.179913] [<ffffffff8109192d>] watchdog+0x2a/0x8c
> [ 0.180010] [<ffffffff81091903>] ? watchdog+0x0/0x8c
> [ 0.180142] [<ffffffff8105713e>] kthread+0x89/0x91
> [ 0.180272] [<ffffffff81068922>] ? trace_hardirqs_on_caller+0x114/0x13f
> [ 0.180405] [<ffffffff81003994>] kernel_thread_helper+0x4/0x10
> [ 0.180537] [<ffffffff813cfcc0>] ? restore_args+0x0/0x30
> [ 0.180667] [<ffffffff810570b5>] ? kthread+0x0/0x91
> [ 0.180796] [<ffffffff81003990>] ? kernel_thread_helper+0x0/0x10

I have a prototype patch for this way down below, but someone who knows
more about CONFIG_RT_GROUP_SCHED than I do should look it over. In the
meantime, could you please see if it helps?

> [ 3.116754] [ INFO: suspicious rcu_dereference_check() usage. ]
> [ 3.116754] ---------------------------------------------------
> [ 3.116754] kernel/cgroup.c:4432 invoked rcu_dereference_check()
> without protection!
> [ 3.116754]
> [ 3.116754] other info that might help us debug this:
> [ 3.116754]
> [ 3.116754]
> [ 3.116754] rcu_scheduler_active = 1, debug_locks = 1
> [ 3.116754] 2 locks held by async/1/666:
> [ 3.116754] #0: (&shost->scan_mutex){+.+.+.}, at:
> [<ffffffff812df0a0>] __scsi_add_device+0x83/0xe4
> [ 3.116754] #1: (&(&blkcg->lock)->rlock){......}, at:
> [<ffffffff811f2e8d>] blkiocg_add_blkio_group+0x29/0x7f
> [ 3.116754]
> [ 3.116754] stack backtrace:
> [ 3.116754] Pid: 666, comm: async/1 Not tainted 2.6.34-rc5 #18
> [ 3.116754] Call Trace:
> [ 3.116754] [<ffffffff81067fc2>] lockdep_rcu_dereference+0x9d/0xa5
> [ 3.116754] [<ffffffff8107f9b1>] css_id+0x3f/0x51
> [ 3.116754] [<ffffffff811f2e9c>] blkiocg_add_blkio_group+0x38/0x7f
> [ 3.116754] [<ffffffff811f4e64>] cfq_init_queue+0xdf/0x2dc
> [ 3.116754] [<ffffffff811e3445>] elevator_init+0xba/0xf5
> [ 3.116754] [<ffffffff812dc02a>] ? scsi_request_fn+0x0/0x451
> [ 3.116754] [<ffffffff811e696b>] blk_init_queue_node+0x12f/0x135
> [ 3.116754] [<ffffffff811e697d>] blk_init_queue+0xc/0xe
> [ 3.116754] [<ffffffff812dc49c>] __scsi_alloc_queue+0x21/0x111
> [ 3.116754] [<ffffffff812dc5a4>] scsi_alloc_queue+0x18/0x64
> [ 3.116754] [<ffffffff812de5a0>] scsi_alloc_sdev+0x19e/0x256
> [ 3.116754] [<ffffffff812de73e>] scsi_probe_and_add_lun+0xe6/0x9c5
> [ 3.116754] [<ffffffff81068922>] ? trace_hardirqs_on_caller+0x114/0x13f
> [ 3.116754] [<ffffffff813ce0d6>] ? __mutex_lock_common+0x3e4/0x43a
> [ 3.116754] [<ffffffff812df0a0>] ? __scsi_add_device+0x83/0xe4
> [ 3.116754] [<ffffffff812d0a5c>] ? transport_setup_classdev+0x0/0x17
> [ 3.116754] [<ffffffff812df0a0>] ? __scsi_add_device+0x83/0xe4
> [ 3.116754] [<ffffffff812df0d5>] __scsi_add_device+0xb8/0xe4
> [ 3.116754] [<ffffffff812ea9c5>] ata_scsi_scan_host+0x74/0x16e
> [ 3.116754] [<ffffffff81057685>] ? autoremove_wake_function+0x0/0x34
> [ 3.116754] [<ffffffff812e8e64>] async_port_probe+0xab/0xb7
> [ 3.116754] [<ffffffff8105e1b5>] ? async_thread+0x0/0x1f4
> [ 3.116754] [<ffffffff8105e2ba>] async_thread+0x105/0x1f4
> [ 3.116754] [<ffffffff81033d79>] ? default_wake_function+0x0/0xf
> [ 3.116754] [<ffffffff8105e1b5>] ? async_thread+0x0/0x1f4
> [ 3.116754] [<ffffffff8105713e>] kthread+0x89/0x91
> [ 3.116754] [<ffffffff81068922>] ? trace_hardirqs_on_caller+0x114/0x13f
> [ 3.116754] [<ffffffff81003994>] kernel_thread_helper+0x4/0x10
> [ 3.116754] [<ffffffff813cfcc0>] ? restore_args+0x0/0x30
> [ 3.116754] [<ffffffff810570b5>] ? kthread+0x0/0x91
> [ 3.116754] [<ffffffff81003990>] ? kernel_thread_helper+0x0/0x10

I cannot convince myself that the above access is safe. Vivek, Nauman,
thoughts?

> [ 33.425087] [ INFO: suspicious rcu_dereference_check() usage. ]
> [ 33.425090] ---------------------------------------------------
> [ 33.425094] net/core/dev.c:1993 invoked rcu_dereference_check()
> without protection!
> [ 33.425098]
> [ 33.425098] other info that might help us debug this:
> [ 33.425100]
> [ 33.425103]
> [ 33.425104] rcu_scheduler_active = 1, debug_locks = 1
> [ 33.425108] 2 locks held by canberra-gtk-pl/4208:
> [ 33.425111] #0: (sk_lock-AF_INET){+.+.+.}, at:
> [<ffffffff81394ffd>] inet_stream_connect+0x3a/0x24d
> [ 33.425125] #1: (rcu_read_lock_bh){.+....}, at:
> [<ffffffff8134a809>] dev_queue_xmit+0x14e/0x4b8
> [ 33.425137]
> [ 33.425138] stack backtrace:
> [ 33.425142] Pid: 4208, comm: canberra-gtk-pl Not tainted 2.6.34-rc5 #18
> [ 33.425146] Call Trace:
> [ 33.425154] [<ffffffff81067fc2>] lockdep_rcu_dereference+0x9d/0xa5
> [ 33.425161] [<ffffffff8134a914>] dev_queue_xmit+0x259/0x4b8
> [ 33.425167] [<ffffffff8134a809>] ? dev_queue_xmit+0x14e/0x4b8
> [ 33.425173] [<ffffffff81041c52>] ? _local_bh_enable_ip+0xcd/0xda
> [ 33.425180] [<ffffffff8135375a>] neigh_resolve_output+0x234/0x285
> [ 33.425188] [<ffffffff8136f71f>] ip_finish_output2+0x257/0x28c
> [ 33.425193] [<ffffffff8136f7bc>] ip_finish_output+0x68/0x6a
> [ 33.425198] [<ffffffff813704b3>] T.866+0x52/0x59
> [ 33.425203] [<ffffffff813706fe>] ip_output+0xaa/0xb4
> [ 33.425209] [<ffffffff8136ebb8>] ip_local_out+0x20/0x24
> [ 33.425215] [<ffffffff8136f204>] ip_queue_xmit+0x309/0x368
> [ 33.425223] [<ffffffff810e41e6>] ? __kmalloc_track_caller+0x111/0x155
> [ 33.425230] [<ffffffff813831ef>] ? tcp_connect+0x223/0x3d3
> [ 33.425236] [<ffffffff81381971>] tcp_transmit_skb+0x707/0x745
> [ 33.425243] [<ffffffff81383342>] tcp_connect+0x376/0x3d3
> [ 33.425250] [<ffffffff81268ac3>] ? secure_tcp_sequence_number+0x55/0x6f
> [ 33.425256] [<ffffffff813872f0>] tcp_v4_connect+0x3df/0x455
> [ 33.425263] [<ffffffff8133cbd9>] ? lock_sock_nested+0xf3/0x102
> [ 33.425269] [<ffffffff81395067>] inet_stream_connect+0xa4/0x24d
> [ 33.425276] [<ffffffff8133b418>] sys_connect+0x90/0xd0
> [ 33.425283] [<ffffffff81002b9c>] ? sysret_check+0x27/0x62
> [ 33.425289] [<ffffffff81068922>] ? trace_hardirqs_on_caller+0x114/0x13f
> [ 33.425296] [<ffffffff813ced00>] ? trace_hardirqs_on_thunk+0x3a/0x3f
> [ 33.425303] [<ffffffff81002b6b>] system_call_fastpath+0x16/0x1b

This looks like an rcu_dereference() needs to instead be
rcu_dereference_bh(), but the line numbering in my version of
net/core/dev.c does not match yours. CCing netdev, hopefully
someone there will know which rcu_dereference() is indicated.
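
The mismatch Paul is describing can be sketched as follows (kernel-style sketch only; the field name `qdisc` and the exact hunk are illustrative, since the real fix went via Dave Miller's tree and the line numbers differ between trees):

```c
/* dev_queue_xmit() runs under rcu_read_lock_bh(), so a plain
 * rcu_dereference() inside it trips PROVE_RCU: that primitive checks
 * for rcu_read_lock(), not the _bh flavor actually held here.
 */
rcu_read_lock_bh();
q = rcu_dereference_bh(dev->qdisc);	/* matches the _bh critical section */
/* q = rcu_dereference(dev->qdisc);	   would splat under PROVE_RCU */
/* ... transmit ... */
rcu_read_unlock_bh();
```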

> [ 52.869375] [ INFO: suspicious rcu_dereference_check() usage. ]
> [ 52.869378] ---------------------------------------------------
> [ 52.869382] net/mac80211/sta_info.c:886 invoked
> rcu_dereference_check() without protection!
> [ 52.869386]
> [ 52.869387] other info that might help us debug this:
> [ 52.869389]
> [ 52.869392]
> [ 52.869392] rcu_scheduler_active = 1, debug_locks = 1
> [ 52.869397] 1 lock held by Xorg/4051:
> [ 52.869399] #0: (&dev->struct_mutex){+.+.+.}, at:
> [<ffffffff812afdc4>] i915_gem_do_execbuffer+0xf4c/0xfda
> [ 52.869414]
> [ 52.869415] stack backtrace:
> [ 52.869420] Pid: 4051, comm: Xorg Not tainted 2.6.34-rc5 #18
> [ 52.869423] Call Trace:
> [ 52.869426] <IRQ> [<ffffffff81067fc2>] lockdep_rcu_dereference+0x9d/0xa5
> [ 52.869454] [<ffffffffa01289ae>]
> ieee80211_find_sta_by_hw+0x46/0x10f [mac80211]
> [ 52.869467] [<ffffffffa0128a8e>] ieee80211_find_sta+0x17/0x19 [mac80211]
> [ 52.869483] [<ffffffffa017a0f2>] iwl_tx_queue_reclaim+0xdb/0x1b1 [iwlcore]
> [ 52.869490] [<ffffffff8106842f>] ? mark_lock+0x2d/0x235
> [ 52.869501] [<ffffffffa01a2f1c>] iwl5000_rx_reply_tx+0x4a9/0x556 [iwlagn]
> [ 52.869508] [<ffffffff8120a3d3>] ? is_swiotlb_buffer+0x2e/0x3b
> [ 52.869518] [<ffffffffa019bbf4>] iwl_rx_handle+0x163/0x2b5 [iwlagn]
> [ 52.869524] [<ffffffff81068908>] ? trace_hardirqs_on_caller+0xfa/0x13f
> [ 52.869534] [<ffffffffa019c3ac>] iwl_irq_tasklet+0x2bb/0x3c0 [iwlagn]
> [ 52.869540] [<ffffffff810411df>] tasklet_action+0xa7/0x10f
> [ 52.869546] [<ffffffff810421f1>] __do_softirq+0x144/0x252
> [ 52.869553] [<ffffffff81003a8c>] call_softirq+0x1c/0x34
> [ 52.869559] [<ffffffff810050e4>] do_softirq+0x38/0x80
> [ 52.869564] [<ffffffff81041cbe>] irq_exit+0x45/0x94
> [ 52.869569] [<ffffffff81004829>] do_IRQ+0xad/0xc4
> [ 52.869576] [<ffffffff813cfc13>] ret_from_intr+0x0/0xf
> [ 52.869580] <EOI> [<ffffffff81068765>] ? lockdep_trace_alloc+0xbe/0xc2
> [ 52.869592] [<ffffffff810bca55>] __alloc_pages_nodemask+0x8f/0x6a5
> [ 52.869598] [<ffffffff810b70f5>] ? rcu_read_lock+0x0/0x35
> [ 52.869604] [<ffffffff810b70f5>] ? rcu_read_lock+0x0/0x35
> [ 52.869610] [<ffffffff810c33cb>] ? kmap_atomic+0x16/0x4b
> [ 52.869615] [<ffffffff810b71ad>] ? rcu_read_unlock+0x21/0x23
> [ 52.869621] [<ffffffff810b6c3c>] __page_cache_alloc+0x14/0x16
> [ 52.869627] [<ffffffff810b836d>] do_read_cache_page+0x43/0x121
> [ 52.869632] [<ffffffff810c54bd>] ? shmem_readpage+0x0/0x3c
> [ 52.869638] [<ffffffff810b8464>] read_cache_page_gfp+0x19/0x23
> [ 52.869644] [<ffffffff812aac10>] i915_gem_object_get_pages+0xa1/0x115
> [ 52.869651] [<ffffffff812ad23e>] i915_gem_object_bind_to_gtt+0x16d/0x2ce
> [ 52.869657] [<ffffffff812ad3c6>] i915_gem_object_pin+0x27/0x88
> [ 52.869663] [<ffffffff812af316>] i915_gem_do_execbuffer+0x49e/0xfda
> [ 52.869670] [<ffffffff810cbb93>] ? might_fault+0x63/0xb3
> [ 52.869676] [<ffffffff810cbbdc>] ? might_fault+0xac/0xb3
> [ 52.869681] [<ffffffff810cbb93>] ? might_fault+0x63/0xb3
> [ 52.869687] [<ffffffff812b010d>] i915_gem_execbuffer+0x192/0x221
> [ 52.869694] [<ffffffff812900d0>] drm_ioctl+0x25a/0x36e
> [ 52.869700] [<ffffffff812aff7b>] ? i915_gem_execbuffer+0x0/0x221
> [ 52.869707] [<ffffffff810e9ad1>] ? do_sync_read+0xc6/0x103
> [ 52.869714] [<ffffffff810f6dcd>] vfs_ioctl+0x2d/0xa1
> [ 52.869720] [<ffffffff810f7343>] do_vfs_ioctl+0x48b/0x4d1
> [ 52.869726] [<ffffffff810f73da>] sys_ioctl+0x51/0x74
> [ 52.869733] [<ffffffff81002b6b>] system_call_fastpath+0x16/0x1b

This one looks to be an update-side reference protected by dev->struct_mutex,
but there is no obvious way to get that information to the pair
of rcu_dereference() calls in for_each_sta_info(). Besides, I am
not 100% certain that this one really is only a false positive,
especially given that the next one looks similar but uses a
different lock.

Eric, any enlightenment?

> [ 52.884563] [ INFO: suspicious rcu_dereference_check() usage. ]
> [ 52.884566] ---------------------------------------------------
> [ 52.884571] net/mac80211/sta_info.c:886 invoked
> rcu_dereference_check() without protection!
> [ 52.884574]
> [ 52.884575] other info that might help us debug this:
> [ 52.884577]
> [ 52.884580]
> [ 52.884581] rcu_scheduler_active = 1, debug_locks = 1
> [ 52.884585] 1 lock held by rsyslogd/3854:
> [ 52.884588] #0: (&sb->s_type->i_mutex_key#10){+.+.+.}, at:
> [<ffffffff810b7f97>] generic_file_aio_write+0x47/0xa8
> [ 52.884604]
> [ 52.884605] stack backtrace:
> [ 52.884610] Pid: 3854, comm: rsyslogd Not tainted 2.6.34-rc5 #18
> [ 52.884613] Call Trace:
> [ 52.884617] <IRQ> [<ffffffff81067fc2>] lockdep_rcu_dereference+0x9d/0xa5
> [ 52.884645] [<ffffffffa01289fe>]
> ieee80211_find_sta_by_hw+0x96/0x10f [mac80211]
> [ 52.884658] [<ffffffffa0128a8e>] ieee80211_find_sta+0x17/0x19 [mac80211]
> [ 52.884675] [<ffffffffa017a0f2>] iwl_tx_queue_reclaim+0xdb/0x1b1 [iwlcore]
> [ 52.884681] [<ffffffff8106842f>] ? mark_lock+0x2d/0x235
> [ 52.884693] [<ffffffffa01a2f1c>] iwl5000_rx_reply_tx+0x4a9/0x556 [iwlagn]
> [ 52.884701] [<ffffffff8120a3d3>] ? is_swiotlb_buffer+0x2e/0x3b
> [ 52.884710] [<ffffffffa019bbf4>] iwl_rx_handle+0x163/0x2b5 [iwlagn]
> [ 52.884717] [<ffffffff81068908>] ? trace_hardirqs_on_caller+0xfa/0x13f
> [ 52.884726] [<ffffffffa019c3ac>] iwl_irq_tasklet+0x2bb/0x3c0 [iwlagn]
> [ 52.884733] [<ffffffff810411df>] tasklet_action+0xa7/0x10f
> [ 52.884739] [<ffffffff810421f1>] __do_softirq+0x144/0x252
> [ 52.884746] [<ffffffff81003a8c>] call_softirq+0x1c/0x34
> [ 52.884752] [<ffffffff810050e4>] do_softirq+0x38/0x80
> [ 52.884757] [<ffffffff81041cbe>] irq_exit+0x45/0x94
> [ 52.884762] [<ffffffff81004829>] do_IRQ+0xad/0xc4
> [ 52.884769] [<ffffffff813cfc13>] ret_from_intr+0x0/0xf
> [ 52.884773] <EOI> [<ffffffff810e3509>] ? kmem_cache_free+0xb0/0x134
> [ 52.884789] [<ffffffff811913dc>] ? jbd2_journal_stop+0x32c/0x33e
> [ 52.884796] [<ffffffff811913dc>] jbd2_journal_stop+0x32c/0x33e
> [ 52.884804] [<ffffffff8115e689>] ? ext4_dirty_inode+0x40/0x45
> [ 52.884811] [<ffffffff81105fdb>] ? __mark_inode_dirty+0x2f/0x12e
> [ 52.884819] [<ffffffff81170a65>] __ext4_journal_stop+0x6f/0x75
> [ 52.884825] [<ffffffff81162949>] ext4_da_write_end+0x25c/0x2fc
> [ 52.884833] [<ffffffff810b6b2e>] generic_file_buffered_write+0x161/0x25b
> [ 52.884840] [<ffffffff810b7f1b>] __generic_file_aio_write+0x24a/0x27f
> [ 52.884845] [<ffffffff810b7f97>] ? generic_file_aio_write+0x47/0xa8
> [ 52.884852] [<ffffffff810b7faa>] generic_file_aio_write+0x5a/0xa8
> [ 52.884858] [<ffffffff8115ab2a>] ext4_file_write+0x8c/0x96
> [ 52.884864] [<ffffffff810e99ce>] do_sync_write+0xc6/0x103
> [ 52.884871] [<ffffffff810eac6d>] ? rcu_read_lock+0x0/0x35
> [ 52.884878] [<ffffffff811c17db>] ? selinux_file_permission+0x57/0xaf
> [ 52.884885] [<ffffffff811bb085>] ? security_file_permission+0x11/0x13
> [ 52.884893] [<ffffffff810e9f33>] vfs_write+0xa9/0x106
> [ 52.884898] [<ffffffff810ea046>] sys_write+0x45/0x69
> [ 52.884905] [<ffffffff81002b6b>] system_call_fastpath+0x16/0x1b

Ditto!

> [ 85.939528] [ INFO: suspicious rcu_dereference_check() usage. ]
> [ 85.939531] ---------------------------------------------------
> [ 85.939535] include/net/inet_timewait_sock.h:227 invoked
> rcu_dereference_check() without protection!
> [ 85.939539]
> [ 85.939540] other info that might help us debug this:
> [ 85.939541]
> [ 85.939544]
> [ 85.939545] rcu_scheduler_active = 1, debug_locks = 1
> [ 85.939549] 2 locks held by gwibber-service/4798:
> [ 85.939552] #0: (&p->lock){+.+.+.}, at: [<ffffffff811034b2>]
> seq_read+0x37/0x381
> [ 85.939566] #1: (&(&hashinfo->ehash_locks[i])->rlock){+.-...},
> at: [<ffffffff81386355>] established_get_next+0xc4/0x132
> [ 85.939579]
> [ 85.939580] stack backtrace:
> [ 85.939585] Pid: 4798, comm: gwibber-service Not tainted 2.6.34-rc5 #18
> [ 85.939588] Call Trace:
> [ 85.939598] [<ffffffff81067fc2>] lockdep_rcu_dereference+0x9d/0xa5
> [ 85.939604] [<ffffffff81385018>] twsk_net+0x4f/0x57
> [ 85.939610] [<ffffffff813862e5>] established_get_next+0x54/0x132
> [ 85.939615] [<ffffffff813864c7>] tcp_seq_next+0x5d/0x6a
> [ 85.939621] [<ffffffff81103701>] seq_read+0x286/0x381
> [ 85.939627] [<ffffffff8110347b>] ? seq_read+0x0/0x381
> [ 85.939633] [<ffffffff81133240>] proc_reg_read+0x8d/0xac
> [ 85.939640] [<ffffffff810ea110>] vfs_read+0xa6/0x103
> [ 85.939645] [<ffffffff810ea223>] sys_read+0x45/0x69
> [ 85.939652] [<ffffffff81002b6b>] system_call_fastpath+0x16/0x1b

This one appears to be a case of missing rcu_read_lock(), but it is
not clear to me at what level it needs to go.

Eric, any enlightenment on this one and the next one?
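
A minimal sketch of what such a fix might look like (an assumption on my part -- the thread leaves open whether the read side or the caller is the right level):

```c
/* Sketch only: wrap the traversal in an RCU read-side critical section
 * so that twsk_net()'s internal rcu_dereference() of tw->tw_net is legal
 * even when the caller (here, the /proc seq_file reader) holds no lock.
 */
rcu_read_lock();
net = twsk_net(tw);
/* ... compare/use net while still inside the critical section ... */
rcu_read_unlock();
```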

> [ 87.296366] [ INFO: suspicious rcu_dereference_check() usage. ]
> [ 87.296369] ---------------------------------------------------
> [ 87.296373] include/net/inet_timewait_sock.h:227 invoked
> rcu_dereference_check() without protection!
> [ 87.296377]
> [ 87.296377] other info that might help us debug this:
> [ 87.296379]
> [ 87.296382]
> [ 87.296383] rcu_scheduler_active = 1, debug_locks = 1
> [ 87.296386] no locks held by gwibber-service/4803.
> [ 87.296389]
> [ 87.296390] stack backtrace:
> [ 87.296395] Pid: 4803, comm: gwibber-service Not tainted 2.6.34-rc5 #18
> [ 87.296398] Call Trace:
> [ 87.296411] [<ffffffff81067fc2>] lockdep_rcu_dereference+0x9d/0xa5
> [ 87.296419] [<ffffffff813733d3>] twsk_net+0x4f/0x57
> [ 87.296424] [<ffffffff813737f3>] __inet_twsk_hashdance+0x50/0x158
> [ 87.296431] [<ffffffff81389239>] tcp_time_wait+0x1c1/0x24b
> [ 87.296437] [<ffffffff8137c417>] tcp_fin+0x83/0x162
> [ 87.296443] [<ffffffff8137cda7>] tcp_data_queue+0x1ff/0xa1e
> [ 87.296450] [<ffffffff810495c6>] ? mod_timer+0x1e/0x20
> [ 87.296456] [<ffffffff813809e3>] tcp_rcv_state_process+0x89d/0x8f2
> [ 87.296463] [<ffffffff8133ca0b>] ? release_sock+0x30/0x10b
> [ 87.296468] [<ffffffff81386df2>] tcp_v4_do_rcv+0x2de/0x33f
> [ 87.296475] [<ffffffff8133ca5d>] release_sock+0x82/0x10b
> [ 87.296481] [<ffffffff81376ef5>] tcp_close+0x1b5/0x37e
> [ 87.296487] [<ffffffff81395437>] inet_release+0x50/0x57
> [ 87.296493] [<ffffffff8133a134>] sock_release+0x1a/0x66
> [ 87.296498] [<ffffffff8133a1a2>] sock_close+0x22/0x26
> [ 87.296505] [<ffffffff810eb003>] __fput+0x120/0x1cd
> [ 87.296510] [<ffffffff810eb0c5>] fput+0x15/0x17
> [ 87.296516] [<ffffffff810e7f3d>] filp_close+0x63/0x6d
> [ 87.296521] [<ffffffff810e801e>] sys_close+0xd7/0x111
> [ 87.296528] [<ffffffff81002b6b>] system_call_fastpath+0x16/0x1b

commit d3b8ba1bde9afb7d50cf0712f9d95317ea66c06f
Author: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
Date: Wed Apr 21 14:04:56 2010 -0700

sched: protect __sched_setscheduler() access to cgroups

A given task's cgroups structures must remain while that task is running
due to reference counting, so this is presumably a false positive.

Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>

diff --git a/kernel/sched.c b/kernel/sched.c
index 14c44ec..1d43c1a 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4575,9 +4575,11 @@ recheck:
* Do not allow realtime tasks into groups that have no runtime
* assigned.
*/
+ rcu_read_lock();
if (rt_bandwidth_enabled() && rt_policy(policy) &&
task_group(p)->rt_bandwidth.rt_runtime == 0)
return -EPERM;
+ rcu_read_unlock();
#endif

retval = security_task_setscheduler(p, policy, param);
From: Paul E. McKenney
On Wed, Apr 21, 2010 at 02:35:43PM -0700, Paul E. McKenney wrote:
> On Tue, Apr 20, 2010 at 11:38:28AM -0400, Miles Lane wrote:
> > Excellent. Here are the results on my machine. .config appended.
>
> First, thank you very much for testing this, Miles!

And as Tetsuo Handa pointed out privately, my patch was way broken.

Here is an updated version.

Thanx, Paul

commit b15e561ed91b7a366c3cc635026f3b9ce6483070
Author: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
Date: Wed Apr 21 14:04:56 2010 -0700

sched: protect __sched_setscheduler() access to cgroups

A given task's cgroups structures must remain while that task is running
due to reference counting, so this is presumably a false positive.
Updated to reflect feedback from Tetsuo Handa.

Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>

diff --git a/kernel/sched.c b/kernel/sched.c
index 14c44ec..f425a2b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4575,9 +4575,13 @@ recheck:
* Do not allow realtime tasks into groups that have no runtime
* assigned.
*/
+ rcu_read_lock();
if (rt_bandwidth_enabled() && rt_policy(policy) &&
- task_group(p)->rt_bandwidth.rt_runtime == 0)
+ task_group(p)->rt_bandwidth.rt_runtime == 0) {
+ rcu_read_unlock();
return -EPERM;
+ }
+ rcu_read_unlock();
#endif

retval = security_task_setscheduler(p, policy, param);
From: Eric Dumazet
On Wednesday 21 April 2010 at 14:35 -0700, Paul E. McKenney wrote:

> > [ 33.425087] [ INFO: suspicious rcu_dereference_check() usage. ]
> > [ 33.425090] ---------------------------------------------------
> > [ 33.425094] net/core/dev.c:1993 invoked rcu_dereference_check()
> > without protection!
> > [ 33.425098]
> > [ 33.425098] other info that might help us debug this:
> > [ 33.425100]
> > [ 33.425103]
> > [ 33.425104] rcu_scheduler_active = 1, debug_locks = 1
> > [ 33.425108] 2 locks held by canberra-gtk-pl/4208:
> > [ 33.425111] #0: (sk_lock-AF_INET){+.+.+.}, at:
> > [<ffffffff81394ffd>] inet_stream_connect+0x3a/0x24d
> > [ 33.425125] #1: (rcu_read_lock_bh){.+....}, at:
> > [<ffffffff8134a809>] dev_queue_xmit+0x14e/0x4b8
> > [ 33.425137]
> > [ 33.425138] stack backtrace:
> > [ 33.425142] Pid: 4208, comm: canberra-gtk-pl Not tainted 2.6.34-rc5 #18
> > [ 33.425146] Call Trace:
> > [ 33.425154] [<ffffffff81067fc2>] lockdep_rcu_dereference+0x9d/0xa5
> > [ 33.425161] [<ffffffff8134a914>] dev_queue_xmit+0x259/0x4b8
> > [ 33.425167] [<ffffffff8134a809>] ? dev_queue_xmit+0x14e/0x4b8
> > [ 33.425173] [<ffffffff81041c52>] ? _local_bh_enable_ip+0xcd/0xda
> > [ 33.425180] [<ffffffff8135375a>] neigh_resolve_output+0x234/0x285
> > [ 33.425188] [<ffffffff8136f71f>] ip_finish_output2+0x257/0x28c
> > [ 33.425193] [<ffffffff8136f7bc>] ip_finish_output+0x68/0x6a
> > [ 33.425198] [<ffffffff813704b3>] T.866+0x52/0x59
> > [ 33.425203] [<ffffffff813706fe>] ip_output+0xaa/0xb4
> > [ 33.425209] [<ffffffff8136ebb8>] ip_local_out+0x20/0x24
> > [ 33.425215] [<ffffffff8136f204>] ip_queue_xmit+0x309/0x368
> > [ 33.425223] [<ffffffff810e41e6>] ? __kmalloc_track_caller+0x111/0x155
> > [ 33.425230] [<ffffffff813831ef>] ? tcp_connect+0x223/0x3d3
> > [ 33.425236] [<ffffffff81381971>] tcp_transmit_skb+0x707/0x745
> > [ 33.425243] [<ffffffff81383342>] tcp_connect+0x376/0x3d3
> > [ 33.425250] [<ffffffff81268ac3>] ? secure_tcp_sequence_number+0x55/0x6f
> > [ 33.425256] [<ffffffff813872f0>] tcp_v4_connect+0x3df/0x455
> > [ 33.425263] [<ffffffff8133cbd9>] ? lock_sock_nested+0xf3/0x102
> > [ 33.425269] [<ffffffff81395067>] inet_stream_connect+0xa4/0x24d
> > [ 33.425276] [<ffffffff8133b418>] sys_connect+0x90/0xd0
> > [ 33.425283] [<ffffffff81002b9c>] ? sysret_check+0x27/0x62
> > [ 33.425289] [<ffffffff81068922>] ? trace_hardirqs_on_caller+0x114/0x13f
> > [ 33.425296] [<ffffffff813ced00>] ? trace_hardirqs_on_thunk+0x3a/0x3f
> > [ 33.425303] [<ffffffff81002b6b>] system_call_fastpath+0x16/0x1b
>
> This looks like an rcu_dereference() needs to instead be
> rcu_dereference_bh(), but the line numbering in my version of
> net/core/dev.c does not match yours. CCing netdev, hopefully
> someone there will know which rcu_dereference() is indicated.
>

This is already sorted out in David's trees.



> > [ 85.939528] [ INFO: suspicious rcu_dereference_check() usage. ]
> > [ 85.939531] ---------------------------------------------------
> > [ 85.939535] include/net/inet_timewait_sock.h:227 invoked
> > rcu_dereference_check() without protection!
> > [ 85.939539]
> > [ 85.939540] other info that might help us debug this:
> > [ 85.939541]
> > [ 85.939544]
> > [ 85.939545] rcu_scheduler_active = 1, debug_locks = 1
> > [ 85.939549] 2 locks held by gwibber-service/4798:
> > [ 85.939552] #0: (&p->lock){+.+.+.}, at: [<ffffffff811034b2>]
> > seq_read+0x37/0x381
> > [ 85.939566] #1: (&(&hashinfo->ehash_locks[i])->rlock){+.-...},
> > at: [<ffffffff81386355>] established_get_next+0xc4/0x132
> > [ 85.939579]
> > [ 85.939580] stack backtrace:
> > [ 85.939585] Pid: 4798, comm: gwibber-service Not tainted 2.6.34-rc5 #18
> > [ 85.939588] Call Trace:
> > [ 85.939598] [<ffffffff81067fc2>] lockdep_rcu_dereference+0x9d/0xa5
> > [ 85.939604] [<ffffffff81385018>] twsk_net+0x4f/0x57
> > [ 85.939610] [<ffffffff813862e5>] established_get_next+0x54/0x132
> > [ 85.939615] [<ffffffff813864c7>] tcp_seq_next+0x5d/0x6a
> > [ 85.939621] [<ffffffff81103701>] seq_read+0x286/0x381
> > [ 85.939627] [<ffffffff8110347b>] ? seq_read+0x0/0x381
> > [ 85.939633] [<ffffffff81133240>] proc_reg_read+0x8d/0xac
> > [ 85.939640] [<ffffffff810ea110>] vfs_read+0xa6/0x103
> > [ 85.939645] [<ffffffff810ea223>] sys_read+0x45/0x69
> > [ 85.939652] [<ffffffff81002b6b>] system_call_fastpath+0x16/0x1b
>
> This one appears to be a case of missing rcu_read_lock(), but it is
> not clear to me at what level it needs to go.
>
> Eric, any enlightenment on this one and the next one?
>

Coming from commit b099ce2602d806deb41caaa578731848995cdb2a
from Eric Biederman (CCed).

Apparently he added RCU protection to twsk_net(), but the changelog doesn't mention it.

> > [ 87.296366] [ INFO: suspicious rcu_dereference_check() usage. ]
> > [ 87.296369] ---------------------------------------------------
> > [ 87.296373] include/net/inet_timewait_sock.h:227 invoked
> > rcu_dereference_check() without protection!
> > [ 87.296377]
> > [ 87.296377] other info that might help us debug this:
> > [ 87.296379]
> > [ 87.296382]
> > [ 87.296383] rcu_scheduler_active = 1, debug_locks = 1
> > [ 87.296386] no locks held by gwibber-service/4803.
> > [ 87.296389]
> > [ 87.296390] stack backtrace:
> > [ 87.296395] Pid: 4803, comm: gwibber-service Not tainted 2.6.34-rc5 #18
> > [ 87.296398] Call Trace:
> > [ 87.296411] [<ffffffff81067fc2>] lockdep_rcu_dereference+0x9d/0xa5
> > [ 87.296419] [<ffffffff813733d3>] twsk_net+0x4f/0x57
> > [ 87.296424] [<ffffffff813737f3>] __inet_twsk_hashdance+0x50/0x158
> > [ 87.296431] [<ffffffff81389239>] tcp_time_wait+0x1c1/0x24b
> > [ 87.296437] [<ffffffff8137c417>] tcp_fin+0x83/0x162
> > [ 87.296443] [<ffffffff8137cda7>] tcp_data_queue+0x1ff/0xa1e
> > [ 87.296450] [<ffffffff810495c6>] ? mod_timer+0x1e/0x20
> > [ 87.296456] [<ffffffff813809e3>] tcp_rcv_state_process+0x89d/0x8f2
> > [ 87.296463] [<ffffffff8133ca0b>] ? release_sock+0x30/0x10b
> > [ 87.296468] [<ffffffff81386df2>] tcp_v4_do_rcv+0x2de/0x33f
> > [ 87.296475] [<ffffffff8133ca5d>] release_sock+0x82/0x10b
> > [ 87.296481] [<ffffffff81376ef5>] tcp_close+0x1b5/0x37e
> > [ 87.296487] [<ffffffff81395437>] inet_release+0x50/0x57
> > [ 87.296493] [<ffffffff8133a134>] sock_release+0x1a/0x66
> > [ 87.296498] [<ffffffff8133a1a2>] sock_close+0x22/0x26
> > [ 87.296505] [<ffffffff810eb003>] __fput+0x120/0x1cd
> > [ 87.296510] [<ffffffff810eb0c5>] fput+0x15/0x17
> > [ 87.296516] [<ffffffff810e7f3d>] filp_close+0x63/0x6d
> > [ 87.296521] [<ffffffff810e801e>] sys_close+0xd7/0x111
> > [ 87.296528] [<ffffffff81002b6b>] system_call_fastpath+0x16/0x1b
>
> commit d3b8ba1bde9afb7d50cf0712f9d95317ea66c06f
> Author: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
> Date: Wed Apr 21 14:04:56 2010 -0700
>
> sched: protect __sched_setscheduler() access to cgroups
>
> A given task's cgroups structures must remain while that task is running
> due to reference counting, so this is presumably a false positive.
>
> Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
>
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 14c44ec..1d43c1a 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -4575,9 +4575,11 @@ recheck:
> * Do not allow realtime tasks into groups that have no runtime
> * assigned.
> */
> + rcu_read_lock();
> if (rt_bandwidth_enabled() && rt_policy(policy) &&
> task_group(p)->rt_bandwidth.rt_runtime == 0)
> return -EPERM;
> + rcu_read_unlock();
> #endif
>
> retval = security_task_setscheduler(p, policy, param);
>

