From: Ian Munsie
Excerpts from Li Zefan's message of Wed Jul 28 12:55:54 +1000 2010:
> > @@ -1112,6 +1110,7 @@ tracing_generic_entry_update(struct trace_entry *entry, unsigned long flags,
> >  {
> >  	struct task_struct *tsk = current;
> >  
> > +	tracing_record_cmdline(tsk);
>
> Now this function is called every time a tracepoint is triggered, so
> did you run some benchmarks to see whether the performance is improved
> or made worse?

Admittedly, when I posted the patch I had not done that. For the benchmark
below I isolated the trace_sched_switch tracepoint from the
context_switch function (since it is called often) into its own function
(tp_benchmark), on which I could then run the ftrace function profiler
while the tracepoint was activated through debugfs.

On my test system there is a performance hit for an active event of
~0.233 us per event (which I have now reduced to ~0.127 us by inlining
tracing_record_cmdline and trace_save_cmdline). At least that cost is now
paid only for active events, as opposed to on every single context switch
as before.

Before:
Function            Hit    Time           Avg         s^2
--------            ---    ----           ---         ---
.tp_benchmark      1494    2699.670 us    1.807 us    0.536 us
.tp_benchmark       212     357.546 us    1.686 us    0.363 us
.tp_benchmark       215     389.984 us    1.813 us    0.404 us
.tp_benchmark       649    1116.156 us    1.719 us    0.626 us
.tp_benchmark       273     483.530 us    1.771 us    0.350 us
.tp_benchmark       333     599.600 us    1.800 us    0.378 us
.tp_benchmark       203     355.038 us    1.748 us    0.351 us
.tp_benchmark       270     473.222 us    1.752 us    0.360 us

After existing patch:
Function            Hit    Time           Avg         s^2
--------            ---    ----           ---         ---
.tp_benchmark      1427    2815.906 us    1.973 us    0.623 us
.tp_benchmark       358     645.550 us    1.803 us    0.240 us
.tp_benchmark       437     867.762 us    1.985 us    0.684 us
.tp_benchmark       701    1445.618 us    2.062 us    0.906 us
.tp_benchmark       121     257.166 us    2.125 us    0.949 us
.tp_benchmark       162     329.536 us    2.034 us    0.671 us
.tp_benchmark       216     448.420 us    2.076 us    0.754 us
.tp_benchmark       238     452.244 us    1.900 us    0.384 us

With inlining:
Function            Hit    Time           Avg         s^2
--------            ---    ----           ---         ---
.tp_benchmark      1478    2834.292 us    1.917 us    0.451 us
.tp_benchmark       316     583.166 us    1.845 us    0.227 us
.tp_benchmark       160     312.752 us    1.954 us    0.302 us
.tp_benchmark       687    1251.652 us    1.821 us    0.445 us
.tp_benchmark       177     352.310 us    1.990 us    0.451 us
.tp_benchmark       324     603.848 us    1.863 us    0.239 us
.tp_benchmark       150     284.444 us    1.896 us    0.343 us
.tp_benchmark       339     617.716 us    1.822 us    0.215 us
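
For reference, the inlining mentioned above amounts to little more than the
following sketch (based on the 2.6.35-era helpers in kernel/trace/trace.c;
the exact set of checks may differ, and this is not the literal patch):

static inline void tracing_record_cmdline(struct task_struct *tsk)
{
	/* the cheap early-out checks stay inline at the call site; only
	 * the slow path that actually saves the comm takes the lock */
	if (atomic_read(&trace_record_cmdline_disabled) ||
	    !tracer_enabled || !tracing_is_on())
		return;

	trace_save_cmdline(tsk);
}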


> Another problem with this patch is that tracing_generic_entry_update() is also
> called by perf, but cmdline recording is not needed in perf.

That's a good point - I could move the call into
trace_buffer_lock_reserve so that perf does not incur the unneeded
overhead. Actually, there's probably no reason I couldn't put it in
__trace_buffer_unlock_commit instead, which would also avoid the overhead
when the event has been filtered out.
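
Roughly, that second idea would look like the sketch below (2.6.35-era
function names from memory, existing body abridged; treat it as an
illustration rather than a final patch):

static void __trace_buffer_unlock_commit(struct ring_buffer *buffer,
					 struct ring_buffer_event *event,
					 unsigned long flags, int pc,
					 int wake)
{
	/* record the comm only when an ftrace event is actually committed,
	 * so perf users and filtered-out events never pay the cost */
	tracing_record_cmdline(current);

	ring_buffer_unlock_commit(buffer, event);
	/* ... existing stack/userstack tracing and wakeup handling ... */
}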

Anyway, what do you think? Is the extra overhead per event acceptable?
I'll go ahead and respin the patch to remove the overhead in the perf
case for the moment.

Cheers,
-Ian
From: Li Zefan
Ian Munsie wrote:
> From: Ian Munsie <imunsie(a)au1.ibm.com>
>
> Previously, when tracing was activated through debugfs, the
> probe_sched_switch and probe_sched_wakeup probes from the sched_switch
> plugin would be activated regardless of which tracing plugin (if any) was
> in use. This appears to have been a hack to use them to record the
> command lines of active processes as they were scheduled.
>
> That approach would suffer if many processes that were not generating
> events were being scheduled, as they would consume entries in the
> saved_cmdlines buffer that could otherwise have been used by
> processes that were actually generating events.
>
> It also had the problem that events could be mis-attributed - in the
> common situation of a process forking then execing a new process, the
> change of the process command would not be noticed for some time after
> the exec until the process was next scheduled.
>
> If the trace was read after the fact this would generally go unnoticed
> because at some point the process would be scheduled and the entry in
> the saved_cmdlines buffer would be updated so that the new command would
> be reported when the trace was eventually read. However, if the events
> were being read live (e.g. through trace_pipe), the events just after
> the exec and before the process was next scheduled would show the
> incorrect command (though the PID would be correct).
>
> This patch removes the sched_switch hack altogether and instead records
> the commands at a more appropriate moment - when a new trace event is
> committed onto the ftrace ring buffer. This means that the recorded
> command line is much more likely to be correct when the trace is read,
> either live or after the fact, so long as the command line still resides
> in the saved_cmdlines buffer.
>
> It is still not guaranteed to be correct in all situations. For instance,
> if the trace is read after the fact rather than live, events generated by
> a process before an exec will be attributed to the new command (in the
> example below they would be attributed to sleep rather than stealpid,
> since the entry in saved_cmdlines would have changed before the event was
> read). However, this is no different from the current situation, and the
> alternative would be to store the command line with each and every event.
>
> terminal 1: grep '\-12345' /sys/kernel/debug/tracing/trace_pipe
> terminal 2: ./stealpid 12345 `which sleep` 0.1
>
> Before:
> stealpid-12345 [003] 86.001826: sys_clone -> 0x0
> stealpid-12345 [003] 86.002013: compat_sys_execve(ufilename: ffaaabef, argv: ffaaa7ec, envp: ffaaa7f8)
> stealpid-12345 [002] 86.002292: sys_restart_syscall -> 0x0
> stealpid-12345 [002] 86.002336: sys_brk(brk: 0)
> stealpid-12345 [002] 86.002338: sys_brk -> 0x1007a000
> ...
> stealpid-12345 [002] 86.002582: sys_mmap(addr: 0, len: 1000, prot: 3, flags: 22, fd: ffffffff, offset: 0)
> stealpid-12345 [002] 86.002586: sys_mmap -> 0xf7c21000
> sleep-12345 [002] 86.002771: sys_mprotect(start: ffe8000, len: 4000, prot: 1)
> sleep-12345 [002] 86.002780: sys_mprotect -> 0x0
> ...
>
> After:
> stealpid-12345 [003] 1368.823626: sys_clone -> 0x0
> stealpid-12345 [003] 1368.823820: compat_sys_execve(ufilename: fffa6bef, argv: fffa5afc, envp: fffa5b08)
> sleep-12345 [002] 1368.824125: sys_restart_syscall -> 0x0
> sleep-12345 [002] 1368.824173: sys_brk(brk: 0)
> sleep-12345 [002] 1368.824175: sys_brk -> 0x104ae000
> ...
>
> Signed-off-by: Ian Munsie <imunsie(a)au1.ibm.com>

I've tested your patch using lmbench(ctx):

Context switching - times in microseconds - smaller is better
-------------------------------------------------------------------------
Host      OS             2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
                         ctxsw  ctxsw  ctxsw  ctxsw  ctxsw   ctxsw   ctxsw
--------- ------------- ------ ------ ------ ------ ------ ------- -------
(trace-off)
          Linux 2.6.35- 2.1300 2.2100 2.0800 2.5900 2.1400 2.59000 2.19000
          Linux 2.6.35- 2.1400 2.2000 2.0800 2.6000 2.0900 2.56000 2.15000

(all events on)
          Linux 2.6.35- 2.8000 2.9600 2.7200 3.2500 2.8200 3.24000 2.98000
          Linux 2.6.35- 2.7100 2.6900 2.7300 3.2200 2.8500 3.25000 2.79000

(all events on without cmdline-recording)
          Linux 2.6.35- 2.6100 2.6900 2.5800 3.0300 2.5800 3.04000 2.67000
          Linux 2.6.35- 2.5800 2.5900 2.5600 3.0300 2.6600 3.04000 2.61000

(your patch applied)
          Linux 2.6.35- 2.7100 2.8000 2.7200 3.2100 2.8400 3.24000 2.82000
          Linux 2.6.35- 2.6600 2.8400 2.6900 3.1900 2.7600 3.27000 2.78000

So with your patch applied, the performance is still worse than just disabling
cmdline-recording.

The performance hit may be even larger with other benchmarks.

I'd suggest another approach: add a tracepoint in set_task_comm() to
record the cmdline, which is what perf does.
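
A minimal sketch of that suggestion, with the set_task_comm() body abridged
and the tracepoint name purely hypothetical:

void set_task_comm(struct task_struct *tsk, char *buf)
{
	task_lock(tsk);
	strlcpy(tsk->comm, buf, sizeof(tsk->comm));
	task_unlock(tsk);

	/* hypothetical tracepoint: fires whenever a task's comm changes,
	 * so the tracer can record the cmdline at exec time instead of
	 * on every context switch */
	trace_task_comm_set(tsk);

	/* perf already hooks comm changes here */
	perf_event_comm(tsk);
}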

> ---
>
> Changes since v1 addressing feedback from Li Zefan:
> * Inline trace_save_cmdline and tracing_record_cmdline for a marginal speed
> gain when recording command lines.
> * Move call to tracing_record_cmdline from tracing_generic_entry_update to
> __trace_buffer_unlock_commit to avoid the overhead when using perf or if the
> event was filtered out.
>
> kernel/trace/trace.c                 |    7 +++----
> kernel/trace/trace_events.c          |   11 -----------
> kernel/trace/trace_functions.c       |    2 --
> kernel/trace/trace_functions_graph.c |    2 --
> kernel/trace/trace_sched_switch.c    |   10 ----------
> 5 files changed, 3 insertions(+), 29 deletions(-)
From: Ian Munsie
Excerpts from Frederic Weisbecker's message of Thu Jul 29 12:50:41 +1000 2010:
> So, in fact we can't do this. There is a strong reason we maintain the
> cmdline resolution on sched switch rather than at tracing time:
> performance and scalability.
>
> Look at what tracing_record_cmdline() does:
>
> - it's one more call
> - it checks a lot of conditions
> - it takes a spinlock (it gives up if the lock is already taken, but that's
> still bad for the cache in a tracing path)
> - it dereferences a shared hashlist
> ...
>
>
> Currently that is done at sched switch time, which is already quite often.
> Now imagine you turn on the function tracer: this is going to happen
> for _every_ function called in the kernel. There is going to be a lot
> of cache ping-pong between CPUs due to the spinlock, for example; paying
> that for every function is clearly unacceptable (and it would be twice
> per function with the function graph tracer).
>
> And on top of that there are still the hashlist dereference, the checks,
> etc...
>
> It's not only the function tracers. The lock events will also show you very
> bad results. Same if you enable all the others together.

My first thought when reading this was to make the saved_cmdlines and
related data per-CPU to reduce a lot of the cache ping-pong, but I'm
happy to take the alternative approach you suggest.
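
Just to illustrate that first thought (field names borrowed from trace.c,
sizes and layout assumed; nothing here was actually implemented):

/* one cmdline cache per CPU, so tracers on different CPUs never contend
 * on a shared spinlock or bounce the same cache lines */
struct saved_cmdlines_pcpu {
	unsigned	map_pid_to_cmdline[PID_MAX_DEFAULT + 1];
	char		saved_cmdlines[SAVED_CMDLINES][TASK_COMM_LEN];
	unsigned	cmdline_idx;
};
static DEFINE_PER_CPU(struct saved_cmdlines_pcpu, saved_cmdlines_pcpu);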

> Better to have a new call to tracing_record_cmdline() made from the fork
> and exec tracepoints to solve this problem.
> But still, that only solves the lazy update and not all the problems
> you've listed in this changelog.

Still, it would scratch my itch so I'm happy to take that approach.
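
As a rough sketch of how that could be wired up (the sched_process_fork
tracepoint and the 2.6.35-era probe signature are assumed here; an
equivalent exec-time hook would still need to be added):

/* record the parent's and child's comms as soon as the child is forked,
 * rather than waiting for them to be scheduled */
static void probe_sched_process_fork(void *ignore,
				     struct task_struct *parent,
				     struct task_struct *child)
{
	tracing_record_cmdline(parent);
	tracing_record_cmdline(child);
}

static int __init init_cmdline_probes(void)
{
	/* a similar probe on an exec tracepoint would catch comm changes */
	return register_trace_sched_process_fork(probe_sched_process_fork, NULL);
}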

> In fact cmdline tracking would grow in complexity in the kernel if we had
> to do it correctly. Ideally:
>
> * dump every task when we start tracing, and map their cmdlines
> * do the pid mapping per time interval. Keys in the hlist must be
> pid/start_time:end_time, not only pid anymore.
> * map new cmdlines from fork and exec events. On exec, we must close the
> previous entry for our pid by noting its end_time, and open a new entry
> with a new start_time.
>
>
> And this would be way more efficient than the sched_switch based
> thing we have - more efficient in terms of both performance and
> per-timeslice granularity.
>
> That's the kind of thing we'd better do from userspace, for tools like
> the perf tools or trace-cmd. And the perf tools already do this, partially
> (no time granularity yet).

I'd tend to agree; I find the in-kernel stuff most useful for watching
events on a live system. My itch was that I couldn't simply grep
trace_pipe for a command that I was about to run and reliably see all
of its events.

> But still, in-kernel cmdline resolution is nice to have in some
> circumstances, especially for ascii tracing and dumps. But
> for that I think we shouldn't go any further and should keep this
> basic, non-perfect cmdline tracking, which is sufficient for most uses.
>
> In fact we could fix it by dumping the tasks' comms from the tasklist
> and hooking the fork and exec events, rather than sched switch.
> It would be better for performance, and would be appreciated.

I guess the compromise here would be that the saved_cmdlines buffer
would need to grow to hold all the command lines for every process that
has been running since the trace started - the current limit of 128
commands wouldn't cut it on most systems. Then again, there's no reason
not to bring back Li's patch to provide the option to disable recording
the command lines for people who don't want it.

Hmmm, I suppose we could hook into process termination, check whether
any events were associated with the task, and free up its entry if not...

Cheers,
-Ian
From: Ian Munsie
Excerpts from Frederic Weisbecker's message of Thu Jul 29 11:58:34 +1000 2010:
> In fact I don't really understand what this tp_benchmark function is - when and
> where is it called?

The idea was just to replace a single tracepoint with a call to a
separate function (whose sole action was to call the tracepoint) so that
the ftrace function profiler could profile that function and provide
average timing data for the function call + tracepoint, i.e. something like:

noinline void tp_benchmark(...)
{
	trace_...
}

Mostly just laziness on my part really.

> But anyway, I'd rather comment on the idea on the patch itself.

Cheers,
-Ian