[ltt-dev] [patch] add tracepoints to trace activate/deactivate task
Mathieu Desnoyers
compudj at krystal.dyndns.org
Tue Dec 9 17:10:12 EST 2008
* Jason Baron (jbaron at redhat.com) wrote:
> On Mon, Dec 08, 2008 at 11:42:49PM +0100, Peter Zijlstra wrote:
> > On Mon, 2008-12-08 at 17:38 -0500, Jason Baron wrote:
> > > On Mon, Dec 08, 2008 at 08:54:10PM +0100, Peter Zijlstra wrote:
> > > > On Mon, 2008-12-08 at 14:49 -0500, Jason Baron wrote:
> > > > > hi,
> > > > >
> > > > > I thought it would be useful to track when a task is
> > > > > 'activated/deactivated'. This case is different from wakeup/wait, in that
> > > > > a task can be activated and deactivated when the scheduler re-balances
> > > > > tasks, the allowable cpuset changes, or cpu hotplug occurs. Using these
> > > > > patches I can more precisely figure out when a task becomes runnable and
> > > > > why.
> > > >
> > > > Then I still don't agree with it, because it does not expose the event
> > > > that did the change.
> > > >
> > > > If you want the cpu allowed mask, put a tracepoint there. If you want
> > > > migrate information (didn't we have that?) then put one there, etc.
> > > >
> > >
> > > well, with a stap backtrace I can figure out the event; otherwise I'm
> > > sprinkling 14 more trace events in the scheduler... I can go down that
> > > path if people think it's better?
> >
> > What events are you interested in? Some of them are just straight
> > syscall things, like nice.
> >
> > But yes, I'd rather you do the events - that's what tracepoints are
> > all about, marking individual events, not some fugly hook for stap.
>
> well, I think that the activate/deactivate combination gives you a lot
> of interesting statistics. You could figure out how long tasks wait on
> the runqueue, when and how tasks are migrated between runqueues, queue
> lengths, average queue lengths, and large queue lengths. These statistics
> could help diagnose performance problems.
>
> For example, I just wrote the systemtap script below, which outputs the
> distribution of queue lengths per-cpu on my system. I'm sure Frank could
> improve the stap code, but below is the script and the output.
>
Quoting yourself earlier in this thread:
"I thought it would be useful to track when a task is
'activated/deactivated'. This case is different from wakeup/wait, in
that task can be activated and deactivated, when the scheduler
re-balances tasks, the allowable cpuset changes, or cpu hotplug occurs.
Using these patches I can more precisely figure out when a task becomes
runnable and why."
Peter Zijlstra objected that the key events we would like to see traced
are more detailed than just "activate/deactivate" state, e.g. an event
for wakeup, one for wait, one for re-balance, one for cpuset change, one
for hotplug. Doing this will allow other tracers to do other useful
stuff with the information.
So trying to argue that "activate/deactivate" is "good" is missing the
point here. Yes, we need that information, but in fact we need _more
precise_ information, which is a superset of those "activate/deactivate"
events.
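[Editorial illustration of the superset argument, as a hedged sketch: if each fine-grained scheduler event has its own tracepoint, a tracer can trivially fold them back into the coarse activate/deactivate view, while the reverse mapping is impossible. The event names below are hypothetical, not actual kernel tracepoints.]

```python
# Hypothetical fine-grained scheduler events folded back into the coarse
# activate/deactivate view. Event names are illustrative only.
ACTIVATE_EVENTS = {"wakeup", "migrate_in", "cpuset_adjust_in", "hotplug_in"}
DEACTIVATE_EVENTS = {"wait", "migrate_out", "cpuset_adjust_out", "hotplug_out"}

def coarsen(events):
    """Map a stream of (event_name, pid) pairs down to
    ('activate'|'deactivate', pid) pairs."""
    out = []
    for name, pid in events:
        if name in ACTIVATE_EVENTS:
            out.append(("activate", pid))
        elif name in DEACTIVATE_EVENTS:
            out.append(("deactivate", pid))
    return out

trace = [("wakeup", 42), ("migrate_out", 42), ("migrate_in", 42), ("wait", 42)]
print(coarsen(trace))
```

A tracer that only needs the coarse view loses nothing, but a tracer that needs to know *why* a task became runnable cannot recover that from activate/deactivate alone.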
Peter, am I understanding your point correctly?
Mathieu
> thanks,
>
> -Jason
>
> sample output (during a kernel compile). Each line shows the cpu number,
> the "length" of the queue, and the "number" of times that length was
> observed. You'll notice that queue lengths are mostly between 0-3, but
> there are definitely some larger lengths, including a length of 13. (The
> length of -1 appears because counting starts mid-stream, so a deactivate
> can be observed before its matching activate.)
>
> cpu: 0 length: -1 number: 5979
> cpu: 0 length: 0 number: 12462
> cpu: 0 length: 1 number: 13139
> cpu: 0 length: 2 number: 12744
> cpu: 0 length: 3 number: 9047
> cpu: 0 length: 4 number: 3965
> cpu: 0 length: 5 number: 1278
> cpu: 0 length: 6 number: 378
> cpu: 0 length: 7 number: 156
> cpu: 0 length: 8 number: 80
> cpu: 0 length: 9 number: 42
> cpu: 0 length: 10 number: 15
> cpu: 0 length: 11 number: 4
> cpu: 0 length: 12 number: 1
> cpu: 1 length: 1 number: 9260
> cpu: 1 length: 0 number: 4162
> cpu: 1 length: 2 number: 10652
> cpu: 1 length: 3 number: 10288
> cpu: 1 length: 4 number: 6645
> cpu: 1 length: 5 number: 2472
> cpu: 1 length: 6 number: 710
> cpu: 1 length: 7 number: 192
> cpu: 1 length: 8 number: 56
> cpu: 1 length: 9 number: 18
> cpu: 1 length: 10 number: 6
> cpu: 1 length: 11 number: 3
> cpu: 1 length: 12 number: 2
> cpu: 1 length: 13 number: 1
> cpu: 2 length: 1 number: 8897
> cpu: 2 length: 0 number: 4104
> cpu: 2 length: 2 number: 10322
> cpu: 2 length: 3 number: 9984
> cpu: 2 length: 4 number: 6256
> cpu: 2 length: 5 number: 2293
> cpu: 2 length: 6 number: 656
> cpu: 2 length: 7 number: 213
> cpu: 2 length: 8 number: 77
> cpu: 2 length: 9 number: 40
> cpu: 2 length: 10 number: 17
> cpu: 2 length: 11 number: 6
> cpu: 2 length: 12 number: 1
> cpu: 3 length: 1 number: 9023
> cpu: 3 length: 0 number: 4089
> cpu: 3 length: 2 number: 10605
> cpu: 3 length: 3 number: 10125
> cpu: 3 length: 4 number: 6196
> cpu: 3 length: 5 number: 2298
> cpu: 3 length: 6 number: 746
> cpu: 3 length: 7 number: 271
> cpu: 3 length: 8 number: 117
> cpu: 3 length: 9 number: 53
> cpu: 3 length: 10 number: 22
> cpu: 3 length: 11 number: 7
> cpu: 3 length: 12 number: 2
>
>
>
> #!/usr/bin/env stap
> #
> # Histogram of per-cpu runqueue lengths, driven by the
> # activate/deactivate markers ($arg3 is the cpu number).
>
> global cpu_queue_distribution
> global current_queue_length
>
> /* process added to the runqueue: really running or well prepared */
> probe kernel.mark("kernel_activate_task") {
>         current_queue_length[$arg3]++
>         cpu_queue_distribution[$arg3, current_queue_length[$arg3]]++
> }
>
> /* process removed from the runqueue: in a wait queue or other state */
> probe kernel.mark("kernel_deactivate_task") {
>         current_queue_length[$arg3]--
>         cpu_queue_distribution[$arg3, current_queue_length[$arg3]]++
> }
>
> probe end {
>         foreach ([cpu+, length] in cpu_queue_distribution) {
>                 printf("cpu: %d length: %d number: %d\n",
>                        cpu, length, cpu_queue_distribution[cpu, length])
>         }
> }
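[For readers without SystemTap handy, the same counting logic can be sketched in Python over a recorded event stream. This is an illustrative analog added for this archive, not part of the original script.]

```python
from collections import defaultdict

def queue_length_histogram(events):
    """events: iterable of (kind, cpu) where kind is 'activate' or
    'deactivate'. Mirrors the stap script above: maintain a per-cpu
    running queue length and count each length value observed."""
    current = defaultdict(int)   # cpu -> current queue length
    dist = defaultdict(int)      # (cpu, length) -> times observed
    for kind, cpu in events:
        current[cpu] += 1 if kind == "activate" else -1
        dist[(cpu, current[cpu])] += 1
    return dict(dist)

events = [("activate", 0), ("activate", 0), ("deactivate", 0), ("activate", 1)]
print(queue_length_histogram(events))
# {(0, 1): 2, (0, 2): 1, (1, 1): 1}
```

As in the stap version, negative lengths can appear when a deactivate is observed before its matching activate, since counting starts from zero mid-stream.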
> _______________________________________________
> ltt-dev mailing list
> ltt-dev at lists.casi.polymtl.ca
> http://lists.casi.polymtl.ca/cgi-bin/mailman/listinfo/ltt-dev
>
--
Mathieu Desnoyers
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F BA06 3F25 A8FE 3BAE 9A68