[lttng-dev] Lacking instrumentation for threaded IRQs

Julien Desfossez ju at klipix.org
Tue Oct 2 10:23:55 EDT 2018


On 10/01/2018 01:25 PM, Geneviève Bastien wrote:
> Hi,
> 
> 
> I was writing a tutorial on distributed critical path in Trace Compass.
> I was tracing a client (my laptop on wifi) and a server. And to my
> surprise, it didn't work! After further investigation, here's the issue:
> 
> 
> The critical path assumes that a network packet is received in a softirq
> that wakes up the process the packet is meant for. This is right for
> Ethernet, but not for Wi-Fi: Wi-Fi packets are handled by a threaded
> IRQ. Here's the flow:
> 
> 
> 1- There's an IRQ whose handler returns 2 (IRQ_WAKE_THREAD)
> 
> 2- The 2 means the IRQ thread is woken up _after_ the irq_handler_exit
> tracepoint is hit
> 
> 3- The IRQ thread is woken up, scheduled in and is running
> 
> 4- The packet is received during that time
> 
> 5- The packet reception is hence not on the critical path!
> 
> 
> The issue we have here with the critical path for network will also be
> an issue for all threaded IRQs, so additional instrumentation is needed
> to take those into account automatically in analyses.
> 
> Instrumentation is needed:
> 
> 1- To associate the wakeup of the IRQ thread to the actual IRQ. The way
> it is now, the wakeup happens in the context of whatever else was
> running before the IRQ handler.
> 
> 2- To make it obvious that the IRQ thread is really an IRQ thread, so
> that whatever runs there is considered in the context of a threaded IRQ
> and not as a simple running process.
> 
> 
> Here are some options for each, not mutually exclusive:
> 
> For 1-
> 
> * Instrument the kernel: Add tracepoints at the beginning and end of
> do_IRQ to show the full IRQ duration, not just the handler's; all events
> happening during that time can then be associated with the IRQ.
> 
> * Instrument the kernel: Add tracepoints before and after waking
> processes if the return value of the handler is IRQ_WAKE_THREAD. Those
> tracepoints would be hit only if a process needs to be woken, and would
> add no overhead otherwise.
> 
> * At analysis time: Look at the return value of irq_handler_exit and, if
> it is 2, wait for the next waking on that CPU and associate it with the
> last IRQ. Could there be more than one process to wake up? Looking at
> the kernel code, it seems not. Still, this approach is more error-prone.
> 
> 
> For 2-
> 
> * Instrument the kernel: Add tracepoints at the beginning and end of the
> irq_thread_fn function. Those events would indicate the context of a
> threaded IRQ, so all events inside can be considered the result of an IRQ.
> 
> * Instrument LTTng: Add a field to the lttng_statedump_interrupt
> tracepoint recording the thread handling this IRQ, if available. That
> would give this thread a special state: instead of simply running,
> whatever happens there would be considered the result of an IRQ. But
> only LTTng traces would benefit from this, and only if there's a
> statedump (so not in snapshot mode). It's sad... but it would be a quick
> win for now.
> 
> 
> Here's a pad showing those possible options:
> https://hebdo.framapad.org/p/threadedirqlttnglinux8gjkh51yhfha
> 
> What do you think? I'm mostly interested to know if there's a case to
> add instrumentation to the kernel and where.

For #1, I think adding the do_IRQ begin/end instrumentation would be very
valuable. We did it with a kretprobe in the latency-tracker, but I think
this deserves a real tracepoint. In your example trace, the do_IRQ event
shows that the whole interruption lasted 5.2 microseconds, whereas the
interrupt handler only lasted 482 ns, so without this event we are
clearly missing long periods of interruption.

Using the "ret = 2" heuristic would work as a temporary fix, but there
are some corner cases where it could lead us to wrong conclusions (the
process to be woken up is exiting somehow, and the interrupt arrived
during a syscall that needed to wake another process), so I think the
real fix is the do_IRQ instrumentation.
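To make the do_IRQ fix concrete, here is a rough sketch of how attribution could work with begin/end events around the whole interrupt. The event names do_irq_entry/do_irq_exit are assumptions (no such tracepoints exist today), and the event dicts are invented; the point is that any wakeup emitted inside the window is caused by the interrupt, with no guessing on return values:

```python
# Sketch: with hypothetical do_irq_entry/do_irq_exit events bracketing
# the whole interrupt, any sched_waking on that CPU inside the window
# can be attributed to the IRQ by construction.

def attribute_with_do_irq(events):
    in_irq = {}  # cpu -> irq currently being serviced
    pairs = []
    for ev in events:
        if ev["name"] == "do_irq_entry":
            in_irq[ev["cpu"]] = ev["irq"]
        elif ev["name"] == "do_irq_exit":
            in_irq.pop(ev["cpu"], None)
        elif ev["name"] == "sched_waking" and ev["cpu"] in in_irq:
            pairs.append((in_irq[ev["cpu"]], ev["tid"]))
    return pairs

trace = [
    {"name": "do_irq_entry", "cpu": 0, "irq": 17},
    {"name": "irq_handler_entry", "cpu": 0, "irq": 17},
    {"name": "irq_handler_exit", "cpu": 0, "irq": 17, "ret": 2},
    {"name": "sched_waking", "cpu": 0, "tid": 123},  # IRQ thread wakeup
    {"name": "do_irq_exit", "cpu": 0, "irq": 17},
    # a wakeup outside the window is not blamed on the IRQ:
    {"name": "sched_waking", "cpu": 0, "tid": 999},
]
print(attribute_with_do_irq(trace))  # -> [(17, 123)]
```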

For #2, I am not exactly sure why we need special events for those kernel
threads. With the fix for #1, we know they are interrupt-related threads,
but they are not interrupts in the sense of "doing work while the current
task is interrupted". To me, everything such a thread does is related to
the previous source of wakeup, and I would imagine every chunk of work it
does falls between an irq_thread_fn entry/return pair.
I am not against adding this information (or the statedump approach, if
we are sure there is a 1-to-1 mapping), but I would like to understand
more precisely what it would add to our understanding of the trace.
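As a rough illustration of that last point, if hypothetical irq_thread_fn entry/return tracepoints existed (the names below are assumptions, as is the event layout), an analysis could classify everything the IRQ thread does in between as interrupt work with nothing more than a per-thread flag:

```python
# Sketch: time spent between hypothetical irq_thread_fn_entry and
# irq_thread_fn_return events is counted as threaded-IRQ work for that
# thread, rather than ordinary process execution.

def threaded_irq_time(events):
    """Return {tid: nanoseconds spent in threaded-IRQ context}."""
    entry = {}   # tid -> timestamp of the open irq_thread_fn_entry
    totals = {}
    for ev in events:
        if ev["name"] == "irq_thread_fn_entry":
            entry[ev["tid"]] = ev["ts"]
        elif ev["name"] == "irq_thread_fn_return" and ev["tid"] in entry:
            start = entry.pop(ev["tid"])
            totals[ev["tid"]] = totals.get(ev["tid"], 0) + ev["ts"] - start
    return totals

trace = [
    {"name": "irq_thread_fn_entry",  "tid": 123, "ts": 1000},
    {"name": "irq_thread_fn_return", "tid": 123, "ts": 6200},
]
print(threaded_irq_time(trace))  # -> {123: 5200}
```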

Thanks,

Julien
