[lttng-dev] basic questions on LTTng
michel.dagenais at polymtl.ca
Fri Mar 27 15:15:30 EDT 2015
> - is it possible to use LTTng from Xenomai RT threads? (old mails on the
> Xenomai list suggest so, but it is unclear if special
> precautions/incantations/patches are needed)
I have not looked into this. There should not be a big problem, since LTTng is fairly self-standing in order to minimize its impact on, and reliance upon, the rest of the kernel. Mathieu can probably tell you more about this.
> - do tracepoints use ANY linux system services (system calls in the UST
> context, and kernel API in the kernel context)? (background - if that is the
> case with Xenomai, an RT thread could be relaxed; with RTAI things could go
> really haywire as RT threads are running in something similar to an IRQ
In user space, the interactions happen at sub-buffer switch time. If I remember correctly, a few bytes are written to a pipe connected to the consumer daemon. There is also a mode where nothing is done by the traced application and the consumer daemon polls the sub-buffer metadata to detect when a sub-buffer is full. In kernel mode, tracepoints can fire even in interrupt and NMI contexts.
> - are there any precautions to take when using an RT-PREEMPT kernel?
We have used it with PREEMPT-RT. I do not believe that anything special is required.
> - is there a ballpark figure for the cost (roundabout ns on typical hardware)
> for a dormant and a fired tracepoint?
We essentially cannot measure the cost when dormant. When tracing to a file, the cost can be around 200 ns. In between, you can have a conditional tracepoint (faster than an unconditional tracepoint when the condition is false), or trace to memory without saving to disk. Tracing to memory is useful to limit the overhead while keeping the option to save to disk a snapshot of the recent events whenever a problem (e.g. a latency spike) is encountered.
> - related - does it make sense to conditionally compile in tracepoints, or
> are they so cheap they could just as well stay in production code?
Definitely, you can keep them in production code. On systems with very limited memory, though, the overhead of the information stored for each tracepoint may be a concern.
> - in our scenario, we'd like to find sources of delay which could vary
> according to arguments (e.g. the math library function I mentioned, which
> runs exceedingly long for certain argument values). Is there a way to
> trigger on the time delta between tracepoints, like as a filter? would the
> Python API help me here?
Julien has been working on the latency tracker, which can be an excellent tool for such situations. You measure the latency of a number of operations and, when a problem is encountered, you collect much more information, such as stack dumps. You therefore have very low overhead when no latency problem is present, and a lot of detail when there is one.
> A note on demons and tracing: one demon we use does the classic double fork
> to go into the background, and in that case the
> "LD_PRELOAD=liblttng-ust-fork.so" support seems not to work. Not an issue, I
> patched it so it can stay in the foreground which takes care of the problem.
> It's this piece I disabled with an option:
Julien or Mathieu should be able to help you there.