[lttng-dev] LTTng UST Benchmarks
Kienan Stewart
kstewart at efficios.com
Thu Apr 25 13:53:27 EDT 2024
Hi Aditya,
On 4/24/24 11:25 AM, Aditya Kurdunkar via lttng-dev wrote:
> Hello everyone, I have a use case where I am working on
> enabling LTTng on an embedded ARM device running the OpenBMC Linux
> distribution. I have enabled the LTTng Yocto recipe and I am able to
> trace my code. The one thing I am concerned about is the performance
> overhead. Although the documentation mentions that LTTng has the lowest
> overhead amongst all the available solutions, I am concerned about the
> overhead of LTTng-UST in comparison to
> other available tracers/profilers. I have used the benchmarking setup
> from lttng-ust/tests/benchmark
> <https://github.com/lttng/lttng-ust/tree/master/tests/benchmark> to
> benchmark the overhead of the tracepoints (on the device). The
> benchmark, please correct me if I am wrong, gives the overhead of a
> single tracepoint in your code.
This seems to be what it does.
> Although this might be fine for now, I
> was just wondering if there are any published benchmarks comparing LTTng
> with the available tracing/profiling solutions.
I don't know of any published ones that do an exhaustive comparison.
There is this one[1], which includes a comparison with some parts of
eBPF. The source for the benchmarking is also available[2].
> If not, how can I go
> about benchmarking the overhead of the applications?
>
I'm not really sure how to answer you here.
I guess the most pertinent approach for your use case is to test your
application with and without tracing to see the complete effect.
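As a rough sketch, using the standard lttng CLI (here `./myapp` stands
in for your own workload):

```
# Baseline: no LTTng session active.
time ./myapp

# Traced run: create a session, enable all UST events, time again.
lttng create bench
lttng enable-event --userspace --all
lttng start
time ./myapp
lttng stop
lttng destroy bench
```

Comparing the two wall-clock times gives the complete effect of
tracing on that workload.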
It would be good to use a dedicated system, disable CPU frequency
scaling, perform the tests repeatedly, and measure the mean, median,
and standard deviation.
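A minimal sketch of such a run, assuming a GNU userland and root
access for the governor change; `./myapp` is again a placeholder:

```
# Disable CPU frequency scaling (requires root; available governors
# vary by platform).
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > "$g"
done

# Repeat the measurement and collect wall-clock times.
for i in $(seq 1 30); do
    /usr/bin/time -f '%e' ./myapp 2>> times.txt
done

# Mean, median, and standard deviation over the samples.
sort -n times.txt | awk '{ a[NR] = $1; s += $1; ss += $1 * $1 }
    END { m = s / NR;
          printf "mean=%f median=%f stddev=%f\n",
                 m, a[int((NR + 1) / 2)], sqrt(ss / NR - m * m) }'
```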
You could pull methodological inspiration from prior publications[3],
which, while outdated in terms of software versions and hardware,
demonstrate the process of creating and comparing benchmarks.
It would also be useful to identify how your application and tracing
setup work, and to understand which parts of the system you are
interested in measuring.
For example, the startup time of tracing rapidly spawning processes will
depend on the type of buffering scheme in use, whether the tracing
infrastructure is loaded before or after forking, and so on.
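For instance, a user space channel can use per-UID or per-PID buffers
(one scheme per session); per-PID buffers are allocated at each
process start, which matters for rapidly spawning workloads:

```
# Per-UID buffers (the default for UST): shared by all processes of
# the same user, so process startup is cheaper.
lttng enable-channel --userspace --buffers-uid ch

# Alternatively (in a separate session), per-PID buffers: allocated
# for each new process, adding per-process startup cost.
lttng enable-channel --userspace --buffers-pid ch
```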
Your case might be a long-running application where you aren't
interested in startup-time performance but rather in the concrete
impact of the static instrumentation on one of your hot paths.
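In that situation it can make sense to enable only the hot-path event
rather than everything; the `myapp:hot_path` provider and event names
below are made up for illustration:

```
lttng create hotpath
# Enable just the instrumentation you want to measure, not --all.
lttng enable-event --userspace 'myapp:hot_path'
lttng start
./myapp
lttng stop
lttng destroy hotpath
```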
If you're not sure what kind of tracing setup works best in your case,
or would like us to characterize a certain aspect of the tool-set's
performance, EfficiOS[4] offers consultation and support for
instrumentation and performance in applications.
> I have come across the lttng/lttng-ust-benchmarks repository
> <https://github.com/lttng/lttng-ust-benchmarks>, which has no
> documentation on how to run it, apart from one commit message on how to
> run the benchmark script.
>
To run those benchmarks when you have babeltrace2, lttng-tools, urcu,
lttng-ust, and optionally lttng-modules installed:
```
$ make
$ python3 ./benchmark.py
```
This should produce a file, `benchmarks.json`.
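For a quick look at the results (assuming a standard Python 3
installation):

```
$ python3 -m json.tool benchmarks.json
```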
You can also inspect how the CI job runs it:
https://ci.lttng.org/view/LTTng-ust/job/lttng-ust-benchmarks_master_linuxbuild/
> Any help is really appreciated. Thank you.
>
> Regards,
> Aditya
[1]: https://tracingsummit.org/ts/2022/files/Tracing_Summit_2022-LTTng_Beyond_Ring-Buffer_Based_Tracing_Jeremie_Galarneau_.pdf
[2]: https://github.com/jgalar/LinuxCon2022-Benchmarks
[3]: https://www.dorsal.polymtl.ca/files/publications/desnoyers.pdf
[4]: https://www.efficios.com/contact/
thanks,
kienan