[lttng-dev] what will happen during tracing if getcpu() doesn't return correct cpu number occasionally?
Mathieu Desnoyers
mathieu.desnoyers at efficios.com
Wed Jul 7 09:46:05 EDT 2021
----- On Jul 6, 2021, at 10:37 PM, lttng-dev <lttng-dev at lists.lttng.org> wrote:
> Hi,
> I know the lttng-ust tracepoint path uses a per-CPU lockless ring buffer algorithm
> for high performance, so it relies on getcpu() to return the CPU number on
> which the app is running.
> Unfortunately, I am working on ARM, where the Linux kernel does not provide a vDSO
> getcpu() implementation, so a single getcpu() call takes 200ns!
> My questions are:
> 1. Do you have any advice for this?
You might want to try wiring up the "rseq" system call in user-space to provide an accurate cpu_id
field in a __rseq_abi TLS variable, which is always kept up to date by the kernel. The rseq system call
is implemented on ARM. However, the __rseq_abi TLS variable is a shared resource across libraries, and
we have not yet agreed with the glibc people on exactly how it must be shared within a process.
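For illustration, here is a minimal sketch of what registering rseq by hand and reading the CPU
number from the __rseq_abi TLS area could look like. It assumes <linux/rseq.h> from kernel >= 4.18
headers and that nothing else in the process (e.g. a recent glibc or librseq) has already registered
rseq for the thread; it is not lttng-ust code:

/* Minimal sketch: register rseq for the current thread and read the
 * CPU number from the __rseq_abi TLS area. Assumes <linux/rseq.h>
 * (kernel >= 4.18 headers) and that nothing else in the process has
 * already registered rseq for this thread (otherwise the syscall
 * fails with EBUSY). */
#define _GNU_SOURCE
#include <linux/rseq.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdio.h>

#define RSEQ_SIG	0x53053053	/* arbitrary signature, checked on unregister/abort */

static __thread struct rseq __rseq_abi __attribute__((aligned(32))) = {
	.cpu_id = RSEQ_CPU_ID_UNINITIALIZED,
};

static int rseq_register_current_thread(void)
{
	return syscall(__NR_rseq, &__rseq_abi, sizeof(__rseq_abi), 0, RSEQ_SIG);
}

static inline int read_current_cpu(void)
{
	/* The kernel updates this field whenever the thread migrates. */
	return (int) __rseq_abi.cpu_id;
}

int main(void)
{
	if (rseq_register_current_thread())
		perror("rseq registration");
	printf("running on cpu %d\n", read_current_cpu());
	return 0;
}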
> 2. If I implement a cached version of getcpu() (just like the getcpu() implementation
> before kernel 2.6.23), what will happen during tracing?
You'd have to give more details on how this "cached version" works.
> Since using the cache could speed up getcpu() calls, at the cost of a very small
> chance that the returned CPU number is out of date, I am not sure whether a "wrong"
> CPU number could make the tracing app crash.
LTTng-UST always has to expect that it can be migrated at any point between getcpu() and the writes to
per-CPU data. It therefore always relies on atomic operations when interacting with the ring buffer,
and it never assumes, for consistency, that it runs on the "right" CPU with respect to the ring buffer
data structure. As a result, you won't experience crashes or corruption even if the CPU number is
wrong once in a while, as long as it belongs to the set of "possible CPUs".
Internally, this behavior corresponds to lttng's libringbuffer "RING_BUFFER_SYNC_GLOBAL" configuration
option, which is what lttng-ust selects.
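As a rough illustration of why a stale CPU number is harmless in that mode, here is a conceptual
sketch (not lttng-ust's actual ring buffer code; the struct and function names are made up) of a
reservation path that claims space with an atomic compare-and-swap, so two writers that end up on
the same per-CPU buffer still obtain disjoint slots:

#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical per-CPU buffer descriptor, for illustration only. */
struct cpu_buffer {
	_Atomic size_t write_offset;
	/* ... backing storage, commit counters, etc. ... */
};

/*
 * "Global sync" style reservation: the slot is claimed with an atomic
 * compare-and-swap, so even if the caller picked this buffer based on
 * a stale CPU number and races with that CPU's "rightful" writer, each
 * writer still reserves a distinct, non-overlapping region.
 */
static size_t reserve_slot(struct cpu_buffer *buf, size_t len)
{
	size_t old = atomic_load_explicit(&buf->write_offset,
					  memory_order_relaxed);

	while (!atomic_compare_exchange_weak_explicit(&buf->write_offset,
			&old, old + len,
			memory_order_relaxed, memory_order_relaxed)) {
		/* "old" was reloaded by the failed CAS; retry. */
	}
	return old;	/* start offset of the reserved region */
}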
Note that the kernel tracer instead selects "RING_BUFFER_SYNC_PER_CPU", which is faster, but
requires that preemption (or migration) be disabled between the "smp_processor_id()" call and the
writes to the ring buffer's per-CPU data structures.
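For contrast, a purely illustrative sketch of the per-CPU flavour: a plain, non-atomic update, which
is only correct because preemption (and thus migration) is disabled across the whole sequence, so no
other writer can touch this CPU's buffer in the meantime:

#include <stddef.h>

/* Hypothetical per-CPU buffer using a plain counter; only valid when
 * preemption is disabled between reading the CPU number and performing
 * the update, as the kernel tracer guarantees. */
struct cpu_buffer_percpu {
	size_t write_offset;
};

static size_t reserve_slot_percpu(struct cpu_buffer_percpu *buf, size_t len)
{
	size_t old = buf->write_offset;	/* no atomic operation needed */

	buf->write_offset = old + len;
	return old;
}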
Thanks,
Mathieu
> Thanks
> zhenyu.ren
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com