[lttng-dev] lttng_ust_cyg_profile
Alok Priyadarshi
alokpr at gmail.com
Thu Mar 22 21:13:28 EDT 2018
Responses inline.
On Thu, Mar 22, 2018 at 12:25 PM Jonathan Rajotte-Julien <
jonathan.rajotte-julien at efficios.com> wrote:
> Hi Alok,
>
> On Thu, Mar 22, 2018 at 06:31:41PM +0000, Alok Priyadarshi wrote:
> > I am trying to collect a trace for visualizing callstack using
> > lttng_ust_cyg_profile.
> >
> > Even though I have compiled only a single source file with
> > -finstrument-functions, it seems to produce so many events that it
> > considerably slows down the instrumented process and causes lost events.
>
> Did you have a look at [1]? A "fast" shared object is available.
>
> [1] https://lttng.org/man/3/lttng-ust-cyg-profile/v2.10/
Yes - I tried both the fast and regular variants. I did not notice any
difference.
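For reference, the two variants from the man page above are preloaded like
this (myapp is a placeholder binary; the soname/path may differ per install):

  # regular variant: records the function address and the call site
  LD_PRELOAD=liblttng-ust-cyg-profile.so.0 ./myapp

  # fast variant: skips the call-site field to reduce overhead
  LD_PRELOAD=liblttng-ust-cyg-profile-fast.so.0 ./myapp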
>
> >
> > I am using clang, which does not support
> > -finstrument-functions-exclude-file-list.
>
> Might want to give gcc a chance then.
>
The version of gcc I am using has a bug that produces linker errors for
intrinsics when code is compiled with -finstrument-functions:
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78333
I will either need to find a gcc version where this bug is fixed or figure
out a workaround. Using clang seemed easier, but then it does not support
-finstrument-functions-exclude-file-list :|
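If a fixed gcc becomes an option, the exclude flag takes a comma-separated
list of path substrings, so an untested workaround for the bug above might be
to exclude the intrinsics headers by name (the substring below is an
assumption; check which headers your code actually pulls in):

  # sketch: "intrin" matches e.g. xmmintrin.h and immintrin.h, so those
  # headers are left uninstrumented and the linker errors should go away
  gcc -finstrument-functions \
      -finstrument-functions-exclude-file-list=intrin \
      -c myfile.c -o myfile.o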
> > Is there any way to
> > filter lttng_ust_cyg_profile events before they are written to the
> trace? I
> > have tried various things:
> > 1. lttng-track to only track one process
> >
> > 2. Only enable lttng_ust_cyg_profile:* events
> >
> > 3. Increase the subbuffer size:
> > lttng enable-channel --userspace --buffers-pid --subbuf-size=2M
> > --num-subbuf=2 mychannel
>
> Is there any reason you are using --buffers-pid instead of the default
> per-uid buffering?
>
No particular reason. I read somewhere on the mailing list that
--buffers-pid might be less susceptible to losing events. I can switch back
to the default per-uid buffering if that is better for my use case.
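Switching back would just mean dropping the --buffers-pid flag and keeping
the same sizes, e.g.:

  # per-uid buffering is the default when --buffers-pid is omitted
  lttng enable-channel --userspace --subbuf-size=2M --num-subbuf=2 mychannel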
> > Using this option is very flaky, however: it crashes lttng-sessiond.
> >
>
> We will need the log (lttng-sessiond -vvv, via pastebin) and the version
> of lttng for this.
>
OK. I will get you that when I am back at work.
> Does your workflow look somewhat similar to this?
>
> 1) Instrumented app is started with relevant LD_PRELOAD
> 2) lttng create my_session
> 3) lttng enable-channel -u --subbuf-size=2M --num-subbuf=8 my_channel
> 4) lttng enable-event -u -c my_channel 'lttng_ust_cyg_profile:*'
> 5) lttng track -u -p <PID>
> 6) lttng start
> ... some time passes
> 7) lttng stop
> 8) lttng destroy
>
> If a discarded-events warning is reported, the solution is to increase
> either the size or the number of sub-buffers per CPU. Note that the upper
> limit on the total sub-buffer size is dictated by the available free
> memory of the system.
>
Yup - exactly, except I am using --num-subbuf=2. I have also tried
--subbuf-size values of 4M and 8M, without much success.
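Based on the advice above, a next attempt would be to raise the sub-buffer
count rather than only the size, memory permitting, e.g.:

  # more, moderately sized sub-buffers give the consumer more slack
  lttng enable-channel -u --subbuf-size=2M --num-subbuf=8 my_channel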
> > Any other ideas to filter events?
> >
> > I am also open to explicitly adding tracepoints to the functions I care
> > about, but I am not sure how they can be visualized in the Trace
> > Compass call-stack view.
>
> Neither am I; this might be a better question for the Trace Compass
> mailing list.
>
OK. I will ping the Trace Compass mailing list. Is there any tool that can
just show events on each thread's timeline? I have a multi-threaded
application, and I want to visualize when functions enter and exit on the
various threads relative to each other.
Using lttng_ust_cyg_profile seemed straightforward, but apparently it is not.
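If I do add tracepoints by hand, one low-effort option might be lttng-ust's
printf-like tracef() helper, whose events land under their own provider and
can be enabled with:

  # enable the events emitted by tracef() calls in the application
  lttng enable-event -u 'lttng_ust_tracef:*'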
Thanks for your help, Jonathan.