[lttng-dev] issues converting large amount of traces using babeltrace

Philippe Proulx eeppeliteloop at gmail.com
Sat Mar 26 02:49:24 UTC 2016


On Fri, Mar 25, 2016 at 10:08 PM, syed zaidi <zaidi1039 at gmail.com> wrote:
> Hey guys,
> I am using babeltrace to convert the traces produced by LTTng. It works
> perfectly if I have a small amount of data, that is, if I run the session
> for a small amount of time.
> But if the session is run for a long time, then while converting the traces,
> the babeltrace process gets killed and it does not complete the conversion.

How large a trace are we talking about, i.e., what is the total size of the trace?

Could you please provide the output of:

    tree -h <your CTF trace directory>

Could you also provide the textual metadata content (ideally using a pastebin,
since it could be huge), if it's not confidential:

    babeltrace -o ctf-metadata <your CTF trace directory>

Are you using a custom subbuffer size when tracing with LTTng?
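
If so, for reference, a custom subbuffer size would typically have been set
on the channel with something like this (the channel name and sizes here are
only examples):

    lttng enable-channel --userspace --subbuf-size=16M --num-subbuf=8 my-channel

Very large subbuffers combined with many streams could possibly increase the
memory footprint on the reader side.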

> I just wanted to ask if this is a known issue. I guess the babeltrace
> process getting killed depends on the memory available on the machine, and
> that should also define the limit for traces to be converted successfully.
> I was wondering if my assumption, that the issue is the available memory, is
> true, and whether other people have seen this or not.

I would be surprised if the memory usage increased that much with the
quantity of trace data. The stream packets are mmap()ed, so memory usage
in Babeltrace should be roughly proportional to the number of possible
events (the number of event blocks in the metadata), not to the size of
the trace.
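
As a rough check, and assuming the usual TSDL layout where each event
declaration starts a line with "event {", you can count the event blocks in
the metadata like this:

    babeltrace -o ctf-metadata <your CTF trace directory> | grep -c '^event {'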

Another possibility would be that all the packet indexes are kept in
memory, and that they are huge, but I doubt that.

Or it could be a simple memory leak.
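
If you can easily reproduce the problem, even on a smaller trace, running
Babeltrace under Valgrind would help confirm or rule that out (this is just a
generic check, nothing specific to Babeltrace):

    valgrind --leak-check=full babeltrace <your CTF trace directory> > /dev/null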

BR,
Phil

>
> Thanks
> Ammar
>
