[lttng-dev] Processing large trace files

Shehab Elsayed shehabyomn at gmail.com
Thu Jul 12 11:36:31 EDT 2018


Thank you very much, Phil. I will give them a try and see if they help.

Shehab Y. Elsayed, MSc.
PhD Student
The Edward S. Rogers Sr. Dept. of Electrical and Computer Engineering
University of Toronto
E-mail: shehabyomn at gmail.com

On Tue, Jul 10, 2018 at 6:51 PM, Philippe Proulx <eeppeliteloop at gmail.com>
wrote:

> On Tue, Jul 10, 2018 at 11:33 AM Shehab Elsayed <shehabyomn at gmail.com>
> wrote:
> >
> > I haven't tried the C API yet.
> >
> > A smaller trace of about 2.7 GB took more than 5 hours before I had to
> > kill the process.
>
> Wow. The Python bindings are slow by nature, but that's a lot of time!
>
> >
> > As for trace cutting, that sounds promising. How broken is it in
> > Babeltrace 2? I mean, can I use it to some extent, or is it not
> > functioning at all?
>
> It does work, in fact, but the master branch of Babeltrace 2 does not
> contain some important optimizations yet, so it can be slow. Slowness is
> already your issue, so I can't say whether it will help.
>
> Once built, the command line would look like:
>
>     babeltrace /path/to/trace/dir --begin=20:32:17.182738475 \
>                --end=20:32:24.177283746 -octf -w/output/dir
>
> See babeltrace-convert(1) for more details.
>
> Try it and see if you can save time. Moreover, I'm working on various
> optimizations, one of which targets faster trimming. This will be part
> of 2.0-rc1.
>
> Phil
>
> >
> > Thanks!
> >
> > Shehab Y. Elsayed, MSc.
> > PhD Student
> > The Edward S. Rogers Sr. Dept. of Electrical and Computer Engineering
> > University of Toronto
> > E-mail: shehabyomn at gmail.com
> >
> > On Tue, Jul 10, 2018 at 9:37 AM, Jonathan Rajotte-Julien
> > <jonathan.rajotte-julien at efficios.com> wrote:
> >>
> >> Hi Shehab,
> >>
> >> On Mon, Jul 09, 2018 at 06:48:05PM -0400, Shehab Elsayed wrote:
> >> > Hello All,
> >> >
> >> > I was wondering if anyone had any suggestions on how to speed up
> >> > trace processing. Right now I am using babeltrace in Python to read
> >> > the traces and process them. Some of the traces I am dealing with
> >> > reach up to 10 GB.
> >>
> >> Well, Babeltrace also provides a C API. You might get some performance
> >> gains there.
> >>
> >> Depending on the data you extract, you might be able to split the 10 GB
> >> trace into smaller chunks and parallelize your analysis. Babeltrace 2
> >> will support trace cutting, but it seems that it is broken for now.
> >> Babeltrace 1 does not support trace cutting.
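> >>
> >> A minimal sketch of the splitting idea, assuming the trace has already
> >> been split into separate CTF directories (the chunk paths and the
> >> event-counting analysis are only placeholders) and using the legacy
> >> Python bindings:
> >>
> >>     import collections
> >>     import multiprocessing
> >>
> >>     import babeltrace
> >>
> >>     def count_events(trace_dir):
> >>         # Each worker opens one chunk and tallies event names
> >>         # (a stand-in for the real analysis).
> >>         tc = babeltrace.TraceCollection()
> >>         tc.add_trace(trace_dir, 'ctf')
> >>         counts = collections.Counter()
> >>         for event in tc.events:
> >>             counts[event.name] += 1
> >>         return counts
> >>
> >>     if __name__ == '__main__':
> >>         chunks = ['/path/to/chunk-0', '/path/to/chunk-1']  # placeholders
> >>         with multiprocessing.Pool() as pool:
> >>             results = pool.map(count_events, chunks)
> >>         print(sum(results, collections.Counter()).most_common(10))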
> >>
> >> You might also want to look at ways to reduce the size of your traces,
> >> for example by using the snapshot feature and by enabling/recording
> >> only the pertinent events, if that is not already the case.
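> >>
> >> For example, a snapshot session that records only a couple of kernel
> >> events could look like this (the session name and event names are only
> >> examples):
> >>
> >>     lttng create my-session --snapshot
> >>     lttng enable-event --kernel sched_switch,sched_wakeup
> >>     lttng start
> >>     # ... run the workload ...
> >>     lttng snapshot record
> >>     lttng stop
> >>     lttng destroy my-session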
> >>
> >> Could you share with us the time it takes to analyze your biggest
> >> trace?
> >>
> >> Cheers
> >>
> >> >
> >> > I read the trace, extract the info I need, and write it to a gzipped
> >> > CSV file (all in Python). I first fill a list with the extracted info
> >> > for each tracked event, and then write it out once after the script
> >> > has gone through all the events in the trace.
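> >> >
> >> > A stripped-down sketch of that flow with the legacy Python bindings
> >> > (the extracted fields and the output path are only placeholders):
> >> >
> >> >     import csv
> >> >     import gzip
> >> >
> >> >     import babeltrace
> >> >
> >> >     tc = babeltrace.TraceCollection()
> >> >     tc.add_trace('/path/to/trace/dir', 'ctf')
> >> >
> >> >     # Accumulate everything in memory, then write the gzipped CSV once.
> >> >     rows = []
> >> >     for event in tc.events:
> >> >         rows.append((event.timestamp, event.name))
> >> >
> >> >     with gzip.open('/tmp/events.csv.gz', 'wt', newline='') as f:
> >> >         writer = csv.writer(f)
> >> >         writer.writerow(('timestamp', 'name'))
> >> >         writer.writerows(rows)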
> >> >
> >> > Thanks,
> >> > Shehab
> >>
> >> --
> >> Jonathan Rajotte-Julien
> >> EfficiOS
> >
>