[lttng-dev] Processing large trace files

Jonathan Rajotte-Julien jonathan.rajotte-julien at efficios.com
Tue Jul 10 09:37:45 EDT 2018


Hi Shehab,

On Mon, Jul 09, 2018 at 06:48:05PM -0400, Shehab Elsayed wrote:
> Hello All,
> 
> I was wondering if anyone had any suggestions on how to speed up trace
> processing. Right now I am using babeltrace in python to read the traces
> and process them. Some of the traces I am dealing with reach up to 10GB.

Well, Babeltrace also provides a C API. You might get some performance gains there.

Depending on the data you extract, you might be able to split the 10 GB trace
into smaller chunks and parallelize your analysis. Babeltrace 2 will support
trace cutting, but it seems to be broken for now. Babeltrace 1 does not support
trace cutting.
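
As a very rough sketch, assuming you have already split the trace into separate
CTF chunk directories (e.g. chunks/chunk-00, chunks/chunk-01, ...), something
along these lines with the Babeltrace 1 Python bindings and multiprocessing
could process one chunk per worker; the event and field names are placeholders:

import csv
import glob
import multiprocessing

import babeltrace


def process_chunk(chunk_path):
    # Open one CTF chunk and extract only the fields we care about.
    col = babeltrace.TraceCollection()
    col.add_trace(chunk_path, 'ctf')

    rows = []
    for event in col.events:
        if event.name == 'my_provider:my_event':  # placeholder event name
            rows.append((event.timestamp, event['my_field']))  # placeholder field

    # One CSV per chunk; concatenate them afterwards if needed.
    out_path = chunk_path.rstrip('/') + '.csv'
    with open(out_path, 'w', newline='') as f:
        csv.writer(f).writerows(rows)
    return out_path


if __name__ == '__main__':
    chunks = sorted(glob.glob('chunks/chunk-*'))
    with multiprocessing.Pool() as pool:
        print(pool.map(process_chunk, chunks))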

You might also want to look at ways to reduce the size of your traces, using
tracing mechanisms such as the snapshot feature and by enabling/recording only
the pertinent events if that is not already the case.
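
For example, a snapshot session that records only a couple of specific
userspace events could look roughly like this (session, provider and event
names below are placeholders):

lttng create my-session --snapshot
lttng enable-event --userspace 'my_provider:my_event'
lttng start
# ... run the workload, then capture a snapshot whenever needed:
lttng snapshot record
lttng stop
lttng destroy my-session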

Could you share with us the time it takes to analyze your biggest trace?

Cheers

> 
> I read the trace, extract the info I need, and write it to a gzipped CSV
> file (all in Python). I first fill up a list with the extracted info for
> each tracked event and then write everything out once the script has gone
> through all events in the trace.
> 
> Thanks,
> Shehab

> _______________________________________________
> lttng-dev mailing list
> lttng-dev at lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


-- 
Jonathan Rajotte-Julien
EfficiOS

