[lttng-dev] performance of lttng-analyses scripts

Mathieu Desnoyers mathieu.desnoyers at efficios.com
Tue Jun 14 16:00:39 UTC 2016


----- On Jun 14, 2016, at 6:06 AM, Milian Wolff milian.wolff at kdab.com wrote:

> Hey all,
> 
> I wanted to try out the lttng-analyses scripts today, following the
> documentation available from https://github.com/lttng/lttng-analyses.
> 
> Even for a minimal trace of just a couple of seconds (the lttng trace folder
> is ~56MB large), it takes a really long time to analyze the data using the
> python scripts. I wonder whether I'm doing something wrong, as this is
> completely unusable for larger data files that we generate to investigate
> startup-performance on an embedded device...
> 
> Profiling with perf shows this:
> 
> ~~~~~~~~~ time to check for lost events:
> Checking the trace for lost events...
> Warning: progressbar module not available, using --no-progress.
> ^CCancelled by user
> 
> Performance counter stats for '/home/milian/projects/src/lttng/lttng-
> analyses/lttng-cputop kernel/':
> 
>       4871.395990      task-clock:u (msec)       #    1.019 CPUs utilized
>                 0      context-switches:u        #    0.000 K/sec
>                 0      cpu-migrations:u          #    0.000 K/sec
>            23,927      page-faults:u             #    0.005 M/sec
>    17,324,601,224      cycles:u                  #    3.556 GHz
>    35,474,572,407      instructions:u            #    2.05  insn per cycle
>     7,496,776,724      branches:u                # 1538.938 M/sec
>        31,698,918      branch-misses:u           #    0.42% of all branches
> 
>       4.780154529 seconds time elapsed
> ~~~~~~~~~
> 
> ~~~~~~~~~ time to do full analysis:
> Performance counter stats for '/home/milian/projects/src/lttng/lttng-
> analyses/lttng-cputop kernel/':
> 
>      76048.928635      task-clock:u (msec)       #    1.000 CPUs utilized
>                 0      context-switches:u        #    0.000 K/sec
>                 0      cpu-migrations:u          #    0.000 K/sec
>            25,766      page-faults:u             #    0.339 K/sec
>   291,464,040,754      cycles:u                  #    3.833 GHz
>   516,839,520,205      instructions:u            #    1.77  insn per cycle
>   105,416,447,495      branches:u                # 1386.166 M/sec
>     1,178,825,581      branch-misses:u           #    1.12% of all branches
> 
>      76.019929857 seconds time elapsed
> ~~~~~~~~~
> 
> perf record/report shows these hotspots:
> 
> +   18.92%  python3        libpython3.5m.so.1.0 [.] PyEval_EvalFrameEx
> +    4.63%  python3        libpython3.5m.so.1.0 [.] _PyObject_GenericGetAttrWithDict
> +    4.20%  lt-babeltrace  libc-2.23.so [.] vfprintf
> +    4.10%  python3        libpython3.5m.so.1.0 [.] _PyType_Lookup
> +    2.15%  python3        libpython3.5m.so.1.0 [.] PyObject_GetAttr
> +    1.88%  python3        libpython3.5m.so.1.0 [.] PyDict_GetItem
> +    1.78%  python3        libglib-2.0.so.0.4800.1 [.] g_hash_table_lookup
> +    1.59%  python3        libpython3.5m.so.1.0 [.] PyFrame_New
> 
> Is there any way to speed this process up? I don't want to wait for hours to
> do the analyses on my real data sets.

We have a few plans on our roadmap to speed up those analyses. We initially
focused on features rather than speed, knowing very well that we would
need to speed them up at some point.

The few ideas we have in mind:

1) Extend the Babeltrace Python interface so analyses can register, with
   the Babeltrace CTF reader library, the events they are interested in.
   This way, all the Python processing for events that are of no interest
   to the analysis can be skipped directly in C (see the sketch after this
   list).

   This should take care of analyses that only care about a few events in
   the trace.

2) For analyses that care about more frequent events (e.g. interrupt handlers,
   function entry/exit), we might have to consider whether Python can give
   us sufficient performance with a few tweaks, or whether those analyses
   should be implemented directly in C. Some profiling seems like a good way
   to start this part of the effort.
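
To make point 1 concrete, here is a minimal sketch (not taken from
lttng-analyses itself) of why the current approach is costly, using the
Babeltrace Python bindings the scripts rely on: every event in the trace is
turned into a Python object and filtered on the Python side, even when the
analysis only cares about one or two event names. The
register_interesting_events() call in the trailing comment is hypothetical;
it only illustrates the kind of registration API described in point 1.

~~~~~~~~~
import babeltrace

# Open a trace the same way the analysis scripts do.
# 'kernel/' is an example path to an LTTng kernel trace directory.
col = babeltrace.TraceCollection()
col.add_traces_recursive('kernel/', 'ctf')

# Today: every event in the trace is materialized as a Python object and
# filtered in Python, even though this sketch only cares about sched_switch.
# The per-event Python overhead is where most of the time goes.
switch_count = 0
for event in col.events:
    if event.name != 'sched_switch':
        continue
    # Per-event analysis work would go here; for the sketch we just count.
    switch_count += 1

print('sched_switch events:', switch_count)

# Idea 1 (hypothetical API, not in Babeltrace today): declare the
# interesting event names up front so the CTF reader can discard the rest
# in C, before they ever reach Python:
#
#   col.register_interesting_events(['sched_switch', 'sched_wakeup'])
#   for event in col.events:  # would only yield the registered events
#       ...
~~~~~~~~~

For point 2, one straightforward starting point is to run an analysis under
the standard cProfile module, e.g.
"python3 -m cProfile -s cumtime ./lttng-cputop kernel/", to see where the
Python time actually goes.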

Thanks!

Mathieu


> 
> Thanks
> --
> Milian Wolff | milian.wolff at kdab.com | Software Engineer
> KDAB (Deutschland) GmbH&Co KG, a KDAB Group company
> Tel: +49-30-521325470
> KDAB - The Qt Experts
> _______________________________________________
> lttng-dev mailing list
> lttng-dev at lists.lttng.org
> https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

