[lttng-dev] performance of lttng-analyses scripts

Milian Wolff milian.wolff at kdab.com
Tue Jun 14 10:06:08 UTC 2016


Hey all,

I wanted to try out the lttng-analyses scripts today, following the 
documentation available from https://github.com/lttng/lttng-analyses.

Even for a minimal trace of just a couple of seconds (the lttng trace folder 
is ~56MB), analyzing the data with the Python scripts takes a really long 
time. I wonder whether I'm doing something wrong, as this is completely 
unusable for the larger data files we generate to investigate startup 
performance on an embedded device...
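
For concreteness, this is roughly what I ran (paths are from my setup):

```shell
# Sketch of the setup described above; the paths are specific to my
# machine, not a general recipe.
du -sh kernel/           # the lttng trace folder, ~56M here
./lttng-cputop kernel/   # one of the lttng-analyses Python scripts
```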

Profiling with perf shows this:

~~~~~~~~~ time to check for lost events:
Checking the trace for lost events...
Warning: progressbar module not available, using --no-progress.
^CCancelled by user

 Performance counter stats for '/home/milian/projects/src/lttng/lttng-analyses/lttng-cputop kernel/':

       4871.395990      task-clock:u (msec)       #    1.019 CPUs utilized          
                 0      context-switches:u        #    0.000 K/sec                  
                 0      cpu-migrations:u          #    0.000 K/sec                  
            23,927      page-faults:u             #    0.005 M/sec                  
    17,324,601,224      cycles:u                  #    3.556 GHz                    
    35,474,572,407      instructions:u            #    2.05  insn per cycle         
     7,496,776,724      branches:u                # 1538.938 M/sec                  
        31,698,918      branch-misses:u           #    0.42% of all branches        

       4.780154529 seconds time elapsed
~~~~~~~~~

~~~~~~~~~ time to do full analysis:
 Performance counter stats for '/home/milian/projects/src/lttng/lttng-analyses/lttng-cputop kernel/':

      76048.928635      task-clock:u (msec)       #    1.000 CPUs utilized          
                 0      context-switches:u        #    0.000 K/sec                  
                 0      cpu-migrations:u          #    0.000 K/sec                  
            25,766      page-faults:u             #    0.339 K/sec                  
   291,464,040,754      cycles:u                  #    3.833 GHz                    
   516,839,520,205      instructions:u            #    1.77  insn per cycle         
   105,416,447,495      branches:u                # 1386.166 M/sec                  
     1,178,825,581      branch-misses:u           #    1.12% of all branches        

      76.019929857 seconds time elapsed
~~~~~~~~~
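
For reference, the counter stats above were collected with something like 
the following (a sketch; only the `perf stat` wrapper itself matters here):

```shell
# perf stat wraps the analysis script and prints the counter summary
# on exit (or on Ctrl-C, as in the first run above).
perf stat ./lttng-cputop kernel/
```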

perf record/report shows these hotspots:

+   18.92%  python3        libpython3.5m.so.1.0 [.] PyEval_EvalFrameEx
+    4.63%  python3        libpython3.5m.so.1.0 [.] _PyObject_GenericGetAttrWithDict
+    4.20%  lt-babeltrace  libc-2.23.so [.] vfprintf
+    4.10%  python3        libpython3.5m.so.1.0 [.] _PyType_Lookup
+    2.15%  python3        libpython3.5m.so.1.0 [.] PyObject_GetAttr
+    1.88%  python3        libpython3.5m.so.1.0 [.] PyDict_GetItem
+    1.78%  python3        libglib-2.0.so.0.4800.1 [.] g_hash_table_lookup
+    1.59%  python3        libpython3.5m.so.1.0 [.] PyFrame_New

Is there any way to speed this process up? I don't want to wait for hours to 
do the analyses on my real data sets.

Thanks
-- 
Milian Wolff | milian.wolff at kdab.com | Software Engineer
KDAB (Deutschland) GmbH&Co KG, a KDAB Group company
Tel: +49-30-521325470
KDAB - The Qt Experts