[ltt-dev] UST Test

David Goulet david.goulet at polymtl.ca
Thu Sep 9 08:58:51 EDT 2010


On 10-09-09 02:33 AM, Pierre-Marc Fournier wrote:
> On 09/03/2010 01:37 PM, David Goulet wrote:
> Hi everyone,
> 
> I think we had a discussion about this some time ago, but now I want to
> fix it properly and for good. The current benchmark test in the UST git is:
> 1) Only testing trace_mark
> 2) Using syscalls
> 
> This makes the per-event time go to ~1.1ms for 1 million events. I did
> some time
>> Are you sure about this? I get around 1 us/event.
> 

I'm sorry, typo on my side... You are right, it's microseconds, not milliseconds.

> ago another benchmark test that exercised trace_mark, tracepoint with
> trace_mark, and a custom probe, but _without_ syscalls (only math
> calculations), and the time dropped to ~200ns per event.
> 
>> Can you explain why system calls make such a big difference in per-event
>> times? I find it disturbing and I am reluctant to remove them from the
>> benchmark just because this results in better figures without knowing
>> what is happening.
> 

I have no idea, maybe cache pressure. I took the current benchmark test, added
some integer manipulation to make it a long loop, and got the ~200ns per event
that I normally get.
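
To make that concrete, the partitioned benchmark could look something like the
sketch below. This is not the actual tests/ code: the marker name, the
getuid() syscall and the event count are only illustrative, and the
trace_mark() usage should be double-checked against ust/marker.h.

/*
 * Minimal sketch of the partitioned benchmark, NOT actual UST code.
 * Marker name, syscall choice and NR_EVENTS are illustrative only.
 */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <ust/marker.h>

#define NR_EVENTS 1000000

static double elapsed_ns(struct timespec *a, struct timespec *b)
{
        return (b->tv_sec - a->tv_sec) * 1e9 + (b->tv_nsec - a->tv_nsec);
}

int main(void)
{
        struct timespec start, end;
        volatile long acc = 0;
        long i;

        /* Scenario 1: marker with a cheap syscall in the loop body. */
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (i = 0; i < NR_EVENTS; i++) {
                getuid();
                trace_mark(ust, bench_event, "iter %ld", i);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);
        printf("with syscall:    %.0f ns/event\n",
               elapsed_ns(&start, &end) / NR_EVENTS);

        /* Scenario 2: marker with integer manipulation only. */
        clock_gettime(CLOCK_MONOTONIC, &start);
        for (i = 0; i < NR_EVENTS; i++) {
                acc += i * 7;   /* pure userspace math, no kernel entry */
                trace_mark(ust, bench_event, "iter %ld", i);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);
        printf("without syscall: %.0f ns/event\n",
               elapsed_ns(&start, &end) / NR_EVENTS);

        return 0;
}

Running both scenarios in the same binary keeps everything else identical, so
the difference between the two figures is only the syscall.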

I'm not saying we should remove the syscalls from the benchmark, just that we
should partition the test as mentioned below.

> 
> So, I don't question the current benchmark test, but perhaps it would be
> good to design a test that, first, covers every "marking" technology
> (including the soon-to-come trace_event) and, second, uses multiple
> use-case scenarios (syscalls, regular calculations, string manipulation,
> I/O, ...), because each of these scenarios gives different results.
> 
> We have NO test case that tests all four (marker, TP, custom, TE), and I
> think this is VERY important: using some old test I had, I just found out
> that trace_mark_tp has not been working since the data-pointer changes
> made by Nils. This should be detected by the tests/runtests script at
> each new release and/or pulled commit.
> 
>> Good ideas, but I think tests and benchmarks should be separate
>> programs. Probably there should be one test per instrumentation type and
>> one benchmark per instrumentation type.

Absolutely, separate tests for sure. Let's improve the benchmark test by
adding every possible tracing mechanism, and write a separate test for the
"technology" used in UST (just to ensure we break nothing after big changes).
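
As a starting point, something along the lines of the skeleton below could
grow into that per-mechanism smoke test. This is only a rough sketch: the
names are invented, trace_event is left as a placeholder since it does not
exist yet, and the tracepoint macros follow the kernel-style API that UST
mirrors, so the exact signatures (including the data pointer from Nils'
changes) must be checked against ust/tracepoint.h.

/*
 * Rough skeleton of a per-mechanism smoke test, NOT actual UST test
 * code.  All names are invented for illustration.
 */
#include <ust/marker.h>
#include <ust/tracepoint.h>

DECLARE_TRACE(test_tp, TP_PROTO(unsigned int v), TP_ARGS(v));
DEFINE_TRACE(test_tp);

/* Custom probe attached to the tracepoint; 'data' is the pointer
 * introduced by the data-pointer changes mentioned above. */
static void test_tp_probe(void *data, unsigned int v)
{
        trace_mark(ust, test_tp_fired, "v %u", v);
}

int main(void)
{
        /* 1) marker */
        trace_mark(ust, test_marker, "value %d", 42);

        /* 2) tracepoint and 3) custom probe */
        register_trace_test_tp(test_tp_probe, NULL);
        trace_test_tp(42);
        unregister_trace_test_tp(test_tp_probe, NULL);

        /* 4) trace_event: placeholder until it lands in UST */

        return 0;
}

tests/runtests could then simply check that each event shows up in the
resulting trace, which would have caught the trace_mark_tp breakage.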

Thanks
David

> 
>> pmf
> 
> 
> Cheers
> David

-- 
David Goulet
LTTng project, DORSAL Lab.

1024D/16BD8563
BE3C 672B 9331 9796 291A  14C6 4AF7 C14B 16BD 8563



