[lttng-dev] Performance impact using the "filter" option

Amit Margalit AMITM at il.ibm.com
Thu Mar 20 03:40:15 EDT 2014


I agree here - without knowledge of the exact scenario, it's hard to tell.

Sometimes you need to run the test through many billions of events to see 
a difference.

Consider this - the filter could be complicated and the event could be 
tiny (say, one integer). The filter expression has to be evaluated on 
every hit, while serializing a single integer into the ring buffer costs 
almost nothing, so in that case filtering would hurt you even if 99% of 
the events are never written to the buffer.
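
Purely as an illustration (the provider, event and field names below - 
demo_provider, tiny_event, id - are invented for this sketch, not taken 
from the thread), a "tiny event" with lttng-ust could look like this; 
the point is that the payload is a single integer, while whatever filter 
expression is attached with 'lttng enable-event --filter ...' still has 
to be evaluated on every hit:

    /* tp.h - hypothetical lttng-ust provider with a one-integer payload */
    #undef TRACEPOINT_PROVIDER
    #define TRACEPOINT_PROVIDER demo_provider

    #undef TRACEPOINT_INCLUDE
    #define TRACEPOINT_INCLUDE "./tp.h"

    #if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
    #define _TP_H

    #include <lttng/tracepoint.h>

    TRACEPOINT_EVENT(
        demo_provider,
        tiny_event,
        TP_ARGS(int, id_arg),
        TP_FIELDS(
            /* The entire event payload is this one integer. */
            ctf_integer(int, id, id_arg)
        )
    )

    #endif /* _TP_H */

    #include <lttng/tracepoint-event.h>

With an event this small, evaluating a complicated predicate on every hit 
can easily cost more than simply serializing the integer, so discarding 
most of the hits does not automatically make the filtered run faster.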

Amit Margalit
IBM XIV - Storage Reinvented
XIV-NAS Development Team
Tel. 03-689-7774
Fax. 03-689-7230



From:   Michel Dagenais <michel.dagenais at polymtl.ca>
To:     Ilya Mirsky <ilya.mirsky at gmail.com>
Cc:     Amit Margalit/Israel/IBM at IBMIL, lttng-dev at lists.lttng.org
Date:   03/19/2014 10:48 PM
Subject:        Re: [lttng-dev] Performance impact using the "filter" option

> That's what I thought, but benchmarking showed that there's practically 
> no difference.
> The filter is a simple ID comparison of the form 'id % 1000 == 0', so 
> 999 out of 1000 tracepoints are filtered out.
> Could you please point me to some references on this topic?
What is the running time and trace size with tracing disabled, with 
tracing enabled unconditionally, and with tracing enabled under the filter 
condition? If tracing takes negligible time, filtering will not change 
much. Note that I have not experimented with the current UST filter 
implementation, but with a similar facility in GDB and with an in-kernel 
prototype.
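
To make that comparison concrete, a rough micro-benchmark could look like 
the sketch below (it reuses the hypothetical demo_provider:tiny_event 
provider sketched above and is only an illustration, not a tested 
program). The idea is to time the same hot loop three ways - with no 
tracing session, with the event enabled unconditionally, and with the 
event enabled under a filter such as 'id % 1000 == 0' - and to compare 
both the elapsed time and the resulting trace sizes:

    /* bench.c - time a hot loop around the tiny tracepoint.
     * Build (sketch): gcc bench.c -o bench -llttng-ust -ldl
     */
    #define TRACEPOINT_CREATE_PROBES
    #define TRACEPOINT_DEFINE
    #include "tp.h"

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        enum { N = 100 * 1000 * 1000 }; /* enough hits to see a difference */
        struct timespec t0, t1;
        double elapsed;
        int i;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < N; i++)
            tracepoint(demo_provider, tiny_event, i);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        elapsed = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%d events in %.3f s (%.1f ns/event)\n",
               (int)N, elapsed, elapsed * 1e9 / N);
        return 0;
    }

Running this once with no session, once with the event enabled 
unconditionally, and once with the same event plus the filter should show 
whether the per-event cost is already negligible; if it is, adding the 
filter cannot change the running time much, which would match the 
benchmark result quoted above.
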
This article is not directly related, but it discusses many of these 
issues: http://benthamscience.com/open/tocsj/articles/V006/11TOCSJ.htm

