[lttng-dev] scale numbers for LTTNG

Mathieu Desnoyers mathieu.desnoyers at efficios.com
Mon May 30 13:03:51 UTC 2016


----- On May 30, 2016, at 9:16 AM, Vijay Anand <vjanandr85 at gmail.com> wrote: 

> Hi Mathieu, Jonathan,

> I have written python scripts to simulate our requirements.
> https://github.com/vjanandr/sampleP/tree/master/lttng

> Please refer to the README file for further details about the scripts.

> Mathieu, I understand your recommendation to use per-UID buffers instead
> of per-PID buffers.
> But I don't seem to understand your suggestion to use filters...


> >>>>>
> Why? You can always collect trace data into buffers shared across
> processes (per-uid buffers) and filter after the fact.
> <<<<<

> Did you mean to say we should write all traces to a file, but filter them
> while reading, based perhaps on process ID?

Yes, this is what I mean.
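
For example, here is a minimal sketch (untested, and 'my_provider' below is
a placeholder for your own tracepoint provider name): record everything into
per-UID buffers, and add the vpid context so each event carries its process
ID:

    # Create a session and a per-UID channel (buffers shared across processes):
    lttng create my-session
    lttng enable-channel --userspace --buffers-uid my-channel
    lttng enable-event --userspace --channel=my-channel 'my_provider:*'
    # Record the virtual PID with each event, to filter on it later:
    lttng add-context --userspace --type=vpid
    lttng start
    # ... run your workload, then:
    lttng stop
    lttng destroy

You can then filter per process while reading the trace back, e.g. with
babeltrace (1234 being a placeholder PID):

    babeltrace ~/lttng-traces/my-session-* | grep 'vpid = 1234'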

Thanks, 

Mathieu 

> Regards,
> Vijay

> On Fri, May 27, 2016 at 1:16 AM, Mathieu Desnoyers <
> mathieu.desnoyers at efficios.com > wrote:

>> ----- On May 26, 2016, at 8:23 AM, Vijay Anand < vjanandr85 at gmail.com > wrote:

>>> Could anyone please let us know how we should go about this?

>>> Regards,
>>> Vijay

>>> On Tue, May 24, 2016 at 11:31 AM, Vijay Anand < vjanandr85 at gmail.com > wrote:

>>>> Hello Folks,

>>>> We have been evaluating LTTng for use in our production systems for
>>>> user-space tracing. We have evaluated most of the features LTTng
>>>> supports and very much see the value LTTng brings to our debugging
>>>> infrastructure.

>>>>     * Our current requirement is to trace user-space programs running on Linux.
>>>>     * Each of the Linux processes defines its own tracepoint provider.

>> Sounds like a good design.

>>>>     * We would like to trace the event history of each process independently.

>> Why? You can always collect trace data into buffers shared across
>> processes (per-uid buffers) and filter after the fact.

>>>>     * We could potentially have thousands of such processes running simultaneously.

>> Especially with that many processes, having per-process buffers will
>> degrade your cache locality.

>>>>    * We decided to use one session/channel to trace each tracepoint
>>>>     provider, corresponding to a unique process.

>>>>        * But I understand we could also create one system-wide session and use
>>>>         channels to trace each of the providers. Either approach seems to work for us.

>> Both approaches will kill your cache locality with that many processes.
>> I don't recommend either of the two approaches you refer to above. You
>> might want to consider sharing your buffers across processes.

>>>>    * But upon evaluation, I see that we can create only 25 active
>>>>     per-process sessions, and traces from some of the processes are not logged.

>> We will need many more details on your tracing configuration setup (the
>> exact list of commands you issue to create your tracing sessions).

>> Also, please detail what you mean by "traces from some of the processes
>> are not logged": what commands do you issue, what is the exact setup, and
>> what output do you observe (exact listing) that leads you to this
>> conclusion?

>> Thanks,

>> Mathieu

>>>>        * Please note that I have tried increasing the buffer size and the number
>>>>         of buffers; this doesn't help.
>>>>         * Each process traces 52 events at an interval of 1 second.
>>>> I have evaluated this with LTTng in regular, live, and snapshot modes,
>>>> and I have not been getting favourable results.

>>>> Could you folks share the scale numbers that LTTng supports, especially
>>>> when it comes to tracing user-space programs?

>>>> Regards,
>>>> Vijay

>> --
>> Mathieu Desnoyers
>> EfficiOS Inc.
>> http://www.efficios.com

-- 
Mathieu Desnoyers 
EfficiOS Inc. 
http://www.efficios.com 

