[lttng-dev] Large number of stream files in CTF trace -- too many file handles

Jonathan Rajotte-Julien jonathan.rajotte-julien at efficios.com
Mon Mar 16 10:51:25 EDT 2020


Hi,

> If this is not the right approach, how should I proceed?  E.g., should the
> source-ctf-fs manage a limited pool of file handles?  I would think this
> would be pretty inefficient as you would need to constantly open/close
> files--expensive.

I would probably start by looking at the soft and hard limits on open
files for the babeltrace2 process:

On my machine:

joraj@~[]$ ulimit -Sn
1024

joraj@~[]$ ulimit -Hn
1048576

That is a lot of headroom.

I might have a setting somewhere that increases the default hard limit,
but in any case this will show you how much room you have.

Based on the number of streams you have, I would say that you will
need more than 2000 as a base soft limit for this trace.
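If the soft limit is what is getting in the way, you can raise it up to
the hard limit from the shell that launches babeltrace2, e.g.
"ulimit -Sn 4096". For illustration only, here is a minimal C sketch of
the programmatic equivalent (not babeltrace2 code, just the usual
getrlimit/setrlimit dance to bump a process's own soft limit on open
files up to its hard limit):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
		perror("getrlimit");
		return 1;
	}

	printf("soft=%llu hard=%llu\n",
	       (unsigned long long) rl.rlim_cur,
	       (unsigned long long) rl.rlim_max);

	/* An unprivileged process may raise its soft limit up to the hard limit. */
	rl.rlim_cur = rl.rlim_max;
	if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}

	return 0;
}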

We do have an FD pooling system in place in another project we maintain
(lttng-tools [1], GPL-2.0) that might be pertinent for babeltrace2 at some
point. As for the overhead incurred when not enough FDs are available, I
think paying the extra open/close cost is a good compromise between
reading a trace and not reading it at all. A warning informing the user
that the pool limit was reached would be a good start in such a case.

[1] https://github.com/lttng/lttng-tools/tree/master/src/common/fd-tracker
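To give a rough idea of what such a pool does, here is a minimal,
hypothetical C sketch (not the actual lttng-tools fd-tracker API): every
stream file stays tracked, only a bounded number of FDs are kept open at
once, and the least recently used one is closed and transparently
reopened on its next access.

#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MAX_STREAMS  4096 /* hypothetical bound on tracked stream files */
#define MAX_OPEN_FDS 64   /* FDs kept open at once; well below the soft limit */

struct pooled_fd {
	char path[PATH_MAX];
	int fd;                 /* -1 while the file is not currently open */
	unsigned long last_use; /* logical clock used for LRU eviction */
};

static struct pooled_fd streams[MAX_STREAMS];
static size_t nr_streams;
static size_t nr_open;
static unsigned long clock_tick;

/* Close the least recently used open fd to make room for another one. */
static void evict_lru(void)
{
	size_t i, lru = MAX_STREAMS;

	for (i = 0; i < nr_streams; i++) {
		if (streams[i].fd < 0)
			continue;
		if (lru == MAX_STREAMS ||
		    streams[i].last_use < streams[lru].last_use)
			lru = i;
	}
	if (lru != MAX_STREAMS) {
		close(streams[lru].fd);
		streams[lru].fd = -1;
		nr_open--;
	}
}

/* Return an open fd for `path`, reopening or evicting as needed. */
int fd_pool_get(const char *path)
{
	struct pooled_fd *s = NULL;
	size_t i;

	/* Find the stream if it is already tracked. */
	for (i = 0; i < nr_streams; i++) {
		if (strcmp(streams[i].path, path) == 0) {
			s = &streams[i];
			break;
		}
	}
	if (!s) {
		if (nr_streams == MAX_STREAMS)
			return -1; /* pool exhausted */
		s = &streams[nr_streams++];
		snprintf(s->path, sizeof(s->path), "%s", path);
		s->fd = -1;
	}
	if (s->fd < 0) {
		/* Pay a later reopen instead of failing the whole read. */
		if (nr_open == MAX_OPEN_FDS)
			evict_lru();
		s->fd = open(path, O_RDONLY);
		if (s->fd >= 0)
			nr_open++;
	}
	s->last_use = ++clock_tick;
	return s->fd;
}

A reader component would then call fd_pool_get() each time it needs a
stream's fd instead of holding it open for the stream's whole lifetime,
so the open/close overhead only shows up once the cap is exceeded.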

Cheers

-- 
Jonathan Rajotte-Julien
EfficiOS

