[ltt-dev] about the problem of LTTng Events lost always occurred in sys_ioctl
compudj at krystal.dyndns.org
Fri Jan 7 14:16:42 EST 2011
* Juntang Fu(David) (juntang.fu at windriver.com) wrote:
> Hi, Mathieu:
> I have a question about LTTng relay buffer management; could you help
> me with it? Thanks in advance.
> In my test case on an ARM target, I start a trace in normal mode, and
> I have found that some events are always lost in the cpu_0 channel; if
> I enlarge the buffer size and the number of buffers, the number of lost
> events decreases. But when I check the traced logs, I find that the
> lost events always occur right after the events named sys_ioctl and
> kernel_syscall_exit (ret=54); please see the attached picture.
> So my question is:
> if the sub-buffers are not large enough and the data are overwritten,
> then the data loss should be random, right? Does LTTng kernel buffer
> management do any special handling for certain events?
Putting your buffers in "normal" mode never overwrites any unread data:
when the buffers become full, the newest data is discarded until some
sub-buffers get consumed, which frees up space to write into the buffer
again.
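As a rough sketch of that discard behavior (not LTTng's actual code; the structure, names, and sizes below are made up for illustration), a "normal" mode channel drops the newest event when every sub-buffer is full and unconsumed, rather than overwriting unread data:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical miniature of a discard-mode channel: NSUBBUF
 * sub-buffers; a full sub-buffer can only be reused once the reader
 * (lttd) has consumed it. When all sub-buffers are full and unread,
 * new events are counted as lost instead of overwriting old data. */
#define NSUBBUF     2
#define SUBBUF_SIZE 64

struct channel {
	size_t write_off[NSUBBUF];  /* bytes used in each sub-buffer */
	int full[NSUBBUF];          /* full and not yet consumed */
	int cur;                    /* sub-buffer currently written */
	unsigned long events_lost;
};

/* Try to reserve `len` bytes; returns 1 on success, 0 if the event
 * was discarded. */
static int reserve(struct channel *ch, size_t len)
{
	if (ch->write_off[ch->cur] + len > SUBBUF_SIZE) {
		int next = (ch->cur + 1) % NSUBBUF;
		if (ch->full[next]) {
			/* No free sub-buffer: drop the newest event. */
			ch->events_lost++;
			return 0;
		}
		ch->full[ch->cur] = 1;  /* hand current one to the reader */
		ch->cur = next;
		ch->write_off[ch->cur] = 0;
	}
	ch->write_off[ch->cur] += len;
	return 1;
}

/* Reader side marks a sub-buffer consumed, freeing it for writing. */
static void consume(struct channel *ch, int idx)
{
	ch->full[idx] = 0;
	ch->write_off[idx] = 0;
}
```

Enlarging SUBBUF_SIZE or NSUBBUF, as you observed, delays the point where every sub-buffer is full and unread, which is why the lost-event count drops.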
I expect that the ioctl event you see there might just end up filling
the end of the sub-buffer, because:
a) it happens relatively often (lttd uses it to get/put each
sub-buffer), and
b) these are small events (and these versions of LTTng did fill the
leftover space, if it was large enough to hold small events, even after
an event loss. Newer upcoming LTTng won't do this, which will make it
easier to spot exactly where the event loss occurs.)
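Point (b) can be sketched as follows (again a made-up illustration, not LTTng code, with hypothetical sizes): after a large event is discarded for lack of space at the tail of a sub-buffer, a later small event can still fit there, so small events like sys_ioctl cluster around the loss point.

```c
#include <assert.h>
#include <stddef.h>

/* Leftover space at the end of a sub-buffer, and a loss counter. */
struct tail {
	size_t leftover;
	unsigned long lost;
};

/* Record an event into the tail space if it fits; otherwise count
 * it as lost. Small events can still succeed after a large one was
 * dropped, which is why they appear right at the loss position. */
static int write_event(struct tail *t, size_t len)
{
	if (len > t->leftover) {
		t->lost++;
		return 0;
	}
	t->leftover -= len;
	return 1;
}
```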
You should probably look into disabling some markers in your
instrumentation to lower your data throughput, so your data transport
can cope with the amount of trace data.
Hope this helps,
> My kernel version: 2.6.27
> LTTng version : patch-22.214.171.124-lttng-0.72
Operating System Efficiency R&D Consultant