[lttng-dev] Extract lttng trace from kernel coredump
mathieu.desnoyers at efficios.com
Sat Jan 18 14:00:50 EST 2014
----- Original Message -----
> From: "Corey Minyard" <minyard at acm.org>
> To: "Mathieu Desnoyers" <mathieu.desnoyers at efficios.com>
> Cc: "David Goulet" <dgoulet at efficios.com>, lttng-dev at lists.lttng.org
> Sent: Saturday, January 18, 2014 1:29:52 PM
> Subject: Re: [lttng-dev] Extract lttng trace from kernel coredump
> On 01/18/2014 10:47 AM, Mathieu Desnoyers wrote:
> > ----- Original Message -----
> >> From: "Corey Minyard" <minyard at acm.org>
> >> To: "David Goulet" <dgoulet at efficios.com>
> >> Cc: lttng-dev at lists.lttng.org
> >> Sent: Tuesday, January 14, 2014 3:39:43 PM
> >> Subject: Re: [lttng-dev] Extract lttng trace from kernel coredump
> >> On 01/14/2014 12:42 PM, David Goulet wrote:
> >>> On 13 Jan (16:39:47), Corey Minyard wrote:
> >>>> I'm working on a feature to allow an lttng trace to be extracted from a
> >>>> kernel coredump, and I'm wondering the best way to proceed. I've looked
> >>>> at extracting from the existing data structures, but that would require
> >>>> something to always have a trace file open and it looks more than a
> >>>> little complicated. I was thinking instead of having a different trace
> >>>> channel for configuring this coredump and some globals to point to the
> >>>> various data structures. Would that be the best way to pursue this? Is
> >>>> there a better way, perhaps?
> >>> Oh yes we do actually already have a way to do that :)
> >>> Please see:
> >>> https://git.lttng.org/?p=lttng-tools.git;a=tree;f=extras/core-handler;h=29ded89a71fd36e159d671ca1f405cea394f8683;hb=HEAD
> >>> With the snapshot feature, it's quite easy to do that. Hope that helps
> >>> you with what you are trying to achieve.
> >> I did see that, but it seems to be for an application coredump. I was
> >> looking for something that would work for a kernel coredump. So that if
> >> the kernel crashes, you can use kdump to extract the kernel core and
> >> then extract the LTT buffers from that. The idea is that if the kernel
> >> crashes, you can get some idea of what was happening before the crash.
> > Something along these lines has been proposed a while back on the ML,
> > but it was against an old LTTng version (0.x):
> > http://lists.lttng.org/pipermail/lttng-dev/2010-August/014239.html
> > It might be interesting to adapt it to the newer LTTng 2.x. Anyone
> > interested in looking into this?
> That was done by a colleague of mine. However, things appear to have
> changed so much since then that it doesn't seem simple to extract a
> trace from memory. It could be done, but I'm currently looking at
> adding a separate transport that writes to a simpler per-core circular
> data structure and doesn't actually ever generate any output. Since it
> doesn't generate any output, it can be much simpler.
You can use the flight recorder mode in recent LTTng for this (2.3 and
newer). It simply writes to memory, without any output. I understand that
you want to create a contiguous ring buffer memory layout. However, you
have to be aware that this will probably be done using either
a) statically allocated memory at boot time (not very flexible), or
b) vmalloc() (very flexible, but it can trigger minor page faults, which
can interact badly with page fault instrumentation; vmalloc() space
is often limited by a kernel boot-time parameter, and it imposes
significant limitations on systems with 32-bit address spaces).
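As a rough userspace illustration of the write-only, contiguous per-core ring being discussed, here is a minimal sketch. All names are mine, not LTTng's; in the kernel the backing memory would come from boot-time reservation or vmalloc(), for which plain malloc() stands in here:

```c
/* Hypothetical sketch of a "simpler per-core circular data structure":
 * a contiguous ring that is only ever written, never consumed, so the
 * oldest data is silently overwritten when the buffer wraps. */
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

struct flight_ring {
	uint8_t *base;      /* contiguous backing memory */
	size_t size;        /* total size in bytes */
	uint64_t write_pos; /* free-running write position; wraps via modulo */
};

static int flight_ring_init(struct flight_ring *r, size_t size)
{
	r->base = malloc(size);
	if (!r->base)
		return -1;
	r->size = size;
	r->write_pos = 0;
	return 0;
}

/* Write len bytes, splitting the copy when it crosses the wrap point. */
static void flight_ring_write(struct flight_ring *r, const void *data,
			      size_t len)
{
	const uint8_t *src = data;
	size_t off = r->write_pos % r->size;
	size_t first = (r->size - off < len) ? r->size - off : len;

	memcpy(r->base + off, src, first);
	memcpy(r->base, src + first, len - first);
	r->write_pos += len;
}
```

Because there is no reader, there is no output path and no consumer position to maintain, which is what makes this variant so much simpler than a full transport.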
> But I am certainly open to suggestions on how to do this, and happy to
> have anything included back into the mainline.
> And I'm still learning about the internals of LTT.
One option would be to modify the tool to understand the LTTng 2.x buffer
layout by stitching pages together by software using the LTTng
libringbuffer "subbuffer table". You can think of it as a 2-level page
table, but one level indexes the sub-buffers, and the next level indexes
the pages within a sub-buffer.
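As a rough illustration of that 2-level lookup (the type names, page geometry, and helper below are invented for clarity; the real layout lives in the libringbuffer headers):

```c
/* Hypothetical sketch of a 2-level sub-buffer table: the first level
 * indexes sub-buffers, the second level indexes the pages within one
 * sub-buffer. A linear buffer offset translates to
 * (sub-buffer, page, offset-in-page), much like a virtual address
 * walking a 2-level page table. */
#include <stddef.h>

#define PAGE_SIZE    4096UL
#define PAGES_PER_SB 4UL  /* pages per sub-buffer (example value) */
#define SB_SIZE      (PAGE_SIZE * PAGES_PER_SB)

struct sb_pages {
	void *page[PAGES_PER_SB]; /* second level: pages of one sub-buffer */
};

struct backend {
	struct sb_pages **array;  /* first level: one entry per sub-buffer */
	unsigned long nr_subbufs;
};

/* Translate a linear buffer offset into the backing page address. */
static void *offset_to_addr(struct backend *b, unsigned long offset)
{
	unsigned long sb_index   = (offset / SB_SIZE) % b->nr_subbufs;
	unsigned long page_index = (offset % SB_SIZE) / PAGE_SIZE;
	unsigned long page_off   = offset % PAGE_SIZE;

	return (char *)b->array[sb_index]->page[page_index] + page_off;
}
```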
A good way to understand its layout is to look at the backend structures referenced below.
In your case, you never care about bufb->buf_rsb (the read-side owned sub-buffer),
because you only ever write into the buffer. buf_rsb is only useful when a reader
takes ownership of a sub-buffer to consume it.
bufb->buf_wsb holds the mapping from each sub-buffer's write-side index within
the buffer to the associated index into bufb->array, which gives access to the
actual sub-buffers and the memory pages backing each of them.
You'll notice that the "id" field within struct lib_ring_buffer_backend_subbuffer
is actually a mask of many fields. To understand how to use it, see the
subbuffer_id*() helper functions and their comments, which get and set the
various information elements contained within the "id" field.
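In the same spirit as those helpers, packing several fields into one word with masks and shifts can be sketched as follows. The bit layout and flag name here are invented for illustration, not LTTng's actual encoding:

```c
/* Hypothetical illustration of an "id" word holding both a sub-buffer
 * index and a flag, with helpers to get and set each element. */
#include <stdint.h>

#define ID_INDEX_MASK  0x0fffffffUL             /* sub-buffer index bits */
#define ID_NOREF_SHIFT 28
#define ID_NOREF_MASK  (1UL << ID_NOREF_SHIFT)  /* example flag bit */

static inline unsigned long subbuf_id_make(unsigned long index, int flag)
{
	return (index & ID_INDEX_MASK) | (flag ? ID_NOREF_MASK : 0);
}

static inline unsigned long subbuf_id_get_index(unsigned long id)
{
	return id & ID_INDEX_MASK;
}

static inline int subbuf_id_get_flag(unsigned long id)
{
	return !!(id & ID_NOREF_MASK);
}

static inline unsigned long subbuf_id_set_flag(unsigned long id)
{
	return id | ID_NOREF_MASK;
}
```

Keeping the index and its bookkeeping flags in a single word is what lets the ring buffer update them atomically with one compare-and-swap.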
So you'll need to use the structures presented above to make sense of the memory
layout of a buffer, and reorganize it into a CTF file that can be read by
Babeltrace or other CTF trace readers.
The algorithm you want to end up doing (offline, on a vmcore) is pretty much
the same as grabbing an online snapshot: iterate from the consumer position up to
the producer position, see lib/ringbuffer/frontend.h:lib_ring_buffer_snapshot().
You will need an extra trick to handle the sub-buffer that was being written to
at the time of the crash: use the "seq" field of struct commit_counters_hot
(lib/ringbuffer/frontend_types.h), which is designed to track the contiguously
committed data within the currently written sub-buffer. It can be consulted at
any point in time (whenever a crash occurs) to populate the last sub-buffer's
content size and packet size, and to find out how much of the last sub-buffer
needs to be copied into the output trace.
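The offline walk can be sketched as below. The struct layout, field names, and sizes are invented stand-ins for what you would actually recover from the vmcore; the "committed" array plays the role of the per-sub-buffer "seq" counter:

```c
/* Hypothetical offline-extraction sketch: walk sub-buffers from the
 * consumer position up to the producer position, copying full
 * sub-buffers whole and truncating the in-flight one at its
 * contiguously committed size. */
#include <string.h>

#define NR_SUBBUF   4UL
#define SUBBUF_SIZE 64UL

struct dead_buffer {
	unsigned char subbuf[NR_SUBBUF][SUBBUF_SIZE]; /* from the vmcore */
	unsigned long consumed;  /* consumer byte position (subbuf-aligned) */
	unsigned long produced;  /* producer byte position (may be mid-subbuf) */
	unsigned long committed[NR_SUBBUF]; /* committed bytes per subbuf */
};

/* Copy the valid trace data into out; returns the number of bytes written. */
static size_t extract_snapshot(const struct dead_buffer *b, unsigned char *out)
{
	size_t written = 0;
	unsigned long pos;

	for (pos = b->consumed; pos < b->produced; pos += SUBBUF_SIZE) {
		unsigned long idx = (pos / SUBBUF_SIZE) % NR_SUBBUF;
		unsigned long remaining = b->produced - pos;
		/* Last, partially written sub-buffer: trust only the
		 * contiguously committed bytes. */
		unsigned long len = remaining < SUBBUF_SIZE ?
				    b->committed[idx] : SUBBUF_SIZE;

		memcpy(out + written, b->subbuf[idx], len);
		written += len;
	}
	return written;
}
```

The bytes gathered this way would then still need to be wrapped in valid CTF packet headers (content size, packet size) so that Babeltrace can read the result.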