[lttng-dev] HugePages shared memory support in LTTng
Jonathan Rajotte-Julien
jonathan.rajotte-julien at efficios.com
Mon Jul 22 15:23:08 EDT 2019
Hi Yiteng,
On Mon, Jul 22, 2019 at 02:44:09PM -0400, Yiteng Guo wrote:
> Hi Jonathan,
>
> I spent the past few days on this problem and finally figured it out.
> Here are the patches I've written.
Sorry for the slow reply; I had other stuff ongoing.
I had a brief discussion about this with Mathieu Desnoyers.
Mathieu mentioned that the page faults you are seeing might be related to
qemu/kvm's use of KSM (kernel samepage merging) [1]. I did not have time to
play around with it and see whether it indeed has an effect. You might be
better off trying it since you are already set up: disable KSM and retry
your experiment (this applies only if you are running inside a VM).
[1] https://www.linux-kvm.org/page/KSM
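In case it is useful, here is a minimal sketch of turning KSM off through its
standard sysfs knob (needs root), roughly equivalent to
`echo 0 > /sys/kernel/mm/ksm/run`:

```
/* Minimal sketch: disable KSM system-wide via sysfs (run as root).
 * Roughly equivalent to: echo 0 > /sys/kernel/mm/ksm/run */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/kernel/mm/ksm/run", "w");

	if (!f) {
		perror("fopen /sys/kernel/mm/ksm/run");
		return 1;
	}
	if (fputs("0", f) == EOF)
		perror("fputs");
	fclose(f);
	return 0;
}
```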
>
> https://github.com/lttng/lttng-ust/compare/master...guoyiteng:hugepages
> https://github.com/lttng/lttng-tools/compare/master...guoyiteng:hugepages
I'll have a look as soon as possible.
>
> These two patches are just ad-hoc support for hugepages and are not
> intended as a pull request. If you want to support hugepages in future
> lttng releases, I am glad to help you with that. What I did here is
> replace `shm_open` with `open` on a file in a hugetlbfs directory (a
> sketch of this substitution follows the command listing below). I also
> modified other parts of the code (such as memory alignment) to make
> them compatible with huge pages. I didn't use the `shm-path` option
> because I noticed that this option relocates not only the shm of the
> ring buffer but also the other shm and metadata files, and we only
> want to use huge pages for the ring buffer here. Here are the commands
> I used to launch an lttng session.
>
> ```
> lttng create
> lttng enable-channel --userspace --subbuf-size=4M --num-subbuf=2 --buffers-pid my-channel
Any particular reason to use per-pid buffering?
We normally recommend per-uid tracing plus lttng track when possible; it
depends on the final use case.
> lttng add-context --userspace --type=perf:thread:page-fault
> lttng enable-event --userspace -c my-channel hello_world:my_first_tracepoint
> lttng start
> ```
>
> My patches worked very well and I didn't get page faults anymore.
> However, the only caveat of this patch is that ring buffers are not
> destroyed correctly. This leads to a problem where every new lttng
> session acquires some hugepages but never releases them. After I
> created and destroyed several sessions, I would get an error telling
> me there were not enough hugepages left to use. I got around this
> problem by restarting the session daemon, but there should be some way
> to have the ring buffers (or their channel) destroyed elegantly when
> their session is destroyed.
That is weird. I would expect the cleanup code to get rid of the ring
buffers as needed, or at least to try and fail.
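One thing worth noting for the hugetlbfs approach: the kernel only returns
huge pages to the pool once the backing file has been unlinked and the last
mapping and file descriptor are gone. A rough sketch of what a teardown path
would need to do (the function and parameter names here are hypothetical,
not lttng-ust internals):

```
/* Hypothetical teardown for a hugetlbfs-backed buffer. The huge pages
 * are released only once the file is unlinked and all references to it
 * are dropped. */
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

void destroy_hugetlb_buffer(const char *path, void *addr, size_t len, int fd)
{
	munmap(addr, len);  /* drop the mapping */
	close(fd);          /* drop the descriptor */
	unlink(path);       /* counterpart of shm_unlink() for hugetlbfs */
}
```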
>
> In the meantime, I am also trying another way to get rid of these page
> faults, which is to prefault the ring-buffer shared memory in my
> program. This solution does not need any modification to the lttng
> source code, which, I think, is a safer way to go. However, to prefault
> the ring-buffer shm, I need to know the address (and size) of the
> ring buffer. Is there any way to learn this piece of information from
> the user program?
AFAIK, we do not expose the address, but I might be wrong here.
How do you plan on prefaulting the pages?
MAP_POPULATE?
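For reference, a minimal sketch of the two usual prefaulting approaches,
assuming one could somehow obtain the fd (or address) and size of the
ring-buffer shm, which, as noted above, lttng-ust does not currently expose:

```
/* Two common ways to prefault a shared mapping (sketch only). */
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Option 1: ask the kernel to fault everything in at mmap() time. */
void *map_prefaulted(int fd, size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_POPULATE, fd, 0);

	return p == MAP_FAILED ? NULL : p;
}

/* Option 2: touch one byte per page of an existing mapping. */
void prefault_by_touching(void *addr, size_t len)
{
	volatile const char *p = addr;
	size_t page = (size_t)sysconf(_SC_PAGESIZE);
	size_t off;

	for (off = 0; off < len; off += page)
		(void)p[off]; /* the read faults the page in */
}
```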
Cheers