[lttng-dev] [-stable 3.8.1 performance regression] madvise POSIX_FADV_DONTNEED
Mathieu Desnoyers
mathieu.desnoyers at efficios.com
Mon Jun 17 10:13:57 EDT 2013
Hi,
CCing lkml on this,
* Yannick Brosseau (yannick.brosseau at gmail.com) wrote:
> Hi all,
>
> We discovered a performance regression in recent kernels with LTTng
> related to the use of fadvise DONTNEED.
> A call to this syscall is present in the LTTng consumer.
>
> The following kernel commit causes the call to fadvise to sometimes be
> much slower.
>
> Kernel commit info:
> mm/fadvise.c: drain all pagevecs if POSIX_FADV_DONTNEED fails to discard
> all pages
> main tree: (since 3.9-rc1)
> commit 67d46b296a1ba1477c0df8ff3bc5e0167a0b0732
> stable tree: (since 3.8.1)
> https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit?id=bb01afe62feca1e7cdca60696f8b074416b0910d
>
> On our test workload, we observe that the call to fadvise takes about
> 4-5 us before this patch is applied. After applying the patch, the
> syscall now takes anywhere from 5 us up to 4 ms (4000 us). The effect
> on LTTng is that the consumer is frozen for this whole period, which
> leads to dropped events in the trace.
We use POSIX_FADV_DONTNEED in LTTng so the kernel knows it's not useful
to keep the trace data around after it is flushed to disk. From what I
gather from the commit changelog, it seems that the POSIX_FADV_DONTNEED
operation now touches kernel data structures shared amongst processors
that have much higher contention/overhead than previously.
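For reference, the consumer-side pattern is basically the following
(simplified sketch with placeholder arguments, not the exact code from
src/common/consumer.c; the real code may use sync_file_range() or
similar instead of fdatasync()):

	/*
	 * Simplified sketch of the consumer-side pattern (placeholders,
	 * not the exact code from src/common/consumer.c): once a
	 * sub-buffer has been written out to the trace file, tell the
	 * kernel the pages are no longer needed in the page cache.
	 */
	#include <fcntl.h>
	#include <unistd.h>

	static int flush_and_drop(int fd, const void *buf, size_t len,
			off_t offset)
	{
		if (pwrite(fd, buf, len, offset) < 0)
			return -1;
		/* Get the data on its way to disk before dropping it. */
		if (fdatasync(fd) < 0)
			return -1;
		/* This is the call that regressed with 67d46b296a1b. */
		return posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
	}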
How does your page cache memory usage behave before/after this kernel
commit?
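(A quick way to get a number is to sample the "Cached:" field of
/proc/meminfo around your workload; rough helper sketch, nothing
LTTng-specific:)

	/*
	 * Rough helper sketch (not LTTng code): read the "Cached:" field
	 * of /proc/meminfo, in kB, to compare page cache usage
	 * before/after the commit.
	 */
	#include <stdio.h>

	static long cached_kb(void)
	{
		FILE *f = fopen("/proc/meminfo", "r");
		char line[256];
		long kb = -1;

		if (!f)
			return -1;
		while (fgets(line, sizeof(line), f)) {
			if (sscanf(line, "Cached: %ld kB", &kb) == 1)
				break;
		}
		fclose(f);
		return kb;
	}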
Also, can you try instrumenting the "count", "start_index" and
"end_index" values within fadvise64_64 with commit
67d46b296a1ba1477c0df8ff3bc5e0167a0b0732 applied and log this through
LTTng? This will tell us whether the lru_add_drain_all() hit is taken
for a good reason, or due to an unforeseen off-by-one type of issue in
the new test:
	if (count < (end_index - start_index + 1)) {
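For instance, something along these lines would do (hand-written hunk,
paraphrased from the commit, so double-check against your tree;
trace_printk() is just a quick stand-in, a proper tracepoint logged
through LTTng is of course fine too):

	/*
	 * Paraphrased from the POSIX_FADV_DONTNEED path in mm/fadvise.c
	 * with commit 67d46b296a1b applied. Only the trace_printk() call
	 * is added instrumentation, the rest is the existing code.
	 */
	if (end_index >= start_index) {
		unsigned long count = invalidate_mapping_pages(mapping,
					start_index, end_index);

		trace_printk("fadvise DONTNEED: count=%lu start_index=%lu end_index=%lu\n",
			     count, (unsigned long)start_index,
			     (unsigned long)end_index);

		/*
		 * If fewer pages were invalidated than expected then it
		 * is possible that some of the pages were on a per-cpu
		 * pagevec for a remote CPU. Drain all pagevecs and try
		 * again.
		 */
		if (count < (end_index - start_index + 1)) {
			lru_add_drain_all();
			invalidate_mapping_pages(mapping, start_index,
					end_index);
		}
	}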
Thanks,
Mathieu
>
> If we remove the call to fadvise in src/common/consumer.c, we don't
> have any dropped events and we don't observe any bad side effects.
> (The added latency seems to come from the new call to
> lru_add_drain_all(). We removed this line and the performance went back
> to normal.)
>
> It's obviously a problem in the kernel, but since it impacts LTTng, we
> wanted to report it here first and ask for advice on what the next
> step should be to solve this problem.
>
> If you want to see for yourself, you can find the trace with the long
> call to fadvise here:
> http://www.dorsal.polymtl.ca/~rbeamonte/3.8.0~autocreated-4469887.tar.gz
>
> Yannick and Raphael
>
> _______________________________________________
> lttng-dev mailing list
> lttng-dev at lists.lttng.org
> http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com