[lttng-dev] [PATCH lttng-tools v3 3/3] Create a dedicated test suite for Perf

Jérémie Galarneau jeremie.galarneau at efficios.com
Fri Jul 8 20:57:47 UTC 2016


On Fri, Jul 8, 2016 at 4:41 PM, Julien Desfossez
<jdesfossez at efficios.com> wrote:
>> >> have_libpfm should be used to set an automake variable and enable the test
>> >> conditionally. Similar to what is done here:
>> >>
>> >> https://github.com/lttng/lttng-tools/blob/master/tests/regression/Makefile.am#L44
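
For context, the linked pattern boils down to something like this
configure.ac fragment (an untested sketch, not the actual patch):

    # Probe for libpfm4 and expose the result as an automake conditional,
    # so that Makefile.am can guard the test with "if HAVE_LIBPFM".
    AC_CHECK_LIB([pfm], [pfm_initialize], [have_libpfm=yes], [have_libpfm=no])
    AM_CONDITIONAL([HAVE_LIBPFM], [test "x$have_libpfm" = "xyes"])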
>> >
>> > Hmm, the intent here is to make the tests fail if the dependency is not
>> > available, but we do not want to enforce it as a hard dependency since
>> > most deployments cannot run these tests. What we want here is for the
>> > test to fail if a user tries to run it without the dependency installed;
>> > then they will install the dependency and run it again. Transparently
>> > disabling these tests would render them useless, because we would think
>> > they had passed.
>>
>> Agreed to not include them in the standard test suite. However, the
>> test should be skipped if the dependency is not found on the system,
>> along with a message explaining why.
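
In raw TAP terms, the skip could look like this (an untested sketch; it
assumes the find_event helper is only built when libpfm is available):

    # Skip the whole test file up front, with an explicit reason, when
    # the libpfm-dependent helper was not built.
    CURDIR=$(dirname "$0")
    if [ ! -x "$CURDIR/find_event" ]; then
        echo "1..0 # Skipped: libpfm4 not installed, cannot look up PMU events"
        exit 0
    fi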
>>
>> Currently the test will fail with the following output if run without
>> libpfm4 installed on the system.
>>
>> $ ./tests/perf/test_perf_raw
>> 1..20
>> # Perf counters
>> ok 1 - Start session daemon
>> ./tests/perf/test_perf_raw: line 50: ./tests/perf//find_event: No such file or directory
>> not ok 2 - Find PMU UNHALTED_REFERENCE_CYCLES
> [...]
>>
>> This doesn't make it obvious why the test is failing.
>>
>> >
>> > There may be a better way of doing it, but we should avoid a magical
>> > "skip" in this case; these tests should fail if the target architecture
>> > does not provide all the Perf-related features we support.
>>
>> I want to make it clear to the user that the test is failing because
>> of a missing dependency, and not because their hardware lacks the
>> required capabilities.
>>
>> Skipping with a clear error message seems appropriate here, especially
>> since the test will not be run automatically.
>
> How about having a first step in the test that checks whether the
> dependency is present and fails if it is not? This would make it clear
> that the dependency is needed to run the test, and it would not
> magically pass if run in the background or in batch.
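
Concretely, that first step could be as simple as this (an untested
plain-TAP sketch, assuming find_event is only built when libpfm is
available):

    # First test of the file: fail explicitly, with a diagnostic, when
    # the libpfm-dependent find_event helper is missing, instead of
    # erroring out halfway through the run.
    CURDIR=$(dirname "$0")
    if [ -x "$CURDIR/find_event" ]; then
        echo "ok 1 - libpfm dependency found (find_event helper built)"
    else
        echo "not ok 1 - libpfm dependency missing"
        echo "# Install libpfm4 and rebuild lttng-tools to run the Perf tests"
        exit 1
    fi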
>
> I am really worried that skipping here will confuse users; I see this
> test suite as a way to guarantee that the Perf-related features all work
> on their system if the test returns 0 at the end.

That's fair. Feel free to have a look at how we use autoconf to generate
the test scripts based on configure-time results in Babeltrace [1][2].

You should be able to export "LTTNG_TOOLS_BUILD_WITH_LIBPFM",
check its value in the script and bail out with an explanation.

[1] https://github.com/efficios/babeltrace/blob/master/tests/lib/test_ctf_writer_complete.in
[2] https://github.com/efficios/babeltrace/blob/master/configure.ac#L363
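
Roughly, a generated test_perf_raw.in could then start like this (a
sketch only; it assumes configure AC_SUBSTs the variable into the
template):

    # Substituted at configure time: "@LTTNG_TOOLS_BUILD_WITH_LIBPFM@"
    # becomes yes or no depending on the libpfm check.
    LTTNG_TOOLS_BUILD_WITH_LIBPFM="@LTTNG_TOOLS_BUILD_WITH_LIBPFM@"

    if [ "x$LTTNG_TOOLS_BUILD_WITH_LIBPFM" != "xyes" ]; then
        echo "Bail out! lttng-tools was configured without libpfm support"
        exit 1
    fi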

Jeremie
>
> Thanks,
>
> Julien



-- 
Jérémie Galarneau
EfficiOS Inc.
http://www.efficios.com

