[lttng-dev] Changed scheduling when using lttng

Mathieu Desnoyers mathieu.desnoyers at efficios.com
Fri Apr 26 12:14:39 EDT 2013


* Mats Liljegren (liljegren.mats2 at gmail.com) wrote:
> On Fri, Apr 26, 2013 at 5:38 PM, Mathieu Desnoyers
> <mathieu.desnoyers at efficios.com> wrote:
> > * Mats Liljegren (liljegren.mats2 at gmail.com) wrote:
> >> >> >> I tried number 1 using --read-timer 0, but "lttng stop" hung at
> >> >> >> "Waiting for data availability", producing lots of dots...
> >> >> >
> >> >> > As I said, we'd need to implement RING_BUFFER_WAKEUP_BY_WRITER when
> >> >> > read-timer is set to 0. It's not implemented currently.
> >> >> >
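
For reference, a rough sketch of what the writer-side wakeup could look
like (again, this is not implemented; field names such as read_wait and
active_readers follow the ring buffer design, but treat them as
assumptions):

#include <linux/wait.h>

/*
 * Sketch only: wake a blocked reader from the writer side once a
 * sub-buffer has been delivered, instead of relying on the read timer.
 */
static void deliver_and_wake(const struct lib_ring_buffer_config *config,
			     struct lib_ring_buffer *buf)
{
	if (config->wakeup == RING_BUFFER_WAKEUP_BY_WRITER
	    && atomic_long_read(&buf->active_readers)
	    && waitqueue_active(&buf->read_wait))
		wake_up_interruptible(&buf->read_wait);
}

The hard part is that this runs in tracing context: waking up a reader
directly from there can deadlock (e.g. when tracing the scheduler), so
a real implementation would need some deferred wakeup mechanism such as
irq_work.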
> >> >> >>
> >> >> >> Would it be possible to let some other (not using nohz mode) CPU to
> >> >> >> flush the buffers?
> >> >> >
> >> >> > I guess that would be option 3) :
> >> >> >
> >> >> > Another option would be to let a single thread in the consumer handle
> >> >> > the read-timer for all streams of the channel, like we do for UST.
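
In consumer terms, that option would boil down to something like the
sketch below (illustrative only: the stream type and the
check_and_flush() callback are placeholders, not the lttng-tools API):

#include <pthread.h>
#include <unistd.h>

struct stream;	/* opaque placeholder for a per-CPU stream */

struct channel_read_timer {
	struct stream **streams;	/* all streams of the channel */
	int nr_streams;
	unsigned int read_timer_us;	/* read-timer period */
	void (*check_and_flush)(struct stream *stream);
	volatile int quit;
};

/* One thread drives the read timer for every stream of the channel. */
static void *read_timer_thread(void *arg)
{
	struct channel_read_timer *chan = arg;

	while (!chan->quit) {
		int i;

		for (i = 0; i < chan->nr_streams; i++)
			chan->check_and_flush(chan->streams[i]);
		usleep(chan->read_timer_us);
	}
	return NULL;
}

Since the thread lives in lttng-consumerd, it never runs on the
shielded/nohz CPUs, which is the point of this option.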
> >> >>
> >> >> Ehm, well, you did say something about implementing it... Sorry for missing that.
> >> >>
> >> >> I guess now the question is which option gives the best
> >> >> characteristics for the least amount of work... Without knowing the
> >> >> design of lttng-modules, I'd think that simply having the timer on
> >> >> another CPU would be a good candidate. Is there anything to watch out
> >> >> for with this solution?
> >> >>
> >> >> Are there any documents describing the lttng-modules design, or is it
> >> >> "join the force, use the source"? I've seen some high-level
> >> >> descriptions showing how tools/libs/modules fit together, but I
> >> >> haven't found anything that describes how lttng-modules is designed.
> >> >
> >> > Papers on the ring buffer design exist, but not on lttng-modules per se.
> >> >
> >> > I think the best solution in the shortest amount of time would be (2):
> >> > using deferrable timers. It's just a flag to pass at timer creation.
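
For illustration, a minimal sketch of option (2), assuming the stock
kernel timer API of that era (init_timer_deferrable()); the actual hook
point in lttng-modules would be the switch-timer setup in
lib_ring_buffer_start_switch_timer(), which this does not reproduce:

#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list switch_timer;
static unsigned long switch_interval;	/* in jiffies */

static void switch_timer_fn(unsigned long data)
{
	/* Flush/switch the current sub-buffer here. */

	/*
	 * Re-arm.  Because the timer is deferrable, it will not pull an
	 * idle CPU out of its low-power state just to fire; it runs
	 * whenever the CPU wakes up for some other reason.
	 */
	mod_timer(&switch_timer, jiffies + switch_interval);
}

static void start_switch_timer(unsigned long interval)
{
	switch_interval = interval;
	init_timer_deferrable(&switch_timer);
	switch_timer.function = switch_timer_fn;
	mod_timer(&switch_timer, jiffies + interval);
}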
> >>
> >> Doesn't that require that the system is idle from time to time? My
> >> application will occupy 100% of the CPU until it finishes, and expects
> >> there to be no ticks during that time.
> >
> > AFAIU, your initial question was about not preventing the CPU from going
> > into low-power mode while tracing. Deferrable timers should do just that.
> >
> > Indeed, the timers will fire on CPUs that are active.
> >
> > I guess what you actually want is zero intrusiveness on shielded CPUs.
> > Indeed, for that kind of use case, you may need to centralise the timer
> > handling in lttng-consumerd.
> 
> I realize I didn't express myself very clearly: there is a distinction
> between "full nohz" and "nohz idle". Full nohz means that if there are
> no competing threads there is no reason to have ticks, so ticks are
> turned off. POSIX timers complicate things, but that is the simplistic
> view of it... This is currently implemented as an extension of nohz
> idle, although the two are quite orthogonal. Full nohz has also been
> called "full dynticks" and "extended nohz"; there is quite a lot of
> naming confusion here.
> 
> I need to read up on the code to see what seems to be the best design.
> I thought just changing lib_ring_buffer_start_switch_timer() so that
> it used "add_timer_on(&buff->switch_timer, 0)" would do the trick,
> perhaps by using some kind of config for this case... But things are
> usually not as simple as they first appear to be.

If you go for this approach, I would recommend having a single timer
handler per channel (rather than per buffer). Within the handler, you
can iterate over each buffer.
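
Roughly, that shape, combined with the add_timer_on() pinning you
mention, could look like the sketch below (untested; it assumes a
switch_timer field moved from the buffer to the channel, and the
for_each_channel_cpu()/channel_get_ring_buffer() helpers follow the
lib/ringbuffer code, so treat the details as assumptions):

#include <linux/timer.h>
#include <linux/jiffies.h>

/* One switch timer per channel, pinned to a housekeeping CPU (0 here). */
static void channel_switch_timer_fn(unsigned long data)
{
	struct channel *chan = (struct channel *)data;
	const struct lib_ring_buffer_config *config = chan->backend.config;
	int cpu;

	/* Flush every per-CPU buffer of the channel from this one CPU. */
	for_each_channel_cpu(cpu, chan) {
		struct lib_ring_buffer *buf =
			channel_get_ring_buffer(config, chan, cpu);

		lib_ring_buffer_switch_slow(buf, SWITCH_ACTIVE);
	}

	/*
	 * Re-arm pinned to CPU 0; the timer is not pending inside its
	 * own handler, so add_timer_on() is legal here.
	 */
	chan->switch_timer.expires = jiffies + chan->switch_timer_interval;
	add_timer_on(&chan->switch_timer, 0);
}

static void channel_start_switch_timer(struct channel *chan)
{
	setup_timer(&chan->switch_timer, channel_switch_timer_fn,
		    (unsigned long)chan);
	chan->switch_timer.expires = jiffies + chan->switch_timer_interval;
	add_timer_on(&chan->switch_timer, 0);
}

This keeps all flush activity off the nohz CPUs. Note that switching a
sub-buffer on behalf of a remote, concurrently-writing CPU needs
support from the ring buffer's synchronization protocol; the sketch
glosses over that.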

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


