[lttng-dev] [RFC PATCH] timekeeping: introduce timekeeping_is_busy()

Mathieu Desnoyers mathieu.desnoyers at efficios.com
Thu Sep 12 09:48:16 EDT 2013


* Peter Zijlstra (peterz at infradead.org) wrote:
> On Wed, Sep 11, 2013 at 11:22:52PM -0400, Mathieu Desnoyers wrote:
> > Cool!
> > 
> > Your design looks good to me. It reminds me of a latch. My only fear is
> > that struct timekeeper is probably too large to be copied every time on
> > the read path. Here is a slightly reworked version that would allow
> > in-place read of "foo" without copy.
> > 
> > struct foo {
> > 	...
> > };
> > 
> > struct latchfoo {
> > 	unsigned int head, tail;
> > 	spinlock_t write_lock;
> > 	struct foo data[2];
> > };
> > 
> > static
> > void foo_update(struct latchfoo *lf,
> > 		void cb(struct foo *foo, void *ctx), void *ctx)
> > {
> > 	spin_lock(&lf->write_lock);
> > 	lf->head++;
> > 	smp_wmb();
> > 	lf->data[lf->head & 1] = lf->data[lf->tail & 1];
> > 	cb(&lf->data[lf->head & 1], ctx);
> 
> You do that initial copy such that the cb gets the previous state to
> work from and doesn't have to do a fetch/complete rewrite?

Yep, my original intent was to simplify life for callers.
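
For instance, with the pre-copy, a caller that only changes one field
can do something like this (untested sketch; "ns" is a made-up field of
struct foo, for illustration only):

static void set_ns_cb(struct foo *foo, void *ctx)
{
	/*
	 * "foo" already holds a copy of the previously published
	 * state, so only the field being changed needs to be written.
	 */
	foo->ns = *(u64 *)ctx;
}

static void example_update_ns(struct latchfoo *lf, u64 ns)
{
	foo_update(lf, set_ns_cb, &ns);
}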

> 
> The alternative is to give the cb function both pointers, old and new
> and have it do its thing.

Good point. The caller doesn't necessarily need to copy the old entry into
the new one: it may very well want to overwrite all the fields.
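
Something along these lines, for instance (untested sketch, named
foo_update2 here just to distinguish it from the version above):

static
void foo_update2(struct latchfoo *lf,
		void cb(const struct foo *prev, struct foo *next, void *ctx),
		void *ctx)
{
	spin_lock(&lf->write_lock);
	lf->head++;
	smp_wmb();
	/*
	 * The callback gets the previously published element and the
	 * element about to be published, and decides whether to copy
	 * from "prev" or rewrite "next" entirely.
	 */
	cb(&lf->data[lf->tail & 1], &lf->data[lf->head & 1], ctx);
	smp_wmb();
	lf->tail++;
	spin_unlock(&lf->write_lock);
}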

> 
> Yet another option is to split the update side into helper functions
> just like you did below for the read side.

OK. Updated code below.

> 
> > 	smp_wmb();
> > 	lf->tail++;
> > 	spin_unlock(&lf->write_lock);
> > }
> > 
> > static
> > unsigned int foo_read_begin(struct latchfoo *lf)
> > {
> > 	unsigned int ret;
> > 
> > 	ret = ACCESS_ONCE(lf->tail);
> > 	smp_rmb();
> > 	return ret;
> > }
> > 
> > static
> > struct foo *foo_read_get(struct latchfoo *lf, unsigned int tail)
> > {
> > 	return &lf->data[tail & 1];
> > }
> > 
> > static
> > int foo_read_retry(struct latchfoo *lf, unsigned int tail)
> > {
> > 	smp_rmb();
> > 	return (ACCESS_ONCE(lf->head) - tail >= 2);
> > }
> > 
> > Comments are welcome,
> 
> Yeah this would work. The foo_read_begin() and foo_read_get() split is a
> bit awkward but C doesn't really encourage us to do any better.

We might be able to do better:


struct foo {
	...
};

spinlock_t foo_lock;	/* provides exclusive update access to the latch */

struct latchfoo {
	unsigned int head, tail;
	struct foo data[2];
};

/**
 * foo_write_begin - begin foo update.
 *
 * @lf: struct latchfoo to update.
 * @prev: pointer to previous element (output parameter).
 * @next: pointer to next element (output parameter).
 *
 * The area pointed to by "next" should be considered uninitialized.
 * The caller needs to have exclusive update access to struct latchfoo.
 */
static
void foo_write_begin(struct latchfoo *lf, const struct foo **prev,
		struct foo **next)
{
	lf->head++;
	smp_wmb();	/* pairs with smp_rmb() in foo_read_retry() */
	*prev = &lf->data[lf->tail & 1];
	*next = &lf->data[lf->head & 1];
}

/**
 * foo_write_end - end foo update.
 *
 * @lf: struct latchfoo.
 *
 * The caller needs to have exclusive update access to struct latchfoo.
 */
static
void foo_write_end(struct latchfoo *lf)
{
	smp_wmb();	/* pairs with smp_rmb() in foo_read_begin() */
	lf->tail++;
}
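
A writer would then look roughly like this (untested; "ns" is a made-up
field of struct foo, and foo_lock provides the exclusive update access
mentioned above):

static
void foo_set_ns(struct latchfoo *lf, u64 ns)
{
	const struct foo *prev;
	struct foo *next;

	spin_lock(&foo_lock);
	foo_write_begin(lf, &prev, &next);
	*next = *prev;		/* start from the previous state... */
	next->ns = ns;		/* ...and update only what changed. */
	foo_write_end(lf);
	spin_unlock(&foo_lock);
}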

/**
 * foo_read_begin - begin foo read.
 *
 * @lf: struct latchfoo to read.
 * @tail: pointer to unsigned int containing tail position (output).
 */
static
struct foo *foo_read_begin(struct latchfoo *lf, unsigned int *tail)
{
	unsigned int ret;

	ret = ACCESS_ONCE(lf->tail);
	smp_rmb();	/* pairs with smp_wmb() in foo_write_end() */
	*tail = ret;
	return &lf->data[ret & 1];
}

/**
 * foo_read_retry - end foo read, trigger retry if needed.
 *
 * @lf: struct latchfoo being read.
 * @tail: tail position returned as output by foo_read_begin().
 *
 * If foo_read_retry() returns nonzero, the data read since
 * foo_read_begin() must be considered invalid, and the read should be
 * restarted from foo_read_begin().
 */
static
int foo_read_retry(struct latchfoo *lf, unsigned int tail)
{
	smp_rmb();	/* pairs with smp_wmb() in foo_write_begin() */
	return (ACCESS_ONCE(lf->head) - tail >= 2);
}
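
And a reader would be along these lines (untested; same made-up "ns"
field), retrying until it gets a coherent snapshot:

static
u64 foo_get_ns(struct latchfoo *lf)
{
	struct foo *p;
	unsigned int tail;
	u64 ns;

	do {
		p = foo_read_begin(lf, &tail);
		ns = p->ns;
	} while (foo_read_retry(lf, tail));

	return ns;
}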


Thoughts?

Thanks,

Mathieu


-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


