[lttng-dev] [rp] [commit] rculfhash: document linearizability guarantees

Paul E. McKenney paulmck at linux.vnet.ibm.com
Mon Apr 23 17:47:17 EDT 2012


On Mon, Apr 23, 2012 at 03:17:45PM -0400, Mathieu Desnoyers wrote:
> Hi Paul,
> 
> * Paul E. McKenney (paulmck at linux.vnet.ibm.com) wrote:
> > On Thu, Apr 19, 2012 at 08:02:57PM -0400, Mathieu Desnoyers wrote:
> > > * Paul E. McKenney (paulmck at linux.vnet.ibm.com) wrote:
> > > > On Wed, Apr 18, 2012 at 04:10:17PM -0400, Mathieu Desnoyers wrote:
> [...]
> 
> > > > This might be easier given a guarantee that writes involving a given
> > > > key are seen in order.  I believe that the hash table does provide this
> > > > guarantee and that it would be good to state it explicitly.
> > > 
> > > I'm wondering whether the reasoning about a single "key" that you
> > > are proposing takes into account that the hash table supports
> > > duplicate keys?
> > 
> > If I understand correctly, the duplicate keys get hidden and uncovered
> > in order.  But I have not analyzed this case carefully.
> 
> When dealing with a table that may contain duplicate keys, it is
> necessary to use "lookup + iteration with get_next_duplicate" to get
> all the nodes that share a given key. If one chooses to use lookup
> without get_next_duplicate on a table with duplicated keys, I don't
> think we should guarantee which nodes are hidden and which are not
> (this would be a constraint on the table that I'm not sure we want to
> impose on ourselves without a good reason).

Ah, I was assuming ordering of the hash chains, with the old duplicates
following the new ones.  If there is no ordering on the chains, then
there can indeed be no ordering of the reads and writes.
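
For concreteness, the "lookup + iteration with get_next_duplicate"
pattern discussed above might look like the following sketch against
the cds_lfht API from urcu/rculfhash.h.  The struct mynode wrapper,
match_key(), and count_duplicates() are hypothetical names invented
for illustration; error handling and table setup are omitted.

#include <urcu.h>		/* RCU flavor, caa_container_of() */
#include <urcu/rculfhash.h>	/* cds_lfht_* API */

struct mynode {
	int key;			/* hypothetical payload */
	int value;
	struct cds_lfht_node node;	/* hash table linkage */
	struct rcu_head rcu_head;	/* for deferred reclamation */
};

/* Match callback: non-zero when @ht_node's key equals @key. */
static int match_key(struct cds_lfht_node *ht_node, const void *key)
{
	struct mynode *n = caa_container_of(ht_node, struct mynode, node);

	return n->key == *(const int *)key;
}

/* Count every node matching @key, not only the first one found. */
static int count_duplicates(struct cds_lfht *ht, unsigned long hash,
			    const int *key)
{
	struct cds_lfht_iter iter;
	int count = 0;

	rcu_read_lock();
	cds_lfht_lookup(ht, hash, match_key, key, &iter);
	while (cds_lfht_iter_get_node(&iter) != NULL) {
		count++;
		cds_lfht_next_duplicate(ht, match_key, key, &iter);
	}
	rcu_read_unlock();
	return count;
}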

> > > > > + * B) a "write" that happens after a given "read" will never
> > > > > + *    have its value returned by that read.
> > > > 
> > > > There is a useful guarantee involving a pair of reads: If a pair
> > > > of reads of a given key are ordered (e.g., by a memory barrier),
> > > > then the second read will return the same value as the first
> > > > read or some later value.
> > > 
> > > Yep, e.g. if we use add_replace to replace a given node with a version
> > > that has an incremented counter, reads that are ordered with memory
> > > barriers are guaranteed to see this counter always incrementing.
> > 
> > Agreed.  Cross-thread ordering might be supplied by communication
> > with some other variable combined with appropriate memory barriers.
> 
> So the read-after-read ordering guarantee seems to make sense only if
> we have a table with no duplicates of a given key (and where
> add_replace is used to update the key's content).

As long as we are choosing not to constrain the ordering of the
duplicate keys in the hash chains, agreed.
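
A sketch of the add_replace counter pattern Mathieu describes, reusing
the hypothetical struct mynode and match_key() from the sketch above.
cds_lfht_add_replace() is atomic with respect to lookups: a reader
sees either the old node or the new one, never both and never neither,
so reads of the key that are ordered by memory barriers observe
non-decreasing counter values.

#include <stdlib.h>	/* free() */

/* Deferred reclamation callback for the displaced node. */
static void free_node(struct rcu_head *head)
{
	free(caa_container_of(head, struct mynode, rcu_head));
}

/* Replace the node matching @key with @new_n (whose ->value the caller
 * has set to the old counter plus one).  @new_n->node must have been
 * initialized with cds_lfht_node_init(). */
static void replace_with_new_counter(struct cds_lfht *ht,
				     unsigned long hash, const int *key,
				     struct mynode *new_n)
{
	struct cds_lfht_node *old;

	rcu_read_lock();
	old = cds_lfht_add_replace(ht, hash, match_key, key, &new_n->node);
	rcu_read_unlock();
	if (old) {
		struct mynode *old_n =
			caa_container_of(old, struct mynode, node);

		/* Free the displaced node only after a grace period. */
		call_rcu(&old_n->rcu_head, free_node);
	}
}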

> If we want to address tables with duplicate keys, then we should discuss
> iteration over duplicate keys. And in this case, we should talk about an
> iteration over duplicate keys that starts after the end of a previous
> iteration (if they overlap, we cannot say anything).

I believe that overlaps in time are OK, at least for lookup operations.
Traversals are another issue entirely, unless we are willing to model
a traversal as a series of lookup operations.
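
By way of contrast, a whole-table traversal might be sketched with the
iteration macro from urcu/rculfhash.h, again reusing the hypothetical
struct mynode.  Each visited node was present at some point during the
walk, but when the walk overlaps concurrent updates there is no single
instant at which all visited nodes were simultaneously in the table.

/* Sum the values of the nodes encountered during one traversal. */
static long sum_values(struct cds_lfht *ht)
{
	struct cds_lfht_iter iter;
	struct mynode *n;
	long sum = 0;

	rcu_read_lock();
	cds_lfht_for_each_entry(ht, &iter, n, node)
		sum += n->value;	/* n was present at some point */
	rcu_read_unlock();
	return sum;
}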

> Thoughts?

You know me.  I never have been much for over-engineering ordering
guarantees.  So let's not guarantee any more than makes sense.

							Thanx, Paul
