[lttng-dev] [rp] [commit] rculfhash: document linearizability guarantees
Mathieu Desnoyers
mathieu.desnoyers at efficios.com
Mon Apr 23 15:17:45 EDT 2012
Hi Paul,
* Paul E. McKenney (paulmck at linux.vnet.ibm.com) wrote:
> On Thu, Apr 19, 2012 at 08:02:57PM -0400, Mathieu Desnoyers wrote:
> > * Paul E. McKenney (paulmck at linux.vnet.ibm.com) wrote:
> > > On Wed, Apr 18, 2012 at 04:10:17PM -0400, Mathieu Desnoyers wrote:
[...]
> > > This might be easier given a guarantee that writes involving a given
> > > key are seen in order. I believe that the hash table does provide this
> > > guarantee and that it would be good to state it explicitly.
> >
> > I'm wondering if the reasoning about a single "key" you are proposing
> > takes into account that the hash table supports duplicate keys?
>
> If I understand correctly, the duplicate keys get hidden and uncovered
> in order. But I have not analyzed this case carefully.
When dealing with a table that may contain duplicate keys, it is
necessary to use "lookup + iteration with get_next_duplicate" to reach
all the nodes that share a given key. If one uses lookup alone
(without get_next_duplicate) on a table with duplicate keys, I don't
think we should guarantee which of the duplicates is returned and
which stay hidden: that would be a constraint on the table I'm not
sure we want to impose on ourselves without a good reason.
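For reference, a minimal sketch of that traversal, using
cds_lfht_lookup() and cds_lfht_next_duplicate() from urcu/rculfhash.h
(the node layout, the hash_key() helper and the match() function are
made up for the example):

#include <urcu.h>               /* RCU read-side lock */
#include <urcu/rculfhash.h>     /* cds_lfht_lookup(), cds_lfht_next_duplicate() */
#include <urcu/compiler.h>      /* caa_container_of() */

struct mynode {
        int key;
        int value;
        struct cds_lfht_node node;      /* chaining within the hash table */
};

/* Placeholder hash function, for the example only. */
static unsigned long hash_key(int key)
{
        return (unsigned long) key;
}

/* Match function: nonzero when the node's key equals *key. */
static int match(struct cds_lfht_node *ht_node, const void *key)
{
        struct mynode *n = caa_container_of(ht_node, struct mynode, node);

        return n->key == *(const int *) key;
}

/*
 * Visit every node whose key equals "key". The lookup positions the
 * iterator on the first match; each cds_lfht_next_duplicate() call
 * moves it to the next node with an equal key, if any.
 */
static void visit_duplicates(struct cds_lfht *ht, int key)
{
        struct cds_lfht_iter iter;
        struct cds_lfht_node *ht_node;

        rcu_read_lock();
        cds_lfht_lookup(ht, hash_key(key), match, &key, &iter);
        while ((ht_node = cds_lfht_iter_get_node(&iter)) != NULL) {
                struct mynode *n =
                        caa_container_of(ht_node, struct mynode, node);

                /* ... use n->value ... */
                cds_lfht_next_duplicate(ht, match, &key, &iter);
        }
        rcu_read_unlock();
}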
>
> > > > + * B) "write" after "read" will never be returned by the read.
> > >
> > > There is a useful guarantee involving a pair of reads: If a pair
> > > of reads of a given key are ordered (e.g., by a memory barrier),
> > > then the second read will return the same value as the first
> > > read or some later value.
> >
> > Yep, e.g. if we use add_replace to replace a given node with a version
> > that has an incremented counter, reads that are ordered with memory
> > barriers are guaranteed to see this counter always incrementing.
>
> Agreed. Cross-thread ordering might be supplied by communication
> with some other variable combined with appropriate memory barriers.
So the read-after-read ordering guarantee only seems to make sense for
a table that holds no duplicates of a given key (and where add_replace
is used to update the key's content).
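As a sketch of that scenario, reusing struct mynode and the
hypothetical hash_key()/match() helpers from above, and treating
->value as the version counter (the versioning policy itself is
invented for the example, and error handling is elided):

#include <assert.h>
#include <stdlib.h>     /* malloc()/free() */

/*
 * Single-writer sketch: publish a copy of the node for "key" with its
 * counter incremented, then free the old copy after a grace period.
 * cds_lfht_add_replace() atomically replaces the matching node (or
 * inserts if none matched) and returns the replaced node, or NULL.
 */
static void bump_counter(struct cds_lfht *ht, int key)
{
        struct cds_lfht_iter iter;
        struct cds_lfht_node *old_node;
        struct mynode *new = malloc(sizeof(*new));

        new->key = key;
        cds_lfht_node_init(&new->node);

        rcu_read_lock();
        cds_lfht_lookup(ht, hash_key(key), match, &key, &iter);
        old_node = cds_lfht_iter_get_node(&iter);
        new->value = old_node ?
                caa_container_of(old_node, struct mynode, node)->value + 1 : 0;
        old_node = cds_lfht_add_replace(ht, hash_key(key), match,
                                        &key, &new->node);
        rcu_read_unlock();

        if (old_node) {
                synchronize_rcu();      /* wait for pre-existing readers */
                free(caa_container_of(old_node, struct mynode, node));
        }
}

/* Read the current counter for "key"; -1 if the key is absent. */
static int read_value(struct cds_lfht *ht, int key)
{
        struct cds_lfht_iter iter;
        struct cds_lfht_node *ht_node;
        int v = -1;

        rcu_read_lock();
        cds_lfht_lookup(ht, hash_key(key), match, &key, &iter);
        ht_node = cds_lfht_iter_get_node(&iter);
        if (ht_node)
                v = caa_container_of(ht_node, struct mynode, node)->value;
        rcu_read_unlock();
        return v;
}

/*
 * Reader: two lookups of the same key, ordered by a memory barrier.
 * The guarantee under discussion is that the second read returns the
 * same counter value as the first, or a later one (assuming the key
 * is never removed).
 */
static void ordered_reads(struct cds_lfht *ht, int key)
{
        int v1, v2;

        v1 = read_value(ht, key);
        cmm_smp_mb();           /* order the two reads */
        v2 = read_value(ht, key);
        assert(v2 >= v1);
}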
If we want to address tables with duplicate keys, then we should state
the guarantee in terms of iteration over the duplicates of a key. In
that case, we should talk about an iteration that starts after the end
of a previous iteration; if the two iterations overlap in time, we
cannot say anything about what they observe.
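Concretely, with visit_duplicates() from the first sketch:

/*
 * Two complete iterations over the duplicates of "key" by the same
 * reader. The second one begins only after the first has ended, so it
 * is meaningful to require that it observe the same state or a more
 * recent one. If the two iterations overlapped in time, no such
 * ordering could be stated.
 */
static void snapshot_twice(struct cds_lfht *ht, int key)
{
        visit_duplicates(ht, key);      /* first full iteration */
        cmm_smp_mb();                   /* orders it against the second */
        visit_duplicates(ht, key);      /* same state as the first, or newer */
}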
Thoughts?
Thanks,
Mathieu
--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com