<div dir="ltr"><div>> Actually, this is not the average chain length: the resize happens when the first chain going<br>> beyond the threshold is encountered. I would expect the average chain length to be somewhere<br>> below that threshold. It would be interesting if you could provide insight into the distribution of<br>> your hash table bucket chain length by adding instrumentation code.<br>
</div><div><span class="gmail-im">Please have a look at the attached files. I printed out &lt;chain length: frequency&gt; histograms<br></span></div><div><span class="gmail-im">right before and after each _do_cds_lfht_resize(). The average chain length, computed as<br>items / buckets, exceeded 8. Every millisecond I added a unique item to the table.</span></div><div><span class="gmail-im">The test was run twice with different hash functions, siphash (64-bit) and lookup3 (32-bit).<br><br></span></div><div><span class="gmail-im">Chains that long are OK, I think, until tables get huge and most node accesses<br>during a lookup become cache misses.<br></span></div><div><span class="gmail-im"><br>
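The accounting itself was straightforward; roughly like this (a simplified sketch, not my actual instrumentation patch; the walk that collects the per-bucket chain lengths from the table is omitted):

```c
#include <assert.h>
#include <stdio.h>

/* Simplified sketch of chain-length instrumentation: given the
 * per-bucket chain lengths (gathered elsewhere by walking the table),
 * build and print a "chain length: frequency" histogram. */
#define MAX_CHAIN_LEN 64

static void chain_length_histogram(const unsigned long *chain_len,
                                   unsigned long nr_buckets,
                                   unsigned long hist[MAX_CHAIN_LEN + 1])
{
    unsigned long i;

    for (i = 0; i <= MAX_CHAIN_LEN; i++)
        hist[i] = 0;
    for (i = 0; i < nr_buckets; i++) {
        unsigned long len = chain_len[i];

        /* Clamp very long chains into the last bin. */
        hist[len > MAX_CHAIN_LEN ? MAX_CHAIN_LEN : len]++;
    }
}

static void print_histogram(const unsigned long hist[MAX_CHAIN_LEN + 1])
{
    unsigned long len;

    for (len = 0; len <= MAX_CHAIN_LEN; len++)
        if (hist[len])
            printf("%lu: %lu\n", len, hist[len]);
}
```
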
> > Perhaps you could make CHAIN_LEN_RESIZE_THRESHOLD a tunable?<br>
> </span>That would be a possibility, yes. Would you like to propose a patch?<br></div>Perhaps later, too busy.<br></div><div class="gmail_extra"><br><div class="gmail_quote">On 27 March 2018 at 19:17, Mathieu Desnoyers <span dir="ltr"><<a href="mailto:mathieu.desnoyers@efficios.com" target="_blank">mathieu.desnoyers@efficios.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">(Please keep the lttng-dev mailing list in CC.)<br>
<span class=""><br>
----- On Mar 27, 2018, at 12:07 PM, Denis Yevgenyevich <a href="mailto:denis.yevgenyevich@gmail.com">denis.yevgenyevich@gmail.com</a> wrote:<br>
<br>
>> When both flags are set, check_resize() is only for the bucket-local chain length<br>
>> algorithm that deals with very small tables. Then, when the table size goes above<br>
>> that threshold, split-counters are used to adjust the required table size. We don't<br>
>> use the split-counters approximation of the number of elements in the table for<br>
>> small tables because they are not accurate enough for small tables. It's a tradeoff<br>
>> between accuracy and accounting overhead here.<br>
> Thanks for the explanation.<br>
<br>
>> Notice that ht_count_add() and ht_count_del() both invoke cds_lfht_resize_lazy_count().<br>
>> It's that function which is responsible for the split-counter based resize algorithm.<br>
>> If both (CDS_LFHT_ACCOUNTING | CDS_LFHT_AUTO_RESIZE) flags are set, the table will<br>
>> still grow after it has gone beyond this condition.<br>
> That's what I overlooked. I've just repeated my test, letting it run for a<br>
> longer time.<br>
> After 4096 items (4-core cpu) check_resize() stopped growing the table.<br>
<br>
</span>4096 buckets is indeed the threshold for a 4-core CPU system.<br>
<span class=""><br>
> After 32768 items ht_count_add() finally grew the table to 32768. That is when the check<br>
> if ((count >> CHAIN_LEN_RESIZE_THRESHOLD) < size)<br>
> went false. The average chain length might have been 8 by that time. Isn't it<br>
> too much?<br>
<br>
</span>Actually, this is not the average chain length: the resize happens when the first chain going<br>
beyond the threshold is encountered. I would expect the average chain length to be somewhere<br>
below that threshold. It would be interesting if you could provide insight into the distribution of<br>
your hash table bucket chain length by adding instrumentation code.<br>
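In sketch form, the check quoted earlier behaves like this (a simplified illustration, not the actual rculfhash code; `count` here is the split-counter approximation of the total item count, and CHAIN_LEN_RESIZE_THRESHOLD is taken to be 3, which matches growth at an average chain length of 2^3 = 8):

```c
#include <assert.h>

/* Simplified illustration of the split-counter growth condition.
 * `count` approximates the total number of items in the table;
 * `size` is the current number of buckets.  With a threshold of 3,
 * growth is deferred until count / size reaches 2^3 = 8. */
#define CHAIN_LEN_RESIZE_THRESHOLD 3

static int needs_grow(unsigned long count, unsigned long size)
{
    /* Mirrors: if ((count >> CHAIN_LEN_RESIZE_THRESHOLD) < size) return; */
    return (count >> CHAIN_LEN_RESIZE_THRESHOLD) >= size;
}
```

With 4096 buckets this first fires at 32768 items, which lines up with the observation above.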
<span class=""><br>
> Perhaps you could make CHAIN_LEN_RESIZE_THRESHOLD a tunable?<br>
<br>
</span>That would be a possibility, yes. Would you like to propose a patch?<br>
<span class=""><br>
> Anyway, I am glad now I understand how this stuff works. Thank you again!<br>
<br>
</span>You're welcome!<br>
<br>
Thanks,<br>
<br>
Mathieu<br>
<br>
> On 27 March 2018 at 16:02, Mathieu Desnoyers<br>
> &lt;<a href="mailto:mathieu.desnoyers@efficios.com">mathieu.desnoyers@efficios.com</a>&gt;<br>
> wrote:<br>
<br>
>> ----- On Mar 27, 2018, at 5:55 AM, Denis Yevgenyevich<br>
<div><div class="h5">>> &lt;<a href="mailto:denis.yevgenyevich@gmail.com">denis.yevgenyevich@gmail.com</a>&gt; wrote:<br>
<br>
>>>> The "accounting" option is meant to be enabled, otherwise you're using the hash table<br>
>>>> only with the bucket-length resize threshold, which has the imperfections you noticed.<br>
>>> Well, given that my software has to maintain precise counts of items inserted<br>
>>> into some of the hashtables, I'd like to be able to avoid any performance<br>
>>> overhead due to accounting.<br>
<br>
>>>> only for small tables. Beyond a certain size, the numbers of elements in the<br>
>>>> hash table are used as a criterion for resize.<br>
>>>> Why is it unexpected that the table stops growing beyond 4096 after 900 insertions?<br>
>>> Ah, thanks. Turns out that the table stopped growing because of the check<br>
>>> if (count >= (1UL << (COUNT_COMMIT_ORDER + split_count_order)))<br>
<br>
>> If both (CDS_LFHT_ACCOUNTING | CDS_LFHT_AUTO_RESIZE) flags are set, the table<br>
>> will<br>
>> still grow after it has gone beyond this condition.<br>
<br>
>> When both flags are set, check_resize() is only for the bucket-local chain<br>
>> length algorithm that deals<br>
>> with very small tables. Then, when the table size goes above that threshold,<br>
>> split-counters are used to<br>
>> adjust the required table size. We don't use the split-counters approximation of<br>
>> the number of elements<br>
>> in the table for small tables because they are not accurate enough for small<br>
>> tables. It's a tradeoff<br>
>> between accuracy and accounting overhead here.<br>
<br>
>> Notice that ht_count_add() and ht_count_del() both invoke<br>
>> cds_lfht_resize_lazy_count(). It's that<br>
>> function which is responsible for the split-counter based resize algorithm.<br>
<br>
>>> I am a bit disappointed about that. Why can't we have tables that resize<br>
>>> 1. automatically<br>
>>> 2. from 10s to 100,000s of buckets<br>
>>> 3. without becoming too large too soon?<br>
<br>
>> The only thing here is that the bucket-length algorithm used for small tables<br>
>> can grow the<br>
>> table more quickly than you would expect if you have hash collisions. But it<br>
>> should not<br>
>> matter that much because it only applies to small tables. Do you experience "too<br>
>> aggressive"<br>
>> hash table growth even beyond the threshold selecting the split-counters resize<br>
>> algorithm?<br>
<br>
>> That threshold is calculated as:<br>
<br>
>> (1UL << (COUNT_COMMIT_ORDER + split_count_order))<br>
<br>
>> Where COUNT_COMMIT_ORDER = 10<br>
>> and split_count_order = log2(nr_cpus rounded up to the next power of 2)<br>
<br>
>> Assuming a 64-core machine:<br>
<br>
>> log2(64) = 6<br>
>> 2 ^ (10 + 6) = 65536<br>
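That arithmetic can be sketched as follows (an illustration using the constants quoted in this thread, not the actual rculfhash code):

```c
#include <assert.h>

/* Sketch of the split-counter resize threshold: COUNT_COMMIT_ORDER
 * is 10, and split_count_order is log2 of nr_cpus rounded up to the
 * next power of two. */
#define COUNT_COMMIT_ORDER 10

/* Smallest order such that 2^order >= v. */
static unsigned int ceil_log2(unsigned long v)
{
    unsigned int order = 0;

    while ((1UL << order) < v)
        order++;
    return order;
}

static unsigned long split_counter_threshold(unsigned long nr_cpus)
{
    return 1UL << (COUNT_COMMIT_ORDER + ceil_log2(nr_cpus));
}
```

For the 4-core machine discussed earlier this yields 4096; for 64 cores it yields 65536.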
<br>
>> Considering that you created your hash table with an initial size of 16 buckets,<br>
>> specified a minimum<br>
>> number of buckets of 16, and a maximum number of buckets of 1 << 16 (65536),<br>
>> you'll always be<br>
>> using the "hash-chain length" algorithm for resize on a 64-core machine. The<br>
>> split-counter algorithm<br>
>> would never kick in.<br>
<br>
>> You should therefore try tweaking the @max_nr_buckets parameter to a larger<br>
>> value when<br>
>> calling cds_lfht_new().<br>
<br>
>> How many cores do you have?<br>
<br>
>> Let me know how it goes,<br>
<br>
>> Thanks,<br>
<br>
>> Mathieu<br>
<br>
</div></div>>>> On 26 March 2018 at 18:32, Mathieu Desnoyers<br>
>>> &lt;<a href="mailto:mathieu.desnoyers@efficios.com">mathieu.desnoyers@efficios.com</a>&gt;<br>
>>> wrote:<br>
<br>
>>>> ----- On Mar 26, 2018, at 11:01 AM, Denis Yevgenyevich<br>
<div><div class="h5">>>>> &lt;<a href="mailto:denis.yevgenyevich@gmail.com">denis.yevgenyevich@gmail.com</a>&gt; wrote:<br>
<br>
>>>>> Hello,<br>
<br>
>>>>> I think rculfhash autoresizing (as enabled by CDS_LFHT_AUTO_RESIZE) does not<br>
>>>>> behave well. Each time _cds_lfht_add finds a hash bucket with more than 2<br>
>>>>> items, it basically tries to quadruple the hash table size. But even with a<br>
>>>>> perfect hash function hash collisions are not that rare, given a large enough<br>
>>>>> table. That is, every now and then the algorithm will come across a 3-long<br>
>>>>> chain and will grow the table unnecessarily.<br>
<br>
>>>>> I created a table with cds_lfht_new(16, 16, 1<<16, CDS_LFHT_AUTO_RESIZE, NULL)<br>
>>>>> and began adding an item each 10ms. Each item had a unique key and the hash<br>
>>>>> function was very good (full 64-bit values of Siphash). Judging by messages I<br>
>>>>> added to rculfhash.c, this was happening:<br>
>>>>> after about 100 insertions, the table was resized to 512<br>
>>>>> after about 500 insertions, the table was resized to 4096<br>
>>>>> after about 1300 insertions, the table was resized to 16384 and immediately to<br>
>>>>> 32768<br>
<br>
>>>>> With CDS_LFHT_AUTO_RESIZE|CDS_LFHT_<wbr>ACCOUNTING the behaviour was different but<br>
>>>>> still unacceptable: the table got resized to 4096 after 900 insertions and then<br>
>>>>> stopped growing!<br>
<br>
>>>> The "accounting" option is meant to be enabled, otherwise you're using the hash<br>
>>>> table only with<br>
>>>> the bucket-length resize threshold, which has the imperfections you noticed.<br>
<br>
>>>> With accounting enabled, the bucket length is used as an approximation for when<br>
>>>> resize is needed<br>
>>>> only for small tables. Beyond a certain size, the numbers of elements in the<br>
>>>> hash table are used<br>
>>>> as a criterion for resize.<br>
<br>
>>>> Why is it unexpected that the table stops growing beyond 4096 after 900<br>
>>>> insertions? How many<br>
>>>> more insertions did you do beyond 900?<br>
<br>
>>>> Thanks,<br>
<br>
>>>> Mathieu<br>
<br>
>>>>> Could someone please implement saner resize conditions, or better yet, add a<br>
>>>>> customisable hook for deciding when and by how much a table should grow?<br>
<br>
>>>>> Cheers, Denis.<br>
<br>
>>>>> ______________________________<wbr>_________________<br>
>>>>> lttng-dev mailing list<br>
</div></div>>>>>> [ mailto:<a href="mailto:lttng-dev@lists.lttng.org">lttng-dev@lists.lttng.<wbr>org</a> | <a href="mailto:lttng-dev@lists.lttng.org">lttng-dev@lists.lttng.org</a> ]<br>
>>>>> [ <a href="https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev" rel="noreferrer" target="_blank">https://lists.lttng.org/cgi-<wbr>bin/mailman/listinfo/lttng-dev</a> |<br>
<span class="">>>>>> <a href="https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev" rel="noreferrer" target="_blank">https://lists.lttng.org/cgi-<wbr>bin/mailman/listinfo/lttng-dev</a> ]<br>
<br>
>>>> --<br>
>>>> Mathieu Desnoyers<br>
>>>> EfficiOS Inc.<br>
</span>>>>> <a href="http://www.efficios.com" rel="noreferrer" target="_blank">http://www.efficios.com</a><br>
<span class=""><br>
>> --<br>
>> Mathieu Desnoyers<br>
>> EfficiOS Inc.<br>
</span>>> <a href="http://www.efficios.com" rel="noreferrer" target="_blank">http://www.efficios.com</a><br>
<div class="HOEnZb"><div class="h5"><br>
--<br>
Mathieu Desnoyers<br>
EfficiOS Inc.<br>
<a href="http://www.efficios.com" rel="noreferrer" target="_blank">http://www.efficios.com</a><br>
</div></div></blockquote></div><br></div>