<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Oct 19, 2016 at 6:03 PM, Mathieu Desnoyers <span dir="ltr"><<a href="mailto:mathieu.desnoyers@efficios.com" target="_blank">mathieu.desnoyers@efficios.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><div>This is because we use call_rcu internally to trigger the hash table<br></div><div>resize.<br></div><div><br></div><div>In cds_lfht_destroy, we start by waiting for "in-flight" resizes to complete.<br></div><div>Unfortunately, this requires that the call_rcu worker thread makes progress. If<br></div><div>cds_lfht_destroy is called from the call_rcu worker thread, it will wait<br></div><div>forever.<br></div><div><br></div><div>One alternative would be to implement our own worker thread scheme<br></div><div>for the rcu HT resize rather than use the call_rcu worker thread. This</div><div>would simplify the cds_lfht_destroy requirements a lot.<br></div><div><br></div><div>Ideally I'd like to re-use the whole call_rcu work dispatch/worker handling<br></div><div>scheme, just as a separate work queue.<br></div><div><br></div><div>Thoughts?<br></div><div><span style="font-family:arial,sans-serif;font-size:small;color:rgb(34,34,34)"></span></div></div></div></blockquote><div><br></div><div>Thank you for explaining. Sounds like a plan: in our production environment there is no issue with having an extra thread for table resizes. 
And nested tables are an important feature.<br></div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><div><span style="font-family:arial,sans-serif;font-size:small;color:rgb(34,34,34)"> </span><br></div></div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"><div></div><div>Thanks,<br></div><div><br></div><div>Mathieu<br></div><div><div class="gmail-h5"><div><br></div><span id="gmail-m_1548600653268589583zwchr">----- On Oct 19, 2016, at 6:03 AM, Evgeniy Ivanov <<a href="mailto:i@eivanov.com" target="_blank">i@eivanov.com</a>> wrote:<br></span></div></div><div><blockquote style="border-left:2px solid rgb(16,16,255);margin-left:5px;padding-left:5px;color:rgb(0,0,0);font-weight:normal;font-style:normal;text-decoration:none;font-family:helvetica,arial,sans-serif;font-size:12pt"><div><div class="gmail-h5"><div dir="ltr">Sorry, I found a partial answer in the docs, which state that cds_lfht_destroy should not be called from a call_rcu thread context. Why does this limitation exist?</div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Oct 19, 2016 at 12:56 PM, Evgeniy Ivanov <span dir="ltr"><<a href="mailto:i@eivanov.com" target="_blank">i@eivanov.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi,<br><br>Each node of the top-level rculfhash has a nested rculfhash. A thread clears the top-level map and then uses rcu_barrier() to wait until everything is destroyed (this is done to check for leaks). 
Recently it started to deadlock sometimes, with the following stacks:<br><br>Thread1:<br><span style="font-family:monospace,monospace" face="monospace, monospace"><br></span><div><span style="font-family:monospace,monospace" face="monospace, monospace">__poll<br>cds_lfht_destroy &lt;---- nested map<br>...<br>free_Node(rcu_head*) &lt;----- node of top level map<br>call_rcu_thread<br></span><br>Thread2:<div><br><span style="font-family:monospace,monospace" face="monospace, monospace">syscall <br>rcu_barrier_qsbr <br>destroy_all<br>main<br></span><br><br>Did call_rcu_thread deadlock with the barrier thread? Or is it some kind of internal deadlock because of the nested maps?<span class="gmail-m_1548600653268589583HOEnZb"><span style="color:rgb(136,136,136)" color="#888888"><br><br><br>-- <br>Cheers,<br>Evgeniy</span></span></div></div></div>
</blockquote></div><br><br clear="all"><br>-- <br><div class="gmail-m_1548600653268589583gmail_signature">Cheers,<br>Evgeniy</div>
</div>
<br></div></div>______________________________<wbr>_________________<br>lttng-dev mailing list<br><a href="mailto:lttng-dev@lists.lttng.org" target="_blank">lttng-dev@lists.lttng.org</a><br><a href="https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev" target="_blank">https://lists.lttng.org/cgi-<wbr>bin/mailman/listinfo/lttng-dev</a><span class="gmail-HOEnZb"><font color="#888888"><br></font></span></blockquote></div><span class="gmail-HOEnZb"><font color="#888888"><br><div>-- <br></div><div>Mathieu Desnoyers<br>EfficiOS Inc.<br><a href="http://www.efficios.com" target="_blank">http://www.efficios.com</a></div></font></span></div></div>
<br></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">Cheers,<br>Evgeniy</div>
</div></div>