[lttng-dev] question about rcu_bp_exit()

Paul E. McKenney paulmck at linux.vnet.ibm.com
Thu May 19 19:53:08 UTC 2016


On Wed, May 18, 2016 at 06:40:03PM +0000, Mathieu Desnoyers wrote:
> ----- On May 18, 2016, at 5:44 AM, songxin <songxin_1980 at 126.com> wrote: 
> 
> > Hi,
> > Now I get a crash because receiving signal SIGSEGV as below.
> 
> > #0 arena_alloc (arena=<optimized out>) at
> > /usr/src/debug/liburcu/0.9.1+git5fd33b1e5003ca316bd314ec3fd1447f6199a282-r0/git/urcu-bp.c:432
> > #1 add_thread () at
> > /usr/src/debug/liburcu/0.9.1+git5fd33b1e5003ca316bd314ec3fd1447f6199a282-r0/git/urcu-bp.c:462
> > #2 rcu_bp_register () at
> > /usr/src/debug/liburcu/0.9.1+git5fd33b1e5003ca316bd314ec3fd1447f6199a282-r0/git/urcu-bp.c:541
> 
> > I read the code of urcu-bp.c and found that "if (chunk->data_len - chunk->used <
> > len)" is at line 432. So I guess that chunk is an invalid pointer.
> > Below is the function rcu_bp_exit().
> 
> > static
> > void rcu_bp_exit(void)
> > {
> > 	mutex_lock(&init_lock);
> > 	if (!--rcu_bp_refcount) {
> > 		struct registry_chunk *chunk, *tmp;
> > 		int ret;
> >
> > 		cds_list_for_each_entry_safe(chunk, tmp,
> > 				&registry_arena.chunk_list, node) {
> > 			munmap(chunk, chunk->data_len
> > 				+ sizeof(struct registry_chunk));
> > 		}
> > 		ret = pthread_key_delete(urcu_bp_key);
> > 		if (ret)
> > 			abort();
> > 	}
> > 	mutex_unlock(&init_lock);
> > }
> 
> > My question is: why is the chunk not deleted from
> > registry_arena.chunk_list before it is munmap()ed?
> 
> It is not expected that any thread would be created after the execution of
> rcu_bp_exit() as a library destructor. Does re-initializing the chunk_list after
> iterating on it within rcu_bp_exit() fix your issue?
> 
> I'm curious about your use-case for creating threads after the library destructor 
> has run. 

I am with Mathieu on this -- not much good can be expected from using
things after their cleanup.  Though I suppose that, given a sufficient
use case, there could at least in theory be an option for manual control
of cleanup.

							Thanx, Paul
