[ltt-dev] [PATCH 3/9] rculfhash: make struct rcu_level aligned

Lai Jiangshan laijs at cn.fujitsu.com
Fri Oct 14 11:00:05 EDT 2011


On 10/14/2011 10:07 PM, Mathieu Desnoyers wrote:
> * Lai Jiangshan (laijs at cn.fujitsu.com) wrote:
>> reduce memory fragmentation
>> add a small slow-path overhead
>>
>> Signed-off-by: Lai Jiangshan <laijs at cn.fujitsu.com>
> 
> Hrm, I see what you want to do here: since we already have
> synchronize_rcu calls in the fini code, you use them to guarantee
> quiescence of the previous iteration instead of using call_rcu. At this
> point in the development process, however, I would rather keep
> allocation/delayed free as decoupled as possible from the algorithm. So
> I'll keep this optimisation in mind, but for now I will stick with the
> call_rcu scheme for all memory frees: the code is easier to modify when
> the allocation/delayed-free concerns are all encapsulated through
> call_rcu.
> 
> Moreover, this patch really is an optimisation, so it should be backed
> by benchmarks showing measurable performance improvements.
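
(For reference, the two retirement schemes being compared reduce to the
minimal sketch below. It uses plain liburcu call_rcu()/synchronize_rcu()
instead of the ht->cds_lfht_* hooks, and struct level, level_free_cb(),
retire_level_call_rcu() and retire_level_sync() are illustrative names,
not rculfhash API.)

#include <stdlib.h>
#include <urcu.h>		/* synchronize_rcu(), call_rcu(), struct rcu_head */
#include <urcu/compiler.h>	/* caa_container_of() */

struct level {
	struct rcu_head head;	/* only needed by the call_rcu() scheme */
	unsigned long nodes[];
};

/*
 * Scheme currently in rculfhash.c: each level embeds an rcu_head and is
 * handed to call_rcu(); allocation and delayed free stay encapsulated
 * in a single callback.
 */
static void level_free_cb(struct rcu_head *head)
{
	free(caa_container_of(head, struct level, head));
}

static void retire_level_call_rcu(struct level *l)
{
	call_rcu(&l->head, level_free_cb);
}

/*
 * Scheme proposed by the patch: no rcu_head in the level at all; the
 * caller waits for a grace period itself before freeing.
 */
static void retire_level_sync(struct level *l)
{
	synchronize_rcu();
	free(l);
}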

Allocations of (1UL << order) size are friendly to the memory allocator,
especially when we port this code into the kernel.
The slow-path overhead does not worry me.
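
(A quick illustration of the size argument, assuming 16-byte nodes on a
typical 64-bit build; dummy_node and rcu_head_like below are rough
stand-ins for struct _cds_lfht_node and struct rcu_head, not the real
definitions.)

#include <stdio.h>

struct dummy_node {			/* ~ struct _cds_lfht_node */
	void *next;
	unsigned long reverse_hash;	/* 16 bytes on a typical 64-bit build */
};

struct rcu_head_like {			/* ~ struct rcu_head */
	void *next;
	void (*func)(void *head);	/* 16 bytes on a typical 64-bit build */
};

int main(void)
{
	unsigned long order;

	for (order = 4; order <= 10; order++) {
		unsigned long len = 1UL << order;
		size_t nodes_only = len * sizeof(struct dummy_node);
		size_t with_head = sizeof(struct rcu_head_like) + nodes_only;

		printf("order %2lu: nodes only %6zu bytes, with rcu_head %6zu bytes\n",
		       order, nodes_only, with_head);
	}
	/*
	 * A buddy-style allocator (e.g. the kernel page allocator) rounds
	 * the "with rcu_head" size up to the next power of two, nearly
	 * doubling the allocation for large orders, while the nodes-only
	 * size fits a 2^n block exactly.
	 */
	return 0;
}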

More importantly, it makes small tables mergeable; see the next round of
patches.

I assume this patch will be merged unless you reject the related patches
in the next round.

Thanks.
Lai

> 
> Thanks!
> 
> Mathieu
> 
>> ---
>>  rculfhash.c |   22 ++++++++++------------
>>  1 files changed, 10 insertions(+), 12 deletions(-)
>>
>> diff --git a/rculfhash.c b/rculfhash.c
>> index e379c71..49f3637 100644
>> --- a/rculfhash.c
>> +++ b/rculfhash.c
>> @@ -209,7 +209,6 @@ struct ht_items_count {
>>  } __attribute__((aligned(CAA_CACHE_LINE_SIZE)));
>>  
>>  struct rcu_level {
>> -	struct rcu_head head;
>>  	struct _cds_lfht_node nodes[0];
>>  };
>>  
>> @@ -712,14 +711,6 @@ unsigned long _uatomic_max(unsigned long *ptr, unsigned long v)
>>  	return v;
>>  }
>>  
>> -static
>> -void cds_lfht_free_level(struct rcu_head *head)
>> -{
>> -	struct rcu_level *l =
>> -		caa_container_of(head, struct rcu_level, head);
>> -	poison_free(l);
>> -}
>> -
>>  /*
>>   * Remove all logically deleted nodes from a bucket up to a certain node key.
>>   */
>> @@ -1131,8 +1122,7 @@ void init_table(struct cds_lfht *ht,
>>  		if (CMM_LOAD_SHARED(ht->t.resize_target) < (!i ? 1 : (1UL << i)))
>>  			break;
>>  
>> -		ht->t.tbl[i] = calloc(1, sizeof(struct rcu_level)
>> -				+ (len * sizeof(struct _cds_lfht_node)));
>> +		ht->t.tbl[i] = calloc(1, len * sizeof(struct _cds_lfht_node));
>>  		assert(ht->t.tbl[i]);
>>  
>>  		/*
>> @@ -1218,6 +1208,7 @@ void fini_table(struct cds_lfht *ht,
>>  		unsigned long first_order, unsigned long len_order)
>>  {
>>  	long i, end_order;
>> +	void *free_by_rcu = NULL;
>>  
>>  	dbg_printf("fini table: first_order %lu end_order %lu\n",
>>  		   first_order, first_order + len_order);
>> @@ -1243,6 +1234,8 @@ void fini_table(struct cds_lfht *ht,
>>  		 * return a logically removed node as insert position.
>>  		 */
>>  		ht->cds_lfht_synchronize_rcu();
>> +		if (free_by_rcu)
>> +			free(free_by_rcu);
>>  
>>  		/*
>>  		 * Set "removed" flag in dummy nodes about to be removed.
>> @@ -1252,12 +1245,17 @@ void fini_table(struct cds_lfht *ht,
>>  		 */
>>  		remove_table(ht, i, len);
>>  
>> -		ht->cds_lfht_call_rcu(&ht->t.tbl[i]->head, cds_lfht_free_level);
>> +		free_by_rcu = ht->t.tbl[i];
>>  
>>  		dbg_printf("fini new size: %lu\n", 1UL << i);
>>  		if (CMM_LOAD_SHARED(ht->in_progress_destroy))
>>  			break;
>>  	}
>> +
>> +	if (free_by_rcu) {
>> +		ht->cds_lfht_synchronize_rcu();
>> +		free(free_by_rcu);
>> +	}
>>  }
>>  
>>  struct cds_lfht *_cds_lfht_new(cds_lfht_hash_fct hash_fct,
>> -- 
>> 1.7.4.4
>>
> 
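
(For clarity, the fini_table() hunk above boils down to the standalone
pattern sketched below: reuse the grace period each iteration already
waits for to free the level retired on the previous iteration, then
flush the last one after the loop. The table[] array, remove_level() and
shrink_levels() are simplified stand-ins, not the rculfhash API.)

#include <stdlib.h>
#include <urcu.h>	/* synchronize_rcu() */

#define MAX_ORDER	16

static void *table[MAX_ORDER];	/* one allocation per order, as in ht->t.tbl[] */

/* Stand-in for remove_table(): unlink the order-i dummy nodes so that
 * no new reader can reach table[i]. */
static void remove_level(long i)
{
	(void) i;
	/* ... set "removed" flags and unlink order-i dummy nodes ... */
}

static void shrink_levels(long first_order, long end_order)
{
	void *free_by_rcu = NULL;
	long i;

	for (i = end_order - 1; i >= first_order; i--) {
		/*
		 * A grace period must elapse before the order-i dummy
		 * nodes may be unlinked; that same wait also guarantees
		 * that no reader still references the level retired on
		 * the previous iteration, so it can be freed here without
		 * an extra call_rcu(). free(NULL) is a no-op on the first
		 * iteration.
		 */
		synchronize_rcu();
		free(free_by_rcu);

		remove_level(i);
		free_by_rcu = table[i];
		table[i] = NULL;
	}

	/* The last retired level still needs one more grace period. */
	if (free_by_rcu) {
		synchronize_rcu();
		free(free_by_rcu);
	}
}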
