[ltt-dev] [UST PATCH] libust: Add locking around fork calls
Mathieu Desnoyers
compudj at krystal.dyndns.org
Mon Feb 28 08:58:31 EST 2011
* Nils Carlson (nils.carlson at ericsson.com) wrote:
> On Thu, 24 Feb 2011, Nils Carlson wrote:
>
>> A finer scheme could be implemented by using system V semaphores
>> and could look like this:
>>
>> thread executing tracectl command:
>> lock(fork_sem_mutex)
>> increment(fork_sem)
>> unlock(fork_sem_mutex)
>> execute_command()
>> decrement(fork_sem)
>>
>> thread executing fork:
>> lock(fork_sem_mutex) // now no new commands can be issued
>> wait(fork_sem == 0) // wait for all commands to finish
>> fork()
>> unlock(fork_sem_mutex) // let commands start again.
>>
>> I think this would work well, but system V semaphores are sort
>> of heavy. So maybe somebody can come up with something prettier?
>
> This could probably be done more nicely with an rwlock or a cond_wait as well.
> I think either alternative would be simpler than System V semaphores.
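Just so we are looking at the same thing, the cond_wait variant you mention could be sketched roughly as follows (names are made up for illustration, this is not the patch below):

#include <pthread.h>
#include <unistd.h>
#include <sys/types.h>

static pthread_mutex_t fork_state_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t fork_state_cond = PTHREAD_COND_INITIALIZER;
static int commands_in_flight;
static int fork_pending;

/* Thread executing a tracectl command */
static void run_command(void (*execute_command)(void))
{
	pthread_mutex_lock(&fork_state_mutex);
	while (fork_pending)		/* no new commands while a fork waits */
		pthread_cond_wait(&fork_state_cond, &fork_state_mutex);
	commands_in_flight++;
	pthread_mutex_unlock(&fork_state_mutex);

	execute_command();

	pthread_mutex_lock(&fork_state_mutex);
	commands_in_flight--;
	pthread_cond_broadcast(&fork_state_cond);
	pthread_mutex_unlock(&fork_state_mutex);
}

/* Thread executing fork */
static pid_t fork_with_quiescence(void)
{
	pid_t pid;

	pthread_mutex_lock(&fork_state_mutex);
	fork_pending = 1;			/* block new commands */
	while (commands_in_flight > 0)		/* wait for in-flight commands */
		pthread_cond_wait(&fork_state_cond, &fork_state_mutex);
	pid = fork();
	fork_pending = 0;			/* let commands start again */
	pthread_cond_broadcast(&fork_state_cond);
	pthread_mutex_unlock(&fork_state_mutex);
	return pid;
}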
Hrm, you should have a look at pthread_atfork(3). It describes exactly
the type of behavior we are experiencing (only a single thread remaining
in the child after a fork). The man page documents that pthread_atfork
can be used to register prepare/parent/child handlers. I seem to remember
Pierre-Marc not wanting to use this mechanism because he wanted to handle
applications that do not use libpthread, so he implemented his own
prepare/parent/child handlers by overriding the fork symbol.
Also, the man page specifically states that "The mutexes are not usable
after the fork and must be initialized with pthread_mutex_init in the
child process. This is a limitation of the current implementation and
might or might not be present in future versions." So it looks like my
earlier advice about protecting against other threads was indeed
appropriate.
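For completeness, the pthread_atfork route would look roughly like the
following (illustrative names only; this is not what libust currently does,
since we override the fork symbol instead):

#include <pthread.h>

static pthread_mutex_t listener_state_mutex = PTHREAD_MUTEX_INITIALIZER;

static void atfork_prepare(void)
{
	/* Quiesce the listener: nothing may run across the fork. */
	pthread_mutex_lock(&listener_state_mutex);
}

static void atfork_parent(void)
{
	/* Parent: the mutex is still valid, just release it. */
	pthread_mutex_unlock(&listener_state_mutex);
}

static void atfork_child(void)
{
	/*
	 * Child: per the caveat quoted above, don't trust the inherited
	 * mutex state; re-initialize it instead.
	 */
	pthread_mutex_init(&listener_state_mutex, NULL);
}

static void __attribute__((constructor)) install_fork_handlers(void)
{
	pthread_atfork(atfork_prepare, atfork_parent, atfork_child);
}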
So we seem to have a choice here: either we go for mutual exclusion
between fork calls and execution of our libust listener thread, or we
re-initialize every mutex and every static data structure used by the
listener thread. Note that any memory dynamically allocated by that
thread, and reachable only through pointers on the listener thread's
stack, would be leaked in the child. (Side-note: I just tried the
pthread cleanup functions, and they don't seem to be executed at fork
time. That would have been an elegant way to let a thread run its
cleanup when it disappears from the child, but it does not seem to
work. I'm currently wondering if this will leak thread stacks too. :-/ )
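For the record, what I tried was along these lines (sketch with made-up
names). Cleanup handlers only run on pthread_exit() or cancellation, which
would explain why nothing fires when the thread simply vanishes across
fork():

#include <pthread.h>
#include <stddef.h>

static void listener_cleanup(void *arg)
{
	/* Close fds, free buffers owned by the listener, etc. */
}

static void *listener_main_with_cleanup(void *p)
{
	pthread_cleanup_push(listener_cleanup, NULL);

	/* ... epoll event loop would go here ... */

	pthread_cleanup_pop(1);	/* also run the handler on normal return */
	return NULL;
}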
I would personally tend to err on the safe side: use mutual exclusion
between fork and the listener thread, so we know the thread is in a
quiescent state and we don't leak memory, keep locks held, etc. If we do
that, re-initializing all the static data structures (and locks) used by
the listener thread is not strictly required, but I would tend to do it
anyway, just to be sure we don't end up in an unplanned state.
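If we go that way on top of the patch below, the child-side path could do
the extra re-initialization roughly like this (I'm assuming a hook named
ust_after_fork_child here; adjust to whatever the child path actually
calls):

void ust_after_fork_child(ust_fork_info_t *fork_info)
{
	/* Restores signals and releases fork_mutex taken in ust_before_fork. */
	ust_after_fork_common(fork_info);

	/*
	 * Belt and braces: re-initialize the listener's static locks so the
	 * child does not keep running on inherited, possibly stale, state.
	 */
	pthread_mutex_init(&fork_mutex, NULL);
}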
We could go for an even safer route if we see that pthread stacks are
leaked: the fork wrapper could take a lock, stop and join the listener
thread, do the fork, and restart the listener afterwards. The thread
doing the fork could keep the file descriptors used by the listener
thread open and hand them back to the new listener. But it would be good
to have a test-case that looks at the process memory mappings to see
whether our current approach leaks thread stack memory.
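A first rough version of such a test could simply compare /proc/self/maps
in the parent (before the fork) and in the child, with the application
linked against libust so the listener thread actually exists. Counting
mappings is only a heuristic; a real test would look for an extra
thread-stack-sized anonymous mapping. Sketch:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static int count_mappings(void)
{
	FILE *f = fopen("/proc/self/maps", "r");
	char line[512];
	int n = 0;

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f))
		n++;
	fclose(f);
	return n;
}

int main(void)
{
	int before = count_mappings();
	pid_t pid = fork();

	if (pid == 0) {
		/* Child: the listener thread no longer exists here. */
		int after = count_mappings();
		printf("mappings before fork: %d, in child: %d\n",
		       before, after);
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	return 0;
}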
We should note that as soon as the child executes an exec(), this leak
goes away (this is the common scenario). It's applications that use fork
without exec that I'm concerned about.
Thoughts?
Thanks,
Mathieu
>
> /Nils
>
>
>> This is a little bit of an RFC patch, so please say what you think.
>>
>> Signed-off-by: Nils Carlson <nils.carlson at ericsson.com>
>> ---
>> libust/tracectl.c | 16 +++++++++++++---
>> 1 files changed, 13 insertions(+), 3 deletions(-)
>>
>> diff --git a/libust/tracectl.c b/libust/tracectl.c
>> index f720244..fd69da6 100644
>> --- a/libust/tracectl.c
>> +++ b/libust/tracectl.c
>> @@ -71,6 +71,9 @@ static struct cds_list_head open_buffers_list = CDS_LIST_HEAD_INIT(open_buffers_
>>
>> static struct cds_list_head ust_socks = CDS_LIST_HEAD_INIT(ust_socks);
>>
>> +/* Fork protection mechanism */
>> +static pthread_mutex_t fork_mutex = PTHREAD_MUTEX_INITIALIZER;
>> +
>> /* volatile because shared between the listener and the main thread */
>> int buffers_to_export = 0;
>>
>> @@ -1080,6 +1083,7 @@ void *listener_main(void *p)
>> }
>>
>> for (i = 0; i < nfds; i++) {
>> + pthread_mutex_lock(&fork_mutex);
>> epoll_sock = (struct ustcomm_sock *)events[i].data.ptr;
>> if (epoll_sock == listen_sock) {
>> addr_size = sizeof(struct sockaddr);
>> @@ -1088,6 +1092,7 @@ void *listener_main(void *p)
>> (socklen_t *)&addr_size);
>> if (accept_fd == -1) {
>> PERROR("listener_main: accept failed");
>> + pthread_mutex_unlock(&fork_mutex);
>> continue;
>> }
>> ustcomm_init_sock(accept_fd, epoll_fd,
>> @@ -1108,6 +1113,7 @@ void *listener_main(void *p)
>> epoll_sock->fd);
>> }
>> }
>> + pthread_mutex_unlock(&fork_mutex);
>> }
>> }
>>
>> @@ -1578,9 +1584,6 @@ static void ust_fork(void)
>> /* Get the pid of the new process */
>> processpid = getpid();
>>
>> - /* break lock if necessary */
>> - ltt_unlock_traces();
>> -
>> /*
>> * FIXME: This could be prettier, we loop over the list twice and
>> * following good locking practice should lock around the loop
>> @@ -1657,6 +1660,11 @@ void ust_before_fork(ust_fork_info_t *fork_info)
>> - only do this if tracing is active
>> */
>>
>> + /* Take the fork lock to make sure we are not in the middle of
>> + * something in the listener thread.
>> + */
>> + pthread_mutex_lock(&fork_mutex);
>> +
>> /* Disable signals */
>> sigfillset(&all_sigs);
>> result = sigprocmask(SIG_BLOCK, &all_sigs, &fork_info->orig_sigs);
>> @@ -1677,6 +1685,8 @@ static void ust_after_fork_common(ust_fork_info_t *fork_info)
>> PERROR("sigprocmask");
>> return;
>> }
>> +
>> + pthread_mutex_unlock(&fork_mutex);
>> }
>>
>> void ust_after_fork_parent(ust_fork_info_t *fork_info)
>> --
>> 1.7.1
>>
>>
>
--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com