<div dir="ltr">Hi Mathieu,<div><br></div><div>I've been able to analyze this sessiond/consumerd hang scenario, and I see that the fix is not available in 2.6.1 or 2.7.0.</div><div><br></div><div>Root cause:</div><div><br></div><div>The hang is caused by a race condition on the consumer_metadata_socket between the following two code flows:</div><div>Flow 1. do_sync_metadata() -> lttng_ustconsumer_sync_metadata() -> lttng_ustconsumer_request_metadata() -> lttcomm_recv_unix_sock() -> recvmsg()</div><div><br></div><div>and</div><div><br></div><div>Flow 2. lttng_consumer_recv_cmd() -> lttng_ustconsumer_recv_cmd() -> lttng_ustconsumer_recv_metadata()</div><div><br></div><div>Flow 1 initiates a metadata request and then recvmsg()s for the reply from sessiond on the consumer_metadata_socket (it holds both the metadata_socket_lock and the stream->lock), but this socket is simultaneously being polled by Flow 2. Once in a while Flow 2 receives the reply instead, and Flow 1 hangs in recvmsg(). Since Flow 1 still holds the stream->lock, this later turns into a deadlock in on_buffer_ready() when the metadata is flushed.</div><div><br></div><div>I think taking the metadata_socket_lock in Flow 2 before attempting to receive may address this issue, but that is a bit complicated, as the message would already have been received by the time that lock is taken. 
The other alternative is to remove the recv() from Flow 1 entirely and let Flow 2 take care of receiving the reply.</div><div><br></div><div>Please let me know if this has been fixed in a later build; maybe I searched it wrong.</div><div><br></div><div><br></div><div>Regards,</div><div>Aravind.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Dec 12, 2015 at 4:13 AM, Mathieu Desnoyers <span dir="ltr"><<a href="mailto:mathieu.desnoyers@efficios.com" target="_blank">mathieu.desnoyers@efficios.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div style="font-family:arial,helvetica,sans-serif;font-size:12pt;color:#000000"><div>Hi,</div><div><br></div><div>Please upgrade to 2.6.1 or 2.7.0, those are old issues.<br></div><div><br></div><div>Thanks,<br></div><div><br></div><div>Mathieu<br></div><div><div class="h5"><div><br></div><div><br></div><div><span>----- On Dec 11, 2015, at 1:42 PM, Aravind HT <<a href="mailto:aravind.ht@gmail.com" target="_blank">aravind.ht@gmail.com</a>> wrote:<br></span></div></div></div><div><blockquote style="border-left:2px solid #1010ff;margin-left:5px;padding-left:5px;color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt"><div><div class="h5"><div dir="ltr"><br>NPBS, understood; I will continue to investigate this. I got a bit side-tracked with the sessiond hang issue that I mentioned earlier. 
<div>Making some progress on that; I will update as soon as I find the root cause. It looks like a mutex deadlock between <br></div><div><div>the following threads in the consumerd daemon</div><br><div>#1 0x0000000000410b5d in lttcomm_recv_unix_sock (sock=26, buf=buf@entry=0x7f16acca57f0, len=len@entry=4476) at unix.c:175<div>#2 0x000000000041dc96 in lttng_ustconsumer_request_metadata (ctx=ctx@entry=0x640d10, channel=0x7f16980192b0, timer=timer@entry=0, wait=wait@entry=0) at ust-consumer.c:2331</div><div>#3 0x000000000041ff3c in lttng_ustconsumer_sync_metadata (ctx=ctx@entry=0x640d10, metadata=metadata@entry=0x7f169801aa40) at ust-consumer.c:1829</div><div>#4 0x000000000040eabf in do_sync_metadata (ctx=<optimized out>, metadata=<optimized out>) at consumer-stream.c:468</div><div>#5 consumer_stream_sync_metadata (ctx=ctx@entry=0x640d10, session_id=8) at consumer-stream.c:545</div><div>#6 0x000000000041d316 in lttng_ustconsumer_read_subbuffer (stream=stream@entry=0x7f1698018fa0, ctx=ctx@entry=0x640d10) at ust-consumer.c:2063</div><div>#7 0x000000000040b613 in lttng_consumer_read_subbuffer (stream=0x7f1698018fa0, ctx=0x640d10) at consumer.c:3203</div><div>#8 0x0000000000409608 in consumer_thread_data_poll (data=0x640d10) at consumer.c:2557</div><div>#9 0x00007f16af6a6fe3 in start_thread (arg=0x7f16acca7700) at pthread_create.c:312</div><div>#10 0x00007f16af3dcafd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111</div></div></div><br><div>and </div><br><div>consumer_thread_metadata_poll(), which calls lttng_consumer_read_subbuffer() / on_buffer_ready(), which takes pthread_mutex_lock(stream->lock) and then hangs</div><div><br><br><div>This again can be reproduced by the same script, just running consecutive sessions of short duration (<10 seconds) with traces getting captured on a remote node.</div><div>I hit this every time I run the script.</div><br><div>Regards,</div><div>Aravind.</div><br><div class="gmail_extra"><br></div><div class="gmail_extra"><br><div 
class="gmail_quote">On Fri, Dec 11, 2015 at 11:33 PM, Jonathan Rajotte Julien <span dir="ltr"><<a href="mailto:Jonathan.rajotte-julien@efficios.com" target="_blank">Jonathan.rajotte-julien@efficios.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hi Aravind,<span><br>
<br>
On 2015-12-11 12:16 PM, Aravind HT wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
Hi,<br>
<br>
Did you get a chance to reproduce the problem with the scripts ? Let me know if you need any help running it.<br>
</blockquote>
<br></span>
Time is a scarce resource, unfortunately. But until I have the time to look at it, you might want to investigate whether you can reproduce it on 2.6.1/2.7.0/master on 2 regular (x86_64, Ubuntu/Debian) VMs; this would really help.<br>
<br>
Cheers!<br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<br>
Regards,<br>
Aravind.<div><div><br>
<br>
On Wed, Dec 9, 2015 at 3:04 PM, Aravind HT <<a href="mailto:aravind.ht@gmail.com" target="_blank">aravind.ht@gmail.com</a> <mailto:<a href="mailto:aravind.ht@gmail.com" target="_blank">aravind.ht@gmail.com</a>>> wrote:<br>
<br>
Sorry about that, not sure how it got missed.<br>
Here it is.<br>
<br>
<br>
# This needs a two-node setup (1. local current node 2. remote node)<br>
# relayd runs on the current node where traces are captured from<br>
the remote node<br>
# remote node runs test applications which generate traces.<br>
# the launch_script_RN is executed on the current node and uses<br>
ssh to execute commands on the remote node. So this part may not<br>
work in every case and may prompt for a password.<br>
# if experiencing problems with ssh , kindly check<br>
<a href="http://serverfault.com/questions/241588/how-to-automate-ssh-login-with-password" rel="noreferrer" target="_blank">http://serverfault.com/questions/241588/how-to-automate-ssh-login-with-password</a><br>
<br>
# ==================== To Run =============================<br>
launch_script_RN.py self_profile -c /tmp/configFile.txt<br>
<br>
<br>
<br>
# configFile.txt is the file which has configuration params that<br>
launchScript<br>
# needs to configure lttng sessions. Below is an explanation of<br>
the different options.<br>
# =================== configFile.txt =============================<br>
<br>
[section1]<br>
# final out put file path<br>
OutputFile = /tmp/Final_report.txt<br>
# the remote node hostname on which test applications run and the<br>
test sessions will be created; this should be something that could<br>
be used with ssh. Traces will be transported from this node to the<br>
lttng_relayd running on the current node.<br>
Node = MY_REMOTE_NODE<br>
# Sub buffer size to start this with<br>
SubBufSize = 16k<br>
# Sub buffer count<br>
SubBufCount = 4<br>
# per uid buffer<br>
BufferScheme = --buffers-uid<br>
# yes<br>
EnableTracing = yes<br>
# Bit rate of the test applications. Comma-separated; for example "1,<br>
3, 3, 50" says 4 test applications producing 1, 3, 3, and 50 kb/s<br>
traces.<br>
# So with the below, we just start with 1 test application<br>
producing 10 kb/s<br>
TestApps = 10<br>
# session lifetime in seconds<br>
TestTime = 10<br>
# Max number of successive sessions to configure. If n, then n-1<br>
sessions are run; e.g. MaxRun = 2 will run 1 session.<br>
MaxRun = 100<br>
<br>
<br>
# ==================== Place the following files under<br>
===============<br>
<br>
# /tmp on the remote node<br>
clean_RemNode_apps.sh<br>
report_lttng_script.sh<br>
<br>
# rest of the scripts under /usr/sbin on the current local node on<br>
which lttng_relayd runs<br>
# Define a trace point MY_TRACE to take a single string arg with<br>
LOG_TEST_APP_PROFILING as the provider, compile test lttng_testApp<br>
and place it under /usr/sbin of the remote host<br>
<br>
# in launch_script_RN.py change currentNodeIP to the IP address on<br>
which relayd is receiving, default ports are used.<br>
<br>
# lttng_relayd is started as<br>
/usr/bin/lttng-relayd -o /var/log/lttng-traces -d<br>
<br>
# lttng_sessiond is started as<br>
/usr/bin/lttng-sessiond --consumerd32-path<br>
/usr/lib/lttng/libexec/lttng-consumerd --consumerd32-libdir<br>
/usr/lib/ --consumerd64-path<br>
/usr/lib64/lttng/libexec/lttng-consumerd --consumerd64-libdir<br>
/usr/lib64/ -b --no-kernel<br>
<br>
<br>
<br>
<br>
Regards,<br>
Aravind.<br>
<br>
On Wed, Dec 9, 2015 at 12:57 AM, Jonathan Rajotte Julien<br>
<<a href="mailto:Jonathan.rajotte-julien@efficios.com" target="_blank">Jonathan.rajotte-julien@efficios.com</a><br></div></div><div><div>
<mailto:<a href="mailto:Jonathan.rajotte-julien@efficios.com" target="_blank">Jonathan.rajotte-julien@efficios.com</a>>> wrote:<br>
<br>
Hi Aravind,<br>
<br>
There is no README in the archive you sent.<br>
<br>
Cheers<br>
<br>
On 2015-12-08 07:51 AM, Aravind HT wrote:<br>
<br>
Hi,<br>
<br>
I am trying to upgrade in parallel, but this issue may<br>
still be present after I upgrade or may be temporarily<br>
masked. So I need to find the root cause for this and then<br>
see if its available on the latest before committing to<br>
upgrade.<br>
<br>
There is another issue I'm hitting: the lttng list command<br>
hangs after lttng destroy session when running the profiling.<br>
<br>
I found that consumerd 64 goes into an infinite loop<br>
waiting to flush metadata in<br>
lttng_ustconsumer_recv_metadata() :: while<br>
(consumer_metadata_cache_flushed(channel, offset + len,<br>
timer)).<br>
In consumer_metadata_cache,<br>
channel->metadata_stream->endpoint_status is<br>
CONSUMER_ENDPOINT_ACTIVE,<br>
metadata_stream->ust_metadata_pushed is 0 with offset<br>
having some value. This call always returns a 1 from the<br>
last else{} block resulting in an infinite loop. Upon<br>
searching the forum, I found the same issue being reported<br>
here :<br>
<a href="https://www.mail-archive.com/lttng-dev@lists.lttng.org/msg07982.html" rel="noreferrer" target="_blank">https://www.mail-archive.com/lttng-dev@lists.lttng.org/msg07982.html</a><br>
<br>
Regards,<br>
Aravind.<br>
<br>
<br>
On Tue, Dec 8, 2015 at 12:43 AM, Jonathan Rajotte<br>
<<a href="mailto:jonathan.r.julien@gmail.com" target="_blank">jonathan.r.julien@gmail.com</a><br></div></div><span>
<mailto:<a href="mailto:jonathan.r.julien@gmail.com" target="_blank">jonathan.r.julien@gmail.com</a>>> wrote:<br>
<br>
Hi Aravind,<br>
<br>
Did you have the chance to upgrade to 2.6.1? If so,<br>
were you able<br>
to reproduce?<br>
<br>
Cheers<br>
<br>
On Mon, Dec 7, 2015 at 1:07 PM, Aravind HT<br></span>
<<a href="mailto:aravind.ht@gmail.com" target="_blank">aravind.ht@gmail.com</a> <mailto:<a href="mailto:aravind.ht@gmail.com" target="_blank">aravind.ht@gmail.com</a>>><span><br>
wrote:<br>
<br>
Hi,<br>
<br>
I have attached the complete profiling scripts<br>
here; it's a bit<br>
shabby, as I'm new to Python.<br>
<br>
There is a README which has the details on how to<br>
execute it.<br>
Im using a Yocto 1.6 on x86_64 platforms on both<br>
the nodes.<br>
<br>
<br>
Running this script when there are other sessions<br>
running<br>
seems to reproduce this problem easily.<br>
Please try it and let me know if you have any issues<br>
reproducing the problem.<br>
<br>
Regards,<br>
Aravind.<br>
<br>
On Sat, Dec 5, 2015 at 5:23 PM, Jérémie Galarneau<br>
<<a href="mailto:jeremie.galarneau@efficios.com" target="_blank">jeremie.galarneau@efficios.com</a><br>
<mailto:<a href="mailto:jeremie.galarneau@efficios.com" target="_blank">jeremie.galarneau@efficios.com</a>><br></span>
<mailto:<a href="mailto:jeremie.galarneau@efficios.com" target="_blank">jeremie.galarneau@efficios.com</a><span><br>
<mailto:<a href="mailto:jeremie.galarneau@efficios.com" target="_blank">jeremie.galarneau@efficios.com</a>>>> wrote:<br>
<br>
On Fri, Dec 4, 2015 at 11:06 PM, Aravind HT<br>
<<a href="mailto:aravind.ht@gmail.com" target="_blank">aravind.ht@gmail.com</a><br></span>
<mailto:<a href="mailto:aravind.ht@gmail.com" target="_blank">aravind.ht@gmail.com</a>> <mailto:<a href="mailto:aravind.ht@gmail.com" target="_blank">aravind.ht@gmail.com</a><div><div><br>
<mailto:<a href="mailto:aravind.ht@gmail.com" target="_blank">aravind.ht@gmail.com</a>>>> wrote:<br>
> I am using 2.6.0. I will try to share the<br>
code that I'm<br>
using here in some<br>
> time. If there are any specific fixes that<br>
are relevant<br>
to this issue, see<br>
> if you can provide a link to them. I would<br>
ideally like<br>
to try them out<br>
> before trying a full upgrade to the latest<br>
versions.<br>
<br>
Hi,<br>
<br>
Can you provide more information on the<br>
system? Which<br>
distribution,<br>
architecture, kernel version?<br>
<br>
The verbose sessiond logs might help pinpoint any<br>
unexpected behaviour<br>
here (are all applications registering as<br>
expected?).<br>
<br>
Jérémie<br>
<br>
><br>
> On Fri, Dec 4, 2015 at 6:11 PM, Jérémie<br>
Galarneau<br>
> <<a href="mailto:jeremie.galarneau@efficios.com" target="_blank">jeremie.galarneau@efficios.com</a><br>
<mailto:<a href="mailto:jeremie.galarneau@efficios.com" target="_blank">jeremie.galarneau@efficios.com</a>><br></div></div>
<mailto:<a href="mailto:jeremie.galarneau@efficios.com" target="_blank">jeremie.galarneau@efficios.com</a><span><br>
<mailto:<a href="mailto:jeremie.galarneau@efficios.com" target="_blank">jeremie.galarneau@efficios.com</a>>>> wrote:<br>
>><br>
>> Hi Aravind,<br>
>><br>
>> Can't say I have looked at everything you<br>
sent yet, but<br>
as a<br>
>> preemptive question, which version are we<br>
talking about<br>
here? 2.6.0 or<br>
>> 2.6.1? 2.6.1 contains a lot of relay daemon<br>
fixes.<br>
>><br>
>> Thanks,<br>
>> Jérémie<br>
>><br>
>> On Thu, Dec 3, 2015 at 7:01 AM, Aravind HT<br>
<<a href="mailto:aravind.ht@gmail.com" target="_blank">aravind.ht@gmail.com</a><br></span><div><div>
<mailto:<a href="mailto:aravind.ht@gmail.com" target="_blank">aravind.ht@gmail.com</a>>> wrote:<br>
>> > Hi,<br>
>> ><br>
>> > I am trying to obtain the performance<br>
characteristics<br>
of lttng with the<br>
>> > use<br>
>> > of test applications. Traces are being<br>
produced on a<br>
local node and<br>
>> > delivered to relayd that is running on a<br>
separate<br>
node for storage.<br>
>> ><br>
>> > An lttng session with the test<br>
applications producing<br>
an initial bit<br>
>> > rate of<br>
>> > 10 kb/s is started and run for about 30<br>
seconds. The<br>
starting sub-buffer<br>
>> > size is kept at 128 kb and sub-buf count<br>
at 4. The<br>
session is then<br>
>> > stopped<br>
>> > and destroyed and traces are analyzed to<br>
see if there<br>
are any drops.<br>
>> > This is<br>
>> > being done in a loop with every<br>
subsequent session<br>
having an increment<br>
>> > of 2<br>
>> > kb/s as long as there are no drops. If<br>
there are<br>
drops, I increase the<br>
>> > buffer size by a factor of x2 without<br>
incrementing<br>
the bit rate.<br>
>> ><br>
>> > I see trace drops happening consistently<br>
with test<br>
apps producing traces<br>
>> > at<br>
>> > less than 40 kb/s; it doesn't seem to help<br>
even if I<br>
started with 1mb x 4<br>
>> > sub-buffers.<br>
>> ><br>
>> > Analysis :<br>
>> ><br>
>> > I have attached the lttng_relayd ,<br>
lttng_consumerd_64<br>
logs and the<br>
>> > entire<br>
>> > trace directory, hope you will be able to<br>
view it.<br>
>> > I have modified lttng_relayd code to dump<br>
the traces<br>
being captured in<br>
>> > the<br>
>> > lttng_relayd logs along with debug info.<br>
>> ><br>
>> > Each test app is producing logs in the<br>
form of :<br>
>> > "TraceApp PID - 31940 THID - 31970<br>
@threadRate - 1032<br>
b/s appRate - 2079<br>
>> > b/s<br>
>> > threadTraceNum - 9 appTraceNum - 18 sleepTime - 192120"<br>
>> ><br>
>> > The test application PID, test<br>
application thread id,<br>
thread bit rate,<br>
>> > test<br>
>> > app bit rate, thread trace number and<br>
application<br>
trace number s are<br>
>> > part of<br>
>> > the trace. So in the above trace, the<br>
thread is<br>
producing at 1 kb/s and<br>
>> > the<br>
>> > whole test app is producing at 2 kb/s.<br>
>> ><br>
>> > If we look at the babeltrace out put, we<br>
see that the<br>
Trace with<br>
>> > TraceApp<br>
>> > PID - 31940 appTraceNum 2 is missing ,<br>
with 1, 3, 4,<br>
5 and so on being<br>
>> > successfully captured.<br>
>> > I looked at the lttng_relayd logs and<br>
found that<br>
trace of "appTraceNum<br>
>> > 2" is<br>
>> > not delivered/generated by the consumerd<br>
to the<br>
relayd in sequence with<br>
>> > other traces. To rule out that this is<br>
not a test<br>
application problem,<br>
>> > you<br>
>> > can look at line ltttng_relayd log :<br>
12778 and see<br>
traces from<br>
>> > appTraceNum -<br>
>> > 1 to appTraceNum - 18 including the<br>
appTraceNum 2 are<br>
"re-delivered" by<br>
>> > the<br>
>> > consumerd to the relayd.<br>
>> > Essentially, I see appTraceNum 1 through<br>
appTraceNum<br>
18 being delivered<br>
>> > twice, once individually where<br>
appTraceNum 2 is<br>
missing and once as a<br>
>> > group<br>
>> > at line 12778 where its present.<br>
>> ><br>
>> ><br>
>> > Request help with<br>
>> > 1. why traces are delivered twice, is it<br>
by design or<br>
a genuine problem<br>
>> > ?<br>
>> > 2. how to avoid traces being dropped even<br>
though<br>
buffers are<br>
>> > sufficiently<br>
>> > large enough ?<br>
>> ><br>
>> ><br>
>> > Regards,<br>
>> > Aravind.<br>
>> ><br>
>> ><br>
_______________________________________________<br>
>> > lttng-dev mailing list<br>
>> > <a href="mailto:lttng-dev@lists.lttng.org" target="_blank">lttng-dev@lists.lttng.org</a><br></div></div><span>
<mailto:<a href="mailto:lttng-dev@lists.lttng.org" target="_blank">lttng-dev@lists.lttng.org</a>><br>
>> ><br>
<a href="http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev" rel="noreferrer" target="_blank">http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev</a><br>
>> ><br>
>><br>
>><br>
>><br>
>> --<br>
>> Jérémie Galarneau<br>
>> EfficiOS Inc.<br>
>> <a href="http://www.efficios.com" rel="noreferrer" target="_blank">http://www.efficios.com</a><br>
><br>
><br>
<br>
<br>
<br>
--<br>
Jérémie Galarneau<br>
EfficiOS Inc.<br>
<a href="http://www.efficios.com" rel="noreferrer" target="_blank">http://www.efficios.com</a><br>
<br>
<br>
<br>
_______________________________________________<br>
lttng-dev mailing list<br>
<a href="mailto:lttng-dev@lists.lttng.org" target="_blank">lttng-dev@lists.lttng.org</a><br>
<mailto:<a href="mailto:lttng-dev@lists.lttng.org" target="_blank">lttng-dev@lists.lttng.org</a>><br></span>
<mailto:<a href="mailto:lttng-dev@lists.lttng.org" target="_blank">lttng-dev@lists.lttng.org</a><span><br>
<mailto:<a href="mailto:lttng-dev@lists.lttng.org" target="_blank">lttng-dev@lists.lttng.org</a>>><br>
<a href="http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev" rel="noreferrer" target="_blank">http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev</a><br>
<br>
<br>
<br>
<br>
-- Jonathan Rajotte Julien<br>
<br>
<br>
<br>
<br>
_______________________________________________<br>
lttng-dev mailing list<br>
<a href="mailto:lttng-dev@lists.lttng.org" target="_blank">lttng-dev@lists.lttng.org</a> <mailto:<a href="mailto:lttng-dev@lists.lttng.org" target="_blank">lttng-dev@lists.lttng.org</a>><br>
<a href="http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev" rel="noreferrer" target="_blank">http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev</a><br>
<br>
<br></span>
-- Jonathan R. Julien<br>
Efficios<br>
<br>
<br>
<br>
</blockquote><div><div>
<br>
-- <br>
Jonathan R. Julien<br>
Efficios<br>
<br>
</div></div></blockquote></div><br></div></div></div>
<br>_______________________________________________<br>lttng-dev mailing list<br><a href="mailto:lttng-dev@lists.lttng.org" target="_blank">lttng-dev@lists.lttng.org</a><br></div></div><a href="http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev" target="_blank">http://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev</a><span class="HOEnZb"><font color="#888888"><br></font></span></blockquote></div><span class="HOEnZb"><font color="#888888"><div><br></div><div>-- <br></div><div>Mathieu Desnoyers<br>EfficiOS Inc.<br><a href="http://www.efficios.com" target="_blank">http://www.efficios.com</a></div></font></span></div></div></blockquote></div><br></div>