[lttng-dev] [Qemu-devel] [PATCH 0/6] hypertrace: Lightweight guest-to-QEMU trace channel
Masami Hiramatsu
mhiramat at kernel.org
Fri Aug 19 04:45:31 UTC 2016
On Thu, 18 Aug 2016 09:37:11 -0400
Luiz Capitulino <lcapitulino at redhat.com> wrote:
> On Thu, 18 Aug 2016 11:54:24 +0100
> Stefan Hajnoczi <stefanha at gmail.com> wrote:
>
> > On Fri, Aug 05, 2016 at 06:59:23PM +0200, LluĂs Vilanova wrote:
> > > The hypertrace channel allows guest code to emit events in QEMU (the host) using
> > > its tracing infrastructure (see "docs/trace.txt"). This works in both 'system'
> > > and 'user' modes. That is, hypertrace is to tracing, what hypercalls are to
> > > system calls.
> > >
> > > You can use this to emit an event on both guest and QEMU (host) traces to easily
> > > synchronize or correlate them. You could also modify your guest's tracing system
> > > to emit all events through the hypertrace channel, providing a unified and fully
> > > synchronized trace log. Another use case is timing the performance of guest code
> > > when optimizing TCG (QEMU traces have a timestamp).
> > >
> > > See first commit for a full description.
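(For illustration of the mechanism described above: a guest-side emit might look
roughly like the sketch below. The names and layout here are hypothetical
placeholders, not the ABI the series actually defines; the point is only that
every event is a synchronous trap into QEMU.)

#include <stdint.h>

/* Hypothetical guest-side helper -- NOT the real hypertrace interface. */
static volatile uint64_t *ht_data;    /* assumed mapped data page       */
static volatile uint64_t *ht_control; /* assumed mapped control page    */

static inline void hypertrace_emit(uint64_t arg0, uint64_t arg1)
{
    ht_data[0] = arg0;                /* stage the event arguments */
    ht_data[1] = arg1;
    *ht_control = 1;                  /* this store traps into QEMU, which
                                         then emits the event through its
                                         own tracing backend */
}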
>
> I hate to be the one asking the stupid questions, but what's
> the problem this series solves? What are the use cases? Why
> don't existing solutions solve this problem? How does this
> compare to existing solutions?
>
> > This tracing approach has a high performance overhead, particularly for
> > SMP guests where each trace event requires writing to the global control
> > register. All CPUs will be hammering this register (heavyweight vmexit)
> > for each trace event.
That reminds me of Xenprobes.
https://www.usenix.org/legacy/event/usenix07/tech/full_papers/quynh/quynh.pdf
That paper also tried to implement a kprobe-like feature for tracing events in
the guest. Synchronous tracing may make the raw trace data easy to read, but it
causes a big slowdown. Moreover, we can easily merge and synchronize the data
after tracing.
> >
> > I think the folks CCed on this email all take an asynchronous approach
> > to avoid this performance overhead. Synchronous means taking a VM exit
> > for every event. Asynchronous means writing trace data to a buffer and
> > later interleaving guest data with host trace data.
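In other words, on the guest side the asynchronous model is just "append a
timestamped record to a local buffer" -- no exit per event -- and the timestamps
are what lets the two logs be interleaved afterwards. A minimal sketch (not any
particular tracer's format):

#include <stdint.h>
#include <time.h>

/* Minimal sketch of asynchronous recording: append records to a per-thread
 * buffer; no VM exit is taken per event.  The buffer is drained later and
 * merged with the host trace using the timestamps. */
struct trace_record {
    uint64_t timestamp_ns;
    uint32_t event_id;
    uint32_t arg;
};

#define TRACE_BUF_RECORDS 4096

static __thread struct trace_record trace_buf[TRACE_BUF_RECORDS];
static __thread unsigned trace_head;

static void trace_event(uint32_t event_id, uint32_t arg)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);

    struct trace_record *rec = &trace_buf[trace_head++ % TRACE_BUF_RECORDS];
    rec->timestamp_ns = (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
    rec->event_id = event_id;
    rec->arg = arg;
}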
> >
> > LTTng Userspace Tracer is an example of the asynchronous approach. The
> > trace data buffers are in shared memory. The LTTng process can grab
> > buffers at appropriate times.
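For reference, with LTTng-UST an event is a function-call-like macro that writes
straight into those shared-memory buffers. A minimal provider and caller look
roughly like this (names are arbitrary examples; build with -llttng-ust -ldl,
and the session daemon drains the buffers at its own pace):

/* tp.h -- tracepoint provider */
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER guest_app

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./tp.h"

#if !defined(_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _TP_H

#include <lttng/tracepoint.h>

TRACEPOINT_EVENT(
    guest_app,                       /* provider name (example) */
    my_event,                        /* event name (example)    */
    TP_ARGS(int, value),
    TP_FIELDS(
        ctf_integer(int, value, value)
    )
)

#endif /* _TP_H */

#include <lttng/tracepoint-event.h>

/* app.c -- the traced application */
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE
#include "tp.h"

int main(void)
{
    tracepoint(guest_app, my_event, 42);  /* written to a shared-memory buffer */
    return 0;
}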
> >
> > The ftrace virtio-serial approach has been to splice() the ftrace
> > buffers, resulting in efficient I/O.
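Since splice() needs a pipe on one side, the usual pattern is a two-step splice
from the per-CPU trace_pipe_raw file through a pipe into the virtio-serial port.
A rough single-CPU sketch (paths are examples; a real agent handles every CPU
and errors properly):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int trace_fd = open("/sys/kernel/debug/tracing/per_cpu/cpu0/trace_pipe_raw",
                        O_RDONLY);
    int port_fd = open("/dev/virtio-ports/trace-port", O_WRONLY);
    int pipefd[2];

    if (trace_fd < 0 || port_fd < 0 || pipe(pipefd) < 0) {
        perror("setup");
        return 1;
    }

    for (;;) {
        /* trace data never passes through a user-space buffer */
        ssize_t n = splice(trace_fd, NULL, pipefd[1], NULL, 4096, 0);
        if (n <= 0) {
            break;
        }
        splice(pipefd[0], NULL, port_fd, NULL, n, 0);
    }
    return 0;
}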
>
> Yes. However, note that the virtio-serial device is only used to
> transfer the tracing data to the host. It has no role in the
> tracing process. Also, it's not required. I've been using the
> network and it works OK as long as your tracing data is not too big.
Right, the no-copy virtio-serial path is for reducing the overhead as much as
possible, but it also requires special programs on both the guest and the host
(since it uses a unix domain socket to transfer the data without copying).
In the normal case, you can just trace the guest as if it were a remote machine
on the network. That's more scalable and flexible.
> > Steven is working on a host/guest solution for trace-cmd. It is also
> > asynchronous. No new paravirt hardware is needed and it makes me wonder
> > whether the hypertrace PCI device is trying to solve the problem at the
> > wrong layer.
Thanks, Steven, for continuing that!
Regards,
> >
> > If you want to play around with asynchronous tracing, you could start
> > with trace/simple.c. It has a trace buffer that is asynchronously
> > written out to file by a dedicated "writer" thread.
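The pattern there is simply "producers append to a shared ring, a dedicated
thread drains it to the file", so the hot path never blocks on I/O. A simplified
sketch of the idea (not QEMU's actual code):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 4096

static uint64_t ring[RING_SIZE];
static unsigned head, tail;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

void trace_record(uint64_t rec)
{
    pthread_mutex_lock(&lock);
    ring[head++ % RING_SIZE] = rec;       /* overflow handling omitted */
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

void *writer_thread(void *arg)
{
    FILE *out = arg;

    for (;;) {
        pthread_mutex_lock(&lock);
        while (tail == head) {
            pthread_cond_wait(&nonempty, &lock);
        }
        uint64_t rec = ring[tail++ % RING_SIZE];
        pthread_mutex_unlock(&lock);

        fwrite(&rec, sizeof(rec), 1, out);  /* I/O happens off the hot path */
    }
    return NULL;
}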
> >
> > The one case where hypertrace makes sense to me is for -user tracing.
> > There QEMU can efficiently interleave guest and QEMU traces, although as
> > mentioned in the patch, I don't think the SIGSEGV approach should be
> > used.
> >
> > I suggest stripping this series down to focus on -user. Synchronous
> > tracing is not a good approach for -system emulation.
> >
> > Stefan
>
--
Masami Hiramatsu <mhiramat at kernel.org>