[ltt-dev] [PATCH v2] cmm: provide lightweight smp_rmb/smp_wmb on PPC
Mathieu Desnoyers
compudj at krystal.dyndns.org
Thu Sep 22 05:10:50 EDT 2011
* Paolo Bonzini (pbonzini at redhat.com) wrote:
> lwsync orders loads in cacheable memory with respect to other loads,
> and stores in cacheable memory with respect to other stores. Use it
> to implement smp_rmb/smp_wmb.
>
> The heavy-weight sync is still used for the "full" rmb/wmb operations,
> as well as for smp_mb.
[ Edit by Mathieu Desnoyers: rephrased the comments around the memory
barriers. ]
+/*
+ * Use sync for all cmm_mb/rmb/wmb barriers because lwsync does not
+ * preserve ordering of cacheable vs. non-cacheable accesses, so it
+ * should not be used to order with respect to MMIO operations. An
+ * eieio+lwsync pair is also not enough for cmm_rmb, because it will
+ * order cacheable and non-cacheable memory operations separately, i.e.
+ * not the latter against the former.
+ */
+#define cmm_mb() asm volatile("sync":::"memory")
+
+/*
+ * lwsync orders loads in cacheable memory with respect to other loads,
+ * and stores in cacheable memory with respect to other stores.
+ * Therefore, use it for barriers ordering accesses to cacheable memory
+ * only.
+ */
+#define cmm_smp_rmb() asm volatile("lwsync":::"memory")
+#define cmm_smp_wmb() asm volatile("lwsync":::"memory")
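
For reference, here is a minimal sketch of how the two lightweight barriers
pair up in a classic message-passing pattern. It assumes <urcu/arch.h> pulls
in the arch-specific definitions above; the publish()/consume() helpers and
the payload/ready variables are purely illustrative, not part of the patch:

#include <urcu/arch.h>          /* cmm_smp_rmb(), cmm_smp_wmb() */

/* Sketch only: names below are illustrative. */
static int payload;
static volatile int ready;

/* Producer: make the data visible before the flag. */
static void publish(int value)
{
        payload = value;
        cmm_smp_wmb();          /* order payload store before flag store */
        ready = 1;
}

/* Consumer: once the flag is observed, read the data after it. */
static int consume(void)
{
        while (!ready)
                ;               /* busy-wait until the producer publishes */
        cmm_smp_rmb();          /* order flag load before payload load */
        return payload;
}

Both sides only touch cacheable memory, so lwsync is sufficient here; a store
that must be ordered against an MMIO access (e.g. a device doorbell write)
would still need the full cmm_mb()/sync.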
Merged, thanks!
Mathieu
>
> Signed-off-by: Paolo Bonzini <pbonzini at redhat.com>
> ---
> urcu/arch/ppc.h | 10 +++++++++-
> 1 files changed, 9 insertions(+), 1 deletions(-)
>
> diff --git a/urcu/arch/ppc.h b/urcu/arch/ppc.h
> index a03d688..05f7db6 100644
> --- a/urcu/arch/ppc.h
> +++ b/urcu/arch/ppc.h
> @@ -32,7 +32,15 @@ extern "C" {
> /* Include size of POWER5+ L3 cache lines: 256 bytes */
> #define CAA_CACHE_LINE_SIZE 256
>
> -#define cmm_mb() asm volatile("sync":::"memory")
> +#define cmm_mb() asm volatile("sync":::"memory")
> +
> +/* lwsync does not preserve ordering of cacheable vs. non-cacheable
> + * accesses, but it is good when MMIO is not in use. An eieio+lwsync
> + * pair is also not enough for rmb, because it will order cacheable
> + * and non-cacheable memory operations separately---i.e. not the latter
> + * against the former. */
> +#define cmm_smp_rmb() asm volatile("lwsync":::"memory")
> +#define cmm_smp_wmb() asm volatile("lwsync":::"memory")
>
> #define mftbl() \
> ({ \
> --
> 1.7.6
>
--
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com