[ltt-dev] [PATCH] Fix dirty page accounting in redirty_page_for_writepage()
Ingo Molnar
mingo at elte.hu
Thu Apr 30 02:50:55 EDT 2009
* Mathieu Desnoyers <compudj at krystal.dyndns.org> wrote:
> * Ingo Molnar (mingo at elte.hu) wrote:
> >
> > * Mathieu Desnoyers <mathieu.desnoyers at polymtl.ca> wrote:
> >
> > > And thanks for the review! This exercise only convinced me that
> > > the kernel memory accounting works as expected. All this gave me
> > > the chance to have a good look at the memory accounting code. We
> > > could probably benefit from Christoph Lameter's cpu ops (using
> > > segment registers to address per-cpu variables with atomic
> > > inc/dec) in there. Or, at the least, replacing the interrupt
> > > disabling with preempt disable and local_t variables for the
> > > per-cpu counters could bring some benefit.
> >
> > Note, optimized per cpu ops are already implemented upstream, by
> > Tejun Heo's percpu patches in .30:
> >
> > #define percpu_read(var) percpu_from_op("mov", per_cpu__##var)
> > #define percpu_write(var, val) percpu_to_op("mov", per_cpu__##var, val)
> > #define percpu_add(var, val) percpu_to_op("add", per_cpu__##var, val)
> > #define percpu_sub(var, val) percpu_to_op("sub", per_cpu__##var, val)
> > #define percpu_and(var, val) percpu_to_op("and", per_cpu__##var, val)
> > #define percpu_or(var, val) percpu_to_op("or", per_cpu__##var, val)
> > #define percpu_xor(var, val) percpu_to_op("xor", per_cpu__##var, val)
> >
> > See:
> >
> > 6dbde35: percpu: add optimized generic percpu accessors
> >
> > From the changelog:
> >
> > [...]
> > The advantage is that for example to read a local percpu variable,
> > instead of this sequence:
> >
> > return __get_cpu_var(var);
> >
> > ffffffff8102ca2b: 48 8b 14 fd 80 09 74 mov -0x7e8bf680(,%rdi,8),%rdx
> > ffffffff8102ca32: 81
> > ffffffff8102ca33: 48 c7 c0 d8 59 00 00 mov $0x59d8,%rax
> > ffffffff8102ca3a: 48 8b 04 10 mov (%rax,%rdx,1),%rax
> >
> > We can get a single instruction by using the optimized variants:
> >
> > return percpu_read(var);
> >
> > ffffffff8102ca3f: 65 48 8b 05 91 8f fd mov %gs:0x7efd8f91(%rip),%rax
> > [...]
> >
> > So if you want to make use of it, percpu_add()/percpu_sub() would be
> > the place to start.
> >
>
> Great!
>
> I see however that it's only guaranteed to be atomic wrt preemption.
That's really only true for the non-x86 fallback defines. If we so
decide, we could make the fallbacks in asm-generic/percpu.h irq-safe
...
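For reference, the generic fallback added by 6dbde35 (in
include/linux/percpu.h) is built on get_cpu_var()/put_cpu_var(), so it
is only atomic wrt preemption. A minimal sketch of what an irq-safe
flavor could look like, keeping the same wrapper shape (the _irqsafe
name below is hypothetical, it does not exist in the tree):

  /* existing generic fallback: atomic only wrt preemption,
   * because get_cpu_var() only does preempt_disable() */
  #define __percpu_generic_to_op(var, val, op)          \
  do {                                                  \
          get_cpu_var(var) op val;                      \
          put_cpu_var(var);                             \
  } while (0)

  /* hypothetical irq-safe variant: disable local irqs across
   * the read-modify-write instead */
  #define __percpu_generic_to_op_irqsafe(var, val, op)  \
  do {                                                  \
          unsigned long flags;                          \
                                                        \
          local_irq_save(flags);                        \
          __get_cpu_var(var) op val;                    \
          local_irq_restore(flags);                     \
  } while (0)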
> What would be even better would be to have the atomic ops wrt local irqs
> (as local.h does) available in this percpu flavor. By doing this, we
> could have interrupt and nmi-safe per-cpu counters, without even the
> need to disable preemption.
NMI-safe isn't a big issue (we have no NMI code that interacts with
the MM counters) - and we could make them irq-safe by fixing the
wrapper. (And on x86 they are NMI-safe too.)
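To make the x86 case concrete: percpu_add() there compiles down to a
single %gs-prefixed read-modify-write instruction, which an irq or NMI
on the same CPU can never land in the middle of. A rough sketch of what
a converted counter could look like (nr_dirtied is a made-up per-cpu
variable for illustration, not one of the existing MM counters):

  #include <linux/percpu.h>

  /* made-up per-cpu counter, for illustration only */
  DEFINE_PER_CPU(long, nr_dirtied);

  static inline void count_page_dirtied(void)
  {
          /*
           * On x86 this is a single instruction, roughly:
           *     addq $1,%gs:per_cpu__nr_dirtied
           * so neither preempt_disable() nor local_irq_save()
           * is needed around the update.
           */
          percpu_add(nr_dirtied, 1);
  }

With the generic fallback the same call degrades to
get_cpu_var()/put_cpu_var(), i.e. it stays preempt-safe but not
irq-safe - which is exactly the gap discussed above.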
Ingo