linux-stable-rt/arch/powerpc/mm
Benjamin Herrenschmidt 94b2a4393c [POWERPC] Fix spu SLB invalidations
The SPU code doesn't properly invalidate the SPEs' SLBs when necessary,
for example when changing a segment size from the hugetlbfs code. In
addition, it saves and restores the SLB content on context switches,
which makes it harder to properly handle those invalidations.

This patch removes the saving & restoring for now, something more
efficient might be found later on. It also adds a spu_flush_all_slbs(mm)
that can be used by the core mm code to flush the SLBs of all SPEs that
are running a given mm at the time of the flush.

In order to do that, it adds a spinlock to the list of all SPEs and moves
some bits & pieces from spufs to spu_base.c

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2007-03-10 00:07:50 +01:00
4xx_mmu.c
44x_mmu.c
Makefile
fault.c
fsl_booke_mmu.c
hash_low_32.S
hash_low_64.S
hash_native_64.c
hash_utils_64.c [POWERPC] Fix spu SLB invalidations 2007-03-10 00:07:50 +01:00
hugetlbpage.c [POWERPC] Fix spu SLB invalidations 2007-03-10 00:07:50 +01:00
imalloc.c
init_32.c
init_64.c
lmb.c
mem.c [POWERPC] Fix vDSO page count calculation 2007-02-13 15:35:52 +11:00
mmap.c
mmu_context_32.c
mmu_context_64.c
mmu_decl.h
numa.c Fix typos concerning hierarchy 2007-02-17 19:23:03 +01:00
pgtable_32.c [POWERPC] Fix is_power_of_4(x) compile error 2007-02-09 09:30:05 -06:00
pgtable_64.c [POWERPC] Fix bug with early ioremap and 64k pages 2007-02-16 14:00:20 +11:00
ppc_mmu_32.c
slb.c
slb_low.S
stab.c
tlb_32.c
tlb_64.c