original_kernel/arch/powerpc/mm
Paul Mackerras 3b5750644b [POWERPC] Bolt in SLB entry for kernel stack on secondary cpus
This fixes a regression reported by Kamalesh Babulal where a POWER4
machine would crash because of an SLB miss at a point where the SLB
miss exception was unrecoverable.  This regression is tracked at:

http://bugzilla.kernel.org/show_bug.cgi?id=10082

SLB misses at such points shouldn't happen because the kernel stack is
the only memory accessed other than things in the first segment of the
linear mapping (which is mapped at all times by entry 0 of the SLB).
The context switch code ensures that SLB entry 2 covers the kernel
stack, if it is not already covered by entry 0.  None of entries 0
to 2 are ever replaced by the SLB miss handler.

Where this went wrong is that the context switch code assumes it
doesn't have to write to SLB entry 2 if the new kernel stack is in the
same segment as the old kernel stack, since entry 2 should already be
correct.  However, when we start up a secondary cpu, it calls
slb_initialize, which doesn't set up entry 2.  This is correct for
the boot cpu, where we will be using a stack in the kernel BSS at this
point (i.e. init_thread_union), but not necessarily for secondary
cpus, whose initial stack can be allocated anywhere.  This doesn't
cause any immediate problem since the SLB miss handler will just
create an SLB entry somewhere else to cover the initial stack.

In fact it's possible for the cpu to go quite a long time without SLB
entry 2 being valid.  Eventually, though, the entry created by the SLB
miss handler will get overwritten by some other entry, and if the next
access to the stack is at an unrecoverable point, we get the crash.

This fixes the problem by making slb_initialize create a suitable
entry for the kernel stack, if we are on a secondary cpu and the stack
isn't covered by SLB entry 0.  This requires initializing the
get_paca()->kstack field earlier, so I do that in smp_create_idle
where the current field is initialized.  This also abstracts a bit of
the computation that mk_esid_data in slb.c does so that it can be used
in slb_initialize.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-05-02 15:00:45 +10:00
40x_mmu.c [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr 2008-04-17 07:46:12 +10:00
44x_mmu.c [POWERPC] Introduce lowmem_end_addr to distinguish from total_lowmem 2008-04-17 07:46:13 +10:00
Makefile
fault.c
fsl_booke_mmu.c [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) 2008-04-24 20:58:01 +10:00
hash_low_32.S [POWERPC] Clean up access to thread_info in assembly 2008-04-24 20:58:02 +10:00
hash_low_64.S
hash_native_64.c
hash_utils_64.c [POWERPC] htab_remove_mapping is only used by MEMORY_HOTPLUG 2008-04-07 13:49:25 +10:00
hugetlbpage.c
init_32.c [POWERPC] Port fixmap from x86 and use for kmap_atomic 2008-04-24 20:58:02 +10:00
init_64.c [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero) 2008-04-24 20:58:01 +10:00
mem.c [POWERPC] Provide walk_memory_resource() for powerpc 2008-04-29 15:57:53 +10:00
mmap.c
mmu_context_32.c
mmu_context_64.c
mmu_decl.h [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr 2008-04-17 07:46:13 +10:00
numa.c [POWERPC] Add include of linux/of.h to numa.c 2008-04-24 20:57:32 +10:00
pgtable_32.c [POWERPC] Port fixmap from x86 and use for kmap_atomic 2008-04-24 20:58:02 +10:00
pgtable_64.c
ppc_mmu_32.c [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr 2008-04-17 07:46:13 +10:00
slb.c [POWERPC] Bolt in SLB entry for kernel stack on secondary cpus 2008-05-02 15:00:45 +10:00
slb_low.S
slice.c
stab.c
subpage-prot.c
tlb_32.c
tlb_64.c