Commit Graph

22 Commits

Roel Kluin 8dcd038a13 powerpc/fsl-booke: read buffer overflow
cam[tlbcam_index] was read before checking that tlbcam_index < ARRAY_SIZE(cam); do the bounds check first so the access can never run past the end of the array.
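
The shape of the fix, sketched outside the real code (the table, loop and helper here are illustrative, not the actual fsl_booke_mmu.c hunk):

    #define ARRAY_SIZE(a)  (sizeof(a) / sizeof((a)[0]))

    static unsigned long cam[16];          /* stand-in for the real table */
    static unsigned int tlbcam_index;

    /* Buggy order: cam[tlbcam_index] is read before the bounds test, so
     * the final iteration can read one element past the end of cam[]:
     *
     *     while (cam[tlbcam_index] && tlbcam_index < ARRAY_SIZE(cam))
     *             tlbcam_index++;
     */

    /* Fixed order: test the index first; && short-circuits and protects
     * the array access. */
    static void advance_to_free_slot(void)
    {
            while (tlbcam_index < ARRAY_SIZE(cam) && cam[tlbcam_index])
                    tlbcam_index++;
    }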

Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-08-20 10:27:12 +10:00
Benjamin Herrenschmidt 8d1cf34e7a powerpc/mm: Tweak PTE bit combination definitions
This patch tweaks the way some PTE bit combinations are defined, in such a
way that the 32 and 64-bit variant become almost identical and that will
make it easier to bring in a new common pte-* file for the new variant
of the Book3-E support.

The combination of bits defining access to kernel pages are now clearly
separated from the combination used by userspace and the core VM. The
resulting generated code should remain identical unless I made a mistake.

Note: While at it, I removed a nonsensical statement related to CONFIG_KGDB
in ppc_mmu_32.c which could cause kernel mappings to be user-accessible when
that option is enabled.  Probably something that bitrotted.
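
To illustrate the separation being described (the bit values and the exact macro set below are made up for the example, not the definitions the patch introduces):

    /* Illustrative base bits. */
    #define _PAGE_PRESENT   0x001
    #define _PAGE_USER      0x002
    #define _PAGE_RW        0x004
    #define _PAGE_EXEC      0x008

    /* Combinations reserved for kernel mappings... */
    #define _PAGE_KERNEL_RO    (_PAGE_PRESENT)
    #define _PAGE_KERNEL_RW    (_PAGE_PRESENT | _PAGE_RW)
    #define _PAGE_KERNEL_RWX   (_PAGE_PRESENT | _PAGE_RW | _PAGE_EXEC)

    /* ...kept separate from the combinations used by userspace and the
     * core VM. */
    #define PAGE_READONLY      (_PAGE_PRESENT | _PAGE_USER)
    #define PAGE_SHARED        (_PAGE_PRESENT | _PAGE_USER | _PAGE_RW)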

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-03-24 13:47:33 +11:00
Kumar Gala 96a8bac589 powerpc/fsl-booke: Fix compile warning
arch/powerpc/mm/fsl_booke_mmu.c: In function 'adjust_total_lowmem':
arch/powerpc/mm/fsl_booke_mmu.c:221: warning: format '%ld' expects type 'long int', but argument 3 has type 'phys_addr_t'
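
phys_addr_t is u32 or u64 depending on CONFIG_PHYS_64BIT, so a plain %ld cannot match both.  A sketch of the usual way to print it (the helper name is invented; this is the general pattern, not necessarily the literal fix in this commit):

    #include <linux/kernel.h>   /* printk */
    #include <linux/types.h>    /* phys_addr_t */

    static void report_lowmem_size(phys_addr_t total_lowmem)
    {
            /* Cast to unsigned long long and use %llu (or %llx) so the
             * format matches whether phys_addr_t is 32 or 64 bits. */
            printk(KERN_INFO "Memory CAM mapping: %llu bytes of lowmem\n",
                   (unsigned long long)total_lowmem);
    }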

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-02-12 16:54:53 -06:00
Kumar Gala d66c82ea45 powerpc/fsl-booke: Add new ISA 2.06 page sizes and MAS defines
The Power ISA 2.06 added power-of-two page sizes to the embedded MMU
architecture.  It is done in such a way as to be code compatible with the
existing HW.  Made the minor code changes to support both power-of-two
and power-of-four page sizes.  Also added some new MAS bits and macros
that are defined as part of the 2.06 ISA.  Renamed some things to use
the 'Book-3e' concept to convey the new MMU that is based on the
Freescale Book-E MMU programming model.

Note: it is still invalid to try to use a page size that isn't supported
by the CPU.
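
The practical difference between the two encodings: on the existing Freescale Book-E MMU, MAS1[TSIZE] selects a power-of-four page size, while the 2.06 encoding is a plain power of two.  A rough sketch of the two conversions (helper names invented for the example):

    /* Classic FSL Book-E: page size = 4^TSIZE KB
     * (TSIZE 1 -> 4K, 2 -> 16K, ... 9 -> 256M). */
    static inline unsigned long long booke_tsize_to_bytes(unsigned int tsize)
    {
            return 1024ULL << (2 * tsize);
    }

    /* ISA 2.06 / Book-3e: page size = 2^TSIZE KB
     * (TSIZE 2 -> 4K, 4 -> 16K, ... 10 -> 1M, 18 -> 256M). */
    static inline unsigned long long book3e_tsize_to_bytes(unsigned int tsize)
    {
            return 1024ULL << tsize;
    }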

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-02-12 16:37:11 -06:00
Benjamin Herrenschmidt edbc29d76d Merge commit 'kumar/next' into next 2009-02-11 13:37:44 +11:00
Kumar Gala 6c24b17453 powerpc/fsl-booke: Fix mapping functions to use phys_addr_t
Fixed v_mapped_by_tlbcam() and p_mapped_by_tlbcam() to use phys_addr_t
instead of unsigned long.  In 36-bit physical mode we really need these
functions to deal with phys_addr_t when trying to match a physical
address or when returning one.
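
Schematically, the helpers now carry physical addresses as phys_addr_t end to end, so a 36-bit address is never truncated to 32 bits.  A simplified sketch of the lookup (struct layout and names approximate, not the exact fsl_booke_mmu.c code):

    #include <linux/types.h>

    struct tlbcam_map {
            unsigned long start;    /* virtual start of the mapping     */
            unsigned long limit;    /* virtual end of the mapping       */
            phys_addr_t   phys;     /* physical start, may exceed 4 GB  */
    };

    static struct tlbcam_map tlbcam_addrs[16];
    static unsigned int tlbcam_index;

    /* Return the physical address backing va, or 0 if no CAM covers it. */
    static phys_addr_t v_mapped_by_tlbcam(unsigned long va)
    {
            unsigned int i;

            for (i = 0; i < tlbcam_index; i++)
                    if (va >= tlbcam_addrs[i].start && va < tlbcam_addrs[i].limit)
                            return tlbcam_addrs[i].phys + (va - tlbcam_addrs[i].start);
            return 0;
    }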

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-02-09 21:11:55 -06:00
Trent Piepho 96051465fd powerpc/fsl-booke: Make CAM entries used for lowmem configurable
On booke processors, the code that maps low memory only uses up to three
CAM entries, even though there are sixteen and nothing else uses them.

Make this number configurable in the advanced options menu along with max
low memory size.  If one wants 1 GB of lowmem, then it's typically
necessary to have four CAM entries.
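
With the count coming from Kconfig, the lowmem-mapping table is simply sized by the option; a minimal sketch assuming a symbol named along the lines of CONFIG_LOWMEM_CAM_NUM (see the advanced options menu for the real name and range):

    /* Fallback so this sketch stands alone outside a kernel .config. */
    #ifndef CONFIG_LOWMEM_CAM_NUM
    #define CONFIG_LOWMEM_CAM_NUM 3
    #endif

    /* One slot per CAM entry used for lowmem; with 256 MB entries,
     * reaching 1 GB of lowmem needs CONFIG_LOWMEM_CAM_NUM >= 4. */
    static unsigned long cam[CONFIG_LOWMEM_CAM_NUM];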

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-01-28 18:16:54 -06:00
Trent Piepho c8f3570b7e powerpc/fsl-booke: Allow larger CAM sizes than 256 MB
The code that maps kernel low memory would only use page sizes up to 256
MB.  On E500v2 pages up to 4 GB are supported.

However, a page must be aligned to a multiple of the page's size.  I.e.
256 MB pages must be aligned to a 256 MB boundary.  This was enforced by a
requirement that the physical and virtual addresses of the start of lowmem
be aligned to 256 MB.  Clearly requiring 1GB or 4GB alignment to allow
pages of that size isn't acceptable.

To solve this, I simply have adjust_total_lowmem() take alignment into
account when it decides what size pages to use.  Give it PAGE_OFFSET =
0x7000_0000, PHYSICAL_START = 0x3000_0000, and 2GB of RAM, and it will map
pages like this:
PA 0x3000_0000 VA 0x7000_0000 Size 256 MB
PA 0x4000_0000 VA 0x8000_0000 Size 1 GB
PA 0x8000_0000 VA 0xC000_0000 Size 256 MB
PA 0x9000_0000 VA 0xD000_0000 Size 256 MB
PA 0xA000_0000 VA 0xE000_0000 Size 256 MB

Because the lowmem mapping code now takes alignment into account,
PHYSICAL_ALIGN can be lowered from 256 MB to 64 MB.  Even lower might be
possible.  The lowmem code will work down to 4 kB but it's possible some of
the boot code will fail before then.  Poor alignment will force small pages
to be used, which combined with the limited number of TLB1 pages available,
will result in very little memory getting mapped.  So alignments less than
64 MB probably aren't very useful anyway.
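
The selection logic boils down to a greedy choice: for each CAM entry, take the largest supported power-of-four size that fits the remaining lowmem and that both the current physical and virtual addresses are aligned to.  A standalone sketch of that idea (not the literal adjust_total_lowmem() code):

    #include <stdint.h>

    /* Largest power-of-four page size (4 KB .. 1 GB considered here) that
     * fits the remaining lowmem and the alignment of both addresses. */
    static uint64_t pick_cam_size(uint64_t virt, uint64_t phys, uint64_t remaining)
    {
            uint64_t size;

            for (size = 1ULL << 30; size >= 1ULL << 12; size >>= 2)
                    if (size <= remaining &&
                        (virt & (size - 1)) == 0 &&
                        (phys & (size - 1)) == 0)
                            return size;
            return 0;
    }

    /* Walking this with VA 0x7000_0000, PA 0x3000_0000 and 2 GB remaining
     * yields 256M, 1G, 256M, 256M, 256M -- the mapping listed above. */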

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-01-28 18:16:53 -06:00
Trent Piepho f88747e7f6 powerpc/fsl-booke: Remove code duplication in lowmem mapping
The code to map lowmem uses three CAM aka TLB[1] entries to cover it.  The
size of each is stored in three globals named __cam0, __cam1, and __cam2.
All the code that uses them is duplicated three times for each of the three
variables.

We have these things called arrays and loops....

Once converted to use an array, it will be easier to make the number of
CAMs configurable.
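
The shape of the refactor, schematically (settlbcam() is the existing helper that loads one CAM entry; its real signature takes more arguments):

    /* Before: one global and one copy of the code per entry.
     *   unsigned long __cam0, __cam1, __cam2;
     *   settlbcam(0, virt, phys, __cam0, ...);
     *   settlbcam(1, virt + __cam0, phys + __cam0, __cam1, ...);
     *   ...
     */

    /* After: one array and one loop; making the count configurable later
     * becomes a one-line change. */
    #define NUM_CAMS 3

    static unsigned long cam[NUM_CAMS];

    void settlbcam(int index, unsigned long virt,
                   unsigned long phys, unsigned long size);  /* simplified */

    static void map_lowmem_cams(unsigned long virt, unsigned long phys)
    {
            int i;

            for (i = 0; i < NUM_CAMS && cam[i]; i++) {
                    settlbcam(i, virt, phys, cam[i]);
                    virt += cam[i];
                    phys += cam[i];
            }
    }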

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-01-28 18:16:51 -06:00
Trent Piepho 6fd8be4bf7 powerpc/fsl-booke: Remove num_tlbcam_entries
This is a global variable defined in fsl_booke_mmu.c with a value that gets
initialized in assembly code in head_fsl_booke.S.

It's never used.

If some code ever does want to know the number of entries in TLB1, then
"numcams = mfspr(SPRN_TLB1CFG) & 0xfff", is a whole lot simpler than a
global initialized during kernel boot from assembly.

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-01-07 15:33:07 -06:00
Trent Piepho 19f5465e82 powerpc/fsl-booke: Don't hard-code size of struct tlbcam
Some assembly code in head_fsl_booke.S hard-coded the size of struct tlbcam
to 20 when it indexed the TLBCAM table.  Anyone changing the size of struct
tlbcam would not know to expect that.

The kernel already has a system to get the size of C structures into
assembly language files, asm-offsets, so let's use it.

The definition of the struct gets moved to a header, so that asm-offsets.c
can include it.
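
The asm-offsets machinery turns a C sizeof into an assembler-visible constant, so head_fsl_booke.S can index the table without a magic 20.  A hedged sketch of what such an entry looks like (the stand-in struct below only makes the snippet self-contained, and the constant name may differ from the one the patch adds):

    #include <linux/kbuild.h>   /* DEFINE(): emits the value for asm use */
    #include <linux/types.h>

    /* Stand-in for the real struct, now declared in a header that
     * asm-offsets.c can include.  Five 32-bit MAS values would explain
     * the old hard-coded size of 20. */
    struct tlbcam {
            u32 MAS0, MAS1, MAS2, MAS3, MAS7;
    };

    int main(void)
    {
            /* Ends up as TLBCAM_SIZE in asm-offsets.h, so the assembly can
             * do "mulli rN, rIndex, TLBCAM_SIZE" instead of using 20. */
            DEFINE(TLBCAM_SIZE, sizeof(struct tlbcam));
            return 0;
    }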

Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2009-01-07 15:33:06 -06:00
Becky Bruce 82331ab15f powerpc/85xx: fix build warning, remove silly cast
This fixes a build warning when PHYS_64BIT is enabled, and removes an
unnecessary cast to phys_addr_t (the variable being cast is already
a phys_addr_t).

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-09-16 10:01:35 -05:00
Kumar Gala 37dd2badcf [POWERPC] 85xx: Add support for relocatable kernel (and booting at non-zero)
Added support to allow an 85xx kernel to be run from a non-zero physical
address (useful for cooperative asymmetric multiprocessing situations and
kdump).  The support can be configured at compile time by setting
CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as
desired.

Alternatively, the kernel build can set CONFIG_RELOCATABLE.  Setting this
config option causes the kernel to determine at runtime the physical
addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START.  If
CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning.
However, CONFIG_PHYSICAL_START will always be used to set the LOAD program
header physical address field in the resulting ELF image.

Currently we are limited to running at a physical address that is a
multiple of 256M.  This is due to how we map TLBs to cover
lowmem.  This should be fixed to allow 64M or maybe even 16M alignment
in the future.  It is considered an error to try to run a kernel at a
non-aligned physical address.

All the magic for this support is accomplished by proper initialization
of the kernel memory subsystem and use of ARCH_PFN_OFFSET.

The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings.
ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET.

/dev/mem continues to allow access to any physical address in the system
regardless of how CONFIG_PHYSICAL_START is set.
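
How ARCH_PFN_OFFSET ties into the generic mm code, in simplified form (the real powerpc definitions also cover the CONFIG_RELOCATABLE case, where the start address is only known at runtime):

    /* Page-frame numbering for normal memory now starts at the kernel's
     * physical load region rather than at 0. */
    #define PAGE_SHIFT        12
    #define ARCH_PFN_OFFSET   ((unsigned long)(CONFIG_PHYSICAL_START >> PAGE_SHIFT))

    /* The generic FLATMEM model then does, in effect: */
    #define pfn_to_page(pfn)   (mem_map + ((pfn) - ARCH_PFN_OFFSET))
    #define page_to_pfn(page)  ((unsigned long)((page) - mem_map) + ARCH_PFN_OFFSET)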

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-24 20:58:01 +10:00
Kumar Gala 09b5e63f82 [POWERPC] Rename __initial_memory_limit to __initial_memory_limit_addr
We always use __initial_memory_limit as an address so rename it
to be clear.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-17 07:46:13 +10:00
Kumar Gala 0aef996b37 [POWERPC] 85xx: Cleanup TLB initialization
* Determine at runtime the RPN the kernel is running at, rather than
  using a compile-time constant for the initial TLB

* Clean up adjust_total_lowmem() to respect memstart_addr and
  be clearer about which variables are sizes vs. addresses.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-17 07:46:13 +10:00
Kumar Gala 99c62dd773 [POWERPC] Remove and replace uses of PPC_MEMSTART with memstart_addr
A number of users of PPC_MEMSTART (40x, ppc_mmu_32) can just always
use 0 as we don't support booting these kernels at non-zero physical
addresses since their exception vectors must be at 0 (or 0xfffx_xxxx).

For the sub-arches that support relocatable interrupt vectors
(book-e), it's reasonable to have memory start at a non-zero physical
address.  For those cases use the variable memstart_addr instead of
the #define PPC_MEMSTART since the only uses of PPC_MEMSTART are for
initialization and in the future we can set memstart_addr at runtime
to have a relocatable kernel.
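
Schematically, the change swaps a compile-time constant for a variable that early boot code fills in (the helper below is invented to show the usage, not code from the patch):

    #include <linux/types.h>

    /* Old:  #define PPC_MEMSTART 0            -- fixed at build time.  */
    /* New:  a variable set early in boot, e.g. from the device tree.   */
    phys_addr_t memstart_addr = 0;

    /* Code that used to reference PPC_MEMSTART now reads the variable. */
    static phys_addr_t lowmem_phys_start(void)
    {
            /* 40x and ppc_mmu_32 always boot at 0; Book-E may not. */
            return memstart_addr;
    }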

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-17 07:46:12 +10:00
Dale Farnsworth e8b6376155 [POWERPC] 85xx: Respect KERNELBASE, PAGE_OFFSET, and PHYSICAL_START on e500
The e500 MMU init code previously assumed that KERNELBASE always equaled
PAGE_OFFSET and that PHYSICAL_START was 0.  Removing that assumption is
useful for kdump support as well as asymmetric multicore.

For the initial kdump support the secondary kernel will run at 32M
but needs access to all of memory, so we bump the initial TLB up to
64M.  This also matches the forthcoming ePAPR spec.

Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-01-23 19:34:36 -06:00
Dale Farnsworth 873553b3d6 [POWERPC] 85xx: Failure with odd memory sizes and CONFIG_HIGHMEM
The CONFIG_FSL_BOOKE mmu setup code fails when CONFIG_HIGHMEM=y
and the 3 fixed TLB entries cannot exactly map the lowmem size.
Each TLB entry can map 4MB, 16MB, 64MB or 256MB, so the failure
is observed when the kernel lowmem size is not equal to the
sum of up to 3 of those values.

Normally, memory is sized in nice numbers, but I observed this
problem while testing a crash dump kernel.  The failure can
also be observed by artificially reducing the kernel's main
memory via the mem= kernel command line parameter.

This commit fixes the problem by setting __initial_memory_limit
in adjust_total_lowmem().
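
For a concrete feel for the failure: with entries limited to 4, 16, 64 or 256 MB, something like 300 MB of lowmem can only be covered up to 256 + 16 + 16 = 288 MB, and the leftover 12 MB is what used to trip the setup code.  A small standalone sketch of the greedy calculation whose result becomes the new __initial_memory_limit (assumed logic, not the literal patch):

    #include <stdio.h>

    /* For each of the three fixed TLB entries, take the largest
     * supported size that still fits in the unmapped remainder. */
    static unsigned long map_with_three_cams(unsigned long lowmem_mb)
    {
            static const unsigned long sizes[] = { 256, 64, 16, 4 };  /* MB */
            unsigned long mapped = 0;
            int cam, i;

            for (cam = 0; cam < 3; cam++)
                    for (i = 0; i < 4; i++)
                            if (mapped + sizes[i] <= lowmem_mb) {
                                    mapped += sizes[i];
                                    break;
                            }
            return mapped;          /* the new lowmem limit, in MB */
    }

    int main(void)
    {
            /* mem=300M maps 256 + 16 + 16 = 288 MB; 12 MB is left out. */
            printf("%lu MB mapped\n", map_with_three_cams(300));
            return 0;
    }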

Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-10-08 08:38:34 -05:00
David Gibson f21f49ea63 [POWERPC] Remove the dregs of APUS support from arch/powerpc
APUS (the Amiga Power-Up System) is not supported under arch/powerpc
and it's unlikely it ever will be.  Therefore, this patch removes the
fragments of APUS support code from arch/powerpc which have been
copied from arch/ppc.

A few APUS references are left in asm-powerpc in .h files which are
still used from arch/ppc.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-14 22:30:15 +10:00
Jörn Engel 6ab3d5624e Remove obsolete #include <linux/config.h>
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-06-30 19:25:36 +02:00
Kumar Gala 4c8d3d997e [PATCH] Update email address for Kumar
Changed jobs and the Freescale address is no longer valid.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-11-13 18:14:10 -08:00
Paul Mackerras 14cf11af6c powerpc: Merge enough to start building in arch/powerpc.
This creates the directory structure under arch/powerpc and a bunch
of Kconfig files.  It does a first-cut merge of arch/powerpc/mm,
arch/powerpc/lib and arch/powerpc/platforms/powermac.  This is enough
to build a 32-bit powermac kernel with ARCH=powerpc.

For now we are getting some unmerged files from arch/ppc/kernel and
arch/ppc/syslib, or arch/ppc64/kernel.  This makes some minor changes
to files in those directories and files outside arch/powerpc.

The boot directory is still not merged.  That's going to be interesting.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-09-26 16:04:21 +10:00