Commit Graph

36 Commits

Author SHA1 Message Date
Anton Blanchard fd9648dff6 [PATCH] ppc64: Add ptrace data breakpoint support
Add hardware data breakpoint support.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-09-12 17:19:12 +10:00
Linus Torvalds 486a153f0e Merge master.kernel.org:/pub/scm/linux/kernel/git/sam/kbuild 2005-09-09 15:46:49 -07:00
Sam Ravnborg 0013a85454 kbuild: m68k,parisc,ppc,ppc64,s390,xtensa use generic asm-offsets.h support
Delete obsoleted parts from arch makefiles and rename to asm-offsets.h

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
2005-09-09 20:57:26 +02:00
Michael Ellerman 34b5233f37 [PATCH] ppc64: Fix oops for !CONFIG_NUMA
The SPARSEMEM EXTREME code (802f192e4a) that
went in yesterday broke PPC64 for !CONFIG_NUMA.

The problem is that (free|reserve)_bootmem don't take a page number as their
first argument; they take an address.  Ruh roh.
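
The shape of the fix, as a hedged sketch (start_pfn and size are
illustrative variables, not the exact ones in the patch):

	/* (free|reserve)_bootmem want a byte address, not a pfn */
	free_bootmem(start_pfn << PAGE_SHIFT, size);
	/* not: free_bootmem(start_pfn, size); */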

Booted on P5 LPAR, iSeries and G5.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-09-09 22:11:34 +10:00
Prasanna S Panchamukhi bb144a85c7 [PATCH] Kprobes: prevent possible race conditions ppc64 changes
This patch contains the ppc64 architecture-specific changes to prevent
possible race conditions.

Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-07 16:58:00 -07:00
David Gibson 14b3466161 [PATCH] Invert sense of SLB class bit
Currently, we set the class bit in kernel SLB entries, and clear it on
user SLB entries.  On POWER5, ERAT entries created in real mode have
the class bit clear.  So to avoid flushing kernel ERAT entries on each
context switch, this patch inverts our usage of the class bit, setting
it on user SLB entries and clearing it on kernel SLB entries.
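
A hedged sketch of the inverted flag definitions (the SLB_VSID_* names
follow the kernel's convention; the exact composition here is illustrative):

	/* class bit (C) now set on user entries and clear on kernel
	   entries, matching real-mode ERAT entries, which have C=0 */
	#define SLB_VSID_KERNEL	(SLB_VSID_KP)
	#define SLB_VSID_USER	(SLB_VSID_KP | SLB_VSID_KS | SLB_VSID_C)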

Booted on POWER5 and G5.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-09-06 16:57:46 +10:00
Anton Blanchard 4721e2214b [PATCH] ppc64: poison initmem
Poison initmem after we free it so we catch use-after-free issues.
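
A hedged sketch of the idea (the poison pattern is illustrative, and the
page-count fixups real code needs are elided):

	/* fill freed init pages with junk so stale users fail noisily */
	for (addr = (unsigned long)__init_begin;
	     addr < (unsigned long)__init_end; addr += PAGE_SIZE) {
		memset((void *)addr, 0xaa, PAGE_SIZE);
		free_page(addr);	/* page-count fixups elided */
	}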

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-09-06 16:07:53 +10:00
Bob Picco 802f192e4a [PATCH] SPARSEMEM EXTREME
A new option for SPARSEMEM is ARCH_SPARSEMEM_EXTREME.  Architecture
platforms with a very sparse physical address space would likely want to
select this option.  For those architecture platforms that don't select the
option, the code generated is equivalent to SPARSEMEM currently in -mm.
I'll be posting a patch on ia64 ml which uses this new SPARSEMEM feature.

ARCH_SPARSEMEM_EXTREME makes mem_section a one-dimensional array of
pointers to mem_sections.  This two-level layout scheme is able to achieve
smaller memory requirements for SPARSEMEM with the tradeoff of an
additional shift and load when fetching the memory section.  The current
SPARSEMEM -mm implementation is a one-dimensional array of mem_sections,
which is the default SPARSEMEM configuration.  The patch attempts to isolate
the implementation details of the physical layout of the sparsemem section
array.
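
A hedged sketch of the two layouts behind the section lookup (identifier
names follow the description above; the exact constants are illustrative):

	#ifdef CONFIG_ARCH_SPARSEMEM_EXTREME
	/* two levels: a small array of pointers to pages of mem_sections */
	extern struct mem_section *mem_section[NR_SECTION_ROOTS];
	#define __nr_to_section(nr) \
		(&mem_section[(nr) / SECTIONS_PER_ROOT][(nr) % SECTIONS_PER_ROOT])
	#else
	/* one level: a single flat array of mem_sections */
	extern struct mem_section mem_section[NR_MEM_SECTIONS];
	#define __nr_to_section(nr) (&mem_section[nr])
	#endif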

ARCH_SPARSEMEM_EXTREME depends on 64BIT and is by default boolean false.

I've boot tested ia64 configured for ARCH_SPARSEMEM_EXTREME under aim load.
I've also boot tested a 4-way Opteron machine with !ARCH_SPARSEMEM_EXTREME
and tested with aim.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Bob Picco <bob.picco@hp.com>
Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-05 00:05:38 -07:00
David Gibson 319e76a1ae [PATCH] Fix bug in ppc64 dynamic hugepage support
In adjusting the logic for SLB miss for the dynamic hugepage stuff, I
messed up the !CONFIG_HUGETLB_PAGE case, failing to set the SLB flags
properly.

This fixes it.  It also streamlines the logic for the HUGETLB_PAGE case
(removing a couple of branches) while we're at it.

Booted, and roughly tested on POWER5 (with and without HUGETLB_PAGE),
iSeries/RS64 (no hugepage available), and G5 (with and without
HUGETLB_PAGE).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-01 10:48:20 -07:00
David Gibson 8913ca1c9c [PATCH] Remove nested feature sections
The {BEGIN,END}_FTR_SECTION asm macros used in ppc64 to nop out
sections of code at runtime cannot be nested.  However, we do nest
them in hash_low.S.  We get away with it there, because there is
nothing between the BEGIN markers for each section.  However, that's
confusing to someone reading the code.

This patch removes the nested ifset and ifclr feature sections,
replacing them with a single feature section in the full mask/value
form.
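
For example, code that must run only when feature A is set and feature B
is clear becomes a single section (a hedged sketch; CPU_FTR_A and
CPU_FTR_B are placeholder feature bits):

	BEGIN_FTR_SECTION
		/* patched to nops at boot unless A is set and B is clear */
	END_FTR_SECTION(CPU_FTR_A | CPU_FTR_B, CPU_FTR_A)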

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-30 13:52:12 +10:00
David Gibson c594adad56 [PATCH] Dynamic hugepage addresses for ppc64
Paulus, I think this is now a reasonable candidate for the post-2.6.13
queue.

Relax address restrictions for hugepages on ppc64

Presently, 64-bit applications on ppc64 may only use hugepages in the
address region from 1-1.5T.  Furthermore, if hugepages are enabled in
the kernel config, they may only use hugepages and never normal pages
in this area.  This patch relaxes this restriction, allowing any
address to be used with hugepages, but with a 1TB granularity.  That
is, if you map a hugepage anywhere in the region 1TB-2TB, that entire
area will be reserved exclusively for hugepages for the remainder of
the process's lifetime.  This works analogously to hugepages in 32-bit
applications, where hugepages can be mapped anywhere, but with 256MB
(mmu segment) granularity.
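
From user space the effect looks like this, as a hedged sketch (the
hugetlbfs mount point and sizes are illustrative):

	#include <fcntl.h>
	#include <sys/mman.h>

	/* any address may now be used, not just the 1-1.5T window */
	int fd = open("/mnt/huge/scratch", O_CREAT | O_RDWR, 0644);
	void *p = mmap(NULL, 16UL << 20, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	/* the 1TB slice containing p is now hugepage-only for the
	   life of this process */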

This patch applies on top of the four level pagetable patch
(http://patchwork.ozlabs.org/linuxppc64/patch?id=1936).

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-29 10:53:38 +10:00
Michael Ellerman 180379dcef [PATCH] ppc64: Remove physbase from the lmb_property struct
We no longer need the lmb code to know about abs and phys addresses, so
remove the physbase variable from the lmb_property struct.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-29 10:53:37 +10:00
Michael Ellerman e88bcd1b29 [PATCH] ppc64: Remove redundant abs_to_phys() macro
abs_to_phys() is a macro that turns out to do nothing, and also has the
unfortunate property that it's not the inverse of phys_to_abs() on iSeries.

The following is for my benefit as much as everyone else's.

With CONFIG_MSCHUNKS enabled, the lmb code is changed such that it keeps
a physbase variable for each lmb region. This is used to take the possibly
discontiguous lmb regions and present them as a contiguous address space
beginning from zero.

In this context each lmb region's base address is its "absolute" base
address, and its physbase is its "physical" address (from Linux's point of
view). The abs_to_phys() macro does the mapping from "absolute" to "physical".

Note: This is not related to the iSeries mapping of physical to absolute
(ie. Hypervisor) addresses which is maintained with the msChunks structure.
And the msChunks structure is not controlled via CONFIG_MSCHUNKS.

Once upon a time you could compile for non-iSeries with CONFIG_MSCHUNKS
enabled. But these days CONFIG_MSCHUNKS depends on CONFIG_PPC_ISERIES, so
for non-iSeries code abs_to_phys() is a no-op.

On iSeries we always have one lmb region which spans from 0 to
systemcfg->physicalMemorySize (arch/ppc64/kernel/iSeries_setup.c line 383).
This region has a base (ie. absolute) address of 0, and a physbase address
of 0 (as calculated in lmb_analyze() (arch/ppc64/kernel/lmb.c line 144)).

On iSeries, abs_to_phys(aa) is defined as lmb_abs_to_phys(aa), which finds
the lmb region containing aa (and there's only one, ie. 0), and then does:

 return lmb.memory.region[0].physbase + (aa - lmb.memory.region[0].base)

physbase == base == 0, so you're left with "return aa".

So remove abs_to_phys(), and lmb_abs_to_phys() which is the implementation
of abs_to_phys() for iSeries.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-29 10:53:37 +10:00
Michael Ellerman aefd16b0c5 [PATCH] ppc64: Remove redundant uses of physRpn_to_absRpn
physRpn_to_absRpn is a no-op on non-iSeries platforms; remove the two
redundant calls.

There's only one caller on iSeries so fold the logic in there so we can get
rid of it completely.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-29 10:53:36 +10:00
Stephen Rothwell 8c65b5c955 [PATCH] ppc64: move iSeries vio iommu init
Since the iSeries vio iommu tables cannot be used until after the vio bus has
been initialised, move the initialisation of the tables there.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-29 10:53:32 +10:00
David Gibson e28f7faf05 [PATCH] Four level pagetables for ppc64
Implement 4-level pagetables for ppc64

This patch implements full four-level page tables for ppc64, thereby
extending the usable user address range to 44 bits (16T).

The patch uses a full page for the tables at the bottom and top level,
and a quarter page for the intermediate levels.  It uses full 64-bit
pointers at every level, thus also increasing the addressable range of
physical memory.  This patch also tweaks the VSID allocation to allow a
matching range for user addresses (this halves the number of available
contexts) and adds some #if and BUILD_BUG sanity checks.
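
The 44-bit figure falls out of the level sizes just described, assuming
4k pages and 8-byte entries (a worked breakdown, not text from the patch):

	page offset :  12 bits  (4k page)
	PTE level   :   9 bits  (full page    = 512 entries)
	PMD level   :   7 bits  (quarter page = 128 entries)
	PUD level   :   7 bits  (quarter page = 128 entries)
	PGD level   :   9 bits  (full page    = 512 entries)
	total       :  44 bits  ->  2^44 = 16T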

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2005-08-29 10:53:31 +10:00
Mike Kravetz 96cd5b0856 [PATCH] ppc64: POWER 4 fails to boot with NUMA
If CONFIG_NUMA is set, some POWER 4 systems will fail to boot.  This is
because of special processing needed to handle invalid node IDs (0xffff) on
POWER 4.  My previous patch to handle memory 'holes' within nodes forgot to
add this special case for POWER 4 in one place.

In reality, I'm not sure that configuring the kernel for NUMA on POWER 4 makes
much sense.  Are there POWER 4 based systems with NUMA characteristics that
are presented by the firmware?  But distros want one kernel for all systems,
so NUMA is on by default in their kernels.  This patch handles those cases.

Signed-off-by: Mike Kravetz <kravetz@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-08-01 21:38:01 -07:00
David Gibson 533f08172e [PATCH] ppc64: dynamically allocate segment tables
PPC64 machines before Power4 need a segment table page allocated for each
CPU.  Currently these are allocated statically in a big array in head.S for
all CPUs.  The segment tables need to be in the first segment (so
do_stab_bolted doesn't take a recursive fault on the stab itself), but
other than that there are no constraints which require the stabs for the
secondary CPUs to be statically allocated.

This patch allocates segment tables dynamically during boot, using
lmb_alloc() to ensure they are within the first 256M segment.  This reduces
the kernel image size by 192k...
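
A hedged sketch of the allocation (lmb_alloc_base() is the bounded lmb
allocator of the era; the per-cpu bookkeeping is illustrative):

	/* one stab page per cpu, bounded to the first 256M segment */
	unsigned long newstab = lmb_alloc_base(PAGE_SIZE, PAGE_SIZE,
					       1UL << SID_SHIFT);
	paca[cpu].stab_addr = (unsigned long)__va(newstab);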

Tested on RS64 iSeries, POWER3 pSeries, and POWER5.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-27 16:25:58 -07:00
David Gibson 96e2844999 [PATCH] ppc64: kill bitfields in ppc64 hash code
This patch removes the use of bitfield types from the ppc64 hash table
manipulation code.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-07-13 11:25:25 -07:00
R Sharada f4c82d5132 [PATCH] ppc64 kexec: native hash clear
Add code to clear the hash table and invalidate the tlb for native (SMP,
non-LPAR) mode.  Supports 16M and 4k pages.

Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: R Sharada <sharada@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-25 16:24:51 -07:00
Andy Whitcroft 145e664231 [PATCH] ppc64: sparsemem memory model
Provide the architecture specific implementation for SPARSEMEM for PPC64
systems.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Mike Kravetz <kravetz@us.ibm.com> (in part)
Signed-off-by: Martin Bligh <mbligh@aracnet.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:06 -07:00
Andy Whitcroft 74b30be2e1 [PATCH] ppc64: add memory present
Provide hooks for PPC64 to allow memory models to be informed of installed
memory areas.  This allows SPARSEMEM to instantiate mem_map for the populated
areas.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Martin Bligh <mbligh@aracnet.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:05 -07:00
Dave Hansen 408fde81c1 [PATCH] remove non-DISCONTIG use of pgdat->node_mem_map
This patch effectively eliminates direct use of pgdat->node_mem_map outside
of the DISCONTIG code.  On a flat memory system, these fields aren't
currently used, nor are they on a sparsemem system.

There was also a node_mem_map(nid) macro on many architectures.  Its use,
along with that of ->node_mem_map itself, was not consistent.  It has
been removed in favor of two new, more explicit, arch-independent macros:

	pgdat_page_nr(pgdat, pagenr)
	nid_page_nr(nid, pagenr)

I called them "pgdat" and "nid" because we overload the term "node" to mean
"NUMA node", "DISCONTIG node" or "pg_data_t" in very confusing ways.  I
believe the newer names are much clearer.

These macros can be overridden in the sparsemem case with a theoretically
slower operation using node_start_pfn and pfn_to_page(), instead.  We could
make this the only behavior if people want, but I don't want to change too
much at once.  One thing at a time.
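
A hedged sketch of the two flavours (the first indexes node_mem_map
directly; the sparsemem-safe override goes via pfn_to_page()):

	/* flat/discontig form: direct mem_map indexing */
	#define pgdat_page_nr(pgdat, pagenr) ((pgdat)->node_mem_map + (pagenr))
	/* sparsemem-safe form: compute a pfn and translate it */
	#define pgdat_page_nr(pgdat, pagenr) \
		pfn_to_page((pgdat)->node_start_pfn + (pagenr))
	#define nid_page_nr(nid, pagenr) pgdat_page_nr(NODE_DATA(nid), (pagenr))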

This patch removes more code than it adds.

Compile tested on alpha, alpha discontig, arm, arm-discontig, i386, i386
generic, NUMAQ, Summit, ppc64, ppc64 discontig, and x86_64.  Full list
here: http://sr71.net/patches/2.6.12/2.6.12-rc1-mhp2/configs/

Boot tested on NUMAQ, x86 SMP and ppc64 power4/5 LPARs.

Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Martin J. Bligh <mbligh@aracnet.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-23 09:45:00 -07:00
Anton Blanchard 515bae9cdc [PATCH] ppc64: Mark kernel hptes dirty
We don't use the hardware referenced and changed bits, and setting them early
avoids a store to memory.  We already do this for userspace hptes but not
kernel ones.  Do it.
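
A hedged sketch of the change to the bolted-mapping flags (names as in
the era's hash_utils.c; the exact composition is illustrative):

	/* pre-set referenced (R) and changed (C) so the hardware never
	   needs an extra store to memory to update them later */
	unsigned long mode_rw = _PAGE_ACCESSED | _PAGE_DIRTY |
				_PAGE_COHERENT | PP_RWXX;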

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-21 18:46:31 -07:00
David Gibson 20cee16ced [PATCH] ppc64: Abolish ioremap_mm
Currently ppc64 has two mm_structs for the kernel, init_mm and also
ioremap_mm.  The latter really isn't necessary: this patch abolishes it,
instead restricting vmallocs to the lower 1TB of the init_mm's range and
placing io mappings in the upper 1TB.  This simplifies the code in a number
of places and eliminates an unnecessary set of pagetables.  It also tweaks
the unmap/free path a little, allowing us to remove the unmap_im_area() set
of page table walkers, replacing them with unmap_vm_area().
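
A hedged sketch of the resulting split of init_mm's kernel-virtual
segment (the constant names and base address are illustrative):

	#define VMALLOC_START	0xD000000000000000UL
	#define VMALLOC_SIZE	(1UL << 40)	/* lower 1TB: vmalloc */
	#define VMALLOC_END	(VMALLOC_START + VMALLOC_SIZE)
	#define IMALLOC_BASE	VMALLOC_END	/* upper 1TB: io mappings */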

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-21 18:46:26 -07:00
Wolfgang Wander 1363c3cd86 [PATCH] Avoiding mmap fragmentation
Ingo recently introduced a great speedup for allocating new mmaps using the
free_area_cache pointer, which boosts the specweb SSL benchmark by 4-5% and
causes huge performance increases in thread creation.

The downside of this patch is that it does lead to fragmentation in the
mmap-ed areas (visible via /proc/self/maps), such that some applications
that work fine under 2.4 kernels quickly run out of memory on any 2.6
kernel.

The problem is twofold:

  1) the free_area_cache is used to continue a search for memory where
     the last search ended.  Before the change, new areas were always
     searched for starting from the base address.

     So now new small areas clutter holes of all sizes throughout the
     whole mmap-able region, whereas before, small areas tended to fill
     holes near the base, leaving holes far from the base large and
     available for larger requests.

  2) the free_area_cache is also set to the location of the last
     munmap-ed area, so in scenarios where we allocate e.g. five regions of
     1K each, then free regions 4, 2, 3 in that order, the next request for
     1K will be placed in the position of the old region 3, whereas before we
     appended it to the still-active region 1, placing it at the location
     of the old region 2.  Before we had one free region of 2K; now we only
     get two free regions of 1K -> fragmentation.

The patch addresses these issues by introducing yet another cache descriptor
cached_hole_size that contains the largest known hole size below the
current free_area_cache.  If a new request comes in, the size is compared
against the cached_hole_size and, if the request can be filled with a hole
below free_area_cache, the search is started from the base instead.
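
A hedged sketch of that decision at the top of the unmapped-area search
(field names as described above; the scan loop itself is elided):

	if (len <= mm->cached_hole_size) {
		/* a big-enough hole may exist below the cache point,
		   so restart from the base and re-learn the hole size */
		mm->cached_hole_size = 0;
		mm->free_area_cache = TASK_UNMAPPED_BASE;
	}
	addr = mm->free_area_cache;
	/* ... linear search upward from addr, recording the largest
	   hole skipped in mm->cached_hole_size ... */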

The results look promising: whereas 2.6.12-rc4 fragments quickly (my
earlier-posted leakme.c test program terminates after 50000+ iterations
with 96 distinct and fragmented maps in /proc/self/maps), it performs nicely
(as expected) with thread creation: Ingo's test_str02 with 20000 threads
requires 0.7s system time.

Taking out Ingo's patch (un-patch available per request) by basically
deleting all mentions of free_area_cache from the kernel and starting the
search for new memory always at the respective bases, we observe: leakme
terminates successfully with 11 distinct, hardly fragmented areas in
/proc/self/maps, but thread creation is grindingly slow: 30+s(!) system
time for Ingo's test_str02 with 20000 threads.

Now - drumroll ;-) the appended patch works fine with leakme: it ends with
only 7 distinct areas in /proc/self/maps and also thread creation seems
sufficiently fast with 0.71s for 20000 threads.

Signed-off-by: Wolfgang Wander <wwc@rentec.com>
Credit-to: "Richard Purdie" <rpurdie@rpsys.net>
Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu> (partly)
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-21 18:46:16 -07:00
David Gibson 63551ae0fe [PATCH] Hugepage consolidation
A lot of the code in arch/*/mm/hugetlbpage.c is quite similar.  This patch
attempts to consolidate a lot of the code across the arch's, putting the
combined version in mm/hugetlb.c.  There are a couple of uglyish hacks in
order to convert all the hugepage archs, but the result is a very large
reduction in the total amount of code.  It also means things like hugepage
lazy allocation could be implemented in one place, instead of six.

Tested, at least a little, on ppc64, i386 and x86_64.

Notes:
	- this patch changes the meaning of set_huge_pte() to be more
	  analogous to set_pte()
	- does SH4 need a special huge_ptep_get_and_clear()??

Acked-by: William Lee Irwin <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-06-21 18:46:15 -07:00
Paul Mackerras 2512809255 [PATCH] ppc64: remove explicit contig_page_data reference
Trivial patch to remove our last direct reference to contig_page_data.
This will make it just that much less hard to separate NUMA and
DISCONTIG.  Please forward on.  Against 2.6.12-rc1

Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-05 22:00:52 -07:00
David Gibson 1f8d419e29 [PATCH] ppc64: pgtable.h and other header cleanups
This patch started as simply removing a few never-used macros from
asm-ppc64/pgtable.h, then kind of grew.  It now makes a bunch of
cleanups to the ppc64 low-level header files (with corresponding
changes to .c files where necessary) such as:
	- Abolishing never-used macros
	- Eliminating multiple #defines with the same purpose
	- Removing pointless macros (cases where just expanding the
macro everywhere turns out clearer and more sensible)
	- Removing some cases where macros which could be defined in
terms of each other weren't
	- Moving imalloc() related definitions from pgtable.h to their
own header file (imalloc.h)
	- Re-arranging headers to group things more logically
	- Moving all VSID allocation related things to mmu.h, instead
of being split between mmu.h and mmu_context.h
	- Removing some reserved space for flags from the PMD - we're
not using it.
	- Fixing some bugs which broke compilation with STRICT_MM_TYPECHECKS.

Signed-off-by: David Gibson <dwg@au1.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-05 16:36:32 -07:00
Olof Johansson d03853d566 [PATCH] PPC64: Remove hot busy-wait loop in __hash_page
It turns out that our current __hash_page code will do a very hot busy-wait
loop waiting on _PAGE_BUSY to be cleared.  It even does ldarx/stdcx in the
loop, which will bounce reservations around like crazy if there's more than
one CPU spinning on the same PTE (or even another PTE in the same
reservation granule).  The end result is that each fault takes longer when
there's contention, which in turn increases the chance of another thread
hitting the same fault and also piling up.  Not pretty.

There are two options here:
1. Do an out-of-line busy loop à la spinlocks with just loads (no
   reserves)
2. Just bail and refault if needed.

(2) makes sense here: If the PTE is busy, chances are it's in flux anyway
and the other code path making a change might just be ready to hash it.
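
Rendered in C for clarity (the return convention is illustrative; the
real routine is low-level), option (2) amounts to:

	old_pte = pte_val(*ptep);
	if (old_pte & _PAGE_BUSY)
		return 0;	/* bail; the fault path simply retries */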

This fixes a stampede seen on a large-ish system where a multithreaded
HPC app faults in the same text pages on several cpus at the same time.

Signed-off-by: Olof Johansson <olof@lixom.net>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-01 08:58:45 -07:00
Benjamin Herrenschmidt 58366af586 [PATCH] ppc64: update to use the new 4L headers
This patch converts ppc64 to use the generic pgtable-nopud.h instead of the
"fixup" header.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-01 08:58:44 -07:00
Olof Johansson bb78cb7220 [PATCH] ppc64: remove unused argument to create_slbe
Remove the vsid argument to create_slbe, since it's no longer used.

Spotted by R Sharada.

Signed-off-by: Olof Johansson <olof@austin.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-05-01 08:58:44 -07:00
Hugh Dickins 021740dc30 [PATCH] freepgt: hugetlb area is clean
Once we're strict about clearing away page tables, hugetlb_prefault can assume
there are no page tables left within its range.  Since the other arches
continue if !pte_none here, let i386 do the same.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-19 13:29:18 -07:00
Hugh Dickins 3bf5ee9564 [PATCH] freepgt: hugetlb_free_pgd_range
ia64 and ppc64 had hugetlb_free_pgtables functions which were no longer being
called, and it wasn't obvious what to do about them.

The ppc64 case turns out to be easy: the associated tables are noted elsewhere
and freed later, safe to either skip its hugetlb areas or go through the
motions of freeing nothing.  Since ia64 does need a special case, restore to
ppc64 the special case of skipping them.

The ia64 hugetlb case has been broken since pgd_addr_end went in, though it
probably appeared to work okay if you just had one such area; in fact it's
been broken much longer if you consider a long munmap spanning from another
region into the hugetlb region.

In the ia64 hugetlb region, more virtual address bits are available than in
the other regions, yet the page tables are structured the same way: the page
at the bottom is larger.  Here we need to scale down each addr before passing
it to the standard free_pgd_range.  Was about to write a hugely_scaled_down
macro, but found htlbpage_to_page already exists for just this purpose.  Fixed
off-by-one in ia64 is_hugepage_only_range.

Uninline free_pgd_range to make it available to ia64.  Make sure the
vma-gathering loop in free_pgtables cannot join a hugepage_only_range to any
other (safe to join huges?  probably but don't bother).

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-19 13:29:16 -07:00
Benjamin Herrenschmidt dfbacdc1a0 [PATCH] ppc64: Fix semantics of __ioremap
This patch fixes ppc64 __ioremap() so that it stops implicitly adding
_PAGE_GUARDED when the cache is not writeback, and instead lets the callers
provide the flags they want.  This allows things like framebuffers to
explicitly request a non-cacheable and non-guarded mapping, which is more
efficient for that type of memory without side effects.  The patch also
fixes all current callers to add _PAGE_GUARDED, except btext, which is fine
without it.
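
A hedged before/after view for a typical device caller (flag names are
the ppc64 page-attribute bits; addresses and sizes are illustrative):

	/* before: _PAGE_GUARDED was implied for non-writeback mappings */
	regs = __ioremap(paddr, size, _PAGE_NO_CACHE);
	/* after: callers say what they mean; most devices want both */
	regs = __ioremap(paddr, size, _PAGE_NO_CACHE | _PAGE_GUARDED);
	/* a framebuffer can now ask for non-cacheable but non-guarded */
	fb = __ioremap(fb_paddr, fb_size, _PAGE_NO_CACHE);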

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-04-16 15:24:33 -07:00
Linus Torvalds 1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00