original_kernel/mm
Lee Schermerhorn 3ad33b2436 Migration: find correct vma in new_vma_page()
We hit the BUG_ON() in mm/rmap.c:vma_address() when trying to migrate, via
mbind(MPOL_MF_MOVE), a non-anon region that spans multiple vmas.  For anon
regions, we just fail to migrate any pages beyond the first vma in the
range.

This occurs because do_mbind() collects a list of pages to migrate by
calling check_range().  check_range() walks the task's mm, spanning vmas as
necessary, to collect the migratable pages into a list.  Then, do_mbind()
calls migrate_pages() passing the list of pages, a function to allocate new
pages based on vma policy [new_vma_page()], and a pointer to the first vma
of the range.
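
For reference, the hand-off described above looks roughly like the
following fragment of do_mbind() from that era (a sketch; error handling
and the surrounding declarations are omitted):

    /*
     * The collected page list may span several vmas, but only the first
     * vma of the range is passed down as the "private" value that
     * new_vma_page() will see; nr_failed counts pages we could not move.
     */
    if (!list_empty(&pagelist))
        nr_failed = migrate_pages(&pagelist, new_vma_page,
                                  (unsigned long)vma);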

For each page in the list, new_vma_page() calls page_address_in_vma(),
passing the page and the vma [first in range], to obtain the address to
pass to alloc_page_vma().  The page address is needed to get interleave
policy correct.  If the pages in the list come from multiple vmas,
eventually new_vma_page() will pass a page to page_address_in_vma() with
the incorrect vma.  For !PageAnon pages, this will result in a bug check in
rmap.c:vma_address().  For anon pages, vma_address() will just return
-EFAULT and fail the migration.
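
The pre-patch allocator callback shows why a single vma argument is not
enough.  Roughly (a sketch of the old code; the unused int **x argument is
part of the new_page_t signature of that era):

    static struct page *new_vma_page(struct page *page, unsigned long private,
                                     int **x)
    {
        struct vm_area_struct *vma = (struct vm_area_struct *)private;

        /*
         * Wrong for pages that belong to a later vma in the range:
         * page_address_in_vma() is always asked about the first vma.
         */
        return alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma,
                              page_address_in_vma(page, vma));
    }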

This patch modifies new_vma_page() to check the return value from
page_address_in_vma().  If the return value is -EFAULT, new_vma_page()
searches forward via vm_next for the vma that actually maps the page--i.e.,
one for which page_address_in_vma() does not return -EFAULT.  This assumes
that the pages in the list handed to migrate_pages() are in address order,
which is currently the case.  The patch documents this assumption in a new
comment block for new_vma_page().

If new_vma_page() cannot locate the vma mapping the page in a forward
search in the mm, it will pass a NULL vma to alloc_page_vma().  This will
result in the allocation using the task policy, if any, else system default
policy.  This situation is unlikely, but the patch documents this behavior
with a comment.
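
Taken together, the modified new_vma_page() behaves roughly as follows (a
sketch based on the description above, not a verbatim copy of the diff):

    static struct page *new_vma_page(struct page *page, unsigned long private,
                                     int **x)
    {
        struct vm_area_struct *vma = (struct vm_area_struct *)private;
        unsigned long address = 0;      /* only used when a vma is found */

        /*
         * Search forward from the first vma of the range for the vma that
         * maps this page.  Relies on the page list being in address order.
         */
        while (vma) {
            address = page_address_in_vma(page, vma);
            if (address != -EFAULT)
                break;
            vma = vma->vm_next;
        }

        /*
         * If no vma maps the page, vma is NULL here and alloc_page_vma()
         * falls back to the task policy, if any, else the system default.
         */
        return alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
    }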

Note, this patch restarts from the first vma of a multi-vma range each time
new_vma_page() is called.  If this is not acceptable, we can make the vma
argument a pointer, both in new_vma_page() and its caller unmap_and_move(),
so that the loop in migrate_pages() always passes down the last vma in
which a page was found.  This would require changes to all new_page_t
functions passed to migrate_pages().  Is this necessary?

For this patch to work, we can't bug check in vma_address() for pages
outside the argument vma.  This patch removes the BUG_ON().  All other
callers [besides new_vma_page()] already check the return status.
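
With the BUG_ON() gone, the check in mm/rmap.c reduces to roughly the
following (a sketch of the post-patch helper; the removed BUG_ON() used to
fire on this path for !PageAnon pages):

    static inline unsigned long
    vma_address(struct page *page, struct vm_area_struct *vma)
    {
        pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
        unsigned long address;

        address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
        if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {
            /* page is not mapped by this vma; callers must check -EFAULT */
            return -EFAULT;
        }
        return address;
    }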

Tested on x86_64, 4 node NUMA platform.

Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-11-14 18:45:38 -08:00
Kconfig small documentation fixes 2007-10-20 02:46:58 +02:00
Makefile
allocpercpu.c
backing-dev.c
bootmem.c
bounce.c
fadvise.c
filemap.c Remove broken ptrace() special-case code from file mapping 2007-10-31 09:19:46 -07:00
filemap_xip.c
fremap.c
highmem.c
hugetlb.c spelling fixes: mm/ 2007-10-20 01:27:18 +02:00
internal.h
madvise.c
memory.c unexport access_process_vm 2007-11-05 21:53:39 +11:00
memory_hotplug.c memory hotplug: rearrange memory hotplug notifier 2007-10-22 08:13:17 -07:00
mempolicy.c Migration: find correct vma in new_vma_page() 2007-11-14 18:45:38 -08:00
mempool.c spelling fixes: mm/ 2007-10-20 01:27:18 +02:00
migrate.c Typo fixes retrun -> return 2007-10-20 02:13:26 +02:00
mincore.c
mlock.c
mmap.c fix mprotect vma_wants_writenotify prot 2007-10-23 08:32:06 -07:00
mmzone.c
mprotect.c fix mprotect vma_wants_writenotify prot 2007-10-23 08:32:06 -07:00
mremap.c
msync.c
nommu.c NOMMU: mm/nommu.c needs linux/module.h 2007-10-29 07:53:26 -07:00
oom_kill.c oom_kill bug 2007-10-20 15:04:06 -07:00
page-writeback.c spelling fixes: mm/ 2007-10-20 01:27:18 +02:00
page_alloc.c Revert "Bias the placement of kernel pages at lower PFNs" 2007-11-12 14:14:44 -08:00
page_io.c
page_isolation.c
pdflush.c
prio_tree.c spelling fixes: mm/ 2007-10-20 01:27:18 +02:00
quicklist.c
readahead.c
rmap.c Migration: find correct vma in new_vma_page() 2007-11-14 18:45:38 -08:00
shmem.c fix tmpfs BUG and AOP_WRITEPAGE_ACTIVATE 2007-10-30 08:06:55 -07:00
shmem_acl.c
slab.c slab: fix typo in allocation failure handling 2007-11-14 18:45:36 -08:00
slob.c
slub.c SLUB: killed the unused "end" variable 2007-11-12 10:32:29 -08:00
sparse-vmemmap.c mm/sparse-vmemmap.c: make sure init_mm is included 2007-10-30 08:06:55 -07:00
sparse.c Revert "x86_64: allocate sparsemem memmap above 4G" 2007-10-29 14:05:37 -07:00
swap.c spelling fixes: mm/ 2007-10-20 01:27:18 +02:00
swap_state.c
swapfile.c
thrash.c
tiny-shmem.c
truncate.c
util.c
vmalloc.c spelling fixes: mm/ 2007-10-20 01:27:18 +02:00
vmscan.c spelling fixes: mm/ 2007-10-20 01:27:18 +02:00
vmstat.c