original_kernel/mm
Hugh Dickins c74df32c72 [PATCH] mm: ptd_alloc take ptlock
Second step in pushing down the page_table_lock.  Remove the temporary
bridging hack from __pud_alloc, __pmd_alloc, __pte_alloc: expect callers not
to hold page_table_lock, whether it's on init_mm or a user mm; take
page_table_lock internally to check whether a racing task has already
allocated the table.
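
For illustration, a minimal sketch of that check-under-lock pattern, modelled
on the __pmd_alloc style rather than copied from the patch; the helper names
pmd_alloc_one, pmd_free and pud_populate are assumptions here, and their exact
signatures vary by architecture and kernel version:

	int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
	{
		/* allocate the new table outside any lock */
		pmd_t *new = pmd_alloc_one(mm, address);
		if (!new)
			return -ENOMEM;

		spin_lock(&mm->page_table_lock);
		if (pud_present(*pud))		/* a racing task already installed one */
			pmd_free(new);
		else
			pud_populate(mm, pud, new);
		spin_unlock(&mm->page_table_lock);
		return 0;
	}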

Convert their callers in common code.  But avoid coming back to change them
again later: instead of moving the spin_lock(&mm->page_table_lock) down,
switch over to new macros pte_alloc_map_lock and pte_unmap_unlock, which
encapsulate the mapping+locking and unlocking+unmapping together, and in the
end may use alternatives to the mm page_table_lock itself.

These callers all hold mmap_sem (some exclusively, some not), so at no level
can a page table be whipped away from beneath them; and pte_alloc uses the
"atomic" pmd_present to test whether it needs to allocate.  It appears that on
all arches we can safely descend without page_table_lock.
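
A sketch of that descent, with get_pte_locked a hypothetical helper invented
for this example (the caller is assumed to hold mmap_sem, so only the final
pte step takes a lock, via the macro above):

	static pte_t *get_pte_locked(struct mm_struct *mm, unsigned long addr,
				     spinlock_t **ptlp)
	{
		/* no page_table_lock needed to walk: mmap_sem pins the upper tables */
		pgd_t *pgd = pgd_offset(mm, addr);
		pud_t *pud = pud_alloc(mm, pgd, addr);
		pmd_t *pmd;

		if (!pud)
			return NULL;
		/* pud_alloc/pmd_alloc recheck under page_table_lock internally */
		pmd = pmd_alloc(mm, pud, addr);
		if (!pmd)
			return NULL;
		/* pte_alloc tests pmd_present "atomically" before allocating */
		return pte_alloc_map_lock(mm, pmd, addr, ptlp);
	}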

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-29 21:40:40 -07:00
Kconfig
Makefile
bootmem.c
fadvise.c
filemap.c
filemap.h
filemap_xip.c
fremap.c [PATCH] mm: ptd_alloc take ptlock 2005-10-29 21:40:40 -07:00
highmem.c
hugetlb.c [PATCH] mm: ptd_alloc take ptlock 2005-10-29 21:40:40 -07:00
internal.h
madvise.c
memory.c [PATCH] mm: ptd_alloc take ptlock 2005-10-29 21:40:40 -07:00
mempolicy.c
mempool.c
mincore.c
mlock.c
mmap.c
mprotect.c
mremap.c [PATCH] mm: ptd_alloc take ptlock 2005-10-29 21:40:40 -07:00
msync.c
nommu.c
oom_kill.c
page-writeback.c
page_alloc.c
page_io.c
pdflush.c
prio_tree.c
readahead.c
rmap.c
shmem.c
slab.c
sparse.c
swap.c
swap_state.c
swapfile.c
thrash.c
tiny-shmem.c
truncate.c
vmalloc.c [PATCH] mm: init_mm without ptlock 2005-10-29 21:40:40 -07:00
vmscan.c