RCU free the struct inode. This will allow:

- Subsequent store-free path walking patch. The inode must be consulted
  for permissions when walking, so an RCU inode reference is a must.
- sb_inode_list_lock to be moved inside i_lock, because sb list walkers
  who want to take i_lock no longer need to take sb_inode_list_lock to
  walk the list in the first place. This will simplify and optimize
  locking.
- Removal of some nested trylock loops in dcache code.
- Potential simplification in VM land: there is no need to take the
  page lock to follow page->mapping.

The downside is the performance cost of using RCU. In a simple
creat/unlink microbenchmark, performance drops by about 10% due to the
inability to reuse cache-hot slab objects. As iterations increase and
RCU freeing starts kicking in, this rises to about 20%. In cases where
inode lifetimes are longer (i.e. many inodes may be allocated during
the average life span of a single inode), much of this cache reuse does
not apply anyway, so the regression caused by this patch is smaller.

The cache-hot regression could largely be avoided by using
SLAB_DESTROY_BY_RCU, but that adds some complexity to list walking and
store-free path walking, so I prefer to implement it at a later date,
if it is shown to be a win in real situations. I haven't found a
regression in any non-micro benchmark, so I doubt it will be a problem.

Signed-off-by: Nick Piggin <npiggin@kernel.dk>
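For readers unfamiliar with the mechanism, the core of the change is to
defer the final kmem_cache_free() of the inode past an RCU grace period
rather than freeing it synchronously. A minimal sketch of that shape is
below; it assumes a struct rcu_head i_rcu member in struct inode and
elides the rest of the destruction path, so the actual patch differs in
detail:

#include <linux/fs.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Runs after a grace period: no RCU reader can still see this inode. */
static void i_callback(struct rcu_head *head)
{
	struct inode *inode = container_of(head, struct inode, i_rcu);

	kmem_cache_free(inode_cachep, inode);
}

static void destroy_inode(struct inode *inode)
{
	/* ... per-filesystem destruction, security teardown, etc. ... */

	/*
	 * Defer the free past a grace period so that lock-free walkers
	 * still inside an RCU read-side critical section cannot touch
	 * freed memory.
	 */
	call_rcu(&inode->i_rcu, i_callback);
}

Walkers then bracket their inode accesses with rcu_read_lock() and
rcu_read_unlock(), which guarantees the memory stays valid for the
duration of the critical section; they must still revalidate that the
inode is live and is the one they expect, since the object may be in
the process of being torn down.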