original_kernel/arch/x86
Yang Xiaowei 2496afbf1e xen: use stronger barrier after unlocking lock
We need a stronger barrier between releasing the lock and checking for
any waiting spinners.  A compiler barrier is not sufficient because the
CPU's ordering rules do not prevent the read of xl->spinners from being
reordered before the write that releases the lock, as they are
different memory locations.

An explicit full memory barrier is needed to enforce this write-read
ordering between different memory locations.
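
For concreteness, here is a minimal sketch of the unlock path this
describes, assuming the Xen PV spinlock layout of the time (a small
struct with lock and spinners fields) and a helper named
xen_spin_unlock_slow() to kick waiting CPUs; only the smp_mb()
placement is the point:

struct xen_spinlock {
	unsigned char lock;		/* 0 -> free; 1 -> locked */
	unsigned short spinners;	/* count of waiting CPUs */
};

static void xen_spin_unlock(struct xen_spinlock *xl)
{
	smp_wmb();		/* keep critical-region writes before the unlock */
	xl->lock = 0;		/* release the lock: a plain store */

	/*
	 * barrier() alone only constrains the compiler; the CPU may
	 * still hoist the load of xl->spinners above the store to
	 * xl->lock, since they are distinct memory locations.  A full
	 * barrier orders the store against the subsequent load.
	 */
	smp_mb();

	if (unlikely(xl->spinners))
		xen_spin_unlock_slow(xl);	/* wake any waiting CPUs */
}

Note that a write barrier (wmb/smp_wmb) would not help here: it orders
stores against stores, while the hazard is a store followed by a load.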

Without this barrier, I could not bring up more than 4 HVM guests on
one SMP machine.

[ Code and commit comments expanded -J ]

[ Impact: avoid deadlock when using Xen PV spinlocks ]

Signed-off-by: Yang Xiaowei <xiaowei.yang@intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
2009-09-09 16:38:44 -07:00
boot x86: add vmlinux.lds to targets in arch/x86/boot/compressed/Makefile 2009-08-20 16:08:58 -07:00
configs
crypto
ia32
include/asm x86, pat: Allow ISA memory range uncacheable mapping requests 2009-08-17 14:12:44 -07:00
kernel x86: Fix vSMP boot crash 2009-08-26 10:13:17 +02:00
kvm KVM: MMU: limit rmap chain length 2009-08-06 12:06:54 +03:00
lguest lguest: update commentry 2009-07-30 16:03:46 +09:30
lib x86, msr: execute on the correct CPU subset 2009-08-03 14:48:13 -07:00
math-emu
mm xen: make -fstack-protector work under Xen 2009-09-09 16:37:39 -07:00
oprofile
pci
power
vdso
video
xen xen: use stronger barrier after unlocking lock 2009-09-09 16:38:44 -07:00
Kbuild
Kconfig perf_counter, x86: Fix/improve apic fallback 2009-08-12 14:12:49 +02:00
Kconfig.cpu
Kconfig.debug
Makefile
Makefile_32.cpu