2496afbf1e
We need a stronger barrier between releasing the lock and checking for any waiting spinners. A compiler barrier is not sufficient because the CPU's ordering rules do not prevent the read of xl->spinners from happening before the unlock assignment, as they are different memory locations. We need an explicit barrier to enforce the write-read ordering to different memory locations.

Because of this, I cannot bring up more than 4 HVM guests on one SMP machine.

[ Code and commit comments expanded -J ]

[ Impact: avoid deadlock when using Xen PV spinlocks ]

Signed-off-by: Yang Xiaowei <xiaowei.yang@intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
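To illustrate the ordering problem, here is a minimal sketch of an unlock path, not the actual patch: the struct layout and the xen_spin_unlock_slow() slow-path helper are assumptions based on the Xen PV spinlock code this message refers to, and smp_mb() is the kernel's full memory barrier.

/*
 * Minimal sketch, not the actual patch: why the unlock path needs a
 * full barrier rather than just a compiler barrier.  Struct layout
 * and xen_spin_unlock_slow() are assumed here for illustration.
 */
struct xen_spinlock {
	unsigned char lock;		/* 0 -> free; 1 -> locked */
	unsigned short spinners;	/* count of waiting CPUs */
};

static void xen_spin_unlock_slow(struct xen_spinlock *xl);	/* assumed slow path: kick a waiter */

static void xen_spin_unlock(struct xen_spinlock *xl)
{
	xl->lock = 0;			/* release the lock: a plain store */

	/*
	 * barrier() would only stop the compiler from reordering.  The
	 * CPU may still execute the load of xl->spinners before the
	 * store to xl->lock becomes visible, because store->load
	 * reordering to *different* locations is permitted.  smp_mb()
	 * enforces the write-read ordering, so a waiter that bumps
	 * spinners and then re-checks the lock cannot be missed.
	 */
	smp_mb();

	if (xl->spinners)		/* any waiters to wake? */
		xen_spin_unlock_slow(xl);
}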