From: Greg KH on
2.6.33-stable review patch. If anyone has any objections, please let me know.


From: Avi Kivity <avi(a)>

When cr0.wp=0, we may shadow a gpte having u/s=1 and r/w=0 with an spte
having u/s=0 and r/w=1. This allows excessive access if the guest sets
cr0.wp=1 and accesses through this spte.

Fix by making cr0.wp part of the base role; we'll have different sptes for
the two cases and the problem disappears.

Signed-off-by: Avi Kivity <avi(a)>
Signed-off-by: Marcelo Tosatti <mtosatti(a)>
Signed-off-by: Greg Kroah-Hartman <gregkh(a)>
(cherry picked from commit 3dbe141595faa48a067add3e47bba3205b79d33c)
---
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/mmu.c              |    3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -193,6 +193,7 @@ union kvm_mmu_page_role {
 		unsigned invalid:1;
 		unsigned cr4_pge:1;
 		unsigned nxe:1;
+		unsigned cr0_wp:1;
 	};

--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -227,7 +227,7 @@ void kvm_mmu_set_mask_ptes(u64 user_mask
 
-static int is_write_protection(struct kvm_vcpu *vcpu)
+static bool is_write_protection(struct kvm_vcpu *vcpu)
 {
 	return vcpu->arch.cr0 & X86_CR0_WP;
 }
@@ -2448,6 +2448,7 @@ static int init_kvm_softmmu(struct kvm_v
 		r = paging32_init_context(vcpu);
 
 	vcpu->arch.mmu.base_role.glevels = vcpu->arch.mmu.root_level;
+	vcpu->arch.mmu.base_role.cr0_wp = is_write_protection(vcpu);
 
 	return r;
