[PATCH v3 11/11] KVM: MMU: trace pte prefetch
Trace pte prefetch; this can help us improve the prefetch path's performance Signed-off-by: Xiao Guangrong <xiaoguangrong(a)cn.fujitsu.com> --- arch/x86/kvm/mmu.c | 45 +++++++++++++++++++++++++++++++++---------- arch/x86/kvm/mmutrace.h | 33 ++++++++++++++++++++++++++++++++ arch/x86/kvm/paging_tmpl.h ... 30 Jun 2010 04:20
[PATCH v3 8/11] KVM: MMU: introduce pte_prefetch_topup_memory_cache()
Introduce this function to top up the prefetch cache Signed-off-by: Xiao Guangrong <xiaoguangrong(a)cn.fujitsu.com> --- arch/x86/kvm/mmu.c | 25 +++++++++++++++++++++---- 1 files changed, 21 insertions(+), 4 deletions(-) diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index a0c5c31..6673484 100644 --- a/arc... 30 Jun 2010 04:20
[PATCH 10/11] KVM: MMU: combine guest pte read between walk and pte prefetch
Combine the guest pte reads done during the guest pte walk and during pte prefetch Signed-off-by: Xiao Guangrong <xiaoguangrong(a)cn.fujitsu.com> --- arch/x86/kvm/paging_tmpl.h | 48 ++++++++++++++++++++++++++++++------------- 1 files changed, 33 insertions(+), 15 deletions(-) diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/k... 30 Jun 2010 04:20
[PATCH v3 9/11] KVM: MMU: prefetch ptes when intercepted guest #PF
Support prefetching ptes when intercepting a guest #PF, to avoid #PFs on later accesses. If we hit any failure in the prefetch path, we exit it and do not try the other ptes, so the fault path does not become heavy. Note: this speculation marks the page dirty even though it is not really accessed; the same issue exists in other speculative paths... 30 Jun 2010 04:20
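The abort-on-first-failure policy described in this patch summary can be sketched as follows. This is a toy model, not the actual KVM code: the function names, the batch size, and the failure signalling are all assumptions made for illustration.

```c
#include <stdbool.h>

#define PTE_PREFETCH_NUM 8  /* assumed batch size, for illustration only */

/* Stand-in for speculatively mapping one pte; returns false on any
 * failure (e.g. a pfn lookup that would have to sleep). */
static bool prefetch_one_pte(unsigned long gfn, const bool *ok_map)
{
	return ok_map[gfn];
}

/* Prefetch up to PTE_PREFETCH_NUM ptes starting at the faulting gfn,
 * but bail out on the first failure instead of retrying the rest, so
 * the intercepted-#PF path stays light. Returns the prefetched count. */
static int prefetch_ptes(unsigned long fault_gfn, const bool *ok_map)
{
	int done = 0;
	unsigned long gfn;

	for (gfn = fault_gfn; gfn < fault_gfn + PTE_PREFETCH_NUM; gfn++) {
		if (!prefetch_one_pte(gfn, ok_map))
			break;	/* give up rather than become a heavy path */
		done++;
	}
	return done;
}
```

The point of the early `break` is exactly what the summary states: a failed speculative mapping ends the whole batch rather than being skipped over.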
[PATCH v3 7/11] KVM: MMU: introduce gfn_to_hva_many() function
This function returns not only the gfn's hva but also the number of pages after @gfn remaining in the slot. It is used in a later patch. Signed-off-by: Xiao Guangrong <xiaoguangrong(a)cn.fujitsu.com> --- include/linux/kvm_host.h | 1 + virt/kvm/kvm_main.c | 13 ++++++++++++- 2 files changed, 13 insertions(+), 1 del... 30 Jun 2010 04:20
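The gfn-to-hva arithmetic this summary describes can be modeled in a few lines. This is a simplified sketch based only on the description above: the slot fields and the exact signature are assumptions, and the real function lives in virt/kvm/kvm_main.c with KVM's own types.

```c
#include <stdint.h>

#define PAGE_SHIFT 12	/* 4 KiB pages, as on x86 */

/* Minimal stand-in for a KVM memory slot. */
struct kvm_memory_slot {
	unsigned long base_gfn;		/* first gfn covered by the slot */
	unsigned long npages;		/* slot length in pages */
	unsigned long userspace_addr;	/* hva of the slot's first page */
};

/* Toy gfn_to_hva_many(): return @gfn's hva and, via @nr_pages, how
 * many pages the slot still covers starting at @gfn (so a prefetcher
 * knows how far it may read without leaving the slot). */
static unsigned long gfn_to_hva_many(const struct kvm_memory_slot *slot,
				     unsigned long gfn,
				     unsigned long *nr_pages)
{
	if (nr_pages)
		*nr_pages = slot->base_gfn + slot->npages - gfn;
	return slot->userspace_addr + ((gfn - slot->base_gfn) << PAGE_SHIFT);
}
```

Under these assumptions, a gfn four pages into a 16-page slot yields the slot hva plus four page sizes, with twelve pages reported as remaining.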
x86: enlightenment for ticket spin locks - improve yield behavior on Xen
On Tue, 2010-06-29 at 15:35 +0100, Jan Beulich wrote: The (only) additional overhead this introduces for native execution is the writing of the owning CPU in the lock acquire paths. Uhm, and growing the size of spinlock_t to 6 bytes (or 8 bytes when aligned) when NR_CPUS > 256. -- To unsubscribe from thi... 30 Jun 2010 06:29
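The size concern raised in this reply can be made concrete with a sketch. The layouts below are illustrative only, not the real arch_spinlock_t: with NR_CPUS > 256 the ticket fields must be 16-bit, so adding a 16-bit owning-CPU field grows the lock from 4 to 6 bytes (and to 8 once 8-byte alignment is imposed).

```c
#include <stdint.h>

typedef uint16_t __ticket_t;	/* NR_CPUS > 256 forces 16-bit tickets */

/* Plain ticket lock: two ticket counters, 4 bytes total. */
struct plain_ticket_lock {
	__ticket_t head;
	__ticket_t tail;
};

/* Sketch of the proposed "enlightened" variant: the owning CPU is
 * recorded on acquire so a hypervisor callout can direct its yield,
 * growing the structure to 6 bytes (8 if 8-byte aligned). */
struct enlightened_ticket_lock {
	__ticket_t head;
	__ticket_t tail;
	__ticket_t owner;	/* written in the lock acquire path */
};
```

This is exactly the native-execution cost being debated: one extra store on acquire, and two extra bytes per lock.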
[PATCH v3 3/11] KVM: MMU: fix direct sp's access corrupted
If the mapping is writable but the dirty flag is not set, we will find the read-only direct sp and set up the mapping; then, if a write #PF occurs, we mark this mapping writable in the read-only direct sp. Now other genuinely read-only mappings can happily write through it without a #PF, which may break the guest's COW. Fixed by... 30 Jun 2010 04:20
[PATCH v3 1/11] KVM: MMU: fix writable sync sp mapping
While we sync many unsync sps at one time (in mmu_sync_children()), we may map the spte writable, which is dangerous if one unsync sp's mapped gfn is another unsync page's gfn. For example: SP1.pte[0] = P, SP2.gfn's pfn = P [SP1.pte[0] = SP2.gfn's pfn]. First, we write-protected SP1 and SP2, but SP1 and SP2 ... 30 Jun 2010 04:20
x86: enlightenment for ticket spin locks - base implementation
On Tue, 2010-06-29 at 15:31 +0100, Jan Beulich wrote: Add optional (alternative-instructions-based) callout hooks to the contended ticket lock and ticket unlock paths, to allow hypervisor-specific code to be used for reducing/eliminating the bad effects ticket locks have on performance when running virt... 30 Jun 2010 08:40
[PATCH v3 6/11] KVM: MMU: introduce gfn_to_pfn_atomic() function
Introduce gfn_to_pfn_atomic(); it is the fast path and can be used in atomic context. A later patch will use it. Signed-off-by: Xiao Guangrong <xiaoguangrong(a)cn.fujitsu.com> --- arch/x86/mm/gup.c | 2 ++ include/linux/kvm_host.h | 1 + virt/kvm/kvm_main.c | 32 +++++++++++++++++++++++++------- ... 30 Jun 2010 04:20
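The "fast path only" semantics implied by this summary can be sketched as below. This is a toy model of the pattern, not KVM's implementation: in an atomic context the lookup may not sleep, so on a fast-path miss the function reports failure instead of falling back to a slow, possibly sleeping path. All names and the failure sentinel are illustrative assumptions.

```c
#include <stdbool.h>

#define BAD_PFN ((unsigned long)-1)	/* assumed failure sentinel */

/* Stand-in for a non-sleeping, get_user_pages_fast-style lookup:
 * succeeds only if the page is already present in @fast_map. */
static bool fast_gup(unsigned long gfn, unsigned long *pfn,
		     const unsigned long *fast_map)
{
	if (fast_map[gfn] == 0)
		return false;	/* would need the sleeping slow path */
	*pfn = fast_map[gfn];
	return true;
}

/* Toy gfn_to_pfn_atomic(): safe to call in atomic context because it
 * never falls back to a path that may sleep; the caller must handle
 * BAD_PFN (e.g. by aborting a speculative prefetch). */
static unsigned long gfn_to_pfn_atomic(unsigned long gfn,
				       const unsigned long *fast_map)
{
	unsigned long pfn;

	if (fast_gup(gfn, &pfn, fast_map))
		return pfn;
	return BAD_PFN;
}
```

This pairs naturally with the prefetch patch in this series: a prefetch is speculative anyway, so failing fast is cheaper than sleeping.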