    KVM: x86/mmu: Split huge pages mapped by the TDP MMU when dirty logging is enabled · a3fe5dbd
    David Matlack authored
    When dirty logging is enabled without initially-all-set, try to split
    all huge pages in the memslot down to 4KiB pages so that vCPUs do not
    have to take expensive write-protection faults to split huge pages.
    
    Eager page splitting is best-effort only. This commit only adds
    support for the TDP MMU, and even there splitting may fail due to
    out-of-memory conditions. Failure to split a huge page is fine from a
    correctness standpoint because KVM always follows up splitting by
    write-protecting any remaining huge pages.
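    The best-effort flow can be sketched as a small, self-contained C
    simulation. All names here (struct slot, try_split_huge_page,
    enable_dirty_logging) are illustrative stand-ins, not KVM's actual
    API; the point is only the split-what-you-can, write-protect-the-rest
    structure:

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* A 2MiB huge page splits into 512 4KiB pages. */
    #define PAGES_PER_HUGE_PAGE 512

    /* Hypothetical memslot state, for illustration only. */
    struct slot {
        int huge_pages;       /* pages still mapped huge */
        int small_pages;      /* 4KiB pages produced by splitting */
        int write_protected;  /* huge pages left write-protected */
    };

    /* Pretend allocator budget: fails after `split_budget` splits,
     * modeling the out-of-memory case. */
    static int split_budget;

    static bool try_split_huge_page(struct slot *s)
    {
        if (split_budget == 0)
            return false;   /* allocation failed: splitting is best-effort */
        split_budget--;
        s->huge_pages--;
        s->small_pages += PAGES_PER_HUGE_PAGE;
        return true;
    }

    /* Eager splitting: split as many huge pages as memory allows, then
     * write-protect whatever is left so correctness never depends on
     * splitting having succeeded. */
    static void enable_dirty_logging(struct slot *s)
    {
        while (s->huge_pages > 0) {
            if (!try_split_huge_page(s))
                break;
        }
        /* Fallback: remaining huge pages are write-protected and will be
         * split lazily on the first write fault instead. */
        s->write_protected = s->huge_pages;
    }

    int main(void)
    {
        struct slot s = { .huge_pages = 8, .small_pages = 0,
                          .write_protected = 0 };

        split_budget = 5;   /* memory runs out after 5 splits */
        enable_dirty_logging(&s);

        assert(s.small_pages == 5 * PAGES_PER_HUGE_PAGE);
        assert(s.huge_pages == 3);
        assert(s.write_protected == 3);  /* fallback covers the rest */
        return 0;
    }
    ```

    The key design point survives the simplification: a failed split is
    never an error, because the write-protection fallback keeps dirty
    tracking correct either way.
    
    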
    
    Eager page splitting moves the cost of splitting huge pages off of the
    vCPU threads and onto the thread enabling dirty logging on the memslot.
    This is useful because:
    
     1. Splitting on the vCPU thread interrupts vCPU execution and is
        disruptive to customers, whereas splitting on VM ioctl threads can
        run in parallel with vCPU execution.
    
     2. Splitting all huge pages at once is more efficient because it does
        not require performing VM-exit handling or walking the page table for
        every 4KiB page in the memslot, and greatly reduces the amount of
        contention on the mmu_lock.
    
    For example, when running dirty_log_perf_test with 96 virtual CPUs, 1GiB
    per vCPU, and 1GiB HugeTLB memory, the time it takes vCPUs to write to
    all of their memory after dirty logging is enabled decreased by 95% from
    2.94s to 0.14s.
    
    Eager Page Splitting is over 100x more efficient than the current
    implementation of splitting on fault under the read lock. For example,
    taking the same workload as above, Eager Page Splitting reduced the CPU
    required to split all huge pages from ~270 CPU-seconds ((2.94s - 0.14s)
    * 96 vCPU threads) to only 1.55 CPU-seconds.
    
    Eager page splitting does increase the amount of time it takes to
    enable dirty logging, since it must split all huge pages up front. For
    example, the time it took to enable dirty logging in the 96GiB region
    of the aforementioned test increased from 0.001s to 1.55s.
    Reviewed-by: Peter Xu <peterx@redhat.com>
    Signed-off-by: David Matlack <dmatlack@google.com>
    Message-Id: <20220119230739.2234394-16-dmatlack@google.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>