    x86/KVM/VMX: Initialize the vmx_l1d_flush_pages' content · bbd637c6
    Nicolai Stange authored
    The slow path in vmx_l1d_flush() reads from vmx_l1d_flush_pages in order
    to evict the L1d cache.
    
    However, these pages are never cleared and, in theory, their data could be
    leaked.
    
    More importantly, KSM could merge a nested hypervisor's vmx_l1d_flush_pages
    to fewer than 1 << L1D_CACHE_ORDER host physical pages and this would break
    the L1d flushing algorithm: L1D on x86_64 is tagged by physical addresses.
    
    Fix this by initializing the individual vmx_l1d_flush_pages with a
    different pattern each.
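    The idea can be sketched in a standalone userspace model (not the kernel
    code itself; PAGE_SIZE, the page count, and the memset pattern below are
    assumptions modeled on the described fix): writing a distinct byte value
    into each page guarantees no two pages are byte-identical, so KSM has
    nothing to merge and each page keeps its own host physical frame.

    ```c
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096                       /* assumed 4 KiB pages */
    #define L1D_CACHE_ORDER 4                    /* assumed order; 16 pages */
    #define NR_FLUSH_PAGES (1 << L1D_CACHE_ORDER)

    /*
     * Fill each flush page with a different non-zero pattern so that no
     * two pages compare equal and KSM cannot deduplicate them.
     */
    static void init_flush_pages(uint8_t *pages)
    {
            int i;

            for (i = 0; i < NR_FLUSH_PAGES; i++)
                    memset(pages + (size_t)i * PAGE_SIZE, i + 1, PAGE_SIZE);
    }
    ```

    Any scheme that makes the pages pairwise distinct would do; a simple
    per-page constant is cheap and keeps the flush loop's reads untouched.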
    
    Rename the "empty_zp" asm constraint identifier in vmx_l1d_flush() to
    "flush_pages" to reflect this change.
    
    Fixes: a47dd5f0 ("x86/KVM/VMX: Add L1D flush algorithm")
    Signed-off-by: Nicolai Stange <nstange@suse.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    
    CVE-2018-3620
    CVE-2018-3646
    Signed-off-by: Stefan Bader <stefan.bader@canonical.com>