    x86/copy_mc: Introduce copy_mc_enhanced_fast_string() · 5da8e4a6
    Dan Williams authored
    The motivations for reworking memcpy_mcsafe() are that the benefit of
    doing slow and careful copies is obviated on newer CPUs, and that the
    current opt-in list of CPUs to instrument for recovery is broken
    relative to those CPUs.  There is no need to keep an opt-in list up to
    date on an ongoing basis if pmem/dax operations are instrumented for
    recovery by default. With recovery enabled by default, the old
    "mcsafe_key" opt-in to careful copying can be made a "fragile"
    opt-out, where the "fragile" list takes steps to not consume poison
    across cachelines.
    
    The discussion with Linus made clear that the current "_mcsafe" suffix
    was imprecise to a fault. The operations that are needed by pmem/dax are
    to copy from a source address that might throw #MC to a destination that
    may write-fault, if it is a user page.
    
    So copy_to_user_mcsafe() becomes copy_mc_to_user() to indicate
    the separate precautions taken on source and destination.
    copy_mc_to_kernel() is introduced as a non-SMAP version that does not
    expect write-faults on the destination, but is still prepared to abort
    with an error code upon taking #MC.
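
    For reference, both interfaces follow the usual kernel convention of
    returning the number of bytes left uncopied, with 0 meaning complete
    success. A minimal sketch of a pmem-style consumer of
    copy_mc_to_kernel() is shown below; pmem_read_sketch() is a
    hypothetical name used only for illustration, and the declaration of
    copy_mc_to_kernel() is assumed to come from <linux/uaccess.h>:

        #include <linux/errno.h>
        #include <linux/uaccess.h>

        /* Hypothetical caller, for illustration only. */
        static int pmem_read_sketch(void *dst, const void *pmem_src,
                                    unsigned int len)
        {
                unsigned long rem;

                /*
                 * The source may signal #MC on poison consumption; a
                 * non-zero return value is the number of bytes that
                 * were left uncopied.
                 */
                rem = copy_mc_to_kernel(dst, pmem_src, len);

                return rem ? -EIO : 0;
        }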
    
    The original copy_mc_fragile() implementation had negative performance
    implications since it did not use the fast-string instruction sequence
    to perform copies. For this reason, copy_mc_to_kernel() fell back to
    plain memcpy() to preserve performance on platforms that did not
    indicate the capability to recover from machine check exceptions.
    However, that capability detection was not architectural, and now that
    some platforms can recover from fast-string consumption of memory
    errors, the memcpy() fallback causes these more capable platforms to
    fail.
    
    Introduce copy_mc_enhanced_fast_string() as the fast default
    implementation of copy_mc_to_kernel() and finalize the transition of
    copy_mc_fragile() to be a platform quirk to indicate 'copy-carefully'.
    With this in place, copy_mc_to_kernel() is fast and recovery-ready by
    default regardless of hardware capability.
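
    The selection between the implementations then reduces to a small
    dispatcher. The sketch below approximates that logic; the names
    (copy_mc_fragile_enabled, copy_mc_enhanced_fast_string()) follow the
    patch, but this is an outline of the idea rather than the literal
    diff:

        #include <linux/string.h>       /* memcpy() */
        #include <asm/cpufeature.h>     /* static_cpu_has(), X86_FEATURE_ERMS */

        /*
         * copy_mc_fragile(), copy_mc_enhanced_fast_string() and the
         * copy_mc_fragile_enabled flag are the patch's helpers, assumed
         * declared elsewhere in the file.
         *
         * Rough outline: the "fragile" quirk first, then the fast-string
         * copy on CPUs with ERMS, and plain memcpy() as the last resort.
         */
        unsigned long copy_mc_to_kernel(void *dst, const void *src,
                                        unsigned int len)
        {
                if (copy_mc_fragile_enabled)
                        /* careful, poison-aware copy on quirked CPUs */
                        return copy_mc_fragile(dst, src, len);

                if (static_cpu_has(X86_FEATURE_ERMS))
                        /* fast-string copy, recovery-ready by default */
                        return copy_mc_enhanced_fast_string(dst, src, len);

                /* no recovery expected; fall back to a plain copy */
                memcpy(dst, src, len);
                return 0;
        }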
    
    Thanks to Vivek for identifying that copy_user_generic() is not suitable
    as the copy_mc_to_user() backend since the #MC handler explicitly checks
    ex_has_fault_handler(). Thanks to the 0day robot for catching a
    performance bug in the x86/copy_mc_to_user implementation.
    
     [ bp: Add the "why" for this change from the 0/2th message, massage. ]
    
    Fixes: 92b0729c ("x86/mm, x86/mce: Add memcpy_mcsafe()")
    Reported-by: Erwin Tsaur <erwin.tsaur@intel.com>
    Reported-by: 0day robot <lkp@intel.com>
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Borislav Petkov <bp@suse.de>
    Reviewed-by: Tony Luck <tony.luck@intel.com>
    Tested-by: Erwin Tsaur <erwin.tsaur@intel.com>
    Cc: <stable@vger.kernel.org>
    Link: https://lkml.kernel.org/r/160195562556.2163339.18063423034951948973.stgit@dwillia2-desk3.amr.corp.intel.com