12 Dec, 2016 · 34 commits
  11 Dec, 2016 · 6 commits
    • Linux 4.9 · 69973b83
      Linus Torvalds authored
    • Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus · 2e4333c1
      Linus Torvalds authored
      Pull MIPS fixes from Ralf Baechle:
       "Two more MIPS fixes for 4.9:
      
         - RTC: Return -ENODEV so an external RTC will be tried [the
           weak-default pattern behind this is sketched after this entry]
      
         - Fix mask of GPE frequency
      
        These two have been tested on Imagination's automated test system and
        also both received positive reviews on the linux-mips mailing list"
      
      * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
        MIPS: Lantiq: Fix mask of GPE frequency
        MIPS: Return -ENODEV from weak implementation of rtc_mips_set_time
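      The -ENODEV change builds on a common kernel pattern: a __weak default
      implementation that a platform may override with a strong definition of
      the same symbol. Below is a minimal userspace sketch of that pattern;
      the function signature is assumed for illustration, not copied from the
      MIPS tree:

        #include <errno.h>

        /* Weak default: report "no device" so generic code falls back to
         * an external RTC.  A platform overrides this by providing a
         * strong definition of the same symbol. */
        int __attribute__((weak)) rtc_mips_set_time(unsigned long sec)
        {
                (void)sec;
                return -ENODEV; /* no built-in RTC: let an external one be tried */
        }

        int main(void)
        {
                return rtc_mips_set_time(0) == -ENODEV ? 0 : 1;
        }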
    • netfilter: nft_counter: rework atomic dump and reset · d84701ec
      Pablo Neira authored
      Dump and reset doesn't work unless cmpxchg64() is used from both the
      packet and control plane paths; that approach would be slow, though.
      Instead, use a percpu seqcount to fetch counters consistently, then
      subtract bytes and packets in case a reset was requested.
      
      The CPU running the reset code is guaranteed to own these stats
      exclusively; the counters have to be turned into signed 64-bit, though,
      so that stats updates during a reset don't go wrong on underflow.
      
      This patch is based on an original sketch from Eric Dumazet; a
      simplified userspace illustration of the scheme follows this entry.
      
      Fixes: 43da04a5 ("netfilter: nf_tables: atomic dump and reset for stateful objects")
      Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
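      A minimal, single-threaded userspace sketch of the scheme described
      above. All names are illustrative: the real patch uses the kernel's
      per-cpu seqcount_t with proper memory barriers, not this simplified
      C11 rendition.

        #include <stdatomic.h>
        #include <stdint.h>
        #include <stdio.h>

        struct cnt {
                atomic_uint seq;  /* odd while the packet path is writing */
                int64_t bytes;    /* signed: a reset may transiently underflow */
                int64_t packets;
        };

        static void cnt_update(struct cnt *c, int64_t b) /* packet path */
        {
                atomic_fetch_add_explicit(&c->seq, 1, memory_order_release);
                c->bytes += b;
                c->packets += 1;
                atomic_fetch_add_explicit(&c->seq, 1, memory_order_release);
        }

        static void cnt_fetch(struct cnt *c, int64_t *b, int64_t *p) /* dump path */
        {
                unsigned int s;

                do {    /* retry until a stable, even sequence is observed */
                        s = atomic_load_explicit(&c->seq, memory_order_acquire);
                        *b = c->bytes;
                        *p = c->packets;
                } while ((s & 1) ||
                         s != atomic_load_explicit(&c->seq, memory_order_acquire));
        }

        int main(void)
        {
                struct cnt c = { 0 };
                int64_t b, p;

                cnt_update(&c, 1500);
                cnt_fetch(&c, &b, &p);
                /* dump-and-reset: subtract what was reported rather than
                 * zeroing, so concurrent packet-path updates are not lost */
                c.bytes -= b;
                c.packets -= p;
                printf("reported %lld bytes, %lld packets\n",
                       (long long)b, (long long)p);
                return 0;
        }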
    • x86/paravirt: Fix bool return type for PVOP_CALL() · 11f254db
      Peter Zijlstra authored
      Commit:
      
        3cded417 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
      
      introduced a paravirt op with a bool return type [*].
      
      It turns out that the PVOP_CALL*() macros miscompile when rettype is
      bool. Code that looked like:
      
         83 ef 01                sub    $0x1,%edi
         ff 15 32 a0 d8 00       callq  *0xd8a032(%rip)        # ffffffff81e28120 <pv_lock_ops+0x20>
         84 c0                   test   %al,%al
      
      ended up looking like so after PVOP_CALL1() was applied:
      
         83 ef 01                sub    $0x1,%edi
         48 63 ff                movslq %edi,%rdi
         ff 14 25 20 81 e2 81    callq  *0xffffffff81e28120
         48 85 c0                test   %rax,%rax
      
      Note how it tests the whole of %rax, even though a typical bool return
      function only sets %al, like:
      
        0f 95 c0                setne  %al
        c3                      retq
      
      This is because ____PVOP_CALL() does:
      
      		__ret = (rettype)__eax;
      
      and while regular integer type casts truncate the result, a cast to
      bool tests for any !0 value. Fix this by explicitly truncating to
      sizeof(rettype) before casting (a standalone demonstration follows
      this entry).
      
      [*] The actual bug should've been exposed in commit:
            446f3dc8 ("locking/core, x86/paravirt: Implement vcpu_is_preempted(cpu) for KVM and Xen guests")
          but that didn't properly implement the paravirt call.
      Reported-by: kernel test robot <xiaolong.ye@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Anvin <hpa@zytor.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 3cded417 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
      Link: http://lkml.kernel.org/r/20161208154349.346057680@infradead.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
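      A standalone demonstration of the cast behaviour described above; the
      register value here is made up for illustration:

        #include <stdbool.h>
        #include <stdio.h>

        int main(void)
        {
                /* simulate %rax after a bool-returning call: %al is zero,
                 * but the upper bits still hold leftover data */
                unsigned long eax = 0x100;

                bool wrong = (bool)eax;                /* tests any !0 bit: true  */
                bool right = (bool)(unsigned char)eax; /* truncate first:  false */

                printf("wrong=%d right=%d\n", wrong, right); /* wrong=1 right=0 */
                return 0;
        }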
    • x86/paravirt: Fix native_patch() · 45dbea5f
      Peter Zijlstra authored
      While chasing a regression I noticed we potentially patch the wrong
      code in native_patch().
      
      If we do not select the native code sequence, we must use the default
      patcher instead of falling through the switch case (a compilable
      sketch follows this entry).
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alok Kataria <akataria@vmware.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Peter Anvin <hpa@zytor.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: kernel test robot <xiaolong.ye@intel.com>
      Fixes: 3cded417 ("x86/paravirt: Optimize native pv_lock_ops.vcpu_is_preempted()")
      Link: http://lkml.kernel.org/r/20161208154349.270616999@infradead.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
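      A compilable sketch of the control-flow fix described above; the
      function and label names are illustrative, not the kernel's:

        #include <stdio.h>

        static const char *patch_op(int type, int native_a, int native_b)
        {
                switch (type) {
                case 1: /* op A */
                        if (native_a)
                                return "patched with native A";
                        /* the fix: without this goto, control fell through
                         * into case 2 and op A could be patched with op B's
                         * native sequence */
                        goto patch_default;
                case 2: /* op B */
                        if (native_b)
                                return "patched with native B";
                        goto patch_default;
                default:
        patch_default:
                        return "default patcher used";
                }
        }

        int main(void)
        {
                /* op A is paravirtualized while op B happens to be native:
                 * the old fall-through would have claimed "native B" here */
                printf("%s\n", patch_op(1, 0, 1));
                return 0;
        }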
    • 6f387515
      Ingo Molnar authored