1. 27 Jul, 2016 40 commits
    • Scott Bauer's avatar
      HID: hiddev: validate num_values for HIDIOCGUSAGES, HIDIOCSUSAGES commands · 300851ff
      Scott Bauer authored
      commit 93a2001b upstream.
      
      This patch validates the num_values parameter from userland during the
      HIDIOCGUSAGES and HIDIOCSUSAGES commands. Previously, if the report id was set
      to HID_REPORT_ID_UNKNOWN, we would fail to validate the num_values parameter
      leading to a heap overflow.
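
      A minimal sketch of the kind of check this adds (assuming the ioctl
      handler's usual locals - uref, uref_multi and field - and not claiming
      to be the exact patch):

          /* Reject a user-controlled count before it is used to index
           * into the report field's value array. */
          if (uref_multi->num_values > HID_MAX_MULTI_USAGES ||
              uref->usage_index + uref_multi->num_values > field->report_count)
                  return -EINVAL;
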
      Signed-off-by: Scott Bauer <sbauer@plzdonthack.me>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      300851ff
    • Oliver Neukum's avatar
      HID: elo: kill not flush the work · 2d7a2ff1
      Oliver Neukum authored
      commit ed596a4a upstream.
      
      Flushing a work that reschedules itself is not a sensible operation. It needs
      to be killed. Failure to do so leads to a kernel panic in the timer code.
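
      As a hedged illustration of the pattern (names here are made up, not
      the driver's actual code): work that re-queues itself must be
      cancelled on teardown, because flushing it only waits for one run and
      then lets it re-arm.

          static void poll_work_fn(struct work_struct *work)
          {
                  struct delayed_work *dw = to_delayed_work(work);

                  /* ... poll the hardware ... */
                  schedule_delayed_work(dw, msecs_to_jiffies(100)); /* re-arms */
          }

          static void device_teardown(struct delayed_work *dw)
          {
                  /* flush_delayed_work(dw) would let the work re-queue itself;
                   * cancel_delayed_work_sync() kills it and waits for it. */
                  cancel_delayed_work_sync(dw);
          }
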
      Signed-off-by: Oliver Neukum <ONeukum@suse.com>
      Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      2d7a2ff1
    • Quentin Casasnovas's avatar
      KVM: nVMX: VMX instructions: fix segment checks when L1 is in long mode. · 44dd5cec
      Quentin Casasnovas authored
      commit ff30ef40 upstream.
      
      I couldn't get Xen to boot an L2 HVM when it was nested under KVM - it was
      getting a GP(0) on a rather unspecial vmread from Xen:
      
           (XEN) ----[ Xen-4.7.0-rc  x86_64  debug=n  Not tainted ]----
           (XEN) CPU:    1
           (XEN) RIP:    e008:[<ffff82d0801e629e>] vmx_get_segment_register+0x14e/0x450
           (XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor (d1v0)
           (XEN) rax: ffff82d0801e6288   rbx: ffff83003ffbfb7c   rcx: fffffffffffab928
           (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff83000bdd0000
           (XEN) rbp: ffff83000bdd0000   rsp: ffff83003ffbfab0   r8:  ffff830038813910
           (XEN) r9:  ffff83003faf3958   r10: 0000000a3b9f7640   r11: ffff83003f82d418
           (XEN) r12: 0000000000000000   r13: ffff83003ffbffff   r14: 0000000000004802
           (XEN) r15: 0000000000000008   cr0: 0000000080050033   cr4: 00000000001526e0
           (XEN) cr3: 000000003fc79000   cr2: 0000000000000000
           (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
           (XEN) Xen code around <ffff82d0801e629e> (vmx_get_segment_register+0x14e/0x450):
           (XEN)  00 00 41 be 02 48 00 00 <44> 0f 78 74 24 08 0f 86 38 56 00 00 b8 08 68 00
           (XEN) Xen stack trace from rsp=ffff83003ffbfab0:
      
           ...
      
           (XEN) Xen call trace:
           (XEN)    [<ffff82d0801e629e>] vmx_get_segment_register+0x14e/0x450
           (XEN)    [<ffff82d0801f3695>] get_page_from_gfn_p2m+0x165/0x300
           (XEN)    [<ffff82d0801bfe32>] hvmemul_get_seg_reg+0x52/0x60
           (XEN)    [<ffff82d0801bfe93>] hvm_emulate_prepare+0x53/0x70
           (XEN)    [<ffff82d0801ccacb>] handle_mmio+0x2b/0xd0
           (XEN)    [<ffff82d0801be591>] emulate.c#_hvm_emulate_one+0x111/0x2c0
           (XEN)    [<ffff82d0801cd6a4>] handle_hvm_io_completion+0x274/0x2a0
           (XEN)    [<ffff82d0801f334a>] __get_gfn_type_access+0xfa/0x270
           (XEN)    [<ffff82d08012f3bb>] timer.c#add_entry+0x4b/0xb0
           (XEN)    [<ffff82d08012f80c>] timer.c#remove_entry+0x7c/0x90
           (XEN)    [<ffff82d0801c8433>] hvm_do_resume+0x23/0x140
           (XEN)    [<ffff82d0801e4fe7>] vmx_do_resume+0xa7/0x140
           (XEN)    [<ffff82d080164aeb>] context_switch+0x13b/0xe40
           (XEN)    [<ffff82d080128e6e>] schedule.c#schedule+0x22e/0x570
           (XEN)    [<ffff82d08012c0cc>] softirq.c#__do_softirq+0x5c/0x90
           (XEN)    [<ffff82d0801602c5>] domain.c#idle_loop+0x25/0x50
           (XEN)
           (XEN)
           (XEN) ****************************************
           (XEN) Panic on CPU 1:
           (XEN) GENERAL PROTECTION FAULT
           (XEN) [error_code=0000]
           (XEN) ****************************************
      
      Tracing my host KVM showed it was the one injecting the GP(0) when
      emulating the VMREAD and checking the destination segment permissions in
      get_vmx_mem_address():
      
           3)               |    vmx_handle_exit() {
           3)               |      handle_vmread() {
           3)               |        nested_vmx_check_permission() {
           3)               |          vmx_get_segment() {
           3)   0.074 us    |            vmx_read_guest_seg_base();
           3)   0.065 us    |            vmx_read_guest_seg_selector();
           3)   0.066 us    |            vmx_read_guest_seg_ar();
           3)   1.636 us    |          }
           3)   0.058 us    |          vmx_get_rflags();
           3)   0.062 us    |          vmx_read_guest_seg_ar();
           3)   3.469 us    |        }
           3)               |        vmx_get_cs_db_l_bits() {
           3)   0.058 us    |          vmx_read_guest_seg_ar();
           3)   0.662 us    |        }
           3)               |        get_vmx_mem_address() {
           3)   0.068 us    |          vmx_cache_reg();
           3)               |          vmx_get_segment() {
           3)   0.074 us    |            vmx_read_guest_seg_base();
           3)   0.068 us    |            vmx_read_guest_seg_selector();
           3)   0.071 us    |            vmx_read_guest_seg_ar();
           3)   1.756 us    |          }
           3)               |          kvm_queue_exception_e() {
           3)   0.066 us    |            kvm_multiple_exception();
           3)   0.684 us    |          }
           3)   4.085 us    |        }
           3)   9.833 us    |      }
           3) + 10.366 us   |    }
      
      Cross-checking the KVM/VMX VMREAD emulation code with the Intel Software
      Developer Manual Volume 3C - "VMREAD - Read Field from Virtual-Machine
      Control Structure", I found that we're enforcing that the destination
      operand is NOT located in a read-only data segment or any code segment when
      the L1 is in long mode - BUT that check should only happen when it is in
      protected mode.
      
      Shuffling the code a bit to make our emulation follow the specification
      allows me to boot a Xen dom0 in a nested KVM and start HVM L2 guests
      without problems.
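
      A rough sketch of the resulting logic in get_vmx_mem_address()
      (simplified and not verbatim; s is the destination segment, ret the
      computed address and exn the exception flag in that function):

          if (is_long_mode(vcpu)) {
                  /* Long mode: only the canonical-address check applies. */
                  exn = is_noncanonical_address(*ret);
          } else if (is_protmode(vcpu)) {
                  /* Protected mode: #GP(0) if the destination is a code
                   * segment or a read-only data segment. */
                  exn = (s.type & 8) || !(s.type & 2);
          }
          if (exn) {
                  kvm_queue_exception_e(vcpu, GP_VECTOR, 0);
                  return 1;
          }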
      
      Fixes: f9eb4af6 ("KVM: nVMX: VMX instructions: add checks for #GP/#SS exceptions")
      Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Eugene Korenevsky <ekorenevsky@gmail.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      44dd5cec
    • Xiubo Li's avatar
      kvm: Fix irq route entries exceeding KVM_MAX_IRQ_ROUTES · 54f87e16
      Xiubo Li authored
      commit caf1ff26 upstream.
      
      Recently, we experienced a guest crash with 8 cores and 3 disks,
      with qemu error logs as below:
      
      qemu-system-x86_64: /build/qemu-2.0.0/kvm-all.c:984:
      kvm_irqchip_commit_routes: Assertion `ret == 0' failed.
      
      We then found a patch (bdf026317d) in the qemu tree which was said
      to fix this bug.
      
      Executing the following script will reproduce the BUG quickly:
      
      irq_affinity.sh
      ========================================================================
      
      vda_irq_num=25
      vdb_irq_num=27
      while [ 1 ]
      do
          for irq in {1,2,4,8,10,20,40,80}
              do
                  echo $irq > /proc/irq/$vda_irq_num/smp_affinity
                  echo $irq > /proc/irq/$vdb_irq_num/smp_affinity
                  dd if=/dev/vda of=/dev/zero bs=4K count=100 iflag=direct
                  dd if=/dev/vdb of=/dev/zero bs=4K count=100 iflag=direct
              done
      done
      ========================================================================
      
      The following qemu log was added to the qemu code and is displayed when
      this bug is reproduced:
      
      kvm_irqchip_commit_routes: max gsi: 1008, nr_allocated_irq_routes: 1024,
      irq_routes->nr: 1024, gsi_count: 1024.
      
      That is to say, when irq_routes->nr == 1024 there are 1024 routing
      entries, but the kernel code returns -EINVAL whenever routes->nr >= 1024.
      
      Here nr is the number of routing entries, which lies in
      [1 ~ KVM_MAX_IRQ_ROUTES]; it is not an index in [0 ~ KVM_MAX_IRQ_ROUTES - 1].
      
      This patch fixes the BUG above.
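
      In other words, the bounds check has to treat nr as a count rather
      than an index; a sketch of the corrected comparison in the
      KVM_SET_GSI_ROUTING ioctl path (simplified):

          /* nr counts entries, so a completely full table of
           * KVM_MAX_IRQ_ROUTES entries is still valid. */
          if (routing.nr > KVM_MAX_IRQ_ROUTES)   /* was: >= */
                  return -EINVAL;
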
      Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
      Signed-off-by: Wei Tang <tangwei@cmss.chinamobile.com>
      Signed-off-by: Zhang Zhuoyu <zhangzhuoyu@cmss.chinamobile.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      54f87e16
    • Dan Carpenter's avatar
      KEYS: potential uninitialized variable · 398051f2
      Dan Carpenter authored
      commit 38327424 upstream.
      
      If __key_link_begin() failed then "edit" would be uninitialized.  I've
      added a check to fix that.
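
      A hedged sketch of the pattern being fixed (simplified, not the exact
      keyring code): the edit pointer starts out NULL and the matching
      __key_link_end() is only called when __key_link_begin() succeeded.

          struct assoc_array_edit *edit = NULL;
          int link_ret;

          link_ret = __key_link_begin(keyring, &key->index_key, &edit);

          /* ... work that may or may not end up using the link ... */

          if (link_ret == 0)
                  __key_link_end(keyring, &key->index_key, edit);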
      
      This allows a random user to crash the kernel, though it's quite
      difficult to achieve.  There are three ways it can be done as the user
      would have to cause an error to occur in __key_link():
      
       (1) Cause the kernel to run out of memory.  In practice, this is difficult
           to achieve without ENOMEM cropping up elsewhere and aborting the
           attempt.
      
       (2) Revoke the destination keyring between the keyring ID being looked up
           and it being tested for revocation.  In practice, this is difficult to
           time correctly because the KEYCTL_REJECT function can only be used
           from the request-key upcall process.  Further, users can only make use
           of what's in /sbin/request-key.conf, though this does include a
           rejection debugging test - which means that the destination keyring
           has to be the caller's session keyring in practice.
      
       (3) Have just enough key quota available to create a key, a new session
           keyring for the upcall and a link in the session keyring, but not then
           sufficient quota to create a link in the nominated destination keyring
           so that it fails with EDQUOT.
      
      The bug can be triggered using option (3) above using something like the
      following:
      
      	echo 80 >/proc/sys/kernel/keys/root_maxbytes
      	keyctl request2 user debug:fred negate @t
      
      The above sets the quota to something much lower (80) to make the bug
      easier to trigger, but this is dependent on the system.  Note also that
      the name of the keyring created contains a random number that may be
      between 1 and 10 characters in size, so may throw the test off by
      changing the amount of quota used.
      
      Assuming the failure occurs, something like the following will be seen:
      
      	kfree_debugcheck: out of range ptr 6b6b6b6b6b6b6b68h
      	------------[ cut here ]------------
      	kernel BUG at ../mm/slab.c:2821!
      	...
      	RIP: 0010:[<ffffffff811600f9>] kfree_debugcheck+0x20/0x25
      	RSP: 0018:ffff8804014a7de8  EFLAGS: 00010092
      	RAX: 0000000000000034 RBX: 6b6b6b6b6b6b6b68 RCX: 0000000000000000
      	RDX: 0000000000040001 RSI: 00000000000000f6 RDI: 0000000000000300
      	RBP: ffff8804014a7df0 R08: 0000000000000001 R09: 0000000000000000
      	R10: ffff8804014a7e68 R11: 0000000000000054 R12: 0000000000000202
      	R13: ffffffff81318a66 R14: 0000000000000000 R15: 0000000000000001
      	...
      	Call Trace:
      	  kfree+0xde/0x1bc
      	  assoc_array_cancel_edit+0x1f/0x36
      	  __key_link_end+0x55/0x63
      	  key_reject_and_link+0x124/0x155
      	  keyctl_reject_key+0xb6/0xe0
      	  keyctl_negate_key+0x10/0x12
      	  SyS_keyctl+0x9f/0xe7
      	  do_syscall_64+0x63/0x13a
      	  entry_SYSCALL64_slow_path+0x25/0x25
      
      Fixes: f70e2e06 ('KEYS: Do preallocation for __key_link()')
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      398051f2
    • Vineet Gupta's avatar
      f06a5a01
    • Martin KaFai Lau's avatar
      ipv6: Fix mem leak in rt6i_pcpu · 9c458a86
      Martin KaFai Lau authored
      [ Upstream commit 903ce4ab ]
      
      It was first reported and reproduced by Petr (thanks!) in
      https://bugzilla.kernel.org/show_bug.cgi?id=119581
      
      free_percpu(rt->rt6i_pcpu) used to always happen in ip6_dst_destroy().
      
      However, after fixing a deadlock bug in
      commit 9c7370a1 ("ipv6: Fix a potential deadlock when creating pcpu rt"),
      free_percpu() is not called before setting non_pcpu_rt->rt6i_pcpu to NULL.
      
      It is worth to note that rt6i_pcpu is protected by table->tb6_lock.
      
      kmemleak somehow did not report it.  We nailed it down by
      observing the pcpu entries in /proc/vmallocinfo (first suggested
      by Hannes, thanks!).
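
      A sketch of the missing release (simplified; the real change sits in
      the fib6 paths that used to just clear the pointer, and runs under
      tb6_lock):

          /* Release the per-cpu routes before the only reference to them
           * is overwritten. */
          if (non_pcpu_rt->rt6i_pcpu) {
                  free_percpu(non_pcpu_rt->rt6i_pcpu);
                  non_pcpu_rt->rt6i_pcpu = NULL;
          }
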
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Fixes: 9c7370a1 ("ipv6: Fix a potential deadlock when creating pcpu rt")
      Reported-by: Petr Novopashenniy <pety@rusnet.ru>
      Tested-by: Petr Novopashenniy <pety@rusnet.ru>
      Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Petr Novopashenniy <pety@rusnet.ru>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      9c458a86
    • Bjørn Mork's avatar
      cdc_ncm: workaround for EM7455 "silent" data interface · 61f602d8
      Bjørn Mork authored
      [ Upstream commit c086e709 ]
      
      Several Lenovo users have reported problems with their Sierra
      Wireless EM7455 modem. The driver has loaded successfully and
      the MBIM management channel has appeared to work, including
      establishing a connection to the mobile network. But no frames
      have been received over the data interface.
      
      The problem affects all EM7455 and MC7455, and is assumed to
      affect other modems based on the same Qualcomm chipset and
      baseband firmware.
      
      Testing narrowed the problem down to what seems to be a
      firmware timing bug during initialization. Adding a short sleep
      while probing is sufficient to make the problem disappear.
      Experiments have shown that 1-2 ms is too little to have any
      effect, while 10-20 ms is enough to reliably succeed.
      Reported-by: Stefan Armbruster <ml001@armbruster-it.de>
      Reported-by: Ralph Plawetzki <ralph@purejava.org>
      Reported-by: Andreas Fett <andreas.fett@secunet.com>
      Reported-by: Rasmus Lerdorf <rasmus@lerdorf.com>
      Reported-by: Samo Ratnik <samo.ratnik@gmail.com>
      Reported-and-tested-by: Aleksander Morgado <aleksander@aleksander.es>
      Signed-off-by: Bjørn Mork <bjorn@mork.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      61f602d8
    • WANG Cong's avatar
      net_sched: fix mirrored packets checksum · 2832302f
      WANG Cong authored
      [ Upstream commit 82a31b92 ]
      
      Similar to commit 9b368814 ("net: fix bridge multicast packet checksum validation")
      we need to fixup the checksum for CHECKSUM_COMPLETE when
      pushing skb on RX path. Otherwise we get similar splats.
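
      As a hedged sketch of that kind of fixup (not necessarily the exact
      act_mirred change): when header bytes are pushed back onto the skb
      before mirroring, a CHECKSUM_COMPLETE value has to be extended over
      the re-added bytes.

          /* Re-add the mac header and fold it into skb->csum so that
           * CHECKSUM_COMPLETE stays correct for the mirrored copy. */
          skb_push(skb, skb->mac_len);
          skb_postpush_rcsum(skb, skb->data, skb->mac_len);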
      
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Cc: Tom Herbert <tom@herbertland.com>
      Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
      Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      2832302f
    • David S. Miller's avatar
      packet: Use symmetric hash for PACKET_FANOUT_HASH. · 424848bd
      David S. Miller authored
      [ Upstream commit eb70db87 ]
      
      People who use PACKET_FANOUT_HASH want a symmetric hash, meaning that
      they want packets going in both directions on a flow to hash to the
      same bucket.
      
      The core kernel SKB hash became non-symmetric when the ipv6 flow label
      and other entities were incorporated into the standard flow hash order
      to increase entropy.
      
      But there are no users of PACKET_FANOUT_HASH who want an asymmetric
      hash; they all want a symmetric one.
      
      Therefore, use the flow dissector to compute a flat symmetric hash
      over only the protocol, addresses and ports.  This hash does not get
      installed into and override the normal skb hash, so this change has
      no effect whatsoever on the rest of the stack.
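
      Roughly, the fanout demux then scales a symmetric flow-dissector hash
      instead of the regular skb hash (a sketch; the function name follows
      af_packet's existing demux helpers):

          static unsigned int fanout_demux_hash(struct packet_fanout *f,
                                                struct sk_buff *skb,
                                                unsigned int num)
          {
                  /* Hash only protocol, addresses and ports, so both
                   * directions of a flow land in the same bucket. */
                  return reciprocal_scale(__skb_get_hash_symmetric(skb), num);
          }
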
      Reported-by: Eric Leblond <eric@regit.org>
      Tested-by: Eric Leblond <eric@regit.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      424848bd
    • Peter Zijlstra's avatar
      sched/fair: Fix cfs_rq avg tracking underflow · 43b1bfec
      Peter Zijlstra authored
      commit 89741892 upstream.
      
      As per commit:
      
        b7fa30c9 ("sched/fair: Fix post_init_entity_util_avg() serialization")
      
      > the code generated from update_cfs_rq_load_avg():
      >
      > 	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
      > 		s64 r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
      > 		sa->load_avg = max_t(long, sa->load_avg - r, 0);
      > 		sa->load_sum = max_t(s64, sa->load_sum - r * LOAD_AVG_MAX, 0);
      > 		removed_load = 1;
      > 	}
      >
      > turns into:
      >
      > ffffffff81087064:       49 8b 85 98 00 00 00    mov    0x98(%r13),%rax
      > ffffffff8108706b:       48 85 c0                test   %rax,%rax
      > ffffffff8108706e:       74 40                   je     ffffffff810870b0 <update_blocked_averages+0xc0>
      > ffffffff81087070:       4c 89 f8                mov    %r15,%rax
      > ffffffff81087073:       49 87 85 98 00 00 00    xchg   %rax,0x98(%r13)
      > ffffffff8108707a:       49 29 45 70             sub    %rax,0x70(%r13)
      > ffffffff8108707e:       4c 89 f9                mov    %r15,%rcx
      > ffffffff81087081:       bb 01 00 00 00          mov    $0x1,%ebx
      > ffffffff81087086:       49 83 7d 70 00          cmpq   $0x0,0x70(%r13)
      > ffffffff8108708b:       49 0f 49 4d 70          cmovns 0x70(%r13),%rcx
      >
      > Which you'll note ends up with sa->load_avg -= r in memory at
      > ffffffff8108707a.
      
      So I _should_ have looked at other unserialized users of ->load_avg,
      but alas. Luckily nikbor reported a similar /0 from task_h_load() which
      instantly triggered recollection of this here problem.
      
      Aside from the intermediate value hitting memory and causing problems,
      there's another problem: the underflow detection relies on the signed
      bit. This reduces the effective width of the variables, IOW it's
      effectively the same as having these variables be of signed type.
      
      This patch changes to a different means of unsigned underflow
      detection to not rely on the signed bit. This allows the variables to
      use the 'full' unsigned range. And it does so with explicit LOAD -
      STORE to ensure any intermediate value will never be visible in
      memory, allowing these unserialized loads.
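
      The subtraction then looks essentially like the following sketch:
      compute in a local, clamp on wrap-around, and write back exactly once
      so no intermediate value is ever visible in memory.

          #define sub_positive(_ptr, _val) do {                           \
                  typeof(_ptr) ptr = (_ptr);                              \
                  typeof(*ptr) val = (_val);                              \
                  typeof(*ptr) res, var = READ_ONCE(*ptr);                \
                  res = var - val;                                        \
                  if (res > var)                                          \
                          res = 0;        /* unsigned underflow: clamp */ \
                  WRITE_ONCE(*ptr, res);                                  \
          } while (0)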
      
      Note: GCC generates crap code for this, might warrant a look later.
      
      Note2: I say 'full' above, if we end up at U*_MAX we'll still explode;
             maybe we should do clamping on add too.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Yuyang Du <yuyang.du@intel.com>
      Cc: bsegall@google.com
      Cc: kernel@kyup.com
      Cc: morten.rasmussen@arm.com
      Cc: pjt@google.com
      Cc: steve.muckle@linaro.org
      Fixes: 9d89c257 ("sched/fair: Rewrite runnable load and utilization average tracking")
      Link: http://lkml.kernel.org/r/20160617091948.GJ30927@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      43b1bfec
    • Kirill A. Shutemov's avatar
      UBIFS: Implement ->migratepage() · 1e1f4ff7
      Kirill A. Shutemov authored
      commit 4ac1c17b upstream.
      
      During page migrations UBIFS might get confused
      and the following assert triggers:
      [  213.480000] UBIFS assert failed in ubifs_set_page_dirty at 1451 (pid 436)
      [  213.490000] CPU: 0 PID: 436 Comm: drm-stress-test Not tainted 4.4.4-00176-geaa802524636-dirty #1008
      [  213.490000] Hardware name: Allwinner sun4i/sun5i Families
      [  213.490000] [<c0015e70>] (unwind_backtrace) from [<c0012cdc>] (show_stack+0x10/0x14)
      [  213.490000] [<c0012cdc>] (show_stack) from [<c02ad834>] (dump_stack+0x8c/0xa0)
      [  213.490000] [<c02ad834>] (dump_stack) from [<c0236ee8>] (ubifs_set_page_dirty+0x44/0x50)
      [  213.490000] [<c0236ee8>] (ubifs_set_page_dirty) from [<c00fa0bc>] (try_to_unmap_one+0x10c/0x3a8)
      [  213.490000] [<c00fa0bc>] (try_to_unmap_one) from [<c00fadb4>] (rmap_walk+0xb4/0x290)
      [  213.490000] [<c00fadb4>] (rmap_walk) from [<c00fb1bc>] (try_to_unmap+0x64/0x80)
      [  213.490000] [<c00fb1bc>] (try_to_unmap) from [<c010dc28>] (migrate_pages+0x328/0x7a0)
      [  213.490000] [<c010dc28>] (migrate_pages) from [<c00d0cb0>] (alloc_contig_range+0x168/0x2f4)
      [  213.490000] [<c00d0cb0>] (alloc_contig_range) from [<c010ec00>] (cma_alloc+0x170/0x2c0)
      [  213.490000] [<c010ec00>] (cma_alloc) from [<c001a958>] (__alloc_from_contiguous+0x38/0xd8)
      [  213.490000] [<c001a958>] (__alloc_from_contiguous) from [<c001ad44>] (__dma_alloc+0x23c/0x274)
      [  213.490000] [<c001ad44>] (__dma_alloc) from [<c001ae08>] (arm_dma_alloc+0x54/0x5c)
      [  213.490000] [<c001ae08>] (arm_dma_alloc) from [<c035cecc>] (drm_gem_cma_create+0xb8/0xf0)
      [  213.490000] [<c035cecc>] (drm_gem_cma_create) from [<c035cf20>] (drm_gem_cma_create_with_handle+0x1c/0xe8)
      [  213.490000] [<c035cf20>] (drm_gem_cma_create_with_handle) from [<c035d088>] (drm_gem_cma_dumb_create+0x3c/0x48)
      [  213.490000] [<c035d088>] (drm_gem_cma_dumb_create) from [<c0341ed8>] (drm_ioctl+0x12c/0x444)
      [  213.490000] [<c0341ed8>] (drm_ioctl) from [<c0121adc>] (do_vfs_ioctl+0x3f4/0x614)
      [  213.490000] [<c0121adc>] (do_vfs_ioctl) from [<c0121d30>] (SyS_ioctl+0x34/0x5c)
      [  213.490000] [<c0121d30>] (SyS_ioctl) from [<c000f2c0>] (ret_fast_syscall+0x0/0x34)
      
      UBIFS is using PagePrivate() which can have different meanings across
      filesystems. Therefore the generic page migration code cannot handle this
      case correctly.
      We have to implement our own migration function which basically does a
      plain copy but also duplicates the page private flag.
      UBIFS is not a block device filesystem and cannot use buffer_migrate_page().
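
      A hedged sketch of such a hook (the function name and exact calls are
      assumptions based on the description above, using the helpers exported
      in the next patch):

          static int ubifs_migrate_page(struct address_space *mapping,
                                        struct page *newpage, struct page *page,
                                        enum migrate_mode mode)
          {
                  int rc;

                  rc = migrate_page_move_mapping(mapping, newpage, page,
                                                 NULL, mode, 0);
                  if (rc != MIGRATEPAGE_SUCCESS)
                          return rc;

                  /* Transfer the private flag by hand; the generic code
                   * does not know what it means for this filesystem. */
                  if (PagePrivate(page)) {
                          ClearPagePrivate(page);
                          SetPagePrivate(newpage);
                  }

                  migrate_page_copy(newpage, page);
                  return MIGRATEPAGE_SUCCESS;
          }
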
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      [rw: Massaged changelog, build fixes, etc...]
      Signed-off-by: Richard Weinberger <richard@nod.at>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      1e1f4ff7
    • Richard Weinberger's avatar
      mm: Export migrate_page_move_mapping and migrate_page_copy · 4b1cb3c8
      Richard Weinberger authored
      commit 1118dce7 upstream.
      
      Export these symbols such that UBIFS can implement
      ->migratepage.
      Signed-off-by: Richard Weinberger <richard@nod.at>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      4b1cb3c8
    • James Hogan's avatar
      MIPS: KVM: Fix modular KVM under QEMU · 7ad26023
      James Hogan authored
      commit 797179bc upstream.
      
      Copy __kvm_mips_vcpu_run() into unmapped memory, so that we can never
      get a TLB refill exception in it when KVM is built as a module.
      
      This was observed to happen with the host MIPS kernel running under
      QEMU, due to a not entirely transparent optimisation in the QEMU TLB
      handling where TLB entries replaced with TLBWR are copied to a separate
      part of the TLB array. Code in those pages continue to be executable,
      but those mappings persist only until the next ASID switch, even if they
      are marked global.
      
      An ASID switch happens in __kvm_mips_vcpu_run() at exception level after
      switching to the guest exception base. Subsequent TLB mapped kernel
      instructions just prior to switching to the guest trigger a TLB refill
      exception, which enters the guest exception handlers without updating
      EPC. This appears as a guest triggered TLB refill on a host kernel
      mapped (host KSeg2) address, which is not handled correctly as user
      (guest) mode accesses to kernel (host) segments always generate address
      error exceptions.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: kvm@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      7ad26023
    • Steve Capper's avatar
      ARM: 8579/1: mm: Fix definition of pmd_mknotpresent · 490a71c5
      Steve Capper authored
      commit 56530f5d upstream.
      
      Currently pmd_mknotpresent will use a zero entry to represent an
      invalidated pmd.
      
      Unfortunately this definition clashes with pmd_none, thus it is
      possible for a race condition to occur if zap_pmd_range sees pmd_none
      whilst __split_huge_pmd_locked is also running, having just called
      pmdp_invalidate.
      
      This patch fixes the race condition by modifying pmd_mknotpresent to
      create non-zero faulting entries (as is done in other architectures),
      removing the ambiguity with pmd_none.
      
      [catalin.marinas@arm.com: using L_PMD_SECT_VALID instead of PMD_TYPE_SECT]
      
      Fixes: 8d962507 ("ARM: mm: Transparent huge page support for LPAE systems.")
      Reported-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Signed-off-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      490a71c5
    • Will Deacon's avatar
      ARM: 8578/1: mm: ensure pmd_present only checks the valid bit · 54cf0dde
      Will Deacon authored
      commit 62453188 upstream.
      
      In a subsequent patch, pmd_mknotpresent will clear the valid bit of the
      pmd entry, resulting in a not-present entry from the hardware's
      perspective. Unfortunately, pmd_present simply checks for a non-zero pmd
      value and will therefore continue to return true even after a
      pmd_mknotpresent operation. Since pmd_mknotpresent is only used for
      managing huge entries, this is only an issue for the 3-level case.
      
      This patch fixes the 3-level pmd_present implementation to take into
      account the valid bit. For bisectability, the change is made before the
      fix to pmd_mknotpresent.
      
      [catalin.marinas@arm.com: comment update regarding pmd_mknotpresent patch]
      
      Fixes: 8d962507 ("ARM: mm: Transparent huge page support for LPAE systems.")
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Steve Capper <Steve.Capper@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      54cf0dde
    • Fabio Estevam's avatar
      ARM: imx6ul: Fix Micrel PHY mask · 91ac7387
      Fabio Estevam authored
      commit 20c15226 upstream.
      
      The value used for Micrel PHY mask is not correct. Use the
      MICREL_PHY_ID_MASK definition instead.
      
      Thanks to Jiri Luznicky for proposing the fix at
      https://community.freescale.com/thread/387739
      
      Fixes: 709bc065 ("ARM: imx6ul: add fec MAC refrence clock and phy fixup init")
      Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
      Reviewed-by: Andrew Lunn <andrew@lunn.ch>
      Signed-off-by: Shawn Guo <shawnguo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      91ac7387
    • Trond Myklebust's avatar
      NFS: Fix another OPEN_DOWNGRADE bug · b5d4a793
      Trond Myklebust authored
      commit e547f262 upstream.
      
      Olga Kornievskaia reports that the following test fails to trigger
      an OPEN_DOWNGRADE on the wire, and only triggers the final CLOSE.
      
      	fd0 = open(foo, RDRW)   -- should be open on the wire for "both"
      	fd1 = open(foo, RDONLY)  -- should be open on the wire for "read"
      	close(fd0) -- should trigger an open_downgrade
      	read(fd1)
      	close(fd1)
      
      The issue is that we're missing a check for whether or not the current
      state transitioned from an O_RDWR state as opposed to having transitioned
      from a combination of O_RDONLY and O_WRONLY.
      Reported-by: Olga Kornievskaia <aglo@umich.edu>
      Fixes: cd9288ff ("NFSv4: Fix another bug in the close/open_downgrade code")
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b5d4a793
    • Al Viro's avatar
      make nfs_atomic_open() call d_drop() on all ->open_context() errors. · 44d86dbf
      Al Viro authored
      commit d20cb71d upstream.
      
      In "NFSv4: Move dentry instantiation into the NFSv4-specific atomic open code"
      unconditional d_drop() after the ->open_context() had been removed.  It had
      been correct for success cases (there ->open_context() itself had been doing
      dcache manipulations), but not for error ones.  Only one of those (ENOENT)
      got a compensatory d_drop() added in that commit, but in fact it should've
      been done for all errors.  As it is, the case of O_CREAT non-exclusive open
      on a hashed negative dentry racing with e.g. symlink creation from another
      client ended up with ->open_context() getting an error and proceeding to
      call nfs_lookup() on a hashed dentry, which would've instantly triggered
      BUG_ON() in d_materialise_unique() (or, these days, its equivalent in
      d_splice_alias()).
      Tested-by: Oleg Drokin <green@linuxhacker.ru>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
      Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      44d86dbf
    • Ben Hutchings's avatar
      nfsd: check permissions when setting ACLs · 412cfeec
      Ben Hutchings authored
      commit 99965378 upstream.
      
      Use set_posix_acl, which includes proper permission checks, instead of
      calling ->set_acl directly.  Without this anyone may be able to grant
      themselves permissions to a file by setting the ACL.
      
      Lock the inode to make the new checks atomic with respect to set_acl.
      (Also, nfsd was the only caller of set_acl not locking the inode, so I
      suspect this may fix other races.)
      
      This also simplifies the code, and ensures our ACLs are checked by
      posix_acl_valid.
      
      The permission checks and the inode locking were lost with commit
      4ac7249e, which changed nfsd to use the set_acl inode operation directly
      instead of going through xattr handlers.
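
      The shape of the resulting call site, as a hedged sketch (plain inode
      locking is shown here; the exact locking helper and variable names in
      nfsd may differ):

          inode_lock(inode);
          error = set_posix_acl(inode, ACL_TYPE_ACCESS, acl);
          if (!error)
                  error = set_posix_acl(inode, ACL_TYPE_DEFAULT, default_acl);
          inode_unlock(inode);
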
      Reported-by: David Sinquin <david@sinquin.eu>
      [agruenba@redhat.com: use set_posix_acl]
      Fixes: 4ac7249e
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      412cfeec
    • Andreas Gruenbacher's avatar
      posix_acl: Add set_posix_acl · c3fa141c
      Andreas Gruenbacher authored
      commit 485e71e8 upstream.
      
      Factor out part of posix_acl_xattr_set into a common function that takes
      a posix_acl, which nfsd can also call.
      
      The prototype already exists in include/linux/posix_acl.h.
      Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      c3fa141c
    • Oleg Drokin's avatar
      nfsd: Extend the mutex holding region around in nfsd4_process_open2() · f78ffdc2
      Oleg Drokin authored
      commit 5cc1fb2a upstream.
      
      To avoid racing entry into nfs4_get_vfs_file().
      Make init_open_stateid() return with the stateid locked, to be unlocked
      by the caller.
      Signed-off-by: Oleg Drokin <green@linuxhacker.ru>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      f78ffdc2
    • Oleg Drokin's avatar
      nfsd: Always lock state exclusively. · 087f8fe0
      Oleg Drokin authored
      commit feb9dad5 upstream.
      
      It used to be the case that state had an rwlock that was locked for write
      by downgrades, but for read for upgrades (opens). Well, the problem is
      if there are two competing opens for the same state, they step on
      each other's toes, potentially leading to leaking file descriptors
      from the state structure, since access mode is a bitmap only set once.
      Signed-off-by: Oleg Drokin <green@linuxhacker.ru>
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      087f8fe0
    • J. Bruce Fields's avatar
      nfsd4/rpc: move backchannel create logic into rpc code · 58e9e70a
      J. Bruce Fields authored
      commit d50039ea upstream.
      
      Also simplify the logic a bit.
      Signed-off-by: J. Bruce Fields <bfields@redhat.com>
      Acked-by: Trond Myklebust <trondmy@primarydata.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      58e9e70a
    • Tejun Heo's avatar
      writeback: use higher precision calculation in domain_dirty_limits() · 400850bd
      Tejun Heo authored
      commit 62a584fe upstream.
      
      As vm.dirty_[background_]bytes can't be applied verbatim to multiple
      cgroup writeback domains, they get converted to percentages in
      domain_dirty_limits() and applied the same way as
      vm.dirty_[background]ratio.  However, if the specified bytes is lower
      than 1% of available memory, the calculated ratios become zero and the
      writeback domain gets throttled constantly.
      
      Fix it by using per-PAGE_SIZE instead of percentage for ratio
      calculations.  Also, the updated DIV_ROUND_UP() usages now should
      yield 1/4096 (0.0244%) as the minimum ratio as long as the specified
      bytes are above zero.
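
      As a rough illustration (numbers chosen here, not taken from the
      commit, and assuming 4 KiB pages): on a machine with 8 GiB of
      dirtyable memory, vm.dirty_background_bytes = 64 MiB is about 0.78%,
      which an integer percentage truncates to 0 and therefore throttles
      the domain constantly; carried at 1/4096 granularity the same setting
      maps to 32/4096, a usable non-zero limit.
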
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Miao Xie <miaoxie@huawei.com>
      Link: http://lkml.kernel.org/g/57333E75.3080309@huawei.com
      Fixes: 9fc3a43e ("writeback: separate out domain_dirty_limits()")
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      Adjusted comment based on Jan's suggestion.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      400850bd
    • Lukasz Luba's avatar
      thermal: cpu_cooling: fix improper order during initialization · a519bfe6
      Lukasz Luba authored
      commit f840ab18 upstream.
      
      The freq_table array is not populated before calling
      thermal_of_cooling_register. The code which populates the freq table was
      introduced in commit f6859014.
      This should be done before registering the new thermal cooling device.
      The log below shows the effects of this wrong ordering:
      [    2.172614] cpu cpu1: Failed to get voltage for frequency 1984518656000: -34
      [    2.220863] cpu cpu0: Failed to get voltage for frequency 1984524416000: -34
      
      Fixes: f6859014 ("thermal: cpu_cooling: Store frequencies in descending order")
      Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
      Acked-by: Javi Merino <javi.merino@arm.com>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Zhang Rui <rui.zhang@intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      a519bfe6
    • Andy Lutomirski's avatar
      uvc: Forward compat ioctls to their handlers directly · f77ea5ca
      Andy Lutomirski authored
      commit a44323e2 upstream.
      
      The current code goes through a lot of indirection just to call a
      known handler.  Simplify it: just call the handlers directly.
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      f77ea5ca
    • Johan Hovold's avatar
      Revert "gpiolib: Split GPIO flags parsing and GPIO configuration" · 93f25db7
      Johan Hovold authored
      commit 85b03b30 upstream.
      
      This reverts commit 923b93e4.
      
      Make sure consumers do not overwrite gpio flags for pins that have
      already been claimed.
      
      While adding support for gpio drivers to refuse a request using
      unsupported flags, the order of when the requested flag was checked and
      the new flags were applied was reversed to that consumers could
      overwrite flags for already requested gpios.
      
      This not only affects device-tree setups where two drivers could request
      the same gpio using conflicting configurations, but also allowed user
      space to clear gpio flags for already claimed pins simply by attempting
      to export them through the sysfs interface. By for example clearing the
      FLAG_ACTIVE_LOW flag this way, user space could effectively change the
      polarity of a signal.
      
      Reverting this change obviously prevents gpio drivers from doing sanity
      checks on the flags in their request callbacks. Fortunately only one
      recently added driver (gpio-tps65218 in v4.6) appears to do this, and a
      follow up patch could restore this functionality through a different
      interface.
      Signed-off-by: Johan Hovold <johan@kernel.org>
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      93f25db7
    • Borislav Petkov's avatar
      x86/amd_nb: Fix boot crash on non-AMD systems · d9c5952f
      Borislav Petkov authored
      commit 1ead852d upstream.
      
      Fix boot crash that triggers if this driver is built into a kernel and
      run on non-AMD systems.
      
      AMD northbridges users call amd_cache_northbridges() and it returns
      a negative value to signal that we weren't able to cache/detect any
      northbridges on the system.
      
      At least, it should do so, as all its callers expect it to. But it
      currently returns a negative value only when kmalloc() fails.
      
      Fix it to return -ENODEV if there are no NBs cached. Otherwise, amd_nb
      users like amd64_edac, for example, which rely on it to know whether
      they should load or not, get loaded on systems like Intel Xeons where
      they shouldn't.
      Reported-and-tested-by: Tony Battersby <tonyb@cybernetics.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1466097230-5333-2-git-send-email-bp@alien8.de
      Link: https://lkml.kernel.org/r/5761BEB0.9000807@cybernetics.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      d9c5952f
    • Masami Hiramatsu's avatar
      kprobes/x86: Clear TF bit in fault on single-stepping · 66af3f62
      Masami Hiramatsu authored
      commit dcfc4724 upstream.
      
      Fix kprobe_fault_handler() to clear the TF (trap flag) bit of
      the flags register in the case of a fault fixup on single-stepping.
      
      If we put a kprobe on the instruction which caused a
      page fault (e.g. actual mov instructions in copy_user_*),
      that fault happens on the single-stepping buffer. In this
      case, kprobes resets the running instance so that the CPU can
      retry execution at the original ip address.
      
      However, current code forgets to reset the TF bit. Since this
      fault happens with TF bit set for enabling single-stepping,
      when it retries, it causes a debug exception and kprobes
      can not handle it because it already reset itself.
      
      On most x86-64 platforms, it can be easily reproduced
      using the kprobe tracer, e.g.:
      
        # cd /sys/kernel/debug/tracing
        # echo p copy_user_enhanced_fast_string+5 > kprobe_events
        # echo 1 > events/kprobes/enable
      
      And you'll see a kernel panic on do_debug(), since the debug
      trap is not handled by kprobes.
      
      To fix this problem, we just need to clear the TF bit when
      resetting the running kprobe.
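
      A sketch of that fixup path (simplified; regs is the fault's register
      state, cur the current kprobe and kcb the per-CPU kprobe control block
      in kprobe_fault_handler()):

          /* Point the CPU back at the probed address, drop the TF bit set
           * for single-stepping, restore the original flags and forget the
           * running kprobe so the page fault is handled normally. */
          regs->ip = (unsigned long)cur->addr;
          regs->flags &= ~X86_EFLAGS_TF;
          regs->flags |= kcb->kprobe_old_flags;
          if (kcb->kprobe_status == KPROBE_REENTER)
                  restore_previous_kprobe(kcb);
          else
                  reset_current_kprobe();
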
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Reviewed-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: systemtap@sourceware.org
      Link: http://lkml.kernel.org/r/20160611140648.25885.37482.stgit@devbox
      [ Updated the comments. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      66af3f62
    • H. Peter Anvin's avatar
      x86, build: copy ldlinux.c32 to image.iso · f7acd40e
      H. Peter Anvin authored
      commit 9c77679c upstream.
      
      For newer versions of Syslinux, we need ldlinux.c32 in addition to
      isolinux.bin to reside on the boot disk, so if the latter is found,
      copy it, too, to the isoimage tree.
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      f7acd40e
    • Paolo Bonzini's avatar
      locking/static_key: Fix concurrent static_key_slow_inc() · 71ef2c11
      Paolo Bonzini authored
      commit 4c5ea0a9 upstream.
      
      The following scenario is possible:
      
          CPU 1                                   CPU 2
          static_key_slow_inc()
           atomic_inc_not_zero()
            -> key.enabled == 0, no increment
           jump_label_lock()
           atomic_inc_return()
            -> key.enabled == 1 now
                                                  static_key_slow_inc()
                                                   atomic_inc_not_zero()
                                                    -> key.enabled == 1, inc to 2
                                                   return
                                                  ** static key is wrong!
           jump_label_update()
           jump_label_unlock()
      
      Testing the static key at the point marked by (**) will follow the
      wrong path for jumps that have not been patched yet.  This can
      actually happen when creating many KVM virtual machines with userspace
      LAPIC emulation; just run several copies of the following program:
      
          #include <fcntl.h>
          #include <unistd.h>
          #include <sys/ioctl.h>
          #include <linux/kvm.h>
      
          int main(void)
          {
              for (;;) {
                  int kvmfd = open("/dev/kvm", O_RDONLY);
                  int vmfd = ioctl(kvmfd, KVM_CREATE_VM, 0);
                  close(ioctl(vmfd, KVM_CREATE_VCPU, 1));
                  close(vmfd);
                  close(kvmfd);
              }
              return 0;
          }
      
      Every KVM_CREATE_VCPU ioctl will attempt a static_key_slow_inc() call.
      The static key's purpose is to skip NULL pointer checks and indeed one
      of the processes eventually dereferences NULL.
      
      As explained in the commit that introduced the bug:
      
        706249c2 ("locking/static_keys: Rework update logic")
      
      jump_label_update() needs key.enabled to be true.  The solution adopted
      here is to temporarily make key.enabled == -1, and to go down the
      slow path when key.enabled <= 0.
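
      A sketch of the resulting slow-path logic (simplified): -1 acts as an
      "update in progress" marker, so concurrent callers fall through to the
      slow path and serialize on the jump-label lock.

          void static_key_slow_inc(struct static_key *key)
          {
                  int v, v1;

                  /* Fast path only while enabled > 0, i.e. fully patched. */
                  for (v = atomic_read(&key->enabled); v > 0; v = v1) {
                          v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
                          if (v1 == v)
                                  return;
                  }

                  jump_label_lock();
                  if (atomic_read(&key->enabled) == 0) {
                          atomic_set(&key->enabled, -1);  /* being enabled */
                          jump_label_update(key);
                          atomic_set(&key->enabled, 1);   /* publish when patched */
                  } else {
                          atomic_inc(&key->enabled);
                  }
                  jump_label_unlock();
          }
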
      Reported-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 706249c2 ("locking/static_keys: Rework update logic")
      Link: http://lkml.kernel.org/r/1466527937-69798-1-git-send-email-pbonzini@redhat.com
      [ Small stylistic edits to the changelog and the code. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      71ef2c11
    • Peter Zijlstra's avatar
      locking/qspinlock: Fix spin_unlock_wait() some more · a39e660a
      Peter Zijlstra authored
      commit 2c610022 upstream.
      
      While this prior commit:
      
        54cf809b ("locking,qspinlock: Fix spin_is_locked() and spin_unlock_wait()")
      
      ... fixes spin_is_locked() and spin_unlock_wait() for the usage
      in ipc/sem and netfilter, it does not in fact work right for the
      usage in task_work and futex.
      
      So while the 2 locks crossed problem:
      
      	spin_lock(A)		spin_lock(B)
      	if (!spin_is_locked(B)) spin_unlock_wait(A)
      	  foo()			foo();
      
      ... works with the smp_mb() injected by both spin_is_locked() and
      spin_unlock_wait(), this is not sufficient for:
      
      	flag = 1;
      	smp_mb();		spin_lock()
      	spin_unlock_wait()	if (!flag)
      				  // add to lockless list
      	// iterate lockless list
      
      ... because in this scenario, the store from spin_lock() can be delayed
      past the load of flag, uncrossing the variables and losing the
      guarantee.
      
      This patch reworks spin_is_locked() and spin_unlock_wait() to work in
      both cases by exploiting the observation that while the lock byte
      store can be delayed, the contender must have registered itself
      visibly in other state contained in the word.
      
      It also allows for architectures to override both functions, as PPC
      and ARM64 have an additional issue for which we currently have no
      generic solution.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Giovanni Gherdovich <ggherdovich@suse.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman Long <waiman.long@hpe.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Fixes: 54cf809b ("locking,qspinlock: Fix spin_is_locked() and spin_unlock_wait()")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      a39e660a
    • Chris Wilson's avatar
      locking/ww_mutex: Report recursive ww_mutex locking early · c7f47e59
      Chris Wilson authored
      commit 0422e83d upstream.
      
      Recursive locking for ww_mutexes was originally conceived as an
      exception. However, it is heavily used by the DRM atomic modesetting
      code. Currently, the recursive deadlock is checked after we have queued
      up for a busy-spin and as we never release the lock, we spin until
      kicked, whereupon the deadlock is discovered and reported.
      
      A simple solution for the now common problem is to move the recursive
      deadlock discovery to the first action when taking the ww_mutex.
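
      A sketch of that early test in the mutex slow path (simplified;
      use_ww_ctx and ww_ctx are the parameters already present there):

          if (use_ww_ctx) {
                  struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);

                  /* The same acquire context already owns this lock:
                   * recursive locking, report it before spinning/queueing. */
                  if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
                          return -EALREADY;
          }
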
      Suggested-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1464293297-19777-1-git-send-email-chris@chris-wilson.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      c7f47e59
    • Sergei Shtylyov's avatar
      of: irq: fix of_irq_get[_byname]() kernel-doc · c5f2e833
      Sergei Shtylyov authored
      commit 39935466 upstream.
      
      The kernel-doc for of_irq_get[_byname]() is clearly inadequate in
      describing the return values -- of_irq_get_byname() is documented better
      than of_irq_get(), but it still doesn't mention that 0 is returned iff
      irq_create_of_mapping() fails (it doesn't return an error code in this
      case). Document all possible return value variants and, while at it,
      make the writing of the word "IRQ" consistent.
      
      Fixes: 9ec36caf ("of/irq: do irq resolution in platform_get_irq")
      Fixes: ad69674e ("of/irq: do irq resolution in platform_get_irq_byname()")
      Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
      Signed-off-by: Rob Herring <robh@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      c5f2e833
    • Wolfram Sang's avatar
      of: fix autoloading due to broken modalias with no 'compatible' · 6d58954b
      Wolfram Sang authored
      commit b3c0a4da upstream.
      
      Because of an improper dereference, a stray 'C' character was output to
      the modalias when no 'compatible' was specified. This is the case for
      some old PowerMac drivers which only set the 'name' property. Fix it to
      let them match again.
      Reported-by: Mathieu Malaterre <malat@debian.org>
      Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
      Tested-by: Mathieu Malaterre <malat@debian.org>
      Cc: Philipp Zabel <p.zabel@pengutronix.de>
      Cc: Andreas Schwab <schwab@linux-m68k.org>
      Fixes: 6543becf ("mod/file2alias: make modalias generation safe for cross compiling")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      6d58954b
    • Eric W. Biederman's avatar
      mnt: If fs_fully_visible fails call put_filesystem. · a400a793
      Eric W. Biederman authored
      commit 97c1df3e upstream.
      
      Add this trivial missing error handling.
      
      Fixes: 1b852bce ("mnt: Refactor the logic for mounting sysfs and proc in a user namespace")
      Signed-off-by: default avatar"Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: default avatarGreg Kroah-Hartman <gregkh@linuxfoundation.org>
      a400a793
    • Eric W. Biederman's avatar
      mnt: Account for MS_RDONLY in fs_fully_visible · 6256d2f5
      Eric W. Biederman authored
      commit 695e9df0 upstream.
      
      In rare cases it is possible for s_flags & MS_RDONLY to be set but
      MNT_READONLY to be clear.  This starting combination can cause
      fs_fully_visible to fail to ensure that the new mount is readonly.
      Therefore force MNT_LOCK_READONLY in the new mount if MS_RDONLY
      is set on the source filesystem of the mount.
      
      In general both MS_RDONLY and MNT_READONLY are set at the same time for
      mounts, so I don't expect any programs to care.  Nor do I expect
      MS_RDONLY to be set on proc or sysfs in the initial user namespace,
      which further decreases the likelihood of problems.
      
      Which means this change should only affect system configurations by
      paranoid sysadmins who should welcome the additional protection
      as it keeps people from wriggling out of their policies.
      
      Fixes: 8c6cf9cc ("mnt: Modify fs_fully_visible to deal with locked ro nodev and atime")
      Signed-off-by: default avatar"Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: default avatarGreg Kroah-Hartman <gregkh@linuxfoundation.org>
      6256d2f5
    • Eric W. Biederman's avatar
      mnt: fs_fully_visible test the proper mount for MNT_LOCKED · 57eb6e3d
      Eric W. Biederman authored
      commit d71ed6c9 upstream.
      
      MNT_LOCKED on a child mount implies the child is locked to the
      parent.  So while looping through the children, the children should be
      tested (not their parent).
      
      Typically an unshare of a mount namespace locks all mounts together,
      marking both the parent and the slave as locked, but there are a few
      corner cases where things work differently.
      
      Fixes: ceeb0e5d ("vfs: Ignore unlocked mounts in fs_fully_visible")
      Reported-by: Seth Forshee <seth.forshee@canonical.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      57eb6e3d