1. 08 May, 2015 9 commits
    • locking/qspinlock: Revert to test-and-set on hypervisors · 2aa79af6
      Peter Zijlstra (Intel) authored
      When we detect a hypervisor (!paravirt, see qspinlock paravirt support
      patches), revert to a simple test-and-set lock to avoid the horrors
      of queue preemption.
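      A minimal sketch of the idea, assuming kernel context (the helper name
      and the exact hypervisor check are illustrative, not necessarily the
      ones used by the patch):

        /* Sketch: fall back to a test-and-set lock when running virtualized. */
        static inline bool virt_queued_spin_lock_sketch(struct qspinlock *lock)
        {
                if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
                        return false;   /* bare metal: keep the queued slowpath */

                /*
                 * Spin on a plain test-and-set of the lock word; a preempted
                 * vCPU then only delays itself instead of stalling a whole
                 * queue of waiters behind it.
                 */
                while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
                        cpu_relax();

                return true;
        }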
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Daniel J Blueman <daniel@numascale.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1429901803-29771-8-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/qspinlock: Use a simple write to grab the lock · 2c83e8e9
      Waiman Long authored
      Currently, atomic_cmpxchg() is used to get the lock. However, this
      is not really necessary if there is more than one task in the queue
      and the queue head doesn't need to reset the tail code. In that case,
      a simple write to set the lock bit is enough, as the queue head will
      be the only one eligible to get the lock as long as it checks that
      both the lock and pending bits are not set. The current pending-bit
      waiting code ensures that the pending bit will not be set once the
      tail code in the lock is set.
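      A sketch of the queue-head exit path after this change (illustrative,
      assuming a byte-addressable locked field and a set_locked() helper
      that does a plain WRITE_ONCE of the locked byte):

        for (;;) {
                if (val != tail) {
                        /*
                         * Other CPUs are still queued behind us: the tail
                         * need not be reset, so a simple store suffices.
                         */
                        set_locked(lock);       /* WRITE_ONCE of the locked byte */
                        break;
                }
                /* we are the last waiter: clear the tail atomically too */
                old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
                if (old == val)
                        break;                  /* lock taken, queue now empty */
                val = old;                      /* someone queued meanwhile; retry */
        }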
      
      With that change, there is a slight improvement in the performance
      of the queued spinlock in the 5M-loop micro-benchmark run on a 4-socket
      Westmere-EX machine, as shown in the tables below.
      
      		[Standalone/Embedded - same node]
        # of tasks	Before patch	After patch	%Change
        ----------	-----------	----------	-------
             3	 2324/2321	2248/2265	 -3%/-2%
             4	 2890/2896	2819/2831	 -2%/-2%
             5	 3611/3595	3522/3512	 -2%/-2%
             6	 4281/4276	4173/4160	 -3%/-3%
             7	 5018/5001	4875/4861	 -3%/-3%
             8	 5759/5750	5563/5568	 -3%/-3%
      
      		[Standalone/Embedded - different nodes]
        # of tasks	Before patch	After patch	%Change
        ----------	-----------	----------	-------
             3	12242/12237	12087/12093	 -1%/-1%
             4	10688/10696	10507/10521	 -2%/-2%
      
      It was also found that this change produced a much bigger performance
      improvement on the newer IvyBridge-EX chip, essentially closing the
      performance gap between the ticket spinlock and the queued spinlock.
      
      The disk workload of the AIM7 benchmark was run on a 4-socket
      Westmere-EX machine with both ext4 and xfs RAM disks at 3000 users
      on a 3.14 based kernel. The results of the test runs were:
      
                      AIM7 XFS Disk Test
        kernel                 JPM    Real Time   Sys Time    Usr Time
        -----                  ---    ---------   --------    --------
        ticketlock            5678233    3.17       96.61       5.81
        qspinlock             5750799    3.13       94.83       5.97
      
                      AIM7 EXT4 Disk Test
        kernel                 JPM    Real Time   Sys Time    Usr Time
        -----                  ---    ---------   --------    --------
        ticketlock            1114551   16.15      509.72       7.11
        qspinlock             2184466    8.24      232.99       6.01
      
      The ext4 filesystem run had a much higher spinlock contention than
      the xfs filesystem run.
      
      The "ebizzy -m" test was also run with the following results:
      
        kernel               records/s  Real Time   Sys Time    Usr Time
        -----                ---------  ---------   --------    --------
        ticketlock             2075       10.00      216.35       3.49
        qspinlock              3023       10.00      198.20       4.80
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Daniel J Blueman <daniel@numascale.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1429901803-29771-7-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/qspinlock: Optimize for smaller NR_CPUS · 69f9cae9
      Peter Zijlstra (Intel) authored
      When we allow for a max NR_CPUS < 2^14 we can optimize the pending
      wait-acquire and the xchg_tail() operations.
      
      By growing the pending bit to a byte, we reduce the tail to 16 bits.
      This means we can use xchg16 for the tail part and do away with all
      the repeated cmpxchg() operations.
      
      This in turn allows us to unconditionally acquire; the locked state
      as observed by the wait loops cannot change. And because both locked
      and pending are now a full byte we can use simple stores for the
      state transition, obviating one atomic operation entirely.
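      A sketch of the resulting bit layout of the 32-bit lock word when
      NR_CPUS < 16K (names follow the qspinlock conventions; treat this as an
      illustration rather than a verbatim copy of the header):

        #define _Q_LOCKED_OFFSET        0
        #define _Q_LOCKED_BITS          8       /* bits  0- 7: locked byte        */
        #define _Q_PENDING_OFFSET       8
        #define _Q_PENDING_BITS         8       /* bits  8-15: pending byte       */
        #define _Q_TAIL_IDX_OFFSET      16
        #define _Q_TAIL_IDX_BITS        2       /* bits 16-17: per-cpu node index */
        #define _Q_TAIL_CPU_OFFSET      18
        #define _Q_TAIL_CPU_BITS        14      /* bits 18-31: CPU number + 1     */

      With the tail confined to the upper 16 bits, a single xchg16 and simple
      byte stores are enough for the transitions described above.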
      
      This optimization is needed to make the qspinlock achieve performance
      parity with ticket spinlock at light load.
      
      All this is horribly broken on Alpha pre EV56 (and any other arch that
      cannot do single-copy atomic byte stores).
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Daniel J Blueman <daniel@numascale.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1429901803-29771-6-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/qspinlock: Extract out code snippets for the next patch · 6403bd7d
      Waiman Long authored
      This is a preparatory patch that extracts the following two code
      snippets in preparation for the next performance optimization patch:
      
       1) the logic for the exchange of new and previous tail code words
          into a new xchg_tail() function.
       2) the logic for clearing the pending bit and setting the locked bit
          into a new clear_pending_set_locked() function.
      
      This patch also simplifies the trylock operation before queuing by
      calling queued_spin_trylock() directly.
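      For reference, the two extracted helpers look roughly as follows in
      their generic, cmpxchg-based form (a sketch, not the exact diff):

        /* Atomically clear the pending bit and set the locked bit (sketch). */
        static __always_inline void
        clear_pending_set_locked(struct qspinlock *lock, u32 val)
        {
                u32 old, new;

                for (;;) {
                        new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
                        old = atomic_cmpxchg(&lock->val, val, new);
                        if (old == val)
                                break;
                        val = old;      /* lost a race: recompute and retry */
                }
        }

        /* Publish our tail code, returning the previous lock value (sketch). */
        static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
        {
                u32 old, new, val = atomic_read(&lock->val);

                for (;;) {
                        new = (val & _Q_LOCKED_PENDING_MASK) | tail;
                        old = atomic_cmpxchg(&lock->val, val, new);
                        if (old == val)
                                break;
                        val = old;
                }
                return old;     /* caller extracts the previous tail, if any */
        }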
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Daniel J Blueman <daniel@numascale.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1429901803-29771-5-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/qspinlock: Add pending bit · c1fb159d
      Peter Zijlstra (Intel) authored
      Because the qspinlock otherwise needs to touch a second cacheline (the
      per-cpu mcs_nodes[]), add a pending bit and allow a single in-word
      spinner before we punt to the second cacheline.
      
      It is possible to observe the pending bit without the locked bit when
      the last owner has just released but the pending owner has not yet
      taken ownership.
      
      In this case we would normally queue -- because the pending bit is
      already taken. However, in this case the pending bit is guaranteed
      to be released 'soon', therefore wait for it and avoid queueing.
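      A simplified sketch of the resulting pending-bit path (illustrative; it
      reuses the clear_pending_set_locked() helper factored out by the patch
      above for brevity):

        /* Sketch of the pending-bit fast/medium path (not the exact diff). */
        static void queued_spin_lock_slowpath_sketch(struct qspinlock *lock)
        {
                u32 val, old;

                val = atomic_read(&lock->val);
                if (val & ~_Q_LOCKED_MASK)
                        goto queue;             /* pending or tail already set */

                /* try to become the single in-word spinner */
                old = atomic_cmpxchg(&lock->val, val, val | _Q_PENDING_VAL);
                if (old != val)
                        goto queue;

                /* wait in-word for the owner to drop the locked byte ... */
                while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
                        cpu_relax();

                /* ... then take over: clear pending, set locked */
                clear_pending_set_locked(lock, val);
                return;
        queue:
                ;       /* only now touch the second cacheline (mcs_nodes[]) */
        }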
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Daniel J Blueman <daniel@numascale.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1429901803-29771-4-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/qspinlock, x86: Enable x86-64 to use queued spinlocks · d73a3397
      Waiman Long authored
      This patch makes the necessary changes at the x86 architecture-specific
      layer to enable the use of queued spinlocks for x86-64. As x86-32
      machines are typically not multi-socket, the benefit of queued spinlocks
      may not be apparent there, so they are not enabled for x86-32.
      
      Currently, there are some incompatibilities between the para-virtualized
      spinlock code (which hard-codes the use of ticket spinlock) and the
      queued spinlocks. Therefore, the use of queued spinlocks is disabled
      when the para-virtualized spinlock is enabled.
      
      The arch/x86/include/asm/qspinlock.h header file includes some
      x86-specific optimizations which make the queued spinlock code
      perform better than the generic implementation.
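      One such optimization, sketched here as an illustration (treat the exact
      guards as assumptions rather than the verbatim header contents), is
      releasing the lock with a plain byte store instead of an atomic
      operation:

        /* arch/x86/include/asm/qspinlock.h (sketch) */
        #define queued_spin_unlock queued_spin_unlock
        static inline void queued_spin_unlock(struct qspinlock *lock)
        {
                /*
                 * x86 stores are release-ordered, so clearing the locked
                 * byte with a single store is sufficient to unlock.
                 */
                smp_store_release((u8 *)lock, 0);
        }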
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Daniel J Blueman <daniel@numascale.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1429901803-29771-3-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/qspinlock: Introduce a simple generic 4-byte queued spinlock · a33fda35
      Waiman Long authored
      This patch introduces a new generic queued spinlock implementation that
      can serve as an alternative to the default ticket spinlock. The queued
      spinlock should be almost as fair as the ticket spinlock. It has about
      the same speed in the single-threaded case and can be much faster under
      high contention, especially when the spinlock is embedded within the
      data structure to be protected.
      
      Only in light to moderate contention where the average queue depth
      is around 1-3 will this queued spinlock be potentially a bit slower
      due to the higher slowpath overhead.
      
      This queued spinlock is especially suited to NUMA machines with a large
      number of cores, as the chance of spinlock contention is much higher
      on those machines. The cost of contention is also higher because of
      slower inter-node memory traffic.
      
      Because spinlocks are acquired with preemption disabled, the process
      will not be migrated to another CPU while it is trying to get a
      spinlock. Ignoring interrupt handling, a CPU can only be contending
      on one spinlock at any one time. Counting soft IRQ, hard IRQ and NMI,
      a CPU can have a maximum of 4 concurrent lock-waiting activities. By
      allocating a set of per-cpu queue nodes and using them to form a
      waiting queue, we can encode the queue node address into a much
      smaller 24-bit tail code (CPU number plus queue node index), leaving
      one byte for the lock.
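      A sketch of the encode/decode helpers this implies (names follow the
      generic implementation; shown here for illustration):

        /*
         * Encode (cpu, per-cpu node index) into the tail bits of the lock
         * word; the CPU number is stored +1 so that tail == 0 means "empty".
         */
        static inline u32 encode_tail(int cpu, int idx)
        {
                u32 tail;

                tail  = (cpu + 1) << _Q_TAIL_CPU_OFFSET;
                tail |= idx << _Q_TAIL_IDX_OFFSET;  /* 0..3: task/softirq/hardirq/nmi */

                return tail;
        }

        static inline struct mcs_spinlock *decode_tail(u32 tail)
        {
                int cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1;
                int idx = (tail &  _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET;

                return per_cpu_ptr(&mcs_nodes[idx], cpu);
        }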
      
      Please note that the queue node is only needed when waiting for the
      lock. Once the lock is acquired, the queue node can be released to
      be used later.
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Daniel J Blueman <daniel@numascale.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1429901803-29771-2-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • kernel: Replace reference to ASSIGN_ONCE() with WRITE_ONCE() in comment · 663fdcbe
      Preeti U Murthy authored
      Looks like commit:
      
       43239cbe ("kernel: Change ASSIGN_ONCE(val, x) to WRITE_ONCE(x, val)")
      
      left behind a reference to ASSIGN_ONCE(). Update this to WRITE_ONCE().
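      For context, the rename also swapped the argument order, e.g. (a usage
      sketch; 'shared_flag' is just a placeholder variable):

        /* old, removed:   ASSIGN_ONCE(1, shared_flag);  */
        /* new, current:   WRITE_ONCE(shared_flag, 1);   */
        WRITE_ONCE(shared_flag, 1);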
      Signed-off-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: borntraeger@de.ibm.com
      Cc: dave@stgolabs.net
      Cc: paulmck@linux.vnet.ibm.com
      Link: http://lkml.kernel.org/r/20150430115721.22278.94082.stgit@preeti.in.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/rwsem: Reduce spinlock contention in wakeup after up_read()/up_write() · 59aabfc7
      Waiman Long authored
      In up_write()/up_read(), rwsem_wake() will be called whenever it
      detects that some writers/readers are waiting. The rwsem_wake()
      function will take the wait_lock and call __rwsem_do_wake() to do the
      real wakeup. For a heavily contended rwsem, doing a spin_lock() on
      wait_lock will cause further contention on the already hot rwsem
      cacheline, delaying the completion of the up_read()/up_write()
      operations.
      
      This patch makes taking the wait_lock and calling __rwsem_do_wake()
      optional if at least one spinning writer is present. The spinning
      writer will be able to take the rwsem and call rwsem_wake() later,
      when it calls up_write(). With a spinning writer present, rwsem_wake()
      will now try to acquire the wait_lock using a trylock. If that fails,
      it will just quit.
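      A simplified sketch of the new rwsem_wake() early exit (illustrative;
      the real patch also inserts a memory barrier and keeps the existing
      wakeup path intact):

        struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
        {
                unsigned long flags;

                if (rwsem_has_spinner(sem)) {
                        /*
                         * A spinning writer will grab the rwsem and issue the
                         * wakeup from its own up_write(), so only trylock the
                         * contended wait_lock here; never spin on it.
                         */
                        if (!raw_spin_trylock_irqsave(&sem->wait_lock, flags))
                                return sem;
                } else {
                        raw_spin_lock_irqsave(&sem->wait_lock, flags);
                }

                if (!list_empty(&sem->wait_list))
                        sem = __rwsem_do_wake(sem, RWSEM_WAKE_ANY);

                raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
                return sem;
        }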
      Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
      Acked-by: Jason Low <jason.low2@hp.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1430428337-16802-2-git-send-email-Waiman.Long@hp.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 07 May, 2015 6 commits
    • Merge tag 'pm+acpi-4.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm · 3e0283a5
      Linus Torvalds authored
      Pull power management and ACPI fixes from Rafael Wysocki:
       "These include three regression fixes (PCI resources management,
        ACPI/PNP device enumeration, ACPI SBS on MacBook) and two ACPI
        documentation fixes related to GPIO.
      
        Specifics:
      
         - Fix for a PCI resources management regression introduced during the
           4.0 cycle and related to the handling of ACPI resources'
           Producer/Consumer flags that turn out to be useless (Jiang Liu)
      
         - Fix for a MacBook regression related to the Smart Battery Subsystem
           (SBS) driver causing various problems (stalls on boot, failure to
           detect or report battery) to happen and introduced during the 3.18
           cycle (Chris Bainbridge)
      
         - Fix for an ACPI/PNP device enumeration regression introduced during
           the 3.16 cycle caused by failing to include two PNP device IDs into
           the list of IDs that PNP device objects need to be created for
           (Witold Szczeponik)
      
         - Fixes for two minor mistakes in the ACPI GPIO properties
           documentation (Antonio Ospite, Rafael J Wysocki)"
      
      * tag 'pm+acpi-4.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
        ACPI / PNP: add two IDs to list for PNPACPI device enumeration
        ACPI / documentation: Fix ambiguity in the GPIO properties document
        ACPI / documentation: fix a sentence about GPIO resources
        ACPI / SBS: Add 5 us delay to fix SBS hangs on MacBook
        x86/PCI/ACPI: Make all resources except [io 0xcf8-0xcff] available on PCI bus
    • Merge branches 'acpi-resources', 'acpi-battery', 'acpi-doc' and 'acpi-pnp' · 9a5d9315
      Rafael J. Wysocki authored
      * acpi-resources:
        x86/PCI/ACPI: Make all resources except [io 0xcf8-0xcff] available on PCI bus
      
      * acpi-battery:
        ACPI / SBS: Add 5 us delay to fix SBS hangs on MacBook
      
      * acpi-doc:
        ACPI / documentation: Fix ambiguity in the GPIO properties document
        ACPI / documentation: fix a sentence about GPIO resources
      
      * acpi-pnp:
        ACPI / PNP: add two IDs to list for PNPACPI device enumeration
    • Merge tag 'for-f2fs-4.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs · 68c2f356
      Linus Torvalds authored
      Pull f2fs fixes from Jaegeuk Kim:
       "Fix a performance regression and a bug"
      
      * tag 'for-f2fs-4.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs:
        f2fs: fix wrong error hanlder in f2fs_follow_link
        Revert "f2fs: enhance multi-threads performance"
    • Merge tag 'pinctrl-v4.1-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl · fbb7b92f
      Linus Torvalds authored
      Pull pin control fixes from Linus Walleij:
       "Here is a smallish set of pin control fixes for the v4.1 cycle,
        collected the last two weeks:
      
         - fix a real nasty legacy bug that has screwed up the protection of
           adding pinctrl maps dynamically.  Normally this didn't happen so
           much but Doug Anderson ran into it and fixed it, kudos!

         - minor driver fixes for Qualcomm spmi, mediatek and Marvell drivers"
      
      * tag 'pinctrl-v4.1-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
        pinctrl: Don't just pretend to protect pinctrl_maps, do it for real
        pinctrl: mediatek: mtk-common: initialize unmask
        pinctrl: qcom-spmi-mpp: Fix input value report
        pinctrl: qcom-spmi: Fix pin direction configuration
        pinctrl: mvebu: Fix mapping of pin 63 (gpo -> gpio)
    • Merge tag 'vfio-v4.1-rc3' of git://github.com/awilliam/linux-vfio · 7bbcd1b8
      Linus Torvalds authored
      Pull vfio fixes from Alex Williamson:
       "Fix some undesirable behavior with the vfio device request interface:
      
         - increase verbosity of device request channel (Alex Williamson)
      
         - fix runaway interruptible timeout (Alex Williamson)"
      
      * tag 'vfio-v4.1-rc3' of git://github.com/awilliam/linux-vfio:
        vfio: Fix runaway interruptible timeout
        vfio-pci: Log device requests more verbosely
    • Merge tag 'for-linus' of git://github.com/dledford/linux · 8cb7c15b
      Linus Torvalds authored
      Pull infiniband updates from Doug Ledford:
       "Minor updates for 4.1-rc
      
        Most of the changes are fairly small and well confined.  The iWARP
        address reporting changes are the only ones that are a medium size.  I
        had these queued up prior to rc1, but due to the shuffle in
        maintainers, they did not get submitted when I expected.  My apologies
        for that.  I feel comfortable with them however due to the testing
        they've received, so I left them in this submission"
      
      * tag 'for-linus' of git://github.com/dledford/linux:
        MAINTAINERS: Update InfiniBand subsystem maintainer
        MAINTAINERS: add include/rdma/ to InfiniBand subsystem
        IPoIB/CM: Fix indentation level
        iw_cxgb4: Remove negative advice dmesg warnings
        IB/core: Fix unaligned accesses
        IB/core: change rdma_gid2ip into void function as it always return zero
        IB/qib: use arch_phys_wc_add()
        IB/qib: add acounting for MTRR
        IB/core: dma unmap optimizations
        IB/core: dma map/unmap locking optimizations
        RDMA/cxgb4: Report the actual address of the remote connecting peer
        RDMA/nes: Report the actual address of the remote connecting peer
        RDMA/core: Enable the iWarp Port Mapper to provide the actual address of the connecting peer to its clients
        iw_cxgb4: enforce qp/cq id requirements
        iw_cxgb4: use BAR2 GTS register for T5 kernel mode CQs
        iw_cxgb4: 32b platform fixes
        iw_cxgb4: Cleanup register defines/MACROS
        RDMA/CMA: Canonize IPv4 on IPV6 sockets properly
  3. 06 May, 2015 25 commits