1. 10 Aug, 2015 7 commits
    • ARM: pxa: fix dm9000 platform data regression · 7185dab0
      Robert Jarzmik authored
      commit a927ef89 upstream.
      
      Since the dm9000 driver added support for a vcc regulator, platform
      data based boards have had their ethernet broken, because claiming the
      regulator returns -EPROBE_DEFER and prevents the dm9000 from loading.
      
      This patch fixes this for all pxa boards using the dm9000, by calling
      regulator_has_full_constraints(), so that the regulator core hands out
      a dummy supply instead of deferring the probe.
      
      This was discovered and tested on the cm-x300 board.
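      
      For reference, a minimal board-side sketch of the idea; the board and
      device names below are made up for illustration, the patch itself only
      adds the regulator_has_full_constraints() calls:
      
        /* Hypothetical board file sketch: declare that all regulator
         * constraints are known, so a missing vcc supply resolves to a
         * dummy regulator instead of -EPROBE_DEFER.
         */
        #include <linux/platform_device.h>
        #include <linux/regulator/machine.h>
      
        static struct platform_device example_dm9000_device = {
                .name = "dm9000",
                .id   = -1,
                /* resources and dm9000_plat_data omitted for brevity */
        };
      
        static void __init example_board_init(void)
        {
                regulator_has_full_constraints();
                platform_device_register(&example_dm9000_device);
        }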
      
      Fixes: 7994fe55 ("dm9000: Add regulator and reset support to dm9000")
      Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Acked-by: Igor Grinberg <grinberg@compulab.co.il>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      7185dab0
    • parisc: mm: Fix a memory leak related to pmd not attached to the pgd · 0ab58712
      Christophe Jaillet authored
      commit 4c4ac9a4 upstream.
      
      Commit 0e0da48d ("parisc: mm: don't count preallocated pmds")
      introduced a memory leak.
      
      After this commit, the 'return' statement in pmd_free is executed in
      all cases, even for pmds that are not attached to the pgd. So
      'free_pages' is never called any more, leading to a memory leak.
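      
      A rough sketch of the intended control flow (simplified; the real
      parisc pmd_free and its PxD_FLAG_ATTACHED handling differ in detail):
      
        static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
        {
                /* The preallocated pmd attached to the pgd must not be freed. */
                if (pmd_flag(*pmd) & PxD_FLAG_ATTACHED)
                        return;
      
                /* Any other pmd really does need its pages released. */
                free_pages((unsigned long)pmd, PMD_ORDER);
        }
      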
      Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Mikulas Patocka <mpatocka@redhat.com>
      Acked-by: Helge Deller <deller@gmx.de>
      Signed-off-by: Helge Deller <deller@gmx.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      0ab58712
    • parisc: Fix some PTE/TLB race conditions and optimize __flush_tlb_range based on timing results · 78eb5efb
      John David Anglin authored
      commit 01ab6057 upstream.
      
      The increased use of pdtlb/pitlb instructions seemed to increase the
      frequency of random segmentation faults when building packages. Further, we
      had a number of cases where TLB inserts would repeatedly fail and all
      forward progress would stop. The Haskell ghc package caused a lot of
      trouble in this area. The final indication of a race in pte handling was
      this syslog entry on sibaris (C8000):
      
       swap_free: Unused swap offset entry 00000004
       BUG: Bad page map in process mysqld  pte:00000100 pmd:019bbec5
       addr:00000000ec464000 vm_flags:00100073 anon_vma:0000000221023828 mapping: (null) index:ec464
       CPU: 1 PID: 9176 Comm: mysqld Not tainted 4.0.0-2-parisc64-smp #1 Debian 4.0.5-1
       Backtrace:
        [<0000000040173eb0>] show_stack+0x20/0x38
        [<0000000040444424>] dump_stack+0x9c/0x110
        [<00000000402a0d38>] print_bad_pte+0x1a8/0x278
        [<00000000402a28b8>] unmap_single_vma+0x3d8/0x770
        [<00000000402a4090>] zap_page_range+0xf0/0x198
        [<00000000402ba2a4>] SyS_madvise+0x404/0x8c0
      
      Note that the pte value is 0 except for the accessed bit 0x100. This bit
      shouldn't be set without the present bit.
      
      It should be noted that the madvise system call is probably a trigger for many
      of the random segmentation faults.
      
      In looking at the kernel code, I found the following problems:
      
      1) The pte_clear define didn't take the TLB lock when clearing a pte.
      2) We didn't test the pte present bit inside the lock in the exception
      support code.
      3) The pte and tlb locks needed to be merged in order to ensure
      consistency between the page table and the TLB. This also has the
      effect of serializing TLB broadcasts on SMP systems.
      
      The attached change implements the above and a few other tweaks to try
      to improve performance. Based on the timing code, TLB purges are very
      slow (e.g., ~ 209 cycles per page on rp3440). Thus, I think it is
      beneficial to test the split_tlb variable to avoid duplicate purges.
      Probably, all PA 2.0 machines have combined TLBs.
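      
      Conceptually, the change makes each pte update and its TLB purge happen
      under one lock; a simplified sketch (the identifier names approximate
      the parisc code rather than quote it):
      
        static DEFINE_SPINLOCK(pa_tlb_lock);    /* shared pte/TLB lock */
      
        static inline void set_pte_and_purge(unsigned long addr,
                                             pte_t *ptep, pte_t pteval)
        {
                unsigned long flags;
      
                spin_lock_irqsave(&pa_tlb_lock, flags);
                *ptep = pteval;                  /* update the page table entry */
                pdtlb(addr);                     /* purge the data TLB entry */
                if (unlikely(split_tlb))         /* separate I and D TLBs? */
                        pitlb(addr);             /* then purge the I-TLB too */
                spin_unlock_irqrestore(&pa_tlb_lock, flags);
        }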
      
      I dropped using __flush_tlb_range in flush_tlb_mm as I realized all
      applications and most threads have a stack size that is too large to
      make this useful. I added some comments to this effect.
      
      Since implementing 1 through 3, I haven't had any random segmentation
      faults on mx3210 (rp3440) in about one week of building code and running
      as a Debian buildd.
      Signed-off-by: John David Anglin <dave.anglin@bell.net>
      Signed-off-by: Helge Deller <deller@gmx.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      78eb5efb
    • Revert "Input: synaptics - allocate 3 slots to keep stability in image sensors" · d474669e
      Dmitry Torokhov authored
      commit dbf3c370 upstream.
      
      This reverts commit 63c4fda3 as it
      causes issues with detecting 3-finger taps.
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=100481
      Acked-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
      d474669e
    • powerpc/powernv: Fix race in updating core_idle_state · 5f0f854a
      Shreyas B. Prabhu authored
      commit b32aadc1 upstream.
      
      core_idle_state is maintained for each core. Bits 0-7 track whether a
      thread in the core has entered fastsleep or winkle, and bit 8 is used
      as a lock bit.
      The lock bit is set in these two scenarios:
       - The thread is the first in its subcore to wake up from sleep/winkle.
       - The thread is the last in the core about to enter sleep/winkle.
      
      While the lock bit is set, if any other thread in the core wakes up, it
      loops until the lock bit is cleared before proceeding in the wakeup
      path. This helps prevent race conditions w.r.t. the fastsleep workaround and
      prevents threads from switching to process context before core/subcore
      resources are restored.
      
      But in the path to sleep/winkle entry, we currently don't check for the
      lock bit. This exposes us to the following race when running with
      subcores on:
      
      First thread in the subcore            Another thread in the same
      waking up                              core entering sleep/winkle
      
      lwarx   r15,0,r14
      ori     r15,r15,PNV_CORE_IDLE_LOCK_BIT
      stwcx.  r15,0,r14
      [Code to restore subcore state]
      
      						lwarx   r15,0,r14
      						[clear thread bit]
      						stwcx.  r15,0,r14
      
      andi.   r15,r15,PNV_CORE_IDLE_THREAD_BITS
      stw     r15,0(r14)
      
      Here, after the thread entering sleep clears its thread bit in
      core_idle_state, the value is overwritten by the thread waking up.
      In such cases, when the core enters fastsleep, the code mistakes an idle
      thread for a running one. Because of this, the first thread waking up
      from fastsleep, which is supposed to resync the timebase, skips it. So
      we can end up with a core having a stale timebase value.
      
      This patch fixes the above race by looping on the lock bit even while
      entering the idle states.
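      
      In C-like pseudocode (the real path is PowerPC assembly; the helper
      name is made up, and the constant values simply encode the bit layout
      described above):
      
        #define PNV_CORE_IDLE_LOCK_BIT     0x100   /* bit 8 */
        #define PNV_CORE_IDLE_THREAD_BITS  0x0ff   /* bits 0-7 */
      
        static void clear_thread_bit_for_sleep(unsigned long *core_idle_state,
                                               unsigned long thread_bit)
        {
                unsigned long old, new;
      
                for (;;) {
                        old = READ_ONCE(*core_idle_state);
                        if (old & PNV_CORE_IDLE_LOCK_BIT)
                                continue;       /* lock held: wait, don't store */
                        new = old & ~thread_bit;
                        if (cmpxchg(core_idle_state, old, new) == old)
                                break;          /* thread bit cleared atomically */
                }
        }
      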
      Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
      Fixes: 7b54e9f213f76 ("powernv/powerpc: Add winkle support for offline cpus")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      5f0f854a
    • cxl: Check if afu is not null in cxl_slbia · 4a0c377c
      Daniel Axtens authored
      commit 2c069a11 upstream.
      
      The pointer to an AFU in the adapter's list of AFUs can be null
      if we're in the process of removing AFUs. The afu_list_lock
      doesn't guard against this.
      
      Say we have 2 slices, and we're in the process of removing cxl.
       - We remove the AFUs in order (see cxl_remove). In cxl_remove_afu
         for AFU 0, we take the lock, set adapter->afu[0] = NULL, and
         release the lock.
       - Then we get an slbia. In cxl_slbia we take the lock, and set
         afu = adapter->afu[0], which is NULL.
       - Therefore our attempt to check afu->enabled will blow up.
      
      Therefore, check if afu is a null pointer before dereferencing it.
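      
      The guard amounts to a null test before the dereference; roughly
      (simplified from the driver's loop over slices, not the literal code):
      
        spin_lock(&adapter->afu_list_lock);
        for (slice = 0; slice < adapter->slices; slice++) {
                struct cxl_afu *afu = adapter->afu[slice];
      
                /* Skip slots already NULLed out by cxl_remove_afu(). */
                if (afu == NULL || !afu->enabled)
                        continue;
      
                /* ... invalidate contexts on this AFU ... */
        }
        spin_unlock(&adapter->afu_list_lock);
      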
      Signed-off-by: Daniel Axtens <dja@axtens.net>
      Acked-by: Michael Neuling <mikey@neuling.org>
      Acked-by: Ian Munsie <imunsie@au1.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      4a0c377c
    • cxl: Fix off by one error allowing subsequent mmap page to be accessed · bbf9f2c9
      Ian Munsie authored
      commit 10a5894f upstream.
      
      It was discovered that if a process mmapped its problem state area it
      was able to access one page more than expected, potentially allowing it
      to access the problem state area of an unrelated process.
      
      This was due to a simple off-by-one error in the mmap fault handler
      introduced in 0712dc7e ("cxl: Fix issues
      when unmapping contexts"), which is fixed in this patch.
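      
      The off-by-one boils down to a bounds check that must reject an offset
      equal to the area size; schematically (not the literal driver code):
      
        /* The last valid byte of the problem state area is ps_size - 1,
         * so a fault at offset ps_size must be refused.  Using '>' instead
         * of '>=' lets one extra page through.
         */
        if (offset >= ps_size)
                return VM_FAULT_SIGBUS;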
      
      Fixes: 0712dc7e ("cxl: Fix issues when unmapping contexts")
      Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      bbf9f2c9
  2. 03 Aug, 2015 33 commits