21 Nov, 2014 (7 commits)
  08 Nov, 2014 (1 commit)
  07 Nov, 2014 (32 commits)
    • regmap: debugfs: fix possible NULL pointer dereference · 0ecf1af8
      Xiubo Li authored
If 'map->dev' is NULL, dev_name() will dereference a NULL pointer.
      So before calling dev_name() we need to check the map->dev pointer.
      
      We should also make sure that the 'name' pointer passed to
      debugfs_create_dir() is not NULL. So use a default "dummy" debugfs
      name when the 'name' pointer and 'map->dev' are both NULL.
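
      Illustratively, the fix amounts to something like the following
      sketch, modeled on regmap's debugfs setup (treat names such as
      'regmap_debugfs_root' as approximate):

      	const char *devname = "dummy";

      	if (map->dev)
      		devname = dev_name(map->dev);	/* safe: map->dev checked */

      	if (name) {
      		map->debugfs_name = kasprintf(GFP_KERNEL, "%s-%s",
      					      devname, name);
      		name = map->debugfs_name;
      	} else {
      		name = devname;			/* fall back to "dummy" */
      	}

      	map->debugfs = debugfs_create_dir(name, regmap_debugfs_root);
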
Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Cc: stable@vger.kernel.org
      
      (cherry picked from commit 2c98e0c1)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      0ecf1af8
    • powerpc: Add smp_mb() to arch_spin_is_locked() · c0841c82
      Michael Ellerman authored
      The kernel defines the function spin_is_locked(), which can be used to
      check if a spinlock is currently locked.
      
      Using spin_is_locked() on a lock you don't hold is obviously racy. That
      is, even though you may observe that the lock is unlocked, it may become
      locked at any time.
      
      There is (at least) one exception to that, which is if two locks are
      used as a pair, and the holder of each checks the status of the other
      before doing any update.
      
      Assuming *A and *B are two locks, and *COUNTER is a shared non-atomic
      value:
      
      The first CPU does:
      
      	spin_lock(*A)
      
      	if spin_is_locked(*B)
      		# nothing
      	else
      		smp_mb()
      		LOAD r = *COUNTER
      		r++
      		STORE *COUNTER = r
      
      	spin_unlock(*A)
      
      And the second CPU does:
      
      	spin_lock(*B)
      
      	if spin_is_locked(*A)
      		# nothing
      	else
      		smp_mb()
      		LOAD r = *COUNTER
      		r++
      		STORE *COUNTER = r
      
      	spin_unlock(*B)
      
      Although this is a strange locking construct, it should work.
      
      It seems to be understood, but not documented, that spin_is_locked() is
      not a memory barrier, so in the examples above and below the caller
      inserts its own memory barrier before acting on the result of
      spin_is_locked().
      
      For now we assume spin_is_locked() is implemented as below, and we break
      it out in our examples:
      
      	bool spin_is_locked(*LOCK) {
      		LOAD l = *LOCK
      		return l.locked
      	}
      
      Our intuition is that there should be no problem even if the two code
      sequences run simultaneously such as:
      
      	CPU 0			CPU 1
      	==================================================
      	spin_lock(*A)		spin_lock(*B)
      	LOAD b = *B		LOAD a = *A
      	if b.locked # true	if a.locked # true
      	# nothing		# nothing
      	spin_unlock(*A)		spin_unlock(*B)
      
      If one CPU gets the lock before the other then it will do the update and
      the other CPU will back off:
      
      	CPU 0			CPU 1
      	==================================================
      	spin_lock(*A)
      	LOAD b = *B
      				spin_lock(*B)
      	if b.locked # false	LOAD a = *A
      	else			if a.locked # true
      	smp_mb()		# nothing
      	LOAD r1 = *COUNTER	spin_unlock(*B)
      	r1++
      	STORE *COUNTER = r1
      	spin_unlock(*A)
      
      However in reality spin_lock() itself is not indivisible. On powerpc we
      implement it as a load-and-reserve and store-conditional.
      
Ignoring the retry logic for the lost reservation case, it boils down to:

      	spin_lock(*LOCK) {
      		LOAD l = *LOCK
      		l.locked = true
      		STORE *LOCK = l
      		ACQUIRE_BARRIER
      	}
      
      The ACQUIRE_BARRIER is required to give spin_lock() ACQUIRE semantics as
      defined in memory-barriers.txt:
      
           This acts as a one-way permeable barrier.  It guarantees that all
           memory operations after the ACQUIRE operation will appear to happen
           after the ACQUIRE operation with respect to the other components of
           the system.
      
On modern powerpc systems we use lwsync for ACQUIRE_BARRIER. lwsync is
      also known as "lightweight sync", or "sync 1".
      
      As described in Power ISA v2.07 section B.2.1.1, in this scenario the
      lwsync is not the barrier itself. It instead causes the LOAD of *LOCK to
      act as the barrier, preventing any loads or stores in the locked region
      from occurring prior to the load of *LOCK.
      
Whether this behaviour is in accordance with the definition of ACQUIRE
      semantics in memory-barriers.txt is open to discussion; we may switch to
      a different barrier in the future.
      
      What this means in practice is that the following can occur:
      
      	CPU 0			CPU 1
      	==================================================
      	LOAD a = *A 		LOAD b = *B
      	a.locked = true		b.locked = true
      	LOAD b = *B		LOAD a = *A
      	STORE *A = a		STORE *B = b
      	if b.locked # false	if a.locked # false
      	else			else
      	smp_mb()		smp_mb()
      	LOAD r1 = *COUNTER	LOAD r2 = *COUNTER
      	r1++			r2++
      	STORE *COUNTER = r1
      				STORE *COUNTER = r2	# Lost update
      	spin_unlock(*A)		spin_unlock(*B)
      
      That is, the load of *B can occur prior to the store that makes *A
      visibly locked. And similarly for CPU 1. The result is both CPUs hold
      their lock and believe the other lock is unlocked.
      
      The easiest fix for this is to add a full memory barrier to the start of
      spin_is_locked(), so adding to our previous definition would give us:
      
      	bool spin_is_locked(*LOCK) {
      		smp_mb()
      		LOAD l = *LOCK
      		return l.locked
      	}
      
      The new barrier orders the store to the lock we are locking vs the load
      of the other lock:
      
      	CPU 0			CPU 1
      	==================================================
      	LOAD a = *A 		LOAD b = *B
      	a.locked = true		b.locked = true
      	STORE *A = a		STORE *B = b
      	smp_mb()		smp_mb()
      	LOAD b = *B		LOAD a = *A
      	if b.locked # true	if a.locked # true
      	# nothing		# nothing
      	spin_unlock(*A)		spin_unlock(*B)
      
Although the above example is theoretical, there is code similar to this
      example in sem_lock() in ipc/sem.c. This commit, together with the next
      one, appears to fix crashes we are seeing in that code, where we believe
      this race happens in practice.
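
      On powerpc the fix reduces to a sketch like this (helper names follow
      the upstream patch as best understood):

      	static inline int arch_spin_is_locked(arch_spinlock_t *lock)
      	{
      		smp_mb();	/* order our lock's store vs. the load below */
      		return !arch_spin_value_unlocked(*lock);
      	}
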
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      
      (cherry picked from commit 51d7d520)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      c0841c82
    • perf/x86/intel: Use proper dTLB-load-misses event on IvyBridge · 7ab6e2b0
      Vince Weaver authored
      This was discussed back in February:
      
      	https://lkml.org/lkml/2014/2/18/956
      
      But I never saw a patch come out of it.
      
On IvyBridge we share the SandyBridge cache event tables, but the
      dTLB-load-misses event is not compatible.  Patch it up after the fact
      to use the proper DTLB_LOAD_MISSES.DEMAND_LD_MISS_CAUSES_A_WALK event.
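
      The change is essentially a one-line override after the SandyBridge
      tables are copied; a sketch (0x8108 is believed to encode umask 0x81,
      event 0x08, i.e. DTLB_LOAD_MISSES.DEMAND_LD_MISS_CAUSES_A_WALK):

      	/* IvyBridge reuses the SNB cache event tables ... */
      	memcpy(hw_cache_event_ids, snb_hw_cache_event_ids,
      	       sizeof(hw_cache_event_ids));

      	/* ... then patches the one incompatible entry. */
      	hw_cache_event_ids[C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = 0x8108;
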
Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1407141528200.17214@vincent-weaver-1.umelst.maine.edu
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      
      (cherry picked from commit 1996388e)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      7ab6e2b0
    • regmap: fix possible ZERO_SIZE_PTR pointer dereferencing error. · 3abc920d
      Xiubo Li authored
We cannot be sure that 'val_count' is always non-zero here, and if
      it is zero, kmemdup() will return ZERO_SIZE_PTR, which equals
      ((void *)16).
      
      So this patch fixes that by doing a zero check before calling
      kmemdup().
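
      A sketch of the guard in the bulk-write path (variable names
      approximate):

      	if (!val_count)
      		return -EINVAL;

      	wval = kmemdup(val, val_count * val_bytes, GFP_KERNEL);
      	if (!wval)
      		return -ENOMEM;
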
Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Cc: stable@vger.kernel.org
      
      (cherry picked from commit d6b41cb0)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      3abc920d
    • regmap: debugfs: fix possible NULL pointer dereference · cc0ef317
      Xiubo Li authored
If 'map->dev' is NULL, dev_name() will dereference a NULL pointer.
      So before calling dev_name() we need to check the map->dev pointer.
      
      We should also make sure that the 'name' pointer passed to
      debugfs_create_dir() is not NULL. So use a default "dummy" debugfs
      name when the 'name' pointer and 'map->dev' are both NULL.
Signed-off-by: Xiubo Li <Li.Xiubo@freescale.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>
      Cc: stable@vger.kernel.org
      
      (cherry picked from commit 2c98e0c1)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      cc0ef317
    • sparc64: Increase MAX_PHYS_ADDRESS_BITS to 53. · de321568
      David S. Miller authored
      Make sure, at compile time, that the kernel can properly support
      whatever MAX_PHYS_ADDRESS_BITS is defined to.
      
      On M7 chips, use a max_phys_bits value of 49.
      
      Based upon a patch by Bob Picco.
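
      The build-time assertion is along these lines (a hypothetical sketch;
      the real check ties MAX_PHYS_ADDRESS_BITS to the page table layout
      constants):

      	#if MAX_PHYS_ADDRESS_BITS > (PGDIR_SHIFT + PGDIR_BITS)
      	#error MAX_PHYS_ADDRESS_BITS exceeds what the page tables can map
      	#endif
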
Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      
      (cherry picked from commit 7c0fa0f2)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      de321568
    • sparc64: Define VA hole at run time, rather than at compile time. · e7b37dc0
      David S. Miller authored
Now that we use 4-level page tables, we can provide up to 53 bits of
      virtual address space to the user.
      
      Adjust the VA hole based upon the capabilities of the cpu type probed.
Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      
      (cherry picked from commit 4397bed0)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      e7b37dc0
    • sparc64: Switch to 4-level page tables. · 35600d44
      David S. Miller authored
This has become necessary with chips that support more than 43 bits
      of physical addressing.
      
      Based almost entirely upon a patch by Bob Picco.
Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      
      (cherry picked from commit ac55c768)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      35600d44
    • sparc64: Add basic validations to {pud,pmd}_bad(). · 1edd4d01
      David S. Miller authored
Instead of returning false, we should at least check the most basic
      things; otherwise page table corruptions will be very difficult to
      debug.
      
      PMD and PTE tables are of size PAGE_SIZE, so none of the sub-PAGE_SIZE
      bits should be set.
      
      We also complement this with a check that the physical address the
      pud/pmd points to is valid memory.
      
PowerPC was used as a guide while implementing this.
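
      The resulting checks look roughly like this sketch, where
      __kern_addr_valid() is the helper validating the pointed-to physical
      address:

      	#define pmd_bad(pmd)	((pmd_val(pmd) & ~PAGE_MASK) || \
      				 !__kern_addr_valid(pmd_val(pmd)))

      	#define pud_bad(pud)	((pud_val(pud) & ~PAGE_MASK) || \
      				 !__kern_addr_valid(pud_val(pud)))
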
Signed-off-by: David S. Miller <davem@davemloft.net>
      
      (cherry picked from commit 26cf4325)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      1edd4d01
    • sparc64: Don't use _PAGE_PRESENT in pte_modify() mask. · ea8bae4e
      David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
      
      (cherry picked from commit eaf85da8)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      ea8bae4e
    • sparc64: Fix bugs in get_user_pages_fast() wrt. THP. · 825e066e
      David S. Miller authored
The large PMD path needs to check _PAGE_VALID, not _PAGE_PRESENT, to
      decide if it needs to bail and return 0.
      
      pmd_large() should therefore just check _PAGE_PMD_HUGE.
      
      Calls to gup_huge_pmd() are guarded with a check of pmd_large(), so we
      just need to add a valid bit check.
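
      A sketch of both pieces (bit names per the commit text):

      	/* pmd_large() keys off the huge-PMD bit only ... */
      	#define pmd_large(pmd)	(pmd_val(pmd) & _PAGE_PMD_HUGE)

      	/* ... and the huge path of get_user_pages_fast() gains the
      	 * valid-bit check before doing anything else. */
      	if (!(pmd_val(pmd) & _PAGE_VALID))
      		return 0;
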
Signed-off-by: David S. Miller <davem@davemloft.net>
      
      (cherry picked from commit 04df419d)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      825e066e
    • sparc64: Fix executable bit testing in set_pmd_at() paths. · 795339db
      David S. Miller authored
      This code was mistakenly using the exec bit from the PMD in all
      cases, even when the PMD isn't a huge PMD.
      
      If it's not a huge PMD, test the exec bit in the individual ptes down
      in tlb_batch_pmd_scan().
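
      Conceptually (a sketch, not the literal diff):

      	if (pmd_val(pmd) & _PAGE_PMD_HUGE)
      		exec = pte_exec(__pte(pmd_val(pmd)));	/* huge PMDs are PTE-encoded */
      	else
      		exec = pte_exec(*pte);	/* per-PTE, in tlb_batch_pmd_scan() */
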
Signed-off-by: David S. Miller <davem@davemloft.net>
      
      (cherry picked from commit 5b1e94fa)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      795339db
    • sparc64: Encode huge PMDs using PTE encoding. · c328731d
      David S. Miller authored
Now that we have 64 bits for PMDs we can stop using special encodings
      for the huge PMD values, and just put real PTEs in there.
      
      We allocate a _PAGE_PMD_HUGE bit to distinguish between plain PMDs and
      huge ones.  It is the same for both 4U and 4V PTE layouts.
      
      We also use _PAGE_SPECIAL to indicate the splitting state, since a
      huge PMD cannot also be special.
      
      All of the PMD --> PTE translation code disappears, and most of the
      huge PMD bit modifications and tests just degenerate into the PTE
      operations.  In particular USER_PGTABLE_CHECK_PMD_HUGE becomes
      trivial.
      
      As a side effect, normal PMDs don't shift the physical address around.
      This also speeds up the page table walks in the TLB miss paths since
      they don't have to do the shifts any more.
      
      Another non-trivial aspect is that pte_modify() has to be changed
      to preserve the _PAGE_PMD_HUGE bits as well as the page size field
      of the pte.
Signed-off-by: David S. Miller <davem@davemloft.net>
      
      (cherry picked from commit a7b9403f)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      c328731d
    • sparc64: Move to 64-bit PGDs and PMDs. · 54d34fdc
      David S. Miller authored
      To make the page tables compact, we were using 32-bit PGDs and PMDs.
      We only had to support <= 43 bits of physical addresses so this was
      quite feasible.
      
      In order to support larger physical addresses we have to move to
      64-bit PGDs and PMDs.
      
Most of the changes are straightforward (a sketch of the type change
      follows below):
      
      1) {pgd,pmd}_t --> unsigned long
      
      2) Anything that tries to use plain "unsigned int" types with pgd/pmd
         values needs to be adjusted.  In particular things like "0U" become
         "0UL".
      
      3) {PGDIR,PMD}_BITS decrease by one.
      
      4) In the assembler page table walkers, use "ldxa" instead of "lduwa"
         and adjust the low bit masks to clear out the low 3 bits instead of
         just the low 2 bits during pgd/pmd address formation.
      
      Also, use PTRS_PER_PGD and PTRS_PER_PMD in the sizing of the
      swapper_{pg_dir,low_pmd_dir} arrays.
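
      For illustration, the type change from item 1 is roughly (sketch):

      	typedef struct { unsigned long pmd; } pmd_t;	/* was: unsigned int */
      	typedef struct { unsigned long pgd; } pgd_t;	/* was: unsigned int */
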
      
      This patch does not try to take advantage of having 64-bits in the
      PMDs to simplify the hugepage code, that will come in a subsequent
      change.
Signed-off-by: David S. Miller <davem@davemloft.net>
      
      (cherry picked from commit 2b77933c)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      54d34fdc
    • sparc64: Move from 4MB to 8MB huge pages. · 92ce8d3d
      David S. Miller authored
The impetus for this is that we would like to move to 64-bit PMDs and
      PGDs, but that would result in only supporting a 42-bit address space
      with the current page table layout.  It'd be nice to support at least
      43 bits.
      
      The reason we'd end up with only 42 bits after making PMDs and PGDs
      64-bit is that we only use half-page-sized PTE tables in order to make
      PMDs line up to 4MB, the hardware huge page size we use.
      
      So what we do here is we make huge pages 8MB, and fabricate them using
      4MB hw TLB entries.
      
      Facilitate this by providing a "REAL_HPAGE_SHIFT" which is used in
      places that really need to operate on hardware 4MB pages.
      
      Use full pages (512 entries) for PTE tables, and adjust PMD_SHIFT,
      PGD_SHIFT, and the build time CPP test as needed.  Use a CPP test to
      make sure REAL_HPAGE_SHIFT and the _PAGE_SZHUGE_* we use match up.
      
This makes the pgtable cache completely unused, so remove the code
      managing it and the state used in mm_context_t.  Now we have fewer
      spinlocks taken in the page table allocation path.
      
The technique we use to fabricate the 8MB pages is to transfer bit 22
      from the missing virtual address into the PTE's physical address field.
      That takes care of the transparent huge pages case.
      
      For hugetlb, we fill things in at the PTE level and that code already
      puts the sub huge page physical bits into the PTEs, based upon the
      offset, so there is nothing special we need to do.  It all just works
      out.
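
      Illustratively, in C (the real transfer happens in the assembler
      TSB/TLB miss path):

      	/* Fabricate an 8MB page from two 4MB hw TLB entries by copying
      	 * bit 22 of the missing virtual address into the PTE's physical
      	 * address field. */
      	pte_val(pte) |= (vaddr & (1UL << 22));
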
      
So, a small amount of complexity in the THP case, but this code is
      about to get much simpler when we move to 64-bit PMDs, as we can move
      away from the fancy 32-bit huge PMD encoding and just put a real PTE
      value in there.
      
      With bug fixes and help from Bob Picco.
Signed-off-by: David S. Miller <davem@davemloft.net>
      
      (cherry picked from commit 37b3a8ff)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      92ce8d3d
    • sparc64: Make PAGE_OFFSET variable. · 0c1369f1
      David S. Miller authored
      Choose PAGE_OFFSET dynamically based upon cpu type.
      
      Original UltraSPARC-I (spitfire) chips only supported a 44-bit
      virtual address space.
      
Newer chips (T4 and later) support 52-bit virtual addresses
      and up to 47 bits of physical memory space.
      
      Therefore we have to adjust PAGE_OFFSET dynamically based upon
      the capabilities of the chip.
      
Note that this change alone does not allow us to support > 43-bit
      physical memory; to do that we need to re-arrange our page table
      support.  The current encodings of the pmd_t and pgd_t pointers
      restrict us to "32 + 11" == 43 bits.
      
      This change can waste quite a bit of memory for the various tables.
      In particular, a future change should work to size and allocate
      kern_linear_bitmap[] and sparc64_valid_addr_bitmap[] dynamically.
      This isn't easy as we really cannot take a TLB miss when accessing
      kern_linear_bitmap[].  We'd have to lock it into the TLB or similar.
Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      
      (cherry picked from commit b2d43834)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      0c1369f1
    • sparc64: Fix inconsistent max-physical-address defines. · 62f69d39
      David S. Miller authored
Some parts of the code use '41', others use '42'; make them
      all use the same value.
Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      
      (cherry picked from commit f998c9c0)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      62f69d39
    • sparc64: Document the shift counts used to validate linear kernel addresses. · d6992c53
      David S. Miller authored
      This way we can see exactly what they are derived from, and in particular
      how they would change if we were to use a different PAGE_OFFSET value.
Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      
      (cherry picked from commit bb7b4353)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      d6992c53
    • sparc64: Define PAGE_OFFSET in terms of physical address bits. · 6904704d
      David S. Miller authored
This makes clearer the implications for a given chosen
      value.
      
      Based upon patches by Bob Picco.
Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      
      (cherry picked from commit e0a45e35)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      6904704d
    • sparc64: Use PAGE_OFFSET instead of a magic constant. · b18a9864
      David S. Miller authored
      This pertains to all of the computations of the kernel fast
      TLB miss xor values.
      
      Based upon a patch by Bob Picco.
Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      
      (cherry picked from commit 922631b9)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      b18a9864
    • sparc64: Clean up 64-bit mmap exclusion defines. · a867a924
      David S. Miller authored
      Older UltraSPARC chips had an address space hole due to the MMU only
      supporting 44-bit virtual addresses.
      
      The top end of this hole also has the same value as the current
      definition of PAGE_OFFSET, so this can be confusing.
      
      Consolidate the defines for the userspace mmap exclusion range into
      page_64.h and use them in sys_sparc_64.c and hugetlbpage.c
Signed-off-by: David S. Miller <davem@davemloft.net>
      Acked-by: Bob Picco <bob.picco@oracle.com>
      
      (cherry picked from commit c920745e)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      a867a924
    • fold __d_shrink() into its only remaining caller · eecd56cd
      Al Viro authored
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      
      (cherry picked from commit b61625d2)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      eecd56cd
    • uas: replace WARN_ON_ONCE() with lockdep_assert_held() · d73b515a
      Sanjeev Sharma authored
On some architectures spin_is_locked() always returns false in
      uniprocessor configurations, and therefore it is advisable to
      replace it with lockdep_assert_held().
Signed-off-by: Sanjeev Sharma <Sanjeev_Sharma@mentor.com>
      Acked-by: Hans de Goede <hdegoede@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      (cherry picked from commit ab945eff)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      d73b515a
    • storage: Add quirks for Castlewood and Double-H USB-SCSI converters · 066b5a4c
      Mark Knibbs authored
      Castlewood Systems supplied various models of USB-SCSI converter with their
      ORB external removable-media drive. The ORB Windows and Macintosh drivers
      support six USB IDs:
       084B:A001     [VID 084B is Castlewood Systems]
       04E6:0002 (*) ORB USB Smart Cable P/N 88205-001 (generic SCM ID)
       2027:A001     Double-H Technology DH-2000SC
       1822:0001 (*) Ariston iConnect/iSCSI
       07AF:0004 (*) Microtech XpressSCSI (25-pin)
       07AF:0005 (*) Microtech XpressSCSI (50-pin)
      
      *: quirk already in unusual-devs.h
      
      [Apparently the official VID for Double-H Technology is 0x07EB = 2027
      decimal. That's another hex/decimal mix-up with these SCM-based products
      (in addition to the Ariston and Entrega ones). Perhaps the USB-IF informed
      companies of their allocated VID in decimal, but they assumed it was hex?
      It seems all Entrega products used VID 0x1645, not just the USB-SCSI
      converter.]
      
      Double-H Technology Co., Ltd. produced a USB-SCSI converter, model
      DH-2000SC, which is probably the one supported by the ORB drivers. Perhaps
      the Castlewood-bundled product had a different label or PID though?
      Castlewood mentioned Conmate as being one type of USB-SCSI converter.
      Conmate and Double-H seem related somehow; both company addresses in the
      same road, and at one point the Conmate web site mentioned DH-2000H4,
      DH-200D4/DH-2000C4 as models of USB hub (DH short for Double-H presumably).
      Conmate did show a USB-SCSI converter model CM-660 on their web site at one
      point. My guess is that was identical to the DH-2000SC.
      
      Mention of the Double-H product:
        http://web.archive.org/web/20010221010141/http://www.doubleh.com.tw/dh-2000sc.htm
      The only picture I could find is at
        http://jp.acesuppliers.com/catalog/j64/component/page03.html
      The casing design looks the same as my ORB USB Smart Cable which has ID
      04E6:0002.
      
      Anyway, that's enough rambling. Here's the patch.
      
      storage: Add quirks for Castlewood and Double-H USB-SCSI converters
      
      Add quirks for two SCM-based USB-SCSI converters which were bundled with
      some Castlewood ORB removable drives. Without the quirk only the (single)
      drive with SCSI ID 0 can be accessed.
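
      The new entries follow the existing SCM/eUSCSI pattern in
      unusual_devs.h; roughly (device strings and bcdDevice ranges
      approximate):

      	UNUSUAL_DEV(0x084b, 0xa001, 0x0000, 0x9999,
      		"Castlewood Systems",
      		"USB to SCSI cable",
      		USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_euscsi_init,
      		US_FL_SCM_MULT_TARG),
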
Signed-off-by: Mark Knibbs <markk@clara.co.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      (cherry picked from commit 57cde01a)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      066b5a4c
    • storage: Add quirk for another SCM-based USB-SCSI converter · 41826ac9
      Mark Knibbs authored
      There is apparently another SCM USB-SCSI converter with ID 04E6:000F. It
      is listed along with 04E6:000B in the Windows INF file for the Startech
      ICUSBSCSI2 as "eUSB SCSI Adapter (Bus Powered)". The quirk allows
      devices with SCSI ID other than 0 to be accessed.
      
      Also make a couple of existing SCM product IDs lower case to be
      consistent with other entries.
Signed-off-by: Mark Knibbs <markk@clara.co.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      (cherry picked from commit 3512e7bf)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      41826ac9
    • USB: Add device quirk for ASUS T100 Base Station keyboard · d018b501
      Lu Baolu authored
This full-speed USB device generates spurious remote wakeup events
      as soon as the USB_DEVICE_REMOTE_WAKEUP feature is set. As a result,
      Linux can't enter system suspend and S0ix power saving modes once
      this keyboard is used.
      
      This patch introduces the USB_QUIRK_IGNORE_REMOTE_WAKEUP quirk.
      With this quirk set, the wakeup capability will be ignored during
      device configuration.
      
      This patch could be back-ported to kernels as old as 2.6.39.
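
      The quirks table entry would look roughly like this (sketch; the
      0b05:17e0 ID is an assumption taken from the upstream patch):

      	/* ASUS Base Station (T100) */
      	{ USB_DEVICE(0x0b05, 0x17e0), .driver_info =
      			USB_QUIRK_IGNORE_REMOTE_WAKEUP },
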
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
      Acked-by: Alan Stern <stern@rowland.harvard.edu>
      Cc: stable <stable@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      (cherry picked from commit ddbe1fca)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      d018b501
    • regulatory: fix misapplied alpha2 fix · 94286a69
      Eliad Peller authored
      Upstream commit a5fe8e76 (regulatory:
      add NUL to alpha2) contained a hunk that was supposed to be applied to
      struct ieee80211_reg_rule.  However in stable 3.12 (3.12.31 in
      particular), it ended up in struct regulatory_request. Fix that now.
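
      The relocated hunk is the one-liner below, which per the commit
      message belongs in struct ieee80211_reg_rule:

      	char alpha2[3];		/* was: char alpha2[2]; room for a NUL */
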
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: Eliad Peller <eliadx.peller@intel.com>
      Cc: Johannes Berg <johannes.berg@intel.com>
      (cherry picked from commit 84197d64)
      
      (cherry picked from commit HEAD)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      94286a69
    • dell-wmi: Fix access out of memory · 8fd856ad
      Pali Rohár authored
Without this patch, dell-wmi is trying to access elements of a
      dynamically allocated array without checking the array size. This can
      lead to memory corruption or a kernel panic. This patch adds the
      missing checks for array size.
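
      A hypothetical sketch of the kind of bounds checking involved (names
      invented for illustration; the real buffer layout is dell-wmi's):

      	u16 *entry = buffer;

      	while (entry < buffer + buffer_size) {
      		u16 len = entry[0];	/* length of this event entry */

      		if (len == 0 || entry + 1 + len > buffer + buffer_size)
      			break;			/* truncated or corrupt: stop */
      		process_entry(entry + 1, len);	/* hypothetical helper */
      		entry += 1 + len;
      	}
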
Signed-off-by: Pali Rohár <pali.rohar@gmail.com>
      Signed-off-by: Darren Hart <dvhart@linux.intel.com>
      
      (cherry picked from commit a666b6ff)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      8fd856ad
    • HID: logitech-dj: prevent false errors to be shown · f137971a
      Benjamin Tissoires authored
      Commit "HID: logitech: perform bounds checking on device_id early
      enough" unfortunately leaks some errors to dmesg which are not real
      ones:
      - if the report is not a DJ one, then there is not point in checking
        the device_id
      - the receiver (index 0) can also receive some notifications which
        can be safely ignored given the current implementation
      
      Move out the test regarding the report_id and also discards
      printing errors when the receiver got notified.
      
      Fixes: ad3e14d7
      
      Cc: stable@vger.kernel.org
Reported-and-tested-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      
      (cherry picked from commit 5abfe85c)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      f137971a
    • HID: logitech: perform bounds checking on device_id early enough · 4decac82
      Jiri Kosina authored
device_index is a char type and the size of paired_dj_devices is 7
      elements, therefore proper bounds checking has to be applied to
      device_index before it is used.
      
      We are currently performing the bounds checking in
      logi_dj_recv_add_djhid_device(), which is too late, as a malicious
      device could send REPORT_TYPE_NOTIF_DEVICE_UNPAIRED early enough and
      trigger the problem in one of the report forwarding functions called
      from logi_dj_raw_event().
      
      Fix this by performing the check at the earliest possible occasion,
      in logi_dj_raw_event().
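
      The early check amounts to something like (sketch; constant names per
      the driver as best understood):

      	if (dj_report->device_index < DJ_DEVICE_INDEX_MIN ||
      	    dj_report->device_index > DJ_DEVICE_INDEX_MAX) {
      		dev_err(&hdev->dev, "%s: invalid device index: %d\n",
      			__func__, dj_report->device_index);
      		return false;
      	}
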
      
      Cc: stable@vger.kernel.org
Reported-by: Ben Hawkes <hawkes@google.com>
      Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      
      (cherry picked from commit ad3e14d7)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      4decac82
    • sparc64: Fix register corruption in top-most kernel stack frame during boot. · a32886ba
      David S. Miller authored
Meelis Roos reported that kernels built with gcc-4.9 do not boot; we
      eventually narrowed this down to only impacting machines using
      UltraSPARC-III and derivative cpus.
      
      The crash happens right when the first user process is spawned:
      
      [   54.451346] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004
      [   54.451346]
      [   54.571516] CPU: 1 PID: 1 Comm: init Not tainted 3.16.0-rc2-00211-gd7933ab7 #96
      [   54.666431] Call Trace:
      [   54.698453]  [0000000000762f8c] panic+0xb0/0x224
      [   54.759071]  [000000000045cf68] do_exit+0x948/0x960
      [   54.823123]  [000000000042cbc0] fault_in_user_windows+0xe0/0x100
      [   54.902036]  [0000000000404ad0] __handle_user_windows+0x0/0x10
      [   54.978662] Press Stop-A (L1-A) to return to the boot prom
      [   55.050713] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000004
      
      Further investigation showed that compiling only per_cpu_patch() with
      an older compiler fixes the boot.
      
      Detailed analysis showed that the function is not being miscompiled by
      gcc-4.9, but it is using a different register allocation ordering.
      
      With the gcc-4.9 compiled function, something during the code patching
      causes some of the %i* input registers to get corrupted.  Perhaps
      we have a TLB miss path into the firmware that is deep enough to
      cause a register window spill and subsequent restore when we get
      back from the TLB miss trap.
      
      Let's plug this up by doing two things:
      
      1) Stop using the firmware stack for client interface calls into
         the firmware.  Just use the kernel's stack.
      
      2) As soon as we can, call into a new function "start_early_boot()"
         to put a one-register-window buffer between the firmware's
         deepest stack frame and the top-most initial kernel one.
Reported-by: Meelis Roos <mroos@linux.ee>
      Tested-by: Meelis Roos <mroos@linux.ee>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      
      (cherry picked from commit ef3e035c)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      a32886ba
    • sparc64: sparse irq · 211165d6
      bob picco authored
This patch attempts to do a few things. The highlights are: 1) enable
      SPARSE_IRQ unconditionally, 2) kill off !SPARSE_IRQ code, 3) allocate
      ivector_table at boot time, and 4) default to the cookie-only VIRQ
      mechanism for supported firmware. The first firmware with cookie-only
      support for me appears on T5. You can optionally force the HV firmware
      into non-cookie-only mode, which is the sysino support.
      
      The sysino is a deprecated HV mechanism according to the most recent
      SPARC Virtual Machine Specification. HV_GRP_INTR is what controls the
      cookie/sysino firmware versioning.
      
      The history of this interface is:
      
      1) Major version 1.0 only supported sysino based interrupt interfaces.
      
2) Major version 2.0 added cookie-based VIRQs, however due to the fact
         that OSs were using the VIRQs without negotiating major version
         2.0 (Linux and Solaris are both guilty), the VIRQ calls were
         allowed even with major version 1.0
      
         To complicate things even further, the VIRQ interfaces were only
         actually hooked up in the hypervisor for LDC interrupt sources.
         VIRQ calls on other device types would result in HV_EINVAL errors.
      
         So effectively, major version 2.0 is unusable.
      
      3) Major version 3.0 was created to signal use of VIRQs and the fact
         that the hypervisor has these calls hooked up for all interrupt
         sources, not just those for LDC devices.
      
A new boot option is provided should cookie-only HV support have
      issues: hvirq. This is the version for HV_GRP_INTR, which is related
      to HV API versioning. The code attempts major=3 first by default; the
      option can be used to override this default.
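
      For example, assuming the option simply takes the desired HV_GRP_INTR
      major number, forcing the older sysino-based interface would mean
      booting with:

      	hvirq=1
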
      
I've tested with SPARSE_IRQ on T5-8, M7-4, T4-X and Jalapeño.
Signed-off-by: Bob Picco <bob.picco@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      
      (cherry picked from commit ee6a9333)
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      211165d6