1. 27 Jun, 2016 1 commit
    • powerpc/tm: Always reclaim in start_thread() for exec() class syscalls · 8e96a87c
      Cyril Bur authored
      Userspace can quite legitimately perform an exec() syscall with a
      suspended transaction. exec() does not return to the old process; rather
      it loads a new one and starts that, so the expectation is that the new
      process does not start in a transaction. Currently exec() is not treated
      any differently from any other syscall, which creates problems.
      
      Firstly, it could allow a new process to start with a suspended
      transaction for a binary that no longer exists. This means that the
      checkpointed state won't be valid, and if the suspended transaction were
      ever to be resumed and subsequently aborted (exceedingly likely, as
      exec()ing will likely doom the transaction) the new process would jump
      to an invalid state.
      
      Secondly, the incorrect attempt to keep the transactional state while
      still zeroing state for the new process creates at least two TM Bad
      Things. The first triggers on the rfid returning to userspace, as
      start_thread() has given the new process a 'clean' MSR but the suspended
      bit is still set in the hardware MSR. The second TM Bad Thing triggers
      in __switch_to() as the processor is still transactionally suspended but
      __switch_to() wants to zero the TM SPRs for the new process.
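
      As the subject says, the fix is for start_thread() to reclaim any
      suspended transaction on the exec() path. A minimal sketch of that idea
      (not necessarily the exact hunk applied):

        #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
        	/* Sketch only: if we reach exec() with a suspended transaction,
        	 * reclaim it so the new program starts with no transactional
        	 * state. MSR_TM_SUSPENDED(), mfmsr() and tm_reclaim_current()
        	 * are existing powerpc helpers. */
        	if (MSR_TM_SUSPENDED(mfmsr()))
        		tm_reclaim_current(0);
        #endif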
      
      This is an example of the outcome of calling exec() with a suspended
      transaction. Note the first 700 is likely the first TM Bad Thing
      described earlier, only the kernel can't report it as we've already
      loaded userspace registers. c000000000009980 is the rfid in
      fast_exception_return().
      
        Bad kernel stack pointer 3fffcfa1a370 at c000000000009980
        Oops: Bad kernel stack pointer, sig: 6 [#1]
        CPU: 0 PID: 2006 Comm: tm-execed Not tainted
        NIP: c000000000009980 LR: 0000000000000000 CTR: 0000000000000000
        REGS: c00000003ffefd40 TRAP: 0700   Not tainted
        MSR: 8000000300201031 <SF,ME,IR,DR,LE,TM[SE]>  CR: 00000000  XER: 00000000
        CFAR: c0000000000098b4 SOFTE: 0
        PACATMSCRATCH: b00000010000d033
        GPR00: 0000000000000000 00003fffcfa1a370 0000000000000000 0000000000000000
        GPR04: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
        GPR08: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
        GPR12: 00003fff966611c0 0000000000000000 0000000000000000 0000000000000000
        NIP [c000000000009980] fast_exception_return+0xb0/0xb8
        LR [0000000000000000]           (null)
        Call Trace:
        Instruction dump:
        f84d0278 e9a100d8 7c7b03a6 e84101a0 7c4ff120 e8410170 7c5a03a6 e8010070
        e8410080 e8610088 e8810090 e8210078 <4c000024> 48000000 e8610178 88ed023b
      
        Kernel BUG at c000000000043e80 [verbose debug info unavailable]
        Unexpected TM Bad Thing exception at c000000000043e80 (msr 0x201033)
        Oops: Unrecoverable exception, sig: 6 [#2]
        CPU: 0 PID: 2006 Comm: tm-execed Tainted: G      D
        task: c0000000fbea6d80 ti: c00000003ffec000 task.ti: c0000000fb7ec000
        NIP: c000000000043e80 LR: c000000000015a24 CTR: 0000000000000000
        REGS: c00000003ffef7e0 TRAP: 0700   Tainted: G      D
        MSR: 8000000300201033 <SF,ME,IR,DR,RI,LE,TM[SE]>  CR: 28002828  XER: 00000000
        CFAR: c000000000015a20 SOFTE: 0
        PACATMSCRATCH: b00000010000d033
        GPR00: 0000000000000000 c00000003ffefa60 c000000000db5500 c0000000fbead000
        GPR04: 8000000300001033 2222222222222222 2222222222222222 00000000ff160000
        GPR08: 0000000000000000 800000010000d033 c0000000fb7e3ea0 c00000000fe00004
        GPR12: 0000000000002200 c00000000fe00000 0000000000000000 0000000000000000
        GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
        GPR20: 0000000000000000 0000000000000000 c0000000fbea7410 00000000ff160000
        GPR24: c0000000ffe1f600 c0000000fbea8700 c0000000fbea8700 c0000000fbead000
        GPR28: c000000000e20198 c0000000fbea6d80 c0000000fbeab680 c0000000fbea6d80
        NIP [c000000000043e80] tm_restore_sprs+0xc/0x1c
        LR [c000000000015a24] __switch_to+0x1f4/0x420
        Call Trace:
        Instruction dump:
        7c800164 4e800020 7c0022a6 f80304a8 7c0222a6 f80304b0 7c0122a6 f80304b8
        4e800020 e80304a8 7c0023a6 e80304b0 <7c0223a6> e80304b8 7c0123a6 4e800020
      
      This fixes CVE-2016-5828.
      
      Fixes: bc2a9408 ("powerpc: Hook in new transactional memory code")
      Cc: stable@vger.kernel.org # v3.9+
      Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 23 Jun, 2016 1 commit
  3. 22 Jun, 2016 1 commit
    • powerpc: Fix faults caused by radix patching of SLB miss handler · 6e914ee6
      Michael Ellerman authored
      As part of the Radix MMU support we added some feature sections in the
      SLB miss handler. These are intended to catch the case that we
      incorrectly take an SLB miss when Radix is enabled, and instead of
      crashing weirdly they bail out to a well defined exit path and trigger
      an oops.
      
      However the way they were written meant the bailout case was enabled by
      default until we did CPU feature patching.
      
      On powermacs the early debug prints in setup_system() can cause an SLB
      miss, which happens before code patching, and so the SLB miss handler
      would incorrectly bail out and crash during boot.
      
      Fix it by inverting the sense of the feature section, so that the code
      which is in place at boot is correct for the hash case. Once we
      determine we are using Radix - which will never happen on a powermac -
      only then do we patch in the bailout case which unconditionally jumps.
      
      Fixes: caca285e ("powerpc/mm/radix: Use STD_MMU_64 to properly isolate hash related code")
      Reported-by: Denis Kirjanov <kda@linux-powerpc.org>
      Tested-by: Denis Kirjanov <kda@linux-powerpc.org>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  4. 17 Jun, 2016 3 commits
  5. 14 Jun, 2016 1 commit
  6. 10 Jun, 2016 3 commits
  7. 08 Jun, 2016 4 commits
  8. 06 Jun, 2016 2 commits
  9. 01 Jun, 2016 4 commits
    • powerpc/pseries: Add POWER8NVL support to ibm,client-architecture-support call · 7cc85103
      Thomas Huth authored
      If we do not provide the PVR for POWER8NVL, a guest on this system
      currently ends up in PowerISA 2.06 compatibility mode on KVM, since QEMU
      does not provide a generic PowerISA 2.07 mode yet. So some new
      instructions from POWER8 (like "mtvsrd") get disabled for the guest,
      resulting in crashes when using code compiled explicitly for
      POWER8 (e.g. with the "-mcpu=power8" option of GCC).
      
      Fixes: ddee09c0 ("powerpc: Add PVR for POWER8NVL processor")
      Cc: stable@vger.kernel.org # v4.0+
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/radix: Add missing tlb flush · 157d4d06
      Aneesh Kumar K.V authored
      This should not have any impact on hash, because hash does a tlb
      invalidate with every pte update and we don't implement the
      flush_tlb_* functions for hash. With radix we must make an explicit
      call to flush the tlb outside the pte update.
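
      A hedged illustration of that pattern, using the generic MM helpers
      rather than the exact call sites touched here:

        /* Illustration only: with radix, the PTE change must be followed by
         * an explicit TLB flush; hash invalidates as part of its own update. */
        if (ptep_set_access_flags(vma, addr, ptep, entry, dirty))
        	flush_tlb_page(vma, addr);
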
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/hash: Fix the reference bit update when handling hash fault · dc47c0c1
      Aneesh Kumar K.V authored
      When we converted the asm routines to C functions, we missed updating
      HPTE_R_R based on _PAGE_ACCESSED. The asm code used to copy over the
      lower bits from the pte via:
      
      andi.	r3,r30,0x1fe		/* Get basic set of flags */
      
      We also update the code such that we no longer always set the Change
      bit ('C' bit); always setting it was added by commit c5cf0e30 ("powerpc:
      Fix buglet with MMU hash management").
      
      With hash64, we need to make sure that hardware doesn't do a pte update
      directly. This is because we do end up with entries in the TLB with no
      hash page table entry. This happens because when we find a hash bucket
      full, we "evict" a more or less random entry from it. When we do that we
      don't invalidate the TLB (hpte_remove) because we assume the old
      translation is still technically "valid". For more info look at commit
      0608d692 ("powerpc/mm: Always invalidate tlb on hpte invalidate and
      update").
      
      Thus it's critical that valid hash PTEs always have the reference bit
      set and writeable ones have the change bit set. We do this by hashing a
      non-dirty Linux PTE as read-only and always setting _PAGE_ACCESSED (and
      thus R) when hashing anything else in. Any attempt by Linux at clearing
      those bits also removes the corresponding hash entry.

      Commit c5cf0e30 did that for the 'C' bit by always enabling it. We
      don't really need to do that, because we never map a RW pte entry
      without setting the 'C' bit. On a read fault on a RW pte entry, we still
      map it read-only, hence a store to the page will still cause a hash
      pte fault.
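
      A hedged sketch of the R/C derivation being described (the flag names
      are the real ones, but this is not the exact patched function):

        /* Sketch only: valid hash PTEs always carry the reference bit, and
         * only dirty (written) PTEs carry the change bit; a clean RW page is
         * hashed read-only so the first store faults and redoes the insert. */
        unsigned long rflags = HPTE_R_R;	/* R always set on insert */
        if (pte_val(pte) & _PAGE_DIRTY)
        	rflags |= HPTE_R_C;		/* C only when the Linux PTE is dirty */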
      
      This patch reverts that part of commit c5cf0e30 ("[PATCH] powerpc:
      Fix buglet with MMU hash management") and retains the updatepp part.

      - If we hit the updatepp path on native, the old code (without that
        commit) would fail to set C because native_hpte_updatepp()
        was implemented to filter the same bits as H_PROTECT and not let C
        through, thus we would "upgrade" a RO HPTE to RW without setting C,
        causing the bug. So the real fix in that commit was the change
        to native_hpte_updatepp().
      
      Fixes: 89ff7250 ("powerpc/mm: Convert __hash_page_64K to C")
      Cc: stable@vger.kernel.org # v4.5+
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/radix: Update LPCR only if it is powernv · d6c88600
      Aneesh Kumar K.V authored
      LPCR cannot be updated when running in guest mode.
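
      A hedged sketch of the kind of guard the subject line describes (the
      exact check used by the patch may differ):

        /* Sketch only: only bare metal (powernv) may write the LPCR; a
         * pseries guest must leave it to the hypervisor. */
        if (!firmware_has_feature(FW_FEATURE_LPAR))
        	mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) | LPCR_UPRT);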
      
      Fixes: 2bfd65e4 ("powerpc/mm/radix: Add radix callbacks for early init routines")
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  10. 31 May, 2016 2 commits
  11. 30 May, 2016 2 commits
  12. 29 May, 2016 3 commits
  13. 28 May, 2016 13 commits
    • hpfs: implement the show_options method · 037369b8
      Mikulas Patocka authored
      The HPFS filesystem used generic_show_options to produce the string that is
      displayed in /proc/mounts.  However, there is a problem that the options
      may disappear after remount.  If we mount the filesystem with option1
      and then remount it with option2, /proc/mounts should show both option1
      and option2, however it only shows option2 because the whole option
      string is replaced with replace_mount_options in hpfs_remount_fs.
      
      To fix this bug, implement the hpfs_show_options function that prints
      the options that are currently selected.
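
      A minimal sketch of such a method (the fields printed here illustrate
      the pattern rather than the full hpfs option set):

        /* Sketch only: print the currently active options from the in-memory
         * superblock state instead of replaying the last option string. */
        static int hpfs_show_options(struct seq_file *seq, struct dentry *root)
        {
        	struct hpfs_sb_info *sbi = hpfs_sb(root->d_sb);

        	seq_printf(seq, ",uid=%u", from_kuid_munged(&init_user_ns, sbi->sb_uid));
        	seq_printf(seq, ",gid=%u", from_kgid_munged(&init_user_ns, sbi->sb_gid));
        	seq_printf(seq, ",umask=%03o", (~sbi->sb_mode & 0777));
        	return 0;
        }
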
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • affs: fix remount failure when there are no options changed · 01d6e087
      Mikulas Patocka authored
      Commit c8f33d0b ("affs: kstrdup() memory handling") checks if the
      kstrdup function returns NULL due to out-of-memory condition.
      
      However, if we are remounting a filesystem with no change to
      filesystem-specific options, the parameter data is NULL.  In this case,
      kstrdup returns NULL (because it was passed a NULL parameter), although
      no out-of-memory condition exists.  The mount syscall then fails with
      ENOMEM.
      
      This patch fixes the bug.  We fail with ENOMEM only if data is non-NULL.
      
      The patch also changes the call to replace_mount_options - if we didn't
      pass any filesystem-specific options, we don't call
      replace_mount_options (thus we don't erase existing reported options).
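
      A hedged sketch of the two changes described above (not the literal
      diff; the function name is illustrative):

        /* Sketch only: treat NULL data (no new options) as success rather
         * than -ENOMEM, and skip replace_mount_options() so the previously
         * reported options are preserved. */
        static int affs_remount_sketch(struct super_block *sb, int *flags, char *data)
        {
        	char *new_opts = NULL;

        	if (data) {
        		new_opts = kstrdup(data, GFP_KERNEL);
        		if (!new_opts)
        			return -ENOMEM;	/* a real allocation failure */
        	}

        	/* ... parse the options and update the superblock state here ... */

        	if (new_opts)
        		replace_mount_options(sb, new_opts);
        	return 0;
        }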
      
      Fixes: c8f33d0b ("affs: kstrdup() memory handling")
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org	# v4.1+
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hpfs: fix remount failure when there are no options changed · 44d51706
      Mikulas Patocka authored
      Commit ce657611 ("hpfs: kstrdup() out of memory handling") checks if
      the kstrdup function returns NULL due to out-of-memory condition.
      
      However, if we are remounting a filesystem with no change to
      filesystem-specific options, the parameter data is NULL.  In this case,
      kstrdup returns NULL (because it was passed a NULL parameter), although
      no out-of-memory condition exists.  The mount syscall then fails with
      ENOMEM.
      
      This patch fixes the bug.  We fail with ENOMEM only if data is non-NULL.
      
      The patch also changes the call to replace_mount_options - if we didn't
      pass any filesystem-specific options, we don't call
      replace_mount_options (thus we don't erase existing reported options).
      
      Fixes: ce657611 ("hpfs: kstrdup() out of memory handling")
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus · 4029632c
      Linus Torvalds authored
      Pull more MIPS updates from Ralf Baechle:
       "This is the secondnd batch of MIPS patches for 4.7. Summary:
      
        CPS:
         - Copy EVA configuration when starting secondary VPs.
      
        EIC:
         - Clear Status IPL.
      
        Lasat:
         - Fix a few off by one bugs.
      
        lib:
         - Mark intrinsics notrace.  Not only are the intrinsics
           uninteresting, it would cause infinite recursion.
      
        MAINTAINERS:
         - Add file patterns for MIPS BRCM device tree bindings.
         - Add file patterns for mips device tree bindings.
      
        MT7628:
         - Fix MT7628 pinmux typos.
         - wled_an pinmux gpio.
         - EPHY LEDs pinmux support.
      
        Pistachio:
         - Enable KASLR
      
        VDSO:
         - Build microMIPS VDSO for microMIPS kernels.
         - Fix aliasing warning by building with `-fno-strict-aliasing'.
      
        Misc:
         - Add missing FROZEN hotplug notifier transitions.
         - Fix clk binding example for various PIC32 devices.
         - Fix cpu interrupt controller node-names in the DT files.
         - Fix XPA CPU feature separation.
         - Fix write_gc0_* macros when writing zero.
         - Add inline asm encoding helpers.
         - Add missing VZ accessor microMIPS encodings.
         - Fix little endian microMIPS MSA encodings.
         - Add 64-bit HTW fields and fix its configuration.
         - Fix sigreturn via VDSO on microMIPS kernel.
         - Lots of typo fixes.
         - Add definitions of SegCtl registers and use them"
      
      * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (49 commits)
        MIPS: Add missing FROZEN hotplug notifier transitions
        MIPS: Build microMIPS VDSO for microMIPS kernels
        MIPS: Fix sigreturn via VDSO on microMIPS kernel
        MIPS: devicetree: fix cpu interrupt controller node-names
        MIPS: VDSO: Build with `-fno-strict-aliasing'
        MIPS: Pistachio: Enable KASLR
        MIPS: lib: Mark intrinsics notrace
        MIPS: Fix 64-bit HTW configuration
        MIPS: Add 64-bit HTW fields
        MAINTAINERS: Add file patterns for mips device tree bindings
        MAINTAINERS: Add file patterns for mips brcm device tree bindings
        MIPS: Simplify DSP instruction encoding macros
        MIPS: Add missing tlbinvf/XPA microMIPS encodings
        MIPS: Fix little endian microMIPS MSA encodings
        MIPS: Add missing VZ accessor microMIPS encodings
        MIPS: Add inline asm encoding helpers
        MIPS: Spelling fix lets -> let's
        MIPS: VR41xx: Fix typo
        MIPS: oprofile: Fix typo
        MIPS: math-emu: Fix typo
        ...
    • fs: fix binfmt_aout.c build error · d66492bc
      Guenter Roeck authored
      Various builds (such as i386:allmodconfig) fail with
      
        fs/binfmt_aout.c:133:2: error: expected identifier or '(' before 'return'
        fs/binfmt_aout.c:134:1: error: expected identifier or '(' before '}' token
      
      [ Oops. My bad, I had stupidly thought that "allmodconfig" covered this
        on x86-64 too, but it obviously doesn't.  Egg on my face.  - Linus ]
      
      Fixes: 5d22fc25 ("mm: remove more IS_ERR_VALUE abuses")
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Merge branch 'hash' of git://ftp.sciencehorizons.net/linux · 7e0fb73c
      Linus Torvalds authored
      Pull string hash improvements from George Spelvin:
       "This series does several related things:
      
         - Makes the dcache hash (fs/namei.c) useful for general kernel use.
      
           (Thanks to Bruce for noticing the zero-length corner case)
      
         - Converts the string hashes in <linux/sunrpc/svcauth.h> to use the
           above.
      
         - Avoids 64-bit multiplies in hash_64() on 32-bit platforms.  Two
           32-bit multiplies will do well enough.
      
         - Rids the world of the bad hash multipliers in hash_32.
      
           This finishes the job started in commit 689de1d6 ("Minimal
           fix-up of bad hashing behavior of hash_64()")
      
           The vast majority of Linux architectures have hardware support for
           32x32-bit multiply and so derive no benefit from "simplified"
           multipliers.
      
           The few processors that do not (68000, h8/300 and some models of
           Microblaze) have arch-specific implementations added.  Those
           patches are last in the series.
      
         - Overhauls the dcache hash mixing.
      
           The patch in commit 0fed3ac8 ("namei: Improve hash mixing if
           CONFIG_DCACHE_WORD_ACCESS") was an off-the-cuff suggestion.
           Replaced with a much more careful design that's simultaneously
           faster and better.  (My own invention, as there was nothing suitable
           in the literature I could find.  Comments welcome!)
      
         - Modify the hash_name() loop to skip the initial HASH_MIX().  This
           would let us salt the hash if we ever wanted to.
      
         - Sort out partial_name_hash().
      
           The hash function is declared as using a long state, even though
           it's truncated to 32 bits at the end and the extra internal state
           contributes nothing to the result.  And some callers do odd things:
      
            - fs/hfs/string.c only allocates 32 bits of state
            - fs/hfsplus/unicode.c uses it to hash 16-bit unicode symbols not bytes
      
         - Modify bytemask_from_count to handle inputs of 1..sizeof(long)
           rather than 0..sizeof(long)-1.  This would simplify users other
           than full_name_hash"
      
        Special thanks to Bruce Fields for testing and finding bugs in v1.  (I
        learned some humbling lessons about "obviously correct" code.)
      
        On the arch-specific front, the m68k assembly has been tested in a
        standalone test harness, I've been in contact with the Microblaze
        maintainers who mostly don't care, as the hardware multiplier is never
        omitted in real-world applications, and I haven't heard anything from
        the H8/300 world"
      
      * 'hash' of git://ftp.sciencehorizons.net/linux:
        h8300: Add <asm/hash.h>
        microblaze: Add <asm/hash.h>
        m68k: Add <asm/hash.h>
        <linux/hash.h>: Add support for architecture-specific functions
        fs/namei.c: Improve dcache hash function
        Eliminate bad hash multipliers from hash_32() and  hash_64()
        Change hash_64() return value to 32 bits
        <linux/sunrpc/svcauth.h>: Define hash_str() in terms of hashlen_string()
        fs/namei.c: Add hashlen_string() function
        Pull out string hash to <linux/stringhash.h>
    • h8300: Add <asm/hash.h> · 4684fe95
      George Spelvin authored
      This will improve the performance of hash_32() and hash_64(), but due
      to the complete lack of multi-bit shift instructions on the H8,
      performance will still be bad in the surrounding code.
      
      Designing H8-specific hash algorithms to work around that is a separate
      project.  (But if the maintainers would like to get in touch...)
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: uclinux-h8-devel@lists.sourceforge.jp
    • microblaze: Add <asm/hash.h> · 7b13277b
      George Spelvin authored
      Microblaze is an FPGA soft core that can be configured in various ways.
      
      If it is configured without a multiplier, the standard __hash_32()
      will require a call to __mulsi3, which is a slow software loop.
      
      Instead, use a shift-and-add sequence for the constant multiply.
      GCC knows how to do this, but it's not as clever as some.
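
      As a tiny illustration of the strength reduction involved (this is not
      the actual GOLDEN_RATIO_32 chain, which is longer):

        /* Illustration only: a constant multiply expressed as a shift and an
         * add, the kind of sequence emitted when the soft core has no
         * hardware multiplier. The real <asm/hash.h> chain multiplies by
         * GOLDEN_RATIO_32. */
        static inline unsigned int mul_by_9(unsigned int x)
        {
        	return (x << 3) + x;	/* x * 9 without a multiply */
        }
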
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Cc: Alistair Francis <alistair.francis@xilinx.com>
      Cc: Michal Simek <michal.simek@xilinx.com>
    • m68k: Add <asm/hash.h> · 14c44b95
      George Spelvin authored
      This provides a multiply by constant GOLDEN_RATIO_32 = 0x61C88647
      for the original mc68000, which lacks a 32x32-bit multiply instruction.
      
      Yes, the amount of optimization effort put in is excessive. :-)
      
      Shift-add chain found by Yevgen Voronenko's Hcub algorithm at
      http://spiral.ece.cmu.edu/mcm/gen.html
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Andreas Schwab <schwab@linux-m68k.org>
      Cc: Philippe De Muyter <phdm@macq.eu>
      Cc: linux-m68k@lists.linux-m68k.org
    • <linux/hash.h>: Add support for architecture-specific functions · 468a9428
      George Spelvin authored
      This is just the infrastructure; there are no users yet.
      
      This is modelled on CONFIG_ARCH_RANDOM; a CONFIG_ symbol declares
      the existence of <asm/hash.h>.
      
      That file may define its own versions of various functions, and define
      HAVE_* symbols (no CONFIG_ prefix!) to suppress the generic ones.
      
      Included is a self-test (in lib/test_hash.c) that verifies the basics.
      It is NOT in general required that the arch-specific functions compute
      the same thing as the generic, but if a HAVE_* symbol is defined with
      the value 1, then equality is tested.
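
      The pattern looks roughly like this (the symbol names below are
      paraphrased from the description above and may not match the header
      exactly):

        /* Sketch of the override pattern: an arch opts in with a CONFIG_
         * symbol, and its <asm/hash.h> may define HAVE_* macros (no CONFIG_
         * prefix) to suppress the matching generic implementations. */
        #ifdef CONFIG_HAVE_ARCH_HASH		/* assumed name of the opt-in symbol */
        #include <asm/hash.h>
        #endif

        #ifndef HAVE_ARCH__HASH_32		/* assumed name of the suppress macro */
        static inline u32 __hash_32(u32 val)
        {
        	return val * GOLDEN_RATIO_32;	/* generic fallback */
        }
        #endif
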
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Andreas Schwab <schwab@linux-m68k.org>
      Cc: Philippe De Muyter <phdm@macq.eu>
      Cc: linux-m68k@lists.linux-m68k.org
      Cc: Alistair Francis <alistai@xilinx.com>
      Cc: Michal Simek <michal.simek@xilinx.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: uclinux-h8-devel@lists.sourceforge.jp
    • fs/namei.c: Improve dcache hash function · 2a18da7a
      George Spelvin authored
      Patch 0fed3ac8 improved the hash mixing, but the function is slower
      than necessary; there's a 7-instruction dependency chain (10 on x86)
      per loop iteration.
      
      Word-at-a-time access is a very tight loop (which is good, because
      link_path_walk() is one of the hottest code paths in the entire kernel),
      and the hash mixing function must not have a longer latency to avoid
      slowing it down.
      
      There do not appear to be any published fast hash functions that:
      1) Operate on the input a word at a time, and
      2) Don't need to know the length of the input beforehand, and
      3) Have a single iterated mixing function, not needing conditional
         branches or unrolling to distinguish different loop iterations.
      
      One of the algorithms which comes closest is Yann Collet's xxHash, but
      that's two dependent multiplies per word, which is too much.
      
      The key insights in this design are:
      
      1) Barring expensive ops like multiplies, to diffuse one input bit
         across 64 bits of hash state takes at least log2(64) = 6 sequentially
         dependent instructions.  That is more cycles than we'd like.
      2) An operation like "hash ^= hash << 13" requires a second temporary
         register anyway, and on a 2-operand machine like x86, it's three
         instructions.
      3) A better use of a second register is to hold a two-word hash state.
         With careful design, no temporaries are needed at all, so it doesn't
         increase register pressure.  And this gets rid of register copying
         on 2-operand machines, so the code is smaller and faster.
      4) Using two words of state weakens the requirement for one-round mixing;
         we now have two rounds of mixing before cancellation is possible.
      5) A two-word hash state also allows operations on both halves to be
         done in parallel, so on a superscalar processor we get more mixing
         in fewer cycles.
      
      I ended up using a mixing function inspired by the ChaCha and Speck
      round functions.  It is 6 simple instructions and 3 cycles per iteration
      (assuming multiply by 9 can be done by an "lea" instruction):
      
      	x ^= *input++;
      	y ^= x;	x = ROL(x, K1);
      	x += y;	y = ROL(y, K2);
      	y *= 9;
      
      Not only is this reversible, two consecutive rounds are reversible:
      if you are given the initial and final states, but not the intermediate
      state, it is possible to compute both input words.  This means that at
      least 3 words of input are required to create a collision.
      
      (It also has the property, used by hash_name() to avoid a branch, that
      it hashes all-zero to all-zero.)
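
      A standalone C rendering of that round, with placeholder rotate
      constants (the real K1/K2 come from the search described next):

        #include <stdint.h>

        /* Placeholder constants; the values actually used in fs/namei.c came
         * out of the entropy search described below. */
        #define K1 25
        #define K2 31

        static inline uint64_t rol64(uint64_t v, unsigned int k)
        {
        	return (v << k) | (v >> (64 - k));
        }

        /* One round of the two-word mix: six simple ops, no temporaries. */
        static void hash_mix_round(uint64_t *x, uint64_t *y, uint64_t word)
        {
        	*x ^= word;
        	*y ^= *x;  *x = rol64(*x, K1);
        	*x += *y;  *y = rol64(*y, K2);
        	*y *= 9;
        }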
      
      The rotate constants K1 and K2 were found by experiment.  The search took
      a sample of random initial states (I used 1023) and considered the effect
      of flipping each of the 64 input bits on each of the 128 output bits two
      rounds later.  Each of the 8192 pairs can be considered a biased coin, and
      adding up the Shannon entropy of all of them produces a score.
      
      The best-scoring shifts also did well in other tests (flipping bits in y,
      trying 3 or 4 rounds of mixing, flipping all 64*63/2 pairs of input bits),
      so the choice was made with the additional constraint that the sum of the
      shifts is odd and not too close to the word size.
      
      The final state is then folded into a 32-bit hash value by a less carefully
      optimized multiply-based scheme.  This also has to be fast, as pathname
      components tend to be short (the most common case is one iteration!), but
      there's some room for latency, as there is a fair bit of intervening logic
      before the hash value is used for anything.
      
      (Performance verified with "bonnie++ -s 0 -n 1536:-2" on tmpfs.  I need
      a better benchmark; the numbers seem to show a slight dip in performance
      between 4.6.0 and this patch, but they're too noisy to quote.)
      
      Special thanks to Bruce Fields for diligent testing which uncovered a
      nasty fencepost error in an earlier version of this patch.
      
      [checkpatch.pl formatting complaints noted and respectfully disagreed with.]
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Tested-by: J. Bruce Fields <bfields@redhat.com>
    • Eliminate bad hash multipliers from hash_32() and hash_64() · ef703f49
      George Spelvin authored
      The "simplified" prime multipliers made very bad hash functions, so get rid
      of them.  This completes the work of 689de1d6.
      
      To avoid the inefficiency which was the motivation for the "simplified"
      multipliers, hash_64() on 32-bit systems is changed to use a different
      algorithm.  It makes two calls to hash_32() instead.
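
      A hedged sketch of that 32-bit fallback (not necessarily the exact
      expression committed; GOLDEN_RATIO_32 is the constant from the m68k
      patch above):

        #include <stdint.h>

        #define GOLDEN_RATIO_32 0x61C88647u

        /* bits must be in 1..32 */
        static inline uint32_t sketch_hash_32(uint32_t val, unsigned int bits)
        {
        	return (val * GOLDEN_RATIO_32) >> (32 - bits);
        }

        /* Two 32-bit hashes instead of one 64x64 multiply. */
        static inline uint32_t sketch_hash_64(uint64_t val, unsigned int bits)
        {
        	uint32_t hi = (uint32_t)(val >> 32);

        	return sketch_hash_32((uint32_t)val ^ sketch_hash_32(hi, 32), bits);
        }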
      
      drivers/media/usb/dvb-usb-v2/af9015.c uses the old GOLDEN_RATIO_PRIME_32
      for some horrible reason, so it inherits a copy of the old definition.
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Cc: Antti Palosaari <crope@iki.fi>
      Cc: Mauro Carvalho Chehab <m.chehab@samsung.com>
    • Change hash_64() return value to 32 bits · 92d56774
      George Spelvin authored
      That's all that's ever asked for, and it makes the return
      type of hash_long() consistent.
      
      It also allows (upcoming patch) an optimized implementation
      of hash_64 on 32-bit machines.
      
      I tried adding a BUILD_BUG_ON to ensure the number of bits requested
      was never more than 32 (most callers use a compile-time constant), but
      adding <linux/bug.h> to <linux/hash.h> breaks the tools/perf compiler
      unless tools/perf/MANIFEST is updated, and understanding that code base
      well enough to update it is too much trouble.  I did the rest of an
      allyesconfig build with such a check, and nothing tripped.
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>