1. 15 Jul, 2016 3 commits
    • ext4: short-cut orphan cleanup on error · c65d5c6c
      Vegard Nossum authored
      If we encounter a filesystem error during orphan cleanup, we should stop.
      Otherwise, we may end up in an infinite loop where the same inode is
      processed again and again.
      
          EXT4-fs (loop0): warning: checktime reached, running e2fsck is recommended
          EXT4-fs error (device loop0): ext4_mb_generate_buddy:758: group 2, block bitmap and bg descriptor inconsistent: 6117 vs 0 free clusters
          Aborting journal on device loop0-8.
          EXT4-fs (loop0): Remounting filesystem read-only
          EXT4-fs error (device loop0) in ext4_free_blocks:4895: Journal has aborted
          EXT4-fs error (device loop0) in ext4_do_update_inode:4893: Journal has aborted
          EXT4-fs error (device loop0) in ext4_do_update_inode:4893: Journal has aborted
          EXT4-fs error (device loop0) in ext4_ext_remove_space:3068: IO failure
          EXT4-fs error (device loop0) in ext4_ext_truncate:4667: Journal has aborted
          EXT4-fs error (device loop0) in ext4_orphan_del:2927: Journal has aborted
          EXT4-fs error (device loop0) in ext4_do_update_inode:4893: Journal has aborted
          EXT4-fs (loop0): Inode 16 (00000000618192a0): orphan list check failed!
          [...]
          EXT4-fs (loop0): Inode 16 (0000000061819748): orphan list check failed!
          [...]
          EXT4-fs (loop0): Inode 16 (0000000061819bf0): orphan list check failed!
          [...]
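
      The control-flow change amounts to bailing out of the orphan-processing
      loop once the filesystem has been flagged as having an error. A toy,
      stand-alone model of that behaviour (hypothetical structures, not the
      actual ext4 code):

          #include <stdbool.h>
          #include <stdio.h>

          /* Simplified model only: once the fs is marked as having an error,
           * stop walking the orphan list instead of re-processing the same
           * inode forever. */
          struct fs { bool error; int orphans; };

          static void orphan_cleanup(struct fs *fs)
          {
              while (fs->orphans > 0) {
                  if (fs->error) {      /* the short-cut added here */
                      printf("fs error, aborting orphan cleanup\n");
                      break;
                  }
                  fs->orphans--;        /* stand-in for truncating/deleting the inode */
              }
          }

          int main(void)
          {
              struct fs fs = { .error = true, .orphans = 3 };
              orphan_cleanup(&fs);      /* returns immediately instead of spinning */
              return 0;
          }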
      
      See-also: c9eb13a9 ("ext4: fix hang when processing corrupted orphaned inode list")
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      c65d5c6c
    • ext4: fix reference counting bug on block allocation error · 554a5ccc
      Vegard Nossum authored
      If we hit this error when mounted with errors=continue or
      errors=remount-ro:
      
          EXT4-fs error (device loop0): ext4_mb_mark_diskspace_used:2940: comm ext4.exe: Allocating blocks 5090-6081 which overlap fs metadata
      
      then ext4_mb_new_blocks() will call ext4_mb_release_context() and try to
      continue. However, ext4_mb_release_context() is the wrong thing to call
      here since we are still actually using the allocation context.
      
      Instead, just error out. We could retry the allocation, but there is a
      possibility of getting stuck in an infinite loop instead, so this seems
      safer.
      
      [ Fixed up so we don't return EAGAIN to userspace. --tytso ]
      
      Fixes: 8556e8f3 ("ext4: Don't allow new groups to be added during block allocation")
      Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: stable@vger.kernel.org
      554a5ccc
    • MAINTAINERS: fs-crypto maintainers update · 598c7d7a
      Theodore Ts'o authored
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: Jaegeuk Kim <jaegeuk@kernel.org>
      598c7d7a
  2. 10 Jul, 2016 1 commit
  3. 06 Jul, 2016 3 commits
    • ext2: fix filesystem deadlock while reading corrupted xattr block · ff0031d8
      Carlos Maiolino authored
      This bug can be reproduced with fsfuzzer; although it does not trigger on
      every run, it is quite easy to reproduce.
      
      During the deletion of an inode, ext2_xattr_delete_inode() does not check whether
      the block pointed to by EXT2_I(inode)->i_file_acl is a valid data block. This can
      lead to a deadlock when i_file_acl == 1 and the filesystem block size is 1024.
      
      In that situation, ext2_xattr_delete_inode() loads the superblock's buffer
      head (instead of a valid i_file_acl block) and then locks it. ext2_sync_super()
      will also try to lock that buffer head, deadlocking the filesystem with the
      following stack trace:
      
      root     17180  0.0  0.0 113660   660 pts/0    D+   07:08   0:00 rmdir
      /media/test/dir1
      
      [<ffffffff8125da9f>] __sync_dirty_buffer+0xaf/0x100
      [<ffffffff8125db03>] sync_dirty_buffer+0x13/0x20
      [<ffffffffa03f0d57>] ext2_sync_super+0xb7/0xc0 [ext2]
      [<ffffffffa03f10b9>] ext2_error+0x119/0x130 [ext2]
      [<ffffffffa03e9d93>] ext2_free_blocks+0x83/0x350 [ext2]
      [<ffffffffa03f3d03>] ext2_xattr_delete_inode+0x173/0x190 [ext2]
      [<ffffffffa03ee9e9>] ext2_evict_inode+0xc9/0x130 [ext2]
      [<ffffffff8123fd23>] evict+0xb3/0x180
      [<ffffffff81240008>] iput+0x1b8/0x240
      [<ffffffff8123c4ac>] d_delete+0x11c/0x150
      [<ffffffff8122fa7e>] vfs_rmdir+0xfe/0x120
      [<ffffffff812340ee>] do_rmdir+0x17e/0x1f0
      [<ffffffff81234dd6>] SyS_rmdir+0x16/0x20
      [<ffffffff81838cf2>] entry_SYSCALL_64_fastpath+0x1a/0xa4
      [<ffffffffffffffff>] 0xffffffffffffffff
      
      Fix this by using the same approach ext4 uses to test data block validity:
      implement ext2_data_block_valid().
      
      Another possibility, when the superblock is badly corrupted, is that i_file_acl
      is 1, block_count is 1 and first_data_block is 0. In that case we might have
      i_file_acl pointing to a 'valid' block that still steps over the superblock.
      The approach used here is to also test that the superblock is not within the
      range described by the ext2_data_block_valid() arguments.
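
      A rough, stand-alone sketch of the kind of range check described above
      (illustrative only; the real ext2_data_block_valid() lives in the ext2
      sources and is modelled on ext4's helper):

          #include <stdio.h>

          /* Sketch: a block range is valid when it lies inside
           * [first_data_block, blocks_count) and does not cover the block
           * holding the superblock (block first_data_block: block 1 on a
           * 1k-block filesystem, block 0 otherwise). */
          static int data_block_valid(unsigned long block, unsigned long count,
                                      unsigned long first_data_block,
                                      unsigned long blocks_count)
          {
              if (block < first_data_block ||       /* before the data area */
                  block + count < block ||          /* range wraps around */
                  block + count > blocks_count)     /* past the end of the fs */
                  return 0;
              if (block <= first_data_block &&
                  block + count > first_data_block) /* covers the superblock */
                  return 0;
              return 1;
          }

          int main(void)
          {
              printf("%d\n", data_block_valid(1, 1, 1, 8192));  /* 0: i_file_acl == 1 on a 1k fs */
              printf("%d\n", data_block_valid(50, 1, 1, 8192)); /* 1: ordinary data block */
              return 0;
          }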
      Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      ff0031d8
    • ext4: fix project quota accounting without quota limits enabled · 079788d0
      Wang Shilong authored
      We should always transfer quota accounting, regardless of whether
      quota limits are enabled.
      
      Steps to reproduce:
        # mkfs.ext4 /dev/sda4 -O quota,project
        # mount /dev/sda4 /mnt/test
        # cp /bin/bash /mnt/test
        # chattr -p 123 /mnt/test/bash
        # quota -v -P 123
      Signed-off-by: Wang Shilong <wshilong@ddn.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      079788d0
    • ext4: validate s_reserved_gdt_blocks on mount · 5b9554dc
      Theodore Ts'o authored
      If s_reserved_gdt_blocks is extremely large, it's possible for
      ext4_init_block_bitmap(), which is called when ext4 sets up an
      uninitialized block bitmap, to corrupt random kernel memory.  Add the
      same checks which e2fsck has --- it must never be larger than
      blocksize / sizeof(__u32) --- and then add a backup check in
      ext4_init_block_bitmap() in case the superblock gets modified after
      the file system is mounted.
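
      The bound itself is easy to state; a stand-alone illustration of the
      sanity check (not the actual mount code):

          #include <stdint.h>
          #include <stdio.h>

          /* The same limit e2fsck enforces: s_reserved_gdt_blocks must not
           * exceed blocksize / sizeof(__u32).  Illustration only. */
          static int reserved_gdt_blocks_valid(uint32_t reserved, uint32_t blocksize)
          {
              return reserved <= blocksize / sizeof(uint32_t);
          }

          int main(void)
          {
              printf("%d\n", reserved_gdt_blocks_valid(256, 4096));    /* 1: plausible */
              printf("%d\n", reserved_gdt_blocks_valid(0xffff, 1024)); /* 0: reject the mount */
              return 0;
          }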
      Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Cc: stable@vger.kernel.org
      5b9554dc
  4. 05 Jul, 2016 1 commit
  5. 04 Jul, 2016 4 commits
    • ext4: don't call ext4_should_journal_data() on the journal inode · 6a7fd522
      Vegard Nossum authored
      If ext4_fill_super() fails early, it's possible for ext4_evict_inode()
      to call ext4_should_journal_data() before superblock options and flags
      are fully set up.  In that case, the iput() on the journal inode can
      end up causing a BUG().
      
      Work around this problem by reordering the tests so we only call
      ext4_should_journal_data() after we know it's not the journal inode.
      
      Fixes: 2d859db3 ("ext4: fix data corruption in inodes with journalled data")
      Fixes: 2b405bfa ("ext4: fix data=journal fast mount/umount hang")
      Cc: Jan Kara <jack@suse.cz>
      Cc: stable@vger.kernel.org
      Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Jan Kara <jack@suse.cz>
      6a7fd522
    • ext4: Fix WARN_ON_ONCE in ext4_commit_super() · 4743f839
      Pranay Kr. Srivastava authored
      If there are racing calls to ext4_commit_super(), it's possible for
      another writeback of the superblock to mark the buffer with an error
      after we have checked whether the buffer has a write error and have set
      the buffer up-to-date flag again.  If that happens, mark_buffer_dirty()
      can end up triggering a WARN_ON_ONCE.
      
      Fix this by moving this check to right before the call to
      write_buffer_dirty(), and by keeping the buffer locked during this whole
      sequence.
      Signed-off-by: Pranay Kr. Srivastava <pranjas@gmail.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      4743f839
    • ext4: fix deadlock during page writeback · 646caa9c
      Jan Kara authored
      Commit 06bd3c36 ("ext4: fix data exposure after a crash") uncovered a
      deadlock in ext4_writepages() which was previously much harder to hit.
      After this commit, xfstest generic/130 reproduces the deadlock on small
      filesystems.
      
      The problem happens when ext4_do_update_inode() sets the LARGE_FILE
      feature and marks the current inode handle as synchronous. That
      subsequently causes the ext4_journal_stop() called from ext4_writepages()
      to block waiting for the transaction commit while still holding page
      locks, a reference to the io_end, and a prepared bio in the mpd structure,
      each of which can block the transaction commit from completing, and thus
      results in a deadlock.
      
      Fix the problem by releasing page locks, io_end reference, and
      submitting prepared bio before calling ext4_journal_stop().
      
      [ Changed to defer the call to ext4_journal_stop() only if the handle
        is synchronous.  --tytso ]
      Reported-and-tested-by: Eryu Guan <eguan@redhat.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      CC: stable@vger.kernel.org
      Signed-off-by: Jan Kara <jack@suse.cz>
      646caa9c
    • ext4: correct error value of function verifying dx checksum · fa964540
      Daeho Jeong authored
      ext4_dx_csum_verify() returns the success value in two checksum
      verification failure cases. Set the return value to zero in those cases
      to signal failure, just as ext4_dirent_csum_verify() returns zero when it
      fails to find a checksum dirent at the tail.
      Signed-off-by: Daeho Jeong <daeho.jeong@samsung.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      fa964540
  6. 03 Jul, 2016 1 commit
  7. 30 Jun, 2016 5 commits
    • ext4: check for extents that wrap around · f70749ca
      Vegard Nossum authored
      An extent with lblock = 4294967295 and len = 1 will pass the
      ext4_valid_extent() test:
      
      	ext4_lblk_t last = lblock + len - 1;
      
      	if (len == 0 || lblock > last)
      		return 0;
      
      since last = 4294967295 + 1 - 1 = 4294967295. This would later trigger
      the BUG_ON(es->es_lblk + es->es_len < es->es_lblk) in ext4_es_end().
      
      We can simplify it by removing the - 1 altogether and changing the test
      to use lblock + len <= lblock, since now if len = 0, then lblock + 0 ==
      lblock and it fails, and if len > 0 then lblock + len > lblock in order
      to pass (i.e. it doesn't overflow).
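
      The arithmetic is easy to check in isolation; a stand-alone comparison of
      the two tests quoted above (plain C, unsigned 32-bit wrap-around is well
      defined):

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
              uint32_t lblock = 4294967295u, len = 1;

              /* Old test: last wraps back to lblock, so nothing is rejected. */
              uint32_t last = lblock + len - 1;
              printf("old check rejects: %d\n", len == 0 || lblock > last); /* 0 */

              /* New test: lblock + len wraps to 0, which is <= lblock, so the
               * extent is rejected; len == 0 (lblock + 0 == lblock) is rejected too. */
              printf("new check rejects: %d\n", lblock + len <= lblock);    /* 1 */
              return 0;
          }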
      
      Fixes: 5946d089 ("ext4: check for overlapping extents in ext4_valid_extent_entries()")
      Fixes: 2f974865 ("ext4: check for zero length extent explicitly")
      Cc: Eryu Guan <guaneryu@gmail.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Phil Turnbull <phil.turnbull@oracle.com>
      Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      f70749ca
    • jbd2: make journal y2038 safe · abcfb5d9
      Arnd Bergmann authored
      The jbd2 journal stores the commit time in 64-bit seconds and 32-bit
      nanoseconds, which avoids an overflow in 2038, but it gets the numbers
      from current_kernel_time(), which uses 'long' seconds on 32-bit
      architectures.
      
      This simply changes the code to call current_kernel_time64() so
      we use 64-bit seconds consistently.
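
      The problem is purely the width of the seconds value; a small, unrelated
      userspace illustration of why a 32-bit signed seconds counter is not
      enough:

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
              /* 0x7fffffff seconds == 2038-01-19 03:14:07 UTC, the last value
               * representable in a signed 32-bit counter. */
              uint32_t secs32 = 0x7fffffffu;
              int64_t  secs64 = (int64_t)secs32 + 1;

              /* On common two's-complement systems this wraps negative. */
              printf("32-bit seconds wrap to %d\n", (int32_t)(secs32 + 1));
              printf("64-bit seconds continue at %lld\n", (long long)secs64);
              return 0;
          }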
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: stable@vger.kernel.org
      abcfb5d9
    • jbd2: track more dependencies on transaction commit · 1eaa566d
      Jan Kara authored
      So far we were tracking only dependency on transaction commit due to
      starting a new handle (which may require commit to start a new
      transaction). Now add tracking also for other cases where we wait for
      transaction commit. This way lockdep can catch deadlocks, e.g. because we
      call jbd2_journal_stop() for a synchronous handle with some locks held
      which rank below transaction start.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      1eaa566d
    • jbd2: move lockdep tracking to journal_s · ab714aff
      Jan Kara authored
      Currently the lockdep map is tracked in each journal handle. To be able to
      expand lockdep support to also cover other cases where we depend on a
      transaction commit and where a handle is not available, move the lockdep
      map into struct journal_s. Since this makes the lockdep map shared across
      all handles, we have to use rwsem_acquire_read() for acquisitions now.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      ab714aff
    • jbd2: move lockdep instrumentation for jbd2 handles · 7a4b188f
      Jan Kara authored
      The transaction the handle references is free to commit once we've
      decremented the t_updates counter. Move the lockdep instrumentation to
      that point. Currently it sits a bit later, which did not really matter so
      far, but subsequent improvements to the lockdep instrumentation would
      cause false positives with it.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      7a4b188f
  8. 26 Jun, 2016 2 commits
  9. 29 May, 2016 3 commits
  10. 28 May, 2016 17 commits
    • hpfs: implement the show_options method · 037369b8
      Mikulas Patocka authored
      The HPFS filesystem used generic_show_options to produce string that is
      displayed in /proc/mounts.  However, there is a problem that the options
      may disappear after remount.  If we mount the filesystem with option1
      and then remount it with option2, /proc/mounts should show both option1
      and option2, however it only shows option2 because the whole option
      string is replaced with replace_mount_options in hpfs_remount_fs.
      
      To fix this bug, implement the hpfs_show_options function that prints
      options that are currently selected.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      037369b8
    • affs: fix remount failure when there are no options changed · 01d6e087
      Mikulas Patocka authored
      Commit c8f33d0b ("affs: kstrdup() memory handling") checks whether the
      kstrdup function returns NULL due to an out-of-memory condition.
      
      However, if we are remounting a filesystem with no change to the
      filesystem-specific options, the parameter data is NULL.  In this case,
      kstrdup returns NULL (because it was passed a NULL parameter), although
      no out-of-memory condition exists.  The mount syscall then fails with
      ENOMEM.
      
      This patch fixes the bug.  We fail with ENOMEM only if data is non-NULL.
      
      The patch also changes the call to replace_mount_options - if we didn't
      pass any filesystem-specific options, we don't call
      replace_mount_options (thus we don't erase existing reported options).
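
      The essence of the fix is distinguishing "kstrdup() returned NULL because
      its input was NULL" from a genuine allocation failure. A userspace-flavoured
      sketch of that guard (strdup() standing in for kstrdup(); not the actual
      affs diff):

          #include <errno.h>
          #include <stdlib.h>
          #include <string.h>

          /* kstrdup(NULL, ...) returns NULL by design; model that explicitly. */
          static char *dup_opts(const char *data)
          {
              return data ? strdup(data) : NULL;
          }

          static int remount(const char *data)
          {
              char *new_opts = dup_opts(data);

              if (data && !new_opts)   /* only a real copy failure is -ENOMEM */
                  return -ENOMEM;
              /* ... parse options; only replace the recorded mount options
               *     when new_opts is non-NULL ... */
              free(new_opts);
              return 0;
          }

          int main(void)
          {
              return remount(NULL);    /* no options changed: must not fail */
          }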
      
      Fixes: c8f33d0b ("affs: kstrdup() memory handling")
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org	# v4.1+
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      01d6e087
    • hpfs: fix remount failure when there are no options changed · 44d51706
      Mikulas Patocka authored
      Commit ce657611 ("hpfs: kstrdup() out of memory handling") checks whether
      the kstrdup function returns NULL due to an out-of-memory condition.
      
      However, if we are remounting a filesystem with no change to the
      filesystem-specific options, the parameter data is NULL.  In this case,
      kstrdup returns NULL (because it was passed a NULL parameter), although
      no out-of-memory condition exists.  The mount syscall then fails with
      ENOMEM.
      
      This patch fixes the bug.  We fail with ENOMEM only if data is non-NULL.
      
      The patch also changes the call to replace_mount_options - if we didn't
      pass any filesystem-specific options, we don't call
      replace_mount_options (thus we don't erase existing reported options).
      
      Fixes: ce657611 ("hpfs: kstrdup() out of memory handling")
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      44d51706
    • Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus · 4029632c
      Linus Torvalds authored
      Pull more MIPS updates from Ralf Baechle:
       "This is the secondnd batch of MIPS patches for 4.7. Summary:
      
        CPS:
         - Copy EVA configuration when starting secondary VPs.
      
        EIC:
         - Clear Status IPL.
      
        Lasat:
         - Fix a few off by one bugs.
      
        lib:
          - Mark intrinsics notrace.  Not only are the intrinsics uninteresting
            for debugging, but tracing them might also result in infinite
            recursion.
      
        MAINTAINERS:
         - Add file patterns for MIPS BRCM device tree bindings.
         - Add file patterns for mips device tree bindings.
      
        MT7628:
         - Fix MT7628 pinmux typos.
         - wled_an pinmux gpio.
         - EPHY LEDs pinmux support.
      
        Pistachio:
         - Enable KASLR
      
        VDSO:
         - Build microMIPS VDSO for microMIPS kernels.
          - Fix aliasing warning by building with `-fno-strict-aliasing'.
      
        Misc:
         - Add missing FROZEN hotplug notifier transitions.
         - Fix clk binding example for various PIC32 devices.
         - Fix cpu interrupt controller node-names in the DT files.
         - Fix XPA CPU feature separation.
         - Fix write_gc0_* macros when writing zero.
         - Add inline asm encoding helpers.
         - Add missing VZ accessor microMIPS encodings.
         - Fix little endian microMIPS MSA encodings.
         - Add 64-bit HTW fields and fix its configuration.
         - Fix sigreturn via VDSO on microMIPS kernel.
         - Lots of typo fixes.
         - Add definitions of SegCtl registers and use them"
      
      * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (49 commits)
        MIPS: Add missing FROZEN hotplug notifier transitions
        MIPS: Build microMIPS VDSO for microMIPS kernels
        MIPS: Fix sigreturn via VDSO on microMIPS kernel
        MIPS: devicetree: fix cpu interrupt controller node-names
        MIPS: VDSO: Build with `-fno-strict-aliasing'
        MIPS: Pistachio: Enable KASLR
        MIPS: lib: Mark intrinsics notrace
        MIPS: Fix 64-bit HTW configuration
        MIPS: Add 64-bit HTW fields
        MAINTAINERS: Add file patterns for mips device tree bindings
        MAINTAINERS: Add file patterns for mips brcm device tree bindings
        MIPS: Simplify DSP instruction encoding macros
        MIPS: Add missing tlbinvf/XPA microMIPS encodings
        MIPS: Fix little endian microMIPS MSA encodings
        MIPS: Add missing VZ accessor microMIPS encodings
        MIPS: Add inline asm encoding helpers
        MIPS: Spelling fix lets -> let's
        MIPS: VR41xx: Fix typo
        MIPS: oprofile: Fix typo
        MIPS: math-emu: Fix typo
        ...
      4029632c
    • fs: fix binfmt_aout.c build error · d66492bc
      Guenter Roeck authored
      Various builds (such as i386:allmodconfig) fail with
      
        fs/binfmt_aout.c:133:2: error: expected identifier or '(' before 'return'
        fs/binfmt_aout.c:134:1: error: expected identifier or '(' before '}' token
      
      [ Oops. My bad, I had stupidly thought that "allmodconfig" covered this
        on x86-64 too, but it obviously doesn't.  Egg on my face.  - Linus ]
      
      Fixes: 5d22fc25 ("mm: remove more IS_ERR_VALUE abuses")
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d66492bc
    • Merge branch 'hash' of git://ftp.sciencehorizons.net/linux · 7e0fb73c
      Linus Torvalds authored
      Pull string hash improvements from George Spelvin:
       "This series does several related things:
      
         - Makes the dcache hash (fs/namei.c) useful for general kernel use.
      
           (Thanks to Bruce for noticing the zero-length corner case)
      
         - Converts the string hashes in <linux/sunrpc/svcauth.h> to use the
           above.
      
         - Avoids 64-bit multiplies in hash_64() on 32-bit platforms.  Two
           32-bit multiplies will do well enough.
      
         - Rids the world of the bad hash multipliers in hash_32.
      
           This finishes the job started in commit 689de1d6 ("Minimal
           fix-up of bad hashing behavior of hash_64()")
      
           The vast majority of Linux architectures have hardware support for
           32x32-bit multiply and so derive no benefit from "simplified"
           multipliers.
      
           The few processors that do not (68000, h8/300 and some models of
           Microblaze) have arch-specific implementations added.  Those
           patches are last in the series.
      
         - Overhauls the dcache hash mixing.
      
           The patch in commit 0fed3ac8 ("namei: Improve hash mixing if
           CONFIG_DCACHE_WORD_ACCESS") was an off-the-cuff suggestion.
           Replaced with a much more careful design that's simultaneously
            faster and better.  (My own invention, as there was nothing suitable
           in the literature I could find.  Comments welcome!)
      
         - Modify the hash_name() loop to skip the initial HASH_MIX().  This
           would let us salt the hash if we ever wanted to.
      
         - Sort out partial_name_hash().
      
           The hash function is declared as using a long state, even though
           it's truncated to 32 bits at the end and the extra internal state
           contributes nothing to the result.  And some callers do odd things:
      
            - fs/hfs/string.c only allocates 32 bits of state
            - fs/hfsplus/unicode.c uses it to hash 16-bit unicode symbols not bytes
      
         - Modify bytemask_from_count to handle inputs of 1..sizeof(long)
           rather than 0..sizeof(long)-1.  This would simplify users other
           than full_name_hash"
      
        Special thanks to Bruce Fields for testing and finding bugs in v1.  (I
        learned some humbling lessons about "obviously correct" code.)
      
        On the arch-specific front, the m68k assembly has been tested in a
        standalone test harness, I've been in contact with the Microblaze
        maintainers who mostly don't care, as the hardware multiplier is never
        omitted in real-world applications, and I haven't heard anything from
        the H8/300 world"
      
      * 'hash' of git://ftp.sciencehorizons.net/linux:
        h8300: Add <asm/hash.h>
        microblaze: Add <asm/hash.h>
        m68k: Add <asm/hash.h>
        <linux/hash.h>: Add support for architecture-specific functions
        fs/namei.c: Improve dcache hash function
        Eliminate bad hash multipliers from hash_32() and  hash_64()
        Change hash_64() return value to 32 bits
        <linux/sunrpc/svcauth.h>: Define hash_str() in terms of hashlen_string()
        fs/namei.c: Add hashlen_string() function
        Pull out string hash to <linux/stringhash.h>
      7e0fb73c
    • h8300: Add <asm/hash.h> · 4684fe95
      George Spelvin authored
      This will improve the performance of hash_32() and hash_64(), but due to
      the complete lack of multi-bit shift instructions on H8, performance will
      still be bad in surrounding code.
      
      Designing H8-specific hash algorithms to work around that is a separate
      project.  (But if the maintainers would like to get in touch...)
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: uclinux-h8-devel@lists.sourceforge.jp
      4684fe95
    • microblaze: Add <asm/hash.h> · 7b13277b
      George Spelvin authored
      Microblaze is an FPGA soft core that can be configured various ways.
      
      If it is configured without a multiplier, the standard __hash_32()
      will require a call to __mulsi3, which is a slow software loop.
      
      Instead, use a shift-and-add sequence for the constant multiply.
      GCC knows how to do this, but it's not as clever as some.
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Cc: Alistair Francis <alistair.francis@xilinx.com>
      Cc: Michal Simek <michal.simek@xilinx.com>
      7b13277b
    • m68k: Add <asm/hash.h> · 14c44b95
      George Spelvin authored
      This provides a multiply by constant GOLDEN_RATIO_32 = 0x61C88647
      for the original mc68000, which lacks a 32x32-bit multiply instruction.
      
      Yes, the amount of optimization effort put in is excessive. :-)
      
      Shift-add chain found by Yevgen Voronenko's Hcub algorithm at
      http://spiral.ece.cmu.edu/mcm/gen.html
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Andreas Schwab <schwab@linux-m68k.org>
      Cc: Philippe De Muyter <phdm@macq.eu>
      Cc: linux-m68k@lists.linux-m68k.org
      14c44b95
    • <linux/hash.h>: Add support for architecture-specific functions · 468a9428
      George Spelvin authored
      This is just the infrastructure; there are no users yet.
      
      This is modelled on CONFIG_ARCH_RANDOM; a CONFIG_ symbol declares
      the existence of <asm/hash.h>.
      
      That file may define its own versions of various functions, and define
      HAVE_* symbols (no CONFIG_ prefix!) to suppress the generic ones.
      
      Included is a self-test (in lib/test_hash.c) that verifies the basics.
      It is NOT in general required that the arch-specific functions compute
      the same thing as the generic, but if a HAVE_* symbol is defined with
      the value 1, then equality is tested.
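
      The pattern is the usual "generic unless the architecture opts in"
      arrangement. A hedged sketch with made-up names (CONFIG_ARCH_HAS_MYHASH,
      HAVE_MYHASH_32 and my_hash_32() are illustrative, not the symbols this
      patch introduces):

          #ifdef CONFIG_ARCH_HAS_MYHASH   /* arch says it ships <asm/myhash.h> */
          #include <asm/myhash.h>         /* may define HAVE_MYHASH_32 + my_hash_32() */
          #endif

          #ifndef HAVE_MYHASH_32          /* note: no CONFIG_ prefix on the override */
          static inline unsigned int my_hash_32(unsigned int val)
          {
              return val * 0x61C88647u;   /* generic fallback */
          }
          #endif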
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Andreas Schwab <schwab@linux-m68k.org>
      Cc: Philippe De Muyter <phdm@macq.eu>
      Cc: linux-m68k@lists.linux-m68k.org
      Cc: Alistair Francis <alistai@xilinx.com>
      Cc: Michal Simek <michal.simek@xilinx.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: uclinux-h8-devel@lists.sourceforge.jp
      468a9428
    • fs/namei.c: Improve dcache hash function · 2a18da7a
      George Spelvin authored
      Patch 0fed3ac8 improved the hash mixing, but the function is slower
      than necessary; there's a 7-instruction dependency chain (10 on x86)
      each loop iteration.
      
      Word-at-a-time access is a very tight loop (which is good, because
      link_path_walk() is one of the hottest code paths in the entire kernel),
      and the hash mixing function must not have a longer latency to avoid
      slowing it down.
      
      There do not appear to be any published fast hash functions that:
      1) Operate on the input a word at a time, and
      2) Don't need to know the length of the input beforehand, and
      3) Have a single iterated mixing function, not needing conditional
         branches or unrolling to distinguish different loop iterations.
      
      One of the algorithms which comes closest is Yann Collet's xxHash, but
      that's two dependent multiplies per word, which is too much.
      
      The key insights in this design are:
      
      1) Barring expensive ops like multiplies, to diffuse one input bit
         across 64 bits of hash state takes at least log2(64) = 6 sequentially
         dependent instructions.  That is more cycles than we'd like.
      2) An operation like "hash ^= hash << 13" requires a second temporary
         register anyway, and on a 2-operand machine like x86, it's three
         instructions.
      3) A better use of a second register is to hold a two-word hash state.
         With careful design, no temporaries are needed at all, so it doesn't
         increase register pressure.  And this gets rid of register copying
         on 2-operand machines, so the code is smaller and faster.
      4) Using two words of state weakens the requirement for one-round mixing;
         we now have two rounds of mixing before cancellation is possible.
      5) A two-word hash state also allows operations on both halves to be
         done in parallel, so on a superscalar processor we get more mixing
         in fewer cycles.
      
      I ended up using a mixing function inspired by the ChaCha and Speck
      round functions.  It is 6 simple instructions and 3 cycles per iteration
      (assuming multiply by 9 can be done by an "lea" instruction):
      
      		x ^= *input++;
      	y ^= x;	x = ROL(x, K1);
      	x += y;	y = ROL(y, K2);
      	y *= 9;
      
      Not only is this reversible, two consecutive rounds are reversible:
      if you are given the initial and final states, but not the intermediate
      state, it is possible to compute both input words.  This means that at
      least 3 words of input are required to create a collision.
      
      (It also has the property, used by hash_name() to avoid a branch, that
      it hashes all-zero to all-zero.)
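
      For the curious, the round quoted above is easy to play with in isolation.
      A stand-alone version (the rotate amounts below are placeholders; the K1/K2
      actually chosen by the search described next live in fs/namei.c):

          #include <stdint.h>
          #include <stdio.h>

          #define ROL64(x, r)  (((x) << (r)) | ((x) >> (64 - (r))))
          #define ROTATE_1     38   /* placeholder, not the upstream constant */
          #define ROTATE_2     27   /* placeholder */

          /* One round of the two-word mixing function quoted above. */
          static void mix(uint64_t *x, uint64_t *y, uint64_t input)
          {
              *x ^= input;
              *y ^= *x;  *x = ROL64(*x, ROTATE_1);
              *x += *y;  *y = ROL64(*y, ROTATE_2);
              *y *= 9;
          }

          int main(void)
          {
              uint64_t x = 0, y = 0;
              const uint64_t words[] = { 0x646f63756d656e74ull, 0x2f70617468ull };

              for (unsigned i = 0; i < 2; i++)
                  mix(&x, &y, words[i]);
              printf("state: %016llx %016llx\n",
                     (unsigned long long)x, (unsigned long long)y);
              return 0;
          }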
      
      The rotate constants K1 and K2 were found by experiment.  The search took
      a sample of random initial states (I used 1023) and considered the effect
      of flipping each of the 64 input bits on each of the 128 output bits two
      rounds later.  Each of the 8192 pairs can be considered a biased coin, and
      adding up the Shannon entropy of all of them produces a score.
      
      The best-scoring shifts also did well in other tests (flipping bits in y,
      trying 3 or 4 rounds of mixing, flipping all 64*63/2 pairs of input bits),
      so the choice was made with the additional constraint that the sum of the
      shifts is odd and not too close to the word size.
      
      The final state is then folded into a 32-bit hash value by a less carefully
      optimized multiply-based scheme.  This also has to be fast, as pathname
      components tend to be short (the most common case is one iteration!), but
      there's some room for latency, as there is a fair bit of intervening logic
      before the hash value is used for anything.
      
      (Performance verified with "bonnie++ -s 0 -n 1536:-2" on tmpfs.  I need
      a better benchmark; the numbers seem to show a slight dip in performance
      between 4.6.0 and this patch, but they're too noisy to quote.)
      
      Special thanks to Bruce Fields for diligent testing which uncovered a
      nasty fencepost error in an earlier version of this patch.
      
      [checkpatch.pl formatting complaints noted and respectfully disagreed with.]
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Tested-by: J. Bruce Fields <bfields@redhat.com>
      2a18da7a
    • Eliminate bad hash multipliers from hash_32() and hash_64() · ef703f49
      George Spelvin authored
      The "simplified" prime multipliers made very bad hash functions, so get rid
      of them.  This completes the work of 689de1d6.
      
      To avoid the inefficiency which was the motivation for the "simplified"
      multipliers, hash_64() on 32-bit systems is changed to use a different
      algorithm.  It makes two calls to hash_32() instead.
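
      One way to picture "two 32-bit operations instead of a 64-bit multiply"
      (illustrative helper names; whether this is the exact combination used
      upstream is not claimed here):

          #include <stdint.h>
          #include <stdio.h>

          static inline uint32_t my_hash_32(uint32_t val)
          {
              return val * 0x61C88647u;   /* GOLDEN_RATIO_32, see the m68k patch above */
          }

          /* Fold the halves together using only 32-bit multiplies, then keep
           * the top 'bits' bits (1 <= bits <= 32 assumed). */
          static inline uint32_t my_hash_64(uint64_t val, unsigned int bits)
          {
              uint32_t folded = (uint32_t)val ^ my_hash_32((uint32_t)(val >> 32));
              return my_hash_32(folded) >> (32 - bits);
          }

          int main(void)
          {
              printf("%u\n", my_hash_64(0x123456789abcdef0ull, 16));
              return 0;
          }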
      
      drivers/media/usb/dvb-usb-v2/af9015.c uses the old GOLDEN_RATIO_PRIME_32
      for some horrible reason, so it inherits a copy of the old definition.
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Cc: Antti Palosaari <crope@iki.fi>
      Cc: Mauro Carvalho Chehab <m.chehab@samsung.com>
      ef703f49
    • Change hash_64() return value to 32 bits · 92d56774
      George Spelvin authored
      That's all that's ever asked for, and it makes the return
      type of hash_long() consistent.
      
      It also allows (upcoming patch) an optimized implementation
      of hash_64 on 32-bit machines.
      
      I tried adding a BUILD_BUG_ON to ensure the number of bits requested
      was never more than 32 (most callers use a compile-time constant), but
      adding <linux/bug.h> to <linux/hash.h> breaks the tools/perf compiler
      unless tools/perf/MANIFEST is updated, and understanding that code base
      well enough to update it is too much trouble.  I did the rest of an
      allyesconfig build with such a check, and nothing tripped.
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      92d56774
    • <linux/sunrpc/svcauth.h>: Define hash_str() in terms of hashlen_string() · 917ea166
      George Spelvin authored
      Finally, the first use of the previous two patches: eliminate the
      separate ad-hoc string hash functions in the sunrpc code.
      
      Now hash_str() is a wrapper around hashlen_string(), and hash_mem() is
      likewise a wrapper around full_name_hash().
      
      Note that sunrpc code *does* call hash_mem() with a zero length, which
      is why the previous patch needed to handle that in full_name_hash().
      (Thanks, Bruce, for finding that!)
      
      This also eliminates the only caller of hash_long which asks for
      more than 32 bits of output.
      
      The comment about the quality of hashlen_string() and full_name_hash()
      is jumping the gun by a few patches; they aren't very impressive now,
      but will be improved greatly later in the series.
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      Tested-by: J. Bruce Fields <bfields@redhat.com>
      Acked-by: J. Bruce Fields <bfields@redhat.com>
      Cc: Jeff Layton <jlayton@poochiereds.net>
      Cc: linux-nfs@vger.kernel.org
      917ea166
    • fs/namei.c: Add hashlen_string() function · fcfd2fbf
      George Spelvin authored
      We'd like to make more use of the highly-optimized dcache hash functions
      throughout the kernel, rather than have every subsystem create its own,
      and a function that hashes basic null-terminated strings is required
      for that.
      
      (The name is to emphasize that it returns both hash and length.)
      
      It's actually useful in the dcache itself, specifically d_alloc_name().
      Other uses in the next patch.
      
      full_name_hash() is also tweaked to make it more generally useful:
      1) Take a "char *" rather than "unsigned char *" argument, to
         be consistent with hash_name().
      2) Handle zero-length inputs.  If we want more callers, we don't want
         to make them worry about corner cases.
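
      The "hash and length in one return value" idea can be pictured as packing
      both into a single 64-bit word; a sketch with illustrative names (the exact
      layout used by the kernel's hashlen macros is not spelled out here):

          #include <stdint.h>

          /* Illustrative packing: length in the high 32 bits, hash in the low 32. */
          static inline uint64_t hashlen_pack(uint32_t hash, uint32_t len)
          {
              return ((uint64_t)len << 32) | hash;
          }

          static inline uint32_t hashlen_hash(uint64_t hl) { return (uint32_t)hl; }
          static inline uint32_t hashlen_len(uint64_t hl)  { return (uint32_t)(hl >> 32); }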
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      fcfd2fbf
    • Pull out string hash to <linux/stringhash.h> · f4bcbe79
      George Spelvin authored
      ... so they can be used without the rest of <linux/dcache.h>
      
      The hashlen_* macros will make sense next patch.
      Signed-off-by: George Spelvin <linux@sciencehorizons.net>
      f4bcbe79
    • Merge branch 'i2c/for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux · 4e8440b3
      Linus Torvalds authored
      Pull i2c fix from Wolfram Sang:
       "A fix for a regression introduced yesterday.
      
        The regression didn't show up here locally because I did not have
        PAGE_POISONING enabled.  And buildbots discovered this only after it
        hit your tree.  Thanks to Dan for the quick response"
      
      * 'i2c/for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
        i2c: dev: use after free in detach
      4e8440b3