  17 Sep, 2014 5 commits
    • Linux 3.10.55 · 339f8f37
      Greg Kroah-Hartman authored
    • libceph: gracefully handle large reply messages from the mon · 12477ec8
      Sage Weil authored
      commit 73c3d481 upstream.
      
      We preallocate a few of the message types we get back from the mon.  If we
      get a larger message than we are expecting, fall back to trying to allocate
      a new one instead of blindly using the one we have.
      Signed-off-by: Sage Weil <sage@redhat.com>
      Reviewed-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
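      The fallback is a common defensive-allocation pattern. A minimal
      userspace C sketch of the idea (names like reply_buf and get_reply
      are illustrative, not libceph's actual API):

          #include <stdlib.h>

          struct reply_buf {
                  size_t alloc_len;       /* capacity of data[] */
                  char *data;
          };

          /* Reuse the preallocated reply if it is large enough; otherwise
           * allocate a one-off buffer instead of overrunning it. */
          static struct reply_buf *get_reply(struct reply_buf *prealloc,
                                             size_t front_len)
          {
                  struct reply_buf *m;

                  if (front_len <= prealloc->alloc_len)
                          return prealloc;        /* common case: reuse */

                  m = malloc(sizeof(*m));
                  if (!m)
                          return NULL;
                  m->data = malloc(front_len);
                  if (!m->data) {
                          free(m);
                          return NULL;
                  }
                  m->alloc_len = front_len;
                  return m;
          }

      The caller must then free the fallback buffer whenever it is not the
      preallocated one.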
    • libceph: rename ceph_msg::front_max to front_alloc_len · 842a5780
      Ilya Dryomov authored
      commit 3cea4c30 upstream.
      
      Rename the front_max field of struct ceph_msg to front_alloc_len to
      make its purpose clearer.
      Signed-off-by: Ilya Dryomov <ilya.dryomov@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • tpm: Provide a generic means to override the chip returned timeouts · d64269e3
      Jason Gunthorpe authored
      commit 8e54caf4 upstream.
      
      Some Atmel TPMs provide completely wrong timeouts from their
      TPM_CAP_PROP_TIS_TIMEOUT query. This patch detects that and returns
      new correct values via a DID/VID table in the TIS driver.
      
      Tested on ARM using an AT97SC3204T, FW version 37.16.
      
      [PHuewe: without this fix these 'broken' Atmel TPMs won't function on
      older kernels]
      Signed-off-by: default avatar"Berg, Christopher" <Christopher.Berg@atmel.com>
      Signed-off-by: default avatarJason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Signed-off-by: default avatarPeter Huewe <peterhuewe@gmx.de>
      [bwh: Backported to 3.10:
       - Adjust filename, context
       - s/chip->ops->/chip->vendor./]
      Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
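      The quirk-table approach generalizes beyond this one chip. A hedged
      userspace C sketch (the table entry, ID, and timeout values below are
      placeholders, not the tpm_tis driver's actual data):

          #include <stdbool.h>
          #include <stddef.h>
          #include <stdint.h>

          struct timeout_override {
                  uint32_t did_vid;             /* device/vendor ID of the quirky chip */
                  unsigned long timeout_us[4];  /* known-good timeouts A-D, in usec */
          };

          /* Placeholder entry; real values belong in the driver. */
          static const struct timeout_override overrides[] = {
                  { 0x32041114, { 750000, 2000000, 750000, 750000 } },
          };

          /* If the chip's DID/VID matches a quirk entry, replace the
           * timeouts the chip reported with the table's corrected values. */
          static bool override_timeouts(uint32_t did_vid,
                                        unsigned long timeout_us[4])
          {
                  for (size_t i = 0; i < sizeof(overrides) / sizeof(overrides[0]); i++) {
                          if (overrides[i].did_vid != did_vid)
                                  continue;
                          for (int j = 0; j < 4; j++)
                                  timeout_us[j] = overrides[i].timeout_us[j];
                          return true;
                  }
                  return false;   /* no quirk: trust the chip's values */
          }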
    • vfs: fix bad hashing of dentries · d4c96061
      Linus Torvalds authored
      commit 99d263d4 upstream.
      
      Josef Bacik found a performance regression between 3.2 and 3.10 and
      narrowed it down to commit bfcfaa77 ("vfs: use 'unsigned long'
      accesses for dcache name comparison and hashing"). He reports:
      
       "The test case is essentially
      
            for (i = 0; i < 1000000; i++)
                    mkdir("a$i");
      
        On xfs on a fio card this goes at about 20k dir/sec with 3.2, and 12k
        dir/sec with 3.10.  This is because we spend waaaaay more time in
        __d_lookup on 3.10 than in 3.2.
      
        The new hashing function for strings is suboptimal for <
        sizeof(unsigned long) string names (and hell even > sizeof(unsigned
        long) string names that I've tested).  I broke out the old hashing
        function and the new one into a userspace helper to get real numbers
        and this is what I'm getting:
      
            Old hash table had 1000000 entries, 0 dupes, 0 max dupes
            New hash table had 12628 entries, 987372 dupes, 900 max dupes
            We had 11400 buckets with a p50 of 30 dupes, p90 of 240 dupes, p99 of 567 dupes for the new hash
      
        My test does the hash, and then does the d_hash into an integer pointer
        array the same size as the dentry hash table on my system, and then
        just increments the value at the address we got to see how many
        entries we overlap with.
      
        As you can see the old hash function ended up with all 1 million
        entries in their own bucket, whereas the new one they are only
        distributed among ~12.5k buckets, which is why we're using so much
        more CPU in __d_lookup".
      
      The reason for this hash regression is two-fold:
      
       - On 64-bit architectures the down-mixing of the original 64-bit
         word-at-a-time hash into the final 32-bit hash value is very
         simplistic and suboptimal, and just adds the two 32-bit parts
         together.
      
         In particular, because there is no bit shuffling and the mixing
         boundary is also a byte boundary, similar character patterns in the
         low and high word easily end up just canceling each other out.
      
       - the old byte-at-a-time hash mixed each byte into the final hash as it
         hashed the path component name, resulting in the low bits of the hash
         generally being a good source of hash data.  That is not true for the
         word-at-a-time case, and the hash data is distributed among all the
         bits.
      
      The fix is the same in both cases: do a better job of mixing the bits up
      and using as much of the hash data as possible.  We already have the
      "hash_32|64()" functions to do that.
      Reported-by: Josef Bacik <jbacik@fb.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: linux-fsdevel@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
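      Both the weakness and the fix fit in a few lines. A userspace C
      sketch (the multiplier below is illustrative; the kernel's hash_64()
      of that era used its own golden-ratio constant, on some configs via
      shift-and-add):

          #include <stdint.h>

          /* Weak fold: adding the two 32-bit halves lets similar byte
           * patterns in the low and high words cancel each other out. */
          static inline uint32_t fold_add(uint64_t hash)
          {
                  return (uint32_t)(hash + (hash >> 32));
          }

          /* Better fold, in the spirit of hash_64(): multiply by a large
           * odd constant so every input bit diffuses into the high output
           * bits, then keep the top bits of the product (1 <= bits <= 32). */
          static inline uint32_t fold_mul(uint64_t hash, unsigned bits)
          {
                  return (uint32_t)((hash * 0x9e37fffffffc0001ULL) >> (64 - bits));
          }

      With a fold like fold_mul(), nearby names such as "a0" through
      "a999999" spread across the full bucket range instead of piling into
      ~12.5k buckets as in Josef's measurement above.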