1. 27 Oct, 2023 3 commits
    • crypto: ahash - remove support for nonzero alignmask · c626910f
      Eric Biggers authored
      
      Currently, the ahash API checks the alignment of all key and result
      buffers against the algorithm's declared alignmask, and for any
      unaligned buffers it falls back to manually aligned temporary buffers.
      
      This is virtually useless, however.  First, since it does not apply to
      the message, its effect is much more limited than that of, e.g., the
      "skcipher" alignmask.  Second, the key and result buffers are
      given as virtual addresses and cannot (in general) be DMA'ed into, so
      drivers end up having to copy to/from them in software anyway.  As a
      result it's easy to use memcpy() or the unaligned access helpers.
      
      The crypto_hash_walk_*() helper functions do use the alignmask to align
      the message.  But with one exception those are only used for shash
      algorithms being exposed via the ahash API, not for native ahashes, and
      aligning the message is not required in this case, especially now that
      alignmask support has been removed from shash.  The exception is the
      n2_core driver, which doesn't set an alignmask.
      
      In any case, no ahash algorithms actually set a nonzero alignmask
      anymore.  Therefore, remove support for it from ahash.  The benefit is
      that all the code to handle "misaligned" buffers in the ahash API goes
      away, reducing the overhead of the ahash API.
      
      This follows the same change that was made to shash.
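
      To illustrate the point about the unaligned access helpers, a driver can
      simply store its digest with put_unaligned_be32() (from <asm/unaligned.h>)
      regardless of how the caller aligned the result buffer.  A hypothetical
      fragment (example_store_digest() is an invented name), not taken from any
      real driver:

      	static void example_store_digest(u8 *result, const u32 state[8])
      	{
      		int i;

      		/* The result buffer may have any alignment; that is fine. */
      		for (i = 0; i < 8; i++)
      			put_unaligned_be32(state[i], result + 4 * i);
      	}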
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      c626910f
    • crypto: shash - remove support for nonzero alignmask · 345bfa3c
      Eric Biggers authored
      
      Currently, the shash API checks the alignment of all message, key, and
      digest buffers against the algorithm's declared alignmask, and for any
      unaligned buffers it falls back to manually aligned temporary buffers.
      
      This is virtually useless, however.  In the case of the message buffer,
      cryptographic hash functions internally operate on fixed-size blocks, so
      implementations end up needing to deal with byte-aligned data anyway
      because the length(s) passed to ->update might not be divisible by the
      block size.  Word-alignment of the message can theoretically be helpful
      for CRCs, like what was being done in crc32c-sparc64.  But in practice
      it's better for the algorithms to use unaligned accesses or align the
      message themselves.  A similar argument applies to the key and digest.
      
      In any case, no shash algorithms actually set a nonzero alignmask
      anymore.  Therefore, remove support for it from shash.  The benefit is
      that all the code to handle "misaligned" buffers in the shash API goes
      away, reducing the overhead of the shash API.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      345bfa3c
    • crypto: shash - eliminate indirect call for default import and export · 08debaa5
      Eric Biggers authored
      
      Most shash algorithms don't have custom ->import and ->export functions,
      resulting in the memcpy() based default being used.  Yet,
      crypto_shash_import() and crypto_shash_export() still make an indirect
      call, which is expensive.  Therefore, change how the default import and
      export are called to make it so that crypto_shash_import() and
      crypto_shash_export() don't do an indirect call in this case.
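
      Roughly, the resulting export path has this shape (a sketch, not the
      verbatim upstream code):

      	int crypto_shash_export(struct shash_desc *desc, void *out)
      	{
      		struct crypto_shash *tfm = desc->tfm;
      		struct shash_alg *shash = crypto_shash_alg(tfm);

      		/* Only algorithms with a custom ->export() pay for an
      		 * indirect call; the default is an inline memcpy(). */
      		if (shash->export)
      			return shash->export(desc, out);

      		memcpy(out, shash_desc_ctx(desc), crypto_shash_descsize(tfm));
      		return 0;
      	}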
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      08debaa5
  2. 20 Oct, 2023 2 commits
    • crypto: shash - fold shash_digest_unaligned() into crypto_shash_digest() · 2e02c25a
      Eric Biggers authored
      
      Fold shash_digest_unaligned() into its only remaining caller.  Also,
      avoid a redundant check of CRYPTO_TFM_NEED_KEY by replacing the call to
      crypto_shash_init() with shash->init(desc).  Finally, replace
      shash_update_unaligned() + shash_final_unaligned() with
      shash_finup_unaligned() which does exactly that.
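
      The folded-in result looks roughly like this (a sketch that omits the
      alignmask handling still present at this point):

      	int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
      				unsigned int len, u8 *out)
      	{
      		struct crypto_shash *tfm = desc->tfm;
      		struct shash_alg *shash = crypto_shash_alg(tfm);

      		if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
      			return -ENOKEY;

      		/* ->init() is called directly; the NEED_KEY check above is
      		 * the only thing crypto_shash_init() would have added. */
      		return shash->init(desc) ?:
      		       shash_finup_unaligned(desc, data, len, out);
      	}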
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      2e02c25a
    • crypto: shash - optimize the default digest and finup · 313a4074
      Eric Biggers authored
      
      For an shash algorithm that doesn't implement ->digest, currently
      crypto_shash_digest() with aligned input makes 5 indirect calls: 1 to
      shash_digest_unaligned(), 1 to ->init, 2 to ->update ('alignmask + 1'
      bytes, then the rest), then 1 to ->final.  This is true even if the
      algorithm implements ->finup.  This is caused by an unnecessary fallback
      to code meant to handle unaligned inputs.  In fact,
      crypto_shash_digest() already does the needed alignment check earlier.
      Therefore, optimize the number of indirect calls for aligned inputs to 3
      when the algorithm implements ->finup.  It remains at 5 when the
      algorithm implements neither ->finup nor ->digest.
      
      Similarly, for an shash algorithm that doesn't implement ->finup,
      currently crypto_shash_finup() with aligned input makes 4 indirect
      calls: 1 to shash_finup_unaligned(), 2 to ->update, and
      1 to ->final.  Optimize this to 3 calls.
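
      For the finup case, the optimized default amounts to something like the
      following sketch:

      	static int shash_default_finup(struct shash_desc *desc, const u8 *data,
      				       unsigned int len, u8 *out)
      	{
      		struct shash_alg *shash = crypto_shash_alg(desc->tfm);

      		/* One ->update plus one ->final: with the call into this
      		 * helper itself, that is three indirect calls in total. */
      		return shash->update(desc, data, len) ?:
      		       shash->final(desc, out);
      	}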
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      313a4074
  3. 24 May, 2023 1 commit
  4. 02 May, 2023 1 commit
  5. 20 Apr, 2023 1 commit
  6. 06 Apr, 2023 1 commit
  7. 14 Mar, 2023 2 commits
    • crypto: api - Check CRYPTO_USER instead of NET for report · c0f9e01d
      Herbert Xu authored
      
      The report function is currently conditionalised on CONFIG_NET.
      As it's only used by CONFIG_CRYPTO_USER, conditionalising on that
      instead of CONFIG_NET makes more sense.
      
      This gets rid of a rarely used code-path.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      c0f9e01d
    • crypto: hash - Count error stats differently · 42808e5d
      Herbert Xu authored
      
      Move all stat code specific to hash into the hash code.
      
      While we're at it, change the stats so that bytes and counts
      are always incremented even in case of error.  This allows the
      reference counting to be removed as we can now increment the
      counters prior to the operation.
      
      After the operation we simply increase the error count if necessary.
      This is safe as errors can only occur synchronously (or rather,
      the existing code already ignored asynchronous errors which are
      only visible to the callback function).
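
      A hypothetical sketch of the scheme, with invented names purely for
      illustration:

      	struct example_hash_stat {
      		atomic64_t hash_cnt;
      		atomic64_t hash_tlen;
      		atomic64_t err_cnt;
      	};

      	static int example_hash_op(struct example_hash_stat *st,
      				   unsigned int nbytes, int (*op)(void))
      	{
      		int err;

      		/* Counters are bumped before the operation, so no reference
      		 * counting is needed to keep the stats object alive. */
      		atomic64_inc(&st->hash_cnt);
      		atomic64_add(nbytes, &st->hash_tlen);

      		err = op();
      		if (err && err != -EINPROGRESS && err != -EBUSY)
      			atomic64_inc(&st->err_cnt);
      		return err;
      	}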
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      42808e5d
  8. 30 Dec, 2022 1 commit
    • crypto: scatterwalk - use kmap_local() not kmap_atomic() · aa969515
      Ard Biesheuvel authored
      
      kmap_atomic() is used to create short-lived mappings of pages that may
      not be accessible via the kernel direct map. This is only needed on
      32-bit architectures that implement CONFIG_HIGHMEM, but it can be used
      on 64-bit and other architectures too, where the returned mapping is simply
      the kernel direct address of the page.
      
      However, kmap_atomic() does not support migration on CONFIG_HIGHMEM
      configurations, due to the use of per-CPU kmap slots, and so it disables
      preemption on all architectures, not just the 32-bit ones. This implies
      that all scatterwalk based crypto routines essentially execute with
      preemption disabled all the time, which is less than ideal.
      
      So let's switch scatterwalk_map/_unmap and the shash/ahash routines to
      kmap_local() instead, which serves a similar purpose, but without the
      resulting impact on preemption on architectures that have no need for
      CONFIG_HIGHMEM.
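
      A sketch of the mapping helper after the switch (close to, though not
      necessarily identical to, the upstream code):

      	static inline void *scatterwalk_map(struct scatter_walk *walk)
      	{
      		/* Preemptible on !HIGHMEM configurations, unlike kmap_atomic(). */
      		return kmap_local_page(scatterwalk_page(walk)) +
      		       offset_in_page(walk->offset);
      	}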
      
      Cc: Eric Biggers <ebiggers@kernel.org>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: "Elliott, Robert (Servers)" <elliott@hpe.com>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      aa969515
  9. 02 Dec, 2022 1 commit
  10. 25 Nov, 2022 1 commit
  11. 02 Aug, 2022 1 commit
  12. 17 Jun, 2021 1 commit
  13. 07 Aug, 2020 1 commit
    • mm, treewide: rename kzfree() to kfree_sensitive() · 453431a5
      Waiman Long authored
      
      As said by Linus:
      
        A symmetric naming is only helpful if it implies symmetries in use.
        Otherwise it's actively misleading.
      
        In "kzalloc()", the z is meaningful and an important part of what the
        caller wants.
      
        In "kzfree()", the z is actively detrimental, because maybe in the
        future we really _might_ want to use that "memfill(0xdeadbeef)" or
        something. The "zero" part of the interface isn't even _relevant_.
      
      The main reason that kzfree() exists is to clear sensitive information
      that should not be leaked to other future users of the same memory
      objects.
      
      Rename kzfree() to kfree_sensitive() to follow the example of the recently
      added kvfree_sensitive() and make the intention of the API more explicit.
      In addition, memzero_explicit() is used to clear the memory to make sure
      that it won't get optimized away by the compiler.
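
      The helper itself is small; roughly (simplified sketch):

      	void kfree_sensitive(const void *p)
      	{
      		size_t ks;
      		void *mem = (void *)p;

      		/* memzero_explicit() cannot be optimized away by the compiler. */
      		ks = ksize(mem);
      		if (ks)
      			memzero_explicit(mem, ks);
      		kfree(mem);
      	}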
      
      The renaming is done by using the command sequence:
      
        git grep -w --name-only kzfree |\
        xargs sed -i 's/kzfree/kfree_sensitive/'
      
      followed by some editing of the kfree_sensitive() kerneldoc and adding
      a kzfree backward compatibility macro in slab.h.
      
      [akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
      [akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]
      Suggested-by: Joe Perches <joe@perches.com>
      Signed-off-by: Waiman Long <longman@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: David Howells <dhowells@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
      Cc: James Morris <jmorris@namei.org>
      Cc: "Serge E. Hallyn" <serge@hallyn.com>
      Cc: Joe Perches <joe@perches.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Dan Carpenter <dan.carpenter@oracle.com>
      Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
      Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
      
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      453431a5
  14. 08 May, 2020 1 commit
    • crypto: hash - introduce crypto_shash_tfm_digest() · 822a98b8
      Eric Biggers authored
      
      Currently the simplest use of the shash API is to use
      crypto_shash_digest() to digest a whole buffer.  However, this still
      requires allocating a hash descriptor (struct shash_desc).  Many users
      don't really want to preallocate one and instead just use a one-off
      descriptor on the stack like the following:
      
      	{
      		SHASH_DESC_ON_STACK(desc, tfm);
      		int err;
      
      		desc->tfm = tfm;
      
      		err = crypto_shash_digest(desc, data, len, out);
      
      		shash_desc_zero(desc);
      	}
      
      Wrap this in a new helper function crypto_shash_tfm_digest() that can be
      used instead of the above.
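
      The helper is essentially that pattern packaged up; a sketch of its
      likely shape:

      	int crypto_shash_tfm_digest(struct crypto_shash *tfm, const u8 *data,
      				    unsigned int len, u8 *out)
      	{
      		SHASH_DESC_ON_STACK(desc, tfm);
      		int err;

      		desc->tfm = tfm;
      		err = crypto_shash_digest(desc, data, len, out);
      		shash_desc_zero(desc);

      		return err;
      	}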
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      822a98b8
  15. 09 Jan, 2020 6 commits
  16. 20 Dec, 2019 1 commit
    • crypto: algapi - make unregistration functions return void · c6d633a9
      Eric Biggers authored
      
      Some of the algorithm unregistration functions return -ENOENT when asked
      to unregister a non-registered algorithm, while others always return 0
      or always return void.  But no users check the return value, except for
      two of the bulk unregistration functions which print a message on error
      but still always return 0 to their caller, and crypto_del_alg() which
      calls crypto_unregister_instance() which always returns 0.
      
      Since unregistering a non-registered algorithm is always a kernel bug
      but there isn't anything callers should do to handle this situation at
      runtime, let's simplify things by making all the unregistration
      functions return void, and moving the error message into
      crypto_unregister_alg() and upgrading it to a WARN().
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      c6d633a9
  17. 11 Dec, 2019 2 commits
  18. 30 May, 2019 1 commit
  19. 25 Apr, 2019 2 commits
    • crypto: shash - remove shash_desc::flags · 877b5691
      Eric Biggers authored
      
      The flags field in 'struct shash_desc' never actually does anything.
      The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP.
      However, no shash algorithm ever sleeps, making this flag a no-op.
      
      With this being the case, inevitably some users who can't sleep wrongly
      pass MAY_SLEEP.  These would all need to be fixed if any shash algorithm
      actually started sleeping.  For example, the shash_ahash_*() functions,
      which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP
      from the ahash API to the shash API.  However, the shash functions are
      called under kmap_atomic(), so actually they're assumed to never sleep.
      
      Even if it turns out that some users do need preemption points while
      hashing large buffers, we could easily provide a helper function
      crypto_shash_update_large() which divides the data into smaller chunks
      and calls crypto_shash_update() and cond_resched() for each chunk.  It's
      not necessary to have a flag in 'struct shash_desc', nor is it necessary
      to make individual shash algorithms aware of this at all.
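
      Such a helper could be as simple as the following hypothetical sketch
      (the chunk size is arbitrary):

      	static int crypto_shash_update_large(struct shash_desc *desc,
      					     const u8 *data, unsigned int len)
      	{
      		while (len) {
      			unsigned int chunk = min(len, 4096U);
      			int err = crypto_shash_update(desc, data, chunk);

      			if (err)
      				return err;
      			data += chunk;
      			len -= chunk;
      			cond_resched();
      		}
      		return 0;
      	}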
      
      Therefore, remove shash_desc::flags, and document that the
      crypto_shash_*() functions can be called from any context.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      877b5691
    • crypto: shash - remove useless crypto_yield() in shash_ahash_digest() · 54fe792b
      Eric Biggers authored
      
      The crypto_yield() in shash_ahash_digest() occurs after the entire
      digest operation already happened, so there's no real point.  Remove it.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      54fe792b
  20. 18 Apr, 2019 1 commit
  21. 18 Jan, 2019 3 commits
    • crypto: shash - remove pointless checks of shash_alg::{export,import} · 2b091e32
      Eric Biggers authored
      
      crypto_init_shash_ops_async() only gives the ahash tfm non-NULL
      ->export() and ->import() if the underlying shash alg has these
      non-NULL.  This doesn't make sense because when an shash algorithm is
      registered, shash_prepare_alg() sets a default ->export() and ->import()
      if the implementor didn't provide them.  And elsewhere it's assumed that
      all shash algs and ahash tfms have non-NULL ->export() and ->import().
      
      Therefore, remove these unnecessary, always-true conditions.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      2b091e32
    • crypto: shash - require neither or both ->export() and ->import() · 41a2e94f
      Eric Biggers authored
      
      Prevent registering shash algorithms that implement ->export() but not
      ->import(), or ->import() but not ->export().  Such cases don't make
      sense and could confuse the check that shash_prepare_alg() does for just
      ->export().
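
      A minimal sketch of the registration-time check (the helper name is
      invented; the exact form in shash_prepare_alg() may differ):

      	static int shash_check_export_import(const struct shash_alg *alg)
      	{
      		/* Either both ->export() and ->import() are given, or neither. */
      		if (!alg->export != !alg->import)
      			return -EINVAL;
      		return 0;
      	}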
      
      I don't believe this affects any existing algorithms; this is just
      preventing future mistakes.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      41a2e94f
    • crypto: hash - set CRYPTO_TFM_NEED_KEY if ->setkey() fails · ba7d7433
      Eric Biggers authored
      Some algorithms have a ->setkey() method that is not atomic, in the
      sense that setting a key can fail after changes were already made to the
      tfm context.  In this case, if a key was already set the tfm can end up
      in a state that corresponds to neither the old key nor the new key.
      
      It's not feasible to make all ->setkey() methods atomic, especially ones
      that have to key multiple sub-tfms.  Therefore, make the crypto API set
      CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
      key, to prevent the tfm from being used until a new key is set.
      
      Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so
      ->setkey() for those must nevertheless be atomic.  That's fine for now
      since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's
      not intended that OPTIONAL_KEY be used much.
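
      A simplified sketch of the behaviour for shash (the real change covers
      the other hash setkey paths as well, and the helper name here is
      invented):

      	static int shash_setkey_checked(struct crypto_shash *tfm, const u8 *key,
      					unsigned int keylen)
      	{
      		struct shash_alg *shash = crypto_shash_alg(tfm);
      		int err;

      		err = shash->setkey(tfm, key, keylen);
      		if (unlikely(err)) {
      			/* Don't leave a half-keyed tfm usable. */
      			if (!(shash->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
      				crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
      			return err;
      		}

      		crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
      		return 0;
      	}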
      
      [Cc stable mainly because when introducing the NEED_KEY flag I changed
       AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
       previously didn't have this problem.  So these "incompletely keyed"
       states became theoretically accessible via AF_ALG -- though, the
       opportunities for causing real mischief seem pretty limited.]
      
      Fixes: 9fa68f62 ("crypto: hash - prevent using keyed hashes without setting key")
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      ba7d7433
  22. 09 Nov, 2018 1 commit
    • crypto: user - clean up report structure copying · 37db69e0
      Eric Biggers authored
      There have been a pretty ridiculous number of issues with initializing
      the report structures that are copied to userspace by NETLINK_CRYPTO.
      Commit 4473710d ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME
      expansion") replaced some strncpy()s with strlcpy()s, thereby
      introducing information leaks.  Later two other people tried to replace
      other strncpy()s with strlcpy() too, which would have introduced even
      more information leaks:
      
          - https://lore.kernel.org/patchwork/patch/954991/
          - https://patchwork.kernel.org/patch/10434351/
      
      Commit cac5818c ("crypto: user - Implement a generic crypto
      statistics") also uses the buggy strlcpy() approach and therefore leaks
      uninitialized memory to userspace.  A fix was proposed, but it was
      originally incomplete.
      
      Seeing as how apparently no one can get this right with the current
      approach, change all the reporting functions to:
      
      - Start by memsetting the report structure to 0.  This guarantees it's
        always initialized, regardless of what happens later.
      - Initialize all strings using strscpy().  This is safe after the
        memset, ensures null termination of long strings, avoids unnecessary
        work, and avoids the -Wstringop-truncation warnings from gcc.
      - Use sizeof(var) instead of sizeof(type).  This is more robust against
        copy+paste errors.
      
      For simplicity, also reuse the -EMSGSIZE return value from nla_put().
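
      Put together, a reporting function following these rules looks roughly
      like this (modelled on the hash report; a sketch, not the exact diff):

      	static int crypto_shash_report(struct sk_buff *skb, struct crypto_alg *alg)
      	{
      		struct crypto_report_hash rhash;

      		/* Guarantee the whole structure is initialized... */
      		memset(&rhash, 0, sizeof(rhash));

      		/* ...then fill in the fields, null-terminating the strings. */
      		strscpy(rhash.type, "shash", sizeof(rhash.type));
      		rhash.blocksize = alg->cra_blocksize;
      		rhash.digestsize = __crypto_shash_alg(alg)->digestsize;

      		return nla_put(skb, CRYPTOCFGA_REPORT_HASH, sizeof(rhash), &rhash);
      	}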
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      37db69e0
  23. 04 Sep, 2018 2 commits
  24. 12 Jan, 2018 1 commit
    • crypto: hash - prevent using keyed hashes without setting key · 9fa68f62
      Eric Biggers authored
      Currently, almost none of the keyed hash algorithms check whether a key
      has been set before proceeding.  Some algorithms are okay with this and
      will effectively just use a key of all 0's or some other bogus default.
      However, others will severely break, as demonstrated using
      "hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash
      via a (potentially exploitable) stack buffer overflow.
      
      A while ago, this problem was solved for AF_ALG by pairing each hash
      transform with a 'has_key' bool.  However, there are still other places
      in the kernel where userspace can specify an arbitrary hash algorithm by
      name, and the kernel uses it as unkeyed hash without checking whether it
      is really unkeyed.  Examples of this include:
      
          - KEYCTL_DH_COMPUTE, via the KDF extension
          - dm-verity
          - dm-crypt, via the ESSIV support
          - dm-integrity, via the "internal hash" mode with no key given
          - drbd (Distributed Replicated Block Device)
      
      Th...
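
      The enforcement is conceptually simple; a simplified sketch for the
      shash init path (the helper name is invented, and the actual patch
      covers the other hash entry points too):

      	static int shash_init_checked(struct shash_desc *desc)
      	{
      		struct crypto_shash *tfm = desc->tfm;

      		/* Refuse to run a keyed algorithm before a key is set. */
      		if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
      			return -ENOKEY;

      		return crypto_shash_alg(tfm)->init(desc);
      	}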
      9fa68f62
  25. 29 Nov, 2017 1 commit
    • crypto: hmac - require that the underlying hash algorithm is unkeyed · af3ff804
      Eric Biggers authored
      Because the HMAC template didn't check that its underlying hash
      algorithm is unkeyed, trying to use "hmac(hmac(sha3-512-generic))"
      through AF_ALG or through KEYCTL_DH_COMPUTE resulted in the inner HMAC
      being used without having been keyed, resulting in sha3_update() being
      called without sha3_init(), causing a stack buffer overflow.
      
      This is a very old bug, but it seems to have only started causing real
      problems when SHA-3 support was added (requires CONFIG_CRYPTO_SHA3)
      because the innermost hash's state is ->import()ed from a zeroed buffer,
      and it just so happens that other hash algorithms are fine with that,
      but SHA-3 is not.  However, architecture- or hardware-dependent hash
      algorithms could also be affected; I couldn't test everything.
      
      Fix the bug by introducing a function crypto_shash_alg_has_setkey()
      which tests whether a shash algorithm is keyed.  Then update the HMAC
      template to require that its underlying hash algorithm is unkeyed....
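
      A sketch of the new helper's likely shape, assuming the default no-op
      ->setkey() stub that unkeyed shash algorithms are given:

      	static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
      	{
      		return alg->setkey != shash_no_setkey;
      	}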
      af3ff804
  26. 10 Oct, 2017 1 commit